Dataset columns: instance_id, text, repo, base_commit, problem_statement, hints_text, created_at, patch, test_patch, version, FAIL_TO_PASS, PASS_TO_PASS, environment_setup_commit

instance_id: huggingface__transformers-14487

text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Perceiver IO
# 🌟 New model addition
## Model description
Perceiver is a general architecture that works on many kinds of data, including images, video, audio, 3D point clouds, language and symbolic inputs, multimodal combinations, etc. Perceivers can handle new types of data with only minimal modifications. Perceivers process inputs using domain-agnostic Transformer-style attention. Unlike Transformers, Perceivers first map inputs to a small latent space where processing is cheap and doesn't depend on the input size. This makes it possible to build very deep networks even when using large inputs like images or videos.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
https://arxiv.org/pdf/2107.14795.pdf
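For intuition, here is a minimal sketch of that idea (illustrative only; it is neither the DeepMind JAX code nor a proposed 🤗 implementation, and every module name and shape below is an assumption): a small learned latent array cross-attends once to the large input, and all deeper self-attention runs over the latents only, so depth is decoupled from input size.
```python
import torch
import torch.nn as nn

class PerceiverSketch(nn.Module):
    """Toy Perceiver-style encoder: inputs of any length are distilled into a fixed-size latent array."""

    def __init__(self, input_dim=64, latent_dim=256, num_latents=128, depth=6, num_heads=8):
        super().__init__()
        # Learned latent array; its size does not depend on the input size.
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        # Cross-attention: the latents are the queries, the raw inputs are the keys/values.
        self.cross_attn = nn.MultiheadAttention(
            latent_dim, num_heads, kdim=input_dim, vdim=input_dim, batch_first=True
        )
        # Deep latent Transformer: self-attention over the small latent array only.
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.latent_transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, inputs):
        # inputs: (batch, seq_len, input_dim); seq_len may be very large (bytes, pixels, audio samples, ...)
        latents = self.latents.unsqueeze(0).repeat(inputs.size(0), 1, 1)
        latents, _ = self.cross_attn(latents, inputs, inputs)  # cost linear in seq_len
        return self.latent_transformer(latents)                # cost independent of seq_len

out = PerceiverSketch()(torch.randn(2, 10_000, 64))  # e.g. 10k byte-level inputs -> (2, 128, 256)
```
Perceiver IO then adds a second cross-attention step in which output queries of arbitrary size read from the processed latents, which is what makes structured outputs such as text, optical flow or audio possible.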
## Open source status
* [x] the model implementation is available: https://github.com/deepmind/deepmind-research/tree/master/perceiver (JAX)
* [x] the model weights are available: https://storage.googleapis.com/perceiver_io/language_perceiver_io_bytes.pickle pretrained masked language model (https://github.com/deepmind/deepmind-research/blob/master/perceiver/colabs/masked_language_modelling.ipynb)
* [x] who are the authors: **DeepMind** (Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira)
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <b>English</b> |
44 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation and more in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone.
59
60 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
61
62 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
63
64 ## Online demos
65
66 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
67
68 Here are a few examples:
69 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76
77 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
78
79 ## If you are looking for custom support from the Hugging Face team
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## Quick tour
86
87 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
99
100 Many NLP tasks have a pre-trained `pipeline` ready to go. For example, we can easily extract question answers given context:
101
102 ``` python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 In addition to the answer, the pretrained model used here returned its confidence score, along with the start position and end position of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
116
117 To download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 And here is the equivalent code for TensorFlow:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or pass directly to your model using the ** argument unpacking operator.
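For example (a small illustration; the exact keys depend on the tokenizer, the ones below are what `bert-base-uncased` returns):
```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> batch = tokenizer(["Hello world!", "Nice to meet you."], padding=True, return_tensors="pt")
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask']
```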
139
140 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
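As a rough sketch of the `Trainer` route (hyperparameters omitted, and `train_dataset` is assumed to be a dataset you have already tokenized; see the tutorial above for a complete example):
```python
>>> from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
>>> training_args = TrainingArguments(output_dir="test_trainer")
>>> trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)  # train_dataset: your tokenized dataset
>>> trainer.train()
```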
141
142 ## Why should I use transformers?
143
144 1. Easy-to-use state-of-the-art models:
145 - High performance on NLU and NLG tasks.
146 - Low barrier to entry for educators and practitioners.
147 - Few user-facing abstractions with just three classes to learn.
148 - A unified API for using all our pretrained models.
149
150 1. Lower compute costs, smaller carbon footprint:
151 - Researchers can share trained models instead of always retraining.
152 - Practitioners can reduce compute time and production costs.
153 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
154
155 1. Choose the right framework for every part of a model's lifetime:
156 - Train state-of-the-art models in 3 lines of code.
157 - Move a single model between TF2.0/PyTorch frameworks at will.
158 - Seamlessly pick the right framework for training, evaluation and production.
159
160 1. Easily customize a model or an example to your needs:
161 - We provide examples for each architecture to reproduce the results published by its original authors.
162 - Model internals are exposed as consistently as possible.
163 - Model files can be used independently of the library for quick experiments.
164
165 ## Why shouldn't I use transformers?
166
167 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
168 - The training API is not intended to work with arbitrary models; it is optimized for the models provided by the library. For generic machine learning loops, you should use another library.
169 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
170
171 ## Installation
172
173 ### With pip
174
175 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
176
177 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
178
179 First, create a virtual environment with the version of Python you're going to use and activate it.
180
181 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
182 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax installation page](https://github.com/google/flax#quick-install) regarding the specific install command for your platform.
183
184 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
185
186 ```bash
187 pip install transformers
188 ```
189
190 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
191
192 ### With conda
193
194 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
195
196 🤗 Transformers can be installed using conda as follows:
197
198 ```shell script
199 conda install -c huggingface transformers
200 ```
201
202 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
203
204 ## Model architectures
205
206 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
207
208 Current number of checkpoints: 
209
210 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
219 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bertgeneration)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
220 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/bigbird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot_small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
225 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
226 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
227 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
228 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
229 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
230 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
231 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
232 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
233 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta_v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
234 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
235 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
236 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
237 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
238 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval
239 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
240 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
241 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoderdecoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
242 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
243 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
244 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
245 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
246 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
247 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
248 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
249 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
250 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
251 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
252 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
253 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
254 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
255 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
256 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
257 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
258 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
259 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
260 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
261 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
262 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
263 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
264 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron_bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
265 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
266 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
267 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
268 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
269 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
270 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
271 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
272 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
273 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
274 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
275 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen and Yunfeng Liu.
276 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
277 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
278 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
279 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
280 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
281 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
282 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
283 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
284 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
285 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
286 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transformerxl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
287 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
288 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
289 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech_sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
290 AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
291 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
292 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
293 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
294 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
295 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlmprophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
296 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlmroberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
297 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
298 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
299 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
300
301 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
302
303 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
304
305
306 ## Learn more
307
308 | Section | Description |
309 |-|-|
310 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
311 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
312 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
313 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
314 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
315 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
316 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
317
318 ## Citation
319
320 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
321 ```bibtex
322 @inproceedings{wolf-etal-2020-transformers,
323 title = "Transformers: State-of-the-Art Natural Language Processing",
324 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
325 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
326 month = oct,
327 year = "2020",
328 address = "Online",
329 publisher = "Association for Computational Linguistics",
330 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
331 pages = "38--45"
332 }
333 ```
334
[end of README.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> |
46 <b>한국어</b>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다.
59
60 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다.
61
62 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다.
63
64 ## 온라인 데모
65
66 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다.
67
68 예시:
69 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76
77 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다.
78
79 ## Hugging Face 팀의 커스텀 지원을 원한다면
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## 퀵 투어
86
87 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다.
99
100 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다:
101
102 ``` python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다.
116
117 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 다음은 TensorFlow 버전입니다:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다.
139
140 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다.
141
142 ## 왜 transformers를 사용해야 할까요?
143
144 1. 손쉽게 사용할 수 있는 최첨단 모델:
145 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다.
146 - 교육자 실무자에게 진입 장벽이 낮습니다.
147 - 3개의 클래스만 배우면 바로 사용할 수 있습니다.
148 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다.
149
150 1. 더 적은 계산 비용, 더 적은 탄소 발자국:
151 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다.
152 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다.
153 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등.
154
155 1. 모델의 각 생애주기에 적합한 프레임워크:
156 - 코드 3줄로 최첨단 모델을 학습하세요.
157 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요.
158 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요.
159
160 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요:
161 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다.
162 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다.
163 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다.
164
165 ## 왜 transformers를 사용하지 말아야 할까요?
166
167 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다.
168 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요.
169 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/master/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다.
170
171 ## 설치
172
173 ### pip로 설치하기
174
175 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다.
176
177 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요.
178
179 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요.
180
181 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다.
182 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요.
183
184 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다:
185
186 ```bash
187 pip install transformers
188 ```
189
190 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다.
191
192 ### conda로 설치하기
193
194 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`.
195
196 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다:
197
198 ```shell script
199 conda install -c huggingface transformers
200 ```
201
202 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요.
203
204 ## 모델 구조
205
206 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다.
207
208 현재 사용 가능한 모델 체크포인트의 개수: 
209
210 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bertgeneration)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/bigbird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot_small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
225 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
226 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
227 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
228 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
229 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
230 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
231 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
232 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
233 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta_v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
234 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
235 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
236 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
237 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
238 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
239 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
240 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoderdecoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
241 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
242 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
243 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
244 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
245 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
246 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
247 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
248 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
249 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
250 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
251 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
252 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
253 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
254 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
255 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
256 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
257 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
258 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
259 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
260 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
261 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
262 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron_bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
263 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
264 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
265 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
266 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
267 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
268 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
269 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
270 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
271 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
272 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
273 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
274 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
275 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
276 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
277 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
278 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
279 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
280 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
281 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
282 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
283 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
284 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transformerxl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
285 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
286 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
287 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech_sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
288 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
289 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
290 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
291 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
292 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlmprophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
293 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlmroberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
294 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
295 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
296 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다.
297
298 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요.
299
300 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
301
302 ## 더 알아보기
303
304 | 섹션 | 설명 |
305 |-|-|
306 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 |
307 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 |
308 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 |
309 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 |
310 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/master/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 |
311 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 |
312 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기|
313
314 ## 인용
315
316 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요:
317 ```bibtex
318 @inproceedings{wolf-etal-2020-transformers,
319 title = "Transformers: State-of-the-Art Natural Language Processing",
320 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
321 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
322 month = oct,
323 year = "2020",
324 address = "Online",
325 publisher = "Association for Computational Linguistics",
326 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
327 pages = "38--45"
328 }
329 ```
330
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a>
72 <p>
73 </h4>
74
75 <h3 align="center">
76 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p>
77 </h3>
78
79 <h3 align="center">
80 <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/course_banner.png"></a>
81 </h3>
82
83 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨让最先进的 NLP 技术人人易用。
84
85 🤗 Transformers 提供了便于快速下载和使用的API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。
86
87 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。
88
89 ## 在线演示
90
91 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。
92
93 这里是一些例子:
94 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
95 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
96 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
97 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
98 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
99 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
100 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
101
102 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。
103
104 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务
105
106 <a target="_blank" href="https://huggingface.co/support">
107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
108 </a><br>
109
110 ## 快速上手
111
112 我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子:
113
114 ```python
115 >>> from transformers import pipeline
116
117 # 使用情绪分析流水线
118 >>> classifier = pipeline('sentiment-analysis')
119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
121 ```
122
123 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99 的置信度。
124
125 许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案:
126
127 ``` python
128 >>> from transformers import pipeline
129
130 # 使用问答流水线
131 >>> question_answerer = pipeline('question-answering')
132 >>> question_answerer({
133 ... 'question': 'What is the name of the repository ?',
134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
135 ... })
136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
137
138 ```
139
140 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线API支持的任务。
141
142 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例:
143 ```python
144 >>> from transformers import AutoTokenizer, AutoModel
145
146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
147 >>> model = AutoModel.from_pretrained("bert-base-uncased")
148
149 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
150 >>> outputs = model(**inputs)
151 ```
152 这里是等效的 TensorFlow 代码:
153 ```python
154 >>> from transformers import AutoTokenizer, TFAutoModel
155
156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
158
159 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
160 >>> outputs = model(**inputs)
161 ```
162
163 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。
164
165 模型本身是一个常规的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以常规方式使用。 [这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer` 训练器)API 来在一个新的数据集上快速微调。
166
167 ## 为什么要用 transformers?
168
169 1. 便于使用的先进模型:
170 - NLU 和 NLG 上表现优越
171 - 对教学和实践友好且低门槛
172 - 高级抽象,只需了解三个类
173 - 对所有模型统一的API
174
175 1. 更低计算开销,更少的碳排放:
176 - 研究人员可以分享亿训练的模型而非次次从头开始训练
177 - 工程师可以减少计算用时和生产环境开销
178 - 数十种模型架构、两千多个预训练模型、100多种语言支持
179
180 1. 对于模型生命周期的每一个部分都面面俱到:
181 - 训练先进的模型,只需 3 行代码
182 - 模型在不同深度学习框架间任意转移,随你心意
183 - 为训练、评估和生产选择最适合的框架,衔接无缝
184
185 1. 为你的需求轻松定制专属模型和用例:
186 - 我们为每种模型架构提供了多个用例来复现原论文结果
187 - 模型内部结构保持透明一致
188 - 模型文件可单独使用,方便魔改和快速实验
189
190 ## 什么情况下我不该用 transformers?
191
192 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。
193 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。
194 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/master/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。
195
196 ## 安装
197
198 ### 使用 pip
199
200 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。
201
202 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
203
204 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。
205
206 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。
207
208 当这些后端之一安装成功后, 🤗 Transformers 可依此安装:
209
210 ```bash
211 pip install transformers
212 ```
213
214 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。
215
216 ### 使用 conda
217
218 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。
219
220 🤗 Transformers 可以通过 conda 依此安装:
221
222 ```shell script
223 conda install -c huggingface transformers
224 ```
225
226 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。
227
228 ## 模型架构
229
230 **🤗 Transformers 支持的[所有的模型检查点](https://huggingface.co/models)** 由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。
231
232 目前的检查点数量: 
233
234 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)):
235
236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
241 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bertgeneration)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/bigbird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot_small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
248 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
249 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
250 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
251 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
252 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
253 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
254 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
255 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
256 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
257 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta_v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
258 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
259 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
260 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
261 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) 和德语版 DistilBERT。
262 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
263 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
264 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoderdecoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
265 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
266 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
267 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
268 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
269 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布,作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang 和 Connor Leahy。
270 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
271 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 随仓库 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 发布,作者为 Ben Wang 和 Aran Komatsuzaki。
272 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
273 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
274 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
275 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
276 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
277 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
278 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
279 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
280 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
281 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
282 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
283 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
284 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
285 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
286 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron_bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
287 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
288 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
289 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
290 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
291 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
292 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
293 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
294 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
295 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
296 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
297 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
298 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
299 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
300 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
301 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
302 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
303 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
304 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
305 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
306 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
307 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
308 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transformerxl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
309 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
310 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
311 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech_sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
312 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
313 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
314 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
315 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
316 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlmprophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
317 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlmroberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
318 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
319 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
320 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到它们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
321
322 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
323
324 这些实现均已在多个数据集上测试(请参看用例脚本)并应与原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
325
326
327 ## 了解更多
328
329 | 章节 | 描述 |
330 |-|-|
331 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 |
332 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
333 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
334 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
335 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/master/examples) | 为各种任务提供的用例脚本 |
336 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 上传你微调的模型并与社区分享 |
337 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
338
339 ## 引用
340
341 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
342 ```bibtex
343 @inproceedings{wolf-etal-2020-transformers,
344 title = "Transformers: State-of-the-Art Natural Language Processing",
345 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
346 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
347 month = oct,
348 year = "2020",
349 address = "Online",
350 publisher = "Association for Computational Linguistics",
351 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
352 pages = "38--45"
353 }
354 ```
355
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
57 <br>
58 <p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a>
84 <p>
85 </h4>
86
87 <h3 align="center">
88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
89 </h3>
90
91 <h3 align="center">
92 <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/course_banner.png"></a>
93 </h3>
94
95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
96
97 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本上、在你的資料集上微調,然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
98
99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
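
下面是一個最小示意(僅作說明;`./my-model` 是假設的儲存路徑,且需同時安裝 PyTorch 與 TensorFlow),展示如何先以 PyTorch 儲存模型,再用 TensorFlow 載入同一份權重進行推論:

```python
>>> from transformers import AutoModel, TFAutoModel

# 以 PyTorch 載入並儲存一個模型(路徑僅為示意)
>>> AutoModel.from_pretrained("bert-base-uncased").save_pretrained("./my-model")

# 以 TensorFlow 載入同一份權重;from_pt=True 會轉換 PyTorch 格式的權重
>>> tf_model = TFAutoModel.from_pretrained("./my-model", from_pt=True)
```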
100
101 ## 線上Demo
102
103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。
104
105 這裡是一些範例:
106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
113
114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
115
116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
117
118 <a target="_blank" href="https://huggingface.co/support">
119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
120 </a><br>
121
122 ## 快速上手
123
124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
125
126 ```python
127 >>> from transformers import pipeline
128
129 # 使用情緒分析 pipeline
130 >>> classifier = pipeline('sentiment-analysis')
131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
133 ```
134
135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案「正面」(positive) 具有 99.97% 的信賴度。
136
137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
138
139 ``` python
140 >>> from transformers import pipeline
141
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ... 'question': 'What is the name of the repository ?',
146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149
150 ```
151
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
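
作為補充,下面這個簡短片段沿用上方問答範例的輸出,示意如何取回答案(這裡假設回傳的 `start`/`end` 對應 `context` 字串中的字元位置):

```python
>>> context = 'Pipeline has been included in the huggingface/transformers repository'
>>> result = {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}

# 以回傳的位置切出 context 中的答案片段
>>> context[result['start']:result['end']]
'huggingface/transformers'
```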
153
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
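
下面是一個延伸上方範例的簡短示意(沿用 `bert-base-uncased`,僅作說明),展示 tokenizer 如何一次處理一個串列 (list),以及輸出的字典如何藉由 `**` 運算式傳給模型:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# 直接轉換一個串列;padding=True 會把批次中的序列補齊到相同長度
>>> batch = tokenizer(["Hello world!", "Transformers is great."], padding=True, return_tensors="pt")
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask']

# 輸出的字典可直接藉由 ** 運算式傳給模型
>>> outputs = model(**batch)
```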
176
177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
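
下面是一個最小示意(玩具資料,僅示範單一訓練步驟,並非完整的訓練迴圈或 `Trainer` 用法),展示模型作為一般的 PyTorch `nn.Module` 如何放進標準的訓練步驟:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> inputs = tokenizer(["I love this.", "I hate this."], padding=True, return_tensors="pt")
>>> labels = torch.tensor([1, 0])

# 傳入 labels 時,輸出會包含 loss,可直接反向傳播並更新參數
>>> outputs = model(**inputs, labels=labels)
>>> outputs.loss.backward()
>>> optimizer.step()
```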
178
179 ## 為什麼要用 transformers?
180
181 1. 便於使用的先進模型:
182 - NLU 和 NLG 上性能卓越
183 - 對教學和實作友好且低門檻
184 - 高度抽象,使用者只須學習 3 個類別
185 - 對所有模型使用的制式化 API
186
187 1. 更低的運算成本,更少的碳排放:
188 - 研究人員可以分享預訓練的模型而非從頭開始訓練
189 - 工程師可以減少計算時間以及生產成本
190 - 數十種模型架構、兩千多個預訓練模型、100 多種語言支援
191
192 1. 對於模型生命週期的每一個部分都面面俱到:
193 - 訓練先進的模型,只需 3 行程式碼
194 - 模型可以在不同深度學習框架之間任意轉換
195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198 - 我們為每種模型架構提供了多個範例來重現原論文結果
199 - 一致的模型內部架構
200 - 模型檔案可單獨使用,便於修改和快速實驗
201
202 ## 什麼情況下我不該用 transformers?
203
204 - 本函式庫並不是模組化的神經網路工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/master/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
207
208 ## 安裝
209
210 ### 使用 pip
211
212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
213
214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
215
216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
217
218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
219
220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
221
222 ```bash
223 pip install transformers
224 ```
225
226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
227
228 ### 使用 conda
229
230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
231
232 🤗 Transformers 可以藉由 conda 依此安裝:
233
234 ```shell script
235 conda install -c huggingface transformers
236 ```
237
238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
239
240 ## 模型架構
241
242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
243
244 目前的檢查點數量: 
245
246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
247
248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
253 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bertgeneration)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/bigbird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot_small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
261 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
262 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
263 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
264 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
265 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
266 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
267 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
268 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
269 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta_v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
270 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
271 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
272 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
273 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
274 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
275 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
276 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoderdecoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
277 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
278 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
279 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
280 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
281 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
282 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
283 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
284 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
285 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
286 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
287 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
288 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
289 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
290 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
291 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
292 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
293 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
294 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
295 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
296 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
297 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
298 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron_bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
299 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
300 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
301 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
302 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
303 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
304 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
305 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
306 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
307 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
308 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
309 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
310 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
311 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
312 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
313 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
314 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
315 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
316 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
317 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
318 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
319 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
320 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transformerxl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
321 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
322 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
323 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech_sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
324 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
325 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
326 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
327 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
328 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlmprophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
329 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlmroberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
330 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
331 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
332 1. Want to contribute a new model? We have a **detailed guide and templates** to walk you through adding a new model. You can find them in the [`templates`](./templates) directory. Remember to check the [contributing guidelines](./CONTRIBUTING.md) and to contact the maintainers or open a new issue to collect feedback before starting your PR.
333
334 To check whether a model already has a Flax, PyTorch or TensorFlow implementation, or whether it has an associated tokenizer in the 🤗 Tokenizers library, please refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
335
336 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on the implementations in [this section](https://huggingface.co/docs/transformers/examples) of the examples documentation.
337
338
339 ## Learn more
340
341 | Section | Description |
342 |-|-|
343 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
344 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
345 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
346 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
347 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for a wide range of tasks |
348 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
349 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
350
351 ## Citation
352
353 We have published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library. If you use the 🤗 Transformers library, you can cite it as follows:
354 ```bibtex
355 @inproceedings{wolf-etal-2020-transformers,
356 title = "Transformers: State-of-the-Art Natural Language Processing",
357 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
358 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
359 month = oct,
360 year = "2020",
361 address = "Online",
362 publisher = "Association for Computational Linguistics",
363 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
364 pages = "38--45"
365 }
366 ```
367
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
43f953cc2eec804eba04e2a9ae164d1a33fd97a8
|
Perceiver IO
# 🌟 New model addition
## Model description
Perceiver is a general architecture that works on many kinds of data, including images, video, audio, 3D point clouds, language and symbolic inputs, multimodal combinations, etc. Perceivers can handle new types of data with only minimal modifications. Perceivers process inputs using domain-agnostic Transformer-style attention. Unlike Transformers, Perceivers first map inputs to a small latent space where processing is cheap and doesn't depend on the input size. This makes it possible to build very deep networks even when using large inputs like images or videos.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
https://arxiv.org/pdf/2107.14795.pdf
## Open source status
* [x] the model implementation is available: https://github.com/deepmind/deepmind-research/tree/master/perceiver (JAX)
* [x] the model weights are available: https://storage.googleapis.com/perceiver_io/language_perceiver_io_bytes.pickle pretrained masked language model (https://github.com/deepmind/deepmind-research/blob/master/perceiver/colabs/masked_language_modelling.ipynb)
* [x] who are the authors: **DeepMind** Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu,
David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff,
Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira
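As a point of reference, the snippet below is a minimal sketch of how the byte-level masked language modeling capability described above could be exercised with the classes the patch further below introduces (`PerceiverTokenizer`, `PerceiverForMaskedLM`). It mirrors the sanity check in the conversion script included in the patch and assumes the `deepmind/language-perceiver` checkpoint referenced there is published on the Hugging Face Hub; it is a sketch, not part of the patch itself.

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

# Assumes the "deepmind/language-perceiver" checkpoint referenced in the patch is available on the Hub.
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
encoding = tokenizer(text, padding="max_length", return_tensors="pt")

# The tokenizer works on raw UTF-8 bytes, so we mask a span of byte positions (" missing.")
# rather than word-piece tokens, following the conversion script in the patch.
encoding.input_ids[0, 51:60] = tokenizer.mask_token_id

with torch.no_grad():
    outputs = model(inputs=encoding.input_ids, attention_mask=encoding.attention_mask)

# Greedily decode the masked span back into text.
masked_predictions = outputs.logits[0, 51:60].argmax(dim=-1).tolist()
print(tokenizer.decode(masked_predictions))
```

The other task-specific classes in the patch (image classification, optical flow, multimodal autoencoding) follow the same pattern around the shared `PerceiverModel` backbone, swapping only the input preprocessor and decoder.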
|
2021-11-22T10:59:06Z
|
<patch>
diff --git a/src/transformers/__init__.py b/src/transformers/__init__.py
--- a/src/transformers/__init__.py
+++ b/src/transformers/__init__.py
@@ -250,6 +250,7 @@
"models.mt5": ["MT5Config"],
"models.openai": ["OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP", "OpenAIGPTConfig", "OpenAIGPTTokenizer"],
"models.pegasus": ["PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP", "PegasusConfig", "PegasusTokenizer"],
+ "models.perceiver": ["PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP", "PerceiverConfig", "PerceiverTokenizer"],
"models.phobert": ["PhobertTokenizer"],
"models.prophetnet": ["PROPHETNET_PRETRAINED_CONFIG_ARCHIVE_MAP", "ProphetNetConfig", "ProphetNetTokenizer"],
"models.qdqbert": ["QDQBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "QDQBertConfig"],
@@ -489,6 +490,7 @@
_import_structure["models.layoutlmv2"].append("LayoutLMv2FeatureExtractor")
_import_structure["models.layoutlmv2"].append("LayoutLMv2Processor")
_import_structure["models.layoutxlm"].append("LayoutXLMProcessor")
+ _import_structure["models.perceiver"].append("PerceiverFeatureExtractor")
_import_structure["models.segformer"].append("SegformerFeatureExtractor")
_import_structure["models.vit"].append("ViTFeatureExtractor")
else:
@@ -1129,6 +1131,21 @@
_import_structure["models.pegasus"].extend(
["PegasusForCausalLM", "PegasusForConditionalGeneration", "PegasusModel", "PegasusPreTrainedModel"]
)
+ _import_structure["models.perceiver"].extend(
+ [
+ "PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST",
+ "PerceiverForImageClassificationConvProcessing",
+ "PerceiverForImageClassificationFourier",
+ "PerceiverForImageClassificationLearned",
+ "PerceiverForMaskedLM",
+ "PerceiverForMultimodalAutoencoding",
+ "PerceiverForOpticalFlow",
+ "PerceiverForSequenceClassification",
+ "PerceiverLayer",
+ "PerceiverModel",
+ "PerceiverPreTrainedModel",
+ ]
+ )
_import_structure["models.prophetnet"].extend(
[
"PROPHETNET_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2247,6 +2264,7 @@
from .models.mt5 import MT5Config
from .models.openai import OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP, OpenAIGPTConfig, OpenAIGPTTokenizer
from .models.pegasus import PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, PegasusConfig, PegasusTokenizer
+ from .models.perceiver import PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP, PerceiverConfig, PerceiverTokenizer
from .models.phobert import PhobertTokenizer
from .models.prophetnet import PROPHETNET_PRETRAINED_CONFIG_ARCHIVE_MAP, ProphetNetConfig, ProphetNetTokenizer
from .models.qdqbert import QDQBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, QDQBertConfig
@@ -2448,6 +2466,7 @@
from .models.imagegpt import ImageGPTFeatureExtractor
from .models.layoutlmv2 import LayoutLMv2FeatureExtractor, LayoutLMv2Processor
from .models.layoutxlm import LayoutXLMProcessor
+ from .models.perceiver import PerceiverFeatureExtractor
from .models.segformer import SegformerFeatureExtractor
from .models.vit import ViTFeatureExtractor
else:
@@ -2982,6 +3001,19 @@
PegasusModel,
PegasusPreTrainedModel,
)
+ from .models.perceiver import (
+ PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST,
+ PerceiverForImageClassificationConvProcessing,
+ PerceiverForImageClassificationFourier,
+ PerceiverForImageClassificationLearned,
+ PerceiverForMaskedLM,
+ PerceiverForMultimodalAutoencoding,
+ PerceiverForOpticalFlow,
+ PerceiverForSequenceClassification,
+ PerceiverLayer,
+ PerceiverModel,
+ PerceiverPreTrainedModel,
+ )
from .models.prophetnet import (
PROPHETNET_PRETRAINED_MODEL_ARCHIVE_LIST,
ProphetNetDecoder,
diff --git a/src/transformers/models/__init__.py b/src/transformers/models/__init__.py
--- a/src/transformers/models/__init__.py
+++ b/src/transformers/models/__init__.py
@@ -77,6 +77,7 @@
mt5,
openai,
pegasus,
+ perceiver,
phobert,
prophetnet,
qdqbert,
diff --git a/src/transformers/models/auto/configuration_auto.py b/src/transformers/models/auto/configuration_auto.py
--- a/src/transformers/models/auto/configuration_auto.py
+++ b/src/transformers/models/auto/configuration_auto.py
@@ -37,6 +37,7 @@
("fnet", "FNetConfig"),
("segformer", "SegformerConfig"),
("vision-text-dual-encoder", "VisionTextDualEncoderConfig"),
+ ("perceiver", "PerceiverConfig"),
("gptj", "GPTJConfig"),
("layoutlmv2", "LayoutLMv2Config"),
("beit", "BeitConfig"),
@@ -119,6 +120,7 @@
("fnet", "FNET_PRETRAINED_CONFIG_ARCHIVE_MAP"),
("pegasus", "PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP"),
("segformer", "SEGFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP"),
+ ("perceiver", "PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP"),
("gptj", "GPTJ_PRETRAINED_CONFIG_ARCHIVE_MAP"),
("layoutlmv2", "LAYOUTLMV2_PRETRAINED_CONFIG_ARCHIVE_MAP"),
("beit", "BEIT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
@@ -194,6 +196,7 @@
("fnet", "FNet"),
("segformer", "SegFormer"),
("vision-text-dual-encoder", "VisionTextDualEncoder"),
+ ("perceiver", "Perceiver"),
("gptj", "GPT-J"),
("beit", "BEiT"),
("rembert", "RemBERT"),
diff --git a/src/transformers/models/auto/modeling_auto.py b/src/transformers/models/auto/modeling_auto.py
--- a/src/transformers/models/auto/modeling_auto.py
+++ b/src/transformers/models/auto/modeling_auto.py
@@ -33,6 +33,7 @@
("fnet", "FNetModel"),
("segformer", "SegformerModel"),
("vision-text-dual-encoder", "VisionTextDualEncoderModel"),
+ ("perceiver", "PerceiverModel"),
("gptj", "GPTJModel"),
("layoutlmv2", "LayoutLMv2Model"),
("beit", "BeitModel"),
@@ -247,6 +248,14 @@
("beit", "BeitForImageClassification"),
("segformer", "SegformerForImageClassification"),
("imagegpt", "ImageGPTForImageClassification"),
+ (
+ "perceiver",
+ (
+ "PerceiverForImageClassificationLearned",
+ "PerceiverForImageClassificationFourier",
+ "PerceiverForImageClassificationConvProcessing",
+ ),
+ ),
]
)
@@ -266,6 +275,7 @@
MODEL_FOR_MASKED_LM_MAPPING_NAMES = OrderedDict(
[
# Model for Masked LM mapping
+ ("perceiver", "PerceiverForMaskedLM"),
("qdqbert", "QDQBertForMaskedLM"),
("fnet", "FNetForMaskedLM"),
("rembert", "RemBertForMaskedLM"),
@@ -337,6 +347,7 @@
MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES = OrderedDict(
[
# Model for Sequence Classification mapping
+ ("perceiver", "PerceiverForSequenceClassification"),
("qdqbert", "QDQBertForSequenceClassification"),
("fnet", "FNetForSequenceClassification"),
("gptj", "GPTJForSequenceClassification"),
diff --git a/src/transformers/models/perceiver/__init__.py b/src/transformers/models/perceiver/__init__.py
new file mode 100644
--- /dev/null
+++ b/src/transformers/models/perceiver/__init__.py
@@ -0,0 +1,72 @@
+# flake8: noqa
+# There's no way to ignore "F401 '...' imported but unused" warnings in this
+# module, but to preserve other warnings. So, don't check this module at all.
+
+# Copyright 2021 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import TYPE_CHECKING
+
+from ...file_utils import _LazyModule, is_tokenizers_available, is_torch_available, is_vision_available
+
+
+_import_structure = {
+ "configuration_perceiver": ["PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP", "PerceiverConfig"],
+ "tokenization_perceiver": ["PerceiverTokenizer"],
+}
+
+if is_vision_available():
+ _import_structure["feature_extraction_perceiver"] = ["PerceiverFeatureExtractor"]
+
+if is_torch_available():
+ _import_structure["modeling_perceiver"] = [
+ "PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST",
+ "PerceiverForImageClassificationConvProcessing",
+ "PerceiverForImageClassificationFourier",
+ "PerceiverForImageClassificationLearned",
+ "PerceiverForMaskedLM",
+ "PerceiverForMultimodalAutoencoding",
+ "PerceiverForOpticalFlow",
+ "PerceiverForSequenceClassification",
+ "PerceiverLayer",
+ "PerceiverModel",
+ "PerceiverPreTrainedModel",
+ ]
+
+
+if TYPE_CHECKING:
+ from .configuration_perceiver import PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP, PerceiverConfig
+ from .tokenization_perceiver import PerceiverTokenizer
+
+ if is_vision_available():
+ from .feature_extraction_perceiver import PerceiverFeatureExtractor
+
+ if is_torch_available():
+ from .modeling_perceiver import (
+ PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST,
+ PerceiverForImageClassificationConvProcessing,
+ PerceiverForImageClassificationFourier,
+ PerceiverForImageClassificationLearned,
+ PerceiverForMaskedLM,
+ PerceiverForMultimodalAutoencoding,
+ PerceiverForOpticalFlow,
+ PerceiverForSequenceClassification,
+ PerceiverLayer,
+ PerceiverModel,
+ PerceiverPreTrainedModel,
+ )
+
+else:
+ import sys
+
+ sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
diff --git a/src/transformers/models/perceiver/configuration_perceiver.py b/src/transformers/models/perceiver/configuration_perceiver.py
new file mode 100644
--- /dev/null
+++ b/src/transformers/models/perceiver/configuration_perceiver.py
@@ -0,0 +1,171 @@
+# coding=utf-8
+# Copyright Deepmind and The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" Perceiver model configuration """
+
+from ...configuration_utils import PretrainedConfig
+from ...utils import logging
+
+
+logger = logging.get_logger(__name__)
+
+PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP = {
+ "deepmind/language-perceiver": "https://huggingface.co/deepmind/language-perceiver/resolve/main/config.json",
+ # See all Perceiver models at https://huggingface.co/models?filter=perceiver
+}
+
+
+class PerceiverConfig(PretrainedConfig):
+ r"""
+ This is the configuration class to store the configuration of a :class:`~transformers.PerceiverModel`. It is used
+ to instantiate a Perceiver model according to the specified arguments, defining the model architecture.
+ Instantiating a configuration with the defaults will yield a similar configuration to that of the Perceiver
+ `deepmind/language-perceiver <https://huggingface.co/deepmind/language-perceiver>`__ architecture.
+
+ Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model
+ outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information.
+
+ Args:
+ num_latents (:obj:`int`, `optional`, defaults to 256):
+ The number of latents.
+ d_latents (:obj:`int`, `optional`, defaults to 1280):
+ Dimension of the latent embeddings.
+ d_model (:obj:`int`, `optional`, defaults to 768):
+ Dimension of the inputs.
+ num_blocks (:obj:`int`, `optional`, defaults to 1):
+ Number of blocks in the Transformer encoder.
+ num_self_attends_per_block (:obj:`int`, `optional`, defaults to 26):
+ The number of self-attention layers per block.
+ num_self_attention_heads (:obj:`int`, `optional`, defaults to 8):
+ Number of attention heads for each self-attention layer in the Transformer encoder.
+ num_cross_attention_heads (:obj:`int`, `optional`, defaults to 8):
+ Number of attention heads for each cross-attention layer in the Transformer encoder.
+ qk_channels (:obj:`int`, `optional`):
+ Dimension to project the queries + keys before applying attention in the cross-attention and self-attention
+ layers of the encoder. Will default to preserving the dimension of the queries if not specified.
+ v_channels (:obj:`int`, `optional`):
+ Dimension to project the values before applying attention in the cross-attention and self-attention layers
+ of the encoder. Will default to preserving the dimension of the queries if not specified.
+ cross_attention_shape_for_attention (:obj:`str`, `optional`, defaults to :obj:`'kv'`):
+ Dimension to use when downsampling the queries and keys in the cross-attention layer of the encoder.
+ self_attention_widening_factor (:obj:`int`, `optional`, defaults to 1):
+ Dimension of the feed-forward layer in the self-attention layers of the Transformer encoder.
+ cross_attention_widening_factor (:obj:`int`, `optional`, defaults to 1):
+ Dimension of the feed-forward layer in the cross-attention layer of the Transformer encoder.
+ hidden_act (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu"`):
+ The non-linear activation function (function or string) in the encoder and pooler. If string,
+ :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported.
+ attention_probs_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
+ The dropout ratio for the attention probabilities.
+ initializer_range (:obj:`float`, `optional`, defaults to 0.02):
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+ layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-12):
+ The epsilon used by the layer normalization layers.
+ use_query_residual (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to add a query residual in the cross-attention layer of the encoder.
+ vocab_size (:obj:`int`, `optional`, defaults to 262):
+ Vocabulary size for the masked language modeling model.
+ max_position_embeddings (:obj:`int`, `optional`, defaults to 2048):
+ The maximum sequence length that the masked language modeling model might ever be used with. Typically set
+ this to something large just in case (e.g., 512 or 1024 or 2048).
+ image_size (:obj:`int`, `optional`, defaults to 56):
+ Size of the images after preprocessing, for :class:`~transformers.PerceiverForImageClassificationLearned`.
+ train_size (:obj:`List[int]`, `optional`, defaults to [368, 496]):
+ Training size of the images for the optical flow model.
+ num_frames (:obj:`int`, `optional`, defaults to 16):
+ Number of video frames used for the multimodal autoencoding model.
+ audio_samples_per_frame (:obj:`int`, `optional`, defaults to 1920):
+ Number of audio samples per frame for the multimodal autoencoding model.
+ samples_per_patch (:obj:`int`, `optional`, defaults to 16):
+ Number of audio samples per patch when preprocessing the audio for the multimodal autoencoding model.
+ output_shape (:obj:`List[int]`, `optional`, defaults to :obj:`[1, 16, 224, 224]`):
+ Shape of the output for the multimodal autoencoding model.
+
+ Example::
+
+ >>> from transformers import PerceiverModel, PerceiverConfig
+
+ >>> # Initializing a Perceiver deepmind/language-perceiver style configuration
+ >>> configuration = PerceiverConfig()
+
+ >>> # Initializing a model from the deepmind/language-perceiver style configuration
+ >>> model = PerceiverModel(configuration)
+
+ >>> # Accessing the model configuration
+ >>> configuration = model.config
+ """
+ model_type = "perceiver"
+
+ def __init__(
+ self,
+ num_latents=256,
+ d_latents=1280,
+ d_model=768,
+ num_blocks=1,
+ num_self_attends_per_block=26,
+ num_self_attention_heads=8,
+ num_cross_attention_heads=8,
+ qk_channels=None,
+ v_channels=None,
+ cross_attention_shape_for_attention="kv",
+ self_attention_widening_factor=1,
+ cross_attention_widening_factor=1,
+ hidden_act="gelu",
+ attention_probs_dropout_prob=0.1,
+ position_embedding_init_scale=0.02,
+ initializer_range=0.02,
+ layer_norm_eps=1e-12,
+ is_encoder_decoder=False,
+ use_query_residual=True,
+ vocab_size=262,
+ max_position_embeddings=2048,
+ image_size=56,
+ train_size=[368, 496],
+ num_frames=16,
+ audio_samples_per_frame=1920,
+ samples_per_patch=16,
+ output_shape=[1, 16, 224, 224],
+ **kwargs
+ ):
+ super().__init__(**kwargs)
+
+ self.num_latents = num_latents
+ self.d_latents = d_latents
+ self.d_model = d_model
+ self.num_blocks = num_blocks
+ self.num_self_attends_per_block = num_self_attends_per_block
+ self.num_self_attention_heads = num_self_attention_heads
+ self.num_cross_attention_heads = num_cross_attention_heads
+ self.qk_channels = qk_channels
+ self.v_channels = v_channels
+ self.cross_attention_shape_for_attention = cross_attention_shape_for_attention
+ self.self_attention_widening_factor = self_attention_widening_factor
+ self.cross_attention_widening_factor = cross_attention_widening_factor
+ self.hidden_act = hidden_act
+ self.attention_probs_dropout_prob = attention_probs_dropout_prob
+ self.initializer_range = initializer_range
+ self.layer_norm_eps = layer_norm_eps
+ self.use_query_residual = use_query_residual
+ # masked language modeling attributes
+ self.vocab_size = vocab_size
+ self.max_position_embeddings = max_position_embeddings
+ # image classification attributes
+ self.image_size = image_size
+ # flow attributes
+ self.train_size = train_size
+ # multimodal autoencoding attributes
+ self.num_frames = num_frames
+ self.audio_samples_per_frame = audio_samples_per_frame
+ self.samples_per_patch = samples_per_patch
+ self.output_shape = output_shape
diff --git a/src/transformers/models/perceiver/convert_perceiver_haiku_to_pytorch.py b/src/transformers/models/perceiver/convert_perceiver_haiku_to_pytorch.py
new file mode 100644
--- /dev/null
+++ b/src/transformers/models/perceiver/convert_perceiver_haiku_to_pytorch.py
@@ -0,0 +1,468 @@
+# coding=utf-8
+# Copyright 2021 The HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Convert Perceiver checkpoints originally implemented in Haiku."""
+
+
+import argparse
+import json
+import pickle
+from pathlib import Path
+
+import numpy as np
+import torch
+from PIL import Image
+
+import haiku as hk
+import requests
+from huggingface_hub import cached_download, hf_hub_url
+from transformers import (
+ PerceiverConfig,
+ PerceiverFeatureExtractor,
+ PerceiverForImageClassificationConvProcessing,
+ PerceiverForImageClassificationFourier,
+ PerceiverForImageClassificationLearned,
+ PerceiverForMaskedLM,
+ PerceiverForMultimodalAutoencoding,
+ PerceiverForOpticalFlow,
+ PerceiverTokenizer,
+)
+from transformers.utils import logging
+
+
+logging.set_verbosity_info()
+logger = logging.get_logger(__name__)
+
+
+def prepare_img():
+ # We will verify our results on an image of a dog
+ url = "https://storage.googleapis.com/perceiver_io/dalmation.jpg"
+ im = Image.open(requests.get(url, stream=True).raw)
+ return im
+
+
+def rename_keys(state_dict, architecture):
+ for name in list(state_dict):
+ param = state_dict.pop(name)
+
+ # PREPROCESSORS
+ # rename text preprocessor embeddings (for MLM model)
+ name = name.replace("embed/embeddings", "input_preprocessor.embeddings.weight")
+ if name.startswith("trainable_position_encoding/pos_embs"):
+ name = name.replace(
+ "trainable_position_encoding/pos_embs", "input_preprocessor.position_embeddings.weight"
+ )
+
+ # rename image preprocessor embeddings (for image classification model with learned position embeddings)
+ name = name.replace("image_preprocessor/~/conv2_d/w", "input_preprocessor.convnet_1x1.weight")
+ name = name.replace("image_preprocessor/~/conv2_d/b", "input_preprocessor.convnet_1x1.bias")
+ name = name.replace(
+ "image_preprocessor/~_build_network_inputs/trainable_position_encoding/pos_embs",
+ "input_preprocessor.position_embeddings.position_embeddings",
+ )
+ name = name.replace(
+ "image_preprocessor/~_build_network_inputs/position_encoding_projector/linear/w",
+ "input_preprocessor.positions_projection.weight",
+ )
+ name = name.replace(
+ "image_preprocessor/~_build_network_inputs/position_encoding_projector/linear/b",
+ "input_preprocessor.positions_projection.bias",
+ )
+
+ # rename image preprocessor embeddings (for image classification model with conv processing)
+ if "counter" in name or "hidden" in name:
+ continue
+ name = name.replace(
+ "image_preprocessor/~/conv2_d_downsample/~/conv/w", "input_preprocessor.convnet.conv.weight"
+ )
+ name = name.replace(
+ "image_preprocessor/~/conv2_d_downsample/~/batchnorm/offset", "input_preprocessor.convnet.batchnorm.bias"
+ )
+ name = name.replace(
+ "image_preprocessor/~/conv2_d_downsample/~/batchnorm/scale", "input_preprocessor.convnet.batchnorm.weight"
+ )
+ name = name.replace(
+ "image_preprocessor/~/conv2_d_downsample/~/batchnorm/~/mean_ema/average",
+ "input_preprocessor.convnet.batchnorm.running_mean",
+ )
+ name = name.replace(
+ "image_preprocessor/~/conv2_d_downsample/~/batchnorm/~/var_ema/average",
+ "input_preprocessor.convnet.batchnorm.running_var",
+ )
+
+ # rename image preprocessor embeddings (for optical flow model)
+ name = name.replace("image_preprocessor/patches_linear/b", "input_preprocessor.conv_after_patches.bias")
+ name = name.replace("image_preprocessor/patches_linear/w", "input_preprocessor.conv_after_patches.weight")
+
+ # rename multimodal preprocessor embeddings
+ name = name.replace("multimodal_preprocessor/audio_mask_token/pos_embs", "input_preprocessor.mask.audio")
+ name = name.replace("multimodal_preprocessor/audio_padding/pos_embs", "input_preprocessor.padding.audio")
+ name = name.replace("multimodal_preprocessor/image_mask_token/pos_embs", "input_preprocessor.mask.image")
+ name = name.replace("multimodal_preprocessor/image_padding/pos_embs", "input_preprocessor.padding.image")
+ name = name.replace("multimodal_preprocessor/label_mask_token/pos_embs", "input_preprocessor.mask.label")
+ name = name.replace("multimodal_preprocessor/label_padding/pos_embs", "input_preprocessor.padding.label")
+
+ # DECODERS
+ # rename prefix of decoders
+ # multimodal autoencoding model
+ name = name.replace(
+ "multimodal_decoder/~/basic_decoder/cross_attention/", "decoder.decoder.decoding_cross_attention."
+ )
+ name = name.replace("multimodal_decoder/~decoder_query/audio_padding/pos_embs", "decoder.padding.audio")
+ name = name.replace("multimodal_decoder/~decoder_query/image_padding/pos_embs", "decoder.padding.image")
+ name = name.replace("multimodal_decoder/~decoder_query/label_padding/pos_embs", "decoder.padding.label")
+ name = name.replace("multimodal_decoder/~/basic_decoder/output/b", "decoder.decoder.final_layer.bias")
+ name = name.replace("multimodal_decoder/~/basic_decoder/output/w", "decoder.decoder.final_layer.weight")
+ if architecture == "multimodal_autoencoding":
+ name = name.replace(
+ "classification_decoder/~/basic_decoder/~/trainable_position_encoding/pos_embs",
+ "decoder.modalities.label.decoder.output_position_encodings.position_embeddings",
+ )
+ # flow model
+ name = name.replace(
+ "flow_decoder/~/basic_decoder/cross_attention/", "decoder.decoder.decoding_cross_attention."
+ )
+ name = name.replace("flow_decoder/~/basic_decoder/output/w", "decoder.decoder.final_layer.weight")
+ name = name.replace("flow_decoder/~/basic_decoder/output/b", "decoder.decoder.final_layer.bias")
+ # image models
+ name = name.replace(
+ "classification_decoder/~/basic_decoder/~/trainable_position_encoding/pos_embs",
+ "decoder.decoder.output_position_encodings.position_embeddings",
+ )
+ name = name.replace(
+ "basic_decoder/~/trainable_position_encoding/pos_embs",
+ "decoder.output_position_encodings.position_embeddings",
+ )
+ name = name.replace(
+ "classification_decoder/~/basic_decoder/cross_attention/", "decoder.decoder.decoding_cross_attention."
+ )
+ name = name.replace("classification_decoder/~/basic_decoder/output/b", "decoder.decoder.final_layer.bias")
+ name = name.replace("classification_decoder/~/basic_decoder/output/w", "decoder.decoder.final_layer.weight")
+ name = name.replace("classification_decoder/~/basic_decoder/~/", "decoder.decoder.")
+ name = name.replace("basic_decoder/cross_attention/", "decoder.decoding_cross_attention.")
+ name = name.replace("basic_decoder/~/", "decoder.")
+
+ # POSTPROCESSORS
+ name = name.replace(
+ "projection_postprocessor/linear/b", "output_postprocessor.modalities.image.classifier.bias"
+ )
+ name = name.replace(
+ "projection_postprocessor/linear/w", "output_postprocessor.modalities.image.classifier.weight"
+ )
+ name = name.replace(
+ "classification_postprocessor/linear/b", "output_postprocessor.modalities.label.classifier.bias"
+ )
+ name = name.replace(
+ "classification_postprocessor/linear/w", "output_postprocessor.modalities.label.classifier.weight"
+ )
+ name = name.replace("audio_postprocessor/linear/b", "output_postprocessor.modalities.audio.classifier.bias")
+ name = name.replace("audio_postprocessor/linear/w", "output_postprocessor.modalities.audio.classifier.weight")
+
+ # PERCEIVER MODEL
+
+ # rename latent embeddings
+ name = name.replace("perceiver_encoder/~/trainable_position_encoding/pos_embs", "embeddings.latents")
+ # rename latent embeddings (for multimodal model)
+ name = name.replace("encoder/~/trainable_position_encoding/pos_embs", "embeddings.latents")
+
+ # rename prefixes
+ if name.startswith("perceiver_encoder/~/"):
+ if "self_attention" in name:
+ suffix = "self_attends."
+ else:
+ suffix = ""
+ name = name.replace("perceiver_encoder/~/", "encoder." + suffix)
+ if name.startswith("encoder/~/"):
+ if "self_attention" in name:
+ suffix = "self_attends."
+ else:
+ suffix = ""
+ name = name.replace("encoder/~/", "encoder." + suffix)
+ # rename layernorm parameters
+ if "offset" in name:
+ name = name.replace("offset", "bias")
+ if "scale" in name:
+ name = name.replace("scale", "weight")
+ # in HuggingFace, the layernorm in between attention + MLP is just called "layernorm"
+ # rename layernorm in between attention + MLP of cross-attention
+ if "cross_attention" in name and "layer_norm_2" in name:
+ name = name.replace("layer_norm_2", "layernorm")
+ # rename layernorm in between attention + MLP of self-attention
+ if "self_attention" in name and "layer_norm_1" in name:
+ name = name.replace("layer_norm_1", "layernorm")
+
+ # in HuggingFace, the layernorms for queries + keys are called "layernorm1" and "layernorm2"
+ if "cross_attention" in name and "layer_norm_1" in name:
+ name = name.replace("layer_norm_1", "attention.self.layernorm2")
+ if "cross_attention" in name and "layer_norm" in name:
+ name = name.replace("layer_norm", "attention.self.layernorm1")
+ if "self_attention" in name and "layer_norm" in name:
+ name = name.replace("layer_norm", "attention.self.layernorm1")
+
+ # rename special characters by dots
+ name = name.replace("-", ".")
+ name = name.replace("/", ".")
+ # rename keys, queries, values and output of attention layers
+ if ("cross_attention" in name or "self_attention" in name) and "mlp" not in name:
+ if "linear.b" in name:
+ name = name.replace("linear.b", "self.query.bias")
+ if "linear.w" in name:
+ name = name.replace("linear.w", "self.query.weight")
+ if "linear_1.b" in name:
+ name = name.replace("linear_1.b", "self.key.bias")
+ if "linear_1.w" in name:
+ name = name.replace("linear_1.w", "self.key.weight")
+ if "linear_2.b" in name:
+ name = name.replace("linear_2.b", "self.value.bias")
+ if "linear_2.w" in name:
+ name = name.replace("linear_2.w", "self.value.weight")
+ if "linear_3.b" in name:
+ name = name.replace("linear_3.b", "output.dense.bias")
+ if "linear_3.w" in name:
+ name = name.replace("linear_3.w", "output.dense.weight")
+ if "self_attention_" in name:
+ name = name.replace("self_attention_", "")
+ if "self_attention" in name:
+ name = name.replace("self_attention", "0")
+ # rename dense layers of 2-layer MLP
+ if "mlp" in name:
+ if "linear.b" in name:
+ name = name.replace("linear.b", "dense1.bias")
+ if "linear.w" in name:
+ name = name.replace("linear.w", "dense1.weight")
+ if "linear_1.b" in name:
+ name = name.replace("linear_1.b", "dense2.bias")
+ if "linear_1.w" in name:
+ name = name.replace("linear_1.w", "dense2.weight")
+
+ # finally, TRANSPOSE if kernel and not embedding layer, and set value
+ if name[-6:] == "weight" and "embeddings" not in name:
+ param = np.transpose(param)
+
+ # if batchnorm, we need to squeeze it
+ if "batchnorm" in name:
+ param = np.squeeze(param)
+
+ if "embedding_decoder" not in name:
+ state_dict["perceiver." + name] = torch.from_numpy(param)
+ else:
+ state_dict[name] = torch.from_numpy(param)
+
+
+@torch.no_grad()
+def convert_perceiver_checkpoint(pickle_file, pytorch_dump_folder_path, architecture="MLM"):
+ """
+ Copy/paste/tweak model's weights to our Perceiver structure.
+ """
+
+ # load parameters as FlatMapping data structure
+ with open(pickle_file, "rb") as f:
+ checkpoint = pickle.loads(f.read())
+
+ state = None
+ if isinstance(checkpoint, dict) and architecture in [
+ "image_classification",
+ "image_classification_fourier",
+ "image_classification_conv",
+ ]:
+ # the image classification_conv checkpoint also has batchnorm states (running_mean and running_var)
+ params = checkpoint["params"]
+ state = checkpoint["state"]
+ else:
+ params = checkpoint
+
+ # turn into initial state dict
+ state_dict = dict()
+ for scope_name, parameters in hk.data_structures.to_mutable_dict(params).items():
+ for param_name, param in parameters.items():
+ state_dict[scope_name + "/" + param_name] = param
+
+ if state is not None:
+ # add state variables
+ for scope_name, parameters in hk.data_structures.to_mutable_dict(state).items():
+ for param_name, param in parameters.items():
+ state_dict[scope_name + "/" + param_name] = param
+
+ # rename keys
+ rename_keys(state_dict, architecture=architecture)
+
+ # load HuggingFace model
+ config = PerceiverConfig()
+ subsampling = None
+ repo_id = "datasets/huggingface/label-files"
+ if architecture == "MLM":
+ config.qk_channels = 8 * 32
+ config.v_channels = 1280
+ model = PerceiverForMaskedLM(config)
+ elif "image_classification" in architecture:
+ config.num_latents = 512
+ config.d_latents = 1024
+ config.d_model = 512
+ config.num_blocks = 8
+ config.num_self_attends_per_block = 6
+ config.num_cross_attention_heads = 1
+ config.num_self_attention_heads = 8
+ config.qk_channels = None
+ config.v_channels = None
+ # set labels
+ config.num_labels = 1000
+ filename = "imagenet-1k-id2label.json"
+ id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename)), "r"))
+ id2label = {int(k): v for k, v in id2label.items()}
+ config.id2label = id2label
+ config.label2id = {v: k for k, v in id2label.items()}
+ if architecture == "image_classification":
+ config.image_size = 224
+ model = PerceiverForImageClassificationLearned(config)
+ elif architecture == "image_classification_fourier":
+ config.d_model = 261
+ model = PerceiverForImageClassificationFourier(config)
+ elif architecture == "image_classification_conv":
+ config.d_model = 322
+ model = PerceiverForImageClassificationConvProcessing(config)
+ else:
+ raise ValueError(f"Architecture {architecture} not supported")
+ elif architecture == "optical_flow":
+ config.num_latents = 2048
+ config.d_latents = 512
+ config.d_model = 322
+ config.num_blocks = 1
+ config.num_self_attends_per_block = 24
+ config.num_self_attention_heads = 16
+ config.num_cross_attention_heads = 1
+ model = PerceiverForOpticalFlow(config)
+ elif architecture == "multimodal_autoencoding":
+ config.num_latents = 28 * 28 * 1
+ config.d_latents = 512
+ config.d_model = 704
+ config.num_blocks = 1
+ config.num_self_attends_per_block = 8
+ config.num_self_attention_heads = 8
+ config.num_cross_attention_heads = 1
+ config.num_labels = 700
+ # define dummy inputs + subsampling (as each forward pass is only on a chunk of image + audio data)
+ images = torch.randn((1, 16, 3, 224, 224))
+ audio = torch.randn((1, 30720, 1))
+ nchunks = 128
+ image_chunk_size = np.prod((16, 224, 224)) // nchunks
+ audio_chunk_size = audio.shape[1] // config.samples_per_patch // nchunks
+ # process the first chunk
+ chunk_idx = 0
+ subsampling = {
+ "image": torch.arange(image_chunk_size * chunk_idx, image_chunk_size * (chunk_idx + 1)),
+ "audio": torch.arange(audio_chunk_size * chunk_idx, audio_chunk_size * (chunk_idx + 1)),
+ "label": None,
+ }
+ model = PerceiverForMultimodalAutoencoding(config)
+ # set labels
+ filename = "kinetics700-id2label.json"
+ id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename)), "r"))
+ id2label = {int(k): v for k, v in id2label.items()}
+ config.id2label = id2label
+ config.label2id = {v: k for k, v in id2label.items()}
+ else:
+ raise ValueError(f"Architecture {architecture} not supported")
+ model.eval()
+
+ # load weights
+ model.load_state_dict(state_dict)
+
+ # prepare dummy input
+ input_mask = None
+ if architecture == "MLM":
+ tokenizer = PerceiverTokenizer.from_pretrained("/Users/NielsRogge/Documents/Perceiver/Tokenizer files")
+ text = "This is an incomplete sentence where some words are missing."
+ encoding = tokenizer(text, padding="max_length", return_tensors="pt")
+ # mask " missing.". Note that the model performs much better if the masked chunk starts with a space.
+ encoding.input_ids[0, 51:60] = tokenizer.mask_token_id
+ inputs = encoding.input_ids
+ input_mask = encoding.attention_mask
+ elif architecture in ["image_classification", "image_classification_fourier", "image_classification_conv"]:
+ feature_extractor = PerceiverFeatureExtractor()
+ image = prepare_img()
+ encoding = feature_extractor(image, return_tensors="pt")
+ inputs = encoding.pixel_values
+ elif architecture == "optical_flow":
+ inputs = torch.randn(1, 2, 27, 368, 496)
+ elif architecture == "multimodal_autoencoding":
+ images = torch.randn((1, 16, 3, 224, 224))
+ audio = torch.randn((1, 30720, 1))
+ inputs = dict(image=images, audio=audio, label=torch.zeros((images.shape[0], 700)))
+
+ # forward pass
+ if architecture == "multimodal_autoencoding":
+ outputs = model(inputs=inputs, attention_mask=input_mask, subsampled_output_points=subsampling)
+ else:
+ outputs = model(inputs=inputs, attention_mask=input_mask)
+ logits = outputs.logits
+
+ # verify logits
+ if not isinstance(logits, dict):
+ print("Shape of logits:", logits.shape)
+ else:
+ for k, v in logits.items():
+ print(f"Shape of logits of modality {k}", v.shape)
+
+ if architecture == "MLM":
+ expected_slice = torch.tensor(
+ [[-11.8336, -11.6850, -11.8483], [-12.8149, -12.5863, -12.7904], [-12.8440, -12.6410, -12.8646]]
+ )
+ assert torch.allclose(logits[0, :3, :3], expected_slice)
+ masked_tokens_predictions = logits[0, 51:60].argmax(dim=-1).tolist()
+ expected_list = [38, 115, 111, 121, 121, 111, 116, 109, 52]
+ assert masked_tokens_predictions == expected_list
+ print("Greedy predictions:")
+ print(masked_tokens_predictions)
+ print()
+ print("Predicted string:")
+ print(tokenizer.decode(masked_tokens_predictions))
+
+ elif architecture in ["image_classification", "image_classification_fourier", "image_classification_conv"]:
+ print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
+
+ # Finally, save files
+ Path(pytorch_dump_folder_path).mkdir(exist_ok=True)
+ print(f"Saving model to {pytorch_dump_folder_path}")
+ model.save_pretrained(pytorch_dump_folder_path)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ # Required parameters
+ parser.add_argument(
+ "--pickle_file",
+ type=str,
+ default=None,
+ required=True,
+ help="Path to local pickle file of a Perceiver checkpoint you'd like to convert.",
+ )
+ parser.add_argument(
+ "--pytorch_dump_folder_path",
+ default=None,
+ type=str,
+ required=True,
+ help="Path to the output PyTorch model directory, provided as a string.",
+ )
+ parser.add_argument(
+ "--architecture",
+ default="MLM",
+ type=str,
+ help="""
+ Architecture, provided as a string. One of 'MLM', 'image_classification', 'image_classification_fourier',
+ 'image_classification_conv', 'optical_flow' or 'multimodal_autoencoding'.
+ """,
+ )
+
+ args = parser.parse_args()
+ convert_perceiver_checkpoint(args.pickle_file, args.pytorch_dump_folder_path, args.architecture)
diff --git a/src/transformers/models/perceiver/feature_extraction_perceiver.py b/src/transformers/models/perceiver/feature_extraction_perceiver.py
new file mode 100644
--- /dev/null
+++ b/src/transformers/models/perceiver/feature_extraction_perceiver.py
@@ -0,0 +1,189 @@
+# coding=utf-8
+# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Feature extractor class for Perceiver."""
+
+from typing import Optional, Union
+
+import numpy as np
+from PIL import Image
+
+from ...feature_extraction_utils import BatchFeature, FeatureExtractionMixin
+from ...file_utils import TensorType
+from ...image_utils import (
+ IMAGENET_DEFAULT_MEAN,
+ IMAGENET_DEFAULT_STD,
+ ImageFeatureExtractionMixin,
+ ImageInput,
+ is_torch_tensor,
+)
+from ...utils import logging
+
+
+logger = logging.get_logger(__name__)
+
+
+class PerceiverFeatureExtractor(FeatureExtractionMixin, ImageFeatureExtractionMixin):
+ r"""
+ Constructs a Perceiver feature extractor.
+
+ This feature extractor inherits from :class:`~transformers.ImageFeatureExtractionMixin` which contains most of the
+ main methods. Users should refer to this superclass for more information regarding those methods.
+
+ Args:
+ do_center_crop (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to crop the input at the center. If the input size is smaller than :obj:`crop_size` along any edge,
+ the image is padded with 0's and then center cropped.
+ crop_size (:obj:`int`, `optional`, defaults to 256):
+ Desired output size when applying center-cropping. Only has an effect if :obj:`do_center_crop` is set to
+ :obj:`True`.
+ do_resize (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to resize the input to a certain :obj:`size`.
+ size (:obj:`int` or :obj:`Tuple(int)`, `optional`, defaults to 224):
+ Resize the input to the given size. If a tuple is provided, it should be (width, height). If only an
+ integer is provided, then the input will be resized to (size, size). Only has an effect if :obj:`do_resize`
+ is set to :obj:`True`.
+ resample (:obj:`int`, `optional`, defaults to :obj:`PIL.Image.BICUBIC`):
+ An optional resampling filter. This can be one of :obj:`PIL.Image.NEAREST`, :obj:`PIL.Image.BOX`,
+ :obj:`PIL.Image.BILINEAR`, :obj:`PIL.Image.HAMMING`, :obj:`PIL.Image.BICUBIC` or :obj:`PIL.Image.LANCZOS`.
+ Only has an effect if :obj:`do_resize` is set to :obj:`True`.
+ do_normalize (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether or not to normalize the input with :obj:`image_mean` and :obj:`image_std`.
+ image_mean (:obj:`List[int]`, defaults to :obj:`[0.485, 0.456, 0.406]`):
+ The sequence of means for each channel, to be used when normalizing images.
+ image_std (:obj:`List[int]`, defaults to :obj:`[0.229, 0.224, 0.225]`):
+ The sequence of standard deviations for each channel, to be used when normalizing images.
+ """
+
+ model_input_names = ["pixel_values"]
+
+ def __init__(
+ self,
+ do_center_crop=True,
+ crop_size=256,
+ do_resize=True,
+ size=224,
+ resample=Image.BICUBIC,
+ do_normalize=True,
+ image_mean=None,
+ image_std=None,
+ **kwargs
+ ):
+ super().__init__(**kwargs)
+ self.do_center_crop = do_center_crop
+ self.crop_size = crop_size
+ self.do_resize = do_resize
+ self.size = size
+ self.resample = resample
+ self.do_normalize = do_normalize
+ self.image_mean = image_mean if image_mean is not None else IMAGENET_DEFAULT_MEAN
+ self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
+
+ def center_crop(self, image):
+ """
+ Crops :obj:`image` to `self.crop_size` using a center crop. Note that if the image is too small to be cropped
+ to the size given, it will be padded (so the returned result has the size asked).
+
+ Args:
+ image (:obj:`PIL.Image.Image` or :obj:`np.ndarray` or :obj:`torch.Tensor`):
+ The image to crop.
+ """
+
+ if isinstance(image, Image.Image):
+ image = self.to_numpy_array(image)
+
+ image_height, image_width = image.shape[-2:]
+
+ padded_center_crop_size = (
+ (self.size / (self.crop_size)) * np.minimum(image_height, image_width).astype(np.float32)
+ ).astype(np.int32)
+
+ offset_height = ((image_height - padded_center_crop_size) + 1) // 2
+ offset_width = ((image_width - padded_center_crop_size) + 1) // 2
+ crop_window = [offset_height, offset_width, padded_center_crop_size, padded_center_crop_size]
+
+ image = image[
+ :, crop_window[0] : crop_window[0] + crop_window[2], crop_window[1] : crop_window[1] + crop_window[3]
+ ]
+
+ return image
+
+ def __call__(
+ self, images: ImageInput, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs
+ ) -> BatchFeature:
+ """
+ Main method to prepare for the model one or several image(s).
+
+ .. warning::
+
+ NumPy arrays and PyTorch tensors are converted to PIL images when resizing, so it is most efficient to pass
+ PIL images.
+
+ Args:
+ images (:obj:`PIL.Image.Image`, :obj:`np.ndarray`, :obj:`torch.Tensor`, :obj:`List[PIL.Image.Image]`, :obj:`List[np.ndarray]`, :obj:`List[torch.Tensor]`):
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
+ tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
+ number of channels, H and W are image height and width.
+
+ return_tensors (:obj:`str` or :class:`~transformers.file_utils.TensorType`, `optional`, defaults to :obj:`'np'`):
+ If set, will return tensors of a particular framework. Acceptable values are:
+
+ * :obj:`'tf'`: Return TensorFlow :obj:`tf.constant` objects.
+ * :obj:`'pt'`: Return PyTorch :obj:`torch.Tensor` objects.
+ * :obj:`'np'`: Return NumPy :obj:`np.ndarray` objects.
+ * :obj:`'jax'`: Return JAX :obj:`jnp.ndarray` objects.
+
+ Returns:
+ :class:`~transformers.BatchFeature`: A :class:`~transformers.BatchFeature` with the following fields:
+
+ - **pixel_values** -- Pixel values to be fed to a model, of shape (batch_size, num_channels, height,
+ width).
+ """
+ # Input type checking for clearer error
+ valid_images = False
+
+ # Check that images has a valid type
+ if isinstance(images, (Image.Image, np.ndarray)) or is_torch_tensor(images):
+ valid_images = True
+ elif isinstance(images, (list, tuple)):
+ if len(images) == 0 or isinstance(images[0], (Image.Image, np.ndarray)) or is_torch_tensor(images[0]):
+ valid_images = True
+
+ if not valid_images:
+ raise ValueError(
+                "Images must be of type `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` (single example),"
+ "`List[PIL.Image.Image]`, `List[np.ndarray]` or `List[torch.Tensor]` (batch of examples)."
+ )
+
+ is_batched = bool(
+ isinstance(images, (list, tuple))
+ and (isinstance(images[0], (Image.Image, np.ndarray)) or is_torch_tensor(images[0]))
+ )
+
+ if not is_batched:
+ images = [images]
+
+ # transformations (center cropping + resizing + normalization)
+ if self.do_center_crop and self.crop_size is not None:
+ images = [self.center_crop(image) for image in images]
+ if self.do_resize and self.size is not None and self.resample is not None:
+ images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
+ if self.do_normalize:
+ images = [self.normalize(image=image, mean=self.image_mean, std=self.image_std) for image in images]
+
+ # return as BatchFeature
+ data = {"pixel_values": images}
+ encoded_inputs = BatchFeature(data=data, tensor_type=return_tensors)
+
+ return encoded_inputs
diff --git a/src/transformers/models/perceiver/modeling_perceiver.py b/src/transformers/models/perceiver/modeling_perceiver.py
new file mode 100755
--- /dev/null
+++ b/src/transformers/models/perceiver/modeling_perceiver.py
@@ -0,0 +1,3118 @@
+# coding=utf-8
+# Copyright 2021 Deepmind and The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" PyTorch Perceiver model. """
+
+import abc
+import math
+from dataclasses import dataclass
+from functools import reduce
+from operator import __add__
+from typing import Any, Callable, Mapping, Optional, Tuple
+
+import numpy as np
+import torch
+import torch.utils.checkpoint
+from torch import nn
+from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+
+from ...activations import ACT2FN
+from ...file_utils import (
+ ModelOutput,
+ add_code_sample_docstrings,
+ add_start_docstrings,
+ add_start_docstrings_to_model_forward,
+ replace_return_docstrings,
+)
+from ...modeling_outputs import BaseModelOutputWithCrossAttentions
+from ...modeling_utils import (
+ PreTrainedModel,
+ apply_chunking_to_forward,
+ find_pruneable_heads_and_indices,
+ prune_linear_layer,
+)
+from ...utils import logging
+from .configuration_perceiver import PerceiverConfig
+
+
+ModalitySizeType = Mapping[str, int]
+PreprocessorOutputType = Tuple[torch.Tensor, Optional[torch.Tensor], torch.Tensor]
+PreprocessorType = Callable[..., PreprocessorOutputType]
+PostprocessorType = Callable[..., Any]
+
+logger = logging.get_logger(__name__)
+
+_CHECKPOINT_FOR_DOC = "deepmind/language-perceiver"
+_CONFIG_FOR_DOC = "PerceiverConfig"
+_TOKENIZER_FOR_DOC = "PerceiverTokenizer"
+
+PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST = [
+ "deepmind/language-perceiver",
+ # See all Perceiver models at https://huggingface.co/models?filter=perceiver
+]
+
+
+@dataclass
+class PerceiverModelOutput(ModelOutput):
+ """
+ Base class for Perceiver base model's outputs, with potential hidden states, attentions and cross-attentions.
+
+ Args:
+ logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, num_labels)`):
+ Classification (or regression if config.num_labels==1) scores (before SoftMax).
+ last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):
+ Sequence of hidden-states at the output of the last layer of the model.
+ hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
+ of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of
+ each layer plus the initial embedding outputs.
+ attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the
+ weighted average in the self-attention heads.
+ cross_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the
+ attention softmax, used to compute the weighted average in the cross-attention heads.
+ """
+
+ logits: torch.FloatTensor = None
+ last_hidden_state: torch.FloatTensor = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
+ cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+class PerceiverDecoderOutput(ModelOutput):
+ """
+ Base class for Perceiver decoder outputs, with potential cross-attentions.
+
+ Args:
+ logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, num_labels)`):
+ Output of the basic decoder.
+ cross_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the
+ attention softmax, used to compute the weighted average in the cross-attention heads.
+ """
+
+ logits: torch.FloatTensor = None
+ cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+class PerceiverMaskedLMOutput(ModelOutput):
+ """
+ Base class for Perceiver's masked language model outputs.
+
+ Args:
+ loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided):
+ Masked language modeling (MLM) loss.
+ logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`):
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
+ hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
+ of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of
+ each layer plus the initial embedding outputs.
+ attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, num_latents,
+ num_latents)`. Attentions weights after the attention softmax, used to compute the weighted average in the
+ self-attention heads.
+ cross_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the
+ attention softmax, used to compute the weighted average in the cross-attention heads.
+ """
+
+ loss: Optional[torch.FloatTensor] = None
+ logits: torch.FloatTensor = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
+ cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+class PerceiverClassifierOutput(ModelOutput):
+ """
+ Base class for Perceiver's outputs of sequence/image classification models, optical flow and multimodal
+ autoencoding.
+
+ Args:
+ loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided):
+ Classification (or regression if config.num_labels==1) loss.
+ logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, config.num_labels)`):
+ Classification (or regression if config.num_labels==1) scores (before SoftMax).
+ hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
+ of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of
+ each layer plus the initial embedding outputs.
+ attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the
+ weighted average in the self-attention heads.
+ cross_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
+ Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
+ sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the
+ attention softmax, used to compute the weighted average in the cross-attention heads.
+ """
+
+ loss: Optional[torch.FloatTensor] = None
+ logits: torch.FloatTensor = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
+ cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
+
+
+class PerceiverEmbeddings(nn.Module):
+ """Construct the latent embeddings."""
+
+ def __init__(self, config):
+ super().__init__()
+ self.latents = nn.Parameter(torch.randn(config.num_latents, config.d_latents))
+
+ def forward(self, batch_size):
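+        # Broadcast the shared, learned latent array to (batch_size, num_latents, d_latents); `expand` returns a view
+        # and does not copy memory.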
+ return self.latents.expand(batch_size, -1, -1) # Thanks, Phil Wang
+
+
+class PerceiverSelfAttention(nn.Module):
+ """Multi-headed {cross, self}-attention. Can be used both in the encoder as well as in the decoder."""
+
+ def __init__(
+ self,
+ config,
+ is_cross_attention=False,
+ qk_channels=None,
+ v_channels=None,
+ num_heads=1,
+ q_dim=None,
+ kv_dim=None,
+ ):
+ super().__init__()
+ self.num_heads = num_heads
+ # Q and K must have the same number of channels.
+ # Default to preserving Q's input's shape.
+ if qk_channels is None:
+ qk_channels = q_dim
+ # V's num_channels determines the shape of the output of QKV-attention.
+ # Default to the same number of channels used in the key-query operation.
+ if v_channels is None:
+ v_channels = qk_channels
+ if qk_channels % num_heads != 0:
+ raise ValueError(f"qk_channels ({qk_channels}) must be divisible by num_heads ({num_heads}).")
+ if v_channels % num_heads != 0:
+ raise ValueError(f"v_channels ({v_channels}) must be divisible by num_heads ({num_heads}).")
+
+ self.qk_channels = qk_channels
+ self.v_channels = v_channels
+ self.qk_channels_per_head = self.qk_channels // num_heads
+ self.v_channels_per_head = self.v_channels // num_heads
+
+ # Layer normalization
+ self.layernorm1 = nn.LayerNorm(q_dim)
+ self.layernorm2 = nn.LayerNorm(kv_dim) if is_cross_attention else nn.Identity()
+
+ # Projection matrices
+ self.query = nn.Linear(q_dim, qk_channels)
+ self.key = nn.Linear(kv_dim, qk_channels)
+ self.value = nn.Linear(kv_dim, v_channels)
+
+ self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
+
+ def transpose_for_scores(self, x, channels_per_head):
+ new_x_shape = x.size()[:-1] + (self.num_heads, channels_per_head)
+ x = x.view(*new_x_shape)
+ return x.permute(0, 2, 1, 3)
+
+ def forward(
+ self,
+ hidden_states,
+ attention_mask=None,
+ head_mask=None,
+ inputs=None,
+ inputs_mask=None,
+ output_attentions=False,
+ ):
+ hidden_states = self.layernorm1(hidden_states)
+ inputs = self.layernorm2(inputs)
+
+ # Project queries, keys and values to a common feature dimension. If this is instantiated as a cross-attention module,
+        # the keys and values come from the inputs; the attention mask needs to be such that the inputs' non-relevant tokens are not attended to.
+ is_cross_attention = inputs is not None
+ queries = self.query(hidden_states)
+
+ if is_cross_attention:
+ keys = self.key(inputs)
+ values = self.value(inputs)
+ attention_mask = inputs_mask
+ else:
+ keys = self.key(hidden_states)
+ values = self.value(hidden_states)
+
+ # Reshape channels for multi-head attention.
+ # We reshape from (batch_size, time, channels) to (batch_size, num_heads, time, channels per head)
+ queries = self.transpose_for_scores(queries, self.qk_channels_per_head)
+ keys = self.transpose_for_scores(keys, self.qk_channels_per_head)
+ values = self.transpose_for_scores(values, self.v_channels_per_head)
+
+ # Take the dot product between the queries and keys to get the raw attention scores.
+ attention_scores = torch.matmul(queries, keys.transpose(-1, -2))
+
+ batch_size, num_heads, seq_len, q_head_dim = queries.shape
+ _, _, _, v_head_dim = values.shape
+ hiddens = self.num_heads * v_head_dim
+
+ attention_scores = attention_scores / math.sqrt(q_head_dim)
+
+ if attention_mask is not None:
+ # Apply the attention mask (precomputed for all layers in PerceiverModel forward() function)
+ attention_scores = attention_scores + attention_mask
+
+ # Normalize the attention scores to probabilities.
+ attention_probs = nn.Softmax(dim=-1)(attention_scores)
+
+ # This is actually dropping out entire tokens to attend to, which might
+ # seem a bit unusual, but is taken from the original Transformer paper.
+ attention_probs = self.dropout(attention_probs)
+
+ # Mask heads if we want to
+ if head_mask is not None:
+ attention_probs = attention_probs * head_mask
+
+ context_layer = torch.matmul(attention_probs, values)
+
+ context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
+ new_context_layer_shape = context_layer.size()[:-2] + (hiddens,)
+ context_layer = context_layer.view(*new_context_layer_shape)
+
+ outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
+
+ return outputs
+
+
+class PerceiverSelfOutput(nn.Module):
+ def __init__(self, config, input_channels, output_channels):
+ super().__init__()
+ self.dense = nn.Linear(input_channels, output_channels)
+
+ def forward(self, hidden_states):
+ hidden_states = self.dense(hidden_states)
+ return hidden_states
+
+
+class PerceiverAttention(nn.Module):
+ """Attention module, including a dense block."""
+
+ def __init__(
+ self,
+ config,
+ is_cross_attention=False,
+ qk_channels=None,
+ v_channels=None,
+ num_heads=1,
+ q_dim=None,
+ kv_dim=None,
+ use_query_residual=True,
+ ):
+ super().__init__()
+ # MultiHead attention
+ if is_cross_attention and qk_channels is None:
+ if config.cross_attention_shape_for_attention == "q":
+ qk_channels = q_dim
+ elif config.cross_attention_shape_for_attention == "kv":
+ qk_channels = kv_dim
+ else:
+ raise ValueError(
+ f"Unknown value {config.cross_attention_shape_for_attention} for "
+ "cross_attention_shape_for_attention."
+ )
+ else:
+ if qk_channels is None:
+ qk_channels = q_dim
+ if v_channels is None:
+ v_channels = qk_channels
+ self.self = PerceiverSelfAttention(
+ config,
+ is_cross_attention=is_cross_attention,
+ qk_channels=qk_channels,
+ v_channels=v_channels,
+ num_heads=num_heads,
+ q_dim=q_dim,
+ kv_dim=kv_dim,
+ )
+ # dense block
+        if is_cross_attention:
+            output_channels = q_dim
+        else:
+            output_channels = v_channels
+ self.output = PerceiverSelfOutput(config, input_channels=self.self.v_channels, output_channels=output_channels)
+ self.use_query_residual = use_query_residual
+ self.pruned_heads = set()
+
+ def prune_heads(self, heads):
+ if len(heads) == 0:
+ return
+ heads, index = find_pruneable_heads_and_indices(
+ heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
+ )
+
+ # Prune linear layers
+ self.self.query = prune_linear_layer(self.self.query, index)
+ self.self.key = prune_linear_layer(self.self.key, index)
+ self.self.value = prune_linear_layer(self.self.value, index)
+ self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
+
+ # Update hyper params and store pruned heads
+ self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
+ self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
+ self.pruned_heads = self.pruned_heads.union(heads)
+
+ def forward(
+ self,
+ hidden_states,
+ attention_mask=None,
+ head_mask=None,
+ inputs=None,
+ inputs_mask=None,
+ output_attentions=False,
+ ):
+ self_outputs = self.self(
+ hidden_states,
+ attention_mask,
+ head_mask,
+ inputs,
+ inputs_mask,
+ output_attentions,
+ )
+
+ # Output projection
+ attention_output = self.output(self_outputs[0])
+
+ # Optionally include a residual to the original queries.
+ # Consider omitting the residual if the semantics of query and output
+ # are different, e.g. if queries are positions and outputs are pixels.
+ if self.use_query_residual:
+ attention_output = attention_output + hidden_states
+
+ outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
+ return outputs
+
+
+class PerceiverMLP(nn.Module):
+ """A Transformer-style dense module to follow attention."""
+
+ def __init__(self, config, input_size, widening_factor):
+ super().__init__()
+ self.dense1 = nn.Linear(input_size, widening_factor * input_size)
+ if isinstance(config.hidden_act, str):
+ self.intermediate_act_fn = ACT2FN[config.hidden_act]
+ else:
+ self.intermediate_act_fn = config.hidden_act
+        self.dense2 = nn.Linear(widening_factor * input_size, input_size)
+
+ def forward(self, hidden_states):
+ hidden_states = self.dense1(hidden_states)
+ hidden_states = self.intermediate_act_fn(hidden_states)
+ hidden_states = self.dense2(hidden_states)
+ return hidden_states
+
+
+class PerceiverLayer(nn.Module):
+ def __init__(
+ self,
+ config,
+ is_cross_attention=False,
+ qk_channels=None,
+ v_channels=None,
+ num_heads=1,
+ q_dim=None,
+ kv_dim=None,
+ widening_factor=4,
+ use_query_residual=True,
+ ):
+ super().__init__()
+ self.chunk_size_feed_forward = config.chunk_size_feed_forward
+ self.seq_len_dim = 1
+ self.attention = PerceiverAttention(
+ config,
+ is_cross_attention=is_cross_attention,
+ qk_channels=qk_channels,
+ v_channels=v_channels,
+ num_heads=num_heads,
+ q_dim=q_dim,
+ kv_dim=kv_dim,
+ use_query_residual=use_query_residual,
+ )
+ self.layernorm = nn.LayerNorm(q_dim)
+ self.mlp = PerceiverMLP(config, input_size=q_dim, widening_factor=widening_factor)
+
+ def forward(
+ self,
+ hidden_states,
+ attention_mask=None,
+ head_mask=None,
+ inputs=None,
+ inputs_mask=None,
+ output_attentions=False,
+ ):
+ attention_outputs = self.attention(
+ hidden_states,
+ attention_mask,
+ head_mask,
+ inputs,
+ inputs_mask,
+ output_attentions,
+ )
+ attention_output = attention_outputs[0]
+
+ outputs = attention_outputs[1:] # add attentions if we output attention weights
+
+ layer_output = apply_chunking_to_forward(
+ self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
+ )
+
+ layer_output = layer_output + attention_output # residual connection
+
+ outputs = (layer_output,) + outputs
+
+ return outputs
+
+ def feed_forward_chunk(self, attention_output):
+ layer_output = self.layernorm(attention_output)
+ layer_output = self.mlp(layer_output)
+ return layer_output
+
+
+class PerceiverEncoder(nn.Module):
+ """The Perceiver Encoder: a scalable, fully attentional encoder."""
+
+ def __init__(self, config):
+ super().__init__()
+ self.config = config
+
+ # Check that we can use multihead-attention with these shapes.
+        if config.d_latents % config.num_self_attention_heads != 0:
+            raise ValueError(
+                f"d_latents ({config.d_latents}) must be divisible by"
+                f" num_self_attention_heads ({config.num_self_attention_heads})."
+            )
+        if config.d_latents % config.num_cross_attention_heads != 0:
+            raise ValueError(
+                f"d_latents ({config.d_latents}) must be divisible by"
+                f" num_cross_attention_heads ({config.num_cross_attention_heads})."
+            )
+
+ # Construct the cross attention layer.
+ self.cross_attention = PerceiverLayer(
+ config,
+ is_cross_attention=True,
+ qk_channels=config.qk_channels,
+ v_channels=config.v_channels,
+ num_heads=config.num_cross_attention_heads,
+ q_dim=config.d_latents,
+ kv_dim=config.d_model,
+ widening_factor=config.cross_attention_widening_factor,
+ use_query_residual=config.use_query_residual,
+ )
+
+ # Construct a single block of self-attention layers.
+ # We get deeper architectures by applying this block more than once.
+ self_attention_layers = []
+ for _ in range(config.num_self_attends_per_block):
+ layer = PerceiverLayer(
+ config,
+ is_cross_attention=False,
+ qk_channels=config.qk_channels,
+ v_channels=config.v_channels,
+ num_heads=config.num_self_attention_heads,
+ q_dim=config.d_latents,
+ kv_dim=config.d_latents,
+ widening_factor=config.self_attention_widening_factor,
+ )
+ self_attention_layers.append(layer)
+
+ self.self_attends = nn.ModuleList(self_attention_layers)
+
+ def forward(
+ self,
+ hidden_states,
+ attention_mask=None,
+ head_mask=None,
+ inputs=None,
+ inputs_mask=None,
+ output_attentions=False,
+ output_hidden_states=False,
+ return_dict=True,
+ ):
+ all_hidden_states = () if output_hidden_states else None
+ all_self_attentions = () if output_attentions else None
+ all_cross_attentions = () if output_attentions else None
+
+ # Apply the cross-attention between the latents (hidden_states) and inputs:
+ layer_outputs = self.cross_attention(
+ hidden_states,
+ attention_mask=attention_mask,
+ head_mask=None,
+ inputs=inputs,
+ inputs_mask=inputs_mask,
+ output_attentions=output_attentions,
+ )
+ hidden_states = layer_outputs[0]
+
+ if output_attentions:
+ all_cross_attentions = all_cross_attentions + (layer_outputs[1],)
+
+        # Apply the same block of self-attention layers `num_blocks` times (the weights are shared across
+        # repetitions):
+ for _ in range(self.config.num_blocks):
+ for i, layer_module in enumerate(self.self_attends):
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ layer_head_mask = head_mask[i] if head_mask is not None else None
+
+ layer_outputs = layer_module(
+ hidden_states,
+ attention_mask=attention_mask,
+ head_mask=layer_head_mask,
+ output_attentions=output_attentions,
+ )
+
+ hidden_states = layer_outputs[0]
+ if output_attentions:
+ all_self_attentions = all_self_attentions + (layer_outputs[1],)
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ if not return_dict:
+ return tuple(
+ v
+ for v in [hidden_states, all_hidden_states, all_self_attentions, all_cross_attentions]
+ if v is not None
+ )
+ return BaseModelOutputWithCrossAttentions(
+ last_hidden_state=hidden_states,
+ hidden_states=all_hidden_states,
+ attentions=all_self_attentions,
+ cross_attentions=all_cross_attentions,
+ )
+
+
+class PerceiverPreTrainedModel(PreTrainedModel):
+ """
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+ models.
+ """
+
+ config_class = PerceiverConfig
+ base_model_prefix = "perceiver"
+
+ def _init_weights(self, module):
+ """Initialize the weights"""
+ if isinstance(module, (nn.Linear, nn.Conv2d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ module.bias.data.zero_()
+ elif hasattr(module, "latents"):
+ module.latents.data.normal_(mean=0.0, std=self.config.initializer_range)
+ elif hasattr(module, "position_embeddings") and isinstance(module, PerceiverTrainablePositionEncoding):
+ module.position_embeddings.data.normal_(mean=0.0, std=self.config.initializer_range)
+ elif isinstance(module, nn.ParameterDict):
+ for modality in module.keys():
+ module[modality].data.normal_(mean=0.0, std=self.config.initializer_range)
+ elif isinstance(module, nn.Embedding):
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+ elif isinstance(module, nn.LayerNorm):
+ module.bias.data.zero_()
+ module.weight.data.fill_(1.0)
+
+
+PERCEIVER_START_DOCSTRING = r"""
+ This model is a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`_ sub-class. Use
+ it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
+ behavior.
+
+ Parameters:
+ config (:class:`~transformers.PerceiverConfig`): Model configuration class with all the parameters of the model.
+ Initializing with a config file does not load the weights associated with the model, only the
+ configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model
+ weights.
+"""
+
+PERCEIVER_MODEL_START_DOCSTRING = r"""
+ This model is a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`_ sub-class. Use
+ it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
+ behavior.
+
+ Parameters:
+ config (:class:`~transformers.PerceiverConfig`): Model configuration class with all the parameters of the model.
+ Initializing with a config file does not load the weights associated with the model, only the
+ configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model
+ weights.
+ decoder (`DecoderType`, `optional`):
+ Optional decoder to use to decode the latent representation of the encoder. Examples include
+ `transformers.models.perceiver.modeling_perceiver.PerceiverBasicDecoder`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder`.
+ input_preprocessor (`PreprocessorType`, `optional`):
+ Optional input preprocessor to use. Examples include
+ `transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor`.
+ output_postprocessor (`PostprocessorType`, `optional`):
+ Optional output postprocessor to use. Examples include
+ `transformers.models.perceiver.modeling_perceiver.PerceiverImagePostprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor`,
+ `transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor`.
+
+ Note that you can define your own decoders, preprocessors and/or postprocessors to fit your use-case.
+"""
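+
+# A minimal sketch of how these building blocks compose (mirroring what `PerceiverForSequenceClassification` below
+# does internally); the keyword arguments are the ones used elsewhere in this file:
+#
+#     config = PerceiverConfig()
+#     model = PerceiverModel(
+#         config,
+#         input_preprocessor=PerceiverTextPreprocessor(config),
+#         decoder=PerceiverClassificationDecoder(
+#             config,
+#             num_channels=config.d_latents,
+#             trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
+#             use_query_residual=True,
+#         ),
+#     )
+#     # `inputs` are token/byte ids of shape (batch_size, seq_length); the decoder maps the latents to logits of
+#     # shape (batch_size, config.num_labels).
+#     outputs = model(inputs=input_ids)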
+
+PERCEIVER_INPUTS_DOCSTRING = r"""
+ Args:
+ inputs (:obj:`torch.FloatTensor`):
+ Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
+ attention_mask (:obj:`torch.FloatTensor` of shape :obj:`{0}`, `optional`):
+ Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ `What are attention masks? <../glossary.html#attention-mask>`__
+ head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
+ Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``:
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+
+ output_attentions (:obj:`bool`, `optional`):
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
+ tensors for more detail.
+ output_hidden_states (:obj:`bool`, `optional`):
+ Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
+ more detail.
+ return_dict (:obj:`bool`, `optional`):
+ Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
+"""
+
+
+@add_start_docstrings(
+ """The Perceiver: a scalable, fully attentional architecture.""",
+ PERCEIVER_MODEL_START_DOCSTRING,
+)
+class PerceiverModel(PerceiverPreTrainedModel):
+ def __init__(
+ self,
+ config,
+ decoder=None,
+ input_preprocessor: PreprocessorType = None,
+ output_postprocessor: PostprocessorType = None,
+ ):
+ super().__init__(config)
+ self.config = config
+
+ self.input_preprocessor = input_preprocessor
+ self.output_postprocessor = output_postprocessor
+ self.embeddings = PerceiverEmbeddings(config)
+ self.encoder = PerceiverEncoder(config)
+ self.decoder = decoder
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings.latents
+
+ def set_input_embeddings(self, value):
+ self.embeddings.latents = value
+
+ def _prune_heads(self, heads_to_prune):
+ """
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
+ class PreTrainedModel
+ """
+ for layer, heads in heads_to_prune.items():
+ self.encoder.layer[layer].attention.prune_heads(heads)
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
+ @add_code_sample_docstrings(
+ processor_class=_TOKENIZER_FOR_DOC,
+ checkpoint=_CHECKPOINT_FOR_DOC,
+ output_type=PerceiverModelOutput,
+ config_class=_CONFIG_FOR_DOC,
+ )
+ def forward(
+ self,
+ inputs,
+ attention_mask=None,
+ subsampled_output_points=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ return_dict=None,
+ ):
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ if self.input_preprocessor is not None:
+ inputs, modality_sizes, inputs_without_pos = self.input_preprocessor(inputs)
+ else:
+ modality_sizes = None
+ inputs_without_pos = None
+
+ if inputs.size()[-1] != self.config.d_model:
+ raise ValueError(
+ f"Last dimension of the inputs: {inputs.size()[-1]} doesn't correspond to config.d_model: {self.config.d_model}. "
+ "Please update config.d_model appropriately."
+ )
+ else:
+ input_shape = inputs.size()
+
+ batch_size, seq_length, _ = input_shape
+ device = inputs.device
+
+ # If no attention mask is provided, make them all ones
+ if attention_mask is None:
+ attention_mask = torch.ones(((batch_size, seq_length)), device=device)
+ # Make the attention mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
+ extended_attention_mask = self.invert_attention_mask(attention_mask)
+
+ # Prepare head mask if needed
+ # 1.0 in head_mask indicate we keep the head
+ # attention_probs has shape bsz x n_heads x N x N
+ # input head_mask has shape [num_heads] or [num_blocks x num_heads]
+ # and head_mask is converted to shape [num_blocks x batch x num_heads x N x N]
+ head_mask = self.get_head_mask(head_mask, self.config.num_blocks * self.config.num_self_attends_per_block)
+
+ embedding_output = self.embeddings(batch_size=batch_size)
+
+ encoder_outputs = self.encoder(
+ embedding_output,
+ attention_mask=None,
+ head_mask=head_mask,
+ inputs=inputs,
+ inputs_mask=extended_attention_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ sequence_output = encoder_outputs[0]
+
+ logits = None
+ if self.decoder:
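+            # When the outputs are subsampled (as in multimodal autoencoding), keep track of how many output points
+            # each modality contributes so that the postprocessor can split the logits back per modality.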
+ if subsampled_output_points is not None:
+ output_modality_sizes = {
+ "audio": subsampled_output_points["audio"].shape[0],
+ "image": subsampled_output_points["image"].shape[0],
+ "label": 1,
+ }
+ else:
+ output_modality_sizes = None
+ decoder_query = self.decoder.decoder_query(
+ inputs, modality_sizes, inputs_without_pos, subsampled_points=subsampled_output_points
+ )
+ decoder_outputs = self.decoder(
+ decoder_query,
+ z=sequence_output,
+ query_mask=extended_attention_mask,
+ output_attentions=output_attentions,
+ )
+ logits = decoder_outputs.logits
+
+ # add cross-attentions of decoder
+ if output_attentions and decoder_outputs.cross_attentions is not None:
+ if return_dict:
+ encoder_outputs.cross_attentions = (
+ encoder_outputs.cross_attentions + decoder_outputs.cross_attentions
+ )
+ else:
+ encoder_outputs = encoder_outputs + decoder_outputs.cross_attentions
+
+ if self.output_postprocessor:
+ logits = self.output_postprocessor(logits, modality_sizes=output_modality_sizes)
+
+ if not return_dict:
+ if logits is not None:
+ return (logits, sequence_output) + encoder_outputs[1:]
+ else:
+ return (sequence_output,) + encoder_outputs[1:]
+
+ return PerceiverModelOutput(
+ logits=logits,
+ last_hidden_state=sequence_output,
+ hidden_states=encoder_outputs.hidden_states,
+ attentions=encoder_outputs.attentions,
+ cross_attentions=encoder_outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings("""Example use of Perceiver for masked language modeling. """, PERCEIVER_START_DOCSTRING)
+class PerceiverForMaskedLM(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ trainable_position_encoding_kwargs_decoder = dict(
+ num_channels=config.d_model, index_dims=config.max_position_embeddings
+ )
+
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverTextPreprocessor(config),
+ decoder=PerceiverBasicDecoder(
+ config,
+ output_num_channels=config.d_latents,
+ output_index_dims=config.max_position_embeddings, # we need to define the seq_len of the inputs beforehand
+ num_channels=config.d_model,
+ qk_channels=8 * 32,
+ v_channels=config.d_model,
+ num_heads=8,
+ use_query_residual=False,
+ final_project=False,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
+ ),
+ )
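+        # The embedding decoder turns the decoder output back into vocabulary logits by re-using the weights of the
+        # input embedding layer (passed as `embedding_layer` in `forward` below), i.e. input and output embeddings
+        # are tied.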
+ self.embedding_decoder = PerceiverEmbeddingDecoder(config)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @add_code_sample_docstrings(
+ processor_class=_TOKENIZER_FOR_DOC,
+ checkpoint=_CHECKPOINT_FOR_DOC,
+ output_type=PerceiverMaskedLMOutput,
+ config_class=_CONFIG_FOR_DOC,
+ )
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
+ Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
+ config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
+ (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
+ """
+
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+
+ logits = self.embedding_decoder(
+ outputs.logits if return_dict else outputs[0], embedding_layer=self.perceiver.input_preprocessor.embeddings
+ )
+
+ masked_lm_loss = None
+ if labels is not None:
+ loss_fct = CrossEntropyLoss() # -100 index = padding token
+ masked_lm_loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
+
+ return PerceiverMaskedLMOutput(
+ loss=masked_lm_loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings("""Example use of Perceiver for text classification. """, PERCEIVER_START_DOCSTRING)
+class PerceiverForSequenceClassification(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ trainable_position_encoding_kwargs_decoder = dict(num_channels=config.d_latents, index_dims=1)
+
+ self.num_labels = config.num_labels
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverTextPreprocessor(config),
+ decoder=PerceiverClassificationDecoder(
+ config,
+ num_channels=config.d_latents,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
+ use_query_residual=True,
+ ),
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @add_code_sample_docstrings(
+ processor_class=_TOKENIZER_FOR_DOC,
+ checkpoint=_CHECKPOINT_FOR_DOC,
+ output_type=PerceiverClassifierOutput,
+ config_class=_CONFIG_FOR_DOC,
+ )
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+ Labels for computing the classification/regression loss. Indices should be in :obj:`[0, ...,
+ config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
+ If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverTokenizer, PerceiverForSequenceClassification
+
+ >>> tokenizer = PerceiverTokenizer.from_pretrained('deepmind/language-perceiver')
+ >>> model = PerceiverForSequenceClassification.from_pretrained('deepmind/language-perceiver')
+
+ >>> text = "hello world"
+            >>> inputs = tokenizer(text, return_tensors="pt").input_ids
+ >>> outputs = model(inputs=inputs)
+ >>> logits = outputs.logits
+ """
+
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ if self.config.problem_type is None:
+ if self.num_labels == 1:
+ self.config.problem_type = "regression"
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+ self.config.problem_type = "single_label_classification"
+ else:
+ self.config.problem_type = "multi_label_classification"
+
+ if self.config.problem_type == "regression":
+ loss_fct = MSELoss()
+ if self.num_labels == 1:
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
+ else:
+ loss = loss_fct(logits, labels)
+ elif self.config.problem_type == "single_label_classification":
+ loss_fct = CrossEntropyLoss()
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+ elif self.config.problem_type == "multi_label_classification":
+ loss_fct = BCEWithLogitsLoss()
+ loss = loss_fct(logits, labels)
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings(
+ """
+Example use of Perceiver for image classification, for tasks such as ImageNet.
+
+This model uses learned position embeddings. In other words, this model is not given any privileged information about
+the structure of images. As shown in the paper, this model can achieve a top-1 accuracy of 72.7 on ImageNet.
+
+`PerceiverForImageClassificationLearned` uses
+`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "conv1x1") to
+preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to
+decode the latent representation of `~transformers.PerceiverModel` into classification logits.
+""",
+ PERCEIVER_START_DOCSTRING,
+)
+class PerceiverForImageClassificationLearned(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ trainable_position_encoding_kwargs_preprocessor = dict(num_channels=256, index_dims=config.image_size ** 2)
+ trainable_position_encoding_kwargs_decoder = dict(num_channels=config.d_latents, index_dims=1)
+
+ self.num_labels = config.num_labels
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverImagePreprocessor(
+ config,
+ prep_type="conv1x1",
+ spatial_downsample=1,
+ out_channels=256,
+ position_encoding_type="trainable",
+ concat_or_add_pos="concat",
+ project_pos_dim=256,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_preprocessor,
+ ),
+ decoder=PerceiverClassificationDecoder(
+ config,
+ num_channels=config.d_latents,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
+ use_query_residual=True,
+ ),
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @replace_return_docstrings(output_type=PerceiverClassifierOutput, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+ Labels for computing the image classification/regression loss. Indices should be in :obj:`[0, ...,
+ config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
+ If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned
+ >>> from PIL import Image
+ >>> import requests
+
+ >>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ >>> image = Image.open(requests.get(url, stream=True).raw)
+
+ >>> feature_extractor = PerceiverFeatureExtractor.from_pretrained('deepmind/vision-perceiver-learned')
+ >>> model = PerceiverForImageClassificationLearned.from_pretrained('deepmind/vision-perceiver-learned')
+
+ >>> inputs = feature_extractor(images=image, return_tensors="pt").pixel_values
+ >>> outputs = model(inputs=inputs)
+ >>> logits = outputs.logits
+ >>> # model predicts one of the 1000 ImageNet classes
+ >>> predicted_class_idx = logits.argmax(-1).item()
+ >>> print("Predicted class:", model.config.id2label[predicted_class_idx])
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ if self.config.problem_type is None:
+ if self.num_labels == 1:
+ self.config.problem_type = "regression"
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+ self.config.problem_type = "single_label_classification"
+ else:
+ self.config.problem_type = "multi_label_classification"
+
+ if self.config.problem_type == "regression":
+ loss_fct = MSELoss()
+ if self.num_labels == 1:
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
+ else:
+ loss = loss_fct(logits, labels)
+ elif self.config.problem_type == "single_label_classification":
+ loss_fct = CrossEntropyLoss()
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+ elif self.config.problem_type == "multi_label_classification":
+ loss_fct = BCEWithLogitsLoss()
+ loss = loss_fct(logits, labels)
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings(
+ """
+Example use of Perceiver for image classification, for tasks such as ImageNet.
+
+This model uses fixed 2D Fourier position embeddings. As shown in the paper, this model can achieve a top-1 accuracy of
+79.0 on ImageNet, and 84.5 when pre-trained on a large-scale dataset (i.e. JFT).
+
+`PerceiverForImageClassificationFourier` uses
+`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "pixels") to
+preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to
+decode the latent representation of `~transformers.PerceiverModel` into classification logits.
+""",
+ PERCEIVER_START_DOCSTRING,
+)
+class PerceiverForImageClassificationFourier(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ fourier_position_encoding_kwargs_preprocessor = dict(
+ concat_pos=True, max_resolution=(224, 224), num_bands=64, sine_only=False
+ )
+ trainable_position_encoding_kwargs_decoder = dict(num_channels=config.d_latents, index_dims=1)
+
+ self.num_labels = config.num_labels
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverImagePreprocessor(
+ config,
+ prep_type="pixels",
+ spatial_downsample=1,
+ fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_preprocessor,
+ ),
+ decoder=PerceiverClassificationDecoder(
+ config,
+ num_channels=config.d_latents,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
+ use_query_residual=True,
+ ),
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @replace_return_docstrings(output_type=PerceiverClassifierOutput, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+ Labels for computing the image classification/regression loss. Indices should be in :obj:`[0, ...,
+ config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
+ If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationFourier
+ >>> from PIL import Image
+ >>> import requests
+
+ >>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ >>> image = Image.open(requests.get(url, stream=True).raw)
+
+ >>> feature_extractor = PerceiverFeatureExtractor.from_pretrained('deepmind/vision-perceiver-fourier')
+ >>> model = PerceiverForImageClassificationFourier.from_pretrained('deepmind/vision-perceiver-fourier')
+
+ >>> inputs = feature_extractor(images=image, return_tensors="pt").pixel_values
+ >>> outputs = model(inputs=inputs)
+ >>> logits = outputs.logits
+ >>> # model predicts one of the 1000 ImageNet classes
+ >>> predicted_class_idx = logits.argmax(-1).item()
+ >>> print("Predicted class:", model.config.id2label[predicted_class_idx])
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ if self.config.problem_type is None:
+ if self.num_labels == 1:
+ self.config.problem_type = "regression"
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+ self.config.problem_type = "single_label_classification"
+ else:
+ self.config.problem_type = "multi_label_classification"
+
+ if self.config.problem_type == "regression":
+ loss_fct = MSELoss()
+ if self.num_labels == 1:
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
+ else:
+ loss = loss_fct(logits, labels)
+ elif self.config.problem_type == "single_label_classification":
+ loss_fct = CrossEntropyLoss()
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+ elif self.config.problem_type == "multi_label_classification":
+ loss_fct = BCEWithLogitsLoss()
+ loss = loss_fct(logits, labels)
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings(
+ """
+Example use of Perceiver for image classification, for tasks such as ImageNet.
+
+This model uses a 2D conv+maxpool preprocessing network. As shown in the paper, this model can achieve a top-1 accuracy
+of 82.1 on ImageNet.
+
+`PerceiverForImageClassificationConvProcessing` uses
+`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "conv") to preprocess
+the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to decode the
+latent representation of `~transformers.PerceiverModel` into classification logits.
+""",
+ PERCEIVER_START_DOCSTRING,
+)
+class PerceiverForImageClassificationConvProcessing(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ fourier_position_encoding_kwargs_preprocessor = dict(
+ concat_pos=True, max_resolution=(56, 56), num_bands=64, sine_only=False
+ )
+ trainable_position_encoding_kwargs_decoder = dict(num_channels=config.d_latents, index_dims=1)
+
+ self.num_labels = config.num_labels
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverImagePreprocessor(
+ config,
+ prep_type="conv",
+ spatial_downsample=1,
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_preprocessor,
+ ),
+ decoder=PerceiverClassificationDecoder(
+ config,
+ num_channels=config.d_latents,
+ trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
+ use_query_residual=True,
+ ),
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @replace_return_docstrings(output_type=PerceiverClassifierOutput, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+ Labels for computing the image classification/regression loss. Indices should be in :obj:`[0, ...,
+ config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
+ If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationConvProcessing
+ >>> from PIL import Image
+ >>> import requests
+
+ >>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ >>> image = Image.open(requests.get(url, stream=True).raw)
+
+ >>> feature_extractor = PerceiverFeatureExtractor.from_pretrained('deepmind/vision-perceiver-conv')
+ >>> model = PerceiverForImageClassificationConvProcessing.from_pretrained('deepmind/vision-perceiver-conv')
+
+ >>> inputs = feature_extractor(images=image, return_tensors="pt").pixel_values
+ >>> outputs = model(inputs=inputs)
+ >>> logits = outputs.logits
+ >>> # model predicts one of the 1000 ImageNet classes
+ >>> predicted_class_idx = logits.argmax(-1).item()
+ >>> print("Predicted class:", model.config.id2label[predicted_class_idx])
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ if self.config.problem_type is None:
+ if self.num_labels == 1:
+ self.config.problem_type = "regression"
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+ self.config.problem_type = "single_label_classification"
+ else:
+ self.config.problem_type = "multi_label_classification"
+
+ if self.config.problem_type == "regression":
+ loss_fct = MSELoss()
+ if self.num_labels == 1:
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
+ else:
+ loss = loss_fct(logits, labels)
+ elif self.config.problem_type == "single_label_classification":
+ loss_fct = CrossEntropyLoss()
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+ elif self.config.problem_type == "multi_label_classification":
+ loss_fct = BCEWithLogitsLoss()
+ loss = loss_fct(logits, labels)
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings(
+ """
+Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. `PerceiverForOpticalFlow` uses
+`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "patches") to
+preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder` to
+decode the latent representation of `~transformers.PerceiverModel`.
+
+As input, one concatenates 2 subsequent frames along the channel dimension and extracts a 3 x 3 patch around each pixel
+(leading to 3 x 3 x 3 x 2 = 54 values for each pixel). Fixed Fourier position encodings are used to encode the position
+of each pixel in the patch. Next, one applies the Perceiver encoder. To decode, one queries the latent representation
+using the same encoding used for the input.
+""",
+ PERCEIVER_START_DOCSTRING,
+)
+class PerceiverForOpticalFlow(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ fourier_position_encoding_kwargs_preprocessor = dict(
+ num_bands=64,
+ max_resolution=config.train_size,
+ sine_only=False,
+ concat_pos=True,
+ )
+ fourier_position_encoding_kwargs_decoder = dict(
+ concat_pos=True, max_resolution=config.train_size, num_bands=64, sine_only=False
+ )
+
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=PerceiverImagePreprocessor(
+ config,
+ prep_type="patches",
+ spatial_downsample=1,
+ conv_after_patching=True,
+ conv_after_patching_in_channels=54,
+ temporal_downsample=2,
+ position_encoding_type="fourier",
+ # position_encoding_kwargs
+ fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_preprocessor,
+ ),
+ decoder=PerceiverOpticalFlowDecoder(
+ config,
+ num_channels=config.d_model,
+ output_image_shape=config.train_size,
+ rescale_factor=100.0,
+ # decoder kwargs
+ use_query_residual=False,
+ output_num_channels=2,
+ # We query the decoder using the first frame features
+ # rather than a standard decoder position encoding.
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_decoder,
+ ),
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @replace_return_docstrings(output_type=PerceiverClassifierOutput, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+            Labels for computing the optical flow loss. Note that optical flow training is not yet supported.
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverForOpticalFlow
+ >>> import torch
+
+ >>> model = PerceiverForOpticalFlow.from_pretrained('deepmind/optical-flow-perceiver')
+
+ >>> # in the Perceiver IO paper, the authors extract a 3 x 3 patch around each pixel,
+ >>> # leading to 3 x 3 x 3 = 27 values for each pixel (as each pixel also has 3 color channels)
+ >>> # patches have shape (batch_size, num_frames, num_channels, height, width)
+ >>> # the authors train on resolutions of 368 x 496
+ >>> patches = torch.randn(1, 2, 27, 368, 496)
+ >>> outputs = model(inputs=patches)
+ >>> logits = outputs.logits
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ raise NotImplementedError("Optical flow training is not yet supported")
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+@add_start_docstrings(
+ """
+Example use of Perceiver for multimodal (video) autoencoding, for tasks such as Kinetics-700.
+
+`PerceiverForMultimodalAutoencoding` uses
+`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor` to preprocess the 3 modalities:
+images, audio and class labels. This preprocessor uses modality-specific preprocessors to preprocess every modality
+separately, after which they are concatenated. Trainable position embeddings are used to pad each modality to the same
+number of channels to make concatenation along the time dimension possible. Next, one applies the Perceiver encoder.
+
+`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` is used to decode the latent
+representation of `~transformers.PerceiverModel`. This decoder uses each modality-specific decoder to construct
+queries. The decoder queries are created based on the inputs after preprocessing. However, autoencoding an entire video
+in a single forward pass is computationally infeasible, hence one only uses parts of the decoder queries to do
+cross-attention with the latent representation. This is determined by the subsampled indices for each modality, which
+can be provided as additional input to the forward pass of `PerceiverForMultimodalAutoencoding`.
+
+`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` also pads the decoder queries of the
+different modalities to the same number of channels, in order to concatenate them along the time dimension. Next,
+cross-attention is performed with the latent representation of `PerceiverModel`.
+
+Finally, `transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor` is used to turn this
+tensor into an actual video. It first splits up the output into the different modalities, and then applies the
+respective postprocessor for each modality.
+
+Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the
+"label" modality), this auto-encoding model becomes a Kinetics 700 video classifier.
+""",
+ PERCEIVER_START_DOCSTRING,
+)
+class PerceiverForMultimodalAutoencoding(PerceiverPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ n_audio_samples = config.num_frames * config.audio_samples_per_frame
+
+ input_preprocessor = PerceiverMultimodalPreprocessor(
+ min_padding_size=4,
+ modalities={
+ "audio": PerceiverAudioPreprocessor(
+ config,
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=dict(
+ num_bands=192,
+ max_resolution=(n_audio_samples,),
+ sine_only=False,
+ concat_pos=True,
+ ),
+ prep_type="patches",
+ samples_per_patch=config.samples_per_patch,
+ ),
+ "image": PerceiverImagePreprocessor(
+ config,
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=dict(
+ num_bands=32,
+ max_resolution=(config.num_frames, config.image_size, config.image_size),
+ sine_only=False,
+ concat_pos=True,
+ ),
+ prep_type="patches",
+ spatial_downsample=4,
+ temporal_downsample=1,
+ ),
+ "label": PerceiverOneHotPreprocessor(config),
+ },
+ mask_probs={"image": 0.0, "audio": 0.0, "label": 1.0},
+ )
+
+ image_decoder = PerceiverBasicVideoAutoencodingDecoder(
+ config,
+ # Autoencoding, don't pass inputs to the queries.
+ concat_preprocessed_input=False,
+ output_shape=config.output_shape,
+ output_num_channels=512,
+ use_query_residual=False,
+ position_encoding_only=True,
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=dict(
+ num_bands=32,
+ max_resolution=(config.num_frames, config.image_size, config.image_size),
+ sine_only=False,
+ concat_pos=True,
+ ),
+ )
+
+ decoder = PerceiverMultimodalDecoder(
+ config,
+ # Autoencoding, don't pass inputs to the queries.
+ concat_preprocessed_input=False,
+ # Modality specific decoders are used ONLY to generate queries.
+            # All modalities are decoded together using a unified decoder.
+ modalities={
+ "audio": PerceiverBasicDecoder(
+ config,
+ # Autoencoding, don't pass inputs to the queries.
+ concat_preprocessed_input=False,
+ output_index_dims=(n_audio_samples // config.samples_per_patch,),
+ output_num_channels=512,
+ use_query_residual=False,
+ position_encoding_only=True,
+ position_encoding_type="fourier",
+ fourier_position_encoding_kwargs=dict(
+ num_bands=192,
+ max_resolution=(n_audio_samples,),
+ sine_only=False,
+ concat_pos=True,
+ ),
+ ),
+ "image": image_decoder,
+ "label": PerceiverClassificationDecoder(
+ config,
+ # Autoencoding, don't pass inputs to the queries.
+ concat_preprocessed_input=False,
+ use_query_residual=False,
+ position_encoding_only=True,
+ position_encoding_type="trainable",
+ trainable_position_encoding_kwargs=dict(
+ num_channels=1024,
+ index_dims=1,
+ ),
+ ),
+ },
+ num_outputs=None,
+ output_num_channels=512,
+ use_query_residual=False,
+ )
+
+ output_postprocessor = PerceiverMultimodalPostprocessor(
+ modalities={
+ "audio": PerceiverAudioPostprocessor(config, in_channels=512),
+ "image": PerceiverProjectionPostprocessor(in_channels=512, out_channels=3),
+ "label": PerceiverClassificationPostprocessor(config, in_channels=512),
+ }
+ )
+
+ self.perceiver = PerceiverModel(
+ config,
+ input_preprocessor=input_preprocessor,
+ decoder=decoder,
+ output_postprocessor=output_postprocessor,
+ )
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ @add_start_docstrings_to_model_forward(PERCEIVER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+ @replace_return_docstrings(output_type=PerceiverClassifierOutput, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ inputs=None,
+ attention_mask=None,
+ subsampled_output_points=None,
+ head_mask=None,
+ output_attentions=None,
+ output_hidden_states=None,
+ labels=None,
+ return_dict=None,
+ ):
+ r"""
+ labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
+ Labels for computing the image classification/regression loss. Indices should be in :obj:`[0, ...,
+ config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
+ If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+
+ Returns:
+
+ Examples::
+
+ >>> from transformers import PerceiverForMultimodalAutoencoding
+ >>> import torch
+
+ >>> images = torch.randn((1, 16, 3, 224, 224))
+ >>> audio = torch.randn((1, 30720, 1))
+ >>> inputs = dict(image=images, audio=audio, label=torch.zeros((images.shape[0], 700)))
+
+ >>> model = PerceiverForMultimodalAutoencoding.from_pretrained('deepmind/multimodal-perceiver')
+
+ >>> outputs = model(inputs=inputs)
+ >>> logits = outputs.logits
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.perceiver(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ subsampled_output_points=subsampled_output_points,
+ head_mask=head_mask,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ logits = outputs.logits if return_dict else outputs[0]
+
+ loss = None
+ if labels is not None:
+ raise NotImplementedError("Multimodal autoencoding training is not yet supported")
+
+ if not return_dict:
+ output = (logits,) + outputs[2:]
+ return ((loss,) + output) if loss is not None else output
+
+ return PerceiverClassifierOutput(
+ loss=loss,
+ logits=logits,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ cross_attentions=outputs.cross_attentions,
+ )
+
+
+# Below: position encodings
+
+
+def build_position_encoding(
+ position_encoding_type,
+ out_channels=None,
+ project_pos_dim=-1,
+ trainable_position_encoding_kwargs=None,
+ fourier_position_encoding_kwargs=None,
+):
+ """
+ Builds the position encoding.
+
+ Args:
+
+ - out_channels: refers to the number of channels of the position encodings.
+ - project_pos_dim: if specified, will project the position encodings to this dimension.
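+
+    Example (a minimal sketch; the ``num_bands`` and ``max_resolution`` values below are arbitrary)::
+
+        >>> pos_enc, _ = build_position_encoding(
+        ...     "fourier", fourier_position_encoding_kwargs=dict(num_bands=4, max_resolution=(4, 4))
+        ... )
+        >>> pos_enc.output_size()  # 2 * 2 * 4 sin/cos features + 2 concatenated raw coordinates
+        18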
+
+ """
+
+ if position_encoding_type == "trainable":
+ if not trainable_position_encoding_kwargs:
+ raise ValueError("Make sure to pass trainable_position_encoding_kwargs")
+ output_pos_enc = PerceiverTrainablePositionEncoding(**trainable_position_encoding_kwargs)
+ elif position_encoding_type == "fourier":
+ # We don't use the index_dims argument, as this is only known during the forward pass
+ if not fourier_position_encoding_kwargs:
+ raise ValueError("Make sure to pass fourier_position_encoding_kwargs")
+ output_pos_enc = PerceiverFourierPositionEncoding(**fourier_position_encoding_kwargs)
+ else:
+ raise ValueError(f"Unknown position encoding type: {position_encoding_type}.")
+
+ # Optionally, project the position encoding to a target dimension:
+ positions_projection = nn.Linear(out_channels, project_pos_dim) if project_pos_dim > 0 else nn.Identity()
+
+ return output_pos_enc, positions_projection
+
+
+# Below: Perceiver decoders
+
+
+class PerceiverAbstractDecoder(nn.Module, metaclass=abc.ABCMeta):
+ """Perceiver abstract decoder."""
+
+ @abc.abstractmethod
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ raise NotImplementedError
+
+ @property
+ @abc.abstractmethod
+ def num_query_channels(self):
+ raise NotImplementedError
+
+ @abc.abstractmethod
+ def forward(self, query, z, query_mask=None):
+ raise NotImplementedError
+
+
+class PerceiverProjectionDecoder(PerceiverAbstractDecoder):
+ """Baseline projection decoder (no cross-attention)."""
+
+ def __init__(self, config):
+ super().__init__()
+ self.classifier = nn.Linear(config.d_latents, config.num_labels)
+
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ return None
+
+ def forward(self, query, z, query_mask=None):
+ # (batch_size, num_latents, d_latents) -> (batch_size, d_latents)
+ z = torch.mean(z, dim=1)
+ # (batch_size, d_latents) -> (batch_size, config.num_labels)
+ logits = self.classifier(z)
+ return logits
+
+
+class PerceiverBasicDecoder(PerceiverAbstractDecoder):
+ """
+ Cross-attention-based decoder.
+
+    Here, `output_num_channels` refers to the number of channels the decoder output is projected to by the final
+    linear layer (when `final_project` is `True`), while `num_channels` refers to the number of channels of the
+    decoder queries.
+
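+    Example (a minimal sketch; the latent size, query size and shapes below are illustrative, not from a pretrained
+    checkpoint)::
+
+        >>> config = PerceiverConfig(d_latents=64)
+        >>> decoder = PerceiverBasicDecoder(
+        ...     config,
+        ...     output_num_channels=2,
+        ...     num_channels=32,
+        ...     trainable_position_encoding_kwargs=dict(num_channels=32, index_dims=10),
+        ... )
+        >>> query = decoder.decoder_query(inputs=torch.randn(1, 10, 7))
+        >>> query.shape  # one query per output point, each with `num_channels` channels
+        torch.Size([1, 10, 32])
+        >>> decoder(query, z=torch.randn(1, 20, 64)).logits.shape  # z: (batch_size, num_latents, d_latents)
+        torch.Size([1, 10, 2])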
+ """
+
+ def __init__(
+ self,
+ config,
+ output_num_channels,
+ position_encoding_type="trainable",
+ # The following 2 arguments are ignored if position_encoding_type == 'none':
+ output_index_dims=None,
+ num_channels=128,
+ subsampled_index_dims=None,
+ qk_channels=None,
+ v_channels=None,
+ num_heads=1,
+ widening_factor=1,
+ use_query_residual=False,
+ concat_preprocessed_input=False,
+ final_project=True,
+ position_encoding_only=False,
+ **position_encoding_kwargs,
+ ):
+ super().__init__()
+
+ self.output_num_channels = output_num_channels
+ # If `none`, the decoder will not construct any position encodings.
+        # You should construct your own when querying the decoder.
+ self.output_position_encodings = None
+ self.position_encoding_type = position_encoding_type
+ self.position_encoding_kwargs = position_encoding_kwargs
+ if position_encoding_type != "none":
+ self.output_position_encodings, self.positions_projection = build_position_encoding(
+ position_encoding_type=position_encoding_type, **position_encoding_kwargs
+ )
+
+ self.output_index_dims = output_index_dims
+ self.num_channels = num_channels
+ if subsampled_index_dims is None:
+ subsampled_index_dims = output_index_dims
+ self.subsampled_index_dims = subsampled_index_dims
+ self.concat_preprocessed_input = concat_preprocessed_input
+ self.final_project = final_project
+ self.position_encoding_only = position_encoding_only
+
+ # for multimodal autoencoding, we don't need the decoder cross-attention and final layer
+ # so then we will set position_encoding_only to True
+ if not self.position_encoding_only:
+ self.decoding_cross_attention = PerceiverLayer(
+ config,
+ is_cross_attention=True,
+ qk_channels=qk_channels,
+ v_channels=v_channels,
+ num_heads=num_heads,
+ q_dim=num_channels,
+ kv_dim=config.d_latents,
+ widening_factor=widening_factor,
+ use_query_residual=use_query_residual,
+ )
+ self.final_layer = nn.Linear(num_channels, output_num_channels) if final_project else nn.Identity()
+
+ @property
+ def num_query_channels(self) -> int:
+ if self.position_encoding_type == "none": # Queries come from elsewhere
+ raise ValueError(
+ "You cannot calculate number of decoder query channels when position_encoding_type is set to none"
+ )
+ if self.position_encoding_only:
+ if "project_pos_dim" in self.position_encoding_kwargs:
+ return self.position_encoding_kwargs["project_pos_dim"]
+ return self.output_position_encodings.output_size()
+ if self.final_project:
+ return self.output_num_channels
+ return self.num_channels
+
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ if self.position_encoding_type == "none": # Queries come from elsewhere
+ raise ValueError("You cannot construct decoder queries when position_encoding_type is set to none")
+ if subsampled_points is not None:
+            # subsampled_points are indices into the flattened array of output points.
+            # However, the outputs aren't flattened here, so we use unravel_index
+            # to convert them into indices for the unflattened array.
+ # unravel_index returns a tuple (x_idx, y_idx, ...)
+ # stack to get the [n, d] tensor of coordinates
+ indices = list(torch.from_numpy(x) for x in np.unravel_index(subsampled_points, self.output_index_dims))
+ pos = torch.stack(indices, dim=1)
+ batch_size = inputs.shape[0]
+ # Map these coordinates to [-1, 1]
+ pos = -1 + 2 * pos / torch.tensor(self.output_index_dims)[None, :]
+ pos = torch.broadcast_to(pos[None], [batch_size, pos.shape[0], pos.shape[1]])
+ # Construct the position encoding.
+ if self.position_encoding_type == "trainable":
+ pos_emb = self.output_position_encodings(batch_size)
+ elif self.position_encoding_type == "fourier":
+ pos_emb = self.output_position_encodings(
+ self.output_index_dims, batch_size=batch_size, device=inputs.device, pos=pos
+ )
+
+ # Optionally project them to a target dimension.
+ pos_emb = self.positions_projection(pos_emb)
+ pos_emb = torch.reshape(pos_emb, [pos_emb.shape[0], -1, pos_emb.shape[-1]])
+ else:
+ batch_size = inputs.shape[0]
+ index_dims = inputs.shape[2:]
+
+ # Construct the position encoding.
+ if self.position_encoding_type == "trainable":
+ pos_emb = self.output_position_encodings(batch_size)
+ elif self.position_encoding_type == "fourier":
+ pos_emb = self.output_position_encodings(index_dims, batch_size, device=inputs.device)
+
+ # Optionally project them to a target dimension.
+ pos_emb = self.positions_projection(pos_emb)
+
+ if self.concat_preprocessed_input:
+ if inputs_without_pos is None:
+ raise ValueError("Value is required for inputs_without_pos if concat_preprocessed_input is True")
+            pos_emb = torch.cat([inputs_without_pos, pos_emb], dim=-1)
+
+ return pos_emb
+
+ def forward(self, query, z, query_mask=None, output_attentions=False):
+ # Cross-attention decoding.
+ # key, value: B x N x K; query: B x M x K
+ # Attention maps -> B x N x M
+ # Output -> B x M x K
+ cross_attentions = () if output_attentions else None
+
+ layer_outputs = self.decoding_cross_attention(
+ query,
+ attention_mask=query_mask,
+ head_mask=None,
+ inputs=z,
+ inputs_mask=None,
+ output_attentions=output_attentions,
+ )
+ output = layer_outputs[0]
+
+ if output_attentions:
+ cross_attentions = cross_attentions + (layer_outputs[1],)
+
+ logits = self.final_layer(output)
+
+ return PerceiverDecoderOutput(logits=logits, cross_attentions=cross_attentions)
+
+
+class PerceiverClassificationDecoder(PerceiverAbstractDecoder):
+ """
+    Cross-attention based classification decoder. Light-weight wrapper of `PerceiverBasicDecoder` for logit output.
+    Will turn the output of the Perceiver encoder, which is of shape (batch_size, num_latents, d_latents), into a
+    tensor of shape (batch_size, num_labels). The queries are of shape (batch_size, 1, num_labels).
+ """
+
+ def __init__(self, config, **decoder_kwargs):
+ super().__init__()
+
+ self.num_labels = config.num_labels
+ self.decoder = PerceiverBasicDecoder(
+ config,
+ output_num_channels=self.num_labels,
+ output_index_dims=1, # Predict a single logit array.
+ **decoder_kwargs,
+ )
+
+ @property
+ def num_query_channels(self) -> int:
+ return self.decoder.num_query_channels
+
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ return self.decoder.decoder_query(
+ inputs, modality_sizes, inputs_without_pos, subsampled_points=subsampled_points
+ )
+
+ def forward(self, query, z, query_mask=None, output_attentions=False):
+ decoder_outputs = self.decoder(query, z, output_attentions=output_attentions)
+
+ # B x 1 x num_classes -> B x num_classes
+ logits = decoder_outputs.logits[:, 0, :]
+
+ return PerceiverDecoderOutput(logits=logits, cross_attentions=decoder_outputs.cross_attentions)
+
+
+class PerceiverOpticalFlowDecoder(PerceiverAbstractDecoder):
+ """Cross-attention based optical flow decoder."""
+
+ def __init__(self, config, output_image_shape, output_num_channels=2, rescale_factor=100.0, **decoder_kwargs):
+ super().__init__()
+
+ self.output_image_shape = output_image_shape
+ self.output_num_channels = output_num_channels
+ self.rescale_factor = rescale_factor
+ self.decoder = PerceiverBasicDecoder(config, output_num_channels=output_num_channels, **decoder_kwargs)
+
+ @property
+ def num_query_channels(self) -> int:
+ return self.decoder.num_query_channels
+
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ if subsampled_points is not None:
+ raise ValueError("FlowDecoder doesn't support subsampling yet.")
+ return inputs
+
+ def forward(self, query, z, query_mask=None, output_attentions=False):
+ decoder_outputs = self.decoder(query, z, output_attentions=output_attentions)
+ preds = decoder_outputs.logits
+ # Output flow and rescale.
+ preds /= self.rescale_factor
+ preds = preds.reshape([preds.shape[0]] + list(self.output_image_shape) + [preds.shape[-1]])
+ return PerceiverDecoderOutput(logits=preds, cross_attentions=decoder_outputs.cross_attentions)
+
+
+class PerceiverBasicVideoAutoencodingDecoder(PerceiverAbstractDecoder):
+ """
+    Cross-attention based video-autoencoding decoder. Light-weight wrapper of `PerceiverBasicDecoder` with video
+    reshaping logic.
+ """
+
+ def __init__(self, config, output_shape, position_encoding_type, **decoder_kwargs):
+ super().__init__()
+ if len(output_shape) != 4: # B, T, H, W
+ raise ValueError(f"Expected rank 4 output_shape, got {output_shape}.")
+ # Build the decoder components:
+ self.output_shape = output_shape
+ self.output_num_channels = decoder_kwargs["output_num_channels"]
+
+ self.decoder = PerceiverBasicDecoder(
+ config,
+ output_index_dims=self.output_shape[1:4], # T*H*W
+ position_encoding_type=position_encoding_type,
+ **decoder_kwargs,
+ )
+
+ @property
+ def num_query_channels(self) -> int:
+ return self.decoder.num_query_channels
+
+ def decoder_query(self, inputs, modality_sizes=None, inputs_without_pos=None, subsampled_points=None):
+ return self.decoder.decoder_query(
+ inputs,
+ modality_sizes=modality_sizes,
+ inputs_without_pos=inputs_without_pos,
+ subsampled_points=subsampled_points,
+ )
+
+ def forward(self, query, z, query_mask=None):
+ decoder_outputs = self.decoder(query, z)
+ logits = decoder_outputs.logits
+
+ logits = torch.reshape(logits, self.output_shape + [logits.shape[-1]])
+ return PerceiverDecoderOutput(logits=logits, cross_attentions=decoder_outputs.cross_attentions)
+
+
+def restructure(modality_sizes: ModalitySizeType, inputs: torch.Tensor) -> Mapping[str, torch.Tensor]:
+ """
+ Partitions a [B, N, C] tensor into tensors for each modality.
+
+ Args:
+ modality_sizes
+ dict specifying the size of the modality
+ inputs:
+ input tensor
+
+ Returns:
+ dict mapping name of modality to its associated tensor.
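+
+    Example (a sketch with arbitrary modality sizes)::
+
+        >>> inputs = torch.randn(2, 5, 8)  # batch of 2, 5 positions, 8 channels
+        >>> outputs = restructure(modality_sizes={"audio": 3, "image": 2}, inputs=inputs)
+        >>> outputs["audio"].shape, outputs["image"].shape
+        (torch.Size([2, 3, 8]), torch.Size([2, 2, 8]))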
+ """
+ outputs = {}
+ index = 0
+ # Apply a predictable ordering to the modalities
+ for modality in sorted(modality_sizes.keys()):
+ size = modality_sizes[modality]
+ inp = inputs[:, index : index + size]
+ index += size
+ outputs[modality] = inp
+ return outputs
+
+
+class PerceiverMultimodalDecoder(PerceiverAbstractDecoder):
+ """
+ Multimodal decoding by composing uni-modal decoders. The modalities argument of the constructor is a dictionary
+ mapping modality name to the decoder of that modality. That decoder will be used to construct queries for that
+ modality. However, there is a shared cross attention across all modalities, using the concatenated per-modality
+ query vectors.
+ """
+
+ def __init__(
+ self,
+ config,
+ modalities,
+ num_outputs,
+ output_num_channels,
+ min_padding_size=2,
+ subsampled_index_dims=None,
+ **decoder_kwargs
+ ):
+ super().__init__()
+ self.modalities = nn.ModuleDict(modalities)
+ self.subsampled_index_dims = subsampled_index_dims
+ self.min_padding_size = min_padding_size
+ self.output_num_channels = output_num_channels
+ self.num_outputs = num_outputs
+ self.decoder = PerceiverBasicDecoder(
+ config,
+ output_index_dims=(num_outputs,),
+ output_num_channels=output_num_channels,
+ position_encoding_type="none",
+ num_channels=self.num_query_channels,
+ **decoder_kwargs,
+ )
+ self.padding = nn.ParameterDict(
+ {
+ modality: nn.Parameter(torch.randn(1, self.num_query_channels - decoder.num_query_channels))
+ for modality, decoder in modalities.items()
+ }
+ )
+
+ @property
+ def num_query_channels(self) -> int:
+ max_channel_size = max(decoder.num_query_channels for _, decoder in self.modalities.items())
+ common_channel_size = max_channel_size + self.min_padding_size
+ return common_channel_size
+
+ def decoder_query(self, inputs, modality_sizes, inputs_without_pos=None, subsampled_points=None):
+ # Partition the flat inputs among the different modalities
+ inputs = restructure(modality_sizes, inputs)
+
+ # Obtain modality-specific decoders' queries
+ subsampled_points = subsampled_points or dict()
+
+ decoder_queries = dict()
+ for modality, decoder in self.modalities.items():
+ # Get input_without_pos for this modality if it exists.
+ input_without_pos = None
+ if inputs_without_pos is not None:
+ input_without_pos = inputs_without_pos.get(modality, None)
+ query = decoder.decoder_query(
+ inputs=inputs[modality],
+ modality_sizes=None,
+ inputs_without_pos=input_without_pos,
+ subsampled_points=subsampled_points.get(modality, None),
+ )
+ decoder_queries[modality] = query
+
+        # Pad all queries with trainable position encodings to make them have the same number of channels
+
+ def embed(modality, x):
+ x = torch.reshape(x, [x.shape[0], np.prod(x.shape[1:-1]), x.shape[-1]])
+ pos = self.padding[modality]
+ pos = torch.broadcast_to(pos, [x.shape[0], x.shape[1], self.num_query_channels - x.shape[2]])
+ return torch.cat([x, pos], dim=2)
+
+ # Apply a predictable ordering to the modalities
+ return torch.cat(
+ [embed(modality, decoder_queries[modality]) for modality in sorted(self.modalities.keys())], dim=1
+ )
+
+ def forward(self, query, z, query_mask=None, output_attentions=False):
+        # Decode all modalities together with the shared cross-attention decoder
+ decoder_outputs = self.decoder(query, z, output_attentions=output_attentions)
+
+ return decoder_outputs
+
+
+# Below: IO pre- and post-processor classes for Perceiver.
+def space_to_depth(frames: torch.Tensor, temporal_block_size: int = 1, spatial_block_size: int = 1) -> torch.Tensor:
+ """
+    Space to depth transform. Rearranges blocks of spatial data into depth.
+
+ This function assumes the channels to be first, but will place the channels last after transformation.
+
+ Based on https://discuss.pytorch.org/t/is-there-any-layer-like-tensorflows-space-to-depth-function/3487/15.
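+
+    Example (illustrative shapes only)::
+
+        >>> frames = torch.randn(1, 3, 4, 4)  # (batch_size, num_channels, height, width)
+        >>> space_to_depth(frames, spatial_block_size=2).shape  # channels end up in the last dimension
+        torch.Size([1, 2, 2, 12])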
+ """
+ if len(frames.shape) == 4:
+ batch_size, num_channels, height, width = frames.shape
+ # split up dimensions (height by spatial_block_size, width by spatial_block_size)
+ frames = frames.view(
+ batch_size,
+ num_channels,
+ height // spatial_block_size,
+ spatial_block_size,
+ width // spatial_block_size,
+ spatial_block_size,
+ )
+ # move blocks to last dimension: (batch_size, H//bs, W//bs, bs, bs, C)
+ frames = frames.permute(0, 2, 4, 3, 5, 1).contiguous()
+ # concatenate blocks along channel dimension: (batch_size, H//bs, W//bs, bs*bs*C)
+ frames = frames.view(
+ batch_size,
+ height // spatial_block_size,
+ width // spatial_block_size,
+ (spatial_block_size ** 2) * num_channels,
+ )
+ return frames
+ elif len(frames.shape) == 5:
+ batch_size, time, num_channels, height, width = frames.shape
+ # split up dimensions (time by temporal_block_size, height by spatial_block_size, width by spatial_block_size)
+ frames = frames.view(
+ batch_size,
+ time // temporal_block_size,
+ temporal_block_size,
+ num_channels,
+ height // spatial_block_size,
+ spatial_block_size,
+ width // spatial_block_size,
+ spatial_block_size,
+ )
+ # move blocks to last dimension: (batch_size, T//ts, H//bs, W//bs, ts, bs, bs, C)
+ frames = frames.permute(0, 1, 4, 6, 2, 5, 7, 3).contiguous()
+ # concatenate blocks along channel dimension: (batch_size, T//ts, H//bs, W//bs, ts*bs*bs*C)
+ frames = frames.view(
+ batch_size,
+ time // temporal_block_size,
+ height // spatial_block_size,
+ width // spatial_block_size,
+ temporal_block_size * (spatial_block_size ** 2) * num_channels,
+ )
+ return frames
+ else:
+ raise ValueError(
+ "Frames should be of rank 4 (batch, channels, height, width)"
+ " or rank 5 (batch, time, channels, height, width)"
+ )
+
+
+class Conv2dSamePadding(nn.Conv2d):
+ """
+ Conv2d layer with padding="same" support. Source:
+ https://gist.github.com/sumanmichael/4de9dee93f972d47c80c4ade8e149ea6
+ """
+
+ def __init__(self, *args, **kwargs):
+ super(Conv2dSamePadding, self).__init__(*args, **kwargs)
+ self.zero_pad_2d = nn.ZeroPad2d(
+ reduce(__add__, [(k // 2 + (k - 2 * (k // 2)) - 1, k // 2) for k in self.kernel_size[::-1]])
+ )
+
+ def forward(self, input):
+ return self._conv_forward(self.zero_pad_2d(input), self.weight, self.bias)
+
+
+class Conv2DDownsample(nn.Module):
+ """Downsamples 4x by applying a 2D convolution and doing max pooling."""
+
+ def __init__(
+ self,
+ num_layers: int = 1,
+ in_channels: int = 3,
+ out_channels: int = 64,
+ use_batchnorm: bool = True,
+ ):
+ """
+ Constructs a Conv2DDownsample model.
+
+ Args:
+            num_layers (:obj:`int`, `optional`, defaults to 1):
+                The number of convolutional layers.
+            in_channels (:obj:`int`, `optional`, defaults to 3):
+                The number of input channels.
+ out_channels (:obj:`int`, `optional`, defaults to 64):
+ The number of conv output channels.
+ use_batchnorm (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to use batchnorm.
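+
+        Example (a sketch with an arbitrary input resolution)::
+
+            >>> downsample = Conv2DDownsample(in_channels=3, out_channels=64)
+            >>> downsample(torch.randn(1, 3, 224, 224)).shape  # 7x7 conv, stride 2, then 3x3 max pool, stride 2
+            torch.Size([1, 64, 55, 55])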
+ """
+ super().__init__()
+
+ self.conv = Conv2dSamePadding(
+ in_channels=in_channels, out_channels=out_channels, kernel_size=7, stride=2, bias=False
+ )
+ self.batchnorm = nn.BatchNorm2d(num_features=out_channels) if use_batchnorm else nn.Identity()
+ self.relu = nn.ReLU()
+ self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2)
+
+ def forward(self, inputs: torch.Tensor) -> torch.Tensor:
+ out = self.conv(inputs)
+ out = self.batchnorm(out)
+ out = self.relu(out)
+ out = self.max_pool(out)
+ return out
+
+
+def generate_fourier_features(pos, num_bands, max_resolution=(224, 224), concat_pos=True, sine_only=False):
+ """
+ Generate a Fourier frequency position encoding with linear spacing.
+
+ Args:
+ pos (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length, dim)`):
+ The Tensor containing the position of n points in d dimensional space.
+ num_bands (:obj:`int`):
+ The number of frequency bands (K) to use.
+ max_resolution (:obj:`Tuple[int]`, `optional`, defaults to (224, 224)):
+ The maximum resolution (i.e. the number of pixels per dim). A tuple representing resolution for each dimension.
+ concat_pos (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to concatenate the input position encoding to the Fourier features.
+ sine_only (:obj:`bool`, `optional`, defaults to :obj:`False`):
+ Whether to use a single phase (sin) or two (sin/cos) for each frequency band.
+
+ Returns:
+ :obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, n_channels)`: The Fourier position
+ embeddings. If :obj:`concat_pos` is `True` and :obj:`sine_only` is `False`, output dimensions are ordered as:
+ [dim_1, dim_2, ..., dim_d, sin(pi*f_1*dim_1), ..., sin(pi*f_K*dim_1), ..., sin(pi*f_1*dim_d), ...,
+ sin(pi*f_K*dim_d), cos(pi*f_1*dim_1), ..., cos(pi*f_K*dim_1), ..., cos(pi*f_1*dim_d), ..., cos(pi*f_K*dim_d)],
+ where dim_i is pos[:, i] and f_k is the kth frequency band.
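+
+    Example (a sketch for 16 points in 2 dimensions; positions are expected to lie in [-1, 1])::
+
+        >>> pos = torch.rand(1, 16, 2) * 2 - 1  # (batch_size, n_points, n_dims)
+        >>> generate_fourier_features(pos, num_bands=4, max_resolution=(4, 4)).shape  # 2 + 2 * 2 * 4 channels
+        torch.Size([1, 16, 18])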
+ """
+
+ batch_size = pos.shape[0]
+
+ min_freq = 1.0
+ # Nyquist frequency at the target resolution:
+ freq_bands = torch.stack(
+ [torch.linspace(start=min_freq, end=res / 2, steps=num_bands) for res in max_resolution], dim=0
+ )
+
+ # Get frequency bands for each spatial dimension.
+ # Output is size [n, d * num_bands]
+ per_pos_features = pos[0, :, :][:, :, None] * freq_bands[None, :, :]
+ per_pos_features = torch.reshape(per_pos_features, [-1, np.prod(per_pos_features.shape[1:])])
+
+ if sine_only:
+ # Output is size [n, d * num_bands]
+ per_pos_features = torch.sin(np.pi * (per_pos_features))
+ else:
+ # Output is size [n, 2 * d * num_bands]
+ per_pos_features = torch.cat(
+ [torch.sin(np.pi * per_pos_features), torch.cos(np.pi * per_pos_features)], dim=-1
+ )
+ # Concatenate the raw input positions.
+ if concat_pos:
+ # Adds d bands to the encoding.
+ per_pos_features = torch.cat([pos, per_pos_features.expand(batch_size, -1, -1)], dim=-1)
+ return per_pos_features
+
+
+def build_linear_positions(index_dims, output_range=(-1.0, 1.0)):
+ """
+ Generate an array of position indices for an N-D input array.
+
+ Args:
+ index_dims (:obj:`List[int]`):
+ The shape of the index dimensions of the input array.
+ output_range (:obj:`Tuple[float]`, `optional`, defaults to :obj:`(-1.0, 1.0)`):
+ The min and max values taken by each input index dimension.
+
+ Returns:
+ :obj:`torch.FloatTensor` of shape :obj:`(index_dims[0], index_dims[1], .., index_dims[-1], N)`.
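+
+    Example (a sketch for a 2 x 2 grid)::
+
+        >>> build_linear_positions((2, 2)).shape  # one (x, y) coordinate in [-1, 1] per grid point
+        torch.Size([2, 2, 2])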
+ """
+
+ def _linspace(n_xels_per_dim):
+ return torch.linspace(start=output_range[0], end=output_range[1], steps=n_xels_per_dim, dtype=torch.float32)
+
+ dim_ranges = [_linspace(n_xels_per_dim) for n_xels_per_dim in index_dims]
+ array_index_grid = torch.meshgrid(*dim_ranges)
+
+ return torch.stack(array_index_grid, dim=-1)
+
+
+class PerceiverAbstractPositionEncoding(nn.Module, metaclass=abc.ABCMeta):
+ """Perceiver abstract position encoding."""
+
+ @property
+ @abc.abstractmethod
+ def num_dimensions(self) -> int:
+ raise NotImplementedError
+
+ @abc.abstractmethod
+ def output_size(self, *args, **kwargs) -> int:
+ raise NotImplementedError
+
+ @abc.abstractmethod
+ def forward(self, batch_size, pos):
+ raise NotImplementedError
+
+
+class PerceiverTrainablePositionEncoding(PerceiverAbstractPositionEncoding):
+ """Trainable position encoding."""
+
+ def __init__(self, index_dims, num_channels=128):
+ super().__init__()
+ self._num_channels = num_channels
+ self._index_dims = index_dims
+ index_dim = np.prod(index_dims)
+ self.position_embeddings = nn.Parameter(torch.randn(index_dim, num_channels))
+
+ @property
+ def num_dimensions(self) -> int:
+ if isinstance(self._index_dims, int):
+ return 1
+ return len(self._index_dims)
+
+ def output_size(self, *args, **kwargs) -> int:
+ return self._num_channels
+
+ def forward(self, batch_size):
+ position_embeddings = self.position_embeddings
+
+ if batch_size is not None:
+ position_embeddings = position_embeddings.expand(batch_size, -1, -1)
+ return position_embeddings
+
+
+def _check_or_build_spatial_positions(pos, index_dims, batch_size):
+ """
+ Checks or builds spatial position features (x, y, ...).
+
+ Args:
+ pos (:obj:`torch.FloatTensor`):
+ None, or an array of position features. If None, position features are built. Otherwise, their size is checked.
+ index_dims (:obj:`List[int]`):
+ An iterable giving the spatial/index size of the data to be featurized.
+ batch_size (:obj:`int`):
+ The batch size of the data to be featurized.
+
+ Returns:
+        :obj:`torch.FloatTensor` of shape :obj:`(batch_size, prod(index_dims), len(index_dims))`: an array of
+        position features.
+ """
+ if pos is None:
+ pos = build_linear_positions(index_dims)
+ pos = torch.broadcast_to(pos[None], (batch_size,) + pos.shape)
+ pos = torch.reshape(pos, [batch_size, np.prod(index_dims), -1])
+ else:
+ # Just a warning label: you probably don't want your spatial features to
+ # have a different spatial layout than your pos coordinate system.
+ # But feel free to override if you think it'll work!
+ if pos.shape[-1] != len(index_dims):
+ raise ValueError("Spatial features have the wrong number of dimensions.")
+ return pos
+
+
+class PerceiverFourierPositionEncoding(PerceiverAbstractPositionEncoding):
+ """Fourier (Sinusoidal) position encoding."""
+
+ def __init__(self, num_bands, max_resolution, concat_pos=True, sine_only=False):
+ super().__init__()
+ self.num_bands = num_bands
+ self.max_resolution = max_resolution
+ self.concat_pos = concat_pos
+ self.sine_only = sine_only
+
+ @property
+ def num_dimensions(self) -> int:
+ return len(self.max_resolution)
+
+ def output_size(self):
+ """Returns size of positional encodings last dimension."""
+ num_dims = len(self.max_resolution)
+ encoding_size = self.num_bands * num_dims
+ if not self.sine_only:
+ encoding_size *= 2
+ if self.concat_pos:
+ encoding_size += self.num_dimensions
+
+ return encoding_size
+
+ def forward(self, index_dims, batch_size, device, pos=None):
+ pos = _check_or_build_spatial_positions(pos, index_dims, batch_size)
+ fourier_pos_enc = generate_fourier_features(
+ pos,
+ num_bands=self.num_bands,
+ max_resolution=self.max_resolution,
+ concat_pos=self.concat_pos,
+ sine_only=self.sine_only,
+ ).to(device)
+ return fourier_pos_enc
+
+
+class AbstractPreprocessor(nn.Module):
+ @property
+ def num_channels(self) -> int:
+ """Returns size of preprocessor output."""
+ raise NotImplementedError()
+
+
+class PerceiverTextPreprocessor(AbstractPreprocessor):
+ """Text preprocessing for Perceiver Encoder."""
+
+ def __init__(self, config):
+ super().__init__()
+ self.embeddings = nn.Embedding(num_embeddings=config.vocab_size, embedding_dim=config.d_model)
+ self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.d_model)
+
+ @property
+ def num_channels(self) -> int:
+ return self.config.d_model
+
+ def forward(self, inputs):
+ embeddings = self.embeddings(inputs)
+
+ seq_length = inputs.shape[1]
+ position_ids = torch.arange(0, seq_length, device=inputs.device)
+ embeddings = embeddings + self.position_embeddings(position_ids)
+
+ return embeddings, None, None
+
+
+class PerceiverEmbeddingDecoder(nn.Module):
+ """Module to decode embeddings (for masked language modeling)."""
+
+ def __init__(self, config):
+ """Constructs the module."""
+ super().__init__()
+ self.config = config
+ self.vocab_size = config.vocab_size
+ self.bias = nn.Parameter(torch.zeros(self.vocab_size))
+
+ def forward(self, hidden_states, embedding_layer):
+ batch_size, seq_len, d_model = hidden_states.shape
+ output = torch.matmul(hidden_states.reshape([-1, d_model]), embedding_layer.weight.T) # Flatten batch dim
+ output = output + self.bias
+
+ return output.reshape([batch_size, seq_len, self.vocab_size])
+
+
+class PerceiverMultimodalPostprocessor(nn.Module):
+ """
+ Multimodal postprocessing for Perceiver.
+
+ Args:
+ modalities (:obj:`Dict[str, PostprocessorType]`):
+ Dictionary mapping modality name to postprocessor class for that modality.
+ input_is_dict (:obj:`bool`, `optional`, defaults to :obj:`False`):
+ If True, input is assumed to be dictionary structured, and outputs keep the same dictionary shape. If
+ False, input is a tensor which is sliced up during postprocessing by `modality_sizes`.
+ """
+
+ def __init__(self, modalities: Mapping[str, PostprocessorType], input_is_dict: bool = False):
+ super().__init__()
+ self.modalities = nn.ModuleDict(modalities)
+ self.input_is_dict = input_is_dict
+
+ def forward(
+ self, inputs: torch.Tensor, pos: Optional[torch.Tensor] = None, modality_sizes=None
+ ) -> Mapping[str, torch.Tensor]:
+ if not self.input_is_dict:
+ # Slice up modalities by their sizes.
+ if modality_sizes is None:
+ raise ValueError("Modality sizes should be specified if input is not a dictionary.")
+ inputs = restructure(modality_sizes=modality_sizes, inputs=inputs)
+
+ outputs = {
+ modality: postprocessor(inputs[modality], pos=pos, modality_sizes=None)
+ for modality, postprocessor in self.modalities.items()
+ }
+ return outputs
+
+
+class PerceiverClassificationPostprocessor(nn.Module):
+ """
+ Classification postprocessing for Perceiver. Can be used to convert the decoder output to classification logits.
+
+ Args:
+ config (:obj:`PerceiverConfig`):
+ Model configuration.
+ in_channels (:obj:`int`):
+ Number of channels in the input.
+ """
+
+ def __init__(self, config, in_channels):
+ super().__init__()
+ self.classifier = nn.Linear(in_channels, config.num_labels)
+
+ def forward(self, inputs, pos: Optional[torch.Tensor] = None, modality_sizes=None) -> torch.Tensor:
+ logits = self.classifier(inputs)
+ return logits[:, 0, :]
+
+
+class PerceiverAudioPostprocessor(nn.Module):
+ """
+ Audio postprocessing for Perceiver. Can be used to convert the decoder output to audio features.
+
+ Args:
+ config (:obj:`PerceiverConfig`):
+ Model configuration.
+ in_channels (:obj:`int`):
+ Number of channels in the input.
+ postproc_type (:obj:`str`, `optional`, defaults to :obj:`"patches"`):
+ Postprocessor type to use. Currently, only "patches" is supported.
+ """
+
+ def __init__(self, config, in_channels, postproc_type: str = "patches"):
+ super().__init__()
+
+ if postproc_type not in ("patches",): # to be supported: 'conv', 'patches', 'pixels'
+ raise ValueError("Invalid postproc_type!")
+
+ # Architecture parameters:
+ self.classifier = nn.Linear(in_channels, config.samples_per_patch)
+
+ def forward(self, inputs: torch.Tensor, pos: Optional[torch.Tensor] = None, modality_sizes=None) -> torch.Tensor:
+
+ logits = self.classifier(inputs)
+ return torch.reshape(logits, [inputs.shape[0], -1])
+
+
+class PerceiverProjectionPostprocessor(nn.Module):
+ """
+    Projection postprocessing for Perceiver. Can be used to project the channels of the decoder output to a lower
+    dimension.
+
+ Args:
+ in_channels (:obj:`int`):
+ Number of channels in the input.
+ out_channels (:obj:`int`):
+ Number of channels in the output.
+ """
+
+ def __init__(self, in_channels, out_channels):
+ super().__init__()
+ self.classifier = nn.Linear(in_channels, out_channels)
+
+ def forward(self, inputs: torch.Tensor, pos: Optional[torch.Tensor] = None, modality_sizes=None) -> torch.Tensor:
+ logits = self.classifier(inputs)
+ return logits
+
+
+class PerceiverImagePreprocessor(AbstractPreprocessor):
+ """
+ Image preprocessing for Perceiver Encoder.
+
+ Note: the `out_channels` argument refers to the output channels of a convolutional layer, if `prep_type` is set to
+ "conv1x1" or "conv". If one adds absolute position embeddings, one must make sure the `num_channels` of the
+ position encoding kwargs are set equal to the `out_channels`.
+
+ Args:
+ config (:obj:`PerceiverConfig`):
+ Model configuration.
+ prep_type (:obj:`str`, `optional`, defaults to :obj:`"conv"`):
+ Preprocessing type. Can be "conv1x1", "conv", "patches", "pixels".
+ spatial_downsample (:obj:`int`, `optional`, defaults to 4):
+ Spatial downsampling factor.
+ temporal_downsample (:obj:`int`, `optional`, defaults to 1):
+ Temporal downsampling factor (only relevant in case a time dimension is present).
+ position_encoding_type (:obj:`str`, `optional`, defaults to :obj:`"fourier"`):
+ Position encoding type. Can be "fourier" or "trainable".
+ in_channels (:obj:`int`, `optional`, defaults to 3):
+ Number of channels in the input.
+ out_channels (:obj:`int`, `optional`, defaults to 64):
+ Number of channels in the output.
+ conv_after_patching (:obj:`bool`, `optional`, defaults to :obj:`False`):
+ Whether to apply a convolutional layer after patching.
+ conv_after_patching_in_channels (:obj:`int`, `optional`, defaults to 54):
+ Number of channels in the input of the convolutional layer after patching.
+ conv2d_use_batchnorm (:obj:`bool`, `optional`, defaults to :obj:`True`):
+ Whether to use batch normalization in the convolutional layer.
+ concat_or_add_pos (:obj:`str`, `optional`, defaults to :obj:`"concat"`):
+ How to concatenate the position encoding to the input. Can be "concat" or "add".
+ project_pos_dim (:obj:`int`, `optional`, defaults to -1):
+ Dimension of the position encoding to project to. If -1, no projection is applied.
+ **position_encoding_kwargs (:obj:`Dict`, `optional`):
+ Keyword arguments for the position encoding.
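+
+    Example (a sketch of the "conv1x1" preprocessing with trainable position encodings; sizes below are arbitrary)::
+
+        >>> config = PerceiverConfig()
+        >>> preprocessor = PerceiverImagePreprocessor(
+        ...     config,
+        ...     prep_type="conv1x1",
+        ...     spatial_downsample=1,
+        ...     out_channels=256,
+        ...     position_encoding_type="trainable",
+        ...     concat_or_add_pos="concat",
+        ...     trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=224 * 224),
+        ... )
+        >>> inputs, _, _ = preprocessor(torch.randn(1, 3, 224, 224))
+        >>> inputs.shape  # 224 * 224 positions, 256 image channels + 256 position channels each
+        torch.Size([1, 50176, 512])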
+ """
+
+ def __init__(
+ self,
+ config,
+ prep_type="conv",
+ spatial_downsample: int = 4,
+ temporal_downsample: int = 1,
+ position_encoding_type: str = "fourier",
+ in_channels: int = 3,
+ out_channels: int = 64,
+ conv_after_patching: bool = False,
+ conv_after_patching_in_channels: int = 54, # only relevant when conv_after_patching = True
+ conv2d_use_batchnorm: bool = True,
+ concat_or_add_pos: str = "concat",
+ project_pos_dim: int = -1,
+ **position_encoding_kwargs,
+ ):
+ super().__init__()
+ self.config = config
+
+ if prep_type not in ("conv", "patches", "pixels", "conv1x1"):
+ raise ValueError(f"Prep_type {prep_type} is invalid")
+
+ if concat_or_add_pos not in ["concat", "add"]:
+ raise ValueError(f"Invalid value {concat_or_add_pos} for concat_or_add_pos.")
+
+ self.in_channels = in_channels
+ self.prep_type = prep_type
+ self.spatial_downsample = spatial_downsample
+ self.temporal_downsample = temporal_downsample
+ self.position_encoding_type = position_encoding_type
+ self.concat_or_add_pos = concat_or_add_pos
+ self.conv_after_patching = conv_after_patching
+ self.out_channels = out_channels
+
+ if self.prep_type == "conv":
+ # Downsampling with conv is currently restricted
+ convnet_num_layers = math.log(spatial_downsample, 4)
+ convnet_num_layers_is_int = convnet_num_layers == np.round(convnet_num_layers)
+ if not convnet_num_layers_is_int or temporal_downsample != 1:
+ raise ValueError(
+ "Only powers of 4 expected for spatial and 1 expected for temporal downsampling with conv."
+ )
+ self.convnet = Conv2DDownsample(
+ in_channels=in_channels,
+ num_layers=int(convnet_num_layers),
+ out_channels=out_channels,
+ use_batchnorm=conv2d_use_batchnorm,
+ )
+
+ elif self.prep_type == "conv1x1":
+ if temporal_downsample != 1:
+ raise ValueError("Conv1x1 does not downsample in time.")
+ self.convnet_1x1 = nn.Conv2d(
+ in_channels=in_channels,
+ out_channels=out_channels,
+ kernel_size=(1, 1),
+ # spatial_downsample is unconstrained for 1x1 convolutions.
+ stride=(spatial_downsample, spatial_downsample),
+ )
+
+ # Position embeddings
+ self.project_pos_dim = project_pos_dim
+ self.position_embeddings, self.positions_projection = build_position_encoding(
+ position_encoding_type=position_encoding_type,
+ out_channels=out_channels,
+ project_pos_dim=project_pos_dim,
+ **position_encoding_kwargs,
+ )
+
+ # Optional convolutional layer after patches.
+ self.conv_after_patches = (
+ nn.Linear(conv_after_patching_in_channels, self.out_channels) if conv_after_patching else nn.Identity()
+ )
+
+ @property
+ def num_channels(self) -> int:
+        # The input data has 2 index dimensions for images and 3 for videos.
+        # For convenience, is_temporal indicates whether a temporal dimension is present,
+        # which we infer from the number of dimensions of the position encoding.
+ is_temporal = self.position_embeddings.num_dimensions > 2
+
+ # position embedding
+ if self.project_pos_dim > 0:
+ pos_dim = self.project_pos_dim
+ else:
+ pos_dim = self.position_embeddings.output_size()
+ if self.concat_or_add_pos == "add":
+ return pos_dim
+
+ # inputs
+ if self.conv_after_patching or self.prep_type in ("conv1x1", "conv"):
+ inp_dim = self.out_channels
+ elif self.prep_type == "pixels":
+ inp_dim = self.in_channels
+ if not is_temporal:
+ inp_dim = math.ceil(inp_dim / self.spatial_downsample)
+ elif self.prep_type == "patches":
+ if self.conv_after_patching:
+ inp_dim = self.out_channels
+ else:
+ inp_dim = self.in_channels * self.spatial_downsample ** 2
+ if is_temporal:
+ inp_dim *= self.temporal_downsample
+
+ return inp_dim + pos_dim
+
+ def _build_network_inputs(self, inputs: torch.Tensor, pos: torch.Tensor, network_input_is_1d: bool = True):
+ """
+ Construct the final input, including position encoding.
+
+ This method expects the inputs to always have channels as last dimension.
+
+ """
+ batch_size = inputs.shape[0]
+ index_dims = inputs.shape[1:-1]
+ indices = np.prod(index_dims)
+
+ # Flatten input features to a 1D index dimension if necessary.
+ if len(inputs.shape) > 3 and network_input_is_1d:
+ inputs = torch.reshape(inputs, [batch_size, indices, -1])
+
+ # Construct the position encoding.
+ if self.position_encoding_type == "trainable":
+ pos_enc = self.position_embeddings(batch_size)
+ elif self.position_encoding_type == "fourier":
+ pos_enc = self.position_embeddings(index_dims, batch_size, device=inputs.device)
+
+ # Optionally project them to a target dimension.
+ pos_enc = self.positions_projection(pos_enc)
+
+ if not network_input_is_1d:
+ # Reshape pos to match the input feature shape
+ # if the network takes non-1D inputs
+ sh = inputs.shape
+ pos_enc = torch.reshape(pos_enc, list(sh)[:-1] + [-1])
+ if self.concat_or_add_pos == "concat":
+ inputs_with_pos = torch.cat([inputs, pos_enc], dim=-1)
+ elif self.concat_or_add_pos == "add":
+ inputs_with_pos = inputs + pos_enc
+ return inputs_with_pos, inputs
+
+ def forward(self, inputs: torch.Tensor, pos: Optional[torch.Tensor] = None, network_input_is_1d: bool = True):
+ if self.prep_type == "conv":
+ # Convnet image featurization.
+ # Downsamples spatially by a factor of 4
+ inputs = self.convnet(inputs)
+
+ elif self.prep_type == "conv1x1":
+ # map inputs to self.out_channels
+ inputs = self.convnet_1x1(inputs)
+
+ elif self.prep_type == "pixels":
+ # if requested, downsamples in the crudest way
+ if inputs.ndim == 4:
+                inputs = inputs[:, :, :: self.spatial_downsample, :: self.spatial_downsample]
+ elif inputs.ndim == 5:
+ inputs = inputs[
+ :, :: self.temporal_downsample, :, :: self.spatial_downsample, :: self.spatial_downsample
+ ]
+ else:
+ raise ValueError("Unsupported data format for pixels.")
+
+ elif self.prep_type == "patches":
+ # Space2depth featurization.
+ # Video: B x T x C x H x W
+ inputs = space_to_depth(
+ inputs, temporal_block_size=self.temporal_downsample, spatial_block_size=self.spatial_downsample
+ )
+
+ if inputs.ndim == 5 and inputs.shape[1] == 1:
+ # for flow
+ inputs = inputs.squeeze(dim=1)
+
+ # Optionally apply conv layer.
+ inputs = self.conv_after_patches(inputs)
+
+ if self.prep_type != "patches":
+ # move channels to last dimension, as the _build_network_inputs method below expects this
+ if inputs.ndim == 4:
+ inputs = torch.moveaxis(inputs, 1, -1)
+ elif inputs.ndim == 5:
+ inputs = torch.moveaxis(inputs, 2, -1)
+ else:
+ raise ValueError("Unsupported data format for conv1x1.")
+
+ inputs, inputs_without_pos = self._build_network_inputs(inputs, pos, network_input_is_1d)
+ modality_sizes = None # Size for each modality, only needed for multimodal
+
+ return inputs, modality_sizes, inputs_without_pos
+
+
+class PerceiverOneHotPreprocessor(AbstractPreprocessor):
+ """
+ One-hot preprocessor for Perceiver Encoder. Can be used to add a dummy index dimension to the input.
+
+ Args:
+ config (:obj:`PerceiverConfig`):
+ Model configuration.
+ """
+
+ def __init__(self, config):
+ super().__init__()
+ self.config: PerceiverConfig = config
+
+ @property
+ def num_channels(self) -> int:
+ return self.config.num_labels
+
+ def forward(self, inputs: torch.Tensor, pos: Optional[torch.Tensor] = None, network_input_is_1d: bool = True):
+ # Add a dummy index dimension.
+ inputs = inputs[:, None, :]
+
+ # No position encodings, so the 1st (input) and 3rd (inputs_without_pos)
+ # outputs are identical.
+ return inputs, None, inputs
+
+
+class PerceiverAudioPreprocessor(AbstractPreprocessor):
+ """
+ Audio preprocessing for Perceiver Encoder.
+
+ Args:
+ config (:obj:`PerceiverConfig`):
+ Model configuration.
+ prep_type (:obj:`str`, `optional`, defaults to :obj:`"patches"`):
+ Preprocessor type to use. Only "patches" is supported.
+ samples_per_patch (:obj:`int`, `optional`, defaults to 96):
+ Number of samples per patch.
+ position_encoding_type (:obj:`str`, `optional`, defaults to :obj:`"fourier"`):
+ Type of position encoding to use. Can be "trainable" or "fourier".
+ concat_or_add_pos (:obj:`str`, `optional`, defaults to :obj:`"concat"`):
+ How to concatenate the position encoding to the input. Can be "concat" or "add".
+ out_channels (:obj:`int`, `optional`, defaults to 64):
+ Number of channels in the output.
+ project_pos_dim (:obj:`int`, `optional`, defaults to -1):
+ Dimension of the position encoding to project to. If -1, no projection is applied.
+ **position_encoding_kwargs (:obj:`Dict`, `optional`):
+ Keyword arguments for the position encoding.
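+
+    Example (a sketch on a short random waveform; the patch size and Fourier settings below are arbitrary)::
+
+        >>> config = PerceiverConfig()
+        >>> preprocessor = PerceiverAudioPreprocessor(
+        ...     config,
+        ...     samples_per_patch=16,
+        ...     fourier_position_encoding_kwargs=dict(
+        ...         num_bands=8, max_resolution=(120,), sine_only=False, concat_pos=True
+        ...     ),
+        ... )
+        >>> inputs, _, _ = preprocessor(torch.randn(1, 1920, 1), pos=None)
+        >>> inputs.shape  # 1920 / 16 = 120 patches, each with 16 samples + 17 position channels
+        torch.Size([1, 120, 33])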
+ """
+
+ def __init__(
+ self,
+ config,
+ prep_type: str = "patches",
+ samples_per_patch: int = 96,
+ position_encoding_type: str = "fourier",
+ concat_or_add_pos: str = "concat",
+ out_channels=64,
+ project_pos_dim=-1,
+ **position_encoding_kwargs,
+ ):
+ super().__init__()
+ self.config = config
+
+ if prep_type not in ("patches",):
+ raise ValueError(f"Prep_type {prep_type} is invalid, can only be 'patches'.")
+
+ if concat_or_add_pos not in ["concat", "add"]:
+ raise ValueError(f"Concat_or_pos {concat_or_add_pos} is invalid, can only be 'concat' or 'add'.")
+
+ self.samples_per_patch = samples_per_patch
+ self.position_encoding_type = position_encoding_type
+ self.concat_or_add_pos = concat_or_add_pos
+ self.project_pos_dim = project_pos_dim
+
+ # Position embeddings
+ self.position_embeddings, self.positions_projection = build_position_encoding(
+ position_encoding_type=position_encoding_type,
+ out_channels=out_channels,
+ project_pos_dim=project_pos_dim,
+ **position_encoding_kwargs,
+ )
+
+ @property
+ def num_channels(self) -> int:
+ # position embedding
+ if self.project_pos_dim > 0:
+ pos_dim = self.project_pos_dim
+ else:
+ pos_dim = self.position_embeddings.output_size()
+ if self.concat_or_add_pos == "add":
+ return pos_dim
+ return self.samples_per_patch + pos_dim
+
+ def _build_network_inputs(self, inputs, pos):
+ """Construct the final input, including position encoding."""
+ batch_size = inputs.shape[0]
+ index_dims = inputs.shape[1:-1]
+
+ # Construct the position encoding.
+ if self.position_encoding_type == "trainable":
+ pos_enc = self.position_embeddings(batch_size)
+ elif self.position_encoding_type == "fourier":
+ pos_enc = self.position_embeddings(index_dims, batch_size, device=inputs.device)
+
+ # Optionally project them to a target dimension.
+ pos_enc = self.positions_projection(pos_enc)
+
+ if self.concat_or_add_pos == "concat":
+ inputs_with_pos = torch.cat([inputs, pos_enc], dim=-1)
+ elif self.concat_or_add_pos == "add":
+ inputs_with_pos = inputs + pos_enc
+
+ return inputs_with_pos, inputs
+
+ def forward(self, inputs, pos, network_input_is_1d: bool = True):
+ inputs = torch.reshape(inputs, [inputs.shape[0], -1, self.samples_per_patch])
+
+ inputs, inputs_without_pos = self._build_network_inputs(inputs, pos)
+ modality_sizes = None # Size for each modality, only needed for multimodal
+
+ return inputs, modality_sizes, inputs_without_pos
+
+
+class PerceiverMultimodalPreprocessor(AbstractPreprocessor):
+ """
+ Multimodal preprocessing for Perceiver Encoder.
+
+ Inputs for each modality are preprocessed, then padded with trainable position embeddings to have the same number
+ of channels.
+
+ Args:
+ modalities (:obj:`Dict[str, PreprocessorType]`):
+ Dict mapping modality name to preprocessor.
+ mask_probs (:obj:`Dict[str, float]`):
+ Dict mapping modality name to masking probability of that modality.
+ min_padding_size (:obj:`int`, `optional`, defaults to 2):
+ The minimum padding size for all modalities. The final output will have num_channels equal to the maximum
+ channels across all modalities plus min_padding_size.
+ """
+
+ def __init__(
+ self,
+ modalities: Mapping[str, PreprocessorType],
+ mask_probs: Optional[Mapping[str, float]] = None,
+ min_padding_size: int = 2,
+ ):
+ super().__init__()
+ self.modalities = modalities
+ self.min_padding_size = min_padding_size
+ self.mask_probs = mask_probs if mask_probs is not None else dict()
+ self.padding = nn.ParameterDict(
+ {
+ modality: nn.Parameter(torch.randn(1, self.num_channels - preprocessor.num_channels))
+ for modality, preprocessor in modalities.items()
+ }
+ )
+ self.mask = nn.ParameterDict(
+ {modality: nn.Parameter(torch.randn(1, self.num_channels)) for modality, _ in self.mask_probs.items()}
+ )
+
+ @property
+ def num_channels(self) -> int:
+ max_channel_size = max(processor.num_channels for _, processor in self.modalities.items())
+ common_channel_size = max_channel_size + self.min_padding_size
+ return common_channel_size
+
+ def forward(
+ self, inputs: Mapping[str, torch.Tensor], pos: Optional[torch.Tensor] = None, network_input_is_1d: bool = True
+ ) -> PreprocessorOutputType:
+ padded = {}
+ modality_sizes = {}
+ inputs_without_pos = {}
+ for modality, preprocessor in self.modalities.items():
+ # preprocess each modality using the respective preprocessor.
+ output, _, inputs_without_pos[modality] = preprocessor(
+ inputs[modality], pos=pos, network_input_is_1d=network_input_is_1d
+ )
+
+ # pad to the same common_channel_size.
+ batch_size, num_samples, num_channels = output.shape
+ pos_enc = self.padding[modality].expand(batch_size, -1, -1)
+
+ padding = torch.broadcast_to(
+ pos_enc,
+ [batch_size, num_samples, self.num_channels - num_channels],
+ )
+ output_padded = torch.cat([output, padding], dim=2)
+
+ # mask if required
+ if modality in self.mask_probs:
+ mask_token = self.mask[modality].expand(batch_size, -1, -1)
+ mask_prob = self.mask_probs[modality]
+ mask = torch.bernoulli(torch.full([batch_size, num_samples], mask_prob))
+ mask = torch.unsqueeze(mask, dim=2).to(mask_token.device)
+ output_padded = (1 - mask) * output_padded + mask * mask_token
+
+ padded[modality] = output_padded
+ modality_sizes[modality] = output_padded.shape[1]
+
+ # Apply a predictable ordering to the modalities
+ padded_ls = [padded[k] for k in sorted(padded.keys())]
+
+ # Finally, concatenate along the time dimension
+ final_inputs = torch.cat(padded_ls, dim=1)
+
+ return final_inputs, modality_sizes, inputs_without_pos
diff --git a/src/transformers/models/perceiver/tokenization_perceiver.py b/src/transformers/models/perceiver/tokenization_perceiver.py
new file mode 100644
--- /dev/null
+++ b/src/transformers/models/perceiver/tokenization_perceiver.py
@@ -0,0 +1,204 @@
+# coding=utf-8
+# Copyright 2021 The HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" Tokenization class for Perceiver."""
+
+
+from typing import Dict, List, Optional, Tuple
+
+from ...tokenization_utils import AddedToken, PreTrainedTokenizer
+from ...utils import logging
+
+
+logger = logging.get_logger(__name__)
+
+
+class PerceiverTokenizer(PreTrainedTokenizer):
+ """
+ Construct a Perceiver tokenizer. The Perceiver tokenizer simply operates on the raw bytes of the UTF-8 encoding.
+
+ This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.
+ Users should refer to this superclass for more information regarding those methods.
+
+ Args:
+ pad_token (:obj:`str`, `optional`, defaults to :obj:`"[PAD]"`):
+ The token used for padding, for example when batching sequences of different lengths.
+ bos_token (:obj:`str`, `optional`, defaults to :obj:`"[BOS]"`):
+ The BOS token (reserved in the vocab, but not actually used).
+ eos_token (:obj:`str`, `optional`, defaults to :obj:`"[EOS]"`):
+ The end of sequence token (reserved in the vocab, but not actually used).
+
+ .. note::
+
+ When building a sequence using special tokens, this is not the token that is used for the end of
+ sequence. The token used is the :obj:`sep_token`.
+ mask_token (:obj:`str`, `optional`, defaults to :obj:`"[MASK]"`):
+ The MASK token, useful for masked language modeling.
+ cls_token (:obj:`str`, `optional`, defaults to :obj:`"[CLS]"`):
+ The CLS token (reserved in the vocab, but not actually used).
+ sep_token (:obj:`str`, `optional`, defaults to :obj:`"[SEP]"`):
+ The separator token, which is used when building a sequence from two sequences.
+
+ """
+
+ model_input_names = ["input_ids", "attention_mask"]
+
+ def __init__(
+ self,
+ pad_token="[PAD]",
+ bos_token="[BOS]",
+ eos_token="[EOS]",
+ mask_token="[MASK]",
+ cls_token="[CLS]",
+ sep_token="[SEP]",
+ model_max_length=2048,
+ **kwargs
+ ) -> None:
+
+ pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
+ bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
+ eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
+ mask_token = AddedToken(mask_token, lstrip=False, rstrip=False) if isinstance(mask_token, str) else mask_token
+ cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token
+ sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token
+
+ super().__init__(
+ pad_token=pad_token,
+ bos_token=bos_token,
+ eos_token=eos_token,
+ mask_token=mask_token,
+ cls_token=cls_token,
+ sep_token=sep_token,
+ model_max_length=model_max_length,
+ **kwargs,
+ )
+
+ self._utf_vocab_size = 2 ** 8 # utf is 8 bits
+
+ # define special tokens dict
+ self.special_tokens_encoder: Dict[str, int] = {
+ self.pad_token: 0,
+ self.bos_token: 1,
+ self.eos_token: 2,
+ self.mask_token: 3,
+ self.cls_token: 4,
+ self.sep_token: 5,
+ }
+ self._num_special_tokens = len(self.special_tokens_encoder)
+ self.special_tokens_decoder: Dict[int, str] = {v: k for k, v in self.special_tokens_encoder.items()}
+
+ @property
+ def vocab_size(self):
+ return self._utf_vocab_size + self._num_special_tokens
+
+ def get_special_tokens_mask(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+ ) -> List[int]:
+ """
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+ special tokens using the tokenizer ``prepare_for_model`` method.
+
+ Args:
+ token_ids_0 (:obj:`List[int]`):
+ List of IDs.
+ token_ids_1 (:obj:`List[int]`, `optional`):
+ Optional second list of IDs for sequence pairs.
+ already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
+ Whether or not the token list is already formatted with special tokens for the model.
+
+ Returns:
+ :obj:`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+ """
+ if already_has_special_tokens:
+ return super().get_special_tokens_mask(
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+ )
+
+ # normal case: some special tokens
+ if token_ids_1 is None:
+ return [1] + [0] * len(token_ids_0) + [1]
+ return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1]
+
+ def build_inputs_with_special_tokens(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+ ) -> List[int]:
+ """
+ Build model inputs from a sequence or a pair of sequence for sequence classification tasks. A sequence has the
+ following format:
+
+ - single sequence: ``[CLS] X [SEP]``
+ - pair of sequences: ``[CLS] A [SEP] B [SEP]``
+
+ Args:
+ token_ids_0 (:obj:`List[int]`):
+ List of IDs to which the special tokens will be added.
+ token_ids_1 (:obj:`List[int]`, `optional`):
+ Optional second list of IDs for sequence pairs.
+
+ Returns:
+ :obj:`List[int]`: List of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens.
+ """
+ if token_ids_1 is None:
+ return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
+ else:
+ return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] + token_ids_1 + [self.sep_token_id]
+
+ def _tokenize(self, text: str) -> List[str]:
+ """Take as input a string and return a list of strings (tokens) for words/sub-words"""
+ tokens = [chr(i) for i in text.encode("utf-8")]
+ return tokens
+
+ def _convert_token_to_id(self, token):
+ """Converts a token (str) in an id using the vocab."""
+ if token in self.special_tokens_encoder:
+ token_id = self.special_tokens_encoder[token]
+ elif token in self.added_tokens_encoder:
+ token_id = self.added_tokens_encoder[token]
+ elif len(token) != 1:
+ token_id = self.unk_token_id
+ else:
+ token_id = ord(token) + self._num_special_tokens
+ return token_id
+
+ def _convert_id_to_token(self, index):
+ """Converts an index (integer) in a token (str) using the vocab."""
+ if index in self.special_tokens_decoder:
+ token = self.special_tokens_decoder[index]
+ elif index in self.added_tokens_decoder:
+ token = self.added_tokens_decoder[index]
+ else:
+ token = chr(index - self._num_special_tokens)
+ return token
+
+ def convert_tokens_to_string(self, tokens):
+ """Converts a sequence of tokens (string) in a single string."""
+ bstring = b""
+ for token in tokens:
+ if token in self.special_tokens_decoder:
+ tok_string = self.special_tokens_decoder[token].encode("utf-8")
+ elif token in self.added_tokens_decoder:
+ tok_string = self.added_tokens_decoder[token].encode("utf-8")
+ elif token in self.special_tokens_encoder:
+ tok_string = token.encode("utf-8")
+ elif token in self.added_tokens_encoder:
+ tok_string = token.encode("utf-8")
+ else:
+ tok_string = bytes([ord(token)])
+ bstring += tok_string
+ string = bstring.decode("utf-8", errors="replace")
+ return string
+
+ # PerceiverTokenizer has no vocab file
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
+ return ()
diff --git a/src/transformers/utils/dummy_pt_objects.py b/src/transformers/utils/dummy_pt_objects.py
--- a/src/transformers/utils/dummy_pt_objects.py
+++ b/src/transformers/utils/dummy_pt_objects.py
@@ -3729,6 +3729,87 @@ def forward(self, *args, **kwargs):
requires_backends(self, ["torch"])
+PERCEIVER_PRETRAINED_MODEL_ARCHIVE_LIST = None
+
+
+class PerceiverForImageClassificationConvProcessing:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForImageClassificationFourier:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForImageClassificationLearned:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForMaskedLM:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+ @classmethod
+ def from_pretrained(cls, *args, **kwargs):
+ requires_backends(cls, ["torch"])
+
+ def forward(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForMultimodalAutoencoding:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForOpticalFlow:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverForSequenceClassification:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+ @classmethod
+ def from_pretrained(cls, *args, **kwargs):
+ requires_backends(cls, ["torch"])
+
+ def forward(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverLayer:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverModel:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+ @classmethod
+ def from_pretrained(cls, *args, **kwargs):
+ requires_backends(cls, ["torch"])
+
+ def forward(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
+class PerceiverPreTrainedModel:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+ @classmethod
+ def from_pretrained(cls, *args, **kwargs):
+ requires_backends(cls, ["torch"])
+
+ def forward(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
PROPHETNET_PRETRAINED_MODEL_ARCHIVE_LIST = None
diff --git a/src/transformers/utils/dummy_vision_objects.py b/src/transformers/utils/dummy_vision_objects.py
--- a/src/transformers/utils/dummy_vision_objects.py
+++ b/src/transformers/utils/dummy_vision_objects.py
@@ -64,6 +64,11 @@ def from_pretrained(cls, *args, **kwargs):
requires_backends(cls, ["vision"])
+class PerceiverFeatureExtractor:
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["vision"])
+
+
class SegformerFeatureExtractor:
def __init__(self, *args, **kwargs):
requires_backends(self, ["vision"])
diff --git a/utils/check_repo.py b/utils/check_repo.py
--- a/utils/check_repo.py
+++ b/utils/check_repo.py
@@ -102,6 +102,8 @@
# should **not** be the rule.
IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
# models to ignore for model xxx mapping
+ "PerceiverForMultimodalAutoencoding",
+ "PerceiverForOpticalFlow",
"SegformerDecodeHead",
"SegformerForSemanticSegmentation",
"BeitForSemanticSegmentation",
</patch>
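As an aside for readers skimming the patch: the byte-level vocabulary added in `tokenization_perceiver.py` is simple enough to sketch outside the library. Six special tokens occupy ids 0–5, and every UTF-8 byte of the text is shifted up by that offset (see `_convert_token_to_id` and `build_inputs_with_special_tokens` above). The snippet below is a standalone illustration of that mapping, not the transformers API itself.

```python
# Standalone sketch of the id scheme in tokenization_perceiver.py (illustration only).
NUM_SPECIAL_TOKENS = 6   # [PAD]=0, [BOS]=1, [EOS]=2, [MASK]=3, [CLS]=4, [SEP]=5
CLS_ID, SEP_ID = 4, 5

def encode(text: str) -> list:
    """[CLS] + (utf-8 byte + 6 for each byte) + [SEP], mirroring the patch."""
    return [CLS_ID] + [b + NUM_SPECIAL_TOKENS for b in text.encode("utf-8")] + [SEP_ID]

def decode(ids: list) -> str:
    """Drop special ids and turn the remaining values back into utf-8 bytes."""
    byte_values = bytes(i - NUM_SPECIAL_TOKENS for i in ids if i >= NUM_SPECIAL_TOKENS)
    return byte_values.decode("utf-8", errors="replace")

ids = encode("déjà vu")
print(ids)          # [4, 106, 201, 175, 112, 201, 166, 38, 124, 123, 5]
print(decode(ids))  # déjà vu
```

Because the vocabulary is just the 256 byte values plus a handful of reserved ids, no vocab file is needed, which is why `save_vocabulary` in the patch returns an empty tuple.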
|
[]
|
[]
| ||||
Qiskit__qiskit-910
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reverse bit order in latex_circuit_drawer with conditional
### What is the current behavior?
I found a bug in `latex_circuit_drawer` when a conditional gate is drawn with the `reversebits` option.
The register index is not handled correctly in `latex_circuit_drawer` when the bit order is reversed.
### Steps to reproduce the problem
```
q1 = QuantumRegister(3, 'q1')
c1 = ClassicalRegister(3, 'c1')
qc = QuantumCircuit(q1, c1)
qc.x(q1[0]).c_if(c1, 2)
circuit_drawer(qc, style={"reversebits": True})
```
The results:
```
...
~/testbench/qiskit/tools/visualization/_circuit_visualization.py in _get_image_depth(self, aliases)
611 for i in range(pos_1, pos_2 + self.cregs[if_reg]):
612 if is_occupied[i] is False:
--> 613 is_occupied[i] = True
614 else:
615 columns += 1
IndexError: list index out of range
```
</issue>
<code>
[start of README.md]
1 # Quantum Information Science Kit (Qiskit)
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/Qiskit/qiskit-terra)
5 [](https://travis-ci.org/Qiskit/qiskit-terra)
6
7 The Quantum Information Science Kit (**Qiskit** for short) is a software development kit (SDK) for
8 working with [OpenQASM](https://github.com/Qiskit/qiskit-openqasm) and the
9 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/).
10
11 Use **Qiskit** to create quantum computing programs, compile them, and execute them on one of
12 several backends (online Real quantum processors, online simulators, and local simulators). For
13 the online backends, Qiskit uses our [python API client](https://github.com/Qiskit/qiskit-api-py)
14 to connect to the IBM Q Experience.
15
16 **We use GitHub issues for tracking requests and bugs. Please see the**
17 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for
18 questions and discussion.**
19
20 **If you'd like to contribute to Qiskit, please take a look at our**
21 [contribution guidelines](.github/CONTRIBUTING.rst).
22
23 Links to Sections:
24
25 * [Installation](#installation)
26 * [Creating your first Quantum Program](#creating-your-first-quantum-program)
27 * [More Information](#more-information)
28 * [Authors](#authors-alphabetical)
29
30 ## Installation
31
32 ### Dependencies
33
34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In
35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
36 for interacting with the tutorials.
37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
38 python distribution, as it comes with all of these dependencies pre-installed.
39
40 In addition, a basic understanding of quantum information is very helpful when interacting with
41 Qiskit. If you're new to quantum, start with our
42 [User Guides](https://github.com/Qiskit/ibmqx-user-guides)!
43
44 ### Instructions
45
46 We encourage you to install Qiskit via the PIP tool (a python package manager):
47
48 ```bash
49 pip install qiskit
50 ```
51
52 PIP will handle all dependencies automatically, and you will always install the latest (and well-tested) version.
53
54 PIP package comes with prebuilt binaries for these platforms:
55
56 * Linux x86_64
57 * Darwin
58 * Win64
59
60 If your platform is not in the list, PIP will try to build from the sources at installation time. This requires CMake 3.5 or higher to be pre-installed, along with at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html).
61
62 If PIP doesn't succeed in building during the installation, don't worry: you will still have Qiskit installed at the end, but you probably won't be able to take advantage of some of the high-performance components. In any case, we always provide a pure-python (not-so-fast) alternative as a fallback.
63
64 #### Setup your environment
65
66 We recommend using python virtual environments to improve your experience. Refer to our
67 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information.
68
69 ## Creating your first Quantum Program
70
71 Now that the SDK is installed, it's time to begin working with Qiskit.
72
73 We are ready to try out a quantum circuit example, which runs via the local simulator.
74
75 This is a simple example that makes an entangled state.
76
77 ```python
78 # Import the Qiskit SDK
79 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
80 from qiskit import available_backends, execute
81
82 # Create a Quantum Register with 2 qubits.
83 q = QuantumRegister(2)
84 # Create a Classical Register with 2 bits.
85 c = ClassicalRegister(2)
86 # Create a Quantum Circuit
87 qc = QuantumCircuit(q, c)
88
89 # Add a H gate on qubit 0, putting this qubit in superposition.
90 qc.h(q[0])
91 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
92 # the qubits in a Bell state.
93 qc.cx(q[0], q[1])
94 # Add a Measure gate to see the state.
95 qc.measure(q, c)
96
97 # See a list of available local simulators
98 print("Local backends: ", available_backends({'local': True}))
99
100 # Compile and run the Quantum circuit on a simulator backend
101 job_sim = execute(qc, "local_qasm_simulator")
102 sim_result = job_sim.result()
103
104 # Show the results
105 print("simulation: ", sim_result)
106 print(sim_result.get_counts(qc))
107 ```
108
109 In this case, the output will be:
110
111 ```python
112 COMPLETED
113 {'counts': {'00': 512, '11': 512}}
114 ```
115
116 This script is available [here](examples/python/hello_quantum.py), where we also show how to
117 run the same program on a real quantum computer.
118
119 ### Executing your code on a real Quantum chip
120
121 You can also use Qiskit to execute your code on a
122 [real quantum chip](https://github.com/Qiskit/ibmqx-backend-information).
123 In order to do so, you need to configure the SDK for using the credentials in
124 your IBM Q Experience account:
125
126 #### Configure your API token and QX credentials
127
128 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
129
130 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst).
131
132 3. We are now going to add the necessary credentials to QISKit. Take your token
133 from step 2, here called `MY_API_TOKEN`, and pass it to the
134 `store_credentials` function:
135
136 ```python
137 from qiskit import store_credentials
138
139 store_credentials('MY_API_TOKEN')
140 ```
141
142 4. If you have access to the IBM Q Network features, you also need to pass the
143 url listed on your IBM Q account page to `store_credentials`.
144
145 After calling `store_credentials()`, your credentials will be stored on disk.
146 Once they are stored, Qiskit will automatically load and use them in your program
147 via:
148
149 ```python
150 from qiskit import register
151
152 register()
153 ```
154
155 For more details on installing Qiskit and for alternative methods for passing
156 the IBM QX credentials, such as using environment variables, sending them
157 explicitly and support for the `Qconfig.py` method available in previous
158 versions, please check
159 [our Qiskit documentation](https://www.qiskit.org/documentation/).
160
161 ### Next Steps
162
163 Now you're set up and ready to check out some of the other examples from our
164 [Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository. Start with the
165 [index tutorial](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and then go to
166 the [‘Getting Started’ example](https://github.com/Qiskit/qiskit-tutorial/blob/master/reference/tools/getting_started.ipynb).
167 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html),
168 you can copy and modify the notebooks to create your own experiments.
169
170 To install the tutorials as part of the Qiskit SDK, see the following
171 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK
172 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in
173 [the official Qiskit site](https://www.qiskit.org/documentation).
174
175 ## More Information
176
177 For more information on how to use Qiskit, tutorial examples, and other helpful links, take a look
178 at these resources:
179
180 * **[User Guides](https://github.com/Qiskit/ibmqx-user-guides)**,
181 a good starting place for learning about quantum information and computing
182 * **[Tutorials](https://github.com/Qiskit/qiskit-tutorial)**,
183 for example notebooks, start with the [index](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/Qiskit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb)
184 * **[OpenQASM](https://github.com/Qiskit/openqasm)**,
185 for additional information and examples of QASM code
186 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**,
187 a GUI for interacting with real and simulated quantum computers
188 * **[QISkit Python API](https://github.com/Qiskit/qiskit-api-py)**, an API to use the IBM Quantum
189 Experience in Python
190
191 Qiskit was originally developed by researchers and developers on the
192 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/),
193 with the aim of offering a high level development kit to work with quantum computers.
194
195 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for
196 questions and discussions on Qiskit and quantum computing more broadly. If you'd like to
197 contribute to Qiskit, please take a look at our [contribution guidelines](.github/CONTRIBUTING.rst).
198
199 ## Multilanguage guide
200
201 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean.
202 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese.
203
204 ## Authors (alphabetical)
205
206 Qiskit was originally authored by
207 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez,
208 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood.
209
210 And continues to grow with the help and work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
211 to the project at different levels.
212
[end of README.md]
[start of examples/python/teleport.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Quantum teleportation example based on an OpenQASM example.
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
16 from qiskit import execute
17
18 ###############################################################
19 # Set the backend name and coupling map.
20 ###############################################################
21 backend = "local_qasm_simulator"
22 coupling_map = [[0, 1], [0, 2], [1, 2], [3, 2], [3, 4], [4, 2]]
23
24 ###############################################################
25 # Make a quantum program for quantum teleportation.
26 ###############################################################
27 q = QuantumRegister(3, "q")
28 c0 = ClassicalRegister(1, "c0")
29 c1 = ClassicalRegister(1, "c1")
30 c2 = ClassicalRegister(1, "c2")
31 qc = QuantumCircuit(q, c0, c1, c2, name="teleport")
32
33 # Prepare an initial state
34 qc.u3(0.3, 0.2, 0.1, q[0])
35
36 # Prepare a Bell pair
37 qc.h(q[1])
38 qc.cx(q[1], q[2])
39
40 # Barrier following state preparation
41 qc.barrier(q)
42
43 # Measure in the Bell basis
44 qc.cx(q[0], q[1])
45 qc.h(q[0])
46 qc.measure(q[0], c0[0])
47 qc.measure(q[1], c1[0])
48
49 # Apply a correction
50 qc.z(q[2]).c_if(c0, 1)
51 qc.x(q[2]).c_if(c1, 1)
52 qc.measure(q[2], c2[0])
53
54 ###############################################################
55 # Execute.
56 # Experiment does not support feedback, so we use the simulator
57 ###############################################################
58
59 # First version: not mapped
60 result = execute(qc, backend=backend, coupling_map=None, shots=1024).result()
61 print(result)
62 print(result.get_counts("teleport"))
63
64 # Second version: mapped to qx2 coupling graph
65 result = execute(qc, backend=backend, coupling_map=coupling_map, shots=1024).result()
66 print(result)
67 print(result.get_counts("teleport"))
68
69 # Both versions should give the same distribution
70
[end of examples/python/teleport.py]
[start of qiskit/backends/local/qasm_simulator_py.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 # pylint: disable=invalid-name
9
10 """Contains a (slow) python simulator.
11
12 It simulates a qasm quantum circuit that has been compiled to run on the
13 simulator. It is exponential in the number of qubits.
14
15 We advise using the c++ simulator or online simulator for larger size systems.
16
17 The input is a qobj dictionary
18
19 and the output is a Results object
20
21 results['data']["counts"] where this is dict {"0000" : 454}
22
23 The simulator is run using
24
25 .. code-block:: python
26
27 QasmSimulatorPy(compiled_circuit,shots,seed).run().
28
29 .. code-block:: guess
30
31 compiled_circuit =
32 {
33 "header": {
34 "number_of_qubits": 2, // int
35 "number_of_clbits": 2, // int
36 "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]]
37 "clbit_labels": [["c", 2]], // list[list[string, int]]
38 }
39 "operations": // list[map]
40 [
41 {
42 "name": , // required -- string
43 "params": , // optional -- list[double]
44 "qubits": , // required -- list[int]
45 "clbits": , // optional -- list[int]
46 "conditional": // optional -- map
47 {
48 "type": , // string
49 "mask": , // hex string
50 "val": , // hex string
51 }
52 },
53 ]
54 }
55
56 .. code-block:: python
57
58 result =
59 {
60 'data': {
61 'statevector': array([ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]),
62 'classical_state': 0
63 'counts': {'0000': 1}
64 'snapshots': { '0': {'statevector': array([1.+0.j, 0.+0.j,
65 0.+0.j, 0.+0.j])}}
66 }
67 }
68 'time_taken': 0.002
69 'status': 'DONE'
70 }
71
72 """
73 import random
74 import uuid
75 import time
76 import logging
77 from collections import Counter
78
79 import numpy as np
80
81 from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict
82 from qiskit.backends import BaseBackend
83 from qiskit.backends.local.localjob import LocalJob
84 from ._simulatorerror import SimulatorError
85 from ._simulatortools import single_gate_matrix
86 logger = logging.getLogger(__name__)
87
88
89 class QasmSimulatorPy(BaseBackend):
90 """Python implementation of a qasm simulator."""
91
92 DEFAULT_CONFIGURATION = {
93 'name': 'local_qasm_simulator_py',
94 'url': 'https://github.com/QISKit/qiskit-terra',
95 'simulator': True,
96 'local': True,
97 'description': 'A python simulator for qasm files',
98 'coupling_map': 'all-to-all',
99 'basis_gates': 'u1,u2,u3,cx,id,snapshot'
100 }
101
102 def __init__(self, configuration=None):
103 """
104 Args:
105 configuration (dict): backend configuration
106 """
107 super().__init__(configuration or self.DEFAULT_CONFIGURATION.copy())
108
109 self._local_random = random.Random()
110
111 # Define attributes in __init__.
112 self._classical_state = 0
113 self._statevector = 0
114 self._snapshots = {}
115 self._number_of_cbits = 0
116 self._number_of_qubits = 0
117 self._shots = 0
118 self._qobj_config = None
119
120 @staticmethod
121 def _index1(b, i, k):
122 """Magic index1 function.
123
124 Takes a bitstring k and inserts bit b as the ith bit,
125 shifting bits >= i over to make room.
126 """
127 retval = k
128 lowbits = k & ((1 << i) - 1) # get the low i bits
129
130 retval >>= i
131 retval <<= 1
132
133 retval |= b
134
135 retval <<= i
136 retval |= lowbits
137
138 return retval
139
140 @staticmethod
141 def _index2(b1, i1, b2, i2, k):
142 """Magic index2 function.
143
144 Takes a bitstring k and inserts bits b1 as the i1th bit
145 and b2 as the i2th bit
146 """
147 assert i1 != i2
148
149 if i1 > i2:
150 # insert as (i1-1)th bit, will be shifted left 1 by next line
151 retval = QasmSimulatorPy._index1(b1, i1-1, k)
152 retval = QasmSimulatorPy._index1(b2, i2, retval)
153 else: # i2>i1
154 # insert as (i2-1)th bit, will be shifted left 1 by next line
155 retval = QasmSimulatorPy._index1(b2, i2-1, k)
156 retval = QasmSimulatorPy._index1(b1, i1, retval)
157 return retval
158
159 def _add_qasm_single(self, gate, qubit):
160 """Apply an arbitrary 1-qubit operator to a qubit.
161
162 Gate is the single qubit applied.
163 qubit is the qubit the gate is applied to.
164 """
165 psi = self._statevector
166 bit = 1 << qubit
167 for k1 in range(0, 1 << self._number_of_qubits, 1 << (qubit+1)):
168 for k2 in range(0, 1 << qubit, 1):
169 k = k1 | k2
170 cache0 = psi[k]
171 cache1 = psi[k | bit]
172 psi[k] = gate[0, 0] * cache0 + gate[0, 1] * cache1
173 psi[k | bit] = gate[1, 0] * cache0 + gate[1, 1] * cache1
174
175 def _add_qasm_cx(self, q0, q1):
176 """Optimized ideal CX on two qubits.
177
178 q0 is the first qubit (control) counts from 0.
179 q1 is the second qubit (target).
180 """
181 psi = self._statevector
182 for k in range(0, 1 << (self._number_of_qubits - 2)):
183 # first bit is control, second is target
184 ind1 = self._index2(1, q0, 0, q1, k)
185 # swap target if control is 1
186 ind3 = self._index2(1, q0, 1, q1, k)
187 cache0 = psi[ind1]
188 cache1 = psi[ind3]
189 psi[ind3] = cache0
190 psi[ind1] = cache1
191
192 def _add_qasm_decision(self, qubit):
193 """Apply the decision of measurement/reset qubit gate.
194
195 qubit is the qubit that is measured/reset
196 """
197 probability_zero = 0
198 random_number = self._local_random.random()
199 for ii in range(1 << self._number_of_qubits):
200 if ii & (1 << qubit) == 0:
201 probability_zero += np.abs(self._statevector[ii])**2
202 if random_number <= probability_zero:
203 outcome = '0'
204 norm = np.sqrt(probability_zero)
205 else:
206 outcome = '1'
207 norm = np.sqrt(1-probability_zero)
208 return (outcome, norm)
209
210 def _add_qasm_measure(self, qubit, cbit):
211 """Apply the measurement qubit gate.
212
213 qubit is the qubit measured.
214 cbit is the classical bit the measurement is assigned to.
215 """
216 outcome, norm = self._add_qasm_decision(qubit)
217 for ii in range(1 << self._number_of_qubits):
218 # update quantum state
219 if (ii >> qubit) & 1 == int(outcome):
220 self._statevector[ii] = self._statevector[ii]/norm
221 else:
222 self._statevector[ii] = 0
223 # update classical state
224 bit = 1 << cbit
225 self._classical_state = (self._classical_state & (~bit)) | (int(outcome) << cbit)
226
227 def _add_qasm_reset(self, qubit):
228 """Apply the reset to the qubit.
229
230 This is done by performing a measurement: if the outcome is 0 do nothing,
231 and if it is 1 flip the qubit.
232
233 qubit is the qubit that is reset.
234 """
235 # TODO: slow, refactor later
236 outcome, norm = self._add_qasm_decision(qubit)
237 temp = np.copy(self._statevector)
238 self._statevector.fill(0.0)
239 # measurement
240 for ii in range(1 << self._number_of_qubits):
241 if (ii >> qubit) & 1 == int(outcome):
242 temp[ii] = temp[ii]/norm
243 else:
244 temp[ii] = 0
245 # reset
246 if outcome == '1':
247 for ii in range(1 << self._number_of_qubits):
248 iip = (~ (1 << qubit)) & ii # bit number qubit set to zero
249 self._statevector[iip] += temp[ii]
250 else:
251 self._statevector = temp
252
253 def _add_qasm_snapshot(self, slot):
254 """Snapshot instruction to record simulator's internal representation
255 of quantum statevector.
256
257 slot is an integer indicating a snapshot slot number.
258 """
259 self._snapshots.setdefault(str(int(slot)),
260 {}).setdefault("statevector",
261 []).append(np.copy(self._statevector))
262
263 def run(self, qobj):
264 """Run qobj asynchronously.
265
266 Args:
267 qobj (dict): job description
268
269 Returns:
270 LocalJob: derived from BaseJob
271 """
272 local_job = LocalJob(self._run_job, qobj)
273 local_job.submit()
274 return local_job
275
276 def _run_job(self, qobj):
277 """Run circuits in qobj"""
278 self._validate(qobj)
279 result_list = []
280 self._shots = qobj.config.shots
281 self._qobj_config = qobj.config
282 start = time.time()
283
284 for circuit in qobj.experiments:
285 result_list.append(self.run_circuit(circuit))
286 end = time.time()
287 job_id = str(uuid.uuid4())
288 result = {'backend': self._configuration['name'],
289 'id': qobj.qobj_id,
290 'job_id': job_id,
291 'result': result_list,
292 'status': 'COMPLETED',
293 'success': True,
294 'time_taken': (end - start)}
295
296 copy_qasm_from_qobj_into_result(qobj, result)
297
298 return result_from_old_style_dict(
299 result, [circuit.header.name for circuit in qobj.experiments])
300
301 def run_circuit(self, circuit):
302 """Run a circuit and return a single Result.
303
304 Args:
305 circuit (QobjExperiment): experiment from qobj experiments list
306
307 Returns:
308 dict: A dictionary of results which looks something like::
309
310 {
311 "data":
312 { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND ####
313 "counts": {'00000': XXXX, '00001': XXXXX},
314 "time" : xx.xxxxxxxx
315 },
316 "status": --status (string)--
317 }
318 Raises:
319 SimulatorError: if an error occurred.
320 """
321 self._number_of_qubits = circuit.header.number_of_qubits
322 self._number_of_cbits = circuit.header.number_of_clbits
323 self._statevector = 0
324 self._classical_state = 0
325 self._snapshots = {}
326 cl_reg_index = [] # starting bit index of classical register
327 cl_reg_nbits = [] # number of bits in classical register
328 cbit_index = 0
329 for cl_reg in circuit.header.clbit_labels:
330 cl_reg_nbits.append(cl_reg[1])
331 cl_reg_index.append(cbit_index)
332 cbit_index += cl_reg[1]
333
334 # Get the seed looking in circuit, qobj, and then random.
335 seed = getattr(circuit.config, 'seed',
336 getattr(self._qobj_config, 'seed',
337 random.getrandbits(32)))
338 self._local_random.seed(seed)
339 outcomes = []
340
341 start = time.time()
342 for _ in range(self._shots):
343 self._statevector = np.zeros(1 << self._number_of_qubits,
344 dtype=complex)
345 self._statevector[0] = 1
346 self._classical_state = 0
347 for operation in circuit.instructions:
348 if getattr(operation, 'conditional', None):
349 mask = int(operation.conditional.mask, 16)
350 if mask > 0:
351 value = self._classical_state & mask
352 while (mask & 0x1) == 0:
353 mask >>= 1
354 value >>= 1
355 if value != int(operation.conditional.val, 16):
356 continue
357 # Check if single gate
358 if operation.name in ('U', 'u1', 'u2', 'u3'):
359 params = getattr(operation, 'params', None)
360 qubit = operation.qubits[0]
361 gate = single_gate_matrix(operation.name, params)
362 self._add_qasm_single(gate, qubit)
363 # Check if CX gate
364 elif operation.name in ('id', 'u0'):
365 pass
366 elif operation.name in ('CX', 'cx'):
367 qubit0 = operation.qubits[0]
368 qubit1 = operation.qubits[1]
369 self._add_qasm_cx(qubit0, qubit1)
370 # Check if measure
371 elif operation.name == 'measure':
372 qubit = operation.qubits[0]
373 cbit = operation.clbits[0]
374 self._add_qasm_measure(qubit, cbit)
375 # Check if reset
376 elif operation.name == 'reset':
377 qubit = operation.qubits[0]
378 self._add_qasm_reset(qubit)
379 # Check if barrier
380 elif operation.name == 'barrier':
381 pass
382 # Check if snapshot command
383 elif operation.name == 'snapshot':
384 params = operation.params
385 self._add_qasm_snapshot(params[0])
386 else:
387 backend = self._configuration['name']
388 err_msg = '{0} encountered unrecognized operation "{1}"'
389 raise SimulatorError(err_msg.format(backend,
390 operation.name))
391 # Turn classical_state (int) into bit string
392 outcomes.append(bin(self._classical_state)[2:].zfill(
393 self._number_of_cbits))
394 # Return the results
395 counts = dict(Counter(outcomes))
396 data = {
397 'counts': self._format_result(counts, cl_reg_index, cl_reg_nbits),
398 'snapshots': self._snapshots
399 }
400 end = time.time()
401 return {'name': circuit.header.name,
402 'seed': seed,
403 'shots': self._shots,
404 'data': data,
405 'status': 'DONE',
406 'success': True,
407 'time_taken': (end-start)}
408
409 def _validate(self, qobj):
410 for experiment in qobj.experiments:
411 if 'measure' not in [op.name for
412 op in experiment.instructions]:
413 logger.warning("no measurements in circuit '%s', "
414 "classical register will remain all zeros.",
415 experiment.header.name)
416
417 def _format_result(self, counts, cl_reg_index, cl_reg_nbits):
418 """Format the result bit string.
419
420 This formats the result bit strings such that spaces are inserted
421 at register divisions.
422
423 Args:
424 counts (dict): dictionary of counts e.g. {'1111': 1000, '0000':5}
425 cl_reg_index (list): starting bit index of classical register
426 cl_reg_nbits (list): total amount of bits in classical register
427 Returns:
428 dict: spaces inserted into dictionary keys at register boundaries.
429 """
430 fcounts = {}
431 for key, value in counts.items():
432 if cl_reg_nbits:
433 new_key = [key[-cl_reg_nbits[0]:]]
434 for index, nbits in zip(cl_reg_index[1:],
435 cl_reg_nbits[1:]):
436 new_key.insert(0, key[-(index+nbits):-index])
437 fcounts[' '.join(new_key)] = value
438 return fcounts
439
[end of qiskit/backends/local/qasm_simulator_py.py]
[start of qiskit/extensions/quantum_initializer/_initializer.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Initialize qubit registers to desired arbitrary state.
10 """
11
12 import math
13 import numpy as np
14 import scipy
15
16 from qiskit import CompositeGate
17 from qiskit import Gate
18 from qiskit import QISKitError
19 from qiskit import QuantumCircuit
20 from qiskit.extensions.standard.cx import CnotGate
21 from qiskit.extensions.standard.ry import RYGate
22 from qiskit.extensions.standard.rz import RZGate
23
24 _EPS = 1e-10 # global variable used to chop very small numbers to zero
25
26
27 class InitializeGate(CompositeGate):
28 """Complex amplitude initialization.
29
30 Class that implements the (complex amplitude) initialization of some
31 flexible collection of qubit registers (assuming the qubits are in the
32 zero state).
33
34 Implements a recursive initialization algorithm including optimizations
35 from "Synthesis of Quantum Logic Circuits" Shende, Bullock, Markov
36 https://arxiv.org/abs/quant-ph/0406176v5
37
38 Additionally implements some extra optimizations: remove zero rotations and
39 double cnots.
40
41 It inherits from CompositeGate in the same way that the Fredkin (cswap)
42 gate does. Therefore self.data is the list of gates (in order) that must
43 be applied to implement this meta-gate.
44
45 param = list of complex amplitudes
46 arg = list of qubits
47 circ = QuantumCircuit or CompositeGate containing this gate
48 """
49 def __init__(self, param, arg, circ=None):
50 """Create new initialize composite gate."""
51 num_qubits = math.log2(len(param))
52
53 # Check if param is a power of 2
54 if num_qubits == 0 or not num_qubits.is_integer():
55 raise QISKitError("Desired vector not a positive power of 2.")
56
57 self.num_qubits = int(num_qubits)
58
59 # Check if number of desired qubits agrees with available qubits
60 if len(arg) != self.num_qubits:
61 raise QISKitError("Number of complex amplitudes do not correspond "
62 "to the number of qubits.")
63
64 # Check if probabilities (amplitudes squared) sum to 1
65 if not math.isclose(sum(np.absolute(param) ** 2), 1.0,
66 abs_tol=_EPS):
67 raise QISKitError("Sum of amplitudes-squared does not equal one.")
68
69 super().__init__("init", param, arg, circ)
70
71 # call to generate the circuit that takes the desired vector to zero
72 self.gates_to_uncompute()
73 # remove zero rotations and double cnots
74 self.optimize_gates()
75 # invert the circuit to create the desired vector from zero (assuming
76 # the qubits are in the zero state)
77 self.inverse()
78 # do not set the inverse flag, as this is the actual initialize gate
79 # we just used inverse() as a method to obtain it
80 self.inverse_flag = False
81
82 def nth_qubit_from_least_sig_qubit(self, nth):
83 """
84 Return the qubit that is nth away from the least significant qubit
85 (LSB), so n=0 corresponds to the LSB.
86 """
87 # if LSB is first (as is the case with the IBM QE) and significance is
88 # in order:
89 return self.arg[nth]
90 # if MSB is first: return self.arg[self.num_qubits - 1 - n]
91 # equivalent to self.arg[-(n+1)]
92 # to generalize any mapping could be placed here or even taken from
93 # the user
94
95 def reapply(self, circ):
96 """Reapply this gate to the corresponding qubits in circ."""
97 self._modifiers(circ.initialize(self.param, self.arg))
98
99 def gates_to_uncompute(self):
100 """
101 Call to populate the self.data list with gates that takes the
102 desired vector to zero.
103 """
104 # kick start the peeling loop
105 remaining_param = self.param
106
107 for i in range(self.num_qubits):
108 # work out which rotations must be done to disentangle the LSB
109 # qubit (we peel away one qubit at a time)
110 (remaining_param,
111 thetas,
112 phis) = InitializeGate._rotations_to_disentangle(remaining_param)
113
114 # perform the required rotations to decouple the LSB qubit (so that
115 # it can be "factored" out, leaving a
116 # shorter amplitude vector to peel away)
117 self._attach(self._multiplex(RZGate, i, phis))
118 self._attach(self._multiplex(RYGate, i, thetas))
119
120 @staticmethod
121 def _rotations_to_disentangle(local_param):
122 """
123 Static internal method to work out Ry and Rz rotation angles used
124 to disentangle the LSB qubit.
125 These rotations make up the block diagonal matrix U (i.e. multiplexor)
126 that disentangles the LSB.
127
128 [[Ry(theta_1).Rz(phi_1) 0 . . 0],
129 [0 Ry(theta_2).Rz(phi_2) . 0],
130 .
131 .
132 0 0 Ry(theta_2^n).Rz(phi_2^n)]]
133 """
134 remaining_vector = []
135 thetas = []
136 phis = []
137
138 param_len = len(local_param)
139
140 for i in range(param_len // 2):
141 # Ry and Rz rotations to move bloch vector from 0 to "imaginary"
142 # qubit
143 # (imagine a qubit state signified by the amplitudes at index 2*i
144 # and 2*(i+1), corresponding to the select qubits of the
145 # multiplexor being in state |i>)
146 (remains,
147 add_theta,
148 add_phi) = InitializeGate._bloch_angles(
149 local_param[2*i: 2*(i + 1)])
150
151 remaining_vector.append(remains)
152
153 # rotations for all imaginary qubits of the full vector
154 # to move from where it is to zero, hence the negative sign
155 thetas.append(-add_theta)
156 phis.append(-add_phi)
157
158 return remaining_vector, thetas, phis
159
160 @staticmethod
161 def _bloch_angles(pair_of_complex):
162 """
163 Static internal method to work out rotation to create the passed in
164 qubit from the zero vector.
165 """
166 [a_complex, b_complex] = pair_of_complex
167 # Force a and b to be complex, as otherwise numpy.angle might fail.
168 a_complex = complex(a_complex)
169 b_complex = complex(b_complex)
170 mag_a = np.absolute(a_complex)
171 final_r = float(np.sqrt(mag_a ** 2 + np.absolute(b_complex) ** 2))
172 if final_r < _EPS:
173 theta = 0
174 phi = 0
175 final_r = 0
176 final_t = 0
177 else:
178 theta = float(2 * np.arccos(mag_a / final_r))
179 a_arg = np.angle(a_complex)
180 b_arg = np.angle(b_complex)
181 final_t = a_arg + b_arg
182 phi = b_arg - a_arg
183
184 return final_r * np.exp(1.J * final_t/2), theta, phi
185
186 def _multiplex(self, bottom_gate, bottom_qubit_index, list_of_angles):
187 """
188 Internal recursive method to create gates to perform rotations on the
189 imaginary qubits: works by rotating LSB (and hence ALL imaginary
190 qubits) by combo angle and then flipping sign (by flipping the bit,
191 hence moving the complex amplitudes) of half the imaginary qubits
192 (CNOT) followed by another combo angle on LSB, therefore executing
193 conditional (on MSB) rotations, thereby disentangling LSB.
194 """
195 list_len = len(list_of_angles)
196 target_qubit = self.nth_qubit_from_least_sig_qubit(bottom_qubit_index)
197
198 # Case of no multiplexing = base case for recursion
199 if list_len == 1:
200 return bottom_gate(list_of_angles[0], target_qubit)
201
202 local_num_qubits = int(math.log2(list_len)) + 1
203 control_qubit = self.nth_qubit_from_least_sig_qubit(
204 local_num_qubits - 1 + bottom_qubit_index)
205
206 # calc angle weights, assuming recursion (that is the lower-level
207 # requested angles have been correctly implemented by recursion
208 angle_weight = scipy.kron([[0.5, 0.5], [0.5, -0.5]],
209 np.identity(2 ** (local_num_qubits - 2)))
210
211 # calc the combo angles
212 list_of_angles = angle_weight.dot(np.array(list_of_angles)).tolist()
213 combine_composite_gates = CompositeGate(
214 "multiplex" + local_num_qubits.__str__(), [], self.arg)
215
216 # recursive step on half the angles fulfilling the above assumption
217 combine_composite_gates._attach(
218 self._multiplex(bottom_gate, bottom_qubit_index,
219 list_of_angles[0:(list_len // 2)]))
220
221 # combine_composite_gates.cx(control_qubit,target_qubit) -> does not
222 # work as expected because checks circuit
223 # so attach CNOT as follows, thereby flipping the LSB qubit
224 combine_composite_gates._attach(CnotGate(control_qubit, target_qubit))
225
226 # implement extra efficiency from the paper of cancelling adjacent
227 # CNOTs (by leaving out last CNOT and reversing (NOT inverting) the
228 # second lower-level multiplex)
229 sub_gate = self._multiplex(
230 bottom_gate, bottom_qubit_index, list_of_angles[(list_len // 2):])
231 if isinstance(sub_gate, CompositeGate):
232 combine_composite_gates._attach(sub_gate.reverse())
233 else:
234 combine_composite_gates._attach(sub_gate)
235
236 # outer multiplex keeps final CNOT, because no adjacent CNOT to cancel
237 # with
238 if self.num_qubits == local_num_qubits + bottom_qubit_index:
239 combine_composite_gates._attach(CnotGate(control_qubit,
240 target_qubit))
241
242 return combine_composite_gates
243
244 @staticmethod
245 def chop_num(numb):
246 """
247 Set very small numbers (as defined by global variable _EPS) to zero.
248 """
249 return 0 if abs(numb) < _EPS else numb
250
251
252 # ###############################################################
253 # Add needed functionality to other classes (it feels
254 # weird following the QISKit convention of adding functionality to other
255 # classes like this ;),
256 # TODO: multiple inheritance might be better?)
257
258
259 def reverse(self):
260 """
261 Reverse (recursively) the sub-gates of this CompositeGate. Note this does
262 not invert the gates!
263 """
264 new_data = []
265 for gate in reversed(self.data):
266 if isinstance(gate, CompositeGate):
267 new_data.append(gate.reverse())
268 else:
269 new_data.append(gate)
270 self.data = new_data
271
272 # not just a high-level reverse:
273 # self.data = [gate for gate in reversed(self.data)]
274
275 return self
276
277
278 QuantumCircuit.reverse = reverse
279 CompositeGate.reverse = reverse
280
281
282 def optimize_gates(self):
283 """Remove Zero rotations and Double CNOTS."""
284 self.remove_zero_rotations()
285 while self.remove_double_cnots_once():
286 pass
287
288
289 QuantumCircuit.optimize_gates = optimize_gates
290 CompositeGate.optimize_gates = optimize_gates
291
292
293 def remove_zero_rotations(self):
294 """
295 Remove Zero Rotations by looking (recursively) at rotation gates at the
296 leaf ends.
297 """
298 # Removed at least one zero rotation.
299 zero_rotation_removed = False
300 new_data = []
301 for gate in self.data:
302 if isinstance(gate, CompositeGate):
303 zero_rotation_removed |= gate.remove_zero_rotations()
304 if gate.data:
305 new_data.append(gate)
306 else:
307 if ((not isinstance(gate, Gate)) or
308 (not (gate.name == "rz" or gate.name == "ry" or
309 gate.name == "rx") or
310 (InitializeGate.chop_num(gate.param[0]) != 0))):
311 new_data.append(gate)
312 else:
313 zero_rotation_removed = True
314
315 self.data = new_data
316
317 return zero_rotation_removed
318
319
320 QuantumCircuit.remove_zero_rotations = remove_zero_rotations
321 CompositeGate.remove_zero_rotations = remove_zero_rotations
322
323
324 def number_atomic_gates(self):
325 """Count the number of leaf gates. """
326 num = 0
327 for gate in self.data:
328 if isinstance(gate, CompositeGate):
329 num += gate.number_atomic_gates()
330 else:
331 if isinstance(gate, Gate):
332 num += 1
333 return num
334
335
336 QuantumCircuit.number_atomic_gates = number_atomic_gates
337 CompositeGate.number_atomic_gates = number_atomic_gates
338
339
340 def remove_double_cnots_once(self):
341 """
342 Remove Double CNOTS paying attention that gates may be neighbours across
343 Composite Gate boundaries.
344 """
345 num_high_level_gates = len(self.data)
346
347 if num_high_level_gates == 0:
348 return False
349 else:
350 if num_high_level_gates == 1 and isinstance(self.data[0],
351 CompositeGate):
352 return self.data[0].remove_double_cnots_once()
353
354 # Removed at least one double cnot.
355 double_cnot_removed = False
356
357 # last gate might be composite
358 if isinstance(self.data[num_high_level_gates - 1], CompositeGate):
359 double_cnot_removed = \
360 double_cnot_removed or\
361 self.data[num_high_level_gates - 1].remove_double_cnots_once()
362
363 # don't start with last gate, using reversed so that can del on the go
364 for i in reversed(range(num_high_level_gates - 1)):
365 if isinstance(self.data[i], CompositeGate):
366 double_cnot_removed =\
367 double_cnot_removed \
368 or self.data[i].remove_double_cnots_once()
369 left_gate_host = self.data[i].last_atomic_gate_host()
370 left_gate_index = -1
371 # TODO: consider adding if semantics needed:
372 # to remove empty composite gates
373 # if left_gate_host == None: del self.data[i]
374 else:
375 left_gate_host = self.data
376 left_gate_index = i
377
378 if ((left_gate_host is not None) and
379 left_gate_host[left_gate_index].name == "cx"):
380 if isinstance(self.data[i + 1], CompositeGate):
381 right_gate_host = self.data[i + 1].first_atomic_gate_host()
382 right_gate_index = 0
383 else:
384 right_gate_host = self.data
385 right_gate_index = i + 1
386
387 if (right_gate_host is not None) \
388 and right_gate_host[right_gate_index].name == "cx" \
389 and (left_gate_host[left_gate_index].arg ==
390 right_gate_host[right_gate_index].arg):
391 del right_gate_host[right_gate_index]
392 del left_gate_host[left_gate_index]
393 double_cnot_removed = True
394
395 return double_cnot_removed
396
397
398 QuantumCircuit.remove_double_cnots_once = remove_double_cnots_once
399 CompositeGate.remove_double_cnots_once = remove_double_cnots_once
400
401
402 def first_atomic_gate_host(self):
403 """Return the host list of the leaf gate on the left edge."""
404 if self.data:
405 if isinstance(self.data[0], CompositeGate):
406 return self.data[0].first_atomic_gate_host()
407 return self.data
408
409 return None
410
411
412 QuantumCircuit.first_atomic_gate_host = first_atomic_gate_host
413 CompositeGate.first_atomic_gate_host = first_atomic_gate_host
414
415
416 def last_atomic_gate_host(self):
417 """Return the host list of the leaf gate on the right edge."""
418 if self.data:
419 if isinstance(self.data[-1], CompositeGate):
420 return self.data[-1].last_atomic_gate_host()
421 return self.data
422
423 return None
424
425
426 QuantumCircuit.last_atomic_gate_host = last_atomic_gate_host
427 CompositeGate.last_atomic_gate_host = last_atomic_gate_host
428
429
430 def initialize(self, params, qubits):
431 """Apply initialize to circuit."""
432 self._check_dups(qubits)
433 for i in qubits:
434 self._check_qubit(i)
435 # TODO: make initialize an Instruction, and insert reset
436 # TODO: avoid explicit reset if compiler determines a |0> state
437
438 return self._attach(InitializeGate(params, qubits, self))
439
440
441 QuantumCircuit.initialize = initialize
442 CompositeGate.initialize = initialize
443
[end of qiskit/extensions/quantum_initializer/_initializer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
eafeeabbca86cc25ae8675c85fa54748a100a931
|
Reverse bit order in latex_circuit_drawer with conditional
### What is the current behavior?
I found a bug in `latex_circuit_drawer` when a conditional gate is drawn with the `reversebits` option.
The register index is not handled correctly in `latex_circuit_drawer` when the bit order is reversed.
### Steps to reproduce the problem
```
q1 = QuantumRegister(3, 'q1')
c1 = ClassicalRegister(3, 'c1')
qc = QuantumCircuit(q1, c1)
qc.x(q1[0]).c_if(c1, 2)
circuit_drawer(qc, style={"reversebits": True})
```
The results:
```
...
~/testbench/qiskit/tools/visualization/_circuit_visualization.py in _get_image_depth(self, aliases)
611 for i in range(pos_1, pos_2 + self.cregs[if_reg]):
612 if is_occupied[i] is False:
--> 613 is_occupied[i] = True
614 else:
615 columns += 1
IndexError: list index out of range
```
|
I've done a lot of digging on this (it took much more than I was expecting). It looks like the fundamental issue is that the mask the json unroller generates for conditionals doesn't know the bits are reversed, so the mask it writes assumes the regular order. When we then use that mask in the latex drawer (where this traceback comes from, and again around L900) to figure out which cbits to mark when writing out the latex, everything ends up mixed up and out of order.
As for fixing it, I'm thinking we'll either have to transform the bitmask generated by the json unroller to match the reversed bit order in the latex generator after it's called, or add an option to the json unroller to reverse the bitmasks when they're generated.
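To make the first of those options concrete: converting the mask amounts to applying a bit-position permutation before the drawer consumes it. The helper below is only an illustrative sketch — the function name and the exact permutation used for `reversebits` are assumptions, not Qiskit code.

```python
# Illustrative sketch only (not Qiskit code): remap a conditional's bitmask when the
# clbit ordering used by the drawer differs from the ordering the json unroller assumed.
def permute_mask(mask: int, permutation: list) -> int:
    """permutation[i] is the new bit position of original clbit i."""
    new_mask = 0
    for old_pos, new_pos in enumerate(permutation):
        if (mask >> old_pos) & 1:
            new_mask |= 1 << new_pos
    return new_mask

# e.g. three clbits drawn in fully reversed order map 0->2, 1->1, 2->0:
assert permute_mask(0b011, [2, 1, 0]) == 0b110
```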
|
2018-09-17T20:12:40Z
|
<patch>
diff --git a/qiskit/tools/visualization/_circuit_visualization.py b/qiskit/tools/visualization/_circuit_visualization.py
--- a/qiskit/tools/visualization/_circuit_visualization.py
+++ b/qiskit/tools/visualization/_circuit_visualization.py
@@ -34,8 +34,8 @@
from qiskit._qiskiterror import QISKitError
from qiskit._quantumcircuit import QuantumCircuit
from qiskit.wrapper import load_qasm_file
-
from qiskit.dagcircuit import DAGCircuit
+from qiskit.tools.visualization._error import VisualizationError
from qiskit.transpiler import transpile
logger = logging.getLogger(__name__)
@@ -440,7 +440,11 @@ def __init__(self, circuit, scale, style=None):
for item in self.header['clbit_labels']:
self.cregs[item[0]] = item[1]
self.clbit_list = []
- for cr in self.cregs:
+ cregs = self.cregs
+ if self._style.reverse:
+ self.orig_cregs = self.cregs
+ cregs = reversed(self.cregs)
+ for cr in cregs:
for i in range(self.cregs[cr]):
self.clbit_list.append((cr, i))
self.ordered_regs = [(item[0], item[1]) for
@@ -604,6 +608,8 @@ def _get_image_depth(self, aliases=None):
qarglist[0][1])]
if 'conditional' in op:
mask = int(op['conditional']['mask'], 16)
+ if self._style.reverse:
+ mask = self._convert_mask(mask)
cl_reg = self.clbit_list[self._ffs(mask)]
if_reg = cl_reg[0]
pos_2 = self.img_regs[cl_reg]
@@ -629,6 +635,8 @@ def _get_image_depth(self, aliases=None):
if 'conditional' in op:
mask = int(op['conditional']['mask'], 16)
+ if self._style.reverse:
+ mask = self._convert_mask(mask)
cl_reg = self.clbit_list[self._ffs(mask)]
if_reg = cl_reg[0]
pos_3 = self.img_regs[(if_reg, 0)]
@@ -685,6 +693,8 @@ def _get_image_depth(self, aliases=None):
if 'conditional' in op:
mask = int(op['conditional']['mask'], 16)
+ if self._style.reverse:
+ mask = self._convert_mask(mask)
cl_reg = self.clbit_list[self._ffs(mask)]
if_reg = cl_reg[0]
pos_4 = self.img_regs[(if_reg, 0)]
@@ -842,6 +852,25 @@ def total_2_register_index(self, index, registers):
count += size
raise ValueError('qubit index lies outside range of qubit registers')
+ def _convert_mask(self, mask):
+ orig_clbit_list = []
+ for cr in self.orig_cregs:
+ for i in range(self.orig_cregs[cr]):
+ orig_clbit_list.append((cr, i))
+ bit_list = [(mask >> bit) & 1 for bit in range(
+ len(orig_clbit_list) - 1, -1, -1)]
+ converted_mask_list = [None] * len(bit_list)
+ converted_mask = 0
+ for pos, bit in enumerate(reversed(bit_list)):
+ new_pos = self.clbit_list.index(orig_clbit_list[pos])
+ converted_mask_list[new_pos] = bit
+ if None in converted_mask_list:
+ raise VisualizationError('Reverse mask creation failed')
+ converted_mask_list = list(reversed(converted_mask_list))
+ for bit in converted_mask_list:
+ converted_mask = (converted_mask << 1) | bit
+ return converted_mask
+
def _build_latex_array(self, aliases=None):
"""Returns an array of strings containing \\LaTeX for this circuit.
@@ -866,6 +895,8 @@ def _build_latex_array(self, aliases=None):
for _, op in enumerate(self.circuit['instructions']):
if 'conditional' in op:
mask = int(op['conditional']['mask'], 16)
+ if self._style.reverse:
+ mask = self._convert_mask(mask)
cl_reg = self.clbit_list[self._ffs(mask)]
if_reg = cl_reg[0]
pos_2 = self.img_regs[cl_reg]
@@ -881,6 +912,8 @@ def _build_latex_array(self, aliases=None):
qarglist[0][1])]
if 'conditional' in op:
mask = int(op['conditional']['mask'], 16)
+ if self._style.reverse:
+ mask = self._convert_mask(mask)
cl_reg = self.clbit_list[self._ffs(mask)]
if_reg = cl_reg[0]
pos_2 = self.img_regs[cl_reg]
@@ -1252,7 +1285,10 @@ def _ffs(self, mask):
Returns:
int: index of the first set bit.
"""
- return (mask & (-mask)).bit_length() - 1
+ origin = (mask & (-mask)).bit_length()
+ if self._style.reverse:
+ return origin + 1
+ return origin - 1
def _get_register_specs(bit_labels):
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-17266
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CLN: remove have_pytz?
From [tslib.pyx](https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/tslib.pyx#L4083)
```
try:
import pytz
UTC = pytz.utc
have_pytz = True
except:
have_pytz = False
[...]
def tz_convert_single(int64_t val, object tz1, object tz2):
[...]
if not have_pytz:
import pytz
```
From much [earlier](https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/tslib.pyx#L63) in tslib.pyx
```
from pytz.tzinfo import BaseTzInfo as _pytz_BaseTzInfo
```
Is the try/except still necessary? If so, is `import pytz` the right thing to do in `tz_convert_single`?
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="https://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="https://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 https://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](https://pypi.python.org/pypi/pandas) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.7.0 or higher
164 - [python-dateutil](https://labix.org/python-dateutil): 1.5 or higher
165 - [pytz](https://pythonhosted.org/pytz)
166 - Needed for time zone support with ``pandas.date_range``
167
168 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
169 for recommended and optional dependencies.
170
171 ## Installation from sources
172 To install pandas from source you need Cython in addition to the normal
173 dependencies above. Cython can be installed from pypi:
174
175 ```sh
176 pip install cython
177 ```
178
179 In the `pandas` directory (same one where you found this file after
180 cloning the git repo), execute:
181
182 ```sh
183 python setup.py install
184 ```
185
186 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
187
188 ```sh
189 python setup.py develop
190 ```
191
192 Alternatively, you can use `pip` if you want all the dependencies pulled
193 in automatically (the `-e` option is for installing it in [development
194 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
195
196 ```sh
197 pip install -e .
198 ```
199
200 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
201
202 ## License
203 [BSD 3](LICENSE)
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
229
230 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
231
232 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
233
[end of README.md]
[start of doc/sphinxext/numpydoc/numpydoc.py]
1 """
2 ========
3 numpydoc
4 ========
5
6 Sphinx extension that handles docstrings in the Numpy standard format. [1]
7
8 It will:
9
10 - Convert Parameters etc. sections to field lists.
11 - Convert See Also section to a See also entry.
12 - Renumber references.
13 - Extract the signature from the docstring, if it can't be determined otherwise.
14
15 .. [1] https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
16
17 """
18 from __future__ import division, absolute_import, print_function
19
20 import os, sys, re, pydoc
21 import sphinx
22 import inspect
23 import collections
24
25 if sphinx.__version__ < '1.0.1':
26 raise RuntimeError("Sphinx 1.0.1 or newer is required")
27
28 from .docscrape_sphinx import get_doc_object, SphinxDocString
29 from sphinx.util.compat import Directive
30
31 if sys.version_info[0] >= 3:
32 sixu = lambda s: s
33 else:
34 sixu = lambda s: unicode(s, 'unicode_escape')
35
36
37 def mangle_docstrings(app, what, name, obj, options, lines,
38 reference_offset=[0]):
39
40 cfg = dict(use_plots=app.config.numpydoc_use_plots,
41 show_class_members=app.config.numpydoc_show_class_members,
42 class_members_toctree=app.config.numpydoc_class_members_toctree,
43 )
44
45 # PANDAS HACK (to remove the list of methods/attributes for Categorical)
46 if what == "class" and (name.endswith(".Categorical") or
47 name.endswith("CategoricalIndex") or
48 name.endswith("IntervalIndex")):
49 cfg['class_members_list'] = False
50
51 if what == 'module':
52 # Strip top title
53 title_re = re.compile(sixu('^\\s*[#*=]{4,}\\n[a-z0-9 -]+\\n[#*=]{4,}\\s*'),
54 re.I|re.S)
55 lines[:] = title_re.sub(sixu(''), sixu("\n").join(lines)).split(sixu("\n"))
56 else:
57 doc = get_doc_object(obj, what, sixu("\n").join(lines), config=cfg)
58 if sys.version_info[0] >= 3:
59 doc = str(doc)
60 else:
61 doc = unicode(doc)
62 lines[:] = doc.split(sixu("\n"))
63
64 if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \
65 obj.__name__:
66 if hasattr(obj, '__module__'):
67 v = dict(full_name=sixu("%s.%s") % (obj.__module__, obj.__name__))
68 else:
69 v = dict(full_name=obj.__name__)
70 lines += [sixu(''), sixu('.. htmlonly::'), sixu('')]
71 lines += [sixu(' %s') % x for x in
72 (app.config.numpydoc_edit_link % v).split("\n")]
73
74 # replace reference numbers so that there are no duplicates
75 references = []
76 for line in lines:
77 line = line.strip()
78 m = re.match(sixu('^.. \\[([a-z0-9_.-])\\]'), line, re.I)
79 if m:
80 references.append(m.group(1))
81
82 # start renaming from the longest string, to avoid overwriting parts
83 references.sort(key=lambda x: -len(x))
84 if references:
85 for i, line in enumerate(lines):
86 for r in references:
87 if re.match(sixu('^\\d+$'), r):
88 new_r = sixu("R%d") % (reference_offset[0] + int(r))
89 else:
90 new_r = sixu("%s%d") % (r, reference_offset[0])
91 lines[i] = lines[i].replace(sixu('[%s]_') % r,
92 sixu('[%s]_') % new_r)
93 lines[i] = lines[i].replace(sixu('.. [%s]') % r,
94 sixu('.. [%s]') % new_r)
95
96 reference_offset[0] += len(references)
97
98 def mangle_signature(app, what, name, obj, options, sig, retann):
99 # Do not try to inspect classes that don't define `__init__`
100 if (inspect.isclass(obj) and
101 (not hasattr(obj, '__init__') or
102 'initializes x; see ' in pydoc.getdoc(obj.__init__))):
103 return '', ''
104
105 if not (isinstance(obj, collections.Callable) or hasattr(obj, '__argspec_is_invalid_')): return
106 if not hasattr(obj, '__doc__'): return
107
108 doc = SphinxDocString(pydoc.getdoc(obj))
109 if doc['Signature']:
110 sig = re.sub(sixu("^[^(]*"), sixu(""), doc['Signature'])
111 return sig, sixu('')
112
113 def setup(app, get_doc_object_=get_doc_object):
114 if not hasattr(app, 'add_config_value'):
115 return # probably called by nose, better bail out
116
117 global get_doc_object
118 get_doc_object = get_doc_object_
119
120 app.connect('autodoc-process-docstring', mangle_docstrings)
121 app.connect('autodoc-process-signature', mangle_signature)
122 app.add_config_value('numpydoc_edit_link', None, False)
123 app.add_config_value('numpydoc_use_plots', None, False)
124 app.add_config_value('numpydoc_show_class_members', True, True)
125 app.add_config_value('numpydoc_class_members_toctree', True, True)
126
127 # Extra mangling domains
128 app.add_domain(NumpyPythonDomain)
129 app.add_domain(NumpyCDomain)
130
131 #------------------------------------------------------------------------------
132 # Docstring-mangling domains
133 #------------------------------------------------------------------------------
134
135 from docutils.statemachine import ViewList
136 from sphinx.domains.c import CDomain
137 from sphinx.domains.python import PythonDomain
138
139 class ManglingDomainBase(object):
140 directive_mangling_map = {}
141
142 def __init__(self, *a, **kw):
143 super(ManglingDomainBase, self).__init__(*a, **kw)
144 self.wrap_mangling_directives()
145
146 def wrap_mangling_directives(self):
147 for name, objtype in list(self.directive_mangling_map.items()):
148 self.directives[name] = wrap_mangling_directive(
149 self.directives[name], objtype)
150
151 class NumpyPythonDomain(ManglingDomainBase, PythonDomain):
152 name = 'np'
153 directive_mangling_map = {
154 'function': 'function',
155 'class': 'class',
156 'exception': 'class',
157 'method': 'function',
158 'classmethod': 'function',
159 'staticmethod': 'function',
160 'attribute': 'attribute',
161 }
162 indices = []
163
164 class NumpyCDomain(ManglingDomainBase, CDomain):
165 name = 'np-c'
166 directive_mangling_map = {
167 'function': 'function',
168 'member': 'attribute',
169 'macro': 'function',
170 'type': 'class',
171 'var': 'object',
172 }
173
174 def wrap_mangling_directive(base_directive, objtype):
175 class directive(base_directive):
176 def run(self):
177 env = self.state.document.settings.env
178
179 name = None
180 if self.arguments:
181 m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0])
182 name = m.group(2).strip()
183
184 if not name:
185 name = self.arguments[0]
186
187 lines = list(self.content)
188 mangle_docstrings(env.app, objtype, name, None, None, lines)
189 self.content = ViewList(lines, self.content.parent)
190
191 return base_directive.run(self)
192
193 return directive
194
[end of doc/sphinxext/numpydoc/numpydoc.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import os
10 import sys
11 import shutil
12 import warnings
13 import re
14 import platform
15 from distutils.version import LooseVersion
16
17 def is_platform_windows():
18 return sys.platform == 'win32' or sys.platform == 'cygwin'
19
20 def is_platform_linux():
21 return sys.platform == 'linux2'
22
23 def is_platform_mac():
24 return sys.platform == 'darwin'
25
26 # versioning
27 import versioneer
28 cmdclass = versioneer.get_cmdclass()
29
30 min_cython_ver = '0.23'
31 try:
32 import Cython
33 ver = Cython.__version__
34 _CYTHON_INSTALLED = ver >= LooseVersion(min_cython_ver)
35 except ImportError:
36 _CYTHON_INSTALLED = False
37
38 try:
39 import pkg_resources
40 from setuptools import setup, Command
41 _have_setuptools = True
42 except ImportError:
43 # no setuptools installed
44 from distutils.core import setup, Command
45 _have_setuptools = False
46
47 setuptools_kwargs = {}
48 min_numpy_ver = '1.7.0'
49 if sys.version_info[0] >= 3:
50
51 setuptools_kwargs = {
52 'zip_safe': False,
53 'install_requires': ['python-dateutil >= 2',
54 'pytz >= 2011k',
55 'numpy >= %s' % min_numpy_ver],
56 'setup_requires': ['numpy >= %s' % min_numpy_ver],
57 }
58 if not _have_setuptools:
59 sys.exit("need setuptools/distribute for Py3k"
60 "\n$ pip install distribute")
61
62 else:
63 setuptools_kwargs = {
64 'install_requires': ['python-dateutil',
65 'pytz >= 2011k',
66 'numpy >= %s' % min_numpy_ver],
67 'setup_requires': ['numpy >= %s' % min_numpy_ver],
68 'zip_safe': False,
69 }
70
71 if not _have_setuptools:
72 try:
73 import numpy
74 import dateutil
75 setuptools_kwargs = {}
76 except ImportError:
77 sys.exit("install requires: 'python-dateutil < 2','numpy'."
78 " use pip or easy_install."
79 "\n $ pip install 'python-dateutil < 2' 'numpy'")
80
81 from distutils.extension import Extension
82 from distutils.command.build import build
83 from distutils.command.build_ext import build_ext as _build_ext
84
85 try:
86 if not _CYTHON_INSTALLED:
87 raise ImportError('No supported version of Cython installed.')
88 try:
89 from Cython.Distutils.old_build_ext import old_build_ext as _build_ext
90 except ImportError:
91 # Pre 0.25
92 from Cython.Distutils import build_ext as _build_ext
93 cython = True
94 except ImportError:
95 cython = False
96
97
98 if cython:
99 try:
100 try:
101 from Cython import Tempita as tempita
102 except ImportError:
103 import tempita
104 except ImportError:
105 raise ImportError('Building pandas requires Tempita: '
106 'pip install Tempita')
107
108
109 from os.path import join as pjoin
110
111
112 _pxi_dep_template = {
113 'algos': ['_libs/algos_common_helper.pxi.in',
114 '_libs/algos_take_helper.pxi.in', '_libs/algos_rank_helper.pxi.in'],
115 'groupby': ['_libs/groupby_helper.pxi.in'],
116 'join': ['_libs/join_helper.pxi.in', '_libs/join_func_helper.pxi.in'],
117 'reshape': ['_libs/reshape_helper.pxi.in'],
118 'hashtable': ['_libs/hashtable_class_helper.pxi.in',
119 '_libs/hashtable_func_helper.pxi.in'],
120 'index': ['_libs/index_class_helper.pxi.in'],
121 'sparse': ['_libs/sparse_op_helper.pxi.in'],
122 'interval': ['_libs/intervaltree.pxi.in']
123 }
124
125 _pxifiles = []
126 _pxi_dep = {}
127 for module, files in _pxi_dep_template.items():
128 pxi_files = [pjoin('pandas', x) for x in files]
129 _pxifiles.extend(pxi_files)
130 _pxi_dep[module] = pxi_files
131
132
133 class build_ext(_build_ext):
134 def build_extensions(self):
135
136 # if builing from c files, don't need to
137 # generate template output
138 if cython:
139 for pxifile in _pxifiles:
140 # build pxifiles first, template extention must be .pxi.in
141 assert pxifile.endswith('.pxi.in')
142 outfile = pxifile[:-3]
143
144 if (os.path.exists(outfile) and
145 os.stat(pxifile).st_mtime < os.stat(outfile).st_mtime):
146 # if .pxi.in is not updated, no need to output .pxi
147 continue
148
149 with open(pxifile, "r") as f:
150 tmpl = f.read()
151 pyxcontent = tempita.sub(tmpl)
152
153 with open(outfile, "w") as f:
154 f.write(pyxcontent)
155
156 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include')
157
158 for ext in self.extensions:
159 if hasattr(ext, 'include_dirs') and not numpy_incl in ext.include_dirs:
160 ext.include_dirs.append(numpy_incl)
161 _build_ext.build_extensions(self)
162
163
164 DESCRIPTION = ("Powerful data structures for data analysis, time series,"
165 "and statistics")
166 LONG_DESCRIPTION = """
167 **pandas** is a Python package providing fast, flexible, and expressive data
168 structures designed to make working with structured (tabular, multidimensional,
169 potentially heterogeneous) and time series data both easy and intuitive. It
170 aims to be the fundamental high-level building block for doing practical,
171 **real world** data analysis in Python. Additionally, it has the broader goal
172 of becoming **the most powerful and flexible open source data analysis /
173 manipulation tool available in any language**. It is already well on its way
174 toward this goal.
175
176 pandas is well suited for many different kinds of data:
177
178 - Tabular data with heterogeneously-typed columns, as in an SQL table or
179 Excel spreadsheet
180 - Ordered and unordered (not necessarily fixed-frequency) time series data.
181 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
182 column labels
183 - Any other form of observational / statistical data sets. The data actually
184 need not be labeled at all to be placed into a pandas data structure
185
186 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
187 (2-dimensional), handle the vast majority of typical use cases in finance,
188 statistics, social science, and many areas of engineering. For R users,
189 DataFrame provides everything that R's ``data.frame`` provides and much
190 more. pandas is built on top of `NumPy <http://www.numpy.org>`__ and is
191 intended to integrate well within a scientific computing environment with many
192 other 3rd party libraries.
193
194 Here are just a few of the things that pandas does well:
195
196 - Easy handling of **missing data** (represented as NaN) in floating point as
197 well as non-floating point data
198 - Size mutability: columns can be **inserted and deleted** from DataFrame and
199 higher dimensional objects
200 - Automatic and explicit **data alignment**: objects can be explicitly
201 aligned to a set of labels, or the user can simply ignore the labels and
202 let `Series`, `DataFrame`, etc. automatically align the data for you in
203 computations
204 - Powerful, flexible **group by** functionality to perform
205 split-apply-combine operations on data sets, for both aggregating and
206 transforming data
207 - Make it **easy to convert** ragged, differently-indexed data in other
208 Python and NumPy data structures into DataFrame objects
209 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
210 of large data sets
211 - Intuitive **merging** and **joining** data sets
212 - Flexible **reshaping** and pivoting of data sets
213 - **Hierarchical** labeling of axes (possible to have multiple labels per
214 tick)
215 - Robust IO tools for loading data from **flat files** (CSV and delimited),
216 Excel files, databases, and saving / loading data from the ultrafast **HDF5
217 format**
218 - **Time series**-specific functionality: date range generation and frequency
219 conversion, moving window statistics, moving window linear regressions,
220 date shifting and lagging, etc.
221
222 Many of these principles are here to address the shortcomings frequently
223 experienced using other languages / scientific research environments. For data
224 scientists, working with data is typically divided into multiple stages:
225 munging and cleaning data, analyzing / modeling it, then organizing the results
226 of the analysis into a form suitable for plotting or tabular display. pandas is
227 the ideal tool for all of these tasks.
228
229 Note
230 ----
231 Windows binaries built against NumPy 1.8.1
232 """
233
234 DISTNAME = 'pandas'
235 LICENSE = 'BSD'
236 AUTHOR = "The PyData Development Team"
237 EMAIL = "[email protected]"
238 URL = "http://pandas.pydata.org"
239 DOWNLOAD_URL = ''
240 CLASSIFIERS = [
241 'Development Status :: 5 - Production/Stable',
242 'Environment :: Console',
243 'Operating System :: OS Independent',
244 'Intended Audience :: Science/Research',
245 'Programming Language :: Python',
246 'Programming Language :: Python :: 2',
247 'Programming Language :: Python :: 3',
248 'Programming Language :: Python :: 2.7',
249 'Programming Language :: Python :: 3.5',
250 'Programming Language :: Python :: 3.6',
251 'Programming Language :: Cython',
252 'Topic :: Scientific/Engineering',
253 ]
254
255 class CleanCommand(Command):
256 """Custom distutils command to clean the .so and .pyc files."""
257
258 user_options = [("all", "a", "")]
259
260 def initialize_options(self):
261 self.all = True
262 self._clean_me = []
263 self._clean_trees = []
264
265 base = pjoin('pandas','_libs', 'src')
266 dt = pjoin(base,'datetime')
267 src = base
268 util = pjoin('pandas','util')
269 parser = pjoin(base,'parser')
270 ujson_python = pjoin(base,'ujson','python')
271 ujson_lib = pjoin(base,'ujson','lib')
272 self._clean_exclude = [pjoin(dt,'np_datetime.c'),
273 pjoin(dt,'np_datetime_strings.c'),
274 pjoin(src,'period_helper.c'),
275 pjoin(parser,'tokenizer.c'),
276 pjoin(parser,'io.c'),
277 pjoin(ujson_python,'ujson.c'),
278 pjoin(ujson_python,'objToJSON.c'),
279 pjoin(ujson_python,'JSONtoObj.c'),
280 pjoin(ujson_lib,'ultrajsonenc.c'),
281 pjoin(ujson_lib,'ultrajsondec.c'),
282 pjoin(util,'move.c'),
283 ]
284
285 for root, dirs, files in os.walk('pandas'):
286 for f in files:
287 filepath = pjoin(root, f)
288 if filepath in self._clean_exclude:
289 continue
290
291 if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o',
292 '.pyo',
293 '.pyd', '.c', '.orig'):
294 self._clean_me.append(filepath)
295 for d in dirs:
296 if d == '__pycache__':
297 self._clean_trees.append(pjoin(root, d))
298
299 # clean the generated pxi files
300 for pxifile in _pxifiles:
301 pxifile = pxifile.replace(".pxi.in", ".pxi")
302 self._clean_me.append(pxifile)
303
304 for d in ('build', 'dist'):
305 if os.path.exists(d):
306 self._clean_trees.append(d)
307
308 def finalize_options(self):
309 pass
310
311 def run(self):
312 for clean_me in self._clean_me:
313 try:
314 os.unlink(clean_me)
315 except Exception:
316 pass
317 for clean_tree in self._clean_trees:
318 try:
319 shutil.rmtree(clean_tree)
320 except Exception:
321 pass
322
323
324 # we need to inherit from the versioneer
325 # class as it encodes the version info
326 sdist_class = cmdclass['sdist']
327
328 class CheckSDist(sdist_class):
329 """Custom sdist that ensures Cython has compiled all pyx files to c."""
330
331 _pyxfiles = ['pandas/_libs/lib.pyx',
332 'pandas/_libs/hashtable.pyx',
333 'pandas/_libs/tslib.pyx',
334 'pandas/_libs/period.pyx',
335 'pandas/_libs/index.pyx',
336 'pandas/_libs/algos.pyx',
337 'pandas/_libs/join.pyx',
338 'pandas/_libs/interval.pyx',
339 'pandas/_libs/hashing.pyx',
340 'pandas/_libs/testing.pyx',
341 'pandas/_libs/window.pyx',
342 'pandas/_libs/sparse.pyx',
343 'pandas/_libs/parsers.pyx',
344 'pandas/io/sas/sas.pyx']
345
346 def initialize_options(self):
347 sdist_class.initialize_options(self)
348
349 '''
350 self._pyxfiles = []
351 for root, dirs, files in os.walk('pandas'):
352 for f in files:
353 if f.endswith('.pyx'):
354 self._pyxfiles.append(pjoin(root, f))
355 '''
356
357 def run(self):
358 if 'cython' in cmdclass:
359 self.run_command('cython')
360 else:
361 for pyxfile in self._pyxfiles:
362 cfile = pyxfile[:-3] + 'c'
363 msg = "C-source file '%s' not found." % (cfile) +\
364 " Run 'setup.py cython' before sdist."
365 assert os.path.isfile(cfile), msg
366 sdist_class.run(self)
367
368
369 class CheckingBuildExt(build_ext):
370 """
371 Subclass build_ext to get clearer report if Cython is necessary.
372
373 """
374
375 def check_cython_extensions(self, extensions):
376 for ext in extensions:
377 for src in ext.sources:
378 if not os.path.exists(src):
379 print("{}: -> [{}]".format(ext.name, ext.sources))
380 raise Exception("""Cython-generated file '%s' not found.
381 Cython is required to compile pandas from a development branch.
382 Please install Cython or download a release package of pandas.
383 """ % src)
384
385 def build_extensions(self):
386 self.check_cython_extensions(self.extensions)
387 build_ext.build_extensions(self)
388
389
390 class CythonCommand(build_ext):
391 """Custom distutils command subclassed from Cython.Distutils.build_ext
392 to compile pyx->c, and stop there. All this does is override the
393 C-compile method build_extension() with a no-op."""
394 def build_extension(self, ext):
395 pass
396
397
398 class DummyBuildSrc(Command):
399 """ numpy's build_src command interferes with Cython's build_ext.
400 """
401 user_options = []
402
403 def initialize_options(self):
404 self.py_modules_dict = {}
405
406 def finalize_options(self):
407 pass
408
409 def run(self):
410 pass
411
412 cmdclass.update({'clean': CleanCommand,
413 'build': build})
414
415 try:
416 from wheel.bdist_wheel import bdist_wheel
417
418 class BdistWheel(bdist_wheel):
419 def get_tag(self):
420 tag = bdist_wheel.get_tag(self)
421 repl = 'macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64'
422 if tag[2] == 'macosx_10_6_intel':
423 tag = (tag[0], tag[1], repl)
424 return tag
425 cmdclass['bdist_wheel'] = BdistWheel
426 except ImportError:
427 pass
428
429 if cython:
430 suffix = '.pyx'
431 cmdclass['build_ext'] = CheckingBuildExt
432 cmdclass['cython'] = CythonCommand
433 else:
434 suffix = '.c'
435 cmdclass['build_src'] = DummyBuildSrc
436 cmdclass['build_ext'] = CheckingBuildExt
437
438 lib_depends = ['reduce', 'inference', 'properties']
439
440
441 def srcpath(name=None, suffix='.pyx', subdir='src'):
442 return pjoin('pandas', subdir, name + suffix)
443
444 if suffix == '.pyx':
445 lib_depends = [srcpath(f, suffix='.pyx', subdir='_libs/src') for f in lib_depends]
446 lib_depends.append('pandas/_libs/src/util.pxd')
447 else:
448 lib_depends = []
449 plib_depends = []
450
451 common_include = ['pandas/_libs/src/klib', 'pandas/_libs/src']
452
453
454 def pxd(name):
455 return os.path.abspath(pjoin('pandas', name + '.pxd'))
456
457 # args to ignore warnings
458 if is_platform_windows():
459 extra_compile_args=[]
460 else:
461 extra_compile_args=['-Wno-unused-function']
462
463 lib_depends = lib_depends + ['pandas/_libs/src/numpy_helper.h',
464 'pandas/_libs/src/parse_helper.h',
465 'pandas/_libs/src/compat_helper.h']
466
467
468 tseries_depends = ['pandas/_libs/src/datetime/np_datetime.h',
469 'pandas/_libs/src/datetime/np_datetime_strings.h',
470 'pandas/_libs/src/datetime_helper.h',
471 'pandas/_libs/src/period_helper.h',
472 'pandas/_libs/src/datetime.pxd']
473
474
475 # some linux distros require it
476 libraries = ['m'] if not is_platform_windows() else []
477
478 ext_data = {
479 '_libs.lib': {'pyxfile': '_libs/lib',
480 'depends': lib_depends + tseries_depends},
481 '_libs.hashtable': {'pyxfile': '_libs/hashtable',
482 'pxdfiles': ['_libs/hashtable'],
483 'depends': (['pandas/_libs/src/klib/khash_python.h']
484 + _pxi_dep['hashtable'])},
485 '_libs.tslib': {'pyxfile': '_libs/tslib',
486 'pxdfiles': ['_libs/src/util', '_libs/lib'],
487 'depends': tseries_depends,
488 'sources': ['pandas/_libs/src/datetime/np_datetime.c',
489 'pandas/_libs/src/datetime/np_datetime_strings.c',
490 'pandas/_libs/src/period_helper.c']},
491 '_libs.period': {'pyxfile': '_libs/period',
492 'depends': tseries_depends,
493 'sources': ['pandas/_libs/src/datetime/np_datetime.c',
494 'pandas/_libs/src/datetime/np_datetime_strings.c',
495 'pandas/_libs/src/period_helper.c']},
496 '_libs.index': {'pyxfile': '_libs/index',
497 'sources': ['pandas/_libs/src/datetime/np_datetime.c',
498 'pandas/_libs/src/datetime/np_datetime_strings.c'],
499 'pxdfiles': ['_libs/src/util', '_libs/hashtable'],
500 'depends': _pxi_dep['index']},
501 '_libs.algos': {'pyxfile': '_libs/algos',
502 'pxdfiles': ['_libs/src/util', '_libs/algos', '_libs/hashtable'],
503 'depends': _pxi_dep['algos']},
504 '_libs.groupby': {'pyxfile': '_libs/groupby',
505 'pxdfiles': ['_libs/src/util', '_libs/algos'],
506 'depends': _pxi_dep['groupby']},
507 '_libs.join': {'pyxfile': '_libs/join',
508 'pxdfiles': ['_libs/src/util', '_libs/hashtable'],
509 'depends': _pxi_dep['join']},
510 '_libs.reshape': {'pyxfile': '_libs/reshape',
511 'depends': _pxi_dep['reshape']},
512 '_libs.interval': {'pyxfile': '_libs/interval',
513 'pxdfiles': ['_libs/hashtable'],
514 'depends': _pxi_dep['interval']},
515 '_libs.window': {'pyxfile': '_libs/window',
516 'pxdfiles': ['_libs/src/skiplist', '_libs/src/util'],
517 'depends': ['pandas/_libs/src/skiplist.pyx',
518 'pandas/_libs/src/skiplist.h']},
519 '_libs.parsers': {'pyxfile': '_libs/parsers',
520 'depends': ['pandas/_libs/src/parser/tokenizer.h',
521 'pandas/_libs/src/parser/io.h',
522 'pandas/_libs/src/numpy_helper.h'],
523 'sources': ['pandas/_libs/src/parser/tokenizer.c',
524 'pandas/_libs/src/parser/io.c']},
525 '_libs.sparse': {'pyxfile': '_libs/sparse',
526 'depends': (['pandas/_libs/sparse.pyx'] +
527 _pxi_dep['sparse'])},
528 '_libs.testing': {'pyxfile': '_libs/testing',
529 'depends': ['pandas/_libs/testing.pyx']},
530 '_libs.hashing': {'pyxfile': '_libs/hashing',
531 'depends': ['pandas/_libs/hashing.pyx']},
532 'io.sas._sas': {'pyxfile': 'io/sas/sas'},
533 }
534
535 extensions = []
536
537 for name, data in ext_data.items():
538 sources = [srcpath(data['pyxfile'], suffix=suffix, subdir='')]
539 pxds = [pxd(x) for x in data.get('pxdfiles', [])]
540 if suffix == '.pyx' and pxds:
541 sources.extend(pxds)
542
543 sources.extend(data.get('sources', []))
544
545 include = data.get('include', common_include)
546
547 obj = Extension('pandas.%s' % name,
548 sources=sources,
549 depends=data.get('depends', []),
550 include_dirs=include,
551 extra_compile_args=extra_compile_args)
552
553 extensions.append(obj)
554
555
556 #----------------------------------------------------------------------
557 # msgpack
558
559 if sys.byteorder == 'big':
560 macros = [('__BIG_ENDIAN__', '1')]
561 else:
562 macros = [('__LITTLE_ENDIAN__', '1')]
563
564 packer_ext = Extension('pandas.io.msgpack._packer',
565 depends=['pandas/_libs/src/msgpack/pack.h',
566 'pandas/_libs/src/msgpack/pack_template.h'],
567 sources = [srcpath('_packer',
568 suffix=suffix if suffix == '.pyx' else '.cpp',
569 subdir='io/msgpack')],
570 language='c++',
571 include_dirs=['pandas/_libs/src/msgpack'] + common_include,
572 define_macros=macros,
573 extra_compile_args=extra_compile_args)
574 unpacker_ext = Extension('pandas.io.msgpack._unpacker',
575 depends=['pandas/_libs/src/msgpack/unpack.h',
576 'pandas/_libs/src/msgpack/unpack_define.h',
577 'pandas/_libs/src/msgpack/unpack_template.h'],
578 sources = [srcpath('_unpacker',
579 suffix=suffix if suffix == '.pyx' else '.cpp',
580 subdir='io/msgpack')],
581 language='c++',
582 include_dirs=['pandas/_libs/src/msgpack'] + common_include,
583 define_macros=macros,
584 extra_compile_args=extra_compile_args)
585 extensions.append(packer_ext)
586 extensions.append(unpacker_ext)
587
588 #----------------------------------------------------------------------
589 # ujson
590
591 if suffix == '.pyx' and 'setuptools' in sys.modules:
592 # undo dumb setuptools bug clobbering .pyx sources back to .c
593 for ext in extensions:
594 if ext.sources[0].endswith(('.c','.cpp')):
595 root, _ = os.path.splitext(ext.sources[0])
596 ext.sources[0] = root + suffix
597
598 ujson_ext = Extension('pandas._libs.json',
599 depends=['pandas/_libs/src/ujson/lib/ultrajson.h',
600 'pandas/_libs/src/datetime_helper.h',
601 'pandas/_libs/src/numpy_helper.h'],
602 sources=['pandas/_libs/src/ujson/python/ujson.c',
603 'pandas/_libs/src/ujson/python/objToJSON.c',
604 'pandas/_libs/src/ujson/python/JSONtoObj.c',
605 'pandas/_libs/src/ujson/lib/ultrajsonenc.c',
606 'pandas/_libs/src/ujson/lib/ultrajsondec.c',
607 'pandas/_libs/src/datetime/np_datetime.c',
608 'pandas/_libs/src/datetime/np_datetime_strings.c'],
609 include_dirs=['pandas/_libs/src/ujson/python',
610 'pandas/_libs/src/ujson/lib',
611 'pandas/_libs/src/datetime'] + common_include,
612 extra_compile_args=['-D_GNU_SOURCE'] + extra_compile_args)
613
614
615 extensions.append(ujson_ext)
616
617 #----------------------------------------------------------------------
618 # util
619 # extension for pseudo-safely moving bytes into mutable buffers
620 _move_ext = Extension('pandas.util._move',
621 depends=[],
622 sources=['pandas/util/move.c'])
623 extensions.append(_move_ext)
624
625
626 if _have_setuptools:
627 setuptools_kwargs["test_suite"] = "nose.collector"
628
629 # The build cache system does string matching below this point.
630 # if you change something, be careful.
631
632 setup(name=DISTNAME,
633 maintainer=AUTHOR,
634 version=versioneer.get_version(),
635 packages=['pandas',
636 'pandas.api',
637 'pandas.api.types',
638 'pandas.compat',
639 'pandas.compat.numpy',
640 'pandas.core',
641 'pandas.core.dtypes',
642 'pandas.core.indexes',
643 'pandas.core.computation',
644 'pandas.core.reshape',
645 'pandas.core.sparse',
646 'pandas.core.tools',
647 'pandas.core.util',
648 'pandas.computation',
649 'pandas.errors',
650 'pandas.formats',
651 'pandas.io',
652 'pandas.io.json',
653 'pandas.io.sas',
654 'pandas.io.msgpack',
655 'pandas.io.formats',
656 'pandas.io.clipboard',
657 'pandas._libs',
658 'pandas.plotting',
659 'pandas.stats',
660 'pandas.types',
661 'pandas.util',
662 'pandas.tests',
663 'pandas.tests.api',
664 'pandas.tests.dtypes',
665 'pandas.tests.computation',
666 'pandas.tests.sparse',
667 'pandas.tests.frame',
668 'pandas.tests.indexing',
669 'pandas.tests.indexes',
670 'pandas.tests.indexes.datetimes',
671 'pandas.tests.indexes.timedeltas',
672 'pandas.tests.indexes.period',
673 'pandas.tests.internals',
674 'pandas.tests.io',
675 'pandas.tests.io.json',
676 'pandas.tests.io.parser',
677 'pandas.tests.io.sas',
678 'pandas.tests.io.msgpack',
679 'pandas.tests.io.formats',
680 'pandas.tests.groupby',
681 'pandas.tests.reshape',
682 'pandas.tests.series',
683 'pandas.tests.scalar',
684 'pandas.tests.tseries',
685 'pandas.tests.plotting',
686 'pandas.tests.tools',
687 'pandas.tests.util',
688 'pandas.tools',
689 'pandas.tseries',
690 ],
691 package_data={'pandas.tests': ['data/*.csv'],
692 'pandas.tests.indexes': ['data/*.pickle'],
693 'pandas.tests.io': ['data/legacy_hdf/*.h5',
694 'data/legacy_pickle/*/*.pickle',
695 'data/legacy_msgpack/*/*.msgpack',
696 'data/*.csv*',
697 'data/*.dta',
698 'data/*.pickle',
699 'data/*.txt',
700 'data/*.xls',
701 'data/*.xlsx',
702 'data/*.xlsm',
703 'data/*.table',
704 'parser/data/*.csv',
705 'parser/data/*.gz',
706 'parser/data/*.bz2',
707 'parser/data/*.txt',
708 'parser/data/*.tar',
709 'parser/data/*.tar.gz',
710 'sas/data/*.csv',
711 'sas/data/*.xpt',
712 'sas/data/*.sas7bdat',
713 'data/*.html',
714 'data/html_encoding/*.html',
715 'json/data/*.json'],
716 'pandas.tests.io.formats': ['data/*.csv'],
717 'pandas.tests.io.msgpack': ['data/*.mp'],
718 'pandas.tests.reshape': ['data/*.csv'],
719 'pandas.tests.tseries': ['data/*.pickle'],
720 'pandas.io.formats': ['templates/*.tpl']
721 },
722 ext_modules=extensions,
723 maintainer_email=EMAIL,
724 description=DESCRIPTION,
725 license=LICENSE,
726 cmdclass=cmdclass,
727 url=URL,
728 download_url=DOWNLOAD_URL,
729 long_description=LONG_DESCRIPTION,
730 classifiers=CLASSIFIERS,
731 platforms='any',
732 **setuptools_kwargs)
733
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
6fe68325de93a5f745ff49eac57589d33a1d53c1
|
CLN: remove have_pytz?
From [tslib.pyx](https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/tslib.pyx#L4083)
```
try:
import pytz
UTC = pytz.utc
have_pytz = True
except:
have_pytz = False
[...]
def tz_convert_single(int64_t val, object tz1, object tz2):
[...]
if not have_pytz:
import pytz
```
From much [earlier](https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/tslib.pyx#L63) in tslib.pyx
```
from pytz.tzinfo import BaseTzInfo as _pytz_BaseTzInfo
```
Is the try/except still necessary? If so, is `import pytz` the right thing to do in `tz_convert_single`?
|
Reminds me of #17173. We wouldn't deprecate anything but just remove the check if it is no longer necessary, but I suspect a more involved solution will be needed @jreback?
`pytz` is a hard dependency, so I think this could simply be removed.
> We wouldn't deprecate anything but just remove the check
Note to self: learn the difference between "deprecate" and "remove".
If there's a consensus, I'll make a PR to remove this check. While we're at it, do we still need to check for dateutil versions?
@jbrockmendel : #17002 might be relevant for this. I would refrain from this for the moment, but you can give it a shot in a separate commit to see.
There are also several try/excepts in src/inference.pyx for importing dateutil.parser
``pytz`` is a strict requirement, so this could be removed.
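For concreteness, the simplification being discussed boils down to the following (a minimal sketch; the same unconditional import applies in the `.pyx` modules):
```python
# Before: guarded import with a have_pytz flag re-checked at call sites
# try:
#     import pytz
#     UTC = pytz.utc
#     have_pytz = True
# except ImportError:
#     have_pytz = False

# After: pytz is a hard dependency, so import it unconditionally and drop the flag
import pytz

UTC = pytz.utc
```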
|
2017-08-16T17:33:34Z
|
<patch>
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -32,13 +32,9 @@ cdef extern from "datetime.h":
cdef int64_t iNaT = util.get_nat()
-try:
- from dateutil.tz import tzutc as _du_utc
- import pytz
- UTC = pytz.utc
- have_pytz = True
-except ImportError:
- have_pytz = False
+from dateutil.tz import tzutc as _du_utc
+import pytz
+UTC = pytz.utc
PyDateTime_IMPORT
diff --git a/pandas/_libs/period.pyx b/pandas/_libs/period.pyx
--- a/pandas/_libs/period.pyx
+++ b/pandas/_libs/period.pyx
@@ -3,8 +3,7 @@ import operator
from cpython cimport (
PyObject_RichCompareBool,
- Py_EQ, Py_NE,
-)
+ Py_EQ, Py_NE)
from numpy cimport (int8_t, int32_t, int64_t, import_array, ndarray,
NPY_INT64, NPY_DATETIME, NPY_TIMEDELTA)
@@ -24,14 +23,13 @@ cimport util, lib
from lib cimport is_null_datetimelike, is_period
from pandas._libs import tslib, lib
from pandas._libs.tslib import (Timedelta, Timestamp, iNaT,
- NaT, have_pytz, _get_utcoffset)
+ NaT, _get_utcoffset)
from tslib cimport (
maybe_get_tz,
_is_utc,
_is_tzlocal,
_get_dst_info,
- _nat_scalar_rules,
-)
+ _nat_scalar_rules)
from pandas.tseries import offsets
from pandas.core.tools.datetimes import parse_time_string
@@ -610,9 +608,6 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
ndarray[int64_t] trans, deltas, pos
pandas_datetimestruct dts
- if not have_pytz:
- raise Exception('Could not find pytz module')
-
if _is_utc(tz):
for i in range(n):
if stamps[i] == NPY_NAT:
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -4080,12 +4080,8 @@ def i8_to_pydt(int64_t i8, object tzinfo = None):
#----------------------------------------------------------------------
# time zone conversion helpers
-try:
- import pytz
- UTC = pytz.utc
- have_pytz = True
-except:
- have_pytz = False
+import pytz
+UTC = pytz.utc
@cython.boundscheck(False)
@@ -4112,9 +4108,6 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
int64_t v, offset, delta
pandas_datetimestruct dts
- if not have_pytz:
- import pytz
-
if len(vals) == 0:
return np.array([], dtype=np.int64)
@@ -4229,9 +4222,6 @@ def tz_convert_single(int64_t val, object tz1, object tz2):
int64_t v, offset, utc_date
pandas_datetimestruct dts
- if not have_pytz:
- import pytz
-
if val == NPY_NAT:
return val
@@ -4444,9 +4434,6 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
assert is_coerce or is_raise
- if not have_pytz:
- raise Exception("Could not find pytz module")
-
if tz == UTC or tz is None:
return vals
</patch>
|
[]
|
[]
| |||
jupyterlab__jupyterlab-9760
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ArgumentConflict` is not defined
<!--
Welcome! Before creating a new issue:
* Search for relevant issues
* Follow the issue reporting guidelines:
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html
-->
## Description
When I add
```python
c.DevelopLabExtensionApp.labextensions_dir=<some_path>
```
to a custom configuration
I get the error:
```traceback
Traceback (most recent call last):
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/debuglog.py", line 47, in debug_logging
yield
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/labextensions.py", line 127, in start
ans = self.run_task()
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/labextensions.py", line 199, in run_task
develop_labextension_py(arg, user=self.user, sys_prefix=self.sys_prefix, labextensions_dir=self.labextensions_dir, logger=self.log, overwrite=self.overwrite,
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 157, in develop_labextension_py
full_dest = develop_labextension(
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 83, in develop_labextension
labext = _get_labextension_dir(user=user, sys_prefix=sys_prefix, labextensions_dir=labextensions_dir)
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 332, in _get_labextension_dir
raise ArgumentConflict(
```
It looks like this `ArgumentConflict` error isn't defined anywhere in jupyterlab.
I can submit a PR to fix this later today by adding an `ArgumentConflict` class that subclasses `traitlets.TraitError`. I'm open to other recommendations as well, though.
## Reproduce
<!--Describe step-by-step instructions to reproduce the behavior-->
add
```python
c.DevelopLabExtensionApp.labextensions_dir=<some_path>
```
to a `jupyter_config.py` from a jupyter lab extension repo and run:
```bash
jupyter labextension develop . --config jupyter_config.py
```
<!--Describe how you diagnosed the issue. See the guidelines at
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html -->
## Expected behavior
`ArgumentConflict` is properly raised
## Context
- Operating System and version: Ubuntu 18.04.5
- Browser and version: N/A
- JupyterLab version: 3.0.6
<details><summary>Command Line Output</summary>
<pre>
Installing /home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/jupyterlab_git/labextension -> @jupyterlab/git
An error occured.
NameError: name 'ArgumentConflict' is not defined
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
N/A
</pre>
</details>
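A minimal sketch of the proposed fix (where the class lives and its docstring wording are assumptions, not the merged change):
```python
# Hypothetical placement, e.g. in jupyterlab/federated_labextensions.py
from traitlets import TraitError


class ArgumentConflict(TraitError):
    """Raised when conflicting install-location arguments are passed."""
```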
</issue>
<code>
[start of README.md]
1 **[Installation](#installation)** |
2 **[Documentation](http://jupyterlab.readthedocs.io)** |
3 **[Contributing](#contributing)** |
4 **[License](#license)** |
5 **[Team](#team)** |
6 **[Getting help](#getting-help)** |
7
8 # [JupyterLab](http://jupyterlab.github.io/jupyterlab/)
9
10 [](https://badge.fury.io/py/jupyterlab)
11 [](https://pepy.tech/project/jupyterlab/month)
12 [](https://github.com/jupyterlab/jupyterlab/actions?query=workflow%3A%22Linux+Tests%22)
13 [](https://github.com/jupyterlab/jupyterlab/actions?query=workflow%3A%22Windows+Tests%22)
14 [](http://jupyterlab.readthedocs.io/en/stable/)
15 [](https://crowdin.com/project/jupyterlab)
16 [](https://github.com/jupyterlab/jupyterlab/issues)
17 [](https://discourse.jupyter.org/c/jupyterlab)
18 [](https://gitter.im/jupyterlab/jupyterlab)
19
20 [](https://mybinder.org/v2/gh/jupyterlab/jupyterlab-demo/3818244?urlpath=lab/tree/demo)
21
22 An extensible environment for interactive and reproducible computing, based on the
23 Jupyter Notebook and Architecture. [Currently ready for users.](https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906)
24
25 [JupyterLab](http://jupyterlab.readthedocs.io/en/stable/) is the next-generation user interface for [Project Jupyter](https://jupyter.org) offering
26 all the familiar building blocks of the classic Jupyter Notebook (notebook,
27 terminal, text editor, file browser, rich outputs, etc.) in a flexible and
28 powerful user interface.
29 JupyterLab will eventually replace the classic Jupyter Notebook.
30
31 JupyterLab can be extended using [npm](https://www.npmjs.com/) packages
32 that use our public APIs. To find JupyterLab extensions, search for the npm keyword [jupyterlab-extension](https://www.npmjs.com/search?q=keywords:jupyterlab-extension) or the GitHub topic [jupyterlab-extension](https://github.com/topics/jupyterlab-extension). To learn more about extensions, see the [user documentation](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html).
33
34 The current JupyterLab releases are suitable for general
35 usage, and the extension APIs will continue to
36 evolve for JupyterLab extension developers.
37
38 Read the current JupyterLab documentation on [ReadTheDocs](http://jupyterlab.readthedocs.io/en/stable/).
39
40 ---
41
42 ## Getting started
43
44 ### Installation
45
46 JupyterLab can be installed using `conda` or `pip`. For more detailed instructions, consult the [installation guide](http://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html).
47
48 Project installation instructions from the git sources are available in the [contributor documentation](CONTRIBUTING.md).
49
50 ### conda
51
52 If you use `conda`, you can install it with:
53
54 ```shell
55 conda install -c conda-forge jupyterlab
56 ```
57
58 ### pip
59
60 If you use `pip`, you can install it with:
61
62 ```shell
63 pip install jupyterlab
64 ```
65
66 If installing using `pip install --user`, you must add the user-level `bin` directory to your `PATH` environment variable in order to launch `jupyter lab`. If you are using a Unix derivative (FreeBSD, GNU / Linux, OS X), you can achieve this by using `export PATH="$HOME/.local/bin:$PATH"` command.
67
68 #### Installing with Previous Versions of Jupyter Notebook
69
70 When using a version of Jupyter Notebook earlier than 5.3, the following command must be run
71 after installation to enable the JupyterLab server extension:
72
73 ```bash
74 jupyter serverextension enable --py jupyterlab --sys-prefix
75 ```
76
77 ### Running
78
79 Start up JupyterLab using:
80
81 ```bash
82 jupyter lab
83 ```
84
85 JupyterLab will open automatically in the browser. See the [documentation](http://jupyterlab.readthedocs.io/en/stable/getting_started/starting.html) for additional details.
86
87 If you encounter an error like "Command 'jupyter' not found", please make sure `PATH` environment variable is set correctly. Alternatively, you can start up JupyterLab using `~/.local/bin/jupyter lab` without changing the `PATH` environment variable.
88
89 ### Prerequisites and Supported Browsers
90
91 The latest versions of the following browsers are currently _known to work_:
92
93 - Firefox
94 - Chrome
95 - Safari
96
97 See our [documentation](http://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html) for additional details.
98
99 ---
100
101 ## Getting help
102
103 We encourage you to ask questions on the [Discourse forum](https://discourse.jupyter.org/c/jupyterlab). A question answered there can become a useful resource for others.
104
105 ### Bug report
106
107 To report a bug please read the [guidelines](https://jupyterlab.readthedocs.io/en/stable/getting_started/issue.html) and then open a [Github issue](https://github.com/jupyterlab/jupyterlab/issues/new?template=bug_report.md). To keep resolved issues self-contained, the [lock bot](https://github.com/apps/lock) will lock closed issues as resolved after a period of inactivity. If related discussion is still needed after an issue is locked, please open a new issue and reference the old issue.
108
109 ### Feature request
110
111 We also welcome suggestions for new features as they help make the project more useful for everyone. To request a feature please use the [feature request template](https://github.com/jupyterlab/jupyterlab/issues/new?template=feature_request.md).
112
113 ---
114
115 ## Development
116
117 ### Extending JupyterLab
118
119 To start developing an extension for JupyterLab, see the [developer documentation](https://jupyterlab.readthedocs.io/en/stable/extension/extension_dev.html) and the [API docs](https://jupyterlab.readthedocs.io/en/stable/api/).
120
121 ### Contributing
122
123 To contribute code or documentation to JupyterLab itself, please read the [contributor documentation](https://jupyterlab.readthedocs.io/en/latest/developer/contributing.html).
124
125 JupyterLab follows the Jupyter [Community Guides](https://jupyter.readthedocs.io/en/latest/community/content-community.html).
126
127 ### License
128
129 JupyterLab uses a shared copyright model that enables all contributors to maintain the
130 copyright on their contributions. All code is licensed under the terms of the revised [BSD license](https://github.com/jupyterlab/jupyterlab/blob/master/LICENSE).
131
132 ### Team
133
134 JupyterLab is part of [Project Jupyter](http://jupyter.org/) and is developed by an open community. The maintenance team is assisted by a much larger group of contributors to JupyterLab and Project Jupyter as a whole.
135
136 JupyterLab's current maintainers are listed in alphabetical order, with their affiliation and main areas of contribution:
137
138 - Mehmet Bektas, Bloomberg (general development, extensions).
139 - Alex Bozarth, IBM (general development, extensions).
140 - Eric Charles, Datalayer, (general development, extensions).
141 - Martha Cryan, IBM (general development, extensions).
142 - Afshin Darian, Two Sigma (co-creator, application/high-level architecture,
143 prolific contributions throughout the code base).
144 - Vidar T. Fauske, JPMorgan Chase (general development, extensions).
145 - Tim George, Cal Poly (UI/UX design, strategy, management, user needs analysis)
146 - Brian Granger, AWS (co-creator, strategy, vision, management, UI/UX design,
147 architecture).
148 - Jason Grout, Bloomberg (co-creator, vision, general development).
149 - Max Klein, JPMorgan Chase (UI Package, build system, general development, extensions).
150 - Fernando Perez, UC Berkeley (co-creator, vision).
151 - Ian Rose, Quansight/City of LA (general core development, extensions).
152 - Andrew Schlaepfer, Bloomberg (general development, extensions).
153 - Saul Shanabrook, Quansight (general development, extensions)
154 - Steven Silvester, Apple (co-creator, release management, packaging,
155 prolific contributions throughout the code base).
156
157 Maintainer emeritus:
158
159 - Chris Colbert, Project Jupyter (co-creator, application/low-level architecture,
160 technical leadership, vision, PhosphorJS)
161 - Jessica Forde, Project Jupyter (demo, documentation)
162 - Cameron Oelsen, Cal Poly (UI/UX design).
163
164 This list is provided to give the reader context on who we are and how our team functions.
165 To be listed, please submit a pull request with your information.
166
167 ---
168
169 ### Weekly Dev Meeting
170
171 We have videoconference meetings every week where we discuss what we have been working on and get feedback from one another.
172
173 Anyone is welcome to attend, if they would like to discuss a topic or just to listen in.
174
175 - When: Wednesdays [9AM Pacific Time](https://www.thetimezoneconverter.com/?t=9%3A00%20am&tz=San%20Francisco&)
176 - Where: [`jovyan` Zoom](https://zoom.us/my/jovyan?pwd=c0JZTHlNdS9Sek9vdzR3aTJ4SzFTQT09)
177 - What: [Meeting notes](https://hackmd.io/Y7fBMQPSQ1C08SDGI-fwtg?both)
178
[end of README.md]
[start of clean.py]
1 import os
2 import subprocess
3
4 here = os.path.abspath(os.path.dirname(__file__))
5
6
7 # Workaround for https://github.com/git-for-windows/git/issues/607
8 if os.name == 'nt':
9 for (root, dnames, files) in os.walk(here):
10 if 'node_modules' in dnames:
11 subprocess.check_call(['rmdir', '/s', '/q', 'node_modules'],
12 cwd=root, shell=True)
13 dnames.remove('node_modules')
14
15
16 subprocess.check_call('python -m pip uninstall -y jupyterlab'.split(), cwd=here)
17
18 def resolvePattern(pat):
19 """handle a leading `#` or `@` in a pattern
20 """
21 pat = pat.strip()
22
23 if not pat or pat.startswith('#'):
24 return []
25 elif pat.startswith('@'):
26 raw = pat[1:]
27 return [
28 raw,
29 f'!packages/**/{raw}',
30 f'!**/node_modules/**/{raw}'
31 ]
32 else:
33 return [pat]
34
35 # get the exclude patterns listed in .cleanignore
36 with open(os.path.join(here, '.cleanignore')) as f:
37 git_clean_exclude = [f'--exclude={pat}'
38 for line in f
39 for pat in resolvePattern(line)]
40
41 git_clean_command = ['git', 'clean', '-dfx'] + git_clean_exclude
42 subprocess.check_call(git_clean_command, cwd=here)
43
[end of clean.py]
[start of jupyterlab/debuglog.py]
1 # coding: utf-8
2 """A mixin for adding a debug log file.
3
4 """
5
6 # Copyright (c) Jupyter Development Team.
7 # Distributed under the terms of the Modified BSD License.
8
9 import contextlib
10 import logging
11 import os
12 import sys
13 import tempfile
14 import traceback
15
16 from traitlets import Unicode, default
17 from traitlets.config import Configurable
18
19 class DebugLogFileMixin(Configurable):
20 debug_log_path = Unicode('', config=True, help='Path to use for the debug log file')
21
22 @contextlib.contextmanager
23 def debug_logging(self):
24 log_path = self.debug_log_path
25 if os.path.isdir(log_path):
26 log_path = os.path.join(log_path, 'jupyterlab-debug.log')
27 if not log_path:
28 handle, log_path = tempfile.mkstemp(prefix='jupyterlab-debug-', suffix='.log')
29 os.close(handle)
30 log = self.log
31
32 # Transfer current log level to the handlers:
33 for h in log.handlers:
34 h.setLevel(self.log_level)
35 log.setLevel('DEBUG')
36
37 # Create our debug-level file handler:
38 _debug_handler = logging.FileHandler(
39 log_path, 'w', 'utf8', delay=True)
40 _log_formatter = self._log_formatter_cls(fmt=self.log_format, datefmt=self.log_datefmt)
41 _debug_handler.setFormatter(_log_formatter)
42 _debug_handler.setLevel('DEBUG')
43
44 log.addHandler(_debug_handler)
45
46 try:
47 yield
48 except Exception as ex:
49 _, _, exc_traceback = sys.exc_info()
50 msg = traceback.format_exception(ex.__class__, ex, exc_traceback)
51 for line in msg:
52 self.log.debug(line)
53 if isinstance(ex, SystemExit):
54 print('An error occurred. See the log file for details: ', log_path)
55 raise
56 print('An error occurred.')
57 print(msg[-1].strip())
58 print('See the log file for details: ', log_path)
59 self.exit(1)
60 else:
61 log.removeHandler(_debug_handler)
62 _debug_handler.flush()
63 _debug_handler.close()
64 try:
65 os.remove(log_path)
66 except FileNotFoundError:
67 pass
68 log.removeHandler(_debug_handler)
69
70
[end of jupyterlab/debuglog.py]
[start of jupyterlab/federated_labextensions.py]
1 # coding: utf-8
2 """Utilities for installing Javascript extensions for the notebook"""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 from __future__ import print_function
8
9 import importlib
10 import json
11 import os
12 import os.path as osp
13 import shutil
14 import sys
15 from os.path import basename, join as pjoin, normpath
16 import subprocess
17 import sys
18
19 from jupyter_core.paths import (
20 jupyter_data_dir, SYSTEM_JUPYTER_PATH, ENV_JUPYTER_PATH,
21 )
22 from jupyter_core.utils import ensure_dir_exists
23 from ipython_genutils.py3compat import cast_unicode_py2
24 from jupyterlab_server.config import get_federated_extensions
25
26 from .commands import _test_overlap
27
28
29 DEPRECATED_ARGUMENT = object()
30
31 HERE = osp.abspath(osp.dirname(__file__))
32
33
34 #------------------------------------------------------------------------------
35 # Public API
36 #------------------------------------------------------------------------------
37
38 def develop_labextension(path, symlink=True, overwrite=False,
39 user=False, labextensions_dir=None,
40 destination=None,
41 logger=None, sys_prefix=False
42 ):
43 """Install a prebuilt extension for JupyterLab
44
45 Stages files and/or directories into the labextensions directory.
46 By default, this compares modification time, and only stages files that need updating.
47 If `overwrite` is specified, matching files are purged before proceeding.
48
49 Parameters
50 ----------
51
52 path : path to file, directory, zip or tarball archive, or URL to install
53 By default, the file will be installed with its base name, so '/path/to/foo'
54 will install to 'labextensions/foo'. See the destination argument below to change this.
55 Archives (zip or tarballs) will be extracted into the labextensions directory.
56 user : bool [default: False]
57 Whether to install to the user's labextensions directory.
58 Otherwise do a system-wide install (e.g. /usr/local/share/jupyter/labextensions).
59 overwrite : bool [default: False]
60 If True, always install the files, regardless of what may already be installed.
61 symlink : bool [default: True]
62 If True, create a symlink in labextensions, rather than copying files.
63 Windows support for symlinks requires a permission bit which only admin users
64 have by default, so don't rely on it.
65 labextensions_dir : str [optional]
66 Specify absolute path of labextensions directory explicitly.
67 destination : str [optional]
68 name the labextension is installed to. For example, if destination is 'foo', then
69 the source file will be installed to 'labextensions/foo', regardless of the source name.
70 logger : Jupyter logger [optional]
71 Logger instance to use
72 """
73 # the actual path to which we eventually installed
74 full_dest = None
75
76 labext = _get_labextension_dir(user=user, sys_prefix=sys_prefix, labextensions_dir=labextensions_dir)
77 # make sure labextensions dir exists
78 ensure_dir_exists(labext)
79
80 if isinstance(path, (list, tuple)):
81 raise TypeError("path must be a string pointing to a single extension to install; call this function multiple times to install multiple extensions")
82
83 path = cast_unicode_py2(path)
84
85 if not destination:
86 destination = basename(normpath(path))
87 destination = cast_unicode_py2(destination)
88
89 full_dest = normpath(pjoin(labext, destination))
90 if overwrite and os.path.lexists(full_dest):
91 if logger:
92 logger.info("Removing: %s" % full_dest)
93 if os.path.isdir(full_dest) and not os.path.islink(full_dest):
94 shutil.rmtree(full_dest)
95 else:
96 os.remove(full_dest)
97
98 # Make sure the parent directory exists
99 os.makedirs(os.path.dirname(full_dest), exist_ok=True)
100
101 if symlink:
102 path = os.path.abspath(path)
103 if not os.path.exists(full_dest):
104 if logger:
105 logger.info("Symlinking: %s -> %s" % (full_dest, path))
106 os.symlink(path, full_dest)
107 elif not os.path.islink(full_dest):
108 raise ValueError("%s exists and is not a symlink" % full_dest)
109
110 elif os.path.isdir(path):
111 path = pjoin(os.path.abspath(path), '') # end in path separator
112 for parent, dirs, files in os.walk(path):
113 dest_dir = pjoin(full_dest, parent[len(path):])
114 if not os.path.exists(dest_dir):
115 if logger:
116 logger.info("Making directory: %s" % dest_dir)
117 os.makedirs(dest_dir)
118 for file_name in files:
119 src = pjoin(parent, file_name)
120 dest_file = pjoin(dest_dir, file_name)
121 _maybe_copy(src, dest_file, logger=logger)
122 else:
123 src = path
124 _maybe_copy(src, full_dest, logger=logger)
125
126 return full_dest
127
128
129 def develop_labextension_py(module, user=False, sys_prefix=False, overwrite=True, symlink=True, labextensions_dir=None, logger=None):
130 """Develop a labextension bundled in a Python package.
131
132 Returns a list of installed/updated directories.
133
134 See develop_labextension for parameter information."""
135 m, labexts = _get_labextension_metadata(module)
136 base_path = os.path.split(m.__file__)[0]
137
138 full_dests = []
139
140 for labext in labexts:
141 src = os.path.join(base_path, labext['src'])
142 dest = labext['dest']
143 if logger:
144 logger.info("Installing %s -> %s" % (src, dest))
145
146 if not os.path.exists(src):
147 build_labextension(base_path, logger=logger)
148
149 full_dest = develop_labextension(
150 src, overwrite=overwrite, symlink=symlink,
151 user=user, sys_prefix=sys_prefix, labextensions_dir=labextensions_dir,
152 destination=dest, logger=logger
153 )
154 full_dests.append(full_dest)
155
156 return full_dests
157
158
159 def build_labextension(path, logger=None, development=False, static_url=None, source_map = False):
160 """Build a labextension in the given path"""
161 core_path = osp.join(HERE, 'staging')
162 ext_path = osp.abspath(path)
163
164 if logger:
165 logger.info('Building extension in %s' % path)
166
167 builder = _ensure_builder(ext_path, core_path)
168
169 arguments = ['node', builder, '--core-path', core_path, ext_path]
170 if static_url is not None:
171 arguments.extend(['--static-url', static_url])
172 if development:
173 arguments.append('--development')
174 if source_map:
175 arguments.append('--source-map')
176
177 subprocess.check_call(arguments, cwd=ext_path)
178
179
180 def watch_labextension(path, labextensions_path, logger=None, development=False, source_map=False):
181 """Watch a labextension in a given path"""
182 core_path = osp.join(HERE, 'staging')
183 ext_path = osp.abspath(path)
184
185 if logger:
186 logger.info('Building extension in %s' % path)
187
188 # Check to see if we need to create a symlink
189 federated_extensions = get_federated_extensions(labextensions_path)
190
191 with open(pjoin(ext_path, 'package.json')) as fid:
192 ext_data = json.load(fid)
193
194 if ext_data['name'] not in federated_extensions:
195 develop_labextension_py(ext_path, sys_prefix=True)
196 else:
197 full_dest = pjoin(federated_extensions[ext_data['name']]['ext_dir'], ext_data['name'])
198 output_dir = pjoin(ext_path, ext_data['jupyterlab'].get('outputDir', 'static'))
199 if not osp.islink(full_dest):
200 shutil.rmtree(full_dest)
201 os.symlink(output_dir, full_dest)
202
203 builder = _ensure_builder(ext_path, core_path)
204 arguments = ['node', builder, '--core-path', core_path, '--watch', ext_path]
205 if development:
206 arguments.append('--development')
207 if source_map:
208 arguments.append('--source-map')
209
210 subprocess.check_call(arguments, cwd=ext_path)
211
212
213 #------------------------------------------------------------------------------
214 # Private API
215 #------------------------------------------------------------------------------
216
217
218 def _ensure_builder(ext_path, core_path):
219 """Ensure that we can build the extension and return the builder script path
220 """
221 # Test for compatible dependency on @jupyterlab/builder
222 with open(osp.join(core_path, 'package.json')) as fid:
223 core_data = json.load(fid)
224 with open(osp.join(ext_path, 'package.json')) as fid:
225 ext_data = json.load(fid)
226 depVersion1 = core_data['devDependencies']['@jupyterlab/builder']
227 depVersion2 = ext_data.get('devDependencies', dict()).get('@jupyterlab/builder')
228 depVersion2 = depVersion2 or ext_data.get('dependencies', dict()).get('@jupyterlab/builder')
229 if depVersion2 is None:
230 raise ValueError('Extensions require a devDependency on @jupyterlab/builder@%s' % depVersion1)
231
232 # if we have installed from disk (version is a path), assume we know what
233 # we are doing and do not check versions.
234 if '/' in depVersion2:
235 with open(osp.join(ext_path, depVersion2, 'package.json')) as fid:
236 depVersion2 = json.load(fid).get('version')
237 overlap = _test_overlap(depVersion1, depVersion2, drop_prerelease1=True, drop_prerelease2=True)
238 if not overlap:
239 raise ValueError('Extensions require a devDependency on @jupyterlab/builder@%s, you have a dependency on %s' % (depVersion1, depVersion2))
240 if not osp.exists(osp.join(ext_path, 'node_modules')):
241 subprocess.check_call(['jlpm'], cwd=ext_path)
242
243 # Find @jupyterlab/builder using node module resolution
244 # We cannot use a script because the script path is a shell script on Windows
245 target = ext_path
246 while not osp.exists(osp.join(target, 'node_modules', '@jupyterlab', 'builder')):
247 if osp.dirname(target) == target:
248 raise ValueError('Could not find @jupyterlab/builder')
249 target = osp.dirname(target)
250
251 return osp.join(target, 'node_modules', '@jupyterlab', 'builder', 'lib', 'build-labextension.js')
252
253
254 def _should_copy(src, dest, logger=None):
255 """Should a file be copied, if it doesn't exist, or is newer?
256
257 Returns whether the file needs to be updated.
258
259 Parameters
260 ----------
261
262 src : string
263 A path that should exist from which to copy a file
264 dest : string
265 A path that might exist to which to copy a file
266 logger : Jupyter logger [optional]
267 Logger instance to use
268 """
269 if not os.path.exists(dest):
270 return True
271 if os.stat(src).st_mtime - os.stat(dest).st_mtime > 1e-6:
272 # we add a fudge factor to work around a bug in python 2.x
273 # that was fixed in python 3.x: https://bugs.python.org/issue12904
274 if logger:
275 logger.warn("Out of date: %s" % dest)
276 return True
277 if logger:
278 logger.info("Up to date: %s" % dest)
279 return False
280
281
282 def _maybe_copy(src, dest, logger=None):
283 """Copy a file if it needs updating.
284
285 Parameters
286 ----------
287
288 src : string
289 A path that should exist from which to copy a file
290 dest : string
291 A path that might exist to which to copy a file
292 logger : Jupyter logger [optional]
293 Logger instance to use
294 """
295 if _should_copy(src, dest, logger=logger):
296 if logger:
297 logger.info("Copying: %s -> %s" % (src, dest))
298 shutil.copy2(src, dest)
299
300
301 def _get_labextension_dir(user=False, sys_prefix=False, prefix=None, labextensions_dir=None):
302 """Return the labextension directory specified
303
304 Parameters
305 ----------
306
307 user : bool [default: False]
308 Get the user's .jupyter/labextensions directory
309 sys_prefix : bool [default: False]
310 Get sys.prefix, i.e. ~/.envs/my-env/share/jupyter/labextensions
311 prefix : str [optional]
312 Get custom prefix
313 labextensions_dir : str [optional]
314 Get what you put in
315 """
316 conflicting = [
317 ('user', user),
318 ('prefix', prefix),
319 ('labextensions_dir', labextensions_dir),
320 ('sys_prefix', sys_prefix),
321 ]
322 conflicting_set = ['{}={!r}'.format(n, v) for n, v in conflicting if v]
323 if len(conflicting_set) > 1:
324 raise ArgumentConflict(
325 "cannot specify more than one of user, sys_prefix, prefix, or labextensions_dir, but got: {}"
326 .format(', '.join(conflicting_set)))
327 if user:
328 labext = pjoin(jupyter_data_dir(), u'labextensions')
329 elif sys_prefix:
330 labext = pjoin(ENV_JUPYTER_PATH[0], u'labextensions')
331 elif prefix:
332 labext = pjoin(prefix, 'share', 'jupyter', 'labextensions')
333 elif labextensions_dir:
334 labext = labextensions_dir
335 else:
336 labext = pjoin(SYSTEM_JUPYTER_PATH[0], 'labextensions')
337 return labext
338
339
340 def _get_labextension_metadata(module):
341 """Get the list of labextension paths associated with a Python module.
342
343 Returns a tuple of (the module path, [{
344 'src': 'mockextension',
345 'dest': '_mockdestination'
346 }])
347
348 Parameters
349 ----------
350
351 module : str
352 Importable Python module exposing the
353 magic-named `_jupyter_labextension_paths` function
354 """
355
356 mod_path = osp.abspath(module)
357 if not osp.exists(mod_path):
358 raise FileNotFoundError('The path `{}` does not exist.'.format(mod_path))
359
360 # Check if the path is a valid labextension
361 try:
362 m = importlib.import_module(module)
363 if hasattr(m, '_jupyter_labextension_paths') :
364 labexts = m._jupyter_labextension_paths()
365 return m, labexts
366 else :
367 m = None
368
369 except Exception:
370 m = None
371
372 # Try getting the package name from setup.py
373 try:
374 package = subprocess.check_output([sys.executable, 'setup.py', '--name'], cwd=mod_path).decode('utf8').strip()
375 except subprocess.CalledProcessError:
376 raise FileNotFoundError('The Python package `{}` is not a valid package, '
377 'it is missing the `setup.py` file.'.format(module))
378
379 # Make sure the package is installed
380 import pkg_resources
381 try:
382 dist = pkg_resources.get_distribution(package)
383 except pkg_resources.DistributionNotFound:
384 subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-e', mod_path])
385 sys.path.insert(0, mod_path)
386
387 # Importing module with the same name as package
388 try:
389 # Replace hyphens with underscores to match Python convention
390 package = package.replace('-', '_')
391 m = importlib.import_module(package)
392 if hasattr(m, '_jupyter_labextension_paths') :
393 return m, m._jupyter_labextension_paths()
394 except Exception:
395 m = None
396
397 # Looking for modules in the package
398 from setuptools import find_packages
399 packages = find_packages(mod_path)
400
401 # Looking for the labextension metadata
402 for package in packages :
403 try:
404 m = importlib.import_module(package)
405 if hasattr(m, '_jupyter_labextension_paths') :
406 return m, m._jupyter_labextension_paths()
407 except Exception:
408 m = None
409
410 raise ModuleNotFoundError('There are no labextensions at {}'.format(module))
411
[end of jupyterlab/federated_labextensions.py]
[start of jupyterlab/labextensions.py]
1 # coding: utf-8
2 """Jupyter LabExtension Entry Points."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6 from glob import glob
7 import json
8 import os
9 import shutil
10 import sys
11 import traceback
12
13 from copy import copy
14
15 from jupyter_core.application import JupyterApp, base_flags, base_aliases
16 from jupyter_core.paths import jupyter_path
17 from jupyterlab.coreconfig import CoreConfig
18 from jupyterlab.debuglog import DebugLogFileMixin
19 from traitlets import Bool, Instance, List, Unicode, default
20
21 from .commands import (
22 install_extension, uninstall_extension, list_extensions,
23 enable_extension, disable_extension, check_extension,
24 link_package, unlink_package, build, get_app_version, HERE,
25 update_extension, AppOptions,
26 )
27 from .federated_labextensions import develop_labextension_py, build_labextension, watch_labextension
28 from .labapp import LabApp
29
30
31 flags = dict(base_flags)
32 flags['no-build'] = (
33 {'BaseExtensionApp': {'should_build': False}},
34 "Defer building the app after the action."
35 )
36 flags['dev-build'] = (
37 {'BaseExtensionApp': {'dev_build': True}},
38 "Build in development mode."
39 )
40 flags['no-minimize'] = (
41 {'BaseExtensionApp': {'minimize': False}},
42 "Do not minimize a production build."
43 )
44 flags['clean'] = (
45 {'BaseExtensionApp': {'should_clean': True}},
46 "Cleanup intermediate files after the action."
47 )
48
49 check_flags = copy(flags)
50 check_flags['installed'] = (
51 {'CheckLabExtensionsApp': {'should_check_installed_only': True}},
52 "Check only if the extension is installed."
53 )
54
55 develop_flags = copy(flags)
56 develop_flags['overwrite'] = (
57 {'DevelopLabExtensionApp': {'overwrite': True}},
58 "Overwrite files"
59 )
60
61 update_flags = copy(flags)
62 update_flags['all'] = (
63 {'UpdateLabExtensionApp': {'all': True}},
64 "Update all extensions"
65 )
66
67 uninstall_flags = copy(flags)
68 uninstall_flags['all'] = (
69 {'UninstallLabExtensionApp': {'all': True}},
70 "Uninstall all extensions"
71 )
72
73 aliases = dict(base_aliases)
74 aliases['app-dir'] = 'BaseExtensionApp.app_dir'
75 aliases['dev-build'] = 'BaseExtensionApp.dev_build'
76 aliases['minimize'] = 'BaseExtensionApp.minimize'
77 aliases['debug-log-path'] = 'DebugLogFileMixin.debug_log_path'
78
79 install_aliases = copy(aliases)
80 install_aliases['pin-version-as'] = 'InstallLabExtensionApp.pin'
81
82 enable_aliases = copy(aliases)
83 enable_aliases['level'] = 'EnableLabExtensionsApp.level'
84
85 disable_aliases = copy(aliases)
86 disable_aliases['level'] = 'DisableLabExtensionsApp.level'
87
88 VERSION = get_app_version()
89
90
91 class BaseExtensionApp(JupyterApp, DebugLogFileMixin):
92 version = VERSION
93 flags = flags
94 aliases = aliases
95 name = "lab"
96
97 # Not configurable!
98 core_config = Instance(CoreConfig, allow_none=True)
99
100 app_dir = Unicode('', config=True,
101 help="The app directory to target")
102
103 should_build = Bool(True, config=True,
104 help="Whether to build the app after the action")
105
106 dev_build = Bool(None, allow_none=True, config=True,
107 help="Whether to build in dev mode. Defaults to True (dev mode) if there are any locally linked extensions, else defaults to False (production mode).")
108
109 minimize = Bool(True, config=True,
110 help="Whether to minimize a production build (defaults to True).")
111
112 should_clean = Bool(False, config=True,
113 help="Whether temporary files should be cleaned up after building jupyterlab")
114
115 labextensions_path = List(Unicode(), help='The standard paths to look in for prebuilt JupyterLab extensions')
116
117 @default('labextensions_path')
118 def _default_labextensions_path(self):
119 lab = LabApp()
120 lab.load_config_file()
121 return lab.extra_labextensions_path + lab.labextensions_path
122
123 def start(self):
124 if self.app_dir and self.app_dir.startswith(HERE):
125 raise ValueError('Cannot run lab extension commands in core app')
126 with self.debug_logging():
127 ans = self.run_task()
128 if ans and self.should_build:
129 production = None if self.dev_build is None else not self.dev_build
130 app_options = AppOptions(app_dir=self.app_dir, logger=self.log,
131 core_config=self.core_config)
132 build(clean_staging=self.should_clean,
133 production = production, minimize = self.minimize, app_options=app_options)
134
135 def run_task(self):
136 pass
137
138 def _log_format_default(self):
139 """A default format for messages"""
140 return "%(message)s"
141
142
143 class InstallLabExtensionApp(BaseExtensionApp):
144 description = """Install labextension(s)
145
146 Usage
147
148 jupyter labextension install [--pin-version-as <alias,...>] <package...>
149
150 This installs JupyterLab extensions similar to yarn add or npm install.
151
152 Pass a list of comma-separated names to the --pin-version-as flag
153 to use as aliases for the package providers. This is useful to
154 install multiple versions of the same extension.
155 These can be uninstalled with the alias you provided
156 to the flag, similar to the "alias" feature of yarn add.
157 """
158 aliases = install_aliases
159
160 pin = Unicode('', config=True,
161 help="Pin this version with a certain alias")
162
163 def run_task(self):
164 pinned_versions = self.pin.split(',')
165 self.extra_args = self.extra_args or [os.getcwd()]
166 return any([
167 install_extension(
168 arg,
169 # Pass in pinned alias if we have it
170 pin=pinned_versions[i] if i < len(pinned_versions) else None,
171 app_options=AppOptions(
172 app_dir=self.app_dir,
173 logger=self.log,
174 core_config=self.core_config,
175 labextensions_path=self.labextensions_path
176 )
177 )
178 for i, arg in enumerate(self.extra_args)
179 ])
180
181
182 class DevelopLabExtensionApp(BaseExtensionApp):
183 description = "Develop labextension"
184 flags = develop_flags
185
186 user = Bool(False, config=True, help="Whether to do a user install")
187 sys_prefix = Bool(True, config=True, help="Use the sys.prefix as the prefix")
188 overwrite = Bool(False, config=True, help="Whether to overwrite files")
189 symlink = Bool(True, config=False, help="Whether to use a symlink")
190
191 labextensions_dir = Unicode('', config=True,
192 help="Full path to labextensions dir (probably use prefix or user)")
193
194 def run_task(self):
195 "Add config for this labextension"
196 self.extra_args = self.extra_args or [os.getcwd()]
197 for arg in self.extra_args:
198 develop_labextension_py(arg, user=self.user, sys_prefix=self.sys_prefix, labextensions_dir=self.labextensions_dir, logger=self.log, overwrite=self.overwrite,
199 symlink=self.symlink)
200
201
202 class BuildLabExtensionApp(BaseExtensionApp):
203 description = "Build labextension"
204
205 static_url = Unicode('', config=True,
206 help="Sets the url for static assets when building")
207
208 development = Bool(False, config=True,
209 help="Build in development mode")
210
211 source_map = Bool(False, config=True,
212 help="Generate source maps")
213
214 aliases = {
215 'static-url': 'BuildLabExtensionApp.static_url',
216 'development': 'BuildLabExtensionApp.development',
217 'source-map': 'BuildLabExtensionApp.source_map'
218 }
219
220 def run_task(self):
221 self.extra_args = self.extra_args or [os.getcwd()]
222 build_labextension(self.extra_args[0], logger=self.log, development=self.development, static_url=self.static_url or None, source_map = self.source_map)
223
224
225 class WatchLabExtensionApp(BaseExtensionApp):
226 description = "Watch labextension"
227
228 development = Bool(True, config=True,
229 help="Build in development mode")
230
231 source_map = Bool(False, config=True,
232 help="Generate source maps")
233
234 aliases = {
235 'development': 'BuildLabExtensionApp.development',
236 'source-map': 'BuildLabExtensionApp.source_map'
237
238 }
239 def run_task(self):
240 self.extra_args = self.extra_args or [os.getcwd()]
241 labextensions_path = self.labextensions_path
242 watch_labextension(self.extra_args[0], labextensions_path, logger=self.log, development=self.development, source_map=self.source_map)
243
244
245 class UpdateLabExtensionApp(BaseExtensionApp):
246 description = "Update labextension(s)"
247 flags = update_flags
248
249 all = Bool(False, config=True,
250 help="Whether to update all extensions")
251
252 def run_task(self):
253 if not self.all and not self.extra_args:
254 self.log.warn('Specify an extension to update, or use --all to update all extensions')
255 return False
256 app_options = AppOptions(app_dir=self.app_dir, logger=self.log,
257 core_config=self.core_config, labextensions_path=self.labextensions_path)
258 if self.all:
259 return update_extension(all_=True, app_options=app_options)
260 return any([
261 update_extension(name=arg, app_options=app_options)
262 for arg in self.extra_args
263 ])
264
265
266 class LinkLabExtensionApp(BaseExtensionApp):
267 description = """
268 Link local npm packages that are not lab extensions.
269
270 Links a package to the JupyterLab build process. A linked
271 package is manually re-installed from its source location when
272 `jupyter lab build` is run.
273 """
274 should_build = Bool(True, config=True,
275 help="Whether to build the app after the action")
276
277 def run_task(self):
278 self.extra_args = self.extra_args or [os.getcwd()]
279 options = AppOptions(
280 app_dir=self.app_dir, logger=self.log,
281 labextensions_path=self.labextensions_path,
282 core_config=self.core_config)
283 return any([
284 link_package(
285 arg,
286 app_options=options)
287 for arg in self.extra_args
288 ])
289
290
291 class UnlinkLabExtensionApp(BaseExtensionApp):
292 description = "Unlink packages by name or path"
293
294 def run_task(self):
295 self.extra_args = self.extra_args or [os.getcwd()]
296 options = AppOptions(
297 app_dir=self.app_dir, logger=self.log,
298 labextensions_path=self.labextensions_path,
299 core_config=self.core_config)
300 return any([
301 unlink_package(
302 arg,
303 app_options=options)
304 for arg in self.extra_args
305 ])
306
307
308 class UninstallLabExtensionApp(BaseExtensionApp):
309 description = "Uninstall labextension(s) by name"
310 flags = uninstall_flags
311
312 all = Bool(False, config=True,
313 help="Whether to uninstall all extensions")
314
315 def run_task(self):
316 self.extra_args = self.extra_args or [os.getcwd()]
317
318 options = AppOptions(
319 app_dir=self.app_dir, logger=self.log,
320 labextensions_path=self.labextensions_path,
321 core_config=self.core_config)
322 return any([
323 uninstall_extension(
324 arg, all_=self.all,
325 app_options=options)
326 for arg in self.extra_args
327 ])
328
329
330 class ListLabExtensionsApp(BaseExtensionApp):
331 description = "List the installed labextensions"
332
333 def run_task(self):
334 list_extensions(app_options=AppOptions(
335 app_dir=self.app_dir, logger=self.log, core_config=self.core_config,
336 labextensions_path=self.labextensions_path))
337
338
339 class EnableLabExtensionsApp(BaseExtensionApp):
340 description = "Enable labextension(s) by name"
341 aliases = enable_aliases
342
343 level = Unicode('sys_prefix', help="Level at which to enable: sys_prefix, user, system").tag(config=True)
344
345 def run_task(self):
346 app_options = AppOptions(
347 app_dir=self.app_dir, logger=self.log, core_config=self.core_config,
348 labextensions_path=self.labextensions_path)
349 [enable_extension(arg, app_options=app_options, level=self.level) for arg in self.extra_args]
350
351
352 class DisableLabExtensionsApp(BaseExtensionApp):
353 description = "Disable labextension(s) by name"
354 aliases = disable_aliases
355
356 level = Unicode('sys_prefix', help="Level at which to disable: sys_prefix, user, system").tag(config=True)
357
358 def run_task(self):
359 app_options = AppOptions(
360 app_dir=self.app_dir, logger=self.log, core_config=self.core_config,
361 labextensions_path=self.labextensions_path)
362 [disable_extension(arg, app_options=app_options, level=self.level) for arg in self.extra_args]
363
364
365 class CheckLabExtensionsApp(BaseExtensionApp):
366 description = "Check labextension(s) by name"
367 flags = check_flags
368
369 should_check_installed_only = Bool(False, config=True,
370 help="Whether it should check only if the extension is installed")
371
372 def run_task(self):
373 app_options = AppOptions(
374 app_dir=self.app_dir, logger=self.log, core_config=self.core_config,
375 labextensions_path=self.labextensions_path)
376 all_enabled = all(
377 check_extension(
378 arg,
379 installed=self.should_check_installed_only,
380 app_options=app_options)
381 for arg in self.extra_args)
382 if not all_enabled:
383 self.exit(1)
384
385
386 _examples = """
387 jupyter labextension list # list all configured labextensions
388 jupyter labextension develop # develop a prebuilt labextension
389 jupyter labextension build # build a prebuilt labextension
390 jupyter labextension watch # watch a prebuilt labextension
391 jupyter labextension install <extension name> # install a labextension
392 jupyter labextension uninstall <extension name> # uninstall a labextension
393 """
394
395
396 class LabExtensionApp(JupyterApp):
397 """Base jupyter labextension command entry point"""
398 name = "jupyter labextension"
399 version = VERSION
400 description = "Work with JupyterLab extensions"
401 examples = _examples
402
403 subcommands = dict(
404 install=(InstallLabExtensionApp, "Install labextension(s)"),
405 develop=(DevelopLabExtensionApp, "Develop labextension(s)"),
406 build=(BuildLabExtensionApp, "Build labextension"),
407 watch=(WatchLabExtensionApp, "Watch labextension"),
408 update=(UpdateLabExtensionApp, "Update labextension(s)"),
409 uninstall=(UninstallLabExtensionApp, "Uninstall labextension(s)"),
410 list=(ListLabExtensionsApp, "List labextensions"),
411 link=(LinkLabExtensionApp, "Link labextension(s)"),
412 unlink=(UnlinkLabExtensionApp, "Unlink labextension(s)"),
413 enable=(EnableLabExtensionsApp, "Enable labextension(s)"),
414 disable=(DisableLabExtensionsApp, "Disable labextension(s)"),
415 check=(CheckLabExtensionsApp, "Check labextension(s)"),
416 )
417
418 def start(self):
419 """Perform the App's functions as configured"""
420 super(LabExtensionApp, self).start()
421
422 # The above should have called a subcommand and raised NoStart; if we
423 # get here, it didn't, so we should self.log.info a message.
424 subcmds = ", ".join(sorted(self.subcommands))
425 self.exit("Please supply at least one subcommand: %s" % subcmds)
426
427
428 main = LabExtensionApp.launch_instance
429
430 if __name__ == '__main__':
431 sys.exit(main())
432
[end of jupyterlab/labextensions.py]
[start of jupyterlab/utils.py]
1 import functools
2 import warnings
3
4
5 class jupyterlab_deprecation(Warning):
6 """Create our own deprecation class, since Python >= 2.7
7 silences deprecations by default.
8 """
9 pass
10
11
12 class deprecated(object):
13 """Decorator to mark deprecated functions with warning.
14 Adapted from `scikit-image/skimage/_shared/utils.py`.
15
16 Parameters
17 ----------
18 alt_func : str
19 If given, tell user what function to use instead.
20 behavior : {'warn', 'raise'}
21 Behavior during call to deprecated function: 'warn' = warn user that
22 function is deprecated; 'raise' = raise error.
23 removed_version : str
24 The package version in which the deprecated function will be removed.
25 """
26
27 def __init__(self, alt_func=None, behavior='warn', removed_version=None):
28 self.alt_func = alt_func
29 self.behavior = behavior
30 self.removed_version = removed_version
31
32 def __call__(self, func):
33
34 alt_msg = ''
35 if self.alt_func is not None:
36 alt_msg = ' Use ``%s`` instead.' % self.alt_func
37 rmv_msg = ''
38 if self.removed_version is not None:
39 rmv_msg = (' and will be removed in version %s' %
40 self.removed_version)
41
42 msg = ('Function ``%s`` is deprecated' % func.__name__ +
43 rmv_msg + '.' + alt_msg)
44
45 @functools.wraps(func)
46 def wrapped(*args, **kwargs):
47 if self.behavior == 'warn':
48 func_code = func.__code__
49 warnings.simplefilter('always', jupyterlab_deprecation)
50 warnings.warn_explicit(msg,
51 category=jupyterlab_deprecation,
52 filename=func_code.co_filename,
53 lineno=func_code.co_firstlineno + 1)
54 elif self.behavior == 'raise':
55 raise jupyterlab_deprecation(msg)
56 return func(*args, **kwargs)
57
58 # modify doc string to display deprecation warning
59 doc = '**Deprecated function**.' + alt_msg
60 if wrapped.__doc__ is None:
61 wrapped.__doc__ = doc
62 else:
63 wrapped.__doc__ = doc + '\n\n ' + wrapped.__doc__
64
65 return wrapped
66
[end of jupyterlab/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
jupyterlab/jupyterlab
|
5340e7161b4c5b67cd3e8184a018e142af86b22c
|
`ArgumentConflict` is not defined
## Description
When I add
```python
c.DevelopLabExtensionApp.labextensions_dir=<some_path>
```
to a custom configuration
I get the error:
```traceback
Traceback (most recent call last):
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/debuglog.py", line 47, in debug_logging
yield
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/labextensions.py", line 127, in start
ans = self.run_task()
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/labextensions.py", line 199, in run_task
develop_labextension_py(arg, user=self.user, sys_prefix=self.sys_prefix, labextensions_dir=self.labextensions_dir, logger=self.log, overwrite=self.overwrite,
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 157, in develop_labextension_py
full_dest = develop_labextension(
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 83, in develop_labextension
labext = _get_labextension_dir(user=user, sys_prefix=sys_prefix, labextensions_dir=labextensions_dir)
File "/home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/_build/pip_packages/lib/python3.8/site-packages/jupyterlab/federated_labextensions.py", line 332, in _get_labextension_dir
raise ArgumentConflict(
```
It looks like this `ArgumentConflict` error isn't defined anywhere in jupyterlab.
I can submit a PR to fix this later today by adding an `ArgumentConflict` class that subclasses from `traitlets.TraitError`. I'm open to other recommendations as well, though.
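For illustration, a minimal sketch of what that option could look like (hypothetical code, not an actual patch; the docstring wording is only a placeholder):
```python
# Hypothetical sketch: define the missing exception as a traitlets.TraitError
# subclass so callers can still treat it as a configuration error.
from traitlets import TraitError


class ArgumentConflict(TraitError):
    """Raised when mutually exclusive labextension location arguments are given."""
```
Importing an existing definition from an upstream package would work just as well, if one is available.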
## Reproduce
add
```python
c.DevelopLabExtensionApp.labextensions_dir=<some_path>
```
to a `jupyter_config.py` from a jupyter lab extension repo and run:
```bash
jupyter labextension develop . --config jupyter_config.py
```
## Expected behavior
`ArgumentConflict` is properly raised
## Context
- Operating System and version: Ubuntu 18.04.5
- Browser and version: N/A
- JupyterLab version: 3.0.6
<details><summary>Command Line Output</summary>
<pre>
Installing /home/andrew/Dropbox/quansight/jupyter/jupyterlab-git/jupyterlab_git/labextension -> @jupyterlab/git
An error occured.
NameError: name 'ArgumentConflict' is not defined
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
N/A
</pre>
</details>
|
Yep, I think you're right that the def of `ArgumentConflict` is missing. Previously, it looks like the def was in [`notebook.extensions`](https://github.com/jupyter/notebook/blob/7f7ea8e85c9c90b264317a7c2f4995ab99fd8f60/notebook/extensions.py#L15), but now that we've switched over to [`jupyter_server`](https://github.com/jupyter-server/jupyter_server/blob/b2e089506baf0723fe63112cb9df7ef8b493d093/jupyter_server/extension/serverextension.py#L72) we'll have to add:
```python
from jupyter_server.extension.serverextension import ArgumentConflict
```
@andrewfulton9 If you want the honors, could you please submit a PR that adds the above import right after the following lines:
https://github.com/jupyterlab/jupyterlab/blob/5340e7161b4c5b67cd3e8184a018e142af86b22c/jupyterlab/federated_labextensions.py#L19-L24
Thanks for spotting this!
|
2021-02-08T21:46:18Z
|
<patch>
diff --git a/jupyterlab/federated_labextensions.py b/jupyterlab/federated_labextensions.py
--- a/jupyterlab/federated_labextensions.py
+++ b/jupyterlab/federated_labextensions.py
@@ -22,6 +22,7 @@
from jupyter_core.utils import ensure_dir_exists
from ipython_genutils.py3compat import cast_unicode_py2
from jupyterlab_server.config import get_federated_extensions
+from jupyter_server.extension.serverextension import ArgumentConflict
from .commands import _test_overlap
</patch>
|
[]
|
[]
| |||
huggingface__transformers-18358
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Global/local import with replicated name in the Trainer leading to UnboundLocalError
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
### Who can help?
@pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the `run_glue`([optimum version](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/text-classification/run_glue.py)) with the distributed launcher
```
python -m torch.distributed.run --nproc_per_node=2 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge --task_name MRPC --do_train --output_dir /tmp/deberta_res --fp16 --sharded_ddp simple --num_train_epochs 1
```
Error message:
```
Traceback (most recent call last):
File "run_glue.py", line 610, in <module>
main()
File "run_glue.py", line 503, in main
trainer = ORTTrainer(
File "/workspace/optimum/onnxruntime/trainer.py", line 144, in __init__
super().__init__(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 569, in __init__
self.scaler = ShardedGradScaler()
UnboundLocalError: local variable 'ShardedGradScaler' referenced before assignment
```
### Expected behavior
`ShardedGradScaler` was first imported as a global variable
https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L190
Then it was imported again, under the same name, as a local variable for the fsdp case
https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L568
And it won't fall back to the global `ShardedGradScaler` even when the local one is not imported, leading to an `UnboundLocalError`.
P.S. However, I don't have a problem running `run_glue.py` in transformers; the problem seems to occur when using classes inherited from `Trainer`.
Possible solution: use a different name, or do both imports locally.
*REF:*
*https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value*
*https://stackoverflow.com/questions/58750517/why-unboundlocalerror-occurs-when-importing-inside-function*
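A minimal standalone sketch of the same pitfall (none of this is `transformers` code; `math` just stands in for `ShardedGradScaler`):
```python
# Python decides at compile time which names are local to a function.
# Because `import math` appears in the function body, `math` is treated as
# local for the *whole* function, so the module-level binding is never used
# as a fallback when the conditional import does not run.
import math  # module-level binding of the name `math`


def get_pi(use_local_import: bool) -> float:
    if use_local_import:
        import math  # rebinds `math` as a function-local name
    return math.pi  # UnboundLocalError if the branch above was skipped


def get_pi_fixed(use_local_import: bool) -> float:
    if use_local_import:
        import math as local_math  # different name: no shadowing
        return local_math.pi
    return math.pi  # resolves to the module-level import


print(get_pi(True))         # works: the local import ran
print(get_pi_fixed(False))  # works: falls back to the global import
try:
    get_pi(False)
except UnboundLocalError as exc:
    print(exc)  # e.g. "local variable 'math' referenced before assignment" on Python 3.8
```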
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <b>English</b> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
59
60 These models can be applied on:
61
62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.
63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation.
64 * 🗣️ Audio, for tasks like speech recognition and audio classification.
65
66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
67
68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
69
70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
71
72 ## Online demos
73
74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
75
76 Here are a few examples:
77
78 In Natural Language Processing:
79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
80 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
86
87 In Computer Vision:
88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
90 - [Image Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
91
92 In Audio:
93 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
94 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
95
96 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
97
98 ## If you are looking for custom support from the Hugging Face team
99
100 <a target="_blank" href="https://huggingface.co/support">
101 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
102 </a><br>
103
104 ## Quick tour
105
106 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
107
108 ```python
109 >>> from transformers import pipeline
110
111 # Allocate a pipeline for sentiment-analysis
112 >>> classifier = pipeline('sentiment-analysis')
113 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
114 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
115 ```
116
117 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
118
119 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
120
121 ```python
122 >>> import requests
123 >>> from PIL import Image
124 >>> from transformers import pipeline
125
126 # Download an image with cute cats
127 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
128 >>> image_data = requests.get(url, stream=True).raw
129 >>> image = Image.open(image_data)
130
131 # Allocate a pipeline for object detection
132 >>> object_detector = pipeline('object-detection')
133 >>> object_detector(image)
134 [{'score': 0.9982201457023621,
135 'label': 'remote',
136 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
137 {'score': 0.9960021376609802,
138 'label': 'remote',
139 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
140 {'score': 0.9954745173454285,
141 'label': 'couch',
142 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
143 {'score': 0.9988006353378296,
144 'label': 'cat',
145 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
146 {'score': 0.9986783862113953,
147 'label': 'cat',
148 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
149 ```
150
151 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right:
152
153 <h3 align="center">
154 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
155 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
156 </h3>
157
158 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
159
160 To download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
161 ```python
162 >>> from transformers import AutoTokenizer, AutoModel
163
164 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
165 >>> model = AutoModel.from_pretrained("bert-base-uncased")
166
167 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
168 >>> outputs = model(**inputs)
169 ```
170
171 And here is the equivalent code for TensorFlow:
172 ```python
173 >>> from transformers import AutoTokenizer, TFAutoModel
174
175 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
176 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
177
178 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
179 >>> outputs = model(**inputs)
180 ```
181
182 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or pass directly to your model using the `**` argument unpacking operator.
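
For example, the tokenizer can pad and truncate a whole batch of sentences and return framework-specific tensors in a single call. The sketch below uses the PyTorch setting; the exact keys of the returned dictionary depend on the checkpoint (the ones shown are those of `bert-base-uncased`):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> batch = tokenizer(
...     ["Hello world!", "A slightly longer sentence to show padding."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> sorted(batch.keys())
['attention_mask', 'input_ids', 'token_type_ids']
>>> # The dictionary can then be passed to the model with `model(**batch)`
```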
183
184 The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
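
As a rough sketch of the `Trainer` path, the snippet below fine-tunes a sequence classification head on a tiny hand-written dataset. The texts, labels and hyper-parameters are placeholders meant only to show the moving parts; the fine-tuning tutorial linked above covers the real workflow (datasets, metrics, evaluation):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy dataset; in practice you would load your own data or use the 🤗 Datasets library
texts = ["I love this!", "This is terrible."]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, truncation=True)

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Wrap the tokenized inputs and the label for one example into tensors
        item = {key: torch.tensor(values[idx]) for key, values in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

training_args = TrainingArguments(output_dir="toy_trainer_output", num_train_epochs=1)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```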
185
186 ## Why should I use transformers?
187
188 1. Easy-to-use state-of-the-art models:
189 - High performance on natural language understanding & generation, computer vision, and audio tasks.
190 - Low barrier to entry for educators and practitioners.
191 - Few user-facing abstractions with just three classes to learn.
192 - A unified API for using all our pretrained models.
193
194 1. Lower compute costs, smaller carbon footprint:
195 - Researchers can share trained models instead of always retraining.
196 - Practitioners can reduce compute time and production costs.
197 - Dozens of architectures with over 20,000 pretrained models, some in more than 100 languages.
198
199 1. Choose the right framework for every part of a model's lifetime:
200 - Train state-of-the-art models in 3 lines of code.
201 - Move a single model between TF2.0/PyTorch/JAX frameworks at will.
202 - Seamlessly pick the right framework for training, evaluation and production.
203
204 1. Easily customize a model or an example to your needs:
205 - We provide examples for each architecture to reproduce the results published by its original authors.
206 - Model internals are exposed as consistently as possible.
207 - Model files can be used independently of the library for quick experiments.
208
209 ## Why shouldn't I use transformers?
210
211 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
212 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
213 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
214
215 ## Installation
216
217 ### With pip
218
219 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
220
221 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
222
223 First, create a virtual environment with the version of Python you're going to use and activate it.
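
For example, with the `venv` module that ships with Python (any other virtual environment tool works just as well):

```bash
python -m venv .env
source .env/bin/activate
# On Windows, activate with: .env\Scripts\activate
```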
224
225 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
226 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific install command for your platform.
227
228 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
229
230 ```bash
231 pip install transformers
232 ```
233
234 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
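
In most cases this boils down to installing straight from the GitHub repository, along the lines of the command below; the linked installation page has the authoritative instructions:

```bash
pip install git+https://github.com/huggingface/transformers
```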
235
236 ### With conda
237
238 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
239
240 🤗 Transformers can be installed using conda as follows:
241
242 ```shell
243 conda install -c huggingface transformers
244 ```
245
246 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
247
248 ## Model architectures
249
250 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
251
252 Current number of checkpoints: 
253
254 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
255
256 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
257 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
258 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
259 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
260 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
261 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
262 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
263 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
264 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
265 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
266 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
267 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
268 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
269 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
270 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
271 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
272 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
273 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
274 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
275 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
276 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
277 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
278 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
279 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
280 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
281 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
282 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
283 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
284 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
285 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
286 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
287 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
288 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
289 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
290 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
291 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
292 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
293 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
294 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
295 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
296 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
297 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
298 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
299 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
300 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
301 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
302 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
303 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
304 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
305 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
306 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
307 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
308 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
309 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
310 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
311 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
312 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
313 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
314 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
315 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
316 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
317 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
318 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
319 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
320 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
321 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
322 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
323 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
324 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
325 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
326 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
327 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
328 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
329 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
330 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
331 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
332 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
333 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
334 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
335 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
336 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
337 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from DeepMind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
338 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
339 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
340 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
341 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
342 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
343 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
344 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
345 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
346 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
347 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
348 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
349 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
350 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
351 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
352 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
353 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
354 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
355 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
356 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
357 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
358 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
359 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
360 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
361 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
362 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
363 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
364 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
365 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
366 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
367 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
368 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
369 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
370 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
371 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
372 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
373 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
374 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
375 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
376 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
377 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
378 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
379 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
380 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
381 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
382 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
383 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
384 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
385 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
386 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
387 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
388 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
389 1. Want to contribute a new model? We have added a **detailed guide and templates** to help you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
390
391 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
392
393 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
394
395
396 ## Learn more
397
398 | Section | Description |
399 |-|-|
400 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
401 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
402 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
403 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
404 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
405 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
406 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
407
408 ## Citation
409
410 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
411 ```bibtex
412 @inproceedings{wolf-etal-2020-transformers,
413 title = "Transformers: State-of-the-Art Natural Language Processing",
414 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
415 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
416 month = oct,
417 year = "2020",
418 address = "Online",
419 publisher = "Association for Computational Linguistics",
420 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
421 pages = "38--45"
422 }
423 ```
424
[end of README.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <b>한국어</b>
47 <p>
48 </h4>
49
50 <h3 align="center">
51     <p> State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks such as classification, information extraction, question answering, summarization, translation and text generation in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone.
59 
60 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community or on our [model hub](https://huggingface.co/models). At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
61 
62 🤗 Transformers is backed by the three most popular deep learning libraries, [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them. It's straightforward to train your models with one of them and then load them for inference with another.
63
64 ## Online demos
65 
66 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
67 
68 Here are a few examples:
69 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Named entity recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [Natural language inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76 
77 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo's text generation capabilities.
78
79 ## If you are looking for custom support from the Hugging Face team
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## Quick tour
86 
87 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is a quick example of using a pipeline to classify positive versus negative texts:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the model judged the text to be positive with a confidence of 99.97%.
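
As a small illustrative extension of the snippet above (the example sentences are arbitrary, and the exact labels and scores depend on the default checkpoint the pipeline downloads), the same `classifier` also accepts a list of sentences and returns one prediction per input:

```python
>>> results = classifier([
...     "We are very happy to show you the 🤗 Transformers library.",
...     "We hope you don't hate it.",
... ])
>>> [result["label"] for result in results]  # one {'label', 'score'} dict per input sentence
['POSITIVE', 'NEGATIVE']
```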
99
100 Many NLP tasks can be performed out of the box with a `pipeline`. For example, given a question and some context, we can easily extract the answer:
101
102 ``` python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 In addition to the answer, the pretrained model used here also returned its confidence score, along with the start and end positions of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
116 
117 To download and use a pretrained model for your task of choice, all it takes is three lines of code. Here is the PyTorch version:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 And here is the equivalent code for TensorFlow:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called on a single string (as in the examples above) or on a list. It outputs a dictionary that you can use in downstream code or simply pass directly to your model using the ** argument unpacking operator.
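
For illustration, here is a minimal sketch of what that dictionary looks like, reusing the `tokenizer` and `model` objects from the PyTorch snippet above (the example sentences are arbitrary, and the exact keys depend on the model):

```python
>>> batch = tokenizer(["Hello world!", "Transformers is great"], padding=True, return_tensors="pt")
>>> sorted(batch.keys())  # for bert-base-uncased these are the expected keys
['attention_mask', 'input_ids', 'token_type_ids']
>>> outputs = model(**batch)  # the dict can be unpacked straight into the model
```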
139
140 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model), which you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
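
For instance, here is a minimal sketch of fine-tuning with the `Trainer` API; the dataset choice (IMDb), the subset size and the hyperparameters are illustrative assumptions rather than recommendations, and it also assumes the 🤗 Datasets library is installed:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize a small text-classification dataset (IMDb is just an example choice)
dataset = load_dataset("imdb")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

training_args = TrainingArguments(output_dir="test_trainer", num_train_epochs=1)
trainer = Trainer(
    model=model,
    args=training_args,
    # Train on a small subset so the sketch runs quickly
    train_dataset=dataset["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```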
141
142 ## Why should I use transformers?
143 
144 1. Easy-to-use state-of-the-art models:
145   - High performance on NLU and NLG tasks.
146   - Low barrier to entry for educators and practitioners.
147   - Few user-facing abstractions with just three classes to learn.
148   - A unified API for using all our pretrained models.
149 
150 1. Lower compute costs, smaller carbon footprint:
151   - Researchers can share trained models instead of always retraining.
152   - Practitioners can reduce compute time and production costs.
153   - Dozens of architectures, over 2,000 pretrained models, some in more than 100 languages.
154 
155 1. Choose the right framework for every part of a model's lifetime:
156   - Train state-of-the-art models in 3 lines of code.
157   - Move a single model between TF2.0 and PyTorch frameworks at will.
158   - Seamlessly pick the right framework for training, evaluation and production.
159 
160 1. Easily customize a model or an example to your needs:
161   - We provide examples for each architecture to reproduce the results published by its original authors.
162   - Model internals are exposed as consistently as possible.
163   - Model files can be used independently of the library for quick experiments.
164 
165 ## Why shouldn't I use transformers?
166 
167 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is deliberately kept at a moderate level of abstraction, so that researchers can quickly iterate on each model without diving into additional files.
168 - The training API is not intended to work on any model; it is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
169 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. They may not work out of the box on your specific problem, and you may need to change a few lines of code to adapt them to your needs.
170
171 ## Installation
172 
173 ### With pip
174 
175 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
176 
177 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
178 
179 First, create a virtual environment with the version of Python you're going to use and activate it.
180 
181 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
182 Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax installation page](https://github.com/google/flax#quick-install) for the specific install command for your platform.
183 
184 When at least one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
185
186 ```bash
187 pip install transformers
188 ```
189
190 If you'd like to play with the examples, need the bleeding-edge code, or can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
191
192 ### With conda
193 
194 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
195 
196 🤗 Transformers can be installed using conda as follows:
197
198 ```shell script
199 conda install -c huggingface transformers
200 ```
201
202 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
203
204 ## Model architectures
205 
206 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
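
As a quick, hedged sketch of what sharing a checkpoint can look like (the repository name `my-username/my-finetuned-bert` is a placeholder, and this assumes you are already logged in, e.g. via `huggingface-cli login`):

```python
from transformers import AutoModel, AutoTokenizer

# Load (or fine-tune) a model and its tokenizer, then push both to the Hub
model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.push_to_hub("my-username/my-finetuned-bert")
tokenizer.push_to_hub("my-username/my-finetuned-bert")
```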
207
208 Current number of checkpoints: 
209 
210 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
225 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
226 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
227 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
228 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
229 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
230 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
231 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
232 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
233 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
234 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
235 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
236 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
237 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
238 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
239 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
240 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
241 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
242 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
243 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
244 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
245 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
246 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
247 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
248 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
249 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
250 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
251 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
252 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
253 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
254 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
255 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
256 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
257 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
258 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
259 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
260 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
261 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
262 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
263 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
264 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
265 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
266 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
267 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
268 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
269 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
270 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
271 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
272 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
273 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
274 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
275 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
276 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
277 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
278 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
279 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
280 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
281 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
282 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
283 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
284 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
285 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
286 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
287 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
288 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
289 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
290 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
291 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
292 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
293 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
294 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
295 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
296 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
297 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
298 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
299 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
300 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
301 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
302 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
303 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
304 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
305 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
306 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
307 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
308 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
309 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
310 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
311 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
312 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
313 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
314 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
315 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
316 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
317 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
318 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
319 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
320 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
321 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
322 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
323 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
324 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
325 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
326 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
327 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
328 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
329 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
330 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
331 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
332 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
333 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
334 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
335 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
336 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
337 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
338 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
339 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
340 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
341 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
342 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
343 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
344 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
345 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다.
346
347 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요.
348
349 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
350
351 ## 더 알아보기
352
353 | 섹션 | 설명 |
354 |-|-|
355 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 |
356 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 |
357 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 |
358 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델을 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 |
359 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 |
360 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 |
361 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기|
362
363 ## 인용
364
365 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요:
366 ```bibtex
367 @inproceedings{wolf-etal-2020-transformers,
368 title = "Transformers: State-of-the-Art Natural Language Processing",
369 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
370 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
371 month = oct,
372 year = "2020",
373 address = "Online",
374 publisher = "Association for Computational Linguistics",
375 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
376 pages = "38--45"
377 }
378 ```
379
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
72 <p>
73 </h4>
74
75 <h3 align="center">
76 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p>
77 </h3>
78
79 <h3 align="center">
80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
81 </h3>
82
83 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨是让最先进的 NLP 技术人人易用。
84
85 🤗 Transformers 提供了便于快速下载和使用的 API,让你可以把预训练模型用在给定文本上、在你的数据集上微调,然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。
86
87 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/)、[PyTorch](https://pytorch.org/) 和 [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型,然后用另一个框架加载和推理。
88
89 ## 在线演示
90
91 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。
92
93 这里是一些例子:
94 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
95 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
96 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
97 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
98 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
99 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
100 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
101
102 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。
103
104 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务
105
106 <a target="_blank" href="https://huggingface.co/support">
107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
108 </a><br>
109
110 ## 快速上手
111
112 我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子:
113
114 ```python
115 >>> from transformers import pipeline
116
117 # 使用情绪分析流水线
118 >>> classifier = pipeline('sentiment-analysis')
119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
121 ```
122
123 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有约 99.97% 的置信度。
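
作为补充,下面是一个最小的示意(仍假设使用默认的情绪分析模型):流水线同样可以接收字符串列表,并返回与输入一一对应的结果列表。

```python
>>> from transformers import pipeline

# 情绪分析流水线也接受字符串列表,逐条返回 label 和 score
>>> classifier = pipeline('sentiment-analysis')
>>> results = classifier([
...     "We are very happy to introduce pipeline to the transformers repository.",
...     "We hope you don't hate it.",
... ])
>>> for result in results:
...     print(result['label'], round(result['score'], 4))
```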
124
125 许多 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松地从给定文本中抽取问题答案:
126
127 ``` python
128 >>> from transformers import pipeline
129
130 # 使用问答流水线
131 >>> question_answerer = pipeline('question-answering')
132 >>> question_answerer({
133 ... 'question': 'What is the name of the repository ?',
134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
135 ... })
136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
137
138 ```
139
140 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线 API 支持的任务。
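
下面再给出一个简短的示意(基于零样本分类流水线,假设使用其默认模型),展示问答之外另一种开箱即用的任务:

```python
>>> from transformers import pipeline

# 零样本分类:无需微调即可把文本归入任意给定的候选标签
>>> classifier = pipeline('zero-shot-classification')
>>> classifier(
...     "Transformers provides thousands of pretrained models.",
...     candidate_labels=["machine learning", "cooking", "sports"],
... )
```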
141
142 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例:
143 ```python
144 >>> from transformers import AutoTokenizer, AutoModel
145
146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
147 >>> model = AutoModel.from_pretrained("bert-base-uncased")
148
149 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
150 >>> outputs = model(**inputs)
151 ```
152 这里是等效的 TensorFlow 代码:
153 ```python
154 >>> from transformers import AutoTokenizer, TFAutoModel
155
156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
158
159 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
160 >>> outputs = model(**inputs)
161 ```
162
163 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。
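
下面是一个最小的示意(沿用上文的 `bert-base-uncased`;`padding`、`truncation` 是词符化器常用的可选参数),展示对列表调用词符化器并用 `**` 解包传给模型:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# 对字符串列表做批量词符化,填充/截断到同一长度,并返回 PyTorch 张量
>>> batch = tokenizer(
...     ["Hello world!", "Transformers are amazing."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)  # 将得到的词典用 ** 解包后直接传给模型
```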
164
165 模型本身是一个常规的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以按常规方式使用。[这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer`(训练器)API 来在一个新的数据集上快速微调。
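
下面给出一个最小的 `Trainer` 微调草图,仅用于展示调用方式;其中 `ToyDataset`、示例文本和标签都是假设的玩具数据,并非完整的训练配置:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# 假设的玩具数据,仅作演示
train_texts = ["I love this movie!", "This was terrible."]
train_labels = [1, 0]
encodings = tokenizer(train_texts, padding=True, truncation=True)


class ToyDataset(torch.utils.data.Dataset):
    """把词符化结果和标签包装成 PyTorch 数据集。"""

    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(values[idx]) for key, values in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


training_args = TrainingArguments(output_dir="toy_output", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(encodings, train_labels))
trainer.train()
```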
166
167 ## 为什么要用 transformers?
168
169 1. 便于使用的先进模型:
170 - NLU 和 NLG 上表现优越
171 - 对教学和实践友好且低门槛
172 - 高级抽象,只需了解三个类
173 - 对所有模型统一的API
174
175 1. 更低计算开销,更少的碳排放:
176     - 研究人员可以分享已训练的模型而非次次从头开始训练
177 - 工程师可以减少计算用时和生产环境开销
178 - 数十种模型架构、两千多个预训练模型、100多种语言支持
179
180 1. 对于模型生命周期的每一个部分都面面俱到:
181 - 训练先进的模型,只需 3 行代码
182 - 模型在不同深度学习框架间任意转移,随你心意
183 - 为训练、评估和生产选择最适合的框架,衔接无缝
184
185 1. 为你的需求轻松定制专属模型和用例:
186 - 我们为每种模型架构提供了多个用例来复现原论文结果
187 - 模型内部结构保持透明一致
188 - 模型文件可单独使用,方便魔改和快速实验
189
190 ## 什么情况下我不该用 transformers?
191
192 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。
193 - `Trainer` API 并非能兼容任意模型,它只为本库中的模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。
194 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。
195
196 ## 安装
197
198 ### 使用 pip
199
200 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。
201
202 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
203
204 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。
205
206 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。
207
208 当这些后端之一安装成功后, 🤗 Transformers 可依此安装:
209
210 ```bash
211 pip install transformers
212 ```
213
214 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。
215
216 ### 使用 conda
217
218 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。
219
220 🤗 Transformers 可以通过 conda 依此安装:
221
222 ```shell script
223 conda install -c huggingface transformers
224 ```
225
226 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。
227
228 ## 模型架构
229
230 🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。
231
232 目前的检查点数量: 
233
234 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)):
235
236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
241 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
248 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
249 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
250 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
251 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
252 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
253 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
254 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
255 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
256 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。
257 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
258 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
259 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。
260 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。
261 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
262 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
263 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。
264 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
265 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
266 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
267 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。
268 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。
269 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
270 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。
271 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
272 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
273 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
274 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。
275 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
276 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
277 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。
278 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
279 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang 和 Connor Leahy。
280 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
281 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
282 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 随仓库 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 发布,作者为 Ben Wang 和 Aran Komatsuzaki。
283 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
284 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
285 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
286 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
287 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
288 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
289 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。
290 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
291 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
292 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。
293 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
294 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。
295 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
296 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
297 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。
298 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
299 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
300 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
301 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
302 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
303 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
304 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
305 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。
306 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。
307 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。
308 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
309 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
310 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。
311 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。
312 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
313 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。
314 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
315 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
316 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
317 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
318 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
319 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。
320 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。
321 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
322 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
323 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。
324 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。
325 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
326 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
327 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
328 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
329 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
330 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
331 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
332 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
333 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
334 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
335 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
336 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
337 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
338 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。
339 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。
340 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
341 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
342 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
343 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。
344 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
345 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
346 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
347 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
348 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
349 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
350 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。
351 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。
352 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
353 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
354 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。
355 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
356 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。
357 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。
358 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
359 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
360 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
361 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
362 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
363 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
364 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
365 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
366 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
367 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
368 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
369 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到它们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
370
371 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
372
373 这些实现均已在多个数据集上测试(请参阅用例脚本)并应与原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
374
375
376 ## 了解更多
377
378 | 章节 | 描述 |
379 |-|-|
380 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 |
381 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
382 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
383 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
384 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 |
385 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 |
386 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
387
388 ## 引用
389
390 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
391 ```bibtex
392 @inproceedings{wolf-etal-2020-transformers,
393 title = "Transformers: State-of-the-Art Natural Language Processing",
394 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
395 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
396 month = oct,
397 year = "2020",
398 address = "Online",
399 publisher = "Association for Computational Linguistics",
400 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
401 pages = "38--45"
402 }
403 ```
404
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
57 <br>
58 <p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
84 <p>
85 </h4>
86
87 <h3 align="center">
88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
89 </h3>
90
91 <h3 align="center">
92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
93 </h3>
94
95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
96
97 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
98
99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
100
101 ## 線上Demo
102
103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。
104
105 這裡是一些範例:
106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
113
114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
115
116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
117
118 <a target="_blank" href="https://huggingface.co/support">
119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
120 </a><br>
121
122 ## 快速上手
123
124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
125
126 ```python
127 >>> from transformers import pipeline
128
129 # 使用情緒分析 pipeline
130 >>> classifier = pipeline('sentiment-analysis')
131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
133 ```
134
135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案“正面” (positive) 具有 99.97% 的信賴度。
136
137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
138
139 ``` python
140 >>> from transformers import pipeline
141
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ... 'question': 'What is the name of the repository ?',
146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149
150 ```
151
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
153
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
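
以下是一個小示意(沿用上面的 `bert-base-uncased`;`padding`、`truncation` 等參數僅為常見用法的示範),展示 tokenizer 如何一次處理整個串列 (list),並回傳一個可直接用 `**` 傳給模型的字典 (dict):

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# 一次轉換多個句子;padding 與 truncation 會把批次內的序列補齊或截斷成相同長度
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is a library."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> sorted(batch.keys())
['attention_mask', 'input_ids', 'token_type_ids']

# 輸出的字典可以直接藉由 ** 運算式傳給模型
>>> outputs = model(**batch)
```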
176
177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
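
下面是一個極簡的示意(`texts`、`labels` 是假設的小批次資料,僅示範概念,並非完整的訓練腳本),展示如何把模型當作一般的 PyTorch `nn.Module` 放進標準訓練迴圈:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# 假設的小批次資料,僅供示範
texts = ["I love this movie!", "This was terrible."]
labels = torch.tensor([1, 0])

model.train()
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)  # 傳入 labels 時,模型會一併回傳 loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

若不想自己維護這樣的迴圈,也可以改用 `Trainer` API,由它處理批次、評估與記錄等細節。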
178
179 ## 為什麼要用 transformers?
180
181 1. 便於使用的先進模型:
182 - NLU 和 NLG 上性能卓越
183 - 對教學和實作友好且低門檻
184 - 高度抽象,使用者只須學習 3 個類別
185     - 對所有模型使用的制式化 API
186
187 1. 更低的運算成本,更少的碳排放:
188 - 研究人員可以分享預訓練的模型而非從頭開始訓練
189 - 工程師可以減少計算時間以及生產成本
190     - 數十種模型架構、兩千多個預訓練模型、100 多種語言支援
191
192 1. 對於模型生命週期的每一個部分都面面俱到:
193 - 訓練先進的模型,只需 3 行程式碼
194     - 模型可以在不同深度學習框架之間任意轉換(參見本節末的示意)
195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198 - 我們為每種模型架構提供了多個範例來重現原論文結果
199 - 一致的模型內部架構
200 - 模型檔案可單獨使用,便於修改和快速實驗
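
關於「模型可以在不同深度學習框架之間任意轉換」,下面是一個簡單示意(`./my-model` 是假設的本地路徑,僅供示範):先以 PyTorch 儲存模型,再用 TensorFlow 藉由 `from_pt=True` 載入同一份權重。

```python
from transformers import AutoModel, AutoTokenizer, TFAutoModel

# 以 PyTorch 載入並儲存模型與 tokenizer(實務上這裡通常是你微調後的模型)
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./my-model")  # 假設的本地路徑,僅供示範
AutoTokenizer.from_pretrained("bert-base-uncased").save_pretrained("./my-model")

# 用 TensorFlow 載入同一份權重;from_pt=True 會在載入時轉換 PyTorch 權重
tf_model = TFAutoModel.from_pretrained("./my-model", from_pt=True)
```

反過來,也可以用 `AutoModel.from_pretrained(..., from_tf=True)` 把 TensorFlow 權重載入成 PyTorch 模型。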
201
202 ## 什麼情況下我不該用 transformers?
203
204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
207
208 ## 安裝
209
210 ### 使用 pip
211
212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
213
214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
215
216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
217
218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
219
220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
221
222 ```bash
223 pip install transformers
224 ```
225
226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
227
228 ### 使用 conda
229
230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
231
232 🤗 Transformers 可以藉由 conda 依此安裝:
233
234 ```bash
235 conda install -c huggingface transformers
236 ```
237
238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
239
240 ## 模型架構
241
242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
243
244 目前的檢查點數量: 
245
246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
247
248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
253 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
261 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
264 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
265 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
266 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
267 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
268 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
269 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
270 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
271 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
272 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
273 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
274 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
275 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
276 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
277 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
278 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
279 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
280 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
281 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
282 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
283 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
284 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
285 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
286 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
287 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
288 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
289 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
290 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
291 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
292 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
293 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
294 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
295 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
296 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
297 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
298 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
299 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
300 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
301 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
302 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
303 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
304 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
305 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
306 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
307 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
308 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
309 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
310 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
311 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
312 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
313 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
314 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
315 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
316 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
317 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
318 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
319 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
320 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
321 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
322 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
323 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
324 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
325 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
326 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
327 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
328 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
329 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
330 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
331 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
332 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
333 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
334 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
335 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
336 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
337 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
338 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from Meta Research) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
339 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
340 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
341 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
342 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
343 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
344 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
345 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
346 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
347 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
348 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
349 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
350 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
351 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
352 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
353 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
354 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
355 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
356 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
357 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
358 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
359 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
360 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
361 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
362 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
363 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
364 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
365 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
366 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
367 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
368 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
369 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
370 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
371 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
372 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
373 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
374 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
375 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
376 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
377 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
378 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
379 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
380 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
381 1. Want to contribute a new model? We have a **detailed guide and templates** to guide you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
382
383 To check whether a model already has a Flax, PyTorch or TensorFlow implementation, or whether it has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
384
385 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on the implementations in [this section](https://huggingface.co/docs/transformers/examples) of the examples documentation.
386
387
388 ## Learn more
389
390 | Section | Description |
391 |-|-|
392 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
393 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
394 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
395 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop or with the `Trainer` API |
396 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
397 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
398 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
399
400 ## Citation
401
402 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
403 ```bibtex
404 @inproceedings{wolf-etal-2020-transformers,
405 title = "Transformers: State-of-the-Art Natural Language Processing",
406 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
407 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
408 month = oct,
409 year = "2020",
410 address = "Online",
411 publisher = "Association for Computational Linguistics",
412 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
413 pages = "38--45"
414 }
415 ```
416
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
a64bcb564dbc2a6329235016613a888ca21d513b
|
Global/local import with replicated name in the Trainer leading to UnboundLocalError
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
### Who can help?
@pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running `run_glue.py` ([optimum version](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/text-classification/run_glue.py)) with the distributed launcher:
```
python -m torch.distributed.run --nproc_per_node=2 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge --task_name MRPC --do_train --output_dir /tmp/deberta_res --fp16 --sharded_ddp simple --num_train_epochs 1
```
Error message:
```
Traceback (most recent call last):
File "run_glue.py", line 610, in <module>
main()
File "run_glue.py", line 503, in main
trainer = ORTTrainer(
File "/workspace/optimum/onnxruntime/trainer.py", line 144, in __init__
super().__init__(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 569, in __init__
self.scaler = ShardedGradScaler()
UnboundLocalError: local variable 'ShardedGradScaler' referenced before assignment
```
### Expected behavior
`ShardedGradScaler` was first imported as a global variable:
https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L190
Then it was imported as a local variable for fsdp with the same name
https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L568
And it won't fall back to the global `ShardedGradScaler`, even when the local one is not imported, leading to an UnboundLocalError.
P.S. However, I don't have this problem when running `run_glue.py` in transformers; it seems to occur only when using classes inherited from `Trainer`.
Possible solution: use a different name for the local import, or perform both imports locally (see the sketch after the references below).
*REF:*
*https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value*
*https://stackoverflow.com/questions/58750517/why-unboundlocalerror-occurs-when-importing-inside-function*
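For readers unfamiliar with this Python scoping rule, below is a minimal, self-contained sketch (using the standard-library `json` module instead of the Trainer/`ShardedGradScaler` code) of why the conditional local import raises `UnboundLocalError`, and of the rename-on-import workaround suggested above.
```python
# Minimal sketch of the scoping pitfall described in this issue.
# It uses the standard-library `json` module purely for illustration;
# it is not the transformers Trainer code.
from json import dumps  # global import, analogous to the module-level ShardedGradScaler import


def serialize_buggy(obj, use_local_import=False):
    if use_local_import:
        # Because this statement binds the name `dumps` inside the function body,
        # Python treats `dumps` as a *local* variable for the whole function.
        from json import dumps
    # When use_local_import is False, the local `dumps` was never bound,
    # so this line raises UnboundLocalError instead of falling back to the global import.
    return dumps(obj)


def serialize_fixed(obj, use_local_import=False):
    if use_local_import:
        # Workaround from the issue: import under a different name,
        # so the global `dumps` stays visible on the other code path.
        from json import dumps as local_dumps
        return local_dumps(obj)
    return dumps(obj)


if __name__ == "__main__":
    print(serialize_fixed({"ok": True}))        # uses the global import
    print(serialize_fixed({"ok": True}, True))  # uses the renamed local import
    try:
        serialize_buggy({"ok": True})           # never binds the local name
    except UnboundLocalError as err:
        print("UnboundLocalError:", err)
```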
|
2022-07-29T10:00:35Z
|
<patch>
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -565,9 +565,11 @@ def __init__(
self.scaler = ShardedGradScaler()
elif self.fsdp is not None:
if self.amp_dtype == torch.float16:
- from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler
+ from torch.distributed.fsdp.sharded_grad_scaler import (
+ ShardedGradScaler as FSDPShardedGradScaler,
+ )
- self.scaler = ShardedGradScaler()
+ self.scaler = FSDPShardedGradScaler()
else:
self.do_grad_scaling = False
self.use_cuda_amp = False
@@ -1366,6 +1368,8 @@ def _wrap_model(self, model, training=True, dataloader=None):
transformer_cls_to_wrap = get_module_class_from_name(
model, self.args.fsdp_transformer_layer_cls_to_wrap
)
+ if transformer_cls_to_wrap is None:
+ raise Exception("Could not find the transformer layer class to wrap in the model.")
auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
# Transformer layer class to wrap
</patch>
|
[]
|
[]
| ||||
ytdl-org__youtube-dl-3790
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
soundcloud: cannot download secret embedded playlist
$ youtube-dl --verbose "https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/49676035%3Fsecret_token%3Ds-phZhg"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/49676035%3Fsecret_token%3Ds-phZhg']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.09.06
[debug] Python version 2.6.6 - Linux-2.6.32-431.23.3.el6.x86_64-x86_64-with-redhat-6.5-Santiago
[debug] Proxy map: {}
[soundcloud:playlist] 49676035: Downloading playlist
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
File "/home/me/bin/youtube-dl/youtube_dl/extractor/common.py", line 211, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/home/me/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1244, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib64/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
</issue>
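The key detail in the report above is that the widget URL's `url` parameter is a percent-encoded SoundCloud API URL that already carries the `secret_token`; any fix needs to forward that token to the playlist API request. Below is a standalone, standard-library-only sketch (not the youtube-dl extractor code) showing how the token can be recovered from the embedded-player URL and appended to an API call; the `client_id` value and the exact API query parameters are assumptions for illustration.
```python
# Standalone sketch: recover the secret_token from a SoundCloud widget URL.
# This is illustrative only and does not reuse youtube-dl's extractor helpers.
from urllib.parse import urlparse, parse_qs, urlencode

WIDGET_URL = (
    "https://w.soundcloud.com/player/"
    "?url=https%3A//api.soundcloud.com/playlists/49676035%3Fsecret_token%3Ds-phZhg"
)

def build_playlist_api_url(widget_url, client_id="YOUR_CLIENT_ID"):
    # The widget URL carries the real API resource URL in its `url` query parameter.
    outer_qs = parse_qs(urlparse(widget_url).query)
    api_url = outer_qs["url"][0]  # e.g. https://api.soundcloud.com/playlists/49676035?secret_token=s-phZhg

    inner = urlparse(api_url)
    inner_qs = parse_qs(inner.query)
    query = {"client_id": client_id}  # assumed parameter name, for illustration
    if "secret_token" in inner_qs:
        # Forward the token; without it the API answers 404 as in the report above.
        query["secret_token"] = inner_qs["secret_token"][0]

    return "{}://{}{}?{}".format(inner.scheme, inner.netloc, inner.path, urlencode(query))

if __name__ == "__main__":
    print(build_playlist_api_url(WIDGET_URL))
```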
<code>
[start of README.md]
1 youtube-dl - download videos from youtube.com or other video platforms
2
3 # SYNOPSIS
4 **youtube-dl** [OPTIONS] URL [URL...]
5
6 # INSTALLATION
7
8 To install it right away for all UNIX users (Linux, OS X, etc.), type:
9
10 sudo curl https://yt-dl.org/latest/youtube-dl -o /usr/local/bin/youtube-dl
11 sudo chmod a+x /usr/local/bin/youtube-dl
12
13 If you do not have curl, you can alternatively use a recent wget:
14
15 sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
16 sudo chmod a+x /usr/local/bin/youtube-dl
17
18 Windows users can [download a .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in their home directory or any other location on their [PATH](http://en.wikipedia.org/wiki/PATH_%28variable%29).
19
20 OS X users can install **youtube-dl** with [Homebrew](http://brew.sh/).
21
22 brew install youtube-dl
23
24 You can also use pip:
25
26 sudo pip install youtube-dl
27
28 Alternatively, refer to the developer instructions below for how to check out and work with the git repository. For further options, including PGP signatures, see https://rg3.github.io/youtube-dl/download.html .
29
30 # DESCRIPTION
31 **youtube-dl** is a small command-line program to download videos from
32 YouTube.com and a few more sites. It requires the Python interpreter, version
33 2.6, 2.7, or 3.3+, and it is not platform specific. It should work on
34 your Unix box, on Windows or on Mac OS X. It is released to the public domain,
35 which means you can modify it, redistribute it or use it however you like.
36
37 # OPTIONS
38 -h, --help print this help text and exit
39 --version print program version and exit
40 -U, --update update this program to latest version. Make
41 sure that you have sufficient permissions
42 (run with sudo if needed)
43 -i, --ignore-errors continue on download errors, for example to
44 skip unavailable videos in a playlist
45 --abort-on-error Abort downloading of further videos (in the
46 playlist or the command line) if an error
47 occurs
48 --dump-user-agent display the current browser identification
49 --list-extractors List all supported extractors and the URLs
50 they would handle
51 --extractor-descriptions Output descriptions of all supported
52 extractors
53 --proxy URL Use the specified HTTP/HTTPS proxy. Pass in
54 an empty string (--proxy "") for direct
55 connection
56 --socket-timeout None Time to wait before giving up, in seconds
57 --default-search PREFIX Use this prefix for unqualified URLs. For
58 example "gvsearch2:" downloads two videos
59 from google videos for youtube-dl "large
60 apple". Use the value "auto" to let
61 youtube-dl guess ("auto_warning" to emit a
62 warning when guessing). "error" just throws
63 an error. The default value "fixup_error"
64 repairs broken URLs, but emits an error if
65 this is not possible instead of searching.
66 --ignore-config Do not read configuration files. When given
67 in the global configuration file /etc
68 /youtube-dl.conf: do not read the user
69 configuration in ~/.config/youtube-dl.conf
70 (%APPDATA%/youtube-dl/config.txt on
71 Windows)
72
73 ## Video Selection:
74 --playlist-start NUMBER playlist video to start at (default is 1)
75 --playlist-end NUMBER playlist video to end at (default is last)
76 --match-title REGEX download only matching titles (regex or
77 caseless sub-string)
78 --reject-title REGEX skip download for matching titles (regex or
79 caseless sub-string)
80 --max-downloads NUMBER Abort after downloading NUMBER files
81 --min-filesize SIZE Do not download any videos smaller than
82 SIZE (e.g. 50k or 44.6m)
83 --max-filesize SIZE Do not download any videos larger than SIZE
84 (e.g. 50k or 44.6m)
85 --date DATE download only videos uploaded in this date
86 --datebefore DATE download only videos uploaded on or before
87 this date (i.e. inclusive)
88 --dateafter DATE download only videos uploaded on or after
89 this date (i.e. inclusive)
90 --min-views COUNT Do not download any videos with less than
91 COUNT views
92 --max-views COUNT Do not download any videos with more than
93 COUNT views
94 --no-playlist download only the currently playing video
95 --age-limit YEARS download only videos suitable for the given
96 age
97 --download-archive FILE Download only videos not listed in the
98 archive file. Record the IDs of all
99 downloaded videos in it.
100 --include-ads Download advertisements as well
101 (experimental)
102 --youtube-include-dash-manifest Try to download the DASH manifest on
103 YouTube videos (experimental)
104
105 ## Download Options:
106 -r, --rate-limit LIMIT maximum download rate in bytes per second
107 (e.g. 50K or 4.2M)
108 -R, --retries RETRIES number of retries (default is 10)
109 --buffer-size SIZE size of download buffer (e.g. 1024 or 16K)
110 (default is 1024)
111 --no-resize-buffer do not automatically adjust the buffer
112 size. By default, the buffer size is
113 automatically resized from an initial value
114 of SIZE.
115
116 ## Filesystem Options:
117 -a, --batch-file FILE file containing URLs to download ('-' for
118 stdin)
119 --id use only video ID in file name
120 -A, --auto-number number downloaded files starting from 00000
121 -o, --output TEMPLATE output filename template. Use %(title)s to
122 get the title, %(uploader)s for the
123 uploader name, %(uploader_id)s for the
124 uploader nickname if different,
125 %(autonumber)s to get an automatically
126 incremented number, %(ext)s for the
127 filename extension, %(format)s for the
128 format description (like "22 - 1280x720" or
129 "HD"), %(format_id)s for the unique id of
130 the format (like Youtube's itags: "137"),
131 %(upload_date)s for the upload date
132 (YYYYMMDD), %(extractor)s for the provider
133 (youtube, metacafe, etc), %(id)s for the
134 video id, %(playlist)s for the playlist the
135 video is in, %(playlist_index)s for the
136 position in the playlist and %% for a
137 literal percent. %(height)s and %(width)s
138 for the width and height of the video
139 format. %(resolution)s for a textual
140 description of the resolution of the video
141 format. Use - to output to stdout. Can also
142 be used to download to a different
143 directory, for example with -o '/my/downloa
144 ds/%(uploader)s/%(title)s-%(id)s.%(ext)s' .
145 --autonumber-size NUMBER Specifies the number of digits in
146 %(autonumber)s when it is present in output
147 filename template or --auto-number option
148 is given
149 --restrict-filenames Restrict filenames to only ASCII
150 characters, and avoid "&" and spaces in
151 filenames
152 -t, --title [deprecated] use title in file name
153 (default)
154 -l, --literal [deprecated] alias of --title
155 -w, --no-overwrites do not overwrite files
156 -c, --continue force resume of partially downloaded files.
157 By default, youtube-dl will resume
158 downloads if possible.
159 --no-continue do not resume partially downloaded files
160 (restart from beginning)
161 --no-part do not use .part files
162 --no-mtime do not use the Last-modified header to set
163 the file modification time
164 --write-description write video description to a .description
165 file
166 --write-info-json write video metadata to a .info.json file
167 --write-annotations write video annotations to a .annotation
168 file
169 --write-thumbnail write thumbnail image to disk
170 --load-info FILE json file containing the video information
171 (created with the "--write-json" option)
172 --cookies FILE file to read cookies from and dump cookie
173 jar in
174 --cache-dir DIR Location in the filesystem where youtube-dl
175 can store some downloaded information
176 permanently. By default $XDG_CACHE_HOME
177 /youtube-dl or ~/.cache/youtube-dl . At the
178 moment, only YouTube player files (for
179 videos with obfuscated signatures) are
180 cached, but that may change.
181 --no-cache-dir Disable filesystem caching
182 --rm-cache-dir Delete all filesystem cache files
183
184 ## Verbosity / Simulation Options:
185 -q, --quiet activates quiet mode
186 --no-warnings Ignore warnings
187 -s, --simulate do not download the video and do not write
188 anything to disk
189 --skip-download do not download the video
190 -g, --get-url simulate, quiet but print URL
191 -e, --get-title simulate, quiet but print title
192 --get-id simulate, quiet but print id
193 --get-thumbnail simulate, quiet but print thumbnail URL
194 --get-description simulate, quiet but print video description
195 --get-duration simulate, quiet but print video length
196 --get-filename simulate, quiet but print output filename
197 --get-format simulate, quiet but print output format
198 -j, --dump-json simulate, quiet but print JSON information.
199 See --output for a description of available
200 keys.
201 --newline output progress bar as new lines
202 --no-progress do not print progress bar
203 --console-title display progress in console titlebar
204 -v, --verbose print various debugging information
205 --dump-intermediate-pages print downloaded pages to debug problems
206 (very verbose)
207 --write-pages Write downloaded intermediary pages to
208 files in the current directory to debug
209 problems
210 --print-traffic Display sent and read HTTP traffic
211
212 ## Workarounds:
213 --encoding ENCODING Force the specified encoding (experimental)
214 --no-check-certificate Suppress HTTPS certificate validation.
215 --prefer-insecure Use an unencrypted connection to retrieve
216 information about the video. (Currently
217 supported only for YouTube)
218 --user-agent UA specify a custom user agent
219 --referer REF specify a custom referer, use if the video
220 access is restricted to one domain
221 --add-header FIELD:VALUE specify a custom HTTP header and its value,
222 separated by a colon ':'. You can use this
223 option multiple times
224 --bidi-workaround Work around terminals that lack
225 bidirectional text support. Requires bidiv
226 or fribidi executable in PATH
227
228 ## Video Format Options:
229 -f, --format FORMAT video format code, specify the order of
230 preference using slashes: "-f 22/17/18".
231 "-f mp4" and "-f flv" are also supported.
232 You can also use the special names "best",
233 "bestvideo", "bestaudio", "worst",
234 "worstvideo" and "worstaudio". By default,
235 youtube-dl will pick the best quality.
236 --all-formats download all available video formats
237 --prefer-free-formats prefer free video formats unless a specific
238 one is requested
239 --max-quality FORMAT highest quality format to download
240 -F, --list-formats list all available formats
241
242 ## Subtitle Options:
243 --write-sub write subtitle file
244 --write-auto-sub write automatic subtitle file (youtube
245 only)
246 --all-subs downloads all the available subtitles of
247 the video
248 --list-subs lists all available subtitles for the video
249 --sub-format FORMAT subtitle format (default=srt) ([sbv/vtt]
250 youtube only)
251 --sub-lang LANGS languages of the subtitles to download
252 (optional) separated by commas, use IETF
253 language tags like 'en,pt'
254
255 ## Authentication Options:
256 -u, --username USERNAME account username
257 -p, --password PASSWORD account password
258 -2, --twofactor TWOFACTOR two-factor auth code
259 -n, --netrc use .netrc authentication data
260 --video-password PASSWORD video password (vimeo, smotri)
261
262 ## Post-processing Options:
263 -x, --extract-audio convert video files to audio-only files
264 (requires ffmpeg or avconv and ffprobe or
265 avprobe)
266 --audio-format FORMAT "best", "aac", "vorbis", "mp3", "m4a",
267 "opus", or "wav"; best by default
268 --audio-quality QUALITY ffmpeg/avconv audio quality specification,
269 insert a value between 0 (better) and 9
270 (worse) for VBR or a specific bitrate like
271 128K (default 5)
272 --recode-video FORMAT Encode the video to another format if
273 necessary (currently supported:
274 mp4|flv|ogg|webm|mkv)
275 -k, --keep-video keeps the video file on disk after the
276 post-processing; the video is erased by
277 default
278 --no-post-overwrites do not overwrite post-processed files; the
279 post-processed files are overwritten by
280 default
281 --embed-subs embed subtitles in the video (only for mp4
282 videos)
283 --embed-thumbnail embed thumbnail in the audio as cover art
284 --add-metadata write metadata to the video file
285 --xattrs write metadata to the video file's xattrs
286 (using dublin core and xdg standards)
287 --prefer-avconv Prefer avconv over ffmpeg for running the
288 postprocessors (default)
289 --prefer-ffmpeg Prefer ffmpeg over avconv for running the
290 postprocessors
291 --exec CMD Execute a command on the file after
292 downloading, similar to find's -exec
293 syntax. Example: --exec 'adb push {}
294 /sdcard/Music/ && rm {}'
295
296 # CONFIGURATION
297
298 You can configure youtube-dl by placing default arguments (such as `--extract-audio --no-mtime` to always extract the audio and not copy the mtime) into `/etc/youtube-dl.conf` and/or `~/.config/youtube-dl/config`. On Windows, the configuration file locations are `%APPDATA%\youtube-dl\config.txt` and `C:\Users\<Yourname>\youtube-dl.conf`.
299
300 # OUTPUT TEMPLATE
301
302 The `-o` option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences have the format `%(NAME)s`. To clarify, that is a percent symbol followed by a name in parenthesis, followed by a lowercase S. Allowed names are:
303
304 - `id`: The sequence will be replaced by the video identifier.
305 - `url`: The sequence will be replaced by the video URL.
306 - `uploader`: The sequence will be replaced by the nickname of the person who uploaded the video.
307 - `upload_date`: The sequence will be replaced by the upload date in YYYYMMDD format.
308 - `title`: The sequence will be replaced by the video title.
309 - `ext`: The sequence will be replaced by the appropriate extension (like flv or mp4).
310 - `epoch`: The sequence will be replaced by the Unix epoch when creating the file.
311 - `autonumber`: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
312 - `playlist`: The name or the id of the playlist that contains the video.
313 - `playlist_index`: The index of the video in the playlist, a five-digit number.
314
315 The current default template is `%(title)s-%(id)s.%(ext)s`.
316
317 In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
318
319 ```bash
320 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
321 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
322 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
323 youtube-dl_test_video_.mp4 # A simple file name
324 ```
325
326 # VIDEO SELECTION
327
328 Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`, they accept dates in two formats:
329
330 - Absolute dates: Dates in the format `YYYYMMDD`.
331 - Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
332
333 Examples:
334
335 ```bash
336 # Download only the videos uploaded in the last 6 months
337 $ youtube-dl --dateafter now-6months
338
339 # Download only the videos uploaded on January 1, 1970
340 $ youtube-dl --date 19700101
341
342 $ # will only download the videos uploaded in the 200x decade
343 $ youtube-dl --dateafter 20000101 --datebefore 20091231
344 ```
345
346 # FAQ
347
348 ### I'm getting an error `Unable to extract OpenGraph title` on YouTube playlists
349
350 YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.
351
352 If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to report bugs to the Ubuntu packaging guys - all they have to do is update the package to a somewhat recent version.
353
354 Alternatively, uninstall the youtube-dl package and follow [our manual installation instructions](http://rg3.github.io/youtube-dl/download.html). In a pinch, this should do if you used `apt-get` before to install youtube-dl:
355
356 ```
357 sudo apt-get remove -y youtube-dl
358 sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
359 sudo chmod a+x /usr/local/bin/youtube-dl
360 hash -r
361 ```
362
363 ### Do I always have to pass in `--max-quality FORMAT`, or `-citw`?
364
365 By default, youtube-dl intends to have the best options (incidentally, if you have a convincing case that these should be different, [please file an issue where you explain that](https://yt-dl.org/bug)). Therefore, it is unnecessary and sometimes harmful to copy long option strings from webpages. In particular, `--max-quality` *limits* the video quality (so if you want the best quality, do NOT pass it in), and the only option out of `-citw` that is regularly useful is `-i`.
366
367 ### Can you please put the -b option back?
368
369 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the `-b` option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the `-f` option and youtube-dl will try to download it.
370
371 ### I get HTTP error 402 when trying to download a video. What's this?
372
373 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering to provide a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is pointing a web browser to the YouTube URL, solving the CAPTCHA, and restarting youtube-dl.
374
375 ### I have downloaded a video but how can I play it?
376
377 Once the video is fully downloaded, use any video player, such as [vlc](http://www.videolan.org) or [mplayer](http://www.mplayerhq.hu/).
378
379 ### The links provided by youtube-dl -g are not working anymore
380
381 The URLs youtube-dl outputs require the downloader to have the correct cookies. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use `--dump-user-agent` to see the one in use by youtube-dl.
382
383 ### ERROR: no fmt_url_map or conn information found in video info
384
385 youtube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
386
387 ### ERROR: unable to download video ###
388
389 youtube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
390
391 ### SyntaxError: Non-ASCII character ###
392
393 The error
394
395 File "youtube-dl", line 2
396 SyntaxError: Non-ASCII character '\x93' ...
397
398 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
399
400 ### What is this binary file? Where has the code gone?
401
402 Since June 2012 (#342) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
403
404 ### The exe throws a *Runtime error from Visual C++*
405
406 To run the exe you need to install first the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/en-us/download/details.aspx?id=29).
407
408 # DEVELOPER INSTRUCTIONS
409
410 Most users do not need to build youtube-dl and can [download the builds](http://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
411
412 To run youtube-dl as a developer, you don't need to build anything either. Simply execute
413
414 python -m youtube_dl
415
416 To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
417
418 python -m unittest discover
419 python test/test_download.py
420 nosetests
421
422 If you want to create a build of youtube-dl yourself, you'll need
423
424 * python
425 * make
426 * pandoc
427 * zip
428 * nosetests
429
430 ### Adding support for a new site
431
432 If you want to add support for a new site, you can follow this quick list (assuming your service is called `yourextractor`):
433
434 1. [Fork this repository](https://github.com/rg3/youtube-dl/fork)
435 2. Check out the source code with `git clone [email protected]:YOUR_GITHUB_USERNAME/youtube-dl.git`
436 3. Start a new git branch with `cd youtube-dl; git checkout -b yourextractor`
437 4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
438 ```python
439 # coding: utf-8
440 from __future__ import unicode_literals
441
442 import re
443
444 from .common import InfoExtractor
445
446
447 class YourExtractorIE(InfoExtractor):
448 _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
449 _TEST = {
450 'url': 'http://yourextractor.com/watch/42',
451 'md5': 'TODO: md5 sum of the first 10KiB of the video file',
452 'info_dict': {
453 'id': '42',
454 'ext': 'mp4',
455 'title': 'Video title goes here',
456 'thumbnail': 're:^https?://.*\.jpg$',
457 # TODO more properties, either as:
458 # * A value
459 # * MD5 checksum; start the string with md5:
460 # * A regular expression; start the string with re:
461 # * Any Python type (for example int or float)
462 }
463 }
464
465 def _real_extract(self, url):
466 mobj = re.match(self._VALID_URL, url)
467 video_id = mobj.group('id')
468
469 # TODO more code goes here, for example ...
470 webpage = self._download_webpage(url, video_id)
471 title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')
472
473 return {
474 'id': video_id,
475 'title': title,
476 # TODO more properties (see youtube_dl/extractor/common.py)
477 }
478 ```
479 5. Add an import in [`youtube_dl/extractor/__init__.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/__init__.py).
480 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will be then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc.
481 7. Have a look at [`youtube_dl/common/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L38). Add tests and code for as many as you want.
482 8. If you can, check the code with [pyflakes](https://pypi.python.org/pypi/pyflakes) (a good idea) and [pep8](https://pypi.python.org/pypi/pep8) (optional, ignore E501).
483 9. When the tests pass, [add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html) the new files and [commit](https://www.kernel.org/pub/software/scm/git/docs/git-commit.html) them and [push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html) the result, like this:
484
485 $ git add youtube_dl/extractor/__init__.py
486 $ git add youtube_dl/extractor/yourextractor.py
487 $ git commit -m '[yourextractor] Add new extractor'
488 $ git push origin yourextractor
489
490 10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
491
492 In any case, thank you very much for your contributions!
493
494 # BUGS
495
496 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues> . Unless you were prompted so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email.
497
498 Please include the full output of the command when run with `--verbose`. The output (including the first lines) contain important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
499
500 For discussions, join us in the irc channel #youtube-dl on freenode.
501
502 When you submit a request, please re-read it once to avoid a couple of mistakes (you can and should use this as a checklist):
503
504 ### Is the description of the issue itself sufficient?
505
506 We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
507
508 So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
509
510 - What the problem is
511 - How it could be fixed
512 - How your proposed solution would look
513
514 If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
515
516 For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the -v flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
517
518 Site support requests **must contain an example URL**. An example URL is a URL you might want to download, like http://www.youtube.com/watch?v=BaW_jenozKc . There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. http://www.youtube.com/ ) is *not* an example URL.
519
520 ### Are you using the latest version?
521
522 Before reporting any issue, type youtube-dl -U. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.
523
524 ### Is the issue already documented?
525
526 Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or at https://github.com/rg3/youtube-dl/search?type=Issues . If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
527
528 ### Why are existing options not enough?
529
530 Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
531
532 ### Is there enough context in your bug report?
533
534 People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: One simple, and one impossible (or extremely complicated one).
535
536 We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
537
538 ### Does the issue involve one problem, and one problem only?
539
540 Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
541
542 In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, Whitehouse podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
543
544 ### Is anyone going to need the feature?
545
546 Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
547
548 ### Is your question about youtube-dl?
549
550 It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different or even the reporter's own application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
551
552 # COPYRIGHT
553
554 youtube-dl is released into the public domain by the copyright holders.
555
556 This README file was originally written by Daniel Bolton (<https://github.com/dbbolton>) and is likewise released into the public domain.
557
[end of README.md]
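The CONFIGURATION section of the README above only names the config file locations. As a concrete illustration, a user-level `~/.config/youtube-dl/config` could contain one default option per line, using flags taken from the OPTIONS list above (the specific choices here are just an example, not a recommended setup): `--ignore-errors` to keep going past download errors in playlists, `--no-mtime` to skip copying the Last-modified header to the file's mtime, `--restrict-filenames` to avoid special characters, and an `-o` output template as documented in the OUTPUT TEMPLATE section.

```
--ignore-errors
--no-mtime
--restrict-filenames
-o %(uploader)s/%(title)s-%(id)s.%(ext)s
```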
[start of devscripts/gh-pages/add-version.py]
1 #!/usr/bin/env python3
2
3 import json
4 import sys
5 import hashlib
6 import os.path
7
8
9 if len(sys.argv) <= 1:
10 print('Specify the version number as parameter')
11 sys.exit()
12 version = sys.argv[1]
13
14 with open('update/LATEST_VERSION', 'w') as f:
15 f.write(version)
16
17 versions_info = json.load(open('update/versions.json'))
18 if 'signature' in versions_info:
19 del versions_info['signature']
20
21 new_version = {}
22
23 filenames = {
24 'bin': 'youtube-dl',
25 'exe': 'youtube-dl.exe',
26 'tar': 'youtube-dl-%s.tar.gz' % version}
27 build_dir = os.path.join('..', '..', 'build', version)
28 for key, filename in filenames.items():
29 url = 'https://yt-dl.org/downloads/%s/%s' % (version, filename)
30 fn = os.path.join(build_dir, filename)
31 with open(fn, 'rb') as f:
32 data = f.read()
33 if not data:
34 raise ValueError('File %s is empty!' % fn)
35 sha256sum = hashlib.sha256(data).hexdigest()
36 new_version[key] = (url, sha256sum)
37
38 versions_info['versions'][version] = new_version
39 versions_info['latest'] = version
40
41 with open('update/versions.json', 'w') as jsonf:
42 json.dump(versions_info, jsonf, indent=4, sort_keys=True)
43
[end of devscripts/gh-pages/add-version.py]
[start of devscripts/gh-pages/update-feed.py]
1 #!/usr/bin/env python3
2
3 import datetime
4 import io
5 import json
6 import textwrap
7
8
9 atom_template = textwrap.dedent("""\
10 <?xml version="1.0" encoding="utf-8"?>
11 <feed xmlns="http://www.w3.org/2005/Atom">
12 <link rel="self" href="http://rg3.github.io/youtube-dl/update/releases.atom" />
13 <title>youtube-dl releases</title>
14 <id>https://yt-dl.org/feed/youtube-dl-updates-feed</id>
15 <updated>@TIMESTAMP@</updated>
16 @ENTRIES@
17 </feed>""")
18
19 entry_template = textwrap.dedent("""
20 <entry>
21 <id>https://yt-dl.org/feed/youtube-dl-updates-feed/youtube-dl-@VERSION@</id>
22 <title>New version @VERSION@</title>
23 <link href="http://rg3.github.io/youtube-dl" />
24 <content type="xhtml">
25 <div xmlns="http://www.w3.org/1999/xhtml">
26 Downloads available at <a href="https://yt-dl.org/downloads/@VERSION@/">https://yt-dl.org/downloads/@VERSION@/</a>
27 </div>
28 </content>
29 <author>
30 <name>The youtube-dl maintainers</name>
31 </author>
32 <updated>@TIMESTAMP@</updated>
33 </entry>
34 """)
35
36 now = datetime.datetime.now()
37 now_iso = now.isoformat() + 'Z'
38
39 atom_template = atom_template.replace('@TIMESTAMP@', now_iso)
40
41 versions_info = json.load(open('update/versions.json'))
42 versions = list(versions_info['versions'].keys())
43 versions.sort()
44
45 entries = []
46 for v in versions:
47 fields = v.split('.')
48 year, month, day = map(int, fields[:3])
49 faked = 0
50 patchlevel = 0
51 while True:
52 try:
53 datetime.date(year, month, day)
54 except ValueError:
55 day -= 1
56 faked += 1
57 assert day > 0
58 continue
59 break
60 if len(fields) >= 4:
61 try:
62 patchlevel = int(fields[3])
63 except ValueError:
64 patchlevel = 1
65 timestamp = '%04d-%02d-%02dT00:%02d:%02dZ' % (year, month, day, faked, patchlevel)
66
67 entry = entry_template.replace('@TIMESTAMP@', timestamp)
68 entry = entry.replace('@VERSION@', v)
69 entries.append(entry)
70
71 entries_str = textwrap.indent(''.join(entries), '\t')
72 atom_template = atom_template.replace('@ENTRIES@', entries_str)
73
74 with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
75 atom_file.write(atom_template)
76
77
[end of devscripts/gh-pages/update-feed.py]
[start of devscripts/transition_helper.py]
1 #!/usr/bin/env python
2
3 import sys, os
4
5 try:
6 import urllib.request as compat_urllib_request
7 except ImportError: # Python 2
8 import urllib2 as compat_urllib_request
9
10 sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
11 sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
12 sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
13
14 try:
15 raw_input()
16 except NameError: # Python 3
17 input()
18
19 filename = sys.argv[0]
20
21 API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
22 BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
23
24 if not os.access(filename, os.W_OK):
25 sys.exit('ERROR: no write permissions on %s' % filename)
26
27 try:
28 urlh = compat_urllib_request.urlopen(BIN_URL)
29 newcontent = urlh.read()
30 urlh.close()
31 except (IOError, OSError) as err:
32 sys.exit('ERROR: unable to download latest version')
33
34 try:
35 with open(filename, 'wb') as outf:
36 outf.write(newcontent)
37 except (IOError, OSError) as err:
38 sys.exit('ERROR: unable to overwrite current version')
39
40 sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
41
[end of devscripts/transition_helper.py]
[start of devscripts/transition_helper_exe/youtube-dl.py]
1 #!/usr/bin/env python
2
3 import sys, os
4 import urllib2
5 import json, hashlib
6
7 def rsa_verify(message, signature, key):
8 from struct import pack
9 from hashlib import sha256
10 from sys import version_info
11 def b(x):
12 if version_info[0] == 2: return x
13 else: return x.encode('latin1')
14 assert(type(message) == type(b('')))
15 block_size = 0
16 n = key[0]
17 while n:
18 block_size += 1
19 n >>= 8
20 signature = pow(int(signature, 16), key[1], key[0])
21 raw_bytes = []
22 while signature:
23 raw_bytes.insert(0, pack("B", signature & 0xFF))
24 signature >>= 8
25 signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
26 if signature[0:2] != b('\x00\x01'): return False
27 signature = signature[2:]
28 if not b('\x00') in signature: return False
29 signature = signature[signature.index(b('\x00'))+1:]
30 if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
31 signature = signature[19:]
32 if signature != sha256(message).digest(): return False
33 return True
34
35 sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
36 sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
37 sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n')
38
39 raw_input()
40
41 filename = sys.argv[0]
42
43 UPDATE_URL = "http://rg3.github.io/youtube-dl/update/"
44 VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
45 JSON_URL = UPDATE_URL + 'versions.json'
46 UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
47
48 if not os.access(filename, os.W_OK):
49 sys.exit('ERROR: no write permissions on %s' % filename)
50
51 exe = os.path.abspath(filename)
52 directory = os.path.dirname(exe)
53 if not os.access(directory, os.W_OK):
54 sys.exit('ERROR: no write permissions on %s' % directory)
55
56 try:
57 versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8')
58 versions_info = json.loads(versions_info)
59 except:
60 sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.')
61 if not 'signature' in versions_info:
62 sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.')
63 signature = versions_info['signature']
64 del versions_info['signature']
65 if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY):
66 sys.exit(u'ERROR: the versions file signature is invalid. Aborting.')
67
68 version = versions_info['versions'][versions_info['latest']]
69
70 try:
71 urlh = urllib2.urlopen(version['exe'][0])
72 newcontent = urlh.read()
73 urlh.close()
74 except (IOError, OSError) as err:
75 sys.exit('ERROR: unable to download latest version')
76
77 newcontent_hash = hashlib.sha256(newcontent).hexdigest()
78 if newcontent_hash != version['exe'][1]:
79 sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.')
80
81 try:
82 with open(exe + '.new', 'wb') as outf:
83 outf.write(newcontent)
84 except (IOError, OSError) as err:
85 sys.exit(u'ERROR: unable to write the new version')
86
87 try:
88 bat = os.path.join(directory, 'youtube-dl-updater.bat')
89 b = open(bat, 'w')
90 b.write("""
91 echo Updating youtube-dl...
92 ping 127.0.0.1 -n 5 -w 1000 > NUL
93 move /Y "%s.new" "%s"
94 del "%s"
95 \n""" %(exe, exe, bat))
96 b.close()
97
98 os.startfile(bat)
99 except (IOError, OSError) as err:
100 sys.exit('ERROR: unable to overwrite current version')
101
102 sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
103
[end of devscripts/transition_helper_exe/youtube-dl.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5
6 import os.path
7 import pkg_resources
8 import warnings
9 import sys
10
11 try:
12 from setuptools import setup
13 setuptools_available = True
14 except ImportError:
15 from distutils.core import setup
16 setuptools_available = False
17
18 try:
19 # This will create an exe that needs Microsoft Visual C++ 2008
20 # Redistributable Package
21 import py2exe
22 except ImportError:
23 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
24 print("Cannot import py2exe", file=sys.stderr)
25 exit(1)
26
27 py2exe_options = {
28 "bundle_files": 1,
29 "compressed": 1,
30 "optimize": 2,
31 "dist_dir": '.',
32 "dll_excludes": ['w9xpopen.exe'],
33 }
34
35 py2exe_console = [{
36 "script": "./youtube_dl/__main__.py",
37 "dest_base": "youtube-dl",
38 }]
39
40 py2exe_params = {
41 'console': py2exe_console,
42 'options': {"py2exe": py2exe_options},
43 'zipfile': None
44 }
45
46 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
47 params = py2exe_params
48 else:
49 files_spec = [
50 ('etc/bash_completion.d', ['youtube-dl.bash-completion']),
51 ('etc/fish/completions', ['youtube-dl.fish']),
52 ('share/doc/youtube_dl', ['README.txt']),
53 ('share/man/man1', ['youtube-dl.1'])
54 ]
55 root = os.path.dirname(os.path.abspath(__file__))
56 data_files = []
57 for dirname, files in files_spec:
58 resfiles = []
59 for fn in files:
60 if not os.path.exists(fn):
61 warnings.warn('Skipping file %s since it is not present. Type make to build all automatically generated files.' % fn)
62 else:
63 resfiles.append(fn)
64 data_files.append((dirname, resfiles))
65
66 params = {
67 'data_files': data_files,
68 }
69 if setuptools_available:
70 params['entry_points'] = {'console_scripts': ['youtube-dl = youtube_dl:main']}
71 else:
72 params['scripts'] = ['bin/youtube-dl']
73
74 # Get the version from youtube_dl/version.py without importing the package
75 exec(compile(open('youtube_dl/version.py').read(),
76 'youtube_dl/version.py', 'exec'))
77
78 setup(
79 name='youtube_dl',
80 version=__version__,
81 description='YouTube video downloader',
82 long_description='Small command-line program to download videos from'
83 ' YouTube.com and other video sites.',
84 url='https://github.com/rg3/youtube-dl',
85 author='Ricardo Garcia',
86 author_email='[email protected]',
87 maintainer='Philipp Hagemeister',
88 maintainer_email='[email protected]',
89 packages=[
90 'youtube_dl',
91 'youtube_dl.extractor', 'youtube_dl.downloader',
92 'youtube_dl.postprocessor'],
93
94 # Provokes warning on most systems (why?!)
95 # test_suite = 'nose.collector',
96 # test_requires = ['nosetest'],
97
98 classifiers=[
99 "Topic :: Multimedia :: Video",
100 "Development Status :: 5 - Production/Stable",
101 "Environment :: Console",
102 "License :: Public Domain",
103 "Programming Language :: Python :: 2.6",
104 "Programming Language :: Python :: 2.7",
105 "Programming Language :: Python :: 3",
106 "Programming Language :: Python :: 3.3"
107 ],
108
109 **params
110 )
111
[end of setup.py]
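One detail worth noting in the setup script above is how it obtains `__version__` without importing the (possibly not yet installable) package: it compiles and executes `youtube_dl/version.py` directly. A minimal sketch of the same trick in isolation — it assumes you run it from a youtube-dl source checkout so the path exists, and it uses a private namespace instead of the module globals:

```python
# Read the version the same way setup.py does, but collect the result in a
# dict rather than the current globals (path assumes a youtube-dl checkout).
ns = {}
with open('youtube_dl/version.py') as f:
    exec(compile(f.read(), 'youtube_dl/version.py', 'exec'), ns)
print(ns['__version__'])
```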
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
9296738f20c1335498a78c99a86767e9bae4f6d2
|
soundcloud: cannot download secret embedded playlist
$ youtube-dl --verbose "https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/49676035%3Fsecret_token%3Ds-phZhg"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/49676035%3Fsecret_token%3Ds-phZhg']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.09.06
[debug] Python version 2.6.6 - Linux-2.6.32-431.23.3.el6.x86_64-x86_64-with-redhat-6.5-Santiago
[debug] Proxy map: {}
[soundcloud:playlist] 49676035: Downloading playlist
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
File "/home/me/bin/youtube-dl/youtube_dl/extractor/common.py", line 211, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/home/me/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1244, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib64/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
|
Works for me?
Works for me too. Please update youtube-dl to the last version and try again.
That's because that playlist was changed on soundcloud from private to public since the bug report. But here's another one:
% youtube-dl --verbose "https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/167464311%3Fsecret_token%3Ds-IcRLW"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/167464311%3Fsecret_token%3Ds-IcRLW']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.09.16.1
[debug] Python version 2.6.6 - Linux-2.6.32-431.29.2.el6.x86_64-x86_64-with-redhat-6.5-Santiago
[debug] Proxy map: {}
[soundcloud] 167464311: Downloading info JSON
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
File "/afs/ifh.de/user/w/waschk/public/bin/youtube-dl/youtube_dl/extractor/common.py", line 211, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/afs/ifh.de/user/w/waschk/public/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1264, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib64/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
It is fixed now for a single track, but not for a secret playlist with more than one track, e.g. this one with a fresh checkout from git after the merge of that commit:
./youtube-dl --verbose "https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/51064053%3Fsecret_token%3Ds-OtAhG"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/51064053%3Fsecret_token%3Ds-OtAhG']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.09.16.1
[debug] Python version 2.6.6 - Linux-2.6.32-431.29.2.el6.x86_64-x86_64-with-redhat-6.5-Santiago
[debug] Proxy map: {}
[soundcloud:playlist] 51064053: Downloading playlist
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
File "./youtube-dl/youtube_dl/extractor/common.py", line 211, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 1264, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib64/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 435, in error
return self._call_chain(_args)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(_args)
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
On it.
Not sure what to do to create a test case for a private playlist...
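The 404s in the reports suggest the playlist extractor resolves the API URL without forwarding the `secret_token` query parameter, so private playlists stay invisible; the patch below adds exactly that. A standalone sketch of the request shape that works (not youtube-dl code; `CLIENT_ID` is a placeholder, the playlist id and token are the ones from the report above):

```python
# Build the playlist JSON API URL with the secret_token forwarded.
try:
    from urllib.parse import urlencode        # Python 3
except ImportError:
    from urllib import urlencode              # Python 2

CLIENT_ID = 'YOUR_CLIENT_ID'                  # placeholder, not a real key

def playlist_json_url(playlist_id, secret_token=None):
    params = {'client_id': CLIENT_ID}
    if secret_token:                          # without this, secret playlists 404
        params['secret_token'] = secret_token
    return 'http://api.soundcloud.com/playlists/%s.json?%s' % (
        playlist_id, urlencode(params))

print(playlist_json_url('51064053', 's-OtAhG'))
```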
|
2014-09-18T09:38:32Z
|
<patch>
diff --git a/youtube_dl/extractor/soundcloud.py b/youtube_dl/extractor/soundcloud.py
--- a/youtube_dl/extractor/soundcloud.py
+++ b/youtube_dl/extractor/soundcloud.py
@@ -238,7 +238,7 @@ def _real_extract(self, url):
class SoundcloudSetIE(SoundcloudIE):
- _VALID_URL = r'https?://(?:www\.)?soundcloud\.com/([\w\d-]+)/sets/([\w\d-]+)'
+ _VALID_URL = r'https?://(?:www\.)?soundcloud\.com/(?P<uploader>[\w\d-]+)/sets/(?P<slug_title>[\w\d-]+)(?:/(?P<token>[^?/]+))?'
IE_NAME = 'soundcloud:set'
_TESTS = [{
'url': 'https://soundcloud.com/the-concept-band/sets/the-royal-concept-ep',
@@ -252,14 +252,19 @@ def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
# extract uploader (which is in the url)
- uploader = mobj.group(1)
+ uploader = mobj.group('uploader')
# extract simple title (uploader + slug of song title)
- slug_title = mobj.group(2)
+ slug_title = mobj.group('slug_title')
full_title = '%s/sets/%s' % (uploader, slug_title)
+ url = 'http://soundcloud.com/%s/sets/%s' % (uploader, slug_title)
+
+ token = mobj.group('token')
+ if token:
+ full_title += '/' + token
+ url += '/' + token
self.report_resolve(full_title)
- url = 'http://soundcloud.com/%s/sets/%s' % (uploader, slug_title)
resolv_url = self._resolv_url(url)
info = self._download_json(resolv_url, full_title)
@@ -270,7 +275,7 @@ def _real_extract(self, url):
return {
'_type': 'playlist',
- 'entries': [self._extract_info_dict(track) for track in info['tracks']],
+ 'entries': [self._extract_info_dict(track, secret_token=token) for track in info['tracks']],
'id': info['id'],
'title': info['title'],
}
@@ -333,7 +338,7 @@ def _real_extract(self, url):
class SoundcloudPlaylistIE(SoundcloudIE):
- _VALID_URL = r'https?://api\.soundcloud\.com/playlists/(?P<id>[0-9]+)'
+ _VALID_URL = r'https?://api\.soundcloud\.com/playlists/(?P<id>[0-9]+)(?:/?\?secret_token=(?P<token>[^&]+?))$'
IE_NAME = 'soundcloud:playlist'
_TESTS = [
@@ -353,14 +358,21 @@ def _real_extract(self, url):
playlist_id = mobj.group('id')
base_url = '%s//api.soundcloud.com/playlists/%s.json?' % (self.http_scheme(), playlist_id)
- data = compat_urllib_parse.urlencode({
+ data_dict = {
'client_id': self._CLIENT_ID,
- })
+ }
+ token = mobj.group('token')
+
+ if token:
+ data_dict['secret_token'] = token
+
+ data = compat_urllib_parse.urlencode(data_dict)
data = self._download_json(
base_url + data, playlist_id, 'Downloading playlist')
entries = [
- self._extract_info_dict(t, quiet=True) for t in data['tracks']]
+ self._extract_info_dict(t, quiet=True, secret_token=token)
+ for t in data['tracks']]
return {
'_type': 'playlist',
</patch>
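As a quick sanity check, the two `_VALID_URL` patterns introduced in the patch do capture the secret token for URLs shaped like the ones in the report. The URLs below are illustrative (made-up artist and slug); only the standard library is needed:

```python
import re

# Set page URL with a trailing secret token.
SET_URL = (r'https?://(?:www\.)?soundcloud\.com/(?P<uploader>[\w\d-]+)'
           r'/sets/(?P<slug_title>[\w\d-]+)(?:/(?P<token>[^?/]+))?')
m = re.match(SET_URL, 'https://soundcloud.com/some-artist/sets/some-ep/s-OtAhG')
print(m.group('uploader'), m.group('slug_title'), m.group('token'))
# -> some-artist some-ep s-OtAhG

# API playlist URL with ?secret_token=..., as embedded in the widget links above.
PLAYLIST_URL = (r'https?://api\.soundcloud\.com/playlists/(?P<id>[0-9]+)'
                r'(?:/?\?secret_token=(?P<token>[^&]+?))$')
m = re.match(PLAYLIST_URL,
             'https://api.soundcloud.com/playlists/51064053?secret_token=s-OtAhG')
print(m.group('id'), m.group('token'))
# -> 51064053 s-OtAhG
```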
|
[]
|
[]
| |||
pandas-dev__pandas-4507
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pandas.ix[:, "2002"] returns a DataFrame
``` python
import pandas as pd
import numpy as np
ind = pd.date_range(start="2000", freq="D", periods=1000)
df = pd.DataFrame(np.random.randn(len(ind), 5), index=ind, columns=list('ABCDE'))
panel = pd.Panel({'frame_'+c:df for c in list('ABC')})
test1 = panel.ix[:, "2002"]
test1.ndim # 3
type(test1) # pandas.core.frame.DataFrame
test2 = panel.ix[:, "2002":"2002-12-31"]
test2.ndim # 3
type(test2) # pandas.core.panel.Panel
print pd.__version__
#0.11.1.dev-d7fe745
```
When trying to grab all the data for year 2002, I get back a DataFrame. If I use a range for the major axis, then it returns a Panel.
http://nbviewer.ipython.org/5853887
</issue>
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 - `pytz <http://pytz.sourceforge.net/>`__
71 - Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75
76 - `numexpr <http://code.google.com/p/numexpr/>`__
77 - Needed to accelerate some expression evaluation operations
78 - Required by `PyTables`
79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__
80 - Needed to accelerate certain numerical operations
81
82 Optional dependencies
83 ~~~~~~~~~~~~~~~~~~~~~
84
85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher.
86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
89 - `statsmodels <http://statsmodels.sourceforge.net/>`__
90 - Needed for parts of :mod:`pandas.stats`
91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
92 - openpyxl version 1.6.1 or higher, for writing .xlsx files
93 - xlrd >= 0.9.0
94 - Needed for Excel I/O
95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
96 access.
97 - One of the following combinations of libraries is needed to use the
98 top-level :func:`~pandas.io.html.read_html` function:
99
100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
101 okay.)
102 - `BeautifulSoup4`_ and `lxml`_
103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_
104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>`
105 for reasons as to why you should probably **not** take this approach.
106
107 .. warning::
108
109 - if you install `BeautifulSoup4`_ you must install either
110 `lxml`_ or `html5lib`_ or both.
111 :func:`~pandas.io.html.read_html` will **not** work with *only*
112 `BeautifulSoup4`_ installed.
113 - You are highly encouraged to read :ref:`HTML reading gotchas
114 <html-gotchas>`. It explains issues surrounding the installation and
115 usage of the above three libraries
116 - You may need to install an older version of `BeautifulSoup4`_:
117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
118 32-bit Ubuntu/Debian
119 - Additionally, if you're using `Anaconda`_ you should definitely
120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
121
122 .. note::
123
124 - if you're on a system with ``apt-get`` you can do
125
126 .. code-block:: sh
127
128 sudo apt-get build-dep python-lxml
129
130 to get the necessary dependencies for installation of `lxml`_. This
131 will prevent further headaches down the line.
132
133
134 .. _html5lib: https://github.com/html5lib/html5lib-python
135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup
136 .. _lxml: http://lxml.de
137 .. _Anaconda: https://store.continuum.io/cshop/anaconda
138
139
140 Installation from sources
141 =========================
142
143 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
144 which can be installed from pypi::
145
146 pip install cython
147
148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
149
150 python setup.py install
151
152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
153
154 python setup.py develop
155
156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
157 (the optional ``-e`` option is for installing it in
158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
159
160 pip install -e .
161
162 On Windows, you will need to install MinGW and execute::
163
164 python setup.py build --compiler=mingw32
165 python setup.py install
166
167 See http://pandas.pydata.org/ for more information.
168
169 License
170 =======
171
172 BSD
173
174 Documentation
175 =============
176
177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
178
179 The Sphinx documentation should provide a good starting point for learning how
180 to use the library. Expect the docs to continue to expand as time goes on.
181
182 Background
183 ==========
184
185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
186 has been under active development since then.
187
188 Discussion and Development
189 ==========================
190
191 Since ``pandas`` development is related to a number of other scientific
192 Python projects, questions are welcome on the scipy-user mailing
193 list. Specialized discussions or design issues should take place on
194 the pystatsmodels mailing list / Google group, where
195 ``scikits.statsmodels`` and other libraries will also be discussed:
196
197 http://groups.google.com/group/pystatsmodels
198
199 .. _NumPy: http://numpy.scipy.org/
200
[end of README.rst]
[start of pandas/io/wb.py]
1 from __future__ import print_function
2
3 from pandas.compat import map, reduce, range, lrange
4 from pandas.io.common import urlopen
5 from pandas.io import json
6 import pandas
7 import numpy as np
8
9
10 def download(country=['MX', 'CA', 'US'], indicator=['GDPPCKD', 'GDPPCKN'],
11 start=2003, end=2005):
12 """
13 Download data series from the World Bank's World Development Indicators
14
15 Parameters
16 ----------
17
18 indicator: string or list of strings
19 taken from the ``id`` field in ``WDIsearch()``
20 country: string or list of strings.
21 ``all`` downloads data for all countries
22 ISO-2 character codes select individual countries (e.g.``US``,``CA``)
23 start: int
24 First year of the data series
25 end: int
26 Last year of the data series (inclusive)
27
28 Returns
29 -------
30
31 ``pandas`` DataFrame with columns: country, iso2c, year, indicator value.
32 """
33
34 # Are ISO-2 country codes valid?
35 valid_countries = ["AG", "AL", "AM", "AO", "AR", "AT", "AU", "AZ", "BB",
36 "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BO", "BR", "BS", "BW",
37 "BY", "BZ", "CA", "CD", "CF", "CG", "CH", "CI", "CL", "CM", "CN",
38 "CO", "CR", "CV", "CY", "CZ", "DE", "DK", "DM", "DO", "DZ", "EC",
39 "EE", "EG", "ER", "ES", "ET", "FI", "FJ", "FR", "GA", "GB", "GE",
40 "GH", "GM", "GN", "GQ", "GR", "GT", "GW", "GY", "HK", "HN", "HR",
41 "HT", "HU", "ID", "IE", "IL", "IN", "IR", "IS", "IT", "JM", "JO",
42 "JP", "KE", "KG", "KH", "KM", "KR", "KW", "KZ", "LA", "LB", "LC",
43 "LK", "LS", "LT", "LU", "LV", "MA", "MD", "MG", "MK", "ML", "MN",
44 "MR", "MU", "MW", "MX", "MY", "MZ", "NA", "NE", "NG", "NI", "NL",
45 "NO", "NP", "NZ", "OM", "PA", "PE", "PG", "PH", "PK", "PL", "PT",
46 "PY", "RO", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SI",
47 "SK", "SL", "SN", "SR", "SV", "SY", "SZ", "TD", "TG", "TH", "TN",
48 "TR", "TT", "TW", "TZ", "UA", "UG", "US", "UY", "UZ", "VC", "VE",
49 "VN", "VU", "YE", "ZA", "ZM", "ZW", "all"]
50 if type(country) == str:
51 country = [country]
52 bad_countries = np.setdiff1d(country, valid_countries)
53 country = np.intersect1d(country, valid_countries)
54 country = ';'.join(country)
55 # Work with a list of indicators
56 if type(indicator) == str:
57 indicator = [indicator]
58 # Download
59 data = []
60 bad_indicators = []
61 for ind in indicator:
62 try:
63 tmp = _get_data(ind, country, start, end)
64 tmp.columns = ['country', 'iso2c', 'year', ind]
65 data.append(tmp)
66 except:
67 bad_indicators.append(ind)
68 # Warn
69 if len(bad_indicators) > 0:
70 print('Failed to obtain indicator(s): %s' % '; '.join(bad_indicators))
71 print ('The data may still be available for download at http://data.worldbank.org')
72 if len(bad_countries) > 0:
73 print('Invalid ISO-2 codes: %s' % ' '.join(bad_countries))
74 # Merge WDI series
75 if len(data) > 0:
76 out = reduce(lambda x, y: x.merge(y, how='outer'), data)
77 # Clean
78 out = out.drop('iso2c', axis=1)
79 out = out.set_index(['country', 'year'])
80 out = out.convert_objects(convert_numeric=True)
81 return out
82
83
84 def _get_data(indicator="NY.GNS.ICTR.GN.ZS", country='US',
85 start=2002, end=2005):
86 # Build URL for api call
87 url = "http://api.worldbank.org/countries/" + country + "/indicators/" + \
88 indicator + "?date=" + str(start) + ":" + str(end) + "&per_page=25000" + \
89 "&format=json"
90 # Download
91 with urlopen(url) as response:
92 data = response.read()
93 # Parse JSON file
94 data = json.loads(data)[1]
95 country = [x['country']['value'] for x in data]
96 iso2c = [x['country']['id'] for x in data]
97 year = [x['date'] for x in data]
98 value = [x['value'] for x in data]
99 # Prepare output
100 out = pandas.DataFrame([country, iso2c, year, value]).T
101 return out
102
103
104 def get_countries():
105 '''Query information about countries
106 '''
107 url = 'http://api.worldbank.org/countries/all?format=json'
108 with urlopen(url) as response:
109 data = response.read()
110 data = json.loads(data)[1]
111 data = pandas.DataFrame(data)
112 data.adminregion = [x['value'] for x in data.adminregion]
113 data.incomeLevel = [x['value'] for x in data.incomeLevel]
114 data.lendingType = [x['value'] for x in data.lendingType]
115 data.region = [x['value'] for x in data.region]
116 data = data.rename(columns={'id': 'iso3c', 'iso2Code': 'iso2c'})
117 return data
118
119
120 def get_indicators():
121 '''Download information about all World Bank data series
122 '''
123 url = 'http://api.worldbank.org/indicators?per_page=50000&format=json'
124 with urlopen(url) as response:
125 data = response.read()
126 data = json.loads(data)[1]
127 data = pandas.DataFrame(data)
128 # Clean fields
129 data.source = [x['value'] for x in data.source]
130 fun = lambda x: x.encode('ascii', 'ignore')
131 data.sourceOrganization = data.sourceOrganization.apply(fun)
132 # Clean topic field
133
134 def get_value(x):
135 try:
136 return x['value']
137 except:
138 return ''
139 fun = lambda x: [get_value(y) for y in x]
140 data.topics = data.topics.apply(fun)
141 data.topics = data.topics.apply(lambda x: ' ; '.join(x))
142 # Clean outpu
143 data = data.sort(columns='id')
144 data.index = pandas.Index(lrange(data.shape[0]))
145 return data
146
147
148 _cached_series = None
149
150
151 def search(string='gdp.*capi', field='name', case=False):
152 """
153 Search available data series from the world bank
154
155 Parameters
156 ----------
157
158 string: string
159 regular expression
160 field: string
161 id, name, source, sourceNote, sourceOrganization, topics
162 See notes below
163 case: bool
164 case sensitive search?
165
166 Notes
167 -----
168
169 The first time this function is run it will download and cache the full
170 list of available series. Depending on the speed of your network
171 connection, this can take time. Subsequent searches will use the cached
172 copy, so they should be much faster.
173
174 id : Data series indicator (for use with the ``indicator`` argument of
175 ``WDI()``) e.g. NY.GNS.ICTR.GN.ZS"
176 name: Short description of the data series
177 source: Data collection project
178 sourceOrganization: Data collection organization
179 note:
180 sourceNote:
181 topics:
182 """
183 # Create cached list of series if it does not exist
184 global _cached_series
185 if type(_cached_series) is not pandas.core.frame.DataFrame:
186 _cached_series = get_indicators()
187 data = _cached_series[field]
188 idx = data.str.contains(string, case=case)
189 out = _cached_series.ix[idx].dropna()
190 return out
191
[end of pandas/io/wb.py]
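For context, the `download` function documented above is typically called as below. This is illustrative only: it needs network access, uses the docstring's default country and indicator codes, and assumes the World Bank API still serves those series for a pandas of this vintage.

```python
# Fetch two WDI series for three countries; the result is a DataFrame indexed
# by (country, year) with one column per indicator, per the docstring above.
from pandas.io import wb

df = wb.download(country=['MX', 'CA', 'US'],
                 indicator=['GDPPCKD', 'GDPPCKN'],
                 start=2003, end=2005)
print(df.head())
```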
[start of pandas/rpy/common.py]
1 """
2 Utilities for making working with rpy2 more user- and
3 developer-friendly.
4 """
5 from __future__ import print_function
6
7 from pandas.compat import zip, range
8 import numpy as np
9
10 import pandas as pd
11 import pandas.core.common as com
12 import pandas.util.testing as _test
13
14 from rpy2.robjects.packages import importr
15 from rpy2.robjects import r
16 import rpy2.robjects as robj
17
18 __all__ = ['convert_robj', 'load_data', 'convert_to_r_dataframe',
19 'convert_to_r_matrix']
20
21
22 def load_data(name, package=None, convert=True):
23 if package:
24 importr(package)
25
26 r.data(name)
27
28 robj = r[name]
29
30 if convert:
31 return convert_robj(robj)
32 else:
33 return robj
34
35
36 def _rclass(obj):
37 """
38 Return R class name for input object
39 """
40 return r['class'](obj)[0]
41
42
43 def _is_null(obj):
44 return _rclass(obj) == 'NULL'
45
46
47 def _convert_list(obj):
48 """
49 Convert named Vector to dict
50 """
51 values = [convert_robj(x) for x in obj]
52 return dict(zip(obj.names, values))
53
54
55 def _convert_array(obj):
56 """
57 Convert Array to ndarray
58 """
59 # this royally sucks. "Matrices" (arrays) with dimension > 3 in R aren't
60 # really matrices-- things come out Fortran order in the first two
61 # dimensions. Maybe I'm wrong?
62
63 dim = list(obj.dim)
64 values = np.array(list(obj))
65
66 if len(dim) == 3:
67 arr = values.reshape(dim[-1:] + dim[:-1]).swapaxes(1, 2)
68
69 if obj.names is not None:
70 name_list = [list(x) for x in obj.names]
71 if len(dim) == 2:
72 return pd.DataFrame(arr, index=name_list[0], columns=name_list[1])
73 elif len(dim) == 3:
74 return pd.Panel(arr, items=name_list[2],
75 major_axis=name_list[0],
76 minor_axis=name_list[1])
77 else:
78 print('Cannot handle dim=%d' % len(dim))
79 else:
80 return arr
81
82
83 def _convert_vector(obj):
84 if isinstance(obj, robj.IntVector):
85 return _convert_int_vector(obj)
86 elif isinstance(obj, robj.StrVector):
87 return _convert_str_vector(obj)
88
89 return list(obj)
90
91 NA_INTEGER = -2147483648
92
93
94 def _convert_int_vector(obj):
95 arr = np.asarray(obj)
96 mask = arr == NA_INTEGER
97 if mask.any():
98 arr = arr.astype(float)
99 arr[mask] = np.nan
100 return arr
101
102
103 def _convert_str_vector(obj):
104 arr = np.asarray(obj, dtype=object)
105 mask = arr == robj.NA_Character
106 if mask.any():
107 arr[mask] = np.nan
108 return arr
109
110
111 def _convert_DataFrame(rdf):
112 columns = list(rdf.colnames)
113 rows = np.array(rdf.rownames)
114
115 data = {}
116 for i, col in enumerate(columns):
117 vec = rdf.rx2(i + 1)
118 values = _convert_vector(vec)
119
120 if isinstance(vec, robj.FactorVector):
121 levels = np.asarray(vec.levels)
122 if com.is_float_dtype(values):
123 mask = np.isnan(values)
124 notmask = -mask
125 result = np.empty(len(values), dtype=object)
126 result[mask] = np.nan
127
128 locs = (values[notmask] - 1).astype(np.int_)
129 result[notmask] = levels.take(locs)
130 values = result
131 else:
132 values = np.asarray(vec.levels).take(values - 1)
133
134 data[col] = values
135
136 return pd.DataFrame(data, index=_check_int(rows), columns=columns)
137
138
139 def _convert_Matrix(mat):
140 columns = mat.colnames
141 rows = mat.rownames
142
143 columns = None if _is_null(columns) else list(columns)
144 index = None if _is_null(rows) else list(rows)
145
146 return pd.DataFrame(np.array(mat), index=_check_int(index),
147 columns=columns)
148
149
150 def _check_int(vec):
151 try:
152 # R observation numbers come through as strings
153 vec = vec.astype(int)
154 except Exception:
155 pass
156
157 return vec
158
159 _pandas_converters = [
160 (robj.DataFrame, _convert_DataFrame),
161 (robj.Matrix, _convert_Matrix),
162 (robj.StrVector, _convert_vector),
163 (robj.FloatVector, _convert_vector),
164 (robj.Array, _convert_array),
165 (robj.Vector, _convert_list),
166 ]
167
168 _converters = [
169 (robj.DataFrame, lambda x: _convert_DataFrame(x).toRecords(index=False)),
170 (robj.Matrix, lambda x: _convert_Matrix(x).toRecords(index=False)),
171 (robj.IntVector, _convert_vector),
172 (robj.StrVector, _convert_vector),
173 (robj.FloatVector, _convert_vector),
174 (robj.Array, _convert_array),
175 (robj.Vector, _convert_list),
176 ]
177
178
179 def convert_robj(obj, use_pandas=True):
180 """
181 Convert rpy2 object to a pandas-friendly form
182
183 Parameters
184 ----------
185 obj : rpy2 object
186
187 Returns
188 -------
189 Non-rpy data structure, mix of NumPy and pandas objects
190 """
191 if not isinstance(obj, robj.RObjectMixin):
192 return obj
193
194 converters = _pandas_converters if use_pandas else _converters
195
196 for rpy_type, converter in converters:
197 if isinstance(obj, rpy_type):
198 return converter(obj)
199
200 raise Exception('Do not know what to do with %s object' % type(obj))
201
202
203 def convert_to_r_posixct(obj):
204 """
205 Convert DatetimeIndex or np.datetime array to R POSIXct using
206 m8[s] format.
207
208 Parameters
209 ----------
210 obj : source pandas object (one of [DatetimeIndex, np.datetime])
211
212 Returns
213 -------
214 An R POSIXct vector (rpy2.robjects.vectors.POSIXct)
215
216 """
217 import time
218 from rpy2.rinterface import StrSexpVector
219
220 # convert m8[ns] to m8[s]
221 vals = robj.vectors.FloatSexpVector(obj.values.view('i8') / 1E9)
222 as_posixct = robj.baseenv.get('as.POSIXct')
223 origin = StrSexpVector([time.strftime("%Y-%m-%d",
224 time.gmtime(0)), ])
225
226 # We will be sending ints as UTC
227 tz = obj.tz.zone if hasattr(
228 obj, 'tz') and hasattr(obj.tz, 'zone') else 'UTC'
229 tz = StrSexpVector([tz])
230 utc_tz = StrSexpVector(['UTC'])
231
232 posixct = as_posixct(vals, origin=origin, tz=utc_tz)
233 posixct.do_slot_assign('tzone', tz)
234 return posixct
235
236
237 VECTOR_TYPES = {np.float64: robj.FloatVector,
238 np.float32: robj.FloatVector,
239 np.float: robj.FloatVector,
240 np.int: robj.IntVector,
241 np.int32: robj.IntVector,
242 np.int64: robj.IntVector,
243 np.object_: robj.StrVector,
244 np.str: robj.StrVector,
245 np.bool: robj.BoolVector}
246
247 NA_TYPES = {np.float64: robj.NA_Real,
248 np.float32: robj.NA_Real,
249 np.float: robj.NA_Real,
250 np.int: robj.NA_Integer,
251 np.int32: robj.NA_Integer,
252 np.int64: robj.NA_Integer,
253 np.object_: robj.NA_Character,
254 np.str: robj.NA_Character,
255 np.bool: robj.NA_Logical}
256
257
258 def convert_to_r_dataframe(df, strings_as_factors=False):
259 """
260 Convert a pandas DataFrame to a R data.frame.
261
262 Parameters
263 ----------
264 df: The DataFrame being converted
265 strings_as_factors: Whether to turn strings into R factors (default: False)
266
267 Returns
268 -------
269 A R data.frame
270
271 """
272
273 import rpy2.rlike.container as rlc
274
275 columns = rlc.OrdDict()
276
277 # FIXME: This doesn't handle MultiIndex
278
279 for column in df:
280 value = df[column]
281 value_type = value.dtype.type
282
283 if value_type == np.datetime64:
284 value = convert_to_r_posixct(value)
285 else:
286 value = [item if pd.notnull(item) else NA_TYPES[value_type]
287 for item in value]
288
289 value = VECTOR_TYPES[value_type](value)
290
291 if not strings_as_factors:
292 I = robj.baseenv.get("I")
293 value = I(value)
294
295 columns[column] = value
296
297 r_dataframe = robj.DataFrame(columns)
298
299 del columns
300
301 r_dataframe.rownames = robj.StrVector(df.index)
302
303 return r_dataframe
304
305
306 def convert_to_r_matrix(df, strings_as_factors=False):
307
308 """
309 Convert a pandas DataFrame to a R matrix.
310
311 Parameters
312 ----------
313 df: The DataFrame being converted
314 strings_as_factors: Whether to turn strings into R factors (default: False)
315
316 Returns
317 -------
318 A R matrix
319
320 """
321
322 if df._is_mixed_type:
323 raise TypeError("Conversion to matrix only possible with non-mixed "
324 "type DataFrames")
325
326 r_dataframe = convert_to_r_dataframe(df, strings_as_factors)
327 as_matrix = robj.baseenv.get("as.matrix")
328 r_matrix = as_matrix(r_dataframe)
329
330 return r_matrix
331
332
333 def test_convert_list():
334 obj = r('list(a=1, b=2, c=3)')
335
336 converted = convert_robj(obj)
337 expected = {'a': [1], 'b': [2], 'c': [3]}
338
339 _test.assert_dict_equal(converted, expected)
340
341
342 def test_convert_nested_list():
343 obj = r('list(a=list(foo=1, bar=2))')
344
345 converted = convert_robj(obj)
346 expected = {'a': {'foo': [1], 'bar': [2]}}
347
348 _test.assert_dict_equal(converted, expected)
349
350
351 def test_convert_frame():
352 # built-in dataset
353 df = r['faithful']
354
355 converted = convert_robj(df)
356
357 assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
358 assert np.array_equal(converted.index, np.arange(1, 273))
359
360
361 def _test_matrix():
362 r('mat <- matrix(rnorm(9), ncol=3)')
363 r('colnames(mat) <- c("one", "two", "three")')
364 r('rownames(mat) <- c("a", "b", "c")')
365
366 return r['mat']
367
368
369 def test_convert_matrix():
370 mat = _test_matrix()
371
372 converted = convert_robj(mat)
373
374 assert np.array_equal(converted.index, ['a', 'b', 'c'])
375 assert np.array_equal(converted.columns, ['one', 'two', 'three'])
376
377
378 def test_convert_r_dataframe():
379
380 is_na = robj.baseenv.get("is.na")
381
382 seriesd = _test.getSeriesData()
383 frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
384
385 # Null data
386 frame["E"] = [np.nan for item in frame["A"]]
387 # Some mixed type data
388 frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
389
390 r_dataframe = convert_to_r_dataframe(frame)
391
392 assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
393 assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
394 assert all(is_na(item) for item in r_dataframe.rx2("E"))
395
396 for column in frame[["A", "B", "C", "D"]]:
397 coldata = r_dataframe.rx2(column)
398 original_data = frame[column]
399 assert np.array_equal(convert_robj(coldata), original_data)
400
401 for column in frame[["D", "E"]]:
402 for original, converted in zip(frame[column],
403 r_dataframe.rx2(column)):
404
405 if pd.isnull(original):
406 assert is_na(converted)
407 else:
408 assert original == converted
409
410
411 def test_convert_r_matrix():
412
413 is_na = robj.baseenv.get("is.na")
414
415 seriesd = _test.getSeriesData()
416 frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
417 # Null data
418 frame["E"] = [np.nan for item in frame["A"]]
419
420 r_dataframe = convert_to_r_matrix(frame)
421
422 assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
423 assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
424 assert all(is_na(item) for item in r_dataframe.rx(True, "E"))
425
426 for column in frame[["A", "B", "C", "D"]]:
427 coldata = r_dataframe.rx(True, column)
428 original_data = frame[column]
429 assert np.array_equal(convert_robj(coldata),
430 original_data)
431
432 # Pandas bug 1282
433 frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
434
435 # FIXME: Ugly, this whole module needs to be ported to nose/unittest
436 try:
437 wrong_matrix = convert_to_r_matrix(frame)
438 except TypeError:
439 pass
440 except Exception:
441 raise
442
443
444 if __name__ == '__main__':
445 pass
446
[end of pandas/rpy/common.py]
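A short round-trip through the converters defined above, assuming rpy2 and a working R installation are available (which this module requires at import time anyway); the frame contents are arbitrary:

```python
# pandas DataFrame -> R data.frame -> pandas DataFrame, using the helpers above.
import pandas as pd
import pandas.rpy.common as com

frame = pd.DataFrame({'x': [1.0, 2.0, 3.0], 'y': ['a', 'b', 'c']},
                     index=['r1', 'r2', 'r3'])
r_frame = com.convert_to_r_dataframe(frame)   # to R data.frame
roundtrip = com.convert_robj(r_frame)         # back to pandas
print(roundtrip)
```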
[start of pandas/sparse/panel.py]
1 """
2 Data structures for sparse float data. Life is made simpler by dealing only
3 with float64 data
4 """
5
6 # pylint: disable=E1101,E1103,W0231
7
8 from pandas.compat import range, lrange, zip
9 from pandas import compat
10 import numpy as np
11
12 from pandas.core.index import Index, MultiIndex, _ensure_index
13 from pandas.core.frame import DataFrame
14 from pandas.core.panel import Panel
15 from pandas.sparse.frame import SparseDataFrame
16 from pandas.util.decorators import deprecate
17
18 import pandas.core.common as com
19
20
21 class SparsePanelAxis(object):
22
23 def __init__(self, cache_field, frame_attr):
24 self.cache_field = cache_field
25 self.frame_attr = frame_attr
26
27 def __get__(self, obj, type=None):
28 return getattr(obj, self.cache_field, None)
29
30 def __set__(self, obj, value):
31 value = _ensure_index(value)
32
33 if isinstance(value, MultiIndex):
34 raise NotImplementedError
35
36 for v in compat.itervalues(obj._frames):
37 setattr(v, self.frame_attr, value)
38
39 setattr(obj, self.cache_field, value)
40
41
42 class SparsePanel(Panel):
43 """
44 Sparse version of Panel
45
46 Parameters
47 ----------
48 frames : dict of DataFrame objects
49 items : array-like
50 major_axis : array-like
51 minor_axis : array-like
52 default_kind : {'block', 'integer'}, default 'block'
53 Default sparse kind for converting Series to SparseSeries. Will not
54 override SparseSeries passed into constructor
55 default_fill_value : float
56 Default fill_value for converting Series to SparseSeries. Will not
57 override SparseSeries passed in
58
59 Notes
60 -----
61 """
62 ndim = 3
63
64 def __init__(self, frames, items=None, major_axis=None, minor_axis=None,
65 default_fill_value=np.nan, default_kind='block'):
66 if isinstance(frames, np.ndarray):
67 new_frames = {}
68 for item, vals in zip(items, frames):
69 new_frames[item] = \
70 SparseDataFrame(vals, index=major_axis,
71 columns=minor_axis,
72 default_fill_value=default_fill_value,
73 default_kind=default_kind)
74 frames = new_frames
75
76 if not (isinstance(frames, dict)):
77 raise AssertionError()
78
79 self.default_fill_value = fill_value = default_fill_value
80 self.default_kind = kind = default_kind
81
82 # pre-filter, if necessary
83 if items is None:
84 items = Index(sorted(frames.keys()))
85 items = _ensure_index(items)
86
87 (clean_frames,
88 major_axis,
89 minor_axis) = _convert_frames(frames, major_axis,
90 minor_axis, kind=kind,
91 fill_value=fill_value)
92
93 self._frames = clean_frames
94
95 # do we want to fill missing ones?
96 for item in items:
97 if item not in clean_frames:
98 raise Exception('column %s not found in data' % item)
99
100 self._items = items
101 self.major_axis = major_axis
102 self.minor_axis = minor_axis
103
104 def _consolidate_inplace(self): # pragma: no cover
105 # do nothing when DataFrame calls this method
106 pass
107
108 def __array_wrap__(self, result):
109 return SparsePanel(result, items=self.items,
110 major_axis=self.major_axis,
111 minor_axis=self.minor_axis,
112 default_kind=self.default_kind,
113 default_fill_value=self.default_fill_value)
114
115 @classmethod
116 def from_dict(cls, data):
117 """
118 Analogous to Panel.from_dict
119 """
120 return SparsePanel(data)
121
122 def to_dense(self):
123 """
124 Convert SparsePanel to (dense) Panel
125
126 Returns
127 -------
128 dense : Panel
129 """
130 return Panel(self.values, self.items, self.major_axis,
131 self.minor_axis)
132
133 @property
134 def values(self):
135 # return dense values
136 return np.array([self._frames[item].values
137 for item in self.items])
138
139 # need a special property for items to make the field assignable
140
141 _items = None
142
143 def _get_items(self):
144 return self._items
145
146 def _set_items(self, new_items):
147 new_items = _ensure_index(new_items)
148 if isinstance(new_items, MultiIndex):
149 raise NotImplementedError
150
151 # need to create new frames dict
152
153 old_frame_dict = self._frames
154 old_items = self._items
155 self._frames = dict((new_k, old_frame_dict[old_k])
156 for new_k, old_k in zip(new_items, old_items))
157 self._items = new_items
158 items = property(fget=_get_items, fset=_set_items)
159
160 # DataFrame's index
161 major_axis = SparsePanelAxis('_major_axis', 'index')
162
163 # DataFrame's columns / "items"
164 minor_axis = SparsePanelAxis('_minor_axis', 'columns')
165
166 def _get_item_cache(self, key):
167 return self._frames[key]
168
169 def __setitem__(self, key, value):
170 if isinstance(value, DataFrame):
171 value = value.reindex(index=self.major_axis,
172 columns=self.minor_axis)
173 if not isinstance(value, SparseDataFrame):
174 value = value.to_sparse(fill_value=self.default_fill_value,
175 kind=self.default_kind)
176 else:
177 raise ValueError('only DataFrame objects can be set currently')
178
179 self._frames[key] = value
180
181 if key not in self.items:
182 self._items = Index(list(self.items) + [key])
183
184 def set_value(self, item, major, minor, value):
185 """
186 Quickly set single value at (item, major, minor) location
187
188 Parameters
189 ----------
190 item : item label (panel item)
191 major : major axis label (panel item row)
192 minor : minor axis label (panel item column)
193 value : scalar
194
195 Notes
196 -----
197 This method *always* returns a new object. It is not particularly
198 efficient but is provided for API compatibility with Panel
199
200 Returns
201 -------
202 panel : SparsePanel
203 """
204 dense = self.to_dense().set_value(item, major, minor, value)
205 return dense.to_sparse(kind=self.default_kind,
206 fill_value=self.default_fill_value)
207
208 def __delitem__(self, key):
209 loc = self.items.get_loc(key)
210 indices = lrange(loc) + lrange(loc + 1, len(self.items))
211 del self._frames[key]
212 self._items = self._items.take(indices)
213
214 def __getstate__(self):
215 # pickling
216 return (self._frames, com._pickle_array(self.items),
217 com._pickle_array(self.major_axis),
218 com._pickle_array(self.minor_axis),
219 self.default_fill_value, self.default_kind)
220
221 def __setstate__(self, state):
222 frames, items, major, minor, fv, kind = state
223
224 self.default_fill_value = fv
225 self.default_kind = kind
226 self._items = _ensure_index(com._unpickle_array(items))
227 self._major_axis = _ensure_index(com._unpickle_array(major))
228 self._minor_axis = _ensure_index(com._unpickle_array(minor))
229 self._frames = frames
230
231 def copy(self):
232 """
233 Make a (shallow) copy of the sparse panel
234
235 Returns
236 -------
237 copy : SparsePanel
238 """
239 return SparsePanel(self._frames.copy(), items=self.items,
240 major_axis=self.major_axis,
241 minor_axis=self.minor_axis,
242 default_fill_value=self.default_fill_value,
243 default_kind=self.default_kind)
244
245 def to_frame(self, filter_observations=True):
246 """
247 Convert SparsePanel to (dense) DataFrame
248
249 Returns
250 -------
251 frame : DataFrame
252 """
253 if not filter_observations:
254 raise TypeError('filter_observations=False not supported for '
255 'SparsePanel.to_long')
256
257 I, N, K = self.shape
258 counts = np.zeros(N * K, dtype=int)
259
260 d_values = {}
261 d_indexer = {}
262
263 for item in self.items:
264 frame = self[item]
265
266 values, major, minor = _stack_sparse_info(frame)
267
268 # values are stacked column-major
269 indexer = minor * N + major
270 counts.put(indexer, counts.take(indexer) + 1) # cuteness
271
272 d_values[item] = values
273 d_indexer[item] = indexer
274
275 # have full set of observations for each item
276 mask = counts == I
277
278 # for each item, take mask values at index locations for those sparse
279 # values, and use that to select values
280 values = np.column_stack([d_values[item][mask.take(d_indexer[item])]
281 for item in self.items])
282
283 inds, = mask.nonzero()
284
285 # still column major
286 major_labels = inds % N
287 minor_labels = inds // N
288
289 index = MultiIndex(levels=[self.major_axis, self.minor_axis],
290 labels=[major_labels, minor_labels])
291
292 df = DataFrame(values, index=index, columns=self.items)
293 return df.sortlevel(level=0)
294
295 to_long = deprecate('to_long', to_frame)
296 toLong = deprecate('toLong', to_frame)
297
298 def reindex(self, major=None, items=None, minor=None, major_axis=None,
299 minor_axis=None, copy=False):
300 """
301 Conform / reshape panel axis labels to new input labels
302
303 Parameters
304 ----------
305 major : array-like, default None
306 items : array-like, default None
307 minor : array-like, default None
308 copy : boolean, default False
309 Copy underlying SparseDataFrame objects
310
311 Returns
312 -------
313 reindexed : SparsePanel
314 """
315 major = com._mut_exclusive(major, major_axis)
316 minor = com._mut_exclusive(minor, minor_axis)
317
318 if com._all_none(items, major, minor):
319 raise ValueError('Must specify at least one axis')
320
321 major = self.major_axis if major is None else major
322 minor = self.minor_axis if minor is None else minor
323
324 if items is not None:
325 new_frames = {}
326 for item in items:
327 if item in self._frames:
328 new_frames[item] = self._frames[item]
329 else:
330 raise NotImplementedError('Reindexing with new items not yet '
331 'supported')
332 else:
333 new_frames = self._frames
334
335 if copy:
336 new_frames = dict((k, v.copy()) for k, v in compat.iteritems(new_frames))
337
338 return SparsePanel(new_frames, items=items,
339 major_axis=major,
340 minor_axis=minor,
341 default_fill_value=self.default_fill_value,
342 default_kind=self.default_kind)
343
344 def _combine(self, other, func, axis=0):
345 if isinstance(other, DataFrame):
346 return self._combineFrame(other, func, axis=axis)
347 elif isinstance(other, Panel):
348 return self._combinePanel(other, func)
349 elif np.isscalar(other):
350 new_frames = dict((k, func(v, other))
351 for k, v in compat.iteritems(self))
352 return self._new_like(new_frames)
353
354 def _combineFrame(self, other, func, axis=0):
355 index, columns = self._get_plane_axes(axis)
356 axis = self._get_axis_number(axis)
357
358 other = other.reindex(index=index, columns=columns)
359
360 if axis == 0:
361 new_values = func(self.values, other.values)
362 elif axis == 1:
363 new_values = func(self.values.swapaxes(0, 1), other.values.T)
364 new_values = new_values.swapaxes(0, 1)
365 elif axis == 2:
366 new_values = func(self.values.swapaxes(0, 2), other.values)
367 new_values = new_values.swapaxes(0, 2)
368
369 # TODO: make faster!
370 new_frames = {}
371 for item, item_slice in zip(self.items, new_values):
372 old_frame = self[item]
373 ofv = old_frame.default_fill_value
374 ok = old_frame.default_kind
375 new_frames[item] = SparseDataFrame(item_slice,
376 index=self.major_axis,
377 columns=self.minor_axis,
378 default_fill_value=ofv,
379 default_kind=ok)
380
381 return self._new_like(new_frames)
382
383 def _new_like(self, new_frames):
384 return SparsePanel(new_frames, self.items, self.major_axis,
385 self.minor_axis,
386 default_fill_value=self.default_fill_value,
387 default_kind=self.default_kind)
388
389 def _combinePanel(self, other, func):
390 items = self.items + other.items
391 major = self.major_axis + other.major_axis
392 minor = self.minor_axis + other.minor_axis
393
394 # could check that everything's the same size, but forget it
395
396 this = self.reindex(items=items, major=major, minor=minor)
397 other = other.reindex(items=items, major=major, minor=minor)
398
399 new_frames = {}
400 for item in items:
401 new_frames[item] = func(this[item], other[item])
402
403 if not isinstance(other, SparsePanel):
404 new_default_fill = self.default_fill_value
405 else:
406 # maybe unnecessary
407 new_default_fill = func(self.default_fill_value,
408 other.default_fill_value)
409
410 return SparsePanel(new_frames, items, major, minor,
411 default_fill_value=new_default_fill,
412 default_kind=self.default_kind)
413
414 def major_xs(self, key):
415 """
416 Return slice of panel along major axis
417
418 Parameters
419 ----------
420 key : object
421 Major axis label
422
423 Returns
424 -------
425 y : DataFrame
426 index -> minor axis, columns -> items
427 """
428 slices = dict((k, v.xs(key)) for k, v in compat.iteritems(self))
429 return DataFrame(slices, index=self.minor_axis, columns=self.items)
430
431 def minor_xs(self, key):
432 """
433 Return slice of panel along minor axis
434
435 Parameters
436 ----------
437 key : object
438 Minor axis label
439
440 Returns
441 -------
442 y : SparseDataFrame
443 index -> major axis, columns -> items
444 """
445 slices = dict((k, v[key]) for k, v in compat.iteritems(self))
446 return SparseDataFrame(slices, index=self.major_axis,
447 columns=self.items,
448 default_fill_value=self.default_fill_value,
449 default_kind=self.default_kind)
450
451 SparseWidePanel = SparsePanel
452
453
454 def _convert_frames(frames, index, columns, fill_value=np.nan, kind='block'):
455 from pandas.core.panel import _get_combined_index
456 output = {}
457 for item, df in compat.iteritems(frames):
458 if not isinstance(df, SparseDataFrame):
459 df = SparseDataFrame(df, default_kind=kind,
460 default_fill_value=fill_value)
461
462 output[item] = df
463
464 if index is None:
465 all_indexes = [df.index for df in output.values()]
466 index = _get_combined_index(all_indexes)
467 if columns is None:
468 all_columns = [df.columns for df in output.values()]
469 columns = _get_combined_index(all_columns)
470
471 index = _ensure_index(index)
472 columns = _ensure_index(columns)
473
474 for item, df in compat.iteritems(output):
475 if not (df.index.equals(index) and df.columns.equals(columns)):
476 output[item] = df.reindex(index=index, columns=columns)
477
478 return output, index, columns
479
480
481 def _stack_sparse_info(frame):
482 lengths = [s.sp_index.npoints for _, s in compat.iteritems(frame)]
483
484 # this is pretty fast
485 minor_labels = np.repeat(np.arange(len(frame.columns)), lengths)
486
487 inds_to_concat = []
488 vals_to_concat = []
489 for col in frame.columns:
490 series = frame[col]
491
492 if not np.isnan(series.fill_value):
493 raise TypeError('This routine assumes NaN fill value')
494
495 int_index = series.sp_index.to_int_index()
496 inds_to_concat.append(int_index.indices)
497 vals_to_concat.append(series.sp_values)
498
499 major_labels = np.concatenate(inds_to_concat)
500 sparse_values = np.concatenate(vals_to_concat)
501
502 return sparse_values, major_labels, minor_labels
503
[end of pandas/sparse/panel.py]
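Finally, a minimal construction of the `SparsePanel` defined above, mainly to show the frames-dict constructor and the axis cross-sections the docstrings describe. Again this assumes a pandas of this vintage; the sparse panel structures were later removed.

```python
# Build a tiny SparsePanel from a dict of DataFrames and slice it.
import numpy as np
import pandas as pd
from pandas.sparse.panel import SparsePanel

idx = pd.date_range('2000-01-01', periods=4)
frames = {name: pd.DataFrame(np.random.randn(4, 3), index=idx, columns=list('abc'))
          for name in ['one', 'two']}

sp = SparsePanel(frames)
print(sp.items)                  # items come from the dict keys, sorted
print(sp.to_dense().shape)       # (2, 4, 3): items x major_axis x minor_axis
print(sp.minor_xs('a').columns)  # items become the columns of the cross-section
```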
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
3841ae687748fca1c784d768606a5f11539937ed
|
Pandas.ix[:, "2002"] returns a DataFrame
``` python
import pandas as pd
import numpy as np
ind = pd.date_range(start="2000", freq="D", periods=1000)
df = pd.DataFrame(np.random.randn(len(ind), 5), index=ind, columns=list('ABCDE'))
panel = pd.Panel({'frame_'+c:df for c in list('ABC')})
test1 = panel.ix[:, "2002"]
test1.ndim # 3
type(test1) # pandas.core.frame.DataFrame
test2 = panel.ix[:, "2002":"2002-12-31"]
test2.ndim # 3
type(test2) # pandas.core.panel.Panel
print pd.__version__
#0.11.1.dev-d7fe745
```
When trying to grab all the data for year 2002, I get back a DataFrame. If I use a range for the major axis, then it returns a Panel.
http://nbviewer.ipython.org/5853887
|
yep....not tested real well here; nor is some of this implemented for >2 dim, marking as a bug
ha! beat u! sorry :P
wonder if we were looking at the notebook at the same time...
@jreback should be marked as 0.11.1?
you can do a PR if you want, but marked as 0.12; nothing more for 0.11.1 (unless its ready and tested and can be merged right away)....no new issues
excellent.
|
2013-08-07T19:18:42Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -129,6 +129,10 @@ pandas 0.13
(:issue:`4486`)
- Fixed an issue where cumsum and cumprod didn't work with bool dtypes
(:issue:`4170`, :issue:`4440`)
+ - Fixed Panel slicing issued in ``xs`` that was returning an incorrect dimmed object
+ (:issue:`4016`)
+ - Fixed Panel assignment with a transposed frame (:issue:`3830`)
+ - Raise on set indexing with a Panel and a Panel as a value which needs alignment (:issue:`3777`)
pandas 0.12
===========
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -100,7 +100,7 @@ def _convert_tuple(self, key):
return tuple(keyidx)
def _setitem_with_indexer(self, indexer, value):
- from pandas.core.frame import DataFrame, Series
+ from pandas import Panel, DataFrame, Series
# also has the side effect of consolidating in-place
@@ -181,6 +181,9 @@ def setter(item, v):
if isinstance(value, DataFrame):
value = self._align_frame(indexer, value)
+ if isinstance(value, Panel):
+ value = self._align_panel(indexer, value)
+
# 2096
values = self.obj.values
if np.prod(values.shape):
@@ -208,12 +211,11 @@ def _align_series(self, indexer, ser):
raise ValueError('Incompatible indexer with Series')
def _align_frame(self, indexer, df):
- from pandas import DataFrame
- is_frame = isinstance(self.obj, DataFrame)
- if not is_frame:
- df = df.T
+ is_frame = self.obj.ndim == 2
+ is_panel = self.obj.ndim >= 3
if isinstance(indexer, tuple):
idx, cols = None, None
+ sindexers = []
for i, ix in enumerate(indexer):
ax = self.obj.axes[i]
if com._is_sequence(ix) or isinstance(ix, slice):
@@ -223,6 +225,16 @@ def _align_frame(self, indexer, df):
cols = ax[ix].ravel()
else:
break
+ else:
+ sindexers.append(i)
+
+ # panel
+ if is_panel:
+ if len(sindexers) == 1 and idx is None and cols is None:
+ if sindexers[0] == 0:
+ df = df.T
+ return self.obj.conform(df,axis=sindexers[0])
+ df = df.T
if idx is not None and cols is not None:
if df.index.equals(idx) and df.columns.equals(cols):
@@ -244,12 +256,27 @@ def _align_frame(self, indexer, df):
idx = self.obj.axes[1]
cols = self.obj.axes[2]
+ # by definition we are indexing on the 0th axis
+ if is_panel:
+ df = df.T
+
if idx.equals(df.index) and cols.equals(df.columns):
return df.copy().values
+
+ # a passed in dataframe which is actually a transpose
+ # of what is needed
+ elif idx.equals(df.columns) and cols.equals(df.index):
+ return df.T.copy().values
+
return df.reindex(idx, columns=cols).values
raise ValueError('Incompatible indexer with DataFrame')
+ def _align_panel(self, indexer, df):
+ is_frame = self.obj.ndim == 2
+ is_panel = self.obj.ndim >= 3
+ raise NotImplementedError("cannot set using an indexer with a Panel yet!")
+
def _getitem_tuple(self, tup):
try:
return self._getitem_lowerdim(tup)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -1048,7 +1048,7 @@ def xs(self, key, axis=1, copy=True):
self._consolidate_inplace()
axis_number = self._get_axis_number(axis)
new_data = self._data.xs(key, axis=axis_number, copy=copy)
- return self._constructor_sliced(new_data)
+ return self._construct_return_type(new_data)
_xs = xs
@@ -1263,24 +1263,33 @@ def _reduce(self, op, axis=0, skipna=True):
if result.ndim == 2 and axis_name != self._info_axis:
result = result.T
- return self._constructor_sliced(result,
+ return self._construct_return_type(result, axes)
+
+ def _construct_return_type(self, result, axes=None, **kwargs):
+ """ return the type for the ndim of the result """
+ ndim = result.ndim
+ if self.ndim == ndim:
+ """ return the construction dictionary for these axes """
+ if axes is None:
+ return self._constructor(result)
+ return self._constructor(result, **self._construct_axes_dict())
+
+ elif self.ndim == ndim + 1:
+ if axes is None:
+ return self._constructor_sliced(result)
+ return self._constructor_sliced(result,
**self._extract_axes_for_slice(self, axes))
+ raise PandasError("invalid _construct_return_type [self->%s] [result->%s]" %
+ (self.ndim, result.ndim))
+
def _wrap_result(self, result, axis):
axis = self._get_axis_name(axis)
axes = self._get_plane_axes(axis)
if result.ndim == 2 and axis != self._info_axis:
result = result.T
- # do we have reduced dimensionalility?
- if self.ndim == result.ndim:
- return self._constructor(result, **self._construct_axes_dict())
- elif self.ndim == result.ndim + 1:
- return self._constructor_sliced(result,
- **self._extract_axes_for_slice(self, axes))
-
- raise PandasError("invalid _wrap_result [self->%s] [result->%s]" %
- (self.ndim, result.ndim))
+ return self._construct_return_type(result, axes)
def count(self, axis='major'):
"""
</patch>
|
[]
|
[]
| |||
ipython__ipython-1831
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
__file__ is not defined when file end with .ipy
I named my IPython file.ipy
I propose .ipy ? objections ?
I feel ipython should not be confused with plain python!
```
echo "#!/usr/bin/env ipython" > test.ipy
echo "print __file__" >> test.ipy
cat test.ipy
#!/usr/bin/env ipython
print __file__
```
```
chmod +x test.ipy
./test.ipy
> NameError: name '__file__' is not defined
ipython test.ipy
> NameError: name '__file__' is not defined
## run with plain python
python test.ipy
> ./test.ipy
SUCCESS:
## rename file to be .py seems to solve the issue
mv ./test.ipy ./test.py
ipython test.ipy
> ./test.py
SUCCESS:
./test.py
> ./test.py
SUCCESS:
```
Should we not trust files with different endings... I have a feeling files not ending with .py are blacklisted to prevent issues with cgi or something... not too sure. Perhaps it has something to do with the shebang notation?
debian kubuntu 12.10
```
ipython -v
0.12.1
```
Thanks
</issue>
<code>
[start of README.rst]
1 ===========================================
2 IPython: Productive Interactive Computing
3 ===========================================
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation is available on `our website
9 <http://ipython.org/documentation.html>`_; if you downloaded a built source
10 distribution the ``docs/source`` directory contains the plaintext version of
11 these manuals. If you have Sphinx installed, you can build them by typing
12 ``make html`` for local browsing.
13
14
15 Dependencies and supported Python versions
16 ==========================================
17
18 For full details, see the installation section of the manual. The basic parts
19 of IPython only need the Python standard library, but much of its more advanced
20 functionality requires extra packages.
21
22 Officially, IPython requires Python version 2.6, 2.7, or 3.1 and above.
23
24
25 Instant running
26 ===============
27
28 You can run IPython from this directory without even installing it system-wide
29 by typing at the terminal::
30
31 $ python ipython.py
32
[end of README.rst]
[start of IPython/core/magics/execution.py]
1 """Implementation of execution-related magic functions.
2 """
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012 The IPython Development Team.
5 #
6 # Distributed under the terms of the Modified BSD License.
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
10
11 #-----------------------------------------------------------------------------
12 # Imports
13 #-----------------------------------------------------------------------------
14
15 # Stdlib
16 import __builtin__ as builtin_mod
17 import bdb
18 import os
19 import sys
20 import time
21 from StringIO import StringIO
22
23 # cProfile was added in Python2.5
24 try:
25 import cProfile as profile
26 import pstats
27 except ImportError:
28 # profile isn't bundled by default in Debian for license reasons
29 try:
30 import profile, pstats
31 except ImportError:
32 profile = pstats = None
33
34 # Our own packages
35 from IPython.core import debugger, oinspect
36 from IPython.core import magic_arguments
37 from IPython.core import page
38 from IPython.core.error import UsageError
39 from IPython.core.macro import Macro
40 from IPython.core.magic import (Magics, magics_class, line_magic, cell_magic,
41 line_cell_magic, on_off, needs_local_scope)
42 from IPython.testing.skipdoctest import skip_doctest
43 from IPython.utils import py3compat
44 from IPython.utils.io import capture_output
45 from IPython.utils.ipstruct import Struct
46 from IPython.utils.module_paths import find_mod
47 from IPython.utils.path import get_py_filename, unquote_filename
48 from IPython.utils.timing import clock, clock2
49 from IPython.utils.warn import warn, error
50
51 #-----------------------------------------------------------------------------
52 # Magic implementation classes
53 #-----------------------------------------------------------------------------
54
55 @magics_class
56 class ExecutionMagics(Magics):
57 """Magics related to code execution, debugging, profiling, etc.
58
59 """
60
61 def __init__(self, shell):
62 super(ExecutionMagics, self).__init__(shell)
63 if profile is None:
64 self.prun = self.profile_missing_notice
65 # Default execution function used to actually run user code.
66 self.default_runner = None
67
68 def profile_missing_notice(self, *args, **kwargs):
69 error("""\
70 The profile module could not be found. It has been removed from the standard
71 python packages because of its non-free license. To use profiling, install the
72 python-profiler package from non-free.""")
73
74 @skip_doctest
75 @line_cell_magic
76 def prun(self, parameter_s='', cell=None, user_mode=True,
77 opts=None,arg_lst=None,prog_ns=None):
78
79 """Run a statement through the python code profiler.
80
81 Usage, in line mode:
82 %prun [options] statement
83
84 Usage, in cell mode:
85 %%prun [options] [statement]
86 code...
87 code...
88
89 In cell mode, the additional code lines are appended to the (possibly
90 empty) statement in the first line. Cell mode allows you to easily
91 profile multiline blocks without having to put them in a separate
92 function.
93
94 The given statement (which doesn't require quote marks) is run via the
95 python profiler in a manner similar to the profile.run() function.
96 Namespaces are internally managed to work correctly; profile.run
97 cannot be used in IPython because it makes certain assumptions about
98 namespaces which do not hold under IPython.
99
100 Options:
101
102 -l <limit>: you can place restrictions on what or how much of the
103 profile gets printed. The limit value can be:
104
105 * A string: only information for function names containing this string
106 is printed.
107
108 * An integer: only these many lines are printed.
109
110 * A float (between 0 and 1): this fraction of the report is printed
111 (for example, use a limit of 0.4 to see the topmost 40% only).
112
113 You can combine several limits with repeated use of the option. For
114 example, '-l __init__ -l 5' will print only the topmost 5 lines of
115 information about class constructors.
116
117 -r: return the pstats.Stats object generated by the profiling. This
118 object has all the information about the profile in it, and you can
119 later use it for further analysis or in other functions.
120
121 -s <key>: sort profile by given key. You can provide more than one key
122 by using the option several times: '-s key1 -s key2 -s key3...'. The
123 default sorting key is 'time'.
124
125 The following is copied verbatim from the profile documentation
126 referenced below:
127
128 When more than one key is provided, additional keys are used as
129 secondary criteria when the there is equality in all keys selected
130 before them.
131
132 Abbreviations can be used for any key names, as long as the
133 abbreviation is unambiguous. The following are the keys currently
134 defined:
135
136 Valid Arg Meaning
137 "calls" call count
138 "cumulative" cumulative time
139 "file" file name
140 "module" file name
141 "pcalls" primitive call count
142 "line" line number
143 "name" function name
144 "nfl" name/file/line
145 "stdname" standard name
146 "time" internal time
147
148 Note that all sorts on statistics are in descending order (placing
149 most time consuming items first), where as name, file, and line number
150 searches are in ascending order (i.e., alphabetical). The subtle
151 distinction between "nfl" and "stdname" is that the standard name is a
152 sort of the name as printed, which means that the embedded line
153 numbers get compared in an odd way. For example, lines 3, 20, and 40
154 would (if the file names were the same) appear in the string order
155 "20" "3" and "40". In contrast, "nfl" does a numeric compare of the
156 line numbers. In fact, sort_stats("nfl") is the same as
157 sort_stats("name", "file", "line").
158
159 -T <filename>: save profile results as shown on screen to a text
160 file. The profile is still shown on screen.
161
162 -D <filename>: save (via dump_stats) profile statistics to given
163 filename. This data is in a format understood by the pstats module, and
164 is generated by a call to the dump_stats() method of profile
165 objects. The profile is still shown on screen.
166
167 -q: suppress output to the pager. Best used with -T and/or -D above.
168
169 If you want to run complete programs under the profiler's control, use
170 '%run -p [prof_opts] filename.py [args to program]' where prof_opts
171 contains profiler specific options as described here.
172
173 You can read the complete documentation for the profile module with::
174
175 In [1]: import profile; profile.help()
176 """
177
178 opts_def = Struct(D=[''],l=[],s=['time'],T=[''])
179
180 if user_mode: # regular user call
181 opts,arg_str = self.parse_options(parameter_s,'D:l:rs:T:q',
182 list_all=True, posix=False)
183 namespace = self.shell.user_ns
184 if cell is not None:
185 arg_str += '\n' + cell
186 else: # called to run a program by %run -p
187 try:
188 filename = get_py_filename(arg_lst[0])
189 except IOError as e:
190 try:
191 msg = str(e)
192 except UnicodeError:
193 msg = e.message
194 error(msg)
195 return
196
197 arg_str = 'execfile(filename,prog_ns)'
198 namespace = {
199 'execfile': self.shell.safe_execfile,
200 'prog_ns': prog_ns,
201 'filename': filename
202 }
203
204 opts.merge(opts_def)
205
206 prof = profile.Profile()
207 try:
208 prof = prof.runctx(arg_str,namespace,namespace)
209 sys_exit = ''
210 except SystemExit:
211 sys_exit = """*** SystemExit exception caught in code being profiled."""
212
213 stats = pstats.Stats(prof).strip_dirs().sort_stats(*opts.s)
214
215 lims = opts.l
216 if lims:
217 lims = [] # rebuild lims with ints/floats/strings
218 for lim in opts.l:
219 try:
220 lims.append(int(lim))
221 except ValueError:
222 try:
223 lims.append(float(lim))
224 except ValueError:
225 lims.append(lim)
226
227 # Trap output.
228 stdout_trap = StringIO()
229
230 if hasattr(stats,'stream'):
231 # In newer versions of python, the stats object has a 'stream'
232 # attribute to write into.
233 stats.stream = stdout_trap
234 stats.print_stats(*lims)
235 else:
236 # For older versions, we manually redirect stdout during printing
237 sys_stdout = sys.stdout
238 try:
239 sys.stdout = stdout_trap
240 stats.print_stats(*lims)
241 finally:
242 sys.stdout = sys_stdout
243
244 output = stdout_trap.getvalue()
245 output = output.rstrip()
246
247 if 'q' not in opts:
248 page.page(output)
249 print sys_exit,
250
251 dump_file = opts.D[0]
252 text_file = opts.T[0]
253 if dump_file:
254 dump_file = unquote_filename(dump_file)
255 prof.dump_stats(dump_file)
256 print '\n*** Profile stats marshalled to file',\
257 `dump_file`+'.',sys_exit
258 if text_file:
259 text_file = unquote_filename(text_file)
260 pfile = open(text_file,'w')
261 pfile.write(output)
262 pfile.close()
263 print '\n*** Profile printout saved to text file',\
264 `text_file`+'.',sys_exit
265
266 if opts.has_key('r'):
267 return stats
268 else:
269 return None
270
271 @line_magic
272 def pdb(self, parameter_s=''):
273 """Control the automatic calling of the pdb interactive debugger.
274
275 Call as '%pdb on', '%pdb 1', '%pdb off' or '%pdb 0'. If called without
276 argument it works as a toggle.
277
278 When an exception is triggered, IPython can optionally call the
279 interactive pdb debugger after the traceback printout. %pdb toggles
280 this feature on and off.
281
282 The initial state of this feature is set in your configuration
283 file (the option is ``InteractiveShell.pdb``).
284
285 If you want to just activate the debugger AFTER an exception has fired,
286 without having to type '%pdb on' and rerunning your code, you can use
287 the %debug magic."""
288
289 par = parameter_s.strip().lower()
290
291 if par:
292 try:
293 new_pdb = {'off':0,'0':0,'on':1,'1':1}[par]
294 except KeyError:
295 print ('Incorrect argument. Use on/1, off/0, '
296 'or nothing for a toggle.')
297 return
298 else:
299 # toggle
300 new_pdb = not self.shell.call_pdb
301
302 # set on the shell
303 self.shell.call_pdb = new_pdb
304 print 'Automatic pdb calling has been turned',on_off(new_pdb)
305
306 @line_magic
307 def debug(self, parameter_s=''):
308 """Activate the interactive debugger in post-mortem mode.
309
310 If an exception has just occurred, this lets you inspect its stack
311 frames interactively. Note that this will always work only on the last
312 traceback that occurred, so you must call this quickly after an
313 exception that you wish to inspect has fired, because if another one
314 occurs, it clobbers the previous one.
315
316 If you want IPython to automatically do this on every exception, see
317 the %pdb magic for more details.
318 """
319 self.shell.debugger(force=True)
320
321 @line_magic
322 def tb(self, s):
323 """Print the last traceback with the currently active exception mode.
324
325 See %xmode for changing exception reporting modes."""
326 self.shell.showtraceback()
327
328 @skip_doctest
329 @line_magic
330 def run(self, parameter_s='', runner=None,
331 file_finder=get_py_filename):
332 """Run the named file inside IPython as a program.
333
334 Usage:\\
335 %run [-n -i -t [-N<N>] -d [-b<N>] -p [profile options]] file [args]
336
337 Parameters after the filename are passed as command-line arguments to
338 the program (put in sys.argv). Then, control returns to IPython's
339 prompt.
340
341 This is similar to running at a system prompt:\\
342 $ python file args\\
343 but with the advantage of giving you IPython's tracebacks, and of
344 loading all variables into your interactive namespace for further use
345 (unless -p is used, see below).
346
347 The file is executed in a namespace initially consisting only of
348 __name__=='__main__' and sys.argv constructed as indicated. It thus
349 sees its environment as if it were being run as a stand-alone program
350 (except for sharing global objects such as previously imported
351 modules). But after execution, the IPython interactive namespace gets
352 updated with all variables defined in the program (except for __name__
353 and sys.argv). This allows for very convenient loading of code for
354 interactive work, while giving each program a 'clean sheet' to run in.
355
356 Options:
357
358 -n: __name__ is NOT set to '__main__', but to the running file's name
359 without extension (as python does under import). This allows running
360 scripts and reloading the definitions in them without calling code
361 protected by an ' if __name__ == "__main__" ' clause.
362
363 -i: run the file in IPython's namespace instead of an empty one. This
364 is useful if you are experimenting with code written in a text editor
365 which depends on variables defined interactively.
366
367 -e: ignore sys.exit() calls or SystemExit exceptions in the script
368 being run. This is particularly useful if IPython is being used to
369 run unittests, which always exit with a sys.exit() call. In such
370 cases you are interested in the output of the test results, not in
371 seeing a traceback of the unittest module.
372
373 -t: print timing information at the end of the run. IPython will give
374 you an estimated CPU time consumption for your script, which under
375 Unix uses the resource module to avoid the wraparound problems of
376 time.clock(). Under Unix, an estimate of time spent on system tasks
377 is also given (for Windows platforms this is reported as 0.0).
378
379 If -t is given, an additional -N<N> option can be given, where <N>
380 must be an integer indicating how many times you want the script to
381 run. The final timing report will include total and per run results.
382
383 For example (testing the script uniq_stable.py)::
384
385 In [1]: run -t uniq_stable
386
387 IPython CPU timings (estimated):\\
388 User : 0.19597 s.\\
389 System: 0.0 s.\\
390
391 In [2]: run -t -N5 uniq_stable
392
393 IPython CPU timings (estimated):\\
394 Total runs performed: 5\\
395 Times : Total Per run\\
396 User : 0.910862 s, 0.1821724 s.\\
397 System: 0.0 s, 0.0 s.
398
399 -d: run your program under the control of pdb, the Python debugger.
400 This allows you to execute your program step by step, watch variables,
401 etc. Internally, what IPython does is similar to calling:
402
403 pdb.run('execfile("YOURFILENAME")')
404
405 with a breakpoint set on line 1 of your file. You can change the line
406 number for this automatic breakpoint to be <N> by using the -bN option
407 (where N must be an integer). For example::
408
409 %run -d -b40 myscript
410
411 will set the first breakpoint at line 40 in myscript.py. Note that
412 the first breakpoint must be set on a line which actually does
413 something (not a comment or docstring) for it to stop execution.
414
415 When the pdb debugger starts, you will see a (Pdb) prompt. You must
416 first enter 'c' (without quotes) to start execution up to the first
417 breakpoint.
418
419 Entering 'help' gives information about the use of the debugger. You
420 can easily see pdb's full documentation with "import pdb;pdb.help()"
421 at a prompt.
422
423 -p: run program under the control of the Python profiler module (which
424 prints a detailed report of execution times, function calls, etc).
425
426 You can pass other options after -p which affect the behavior of the
427 profiler itself. See the docs for %prun for details.
428
429 In this mode, the program's variables do NOT propagate back to the
430 IPython interactive namespace (because they remain in the namespace
431 where the profiler executes them).
432
433 Internally this triggers a call to %prun, see its documentation for
434 details on the options available specifically for profiling.
435
436 There is one special usage for which the text above doesn't apply:
437 if the filename ends with .ipy, the file is run as ipython script,
438 just as if the commands were written on IPython prompt.
439
440 -m: specify module name to load instead of script path. Similar to
441 the -m option for the python interpreter. Use this option last if you
442 want to combine with other %run options. Unlike the python interpreter
443 only source modules are allowed no .pyc or .pyo files.
444 For example::
445
446 %run -m example
447
448 will run the example module.
449
450 """
451
452 # get arguments and set sys.argv for program to be run.
453 opts, arg_lst = self.parse_options(parameter_s, 'nidtN:b:pD:l:rs:T:em:',
454 mode='list', list_all=1)
455 if "m" in opts:
456 modulename = opts["m"][0]
457 modpath = find_mod(modulename)
458 if modpath is None:
459 warn('%r is not a valid modulename on sys.path'%modulename)
460 return
461 arg_lst = [modpath] + arg_lst
462 try:
463 filename = file_finder(arg_lst[0])
464 except IndexError:
465 warn('you must provide at least a filename.')
466 print '\n%run:\n', oinspect.getdoc(self.run)
467 return
468 except IOError as e:
469 try:
470 msg = str(e)
471 except UnicodeError:
472 msg = e.message
473 error(msg)
474 return
475
476 if filename.lower().endswith('.ipy'):
477 self.shell.safe_execfile_ipy(filename)
478 return
479
480 # Control the response to exit() calls made by the script being run
481 exit_ignore = 'e' in opts
482
483 # Make sure that the running script gets a proper sys.argv as if it
484 # were run from a system shell.
485 save_argv = sys.argv # save it for later restoring
486
487 # simulate shell expansion on arguments, at least tilde expansion
488 args = [ os.path.expanduser(a) for a in arg_lst[1:] ]
489
490 sys.argv = [filename] + args # put in the proper filename
491 # protect sys.argv from potential unicode strings on Python 2:
492 if not py3compat.PY3:
493 sys.argv = [ py3compat.cast_bytes(a) for a in sys.argv ]
494
495 if 'i' in opts:
496 # Run in user's interactive namespace
497 prog_ns = self.shell.user_ns
498 __name__save = self.shell.user_ns['__name__']
499 prog_ns['__name__'] = '__main__'
500 main_mod = self.shell.new_main_mod(prog_ns)
501 else:
502 # Run in a fresh, empty namespace
503 if 'n' in opts:
504 name = os.path.splitext(os.path.basename(filename))[0]
505 else:
506 name = '__main__'
507
508 main_mod = self.shell.new_main_mod()
509 prog_ns = main_mod.__dict__
510 prog_ns['__name__'] = name
511
512 # Since '%run foo' emulates 'python foo.py' at the cmd line, we must
513 # set the __file__ global in the script's namespace
514 prog_ns['__file__'] = filename
515
516 # pickle fix. See interactiveshell for an explanation. But we need to
517 # make sure that, if we overwrite __main__, we replace it at the end
518 main_mod_name = prog_ns['__name__']
519
520 if main_mod_name == '__main__':
521 restore_main = sys.modules['__main__']
522 else:
523 restore_main = False
524
525 # This needs to be undone at the end to prevent holding references to
526 # every single object ever created.
527 sys.modules[main_mod_name] = main_mod
528
529 try:
530 stats = None
531 with self.shell.readline_no_record:
532 if 'p' in opts:
533 stats = self.prun('', None, False, opts, arg_lst, prog_ns)
534 else:
535 if 'd' in opts:
536 deb = debugger.Pdb(self.shell.colors)
537 # reset Breakpoint state, which is moronically kept
538 # in a class
539 bdb.Breakpoint.next = 1
540 bdb.Breakpoint.bplist = {}
541 bdb.Breakpoint.bpbynumber = [None]
542 # Set an initial breakpoint to stop execution
543 maxtries = 10
544 bp = int(opts.get('b', [1])[0])
545 checkline = deb.checkline(filename, bp)
546 if not checkline:
547 for bp in range(bp + 1, bp + maxtries + 1):
548 if deb.checkline(filename, bp):
549 break
550 else:
551 msg = ("\nI failed to find a valid line to set "
552 "a breakpoint\n"
553 "after trying up to line: %s.\n"
554 "Please set a valid breakpoint manually "
555 "with the -b option." % bp)
556 error(msg)
557 return
558 # if we find a good linenumber, set the breakpoint
559 deb.do_break('%s:%s' % (filename, bp))
560 # Start file run
561 print "NOTE: Enter 'c' at the",
562 print "%s prompt to start your script." % deb.prompt
563 ns = {'execfile': py3compat.execfile, 'prog_ns': prog_ns}
564 try:
565 deb.run('execfile("%s", prog_ns)' % filename, ns)
566
567 except:
568 etype, value, tb = sys.exc_info()
569 # Skip three frames in the traceback: the %run one,
570 # one inside bdb.py, and the command-line typed by the
571 # user (run by exec in pdb itself).
572 self.shell.InteractiveTB(etype, value, tb, tb_offset=3)
573 else:
574 if runner is None:
575 runner = self.default_runner
576 if runner is None:
577 runner = self.shell.safe_execfile
578 if 't' in opts:
579 # timed execution
580 try:
581 nruns = int(opts['N'][0])
582 if nruns < 1:
583 error('Number of runs must be >=1')
584 return
585 except (KeyError):
586 nruns = 1
587 twall0 = time.time()
588 if nruns == 1:
589 t0 = clock2()
590 runner(filename, prog_ns, prog_ns,
591 exit_ignore=exit_ignore)
592 t1 = clock2()
593 t_usr = t1[0] - t0[0]
594 t_sys = t1[1] - t0[1]
595 print "\nIPython CPU timings (estimated):"
596 print " User : %10.2f s." % t_usr
597 print " System : %10.2f s." % t_sys
598 else:
599 runs = range(nruns)
600 t0 = clock2()
601 for nr in runs:
602 runner(filename, prog_ns, prog_ns,
603 exit_ignore=exit_ignore)
604 t1 = clock2()
605 t_usr = t1[0] - t0[0]
606 t_sys = t1[1] - t0[1]
607 print "\nIPython CPU timings (estimated):"
608 print "Total runs performed:", nruns
609 print " Times : %10.2f %10.2f" % ('Total', 'Per run')
610 print " User : %10.2f s, %10.2f s." % (t_usr, t_usr / nruns)
611 print " System : %10.2f s, %10.2f s." % (t_sys, t_sys / nruns)
612 twall1 = time.time()
613 print "Wall time: %10.2f s." % (twall1 - twall0)
614
615 else:
616 # regular execution
617 runner(filename, prog_ns, prog_ns, exit_ignore=exit_ignore)
618
619 if 'i' in opts:
620 self.shell.user_ns['__name__'] = __name__save
621 else:
622 # The shell MUST hold a reference to prog_ns so after %run
623 # exits, the python deletion mechanism doesn't zero it out
624 # (leaving dangling references).
625 self.shell.cache_main_mod(prog_ns, filename)
626 # update IPython interactive namespace
627
628 # Some forms of read errors on the file may mean the
629 # __name__ key was never set; using pop we don't have to
630 # worry about a possible KeyError.
631 prog_ns.pop('__name__', None)
632
633 self.shell.user_ns.update(prog_ns)
634 finally:
635 # It's a bit of a mystery why, but __builtins__ can change from
636 # being a module to becoming a dict missing some key data after
637 # %run. As best I can see, this is NOT something IPython is doing
638 # at all, and similar problems have been reported before:
639 # http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-10/0188.html
640 # Since this seems to be done by the interpreter itself, the best
641 # we can do is to at least restore __builtins__ for the user on
642 # exit.
643 self.shell.user_ns['__builtins__'] = builtin_mod
644
645 # Ensure key global structures are restored
646 sys.argv = save_argv
647 if restore_main:
648 sys.modules['__main__'] = restore_main
649 else:
650 # Remove from sys.modules the reference to main_mod we'd
651 # added. Otherwise it will trap references to objects
652 # contained therein.
653 del sys.modules[main_mod_name]
654
655 return stats
656
657 @skip_doctest
658 @line_cell_magic
659 def timeit(self, line='', cell=None):
660 """Time execution of a Python statement or expression
661
662 Usage, in line mode:
663 %timeit [-n<N> -r<R> [-t|-c]] statement
664 or in cell mode:
665 %%timeit [-n<N> -r<R> [-t|-c]] setup_code
666 code
667 code...
668
669 Time execution of a Python statement or expression using the timeit
670 module. This function can be used both as a line and cell magic:
671
672 - In line mode you can time a single-line statement (though multiple
673 ones can be chained with using semicolons).
674
675 - In cell mode, the statement in the first line is used as setup code
676 (executed but not timed) and the body of the cell is timed. The cell
677 body has access to any variables created in the setup code.
678
679 Options:
680 -n<N>: execute the given statement <N> times in a loop. If this value
681 is not given, a fitting value is chosen.
682
683 -r<R>: repeat the loop iteration <R> times and take the best result.
684 Default: 3
685
686 -t: use time.time to measure the time, which is the default on Unix.
687 This function measures wall time.
688
689 -c: use time.clock to measure the time, which is the default on
690 Windows and measures wall time. On Unix, resource.getrusage is used
691 instead and returns the CPU user time.
692
693 -p<P>: use a precision of <P> digits to display the timing result.
694 Default: 3
695
696
697 Examples
698 --------
699 ::
700
701 In [1]: %timeit pass
702 10000000 loops, best of 3: 53.3 ns per loop
703
704 In [2]: u = None
705
706 In [3]: %timeit u is None
707 10000000 loops, best of 3: 184 ns per loop
708
709 In [4]: %timeit -r 4 u == None
710 1000000 loops, best of 4: 242 ns per loop
711
712 In [5]: import time
713
714 In [6]: %timeit -n1 time.sleep(2)
715 1 loops, best of 3: 2 s per loop
716
717
718 The times reported by %timeit will be slightly higher than those
719 reported by the timeit.py script when variables are accessed. This is
720 due to the fact that %timeit executes the statement in the namespace
721 of the shell, compared with timeit.py, which uses a single setup
722 statement to import function or create variables. Generally, the bias
723 does not matter as long as results from timeit.py are not mixed with
724 those from %timeit."""
725
726 import timeit
727 import math
728
729 # XXX: Unfortunately the unicode 'micro' symbol can cause problems in
730 # certain terminals. Until we figure out a robust way of
731 # auto-detecting if the terminal can deal with it, use plain 'us' for
732 # microseconds. I am really NOT happy about disabling the proper
733 # 'micro' prefix, but crashing is worse... If anyone knows what the
734 # right solution for this is, I'm all ears...
735 #
736 # Note: using
737 #
738 # s = u'\xb5'
739 # s.encode(sys.getdefaultencoding())
740 #
741 # is not sufficient, as I've seen terminals where that fails but
742 # print s
743 #
744 # succeeds
745 #
746 # See bug: https://bugs.launchpad.net/ipython/+bug/348466
747
748 #units = [u"s", u"ms",u'\xb5',"ns"]
749 units = [u"s", u"ms",u'us',"ns"]
750
751 scaling = [1, 1e3, 1e6, 1e9]
752
753 opts, stmt = self.parse_options(line,'n:r:tcp:',
754 posix=False, strict=False)
755 if stmt == "" and cell is None:
756 return
757 timefunc = timeit.default_timer
758 number = int(getattr(opts, "n", 0))
759 repeat = int(getattr(opts, "r", timeit.default_repeat))
760 precision = int(getattr(opts, "p", 3))
761 if hasattr(opts, "t"):
762 timefunc = time.time
763 if hasattr(opts, "c"):
764 timefunc = clock
765
766 timer = timeit.Timer(timer=timefunc)
767 # this code has tight coupling to the inner workings of timeit.Timer,
768 # but is there a better way to achieve that the code stmt has access
769 # to the shell namespace?
770 transform = self.shell.input_splitter.transform_cell
771 if cell is None:
772 # called as line magic
773 setup = 'pass'
774 stmt = timeit.reindent(transform(stmt), 8)
775 else:
776 setup = timeit.reindent(transform(stmt), 4)
777 stmt = timeit.reindent(transform(cell), 8)
778
779 # From Python 3.3, this template uses new-style string formatting.
780 if sys.version_info >= (3, 3):
781 src = timeit.template.format(stmt=stmt, setup=setup)
782 else:
783 src = timeit.template % dict(stmt=stmt, setup=setup)
784
785 # Track compilation time so it can be reported if too long
786 # Minimum time above which compilation time will be reported
787 tc_min = 0.1
788
789 t0 = clock()
790 code = compile(src, "<magic-timeit>", "exec")
791 tc = clock()-t0
792
793 ns = {}
794 exec code in self.shell.user_ns, ns
795 timer.inner = ns["inner"]
796
797 if number == 0:
798 # determine number so that 0.2 <= total time < 2.0
799 number = 1
800 for i in range(1, 10):
801 if timer.timeit(number) >= 0.2:
802 break
803 number *= 10
804
805 best = min(timer.repeat(repeat, number)) / number
806
807 if best > 0.0 and best < 1000.0:
808 order = min(-int(math.floor(math.log10(best)) // 3), 3)
809 elif best >= 1000.0:
810 order = 0
811 else:
812 order = 3
813 print u"%d loops, best of %d: %.*g %s per loop" % (number, repeat,
814 precision,
815 best * scaling[order],
816 units[order])
817 if tc > tc_min:
818 print "Compiler time: %.2f s" % tc
819
820 @skip_doctest
821 @needs_local_scope
822 @line_magic
823 def time(self,parameter_s, user_locals):
824 """Time execution of a Python statement or expression.
825
826 The CPU and wall clock times are printed, and the value of the
827 expression (if any) is returned. Note that under Win32, system time
828 is always reported as 0, since it can not be measured.
829
830 This function provides very basic timing functionality. In Python
831 2.3, the timeit module offers more control and sophistication, so this
832 could be rewritten to use it (patches welcome).
833
834 Examples
835 --------
836 ::
837
838 In [1]: time 2**128
839 CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
840 Wall time: 0.00
841 Out[1]: 340282366920938463463374607431768211456L
842
843 In [2]: n = 1000000
844
845 In [3]: time sum(range(n))
846 CPU times: user 1.20 s, sys: 0.05 s, total: 1.25 s
847 Wall time: 1.37
848 Out[3]: 499999500000L
849
850 In [4]: time print 'hello world'
851 hello world
852 CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
853 Wall time: 0.00
854
855 Note that the time needed by Python to compile the given expression
856 will be reported if it is more than 0.1s. In this example, the
857 actual exponentiation is done by Python at compilation time, so while
858 the expression can take a noticeable amount of time to compute, that
859 time is purely due to the compilation:
860
861 In [5]: time 3**9999;
862 CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
863 Wall time: 0.00 s
864
865 In [6]: time 3**999999;
866 CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
867 Wall time: 0.00 s
868 Compiler : 0.78 s
869 """
870
871 # fail immediately if the given expression can't be compiled
872
873 expr = self.shell.prefilter(parameter_s,False)
874
875 # Minimum time above which compilation time will be reported
876 tc_min = 0.1
877
878 try:
879 mode = 'eval'
880 t0 = clock()
881 code = compile(expr,'<timed eval>',mode)
882 tc = clock()-t0
883 except SyntaxError:
884 mode = 'exec'
885 t0 = clock()
886 code = compile(expr,'<timed exec>',mode)
887 tc = clock()-t0
888 # skew measurement as little as possible
889 glob = self.shell.user_ns
890 wtime = time.time
891 # time execution
892 wall_st = wtime()
893 if mode=='eval':
894 st = clock2()
895 out = eval(code, glob, user_locals)
896 end = clock2()
897 else:
898 st = clock2()
899 exec code in glob, user_locals
900 end = clock2()
901 out = None
902 wall_end = wtime()
903 # Compute actual times and report
904 wall_time = wall_end-wall_st
905 cpu_user = end[0]-st[0]
906 cpu_sys = end[1]-st[1]
907 cpu_tot = cpu_user+cpu_sys
908 print "CPU times: user %.2f s, sys: %.2f s, total: %.2f s" % \
909 (cpu_user,cpu_sys,cpu_tot)
910 print "Wall time: %.2f s" % wall_time
911 if tc > tc_min:
912 print "Compiler : %.2f s" % tc
913 return out
914
915 @skip_doctest
916 @line_magic
917 def macro(self, parameter_s=''):
918 """Define a macro for future re-execution. It accepts ranges of history,
919 filenames or string objects.
920
921 Usage:\\
922 %macro [options] name n1-n2 n3-n4 ... n5 .. n6 ...
923
924 Options:
925
926 -r: use 'raw' input. By default, the 'processed' history is used,
927 so that magics are loaded in their transformed version to valid
928 Python. If this option is given, the raw input as typed as the
929 command line is used instead.
930
931 This will define a global variable called `name` which is a string
932 made of joining the slices and lines you specify (n1,n2,... numbers
933 above) from your input history into a single string. This variable
934 acts like an automatic function which re-executes those lines as if
935 you had typed them. You just type 'name' at the prompt and the code
936 executes.
937
938 The syntax for indicating input ranges is described in %history.
939
940 Note: as a 'hidden' feature, you can also use traditional python slice
941 notation, where N:M means numbers N through M-1.
942
943 For example, if your history contains (%hist prints it)::
944
945 44: x=1
946 45: y=3
947 46: z=x+y
948 47: print x
949 48: a=5
950 49: print 'x',x,'y',y
951
952 you can create a macro with lines 44 through 47 (included) and line 49
953 called my_macro with::
954
955 In [55]: %macro my_macro 44-47 49
956
957 Now, typing `my_macro` (without quotes) will re-execute all this code
958 in one pass.
959
960 You don't need to give the line-numbers in order, and any given line
961 number can appear multiple times. You can assemble macros with any
962 lines from your input history in any order.
963
964 The macro is a simple object which holds its value in an attribute,
965 but IPython's display system checks for macros and executes them as
966 code instead of printing them when you type their name.
967
968 You can view a macro's contents by explicitly printing it with::
969
970 print macro_name
971
972 """
973 opts,args = self.parse_options(parameter_s,'r',mode='list')
974 if not args: # List existing macros
975 return sorted(k for k,v in self.shell.user_ns.iteritems() if\
976 isinstance(v, Macro))
977 if len(args) == 1:
978 raise UsageError(
979 "%macro insufficient args; usage '%macro name n1-n2 n3-4...")
980 name, codefrom = args[0], " ".join(args[1:])
981
982 #print 'rng',ranges # dbg
983 try:
984 lines = self.shell.find_user_code(codefrom, 'r' in opts)
985 except (ValueError, TypeError) as e:
986 print e.args[0]
987 return
988 macro = Macro(lines)
989 self.shell.define_macro(name, macro)
990 print 'Macro `%s` created. To execute, type its name (without quotes).' % name
991 print '=== Macro contents: ==='
992 print macro,
993
994 @magic_arguments.magic_arguments()
995 @magic_arguments.argument('output', type=str, default='', nargs='?',
996 help="""The name of the variable in which to store output.
997 This is a utils.io.CapturedIO object with stdout/err attributes
998 for the text of the captured output.
999
1000 CapturedOutput also has a show() method for displaying the output,
1001 and __call__ as well, so you can use that to quickly display the
1002 output.
1003
1004 If unspecified, captured output is discarded.
1005 """
1006 )
1007 @magic_arguments.argument('--no-stderr', action="store_true",
1008 help="""Don't capture stderr."""
1009 )
1010 @magic_arguments.argument('--no-stdout', action="store_true",
1011 help="""Don't capture stdout."""
1012 )
1013 @cell_magic
1014 def capture(self, line, cell):
1015 """run the cell, capturing stdout/err"""
1016 args = magic_arguments.parse_argstring(self.capture, line)
1017 out = not args.no_stdout
1018 err = not args.no_stderr
1019 with capture_output(out, err) as io:
1020 self.shell.run_cell(cell)
1021 if args.output:
1022 self.shell.user_ns[args.output] = io
1023
[end of IPython/core/magics/execution.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
2255c2b87953700d0ca44fdb991dbd85957449a3
|
__file__ is not defined when file end with .ipy
I named my IPython file.ipy
I propose .ipy ? objections ?
I feel ipython should not be confused with plain python!
```
echo "#!/usr/bin/env ipython" > test.ipy
echo "print __file__" >> test.ipy
cat test.ipy
#!/usr/bin/env ipython
print __file__
```
```
chmod +x test.ipy
./test.ipy
> NameError: name '__file__' is not defined
ipython test.ipy
> NameError: name '__file__' is not defined
## run with plain python
python test.ipy
> ./test.ipy
SUCCESS:
## rename file to be .py seems to solve the issue
mv ./test.ipy ./test.py
ipython test.ipy
> ./test.py
SUCCESS:
./test.py
> ./test.py
SUCCESS:
```
Should we not trust files with different endings... I have a feeling files not ending with .py are blacklisted to prevent issues with cgi or something... not too sure. Perhaps it has something to do with the shebang notation?
debian kubuntu 12.10
```
ipython -v
0.12.1
```
Thanks
|
Yes, see `IPython.core.shellapp.InteractiveShellApp._exec_file`:
```
if full_filename.endswith('.ipy'):
self.log.info("Running file in user namespace: %s" %
full_filename)
self.shell.safe_execfile_ipy(full_filename)
else:
# default to python, even without extension
self.log.info("Running file in user namespace: %s" %
full_filename)
# Ensure that __file__ is always defined to match Python behavior
self.shell.user_ns['__file__'] = fname
try:
self.shell.safe_execfile(full_filename, self.shell.user_ns)
finally:
del self.shell.user_ns['__file__']
```
Running a file ending with `*.ipy` is currently _exactly_ the same as opening up an ipython terminal and pasting the entire file into one cell. (And in that way, `__file__` is not set).
Should this be changed?
I think it's a reasonable idea, but not very important - .ipy scripts are a convenience, and we don't want people relying on them for anything complex.
Without this patch, how should I get the `__file__` string without renaming the file to .py?
I don't see the harm in having `__file__` set.
People often need to know the dynamic location of the file and its name in shell scripts.
This is part of some old .sh scripts that I would like to move to IPython (a rough Python equivalent is sketched after the snippet).
```
#!/bin/sh
frl="$(readlink -f $0)"
fdn="$(dirname $frl)"
fn="$(basename $frl)"
fnb="${fn%.*}"
fnbb="${fn%%.*}"
fnxx="${fn##*.}"
# ... many shell calls
```
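For comparison, a rough Python equivalent of the shell helpers above — a minimal sketch that assumes `__file__` is defined (true for a `.py` script, which is exactly what the `.ipy` case is missing); the variable names just mirror the shell ones and are illustrative only:
```
import os

# Python counterparts of the shell variables above, assuming __file__ is set.
frl = os.path.realpath(__file__)     # readlink -f $0
fdn = os.path.dirname(frl)           # dirname
fn = os.path.basename(frl)           # basename
fnb, fnxx = os.path.splitext(fn)     # name without last extension / last extension (with dot)
fnbb = fn.split(".", 1)[0]           # name with every extension stripped
```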
Anyway, I'll just rename to .py.
Well, you can rename to _anything_ except .ipy.
If you can just call it foo.py, absolutely do so. We don't want to encourage people to write code for IPython when standard Python code would do the job. .ipy scripts exist so you can use IPython syntax, like %magic functions and !shell commands.
But I agree with you that there should be a way to find the running filename from a .ipy script. We're quite busy at the moment, but if you want to dig into the code and make a pull request, that's great. Otherwise, we'll leave this issue open, and someone will get round to it at some point.
|
2012-06-01T15:52:12Z
|
<patch>
diff --git a/IPython/core/interactiveshell.py b/IPython/core/interactiveshell.py
--- a/IPython/core/interactiveshell.py
+++ b/IPython/core/interactiveshell.py
@@ -2419,6 +2419,9 @@ def safe_execfile(self, fname, *where, **kw):
dname = os.path.dirname(fname)
with prepended_to_syspath(dname):
+ # Ensure that __file__ is always defined to match Python behavior
+ save_fname = self.user_ns.get('__file__',None)
+ self.user_ns['__file__'] = fname
try:
py3compat.execfile(fname,*where)
except SystemExit, status:
@@ -2439,6 +2442,8 @@ def safe_execfile(self, fname, *where, **kw):
if kw['raise_exceptions']:
raise
self.showtraceback()
+ finally:
+ self.user_ns['__file__'] = save_fname
def safe_execfile_ipy(self, fname):
"""Like safe_execfile, but for .ipy files with IPython syntax.
@@ -2465,6 +2470,9 @@ def safe_execfile_ipy(self, fname):
dname = os.path.dirname(fname)
with prepended_to_syspath(dname):
+ # Ensure that __file__ is always defined to match Python behavior
+ save_fname = self.user_ns.get('__file__',None)
+ self.user_ns['__file__'] = fname
try:
with open(fname) as thefile:
# self.run_cell currently captures all exceptions
@@ -2475,6 +2483,8 @@ def safe_execfile_ipy(self, fname):
except:
self.showtraceback()
warn('Unknown failure executing file: <%s>' % fname)
+ finally:
+ self.user_ns['__file__'] = save_fname
def safe_run_module(self, mod_name, where):
"""A safe version of runpy.run_module().
diff --git a/IPython/core/shellapp.py b/IPython/core/shellapp.py
--- a/IPython/core/shellapp.py
+++ b/IPython/core/shellapp.py
@@ -277,20 +277,12 @@ def _exec_file(self, fname):
sys.argv = [ py3compat.cast_bytes(a) for a in sys.argv ]
try:
if os.path.isfile(full_filename):
+ self.log.info("Running file in user namespace: %s" % full_filename)
if full_filename.endswith('.ipy'):
- self.log.info("Running file in user namespace: %s" %
- full_filename)
self.shell.safe_execfile_ipy(full_filename)
else:
# default to python, even without extension
- self.log.info("Running file in user namespace: %s" %
- full_filename)
- # Ensure that __file__ is always defined to match Python behavior
- self.shell.user_ns['__file__'] = fname
- try:
- self.shell.safe_execfile(full_filename, self.shell.user_ns)
- finally:
- del self.shell.user_ns['__file__']
+ self.shell.safe_execfile(full_filename, self.shell.user_ns)
finally:
sys.argv = save_argv
</patch>
|
[]
|
[]
| |||
apache__airflow-31998
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Celery Executor cannot connect to the database to get information, resulting in a scheduler exit abnormally
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use Celery Executor where using RabbitMQ as a broker and postgresql as a result backend
Airflow Version: 2.2.3
Celery Version: 5.2.3
apache-airflow-providers-celery==2.1.0
Below is the error message:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 672, in _execute
    self._run_scheduler_loop()
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 754, in _run_scheduler_loop
    self.executor.heartbeat()
  File "/app/airflow2.2.3/airflow/airflow/executors/base_executor.py", line 168, in heartbeat
    self.sync()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 330, in sync
    self.update_all_task_states()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 442, in update_all_task_states
    state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 598, in get_many
    result = self._get_many_from_db_backend(async_results)
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 618, in _get_many_from_db_backend
    tasks = session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3373, in all
    return list(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
    return self._execute_and_instances(context)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3556, in _execute_and_instances
    conn = self._get_bind_args(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3571, in _get_bind_args
    return fn(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3550, in _connection_from_session
    conn = self.session.connection(**kw)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1142, in connection
    return self._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1150, in _connection_for_bind
    return self.transaction._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 433, in _connection_for_bind
    conn = bind._contextual_connect()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2302, in _contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2339, in _wrap_pool_connect
    Connection._handle_dbapi_exception_noconnection(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1583, in _handle_dbapi_exception_noconnection
    util.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()

2023-06-05 16:39:05.069 ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect
    return _ConnectionFairy._checkout(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 778, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 495, in checkout
    rec = pool._do_get()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 241, in _do_get
    return self._create_connection()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 309, in _create_connection
    return _ConnectionRecord(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 440, in __init__
    self.__connect(first_connect_check=True)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 656, in __connect
    connection = pool._invoke_creator(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 508, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection timed out
        Is the server running on host "xxxxxxxxxx" and accepting TCP/IP connections on port 5432?
```
### What you think should happen instead
I think it may be caused by network jitter; adding retries would solve it.
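For illustration only, a minimal sketch of the kind of retry being suggested, written with the `tenacity` library; the helper name, the query it wraps, and the retry parameters are assumptions made for this sketch, not the actual Airflow change:
```
from sqlalchemy.exc import OperationalError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


# Hypothetical helper: retry transient DB errors a few times before giving up,
# so a short network blip does not crash the scheduler's executor heartbeat.
@retry(
    retry=retry_if_exception_type(OperationalError),
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, max=10),
    reraise=True,
)
def fetch_celery_task_states(session, task_cls, task_ids):
    return session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
```
With `reraise=True`, the original `OperationalError` still propagates if every attempt fails, so a genuine database outage is not silently swallowed.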
### How to reproduce
The CeleryExecutor fails to create a PostgreSQL connection while retrieving task state metadata from the result backend; the error can be reproduced whenever that connection attempt times out.
### Operating System
NAME="RedFlag Asianux" VERSION="7 (Lotus)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://app.codecov.io/gh/apache/airflow/branch/main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34 [](https://github.com/apache/airflow/graphs/contributors)
35 [](https://ossrank.com/p/6)
36
37 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
38
39 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
40
41 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
42
43 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
44 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
45 **Table of contents**
46
47 - [Project Focus](#project-focus)
48 - [Principles](#principles)
49 - [Requirements](#requirements)
50 - [Getting started](#getting-started)
51 - [Installing from PyPI](#installing-from-pypi)
52 - [Official source code](#official-source-code)
53 - [Convenience packages](#convenience-packages)
54 - [User Interface](#user-interface)
55 - [Semantic versioning](#semantic-versioning)
56 - [Version Life Cycle](#version-life-cycle)
57 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
58 - [Base OS support for reference Airflow images](#base-os-support-for-reference-airflow-images)
59 - [Approach to dependencies of Airflow](#approach-to-dependencies-of-airflow)
60 - [Contributing](#contributing)
61 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
62 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
63 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
64 - [Airflow merchandise](#airflow-merchandise)
65 - [Links](#links)
66 - [Sponsors](#sponsors)
67
68 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
69
70 ## Project Focus
71
72 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
73
74 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [XCom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
75
76 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
77
78 ## Principles
79
80 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
81 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
82 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
83 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
84
85 ## Requirements
86
87 Apache Airflow is tested with:
88
89 | | Main version (dev) | Stable version (2.6.2) |
90 |-------------|------------------------------|---------------------------|
91 | Python | 3.8, 3.9, 3.10, 3.11 | 3.7, 3.8, 3.9, 3.10, 3.11 |
92 | Platform | AMD64/ARM64(\*) | AMD64/ARM64(\*) |
93 | Kubernetes | 1.23, 1.24, 1.25, 1.26, 1.27 | 1.23, 1.24, 1.25, 1.26 |
94 | PostgreSQL | 11, 12, 13, 14, 15 | 11, 12, 13, 14, 15 |
95 | MySQL | 5.7, 8 | 5.7, 8 |
96 | SQLite | 3.15.0+ | 3.15.0+ |
97 | MSSQL | 2017(\*), 2019(\*) | 2017(\*), 2019(\*) |
98
99 \* Experimental
100
101 **Note**: MySQL 5.x versions cannot run multiple schedulers, or have limitations when doing so --
102 please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
103 MariaDB is not tested/recommended.
104
105 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
106 using the latest stable version of SQLite for local development.
107
108 **Note**: Airflow currently can be run on POSIX-compliant Operating Systems. For development it is regularly
109 tested on fairly modern Linux Distros and recent versions of MacOS.
110 On Windows you can run it via WSL2 (Windows Subsystem for Linux 2) or via Linux Containers.
111 The work to add Windows support is tracked via [#10388](https://github.com/apache/airflow/issues/10388) but
112 it is not a high priority. You should only use Linux-based distros as "Production" execution environment
113 as this is the only environment that is supported. The only distro that is used in our CI tests and that
114 is used in the [Community managed DockerHub image](https://hub.docker.com/p/apache/airflow) is
115 `Debian Bullseye`.
116
117 ## Getting started
118
119 Visit the official Airflow website documentation (latest **stable** release) for help with
120 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
121 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start.html), or walking
122 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
123
124 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
125
126 For more information on Airflow Improvement Proposals (AIPs), visit
127 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals).
128
129 Documentation for dependent projects like provider packages, Docker image, and Helm Chart can be found in [the documentation index](https://airflow.apache.org/docs/).
130
131 ## Installing from PyPI
132
133 We publish Apache Airflow as the `apache-airflow` package on PyPI. Installing it can, however, sometimes be tricky
134 because Airflow is a bit of both a library and an application. Libraries usually keep their dependencies open, and
135 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
136 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
137 if needed. This means that `pip install apache-airflow` will not work from time to time or will
138 produce an unusable Airflow installation.
139
140 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint
141 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
142 constraints files separately per major/minor Python version.
143 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
144 correct Airflow tag/version/branch and Python versions in the URL.
145
146
147 1. Installing just Airflow:
148
149 > Note: Only `pip` installation is currently officially supported.
150
151 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
152 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
153 `pip` - especially when it comes to constraint vs. requirements management.
154 Installing via `Poetry` or `pip-tools` is not currently supported.
155
156 There are known issues with ``bazel`` that might lead to circular dependencies when using it to install
157 Airflow. Please switch to ``pip`` if you encounter such problems. The ``Bazel`` community is working on fixing
158 the problem in `this PR <https://github.com/bazelbuild/rules_python/pull/1166>`_ so it might be that
159 newer versions of ``bazel`` will handle it.
160
161 If you wish to install Airflow using those tools, you should use the constraint files and convert
162 them to the appropriate format and workflow that your tool requires.
163
164
165 ```bash
166 pip install 'apache-airflow==2.6.2' \
167 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.6.2/constraints-3.8.txt"
168 ```
169
170 2. Installing with extras (i.e., postgres, google)
171
172 ```bash
173 pip install 'apache-airflow[postgres,google]==2.6.2' \
174 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.6.2/constraints-3.8.txt"
175 ```
176
177 For information on installing provider packages, check
178 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
179
180 ## Official source code
181
182 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
183 and our official source code releases:
184
185 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
186 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
187 - Are cryptographically signed by the release manager
188 - Are officially voted on by the PMC members during the
189 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
190
191 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
192 release provided they have access to the appropriate platform and tools.
193
194 ## Convenience packages
195
196 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
197 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users
198 who do not want to build the software themselves.
199
200 Those are - in the order of most common ways people install Airflow:
201
202 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
203 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
204 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
205 read more about using, customising, and extending the images in the
206 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
207 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
208 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
209 were used to generate official source packages via git
210
211 All those artifacts are not official releases, but they are prepared using officially released sources.
212 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
213 following the ASF Policy.
214
215 ## User Interface
216
217 - **DAGs**: Overview of all DAGs in your environment.
218
219 
220
221 - **Grid**: Grid representation of a DAG that spans across time.
222
223 
224
225 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
226
227 
228
229 - **Task Duration**: Total time spent on different tasks over time.
230
231 
232
233 - **Gantt**: Duration and overlap of a DAG.
234
235 
236
237 - **Code**: Quick way to view source code of a DAG.
238
239 
240
241 ## Semantic versioning
242
243 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
244
245 There are a few specific rules that we agreed to that define the details of versioning of the different
246 packages:
247
248 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
249 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
250 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
251 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
252 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
253 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
254 they are present in providers as `install_requires` limitations. We aim to keep backwards
255 compatibility of providers with all previously released Airflow 2 versions but
256 there will sometimes be breaking changes that might make some, or all,
257 providers have a minimum Airflow version specified. A change of that minimum supported Airflow version
258 is a breaking change for a provider, because installing the new provider might automatically
259 upgrade Airflow (which might be an undesired side effect of upgrading the provider).
260 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
261 versions for the chart are independent from the Airflow version. We aim to keep backwards
262 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
263 only work starting from specific Airflow releases. We might however limit the Helm
264 Chart to depend on minimal Airflow version.
265 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
266 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
267 all clients. An airflow PATCH X.Y.Z release can be followed by a PATCH release of API clients, only
268 if this PATCH is relevant to the clients.
269 The clients then can release their own PATCH releases with bugfixes, independently of Airflow PATCH releases.
270 As a consequence, each API client will have its own PATCH version that may or may not be in sync with the Airflow
271 PATCH version. For a specific MAJOR/MINOR Airflow version, users should favor the latest PATCH version of clients
272 independently of their Airflow PATCH version.
273
274 ## Version Life Cycle
275
276 Apache Airflow version life cycle:
277
278 <!-- This table is automatically updated by pre-commit scripts/ci/pre_commit/pre_commit_supported_versions.py -->
279 <!-- Beginning of auto-generated table -->
280
281 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
282 |-----------|-----------------------|-----------|-----------------|-------------------|------------------|
283 | 2 | 2.6.2 | Supported | Dec 17, 2020 | TBD | TBD |
284 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
285 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
286 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
287 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
288
289 <!-- End of auto-generated table -->
290
291 Limited support versions will be supported with security and critical bug fix only.
292 EOL versions will not get any fixes nor support.
293 We always recommend that all users run the latest available minor release for whatever major version is in use.
294 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
295
296 ## Support for Python and Kubernetes versions
297
298 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
299 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
300 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
301 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
302
303 1. We drop support for Python and Kubernetes versions when they reach EOL. Except for Kubernetes, a
304 version stays supported by Airflow if two major cloud providers still provide support for it. We drop
305 support for those EOL versions in main right after EOL date, and it is effectively removed when we release
306 the first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow. For example, for Python 3.8 it
307 means that we will drop support in main right after 27.06.2023, and the first MAJOR or MINOR version of
308 Airflow released after will not have it.
309
310 2. We support a new version of Python/Kubernetes in main after they are officially released. As soon as we
311 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with
312 new versions of Python, mostly), we release new images/support in Airflow based on the working CI setup.
313
314 3. This policy is best-effort which means there may be situations where we might terminate support earlier
315 if circumstances require it.
316
317 ## Base OS support for reference Airflow images
318
319 The Airflow Community provides conveniently packaged container images that are published whenever
320 we publish an Apache Airflow release. Those images contain:
321
322 * Base OS with necessary packages to install Airflow (stable Debian OS)
323 * Base Python installation in versions supported at the time of release for the MINOR version of
324 Airflow released (so there could be different versions for 2.3 and 2.2 line for example)
325 * Libraries required to connect to supported Databases (again the set of databases supported depends
326 on the MINOR version of Airflow).
327 * Predefined set of popular providers (for details see the [Dockerfile](https://raw.githubusercontent.com/apache/airflow/main/Dockerfile)).
328 * Possibility of building your own, custom image where the user can choose their own set of providers
329 and libraries (see [Building the image](https://airflow.apache.org/docs/docker-stack/build.html))
330 * In the future Airflow might also support a "slim" version without providers nor database clients installed
331
332 The version of the base OS image is the stable version of Debian. Airflow supports using all currently active
333 stable versions - as soon as all Airflow dependencies support building, and we set up the CI pipeline for
334 building and testing the OS version. Approximately 6 months before the end-of-life of a previous stable
335 version of the OS, Airflow switches the images released to use the latest supported version of the OS.
336 For example since ``Debian Buster`` end-of-life was August 2022, Airflow switched the images in `main` branch
337 to use ``Debian Bullseye`` in February/March 2022. The version was used in the next MINOR release after
338 the switch happened. In case of the Bullseye switch - 2.3.0 version used ``Debian Bullseye``.
339 The images released in the previous MINOR version continue to use the version that all other releases
340 for the MINOR version used.
341
342 Support for ``Debian Buster`` image was dropped in August 2022 completely and everyone is expected to
343 stop building their images using ``Debian Buster``.
344
345 Users will continue to be able to build their images using stable Debian releases until the end of life. Building and
346 verifying of the images happens in our CI, but no unit tests are executed using this image in
347 the `main` branch.
348
349 ## Approach to dependencies of Airflow
350
351 Airflow has a lot of dependencies - direct and transitive - and Airflow is both a library and an application,
352 so our dependency policy has to cover both the stability of the application's installation
353 and the ability to install newer versions of dependencies for those users who develop DAGs. We developed
354 the approach where `constraints` are used to make sure Airflow can be installed in a repeatable way, while
355 we do not prevent our users from upgrading most of the dependencies. As a result we decided not to upper-bound
356 versions of Airflow dependencies by default, unless we have good reasons to believe upper-bounding them is
357 needed because of the importance of the dependency and the risk involved in upgrading it.
358 We also upper-bound the dependencies that we know cause problems.
359
360 Our constraint mechanism takes care of finding and upgrading all the non-upper-bound dependencies
361 automatically (provided that all the tests pass). Our `main` build failures indicate when there
362 are versions of dependencies that break our tests - meaning that we should either upper-bound them or
363 fix our code/tests to account for the upstream changes from those dependencies.
364
365 Whenever we upper-bound such a dependency, we should always comment why we are doing it - i.e. we should have
366 a good reason why the dependency is upper-bound. We should also mention the condition for removing the
367 binding.
368
369 ### Approach for dependencies for Airflow Core
370
371 Those `extras` and `providers` dependencies are maintained in `setup.cfg`.
372
373 There are a few dependencies that we decided are important enough to upper-bound by default, as they are
374 known to follow a predictable versioning scheme, and we know that new versions of those are very likely to
375 bring breaking changes. We commit to regularly reviewing and attempting to upgrade to the newer versions of
376 the dependencies as they are released, but this is a manual process.
377
378 The important dependencies are:
379
380 * `SQLAlchemy`: upper-bound to a specific MINOR version (SQLAlchemy is known to remove deprecations and
381 introduce breaking changes, especially since support for different databases varies and changes at
382 various speeds; for example, SQLAlchemy 1.4 broke the MSSQL integration for Airflow)
383 * `Alembic`: it is important to handle our migrations in a predictable and performant way. It is developed
384 together with SQLAlchemy. Our experience with Alembic is that it is very stable in MINOR versions
385 * `Flask`: We are using Flask as the backbone of our web UI and API. We know major versions of Flask
386 are very likely to introduce breaking changes across those, so limiting it to the MAJOR version makes sense
387 * `werkzeug`: the library is known to cause problems in new versions. It is tightly coupled with Flask
388 libraries, and we should update them together
389 * `celery`: Celery is a crucial component of Airflow as it is used for the CeleryExecutor (and similar). Celery
390 [follows SemVer](https://docs.celeryq.dev/en/stable/contributing.html?highlight=semver#versions), so
391 we should upper-bound it to the next MAJOR version. Also when we bump the upper version of the library,
392 we should make sure the Celery Provider minimum Airflow version is updated.
393 * `kubernetes`: Kubernetes is a crucial component of Airflow as it is used for the KubernetesExecutor
394 (and similar). Kubernetes Python library [follows SemVer](https://github.com/kubernetes-client/python#compatibility),
395 so we should upper-bound it to the next MAJOR version. Also when we bump the upper version of the library,
396 we should make sure Kubernetes Provider minimum Airflow version is updated.
397
398 ### Approach for dependencies in Airflow Providers and extras
399
400 The main part of the Airflow is the Airflow Core, but the power of Airflow also comes from a number of
401 providers that extend the core functionality and are released separately, even if we keep them (for now)
402 in the same monorepo for convenience. You can read more about the providers in the
403 [Providers documentation](https://airflow.apache.org/docs/apache-airflow-providers/index.html). We also
404 have set of policies implemented for maintaining and releasing community-managed providers as well
405 as the approach for community vs. 3rd party providers in the [providers](PROVIDERS.rst) document.
406
407 Those `extras` and `providers` dependencies are maintained in `provider.yaml` of each provider.
408
409 By default, we should not upper-bound dependencies for providers, however each provider's maintainer
410 might decide to add additional limits (and justify them with comment).
411
412 ## Contributing
413
414 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
415
416 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
417
418 ## Who uses Apache Airflow?
419
420 More than 400 organizations are using Apache Airflow
421 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
422
423 ## Who Maintains Apache Airflow?
424
425 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
426 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
427 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
428 If you would like to become a maintainer, please review the Apache Airflow
429 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
430
431 ## Can I use the Apache Airflow logo in my presentation?
432
433 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
434
435 ## Airflow merchandise
436
437 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out
438 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
439
440 ## Links
441
442 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
443 - [Chat](https://s.apache.org/airflow-slack)
444
445 ## Sponsors
446
447 The CI infrastructure for Apache Airflow has been sponsored by:
448
449 <!-- Ordered by most recently "funded" -->
450
451 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
452 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
453
[end of README.md]
[start of dev/breeze/src/airflow_breeze/utils/cdxgen.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 from __future__ import annotations
19
20 import atexit
21 import multiprocessing
22 import os
23 import signal
24 import sys
25 import time
26 from dataclasses import dataclass
27 from multiprocessing.pool import Pool
28 from pathlib import Path
29
30 import yaml
31
32 from airflow_breeze.global_constants import DEFAULT_PYTHON_MAJOR_MINOR_VERSION
33 from airflow_breeze.utils.console import Output, get_console
34 from airflow_breeze.utils.github import download_constraints_file, download_file_from_github
35 from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT, FILES_DIR
36 from airflow_breeze.utils.run_utils import run_command
37 from airflow_breeze.utils.shared_options import get_dry_run
38
39
40 def start_cdxgen_server(application_root_path: Path, run_in_parallel: bool, parallelism: int) -> None:
41 """
42 Start the cdxgen server that is used to perform cdxgen scans of applications in a child process.
43 :param run_in_parallel: run parallel servers
44 :param parallelism: parallelism to use
45 :param application_root_path: path where the application to scan is located
46 """
47 run_command(
48 [
49 "docker",
50 "pull",
51 "ghcr.io/cyclonedx/cdxgen",
52 ],
53 check=True,
54 )
55 if not run_in_parallel:
56 fork_cdxgen_server(application_root_path)
57 else:
58 for i in range(parallelism):
59 fork_cdxgen_server(application_root_path, port=9091 + i)
60 time.sleep(1)
61 get_console().print("[info]Waiting for cdxgen server to start")
62 time.sleep(3)
63
64
65 def fork_cdxgen_server(application_root_path, port=9090):
66 pid = os.fork()
67 if pid:
68 # Parent process - send signal to process group of the child process
69 atexit.register(os.killpg, pid, signal.SIGTERM)
70 # Give the server child process some time to start
71 else:
72 # Check if we are not a group leader already (We should not be)
73 if os.getpid() != os.getsid(0):
74 # and create a new process group where we are the leader
75 os.setpgid(0, 0)
76 run_command(
77 [
78 "docker",
79 "run",
80 "--init",
81 "--rm",
82 "-p",
83 f"{port}:{port}",
84 "-v",
85 "/tmp:/tmp",
86 "-v",
87 f"{application_root_path}:/app",
88 "-t",
89 "ghcr.io/cyclonedx/cdxgen",
90 "--server",
91 "--server-host",
92 "0.0.0.0",
93 "--server-port",
94 str(port),
95 ],
96 check=True,
97 )
98 # we should get here when the server gets terminated
99 sys.exit(0)
100
101
102 def get_port_mapping(x):
103 # if we do not sleep here, then we could skip mapping for some process, because one pool process could handle more than one item
104 time.sleep(1)
105 return multiprocessing.current_process().name, 9091 + x
106
107
108 def get_cdxgen_port_mapping(parallelism: int, pool: Pool) -> dict[str, int]:
109 """
110 Map processes from pool to port numbers so that there is always the same port
111 used by the same process in the pool - effectively having one multiprocessing
112 process talking to the same cdxgen server
113
114 :param parallelism: parallelism to use
115 :param pool: pool to map ports for
116 :return: mapping of process name to port
117 """
118 port_map: dict[str, int] = dict(pool.map(get_port_mapping, range(parallelism)))
119 return port_map
120
121
122 def get_provider_requirement_image_name(airflow_version: str, python_version: str) -> str:
123 return f"apache/airflow-dev/base_requirements/{airflow_version}/python{python_version}"
124
125
126 def build_providers_base_image(airflow_version: str, python_version: str):
127 image_name = get_provider_requirement_image_name(
128 airflow_version=airflow_version, python_version=python_version
129 )
130 dockerfile = f"""
131 FROM ghcr.io/apache/airflow/main/ci/python{python_version}
132 RUN pip install --upgrade pip
133 # Remove all packages
134 RUN python -m venv /opt/airflow/providers
135 RUN /opt/airflow/providers/bin/pip install --upgrade pip
136 RUN /opt/airflow/providers/bin/pip install apache-airflow=={airflow_version} \
137 --constraint https://raw.githubusercontent.com/apache/airflow/\
138 constraints-{airflow_version}/constraints-{python_version}.txt
139 """
140 run_command(["docker", "build", "--tag", image_name, "-"], input=dockerfile, text=True, check=True)
141
142
143 TARGET_DIR_NAME = "provider_requirements"
144 DOCKER_FILE_PREFIX = f"/files/{TARGET_DIR_NAME}/"
145
146
147 def get_requirements_for_provider(
148 provider_id: str,
149 airflow_version: str,
150 provider_version: str | None = None,
151 python_version: str = DEFAULT_PYTHON_MAJOR_MINOR_VERSION,
152 ):
153 provider_path_array = provider_id.split(".")
154 if not provider_version:
155 provider_file = (AIRFLOW_SOURCES_ROOT / "airflow" / "providers").joinpath(
156 *provider_path_array
157 ) / "provider.yaml"
158 provider_version = yaml.safe_load(provider_file.read_text())["versions"][0]
159 airflow_file_name = f"provider-{provider_id}-{provider_version}-base-requirements.txt"
160 provider_with_airflow_file_name = f"provider-{provider_id}-{provider_version}-airflow-requirements.txt"
161 provider_file_name = f"provider-{provider_id}-{provider_version}-requirements.txt"
162 command = f"""
163 mkdir -pv {DOCKER_FILE_PREFIX}
164 /opt/airflow/providers/bin/pip freeze | sort > {DOCKER_FILE_PREFIX}{airflow_file_name}
165 /opt/airflow/providers/bin/pip install apache-airflow=={airflow_version} \
166 apache-airflow-providers-{provider_id}=={provider_version}
167 /opt/airflow/providers/bin/pip freeze | sort > {DOCKER_FILE_PREFIX}{provider_with_airflow_file_name}
168 chown --recursive {os.getuid()}:{os.getgid()} {DOCKER_FILE_PREFIX}
169 """
170 run_command(
171 [
172 "docker",
173 "run",
174 "--rm",
175 "-e",
176 f"HOST_USER_ID={os.getuid()}",
177 "-e",
178 f"HOST_GROUP_ID={os.getgid()}",
179 "-v",
180 f"{AIRFLOW_SOURCES_ROOT}/files:/files",
181 get_provider_requirement_image_name(
182 airflow_version=airflow_version, python_version=python_version
183 ),
184 "-c",
185 ";".join(command.split("\n")[1:-1]),
186 ]
187 )
188 target_dir = FILES_DIR / TARGET_DIR_NAME
189 airflow_file = target_dir / airflow_file_name
190 provider_with_airflow_file = target_dir / provider_with_airflow_file_name
191 get_console().print(f"[info]Airflow requirements in {airflow_file}")
192 get_console().print(f"[info]Provider requirements in {provider_with_airflow_file}")
193 base_packages = set([package.split("==")[0] for package in airflow_file.read_text().split("\n")])
194 base_packages.add("apache-airflow-providers-" + provider_id.replace(".", "-"))
195 provider_packages = sorted(
196 [
197 line
198 for line in provider_with_airflow_file.read_text().split("\n")
199 if line.split("==")[0] not in base_packages
200 ]
201 )
202 get_console().print(
203 f"[info]Provider {provider_id} has {len(provider_packages)} transitively "
204 f"dependent packages (excluding airflow and it's dependencies)"
205 )
206 get_console().print(provider_packages)
207 provider_file = target_dir / provider_file_name
208 provider_file.write_text("\n".join(provider_packages) + "\n")
209 get_console().print(
210 f"[success]Generated {provider_id}:{provider_version} requirements in {provider_file}"
211 )
212
213
214 @dataclass
215 class SbomApplicationJob:
216 airflow_version: str
217 python_version: str
218 application_root_path: Path
219 include_provider_dependencies: bool
220 target_path: Path
221
222
223 def produce_sbom_for_application_via_cdxgen_server(
224 job: SbomApplicationJob, output: Output | None, port_map: dict[str, int] | None = None
225 ) -> tuple[int, str]:
226 """
227 Produces SBOM for application using cdxgen server.
228 :param job: Job to run
229 :param output: Output to use
230 :param port_map: map of process name to port - making sure that one process talks to one server
231 in case parallel processing is used
232 :return: tuple with exit code and output
233 """
234 import requests
235
236 if port_map is None:
237 port = 9090
238 else:
239 port = port_map[multiprocessing.current_process().name]
240 get_console(output=output).print(f"[info]Using port {port}")
241 get_console(output=output).print(
242 f"[info]Updating sbom for Airflow {job.airflow_version} and python {job.python_version}"
243 )
244 source_dir = job.application_root_path / job.airflow_version / job.python_version
245 source_dir.mkdir(parents=True, exist_ok=True)
246 lock_file_relative_path = "airflow/www/yarn.lock"
247 download_file_from_github(
248 tag=job.airflow_version, path=lock_file_relative_path, output_file=source_dir / "yarn.lock"
249 )
250 if not download_constraints_file(
251 airflow_version=job.airflow_version,
252 python_version=job.python_version,
253 include_provider_dependencies=job.include_provider_dependencies,
254 output_file=source_dir / "requirements.txt",
255 ):
256 get_console(output=output).print(
257 f"[warning]Failed to download constraints file for "
258 f"{job.airflow_version} and {job.python_version}. Skipping"
259 )
260 return 0, f"SBOM Generate {job.airflow_version}:{job.python_version}"
261 get_console(output=output).print(
262 f"[info]Generating sbom for Airflow {job.airflow_version} and python {job.python_version} with cdxgen"
263 )
264 url = (
265 f"http://127.0.0.1:{port}/sbom?path=/app/{job.airflow_version}/{job.python_version}&"
266 f"project-name=apache-airflow&project-version={job.airflow_version}&multiProject=true"
267 )
268 get_console(output=output).print(f"[info]Triggering sbom generation in {job.airflow_version} via {url}")
269 if not get_dry_run():
270 response = requests.get(url)
271 if response.status_code != 200:
272 get_console(output=output).print(
273 f"[error]Generation for Airflow {job.airflow_version}:{job.python_version} failed. "
274 f"Status code {response.status_code}"
275 )
276 return response.status_code, f"SBOM Generate {job.airflow_version}:{job.python_version}"
277 job.target_path.write_bytes(response.content)
278 get_console(output=output).print(
279 f"[success]Generated SBOM for {job.airflow_version}:{job.python_version}"
280 )
281
282 return 0, f"SBOM Generate {job.airflow_version}:{job.python_version}"
283
[end of dev/breeze/src/airflow_breeze/utils/cdxgen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
62a534dbc7fa8ddb4c249ade85c558b64d1630dd
|
Celery Executor cannot connect to the database to get information, resulting in the scheduler exiting abnormally
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use Celery Executor with RabbitMQ as the broker and PostgreSQL as the result backend
Airflow Version: 2.2.3
Celery Version: 5.2.3
apache-airflow-providers-celery==2.1.0
Below is the error message:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 672, in _execute self._run_scheduler_loop()
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 754, in _run_scheduler_loop self.executor.heartbeat()
  File "/app/airflow2.2.3/airflow/airflow/executors/base_executor.py", line 168, in heartbeat self.sync()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 330, in sync self.update_all_task_states()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 442, in update_all_task_states state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 598, in get_many result = self._get_many_from_db_backend(async_results)
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 618, in _get_many_from_db_backend tasks = session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3373, in all return list(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__ return self._execute_and_instances(context)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3556, in _execute_and_instances conn = self._get_bind_args(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3571, in _get_bind_args return fn(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3550, in _connection_from_session conn = self.session.connection(**kw)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1142, in connection return self._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1150, in _connection_for_bind return self.transaction._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 433, in _connection_for_bind conn = bind._contextual_connect()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2302, in _contextual_connect self._wrap_pool_connect(self.pool.connect, None),
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2339, in _wrap_pool_connect Connection._handle_dbapi_exception_noconnection(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1583, in _handle_dbapi_exception_noconnection util.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_ raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect return fn()

2023-06-05 16:39:05.069 ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect return fn()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect return _ConnectionFairy._checkout(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 778, in _checkout fairy = _ConnectionRecord.checkout(pool)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 495, in checkout rec = pool._do_get()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 241, in _do_get return self._create_connection()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 309, in _create_connection return _ConnectionRecord(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 440, in __init__ self.__connect(first_connect_check=True)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 661, in __connect pool.logger.debug("Error on connect(): %s", e)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ compat.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_ raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 656, in __connect connection = pool._invoke_creator(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect return dialect.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 508, in connect return self.dbapi.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection timed out
    Is the server running on host "xxxxxxxxxx" and accepting TCP/IP connections on port 5432?
```
### What you think should happen instead
I think it may be caused by network jitter issues; adding retries around the result-backend query should solve it.
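
For illustration only, here is a minimal sketch of the kind of retry wrapper that could be applied around the result-backend query (the helper name `_fetch_task_rows` is hypothetical; `retry`, `session_cleanup` and `Task` are helpers exposed by `celery.backends.database`):

```python
from celery.backends.database import Task as TaskDb, retry, session_cleanup


@retry
def _fetch_task_rows(app, task_ids):
    # Hypothetical helper: query the task rows from the database result backend.
    # The @retry decorator re-runs the query a few times on transient database
    # errors instead of letting the scheduler loop crash on the first failure.
    session = app.backend.ResultSession()
    task_cls = getattr(app.backend, "task_cls", TaskDb)
    with session_cleanup(session):
        return session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
```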
### How to reproduce
The CeleryExecutor fails to create a PostgreSQL connection while retrieving metadata information; the failure can be reproduced whenever that connection attempt times out.
### Operating System
NAME="RedFlag Asianux" VERSION="7 (Lotus)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
2023-06-19T09:05:58Z
|
<patch>
diff --git a/airflow/executors/celery_executor_utils.py b/airflow/executors/celery_executor_utils.py
--- a/airflow/executors/celery_executor_utils.py
+++ b/airflow/executors/celery_executor_utils.py
@@ -31,7 +31,7 @@
from celery import Celery, Task, states as celery_states
from celery.backends.base import BaseKeyValueStoreBackend
-from celery.backends.database import DatabaseBackend, Task as TaskDb, session_cleanup
+from celery.backends.database import DatabaseBackend, Task as TaskDb, retry, session_cleanup
from celery.result import AsyncResult
from celery.signals import import_modules as celery_import_modules
from setproctitle import setproctitle
@@ -250,15 +250,19 @@ def _get_many_from_kv_backend(self, async_tasks) -> Mapping[str, EventBufferValu
return self._prepare_state_and_info_by_task_dict(task_ids, task_results_by_task_id)
- def _get_many_from_db_backend(self, async_tasks) -> Mapping[str, EventBufferValueType]:
- task_ids = self._tasks_list_to_task_ids(async_tasks)
+ @retry
+ def _query_task_cls_from_db_backend(self, task_ids, **kwargs):
session = app.backend.ResultSession()
task_cls = getattr(app.backend, "task_cls", TaskDb)
with session_cleanup(session):
- tasks = session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
+ return session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
+ def _get_many_from_db_backend(self, async_tasks) -> Mapping[str, EventBufferValueType]:
+ task_ids = self._tasks_list_to_task_ids(async_tasks)
+ tasks = self._query_task_cls_from_db_backend(task_ids)
task_results = [app.backend.meta_from_decoded(task.to_dict()) for task in tasks]
task_results_by_task_id = {task_result["task_id"]: task_result for task_result in task_results}
+
return self._prepare_state_and_info_by_task_dict(task_ids, task_results_by_task_id)
@staticmethod
</patch>
|
[]
|
[]
| ||||
scipy__scipy-194
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logarithmic chirp frequency sweep incorrect (Trac #1105)
_Original ticket http://projects.scipy.org/scipy/ticket/1105 on 2010-01-31 by trac user johntryan, assigned to unknown._
The algorithm used to calculate the frequency sweep in waveforms.py is incorrect.
In the current implementation in scipy, if the start and end frequencies differ by 1, there is an exception because the log term log10(f1-f0) evaluates to 0 in beta = log10(f1-f0)/t1; however, this is a valid sweep range and should work.
Ticket 547 quotes this source for the algorithm
[http://www.ualberta.ca/dept/aict/bluejay/usr/local/matlab-6.5/help/toolbox/dspblks/chirp.html#873108]
I think that this source is incorrect.
From basics, a plot of log(frequency) vs time should be a straight line with equation y = mx + c.
The slope m is
(log(end_frequency) - log(start_frequency)) / (end_time - start_time).
The reference has this incorrectly as log(end_frequency - start_frequency) / time.
The constant c is log(start_frequency).
The following code fragment implements a logarithmic sweep
```
elif method in ['logarithmic','log','lo']:
    logf0 = log10( f0 )
    beta = (log10(f1)-logf0)/t1
    freq = pow(10,beta*t+logf0)
    phase_angle = 2*pi*freq*t
else:
```
With this code, f0 and f1 can be equal, and downward sweeps work (f0 > f1), hence there is no need to raise a ValueError if f1 <= f0.
This defect is present in the 0.8.0 code from subversion.
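
As a quick numerical check of the slope/intercept form described above (a standalone sketch, not the scipy implementation), the corrected frequency law hits f0 at t = 0 and f1 at t = t1, including for a downward sweep:

```python
from numpy import log10, linspace

f0, f1, t1 = 20.0, 5.0, 2.0              # downward sweep: f0 > f1
t = linspace(0, t1, 5)
beta = (log10(f1) - log10(f0)) / t1      # slope m of log10(freq) vs time
freq = 10 ** (beta * t + log10(f0))      # 10**(m*t + c) with c = log10(f0)
print(freq[0], freq[-1])                 # ~20.0 and ~5.0: endpoints match f0 and f1
```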
</issue>
<code>
[start of README.txt]
1 =================================================
2 Developing SciPy
3 =================================================
4
5 .. Contents::
6
7
8 What is SciPy?
9 --------------
10
11 SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
12 science, and engineering. It includes modules for statistics, optimization,
13 integration, linear algebra, Fourier transforms, signal and image processing,
14 ODE solvers, and more. It is also the name of a very popular conference on
15 scientific programming with Python.
16
17 The SciPy library depends on NumPy, which provides convenient and fast
18 N-dimensional array manipulation. The SciPy library is built to work with
19 NumPy arrays, and provides many user-friendly and efficient numerical routines
20 such as routines for numerical integration and optimization. Together, they
21 run on all popular operating systems, are quick to install, and are free of
22 charge. NumPy and SciPy are easy to use, but powerful enough to be depended
23 upon by some of the world's leading scientists and engineers. If you need to
24 manipulate numbers on a computer and display or publish the results, give
25 SciPy a try!
26
27
28 SciPy structure
29 ---------------
30
31 SciPy aims at being a robust and efficient "super-package" of a number
32 of modules, each of a non-trivial size and complexity. In order for
33 "SciPy integration" to work flawlessly, all SciPy modules must follow
34 certain rules that are described in this document. Hopefully this
35 document will be helpful for SciPy contributors and developers as a
36 basic reference about the structure of the SciPy package.
37
38 Currently SciPy consists of the following files and directories:
39
40 INSTALL.txt
41 SciPy prerequisites, installation, testing, and troubleshooting.
42
43 THANKS.txt
44 SciPy developers and contributors. Please keep it up to date!!
45
46 README.txt
47 SciPy structure (this document).
48
49 setup.py
50 Script for building and installing SciPy.
51
52 MANIFEST.in
53 Additions to distutils-generated SciPy tar-balls. Its usage is
54 deprecated.
55
56 scipy/
57 Contains SciPy __init__.py and the directories of SciPy modules.
58
59 SciPy modules
60 +++++++++++++
61
62 In the following, a *SciPy module* is defined as a Python package, say
63 xxx, that is located in the scipy/ directory. All SciPy modules should
64 follow the following conventions:
65
66 * Ideally, each SciPy module should be as self-contained as possible.
67 That is, it should have minimal dependencies on other packages or
68 modules. Even dependencies on other SciPy modules should be kept to a
69 minimum. A dependency on NumPy is of course assumed.
70
71 * Directory ``xxx/`` must contain
72
73 + a file ``setup.py`` that defines
74 ``configuration(parent_package='',top_path=None)`` function.
75 See below for more details.
76
77 + a file ``info.py``. See below more details.
78
79 * Directory ``xxx/`` may contain
80
81 + a directory ``tests/`` that contains files ``test_<name>.py``
82 corresponding to modules ``xxx/<name>{.py,.so,/}``. See below for
83 more details.
84
85 + a file ``MANIFEST.in`` that may contain only ``include setup.py`` line.
86 DO NOT specify sources in MANIFEST.in, you must specify all sources
87 in setup.py file. Otherwise released SciPy tarballs will miss these sources.
88
89 + a directory ``docs/`` for documentation.
90
91 For details, read:
92
93 https://github.com/numpy/numpy/blob/master/doc/DISTUTILS.rst.txt
94
95
96 Documentation
97 -------------
98
99 The documentation site is here
100 http://docs.scipy.org
101
102 Web sites
103 ---------
104
105 The user's site is here
106 http://www.scipy.org/
107
108 The developer's site is here
109 http://projects.scipy.org/scipy/wiki
110
111
112 Mailing Lists
113 -------------
114
115 Please see the developer's list here
116 http://projects.scipy.org/mailman/listinfo/scipy-dev
117
118
119 Bug reports
120 -----------
121
122 To search for bugs, please use the SciPy Bug Tracker at
123 http://projects.scipy.org/scipy/query
124
125 To report a bug, please use the SciPy Bug Tracker at
126 http://projects.scipy.org/scipy/newticket
127
128
129 License information
130 -------------------
131
132 See the file "LICENSE" for information on the history of this
133 software, terms & conditions for usage, and a DISCLAIMER OF ALL
134 WARRANTIES.
135
136
[end of README.txt]
[start of scipy/signal/fir_filter_design.py]
1 """Functions for FIR filter design."""
2
3 from math import ceil, log
4 import numpy as np
5 from numpy.fft import irfft
6 from scipy.special import sinc
7 import sigtools
8
9 __all__ = ['kaiser_beta', 'kaiser_atten', 'kaiserord',
10 'firwin', 'firwin2', 'remez']
11
12
13 # Some notes on function parameters:
14 #
15 # `cutoff` and `width` are given as a numbers between 0 and 1. These
16 # are relative frequencies, expressed as a fraction of the Nyquist rate.
17 # For example, if the Nyquist rate is 2KHz, then width=0.15 is a width
18 # of 300 Hz.
19 #
20 # The `order` of a FIR filter is one less than the number of taps.
21 # This is a potential source of confusion, so in the following code,
22 # we will always use the number of taps as the parameterization of
23 # the 'size' of the filter. The "number of taps" means the number
24 # of coefficients, which is the same as the length of the impulse
25 # response of the filter.
26
27
28 def kaiser_beta(a):
29 """Compute the Kaiser parameter `beta`, given the attenuation `a`.
30
31 Parameters
32 ----------
33 a : float
34 The desired attenuation in the stopband and maximum ripple in
35 the passband, in dB. This should be a *positive* number.
36
37 Returns
38 -------
39 beta : float
40 The `beta` parameter to be used in the formula for a Kaiser window.
41
42 References
43 ----------
44 Oppenheim, Schafer, "Discrete-Time Signal Processing", p.475-476.
45 """
46 if a > 50:
47 beta = 0.1102 * (a - 8.7)
48 elif a > 21:
49 beta = 0.5842 * (a - 21) ** 0.4 + 0.07886 * (a - 21)
50 else:
51 beta = 0.0
52 return beta
53
54
55 def kaiser_atten(numtaps, width):
56 """Compute the attenuation of a Kaiser FIR filter.
57
58 Given the number of taps `N` and the transition width `width`, compute the
59 attenuation `a` in dB, given by Kaiser's formula:
60
61 a = 2.285 * (N - 1) * pi * width + 7.95
62
63 Parameters
64 ----------
65 N : int
66 The number of taps in the FIR filter.
67 width : float
68 The desired width of the transition region between passband and
69 stopband (or, in general, at any discontinuity) for the filter.
70
71 Returns
72 -------
73 a : float
74 The attenuation of the ripple, in dB.
75
76 See Also
77 --------
78 kaiserord, kaiser_beta
79 """
80 a = 2.285 * (numtaps - 1) * np.pi * width + 7.95
81 return a
82
83
84 def kaiserord(ripple, width):
85 """Design a Kaiser window to limit ripple and width of transition region.
86
87 Parameters
88 ----------
89 ripple : float
90 Positive number specifying maximum ripple in passband (dB) and minimum
91 ripple in stopband.
92 width : float
93 Width of transition region (normalized so that 1 corresponds to pi
94 radians / sample).
95
96 Returns
97 -------
98 numtaps : int
99 The length of the kaiser window.
100 beta :
101 The beta parameter for the kaiser window.
102
103 Notes
104 -----
105 There are several ways to obtain the Kaiser window:
106
107 signal.kaiser(numtaps, beta, sym=0)
108 signal.get_window(beta, numtaps)
109 signal.get_window(('kaiser', beta), numtaps)
110
111 The empirical equations discovered by Kaiser are used.
112
113 See Also
114 --------
115 kaiser_beta, kaiser_atten
116
117 References
118 ----------
119 Oppenheim, Schafer, "Discrete-Time Signal Processing", p.475-476.
120
121 """
122 A = abs(ripple) # in case somebody is confused as to what's meant
123 if A < 8:
124 # Formula for N is not valid in this range.
125 raise ValueError("Requested maximum ripple attentuation %f is too "
126 "small for the Kaiser formula." % A)
127 beta = kaiser_beta(A)
128
129 # Kaiser's formula (as given in Oppenheim and Schafer) is for the filter
130 # order, so we have to add 1 to get the number of taps.
131 numtaps = (A - 7.95) / 2.285 / (np.pi * width) + 1
132
133 return int(ceil(numtaps)), beta
134
135
136 def firwin(numtaps, cutoff, width=None, window='hamming', pass_zero=True,
137 scale=True, nyq=1.0):
138 """
139 FIR filter design using the window method.
140
141 This function computes the coefficients of a finite impulse response
142 filter. The filter will have linear phase; it will be Type I if
143 `numtaps` is odd and Type II if `numtaps` is even.
144
145 Type II filters always have zero response at the Nyquist rate, so a
146 ValueError exception is raised if firwin is called with `numtaps` even and
147 having a passband whose right end is at the Nyquist rate.
148
149 Parameters
150 ----------
151 numtaps : int
152 Length of the filter (number of coefficients, i.e. the filter
153 order + 1). `numtaps` must be even if a passband includes the
154 Nyquist frequency.
155
156 cutoff : float or 1D array_like
157 Cutoff frequency of filter (expressed in the same units as `nyq`)
158 OR an array of cutoff frequencies (that is, band edges). In the
159 latter case, the frequencies in `cutoff` should be positive and
160 monotonically increasing between 0 and `nyq`. The values 0 and
161 `nyq` must not be included in `cutoff`.
162
163 width : float or None
164 If `width` is not None, then assume it is the approximate width
165 of the transition region (expressed in the same units as `nyq`)
166 for use in Kaiser FIR filter design. In this case, the `window`
167 argument is ignored.
168
169 window : string or tuple of string and parameter values
170 Desired window to use. See `scipy.signal.get_window` for a list
171 of windows and required parameters.
172
173 pass_zero : bool
174 If True, the gain at the frequency 0 (i.e. the "DC gain") is 1.
175 Otherwise the DC gain is 0.
176
177 scale : bool
178 Set to True to scale the coefficients so that the frequency
179 response is exactly unity at a certain frequency.
180 That frequency is either:
181 0 (DC) if the first passband starts at 0 (i.e. pass_zero
182 is True);
183 `nyq` (the Nyquist rate) if the first passband ends at
184 `nyq` (i.e the filter is a single band highpass filter);
185 center of first passband otherwise.
186
187 nyq : float
188 Nyquist frequency. Each frequency in `cutoff` must be between 0
189 and `nyq`.
190
191 Returns
192 -------
193 h : 1D ndarray
194 Coefficients of length `numtaps` FIR filter.
195
196 Raises
197 ------
198 ValueError
199 If any value in `cutoff` is less than or equal to 0 or greater
200 than or equal to `nyq`, if the values in `cutoff` are not strictly
201 monotonically increasing, or if `numtaps` is even but a passband
202 includes the Nyquist frequency.
203
204 Examples
205 --------
206
207 Low-pass from 0 to f::
208
209 >>> firwin(numtaps, f)
210
211 Use a specific window function::
212
213 >>> firwin(numtaps, f, window='nuttall')
214
215 High-pass ('stop' from 0 to f)::
216
217 >>> firwin(numtaps, f, pass_zero=False)
218
219 Band-pass::
220
221 >>> firwin(numtaps, [f1, f2], pass_zero=False)
222
223 Band-stop::
224
225 >>> firwin(numtaps, [f1, f2])
226
227 Multi-band (passbands are [0, f1], [f2, f3] and [f4, 1])::
228
229     >>> firwin(numtaps, [f1, f2, f3, f4])
230
231 Multi-band (passbands are [f1, f2] and [f3,f4])::
232
233 >>> firwin(numtaps, [f1, f2, f3, f4], pass_zero=False)
234
235 See also
236 --------
237 scipy.signal.firwin2
238
239 """
240
241 # The major enhancements to this function added in November 2010 were
242 # developed by Tom Krauss (see ticket #902).
243
244 cutoff = np.atleast_1d(cutoff) / float(nyq)
245
246 # Check for invalid input.
247 if cutoff.ndim > 1:
248 raise ValueError("The cutoff argument must be at most "
249 "one-dimensional.")
250 if cutoff.size == 0:
251 raise ValueError("At least one cutoff frequency must be given.")
252 if cutoff.min() <= 0 or cutoff.max() >= 1:
253 raise ValueError("Invalid cutoff frequency: frequencies must be "
254 "greater than 0 and less than nyq.")
255 if np.any(np.diff(cutoff) <= 0):
256 raise ValueError("Invalid cutoff frequencies: the frequencies "
257 "must be strictly increasing.")
258
259 if width is not None:
260 # A width was given. Find the beta parameter of the Kaiser window
261 # and set `window`. This overrides the value of `window` passed in.
262 atten = kaiser_atten(numtaps, float(width) / nyq)
263 beta = kaiser_beta(atten)
264 window = ('kaiser', beta)
265
266 pass_nyquist = bool(cutoff.size & 1) ^ pass_zero
267 if pass_nyquist and numtaps % 2 == 0:
268 raise ValueError("A filter with an even number of coefficients must "
269 "have zero response at the Nyquist rate.")
270
271 # Insert 0 and/or 1 at the ends of cutoff so that the length of cutoff
272 # is even, and each pair in cutoff corresponds to passband.
273 cutoff = np.hstack(([0.0] * pass_zero, cutoff, [1.0] * pass_nyquist))
274
275 # `bands` is a 2D array; each row gives the left and right edges of
276 # a passband.
277 bands = cutoff.reshape(-1, 2)
278
279 # Build up the coefficients.
280 alpha = 0.5 * (numtaps - 1)
281 m = np.arange(0, numtaps) - alpha
282 h = 0
283 for left, right in bands:
284 h += right * sinc(right * m)
285 h -= left * sinc(left * m)
286
287 # Get and apply the window function.
288 from signaltools import get_window
289 win = get_window(window, numtaps, fftbins=False)
290 h *= win
291
292 # Now handle scaling if desired.
293 if scale:
294 # Get the first passband.
295 left, right = bands[0]
296 if left == 0:
297 scale_frequency = 0.0
298 elif right == 1:
299 scale_frequency = 1.0
300 else:
301 scale_frequency = 0.5 * (left + right)
302 c = np.cos(np.pi * m * scale_frequency)
303 s = np.sum(h * c)
304 h /= s
305
306 return h
307
308
309 # Original version of firwin2 from scipy ticket #457, submitted by "tash".
310 #
311 # Rewritten by Warren Weckesser, 2010.
312
313 def firwin2(numtaps, freq, gain, nfreqs=None, window='hamming', nyq=1.0, antisymmetric=False):
314 """FIR filter design using the window method.
315
316 From the given frequencies `freq` and corresponding gains `gain`,
317 this function constructs an FIR filter with linear phase and
318 (approximately) the given frequency response.
319
320 Parameters
321 ----------
322 numtaps : int
323 The number of taps in the FIR filter. `numtaps` must be less than
324 `nfreqs`.
325
326 freq : array-like, 1D
327 The frequency sampling points. Typically 0.0 to 1.0 with 1.0 being
328 Nyquist. The Nyquist frequency can be redefined with the argument
329 `nyq`.
330
331 The values in `freq` must be nondecreasing. A value can be repeated
332 once to implement a discontinuity. The first value in `freq` must
333 be 0, and the last value must be `nyq`.
334
335 gain : array-like
336 The filter gains at the frequency sampling points. Certain
337 constraints to gain values, depending on the filter type, are applied,
338 see Notes for details.
339
340 nfreqs : int, optional
341 The size of the interpolation mesh used to construct the filter.
342 For most efficient behavior, this should be a power of 2 plus 1
343         (e.g., 129, 257, etc.). The default is one more than the smallest
344 power of 2 that is not less than `numtaps`. `nfreqs` must be greater
345 than `numtaps`.
346
347 window : string or (string, float) or float, or None, optional
348 Window function to use. Default is "hamming". See
349 `scipy.signal.get_window` for the complete list of possible values.
350 If None, no window function is applied.
351
352 nyq : float
353 Nyquist frequency. Each frequency in `freq` must be between 0 and
354 `nyq` (inclusive).
355
356 antisymmetric : bool
357         Flag setting whether the resulting impulse response is symmetric/antisymmetric.
358 See Notes for more details.
359
360 Returns
361 -------
362 taps : numpy 1D array of length `numtaps`
363 The filter coefficients of the FIR filter.
364
365 Examples
366 --------
367 A lowpass FIR filter with a response that is 1 on [0.0, 0.5], and
368 that decreases linearly on [0.5, 1.0] from 1 to 0:
369
370 >>> taps = firwin2(150, [0.0, 0.5, 1.0], [1.0, 1.0, 0.0])
371 >>> print(taps[72:78])
372 [-0.02286961 -0.06362756 0.57310236 0.57310236 -0.06362756 -0.02286961]
373
374 See also
375 --------
376 scipy.signal.firwin
377
378 Notes
379 -----
380
381 From the given set of frequencies and gains, the desired response is
382 constructed in the frequency domain. The inverse FFT is applied to the
383 desired response to create the associated convolution kernel, and the
384 first `numtaps` coefficients of this kernel, scaled by `window`, are
385 returned.
386
387 The FIR filter will have linear phase. The type of filter is determined by
388 the value of 'numtaps` and `antisymmetric` flag.
389 There are four possible combinations:
390 - odd `numtaps`, `antisymmetric` is False, type I filter is produced
391 - even `numtaps`, `antisymmetric` is False, type II filter is produced
392 - odd `numtaps`, `antisymmetric` is True, type III filter is produced
393 - even `numtaps`, `antisymmetric` is True, type IV filter is produced
394
395 Magnitude response of all but type I filters are subjects to following
396 constraints:
397 - type II -- zero at the Nyquist frequency
398 - type III -- zero at zero and Nyquist frequencies
399 - type IV -- zero at zero frequency
400
401 .. versionadded:: 0.9.0
402
403 References
404 ----------
405 .. [1] Oppenheim, A. V. and Schafer, R. W., "Discrete-Time Signal
406 Processing", Prentice-Hall, Englewood Cliffs, New Jersey (1989).
407 (See, for example, Section 7.4.)
408
409 .. [2] Smith, Steven W., "The Scientist and Engineer's Guide to Digital
410 Signal Processing", Ch. 17. http://www.dspguide.com/ch17/1.htm
411
412 """
413
414 if len(freq) != len(gain):
415 raise ValueError('freq and gain must be of same length.')
416
417 if nfreqs is not None and numtaps >= nfreqs:
418 raise ValueError(('ntaps must be less than nfreqs, but firwin2 was '
419 'called with ntaps=%d and nfreqs=%s') %
420 (numtaps, nfreqs))
421
422 if freq[0] != 0 or freq[-1] != nyq:
423 raise ValueError('freq must start with 0 and end with `nyq`.')
424 d = np.diff(freq)
425 if (d < 0).any():
426 raise ValueError('The values in freq must be nondecreasing.')
427 d2 = d[:-1] + d[1:]
428 if (d2 == 0).any():
429 raise ValueError('A value in freq must not occur more than twice.')
430
431 if antisymmetric:
432 if numtaps % 2 == 0:
433 ftype = 4
434 else:
435 ftype = 3
436 else:
437 if numtaps % 2 == 0:
438 ftype = 2
439 else:
440 ftype = 1
441
442 if ftype == 2 and gain[-1] != 0.0:
443 raise ValueError("A Type II filter must have zero gain at the Nyquist rate.")
444 elif ftype == 3 and (gain[0] != 0.0 or gain[-1] != 0.0):
445 raise ValueError("A Type III filter must have zero gain at zero and Nyquist rates.")
446 elif ftype == 4 and gain[0] != 0.0:
447 raise ValueError("A Type IV filter must have zero gain at zero rate.")
448
449 if nfreqs is None:
450 nfreqs = 1 + 2 ** int(ceil(log(numtaps, 2)))
451
452 # Tweak any repeated values in freq so that interp works.
453 eps = np.finfo(float).eps
454 for k in range(len(freq)):
455 if k < len(freq) - 1 and freq[k] == freq[k + 1]:
456 freq[k] = freq[k] - eps
457 freq[k + 1] = freq[k + 1] + eps
458
459 # Linearly interpolate the desired response on a uniform mesh `x`.
460 x = np.linspace(0.0, nyq, nfreqs)
461 fx = np.interp(x, freq, gain)
462
463 # Adjust the phases of the coefficients so that the first `ntaps` of the
464 # inverse FFT are the desired filter coefficients.
465 shift = np.exp(-(numtaps - 1) / 2. * 1.j * np.pi * x / nyq)
466 if ftype > 2:
467 shift *= 1j
468
469 fx2 = fx * shift
470
471
472 # Use irfft to compute the inverse FFT.
473 out_full = irfft(fx2)
474
475 if window is not None:
476 # Create the window to apply to the filter coefficients.
477 from signaltools import get_window
478 wind = get_window(window, numtaps, fftbins=False)
479 else:
480 wind = 1
481
482 # Keep only the first `numtaps` coefficients in `out`, and multiply by
483 # the window.
484 out = out_full[:numtaps] * wind
485
486 if ftype == 3:
487 out[out.size // 2] = 0.0
488
489 return out
490
491
492 def remez(numtaps, bands, desired, weight=None, Hz=1, type='bandpass',
493 maxiter=25, grid_density=16):
494 """
495 Calculate the minimax optimal filter using the Remez exchange algorithm.
496
497 Calculate the filter-coefficients for the finite impulse response
498 (FIR) filter whose transfer function minimizes the maximum error
499 between the desired gain and the realized gain in the specified
500 frequency bands using the Remez exchange algorithm.
501
502 Parameters
503 ----------
504 numtaps : int
505 The desired number of taps in the filter. The number of taps is
506 the number of terms in the filter, or the filter order plus one.
507 bands : array_like
508 A monotonic sequence containing the band edges in Hz.
509 All elements must be non-negative and less than half the sampling
510 frequency as given by `Hz`.
511 desired : array_like
512 A sequence half the size of bands containing the desired gain
513 in each of the specified bands.
514 weight : array_like, optional
515 A relative weighting to give to each band region. The length of
516 `weight` has to be half the length of `bands`.
517 Hz : scalar, optional
518 The sampling frequency in Hz. Default is 1.
519 type : {'bandpass', 'differentiator', 'hilbert'}, optional
520 The type of filter:
521
522 'bandpass' : flat response in bands. This is the default.
523
524 'differentiator' : frequency proportional response in bands.
525
526 'hilbert' : filter with odd symmetry, that is, type III
527 (for even order) or type IV (for odd order)
528 linear phase filters.
529
530 maxiter : int, optional
531 Maximum number of iterations of the algorithm. Default is 25.
532 grid_density : int, optional
533 Grid density. The dense grid used in `remez` is of size
534 ``(numtaps + 1) * grid_density``. Default is 16.
535
536 Returns
537 -------
538 out : ndarray
539 A rank-1 array containing the coefficients of the optimal
540 (in a minimax sense) filter.
541
542 See Also
543 --------
544 freqz : Compute the frequency response of a digital filter.
545
546 References
547 ----------
548 .. [1] J. H. McClellan and T. W. Parks, "A unified approach to the
549 design of optimum FIR linear phase digital filters",
550 IEEE Trans. Circuit Theory, vol. CT-20, pp. 697-701, 1973.
551 .. [2] J. H. McClellan, T. W. Parks and L. R. Rabiner, "A Computer
552 Program for Designing Optimum FIR Linear Phase Digital
553 Filters", IEEE Trans. Audio Electroacoust., vol. AU-21,
554 pp. 506-525, 1973.
555
556 Examples
557 --------
558 We want to construct a filter with a passband at 0.2-0.4 Hz, and
559 stop bands at 0-0.1 Hz and 0.45-0.5 Hz. Note that this means that the
560 behavior in the frequency ranges between those bands is unspecified and
561 may overshoot.
562
563 >>> bpass = sp.signal.remez(72, [0, 0.1, 0.2, 0.4, 0.45, 0.5], [0, 1, 0])
564 >>> freq, response = sp.signal.freqz(bpass)
565 >>> ampl = np.abs(response)
566
567 >>> import matplotlib.pyplot as plt
568 >>> fig = plt.figure()
569 >>> ax1 = fig.add_subplot(111)
570 >>> ax1.semilogy(freq/(2*np.pi), ampl, 'b-') # freq in Hz
571 [<matplotlib.lines.Line2D object at 0xf486790>]
572 >>> plt.show()
573
574 """
575 # Convert type
576 try:
577 tnum = {'bandpass': 1, 'differentiator': 2, 'hilbert': 3}[type]
578 except KeyError:
579 raise ValueError("Type must be 'bandpass', 'differentiator', "
580 "or 'hilbert'")
581
582 # Convert weight
583 if weight is None:
584 weight = [1] * len(desired)
585
586 bands = np.asarray(bands).copy()
587 return sigtools._remez(numtaps, bands, desired, weight, tnum, Hz,
588 maxiter, grid_density)
589
[end of scipy/signal/fir_filter_design.py]
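For quick reference, here is a minimal usage sketch showing how the `kaiserord` and `firwin` routines in the file above fit together; this is illustration only — the sampling rate, ripple, transition width and cutoff values are arbitrary, and the `nyq` keyword follows the signature shown in the listing.

```python
import numpy as np
from scipy.signal import firwin, freqz, kaiserord

fs = 1000.0                # sampling rate in Hz (arbitrary)
nyq = 0.5 * fs             # Nyquist rate
width = 50.0 / nyq         # transition width, normalized to the Nyquist rate
ripple_db = 60.0           # desired stopband attenuation in dB

# Kaiser window parameters that meet the ripple/width spec, then the filter taps.
numtaps, beta = kaiserord(ripple_db, width)
taps = firwin(numtaps, cutoff=150.0, window=('kaiser', beta), nyq=nyq)

# Inspect the magnitude response of the resulting lowpass filter.
w, h = freqz(taps)
print(numtaps, np.abs(h).max())
```

Passing `width=` directly to `firwin` (as its docstring describes) computes the same Kaiser parameters internally via `kaiser_atten` and `kaiser_beta`.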
[start of scipy/signal/waveforms.py]
1 # Author: Travis Oliphant
2 # 2003
3 #
4 # Feb. 2010: Updated by Warren Weckesser:
5 # Rewrote much of chirp()
6 # Added sweep_poly()
7
8 from numpy import asarray, zeros, place, nan, mod, pi, extract, log, sqrt, \
9 exp, cos, sin, polyval, polyint
10
11 __all__ = ['sawtooth', 'square', 'gausspulse', 'chirp', 'sweep_poly']
12
13
14 def sawtooth(t, width=1):
15 """
16 Return a periodic sawtooth waveform.
17
18 The sawtooth waveform has a period 2*pi, rises from -1 to 1 on the
19 interval 0 to width*2*pi and drops from 1 to -1 on the interval
20 width*2*pi to 2*pi. `width` must be in the interval [0,1].
21
22 Parameters
23 ----------
24 t : array_like
25 Time.
26 width : float, optional
27 Width of the waveform. Default is 1.
28
29 Returns
30 -------
31 y : ndarray
32 Output array containing the sawtooth waveform.
33
34 Examples
35 --------
36 >>> import matplotlib.pyplot as plt
37 >>> x = np.linspace(0, 20*np.pi, 500)
38 >>> plt.plot(x, sp.signal.sawtooth(x))
39
40 """
41 t, w = asarray(t), asarray(width)
42 w = asarray(w + (t - t))
43 t = asarray(t + (w - w))
44 if t.dtype.char in ['fFdD']:
45 ytype = t.dtype.char
46 else:
47 ytype = 'd'
48 y = zeros(t.shape, ytype)
49
50 # width must be between 0 and 1 inclusive
51 mask1 = (w > 1) | (w < 0)
52 place(y, mask1, nan)
53
54 # take t modulo 2*pi
55 tmod = mod(t, 2 * pi)
56
57 # on the interval 0 to width*2*pi function is
58 # tmod / (pi*w) - 1
59 mask2 = (1 - mask1) & (tmod < w * 2 * pi)
60 tsub = extract(mask2, tmod)
61 wsub = extract(mask2, w)
62 place(y, mask2, tsub / (pi * wsub) - 1)
63
64 # on the interval width*2*pi to 2*pi function is
65 # (pi*(w+1)-tmod) / (pi*(1-w))
66
67 mask3 = (1 - mask1) & (1 - mask2)
68 tsub = extract(mask3, tmod)
69 wsub = extract(mask3, w)
70 place(y, mask3, (pi * (wsub + 1) - tsub) / (pi * (1 - wsub)))
71 return y
72
73
74 def square(t, duty=0.5):
75 """
76 Return a periodic square-wave waveform.
77
78 The square wave has a period 2*pi, has value +1 from 0 to 2*pi*duty
79 and -1 from 2*pi*duty to 2*pi. `duty` must be in the interval [0,1].
80
81 Parameters
82 ----------
83 t : array_like
84 The input time array.
85 duty : float, optional
86 Duty cycle.
87
88 Returns
89 -------
90 y : array_like
91 The output square wave.
92
93 """
94 t, w = asarray(t), asarray(duty)
95 w = asarray(w + (t - t))
96 t = asarray(t + (w - w))
97 if t.dtype.char in ['fFdD']:
98 ytype = t.dtype.char
99 else:
100 ytype = 'd'
101 y = zeros(t.shape, ytype)
102
103 # width must be between 0 and 1 inclusive
104 mask1 = (w > 1) | (w < 0)
105 place(y, mask1, nan)
106
107 # take t modulo 2*pi
108 tmod = mod(t, 2 * pi)
109
110 # on the interval 0 to duty*2*pi function is
111 # 1
112 mask2 = (1 - mask1) & (tmod < w * 2 * pi)
113 tsub = extract(mask2, tmod)
114 wsub = extract(mask2, w)
115 place(y, mask2, 1)
116
117 # on the interval duty*2*pi to 2*pi function is
118 # (pi*(w+1)-tmod) / (pi*(1-w))
119
120 mask3 = (1 - mask1) & (1 - mask2)
121 tsub = extract(mask3, tmod)
122 wsub = extract(mask3, w)
123 place(y, mask3, -1)
124 return y
125
126
127 def gausspulse(t, fc=1000, bw=0.5, bwr=-6, tpr=-60, retquad=False,
128 retenv=False):
129 """
130 Return a gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc*t).
131
132 If `retquad` is True, then return the real and imaginary parts
133 (in-phase and quadrature).
134 If `retenv` is True, then return the envelope (unmodulated signal).
135 Otherwise, return the real part of the modulated sinusoid.
136
137 Parameters
138 ----------
139 t : ndarray, or the string 'cutoff'
140 Input array.
141 fc : int, optional
142 Center frequency (Hz). Default is 1000.
143 bw : float, optional
144 Fractional bandwidth in frequency domain of pulse (Hz).
145 Default is 0.5.
146     bwr : float, optional
147 Reference level at which fractional bandwidth is calculated (dB).
148 Default is -6.
149 tpr : float, optional
150 If `t` is 'cutoff', then the function returns the cutoff
151 time for when the pulse amplitude falls below `tpr` (in dB).
152 Default is -60.
153 retquad : bool, optional
154 If True, return the quadrature (imaginary) as well as the real part
155 of the signal. Default is False.
156 retenv : bool, optional
157 If True, return the envelope of the signal. Default is False.
158
159 """
160 if fc < 0:
161 raise ValueError("Center frequency (fc=%.2f) must be >=0." % fc)
162 if bw <= 0:
163 raise ValueError("Fractional bandwidth (bw=%.2f) must be > 0." % bw)
164 if bwr >= 0:
165 raise ValueError("Reference level for bandwidth (bwr=%.2f) must "
166 "be < 0 dB" % bwr)
167
168 # exp(-a t^2) <-> sqrt(pi/a) exp(-pi^2/a * f^2) = g(f)
169
170 ref = pow(10.0, bwr / 20.0)
171 # fdel = fc*bw/2: g(fdel) = ref --- solve this for a
172 #
173 # pi^2/a * fc^2 * bw^2 /4=-log(ref)
174 a = -(pi * fc * bw) ** 2 / (4.0 * log(ref))
175
176 if t == 'cutoff': # compute cut_off point
177 # Solve exp(-a tc**2) = tref for tc
178 # tc = sqrt(-log(tref) / a) where tref = 10^(tpr/20)
179 if tpr >= 0:
180 raise ValueError("Reference level for time cutoff must be < 0 dB")
181 tref = pow(10.0, tpr / 20.0)
182 return sqrt(-log(tref) / a)
183
184 yenv = exp(-a * t * t)
185 yI = yenv * cos(2 * pi * fc * t)
186 yQ = yenv * sin(2 * pi * fc * t)
187 if not retquad and not retenv:
188 return yI
189 if not retquad and retenv:
190 return yI, yenv
191 if retquad and not retenv:
192 return yI, yQ
193 if retquad and retenv:
194 return yI, yQ, yenv
195
196
197 def chirp(t, f0, t1, f1, method='linear', phi=0, vertex_zero=True):
198 """Frequency-swept cosine generator.
199
200 In the following, 'Hz' should be interpreted as 'cycles per time unit';
201 there is no assumption here that the time unit is one second. The
202 important distinction is that the units of rotation are cycles, not
203 radians.
204
205 Parameters
206 ----------
207 t : ndarray
208 Times at which to evaluate the waveform.
209 f0 : float
210 Frequency (in Hz) at time t=0.
211 t1 : float
212 Time at which `f1` is specified.
213 f1 : float
214 Frequency (in Hz) of the waveform at time `t1`.
215 method : {'linear', 'quadratic', 'logarithmic', 'hyperbolic'}, optional
216 Kind of frequency sweep. If not given, `linear` is assumed. See
217 Notes below for more details.
218 phi : float, optional
219 Phase offset, in degrees. Default is 0.
220 vertex_zero : bool, optional
221 This parameter is only used when `method` is 'quadratic'.
222 It determines whether the vertex of the parabola that is the graph
223 of the frequency is at t=0 or t=t1.
224
225 Returns
226 -------
227 A numpy array containing the signal evaluated at 't' with the requested
228 time-varying frequency. More precisely, the function returns:
229
230 ``cos(phase + (pi/180)*phi)``
231
232 where `phase` is the integral (from 0 to t) of ``2*pi*f(t)``.
233 ``f(t)`` is defined below.
234
235 See Also
236 --------
237 scipy.signal.waveforms.sweep_poly
238
239 Notes
240 -----
241 There are four options for the `method`. The following formulas give
242 the instantaneous frequency (in Hz) of the signal generated by
243 `chirp()`. For convenience, the shorter names shown below may also be
244 used.
245
246 linear, lin, li:
247
248 ``f(t) = f0 + (f1 - f0) * t / t1``
249
250 quadratic, quad, q:
251
252 The graph of the frequency f(t) is a parabola through (0, f0) and
253 (t1, f1). By default, the vertex of the parabola is at (0, f0).
254 If `vertex_zero` is False, then the vertex is at (t1, f1). The
255 formula is:
256
257 if vertex_zero is True:
258
259 ``f(t) = f0 + (f1 - f0) * t**2 / t1**2``
260
261 else:
262
263 ``f(t) = f1 - (f1 - f0) * (t1 - t)**2 / t1**2``
264
265 To use a more general quadratic function, or an arbitrary
266 polynomial, use the function `scipy.signal.waveforms.sweep_poly`.
267
268 logarithmic, log, lo:
269
270 ``f(t) = f0 * (f1/f0)**(t/t1)``
271
272 f0 and f1 must be nonzero and have the same sign.
273
274 This signal is also known as a geometric or exponential chirp.
275
276 hyperbolic, hyp:
277
278 ``f(t) = f0*f1*t1 / ((f0 - f1)*t + f1*t1)``
279
280 f1 must be positive, and f0 must be greater than f1.
281
282 """
283
284 # 'phase' is computed in _chirp_phase, to make testing easier.
285 phase = _chirp_phase(t, f0, t1, f1, method, vertex_zero)
286 # Convert phi to radians.
287 phi *= pi / 180
288 return cos(phase + phi)
289
290
291 def _chirp_phase(t, f0, t1, f1, method='linear', vertex_zero=True):
292 """
293 Calculate the phase used by chirp_phase to generate its output.
294
295 See `chirp_phase` for a description of the arguments.
296
297 """
298 f0 = float(f0)
299 t1 = float(t1)
300 f1 = float(f1)
301 if method in ['linear', 'lin', 'li']:
302 beta = (f1 - f0) / t1
303 phase = 2 * pi * (f0 * t + 0.5 * beta * t * t)
304
305 elif method in ['quadratic', 'quad', 'q']:
306 beta = (f1 - f0) / (t1 ** 2)
307 if vertex_zero:
308 phase = 2 * pi * (f0 * t + beta * t ** 3 / 3)
309 else:
310 phase = 2 * pi * (f1 * t + beta * ((t1 - t) ** 3 - t1 ** 3) / 3)
311
312 elif method in ['logarithmic', 'log', 'lo']:
313 if f0 * f1 <= 0.0:
314 raise ValueError("For a geometric chirp, f0 and f1 must be "
315 "nonzero and have the same sign.")
316 if f0 == f1:
317 phase = 2 * pi * f0 * t
318 else:
319 beta = t1 / log(f1 / f0)
320 phase = 2 * pi * beta * f0 * (pow(f1 / f0, t / t1) - 1.0)
321
322 elif method in ['hyperbolic', 'hyp']:
323 if f1 <= 0.0 or f0 <= f1:
324 raise ValueError("hyperbolic chirp requires f0 > f1 > 0.0.")
325 c = f1 * t1
326 df = f0 - f1
327 phase = 2 * pi * (f0 * c / df) * log((df * t + c) / c)
328
329 else:
330 raise ValueError("method must be 'linear', 'quadratic', 'logarithmic',"
331 " or 'hyperbolic', but a value of %r was given." % method)
332
333 return phase
334
335
336 def sweep_poly(t, poly, phi=0):
337 """Frequency-swept cosine generator, with a time-dependent frequency
338 specified as a polynomial.
339
340 This function generates a sinusoidal function whose instantaneous
341 frequency varies with time. The frequency at time `t` is given by
342 the polynomial `poly`.
343
344 Parameters
345 ----------
346 t : ndarray
347 Times at which to evaluate the waveform.
348 poly : 1D ndarray (or array-like), or instance of numpy.poly1d
349 The desired frequency expressed as a polynomial. If `poly` is
350 a list or ndarray of length n, then the elements of `poly` are
351 the coefficients of the polynomial, and the instantaneous
352 frequency is
353
354 ``f(t) = poly[0]*t**(n-1) + poly[1]*t**(n-2) + ... + poly[n-1]``
355
356 If `poly` is an instance of numpy.poly1d, then the
357 instantaneous frequency is
358
359 ``f(t) = poly(t)``
360
361 phi : float, optional
362 Phase offset, in degrees. Default is 0.
363
364 Returns
365 -------
366 A numpy array containing the signal evaluated at 't' with the requested
367 time-varying frequency. More precisely, the function returns
368
369 ``cos(phase + (pi/180)*phi)``
370
371 where `phase` is the integral (from 0 to t) of ``2 * pi * f(t)``;
372 ``f(t)`` is defined above.
373
374 See Also
375 --------
376 scipy.signal.waveforms.chirp
377
378 Notes
379 -----
380 .. versionadded:: 0.8.0
381 """
382 # 'phase' is computed in _sweep_poly_phase, to make testing easier.
383 phase = _sweep_poly_phase(t, poly)
384 # Convert to radians.
385 phi *= pi / 180
386 return cos(phase + phi)
387
388
389 def _sweep_poly_phase(t, poly):
390 """
391 Calculate the phase used by sweep_poly to generate its output.
392
393 See `sweep_poly` for a description of the arguments.
394
395 """
396 # polyint handles lists, ndarrays and instances of poly1d automatically.
397 intpoly = polyint(poly)
398 phase = 2 * pi * polyval(intpoly, t)
399 return phase
400
[end of scipy/signal/waveforms.py]
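For reference, a minimal usage sketch of the `chirp` routine defined above; illustration only — the sweep endpoints and duration are arbitrary, and the keyword names follow the signature shown in the listing.

```python
import numpy as np
from scipy.signal import chirp

t = np.linspace(0, 2.0, 2001)   # 2 s of samples (arbitrary)
# Logarithmic sweep from 100 Hz down to 10 Hz; per _chirp_phase above, f0 and f1
# only need to be nonzero and share the same sign.
y = chirp(t, f0=100.0, t1=2.0, f1=10.0, method='logarithmic')
print(y.shape, y[:3])
```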
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
scipy/scipy
|
1e5c8f1f520f06ec5302058b54fa95d4a0586021
|
Logarithmic chirp frequency sweep incorrect (Trac #1105)
_Original ticket http://projects.scipy.org/scipy/ticket/1105 on 2010-01-31 by trac user johntryan, assigned to unknown._
The algorithm used to calculate the frequency sweep in waveforms.py is incorrect.
In the current implementation in scipy, if the start and end frequencies differ by 1, there is an exception due to taking log of 0 in beta = log10(f1-f0)/t1; however, this is a valid sweep range and should work.
Ticket 547 quotes this source for the algorithm
[http://www.ualberta.ca/dept/aict/bluejay/usr/local/matlab-6.5/help/toolbox/dspblks/chirp.html#873108]
I think that this source is incorrect.
From basics, a plot of log(frequency) vs time should be a straight line with equation y=mx+c
The slope m is
(log(end_frequency) - log(start_frequency)) / (end_time - start_time)
The reference has this incorrectly as log(start_frequency-end_frequency) / time.
The constant c is log(start_frequency)
The following code fragment implements a logarithmic sweep
```
elif method in ['logarithmic','log','lo']:
logf0 = log10( f0 )
beta = (log10(f1)-logf0)/t1
freq = pow(10,beta*t+logf0)
phase_angle = 2*pi*freq*t
else:
```
With this code, f0 and f1 can be equal, and downward sweeps work (f0 > f1), hence there is no need to raise a ValueError if f1 <= f0.
This defect is present in the 0.8.0 code from subversion.
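The closed-form phase used by `_chirp_phase` in the `waveforms.py` listing above (`beta = t1 / log(f1 / f0)`; `phase = 2*pi*beta*f0*((f1/f0)**(t/t1) - 1)`) is consistent with the straight-line log(frequency) argument made here. A quick numerical check, for illustration only with arbitrary f0/f1/t1 values:

```python
import numpy as np

f0, f1, t1 = 20.0, 5.0, 3.0                      # downward sweep, arbitrary values
t = np.linspace(0.0, t1, 100001)

beta = t1 / np.log(f1 / f0)
phase = 2 * np.pi * beta * f0 * ((f1 / f0) ** (t / t1) - 1.0)

# Instantaneous frequency recovered from the phase vs. the geometric sweep
# f(t) = f0 * (f1/f0)**(t/t1), whose log is linear in t.
f_from_phase = np.gradient(phase, t) / (2 * np.pi)
f_expected = f0 * (f1 / f0) ** (t / t1)
print(np.max(np.abs(f_from_phase[1:-1] - f_expected[1:-1])))  # ~0 up to finite differences
```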
|
2012-04-14T18:30:32Z
|
<patch>
diff --git a/scipy/interpolate/fitpack2.py b/scipy/interpolate/fitpack2.py
--- a/scipy/interpolate/fitpack2.py
+++ b/scipy/interpolate/fitpack2.py
@@ -18,9 +18,8 @@
'RectBivariateSpline',
'RectSphereBivariateSpline']
-from types import NoneType
-
import warnings
+
from numpy import zeros, concatenate, alltrue, ravel, all, diff, array
import numpy as np
@@ -906,7 +905,7 @@ def __init__(self, u, v, r, s=0., pole_continuity=False, pole_values=None,
pole_exact=False, pole_flat=False):
iopt = np.array([0, 0, 0], dtype=int)
ider = np.array([-1, 0, -1, 0], dtype=int)
- if isinstance(pole_values, NoneType):
+ if pole_values is None:
pole_values = (None, None)
elif isinstance(pole_values, (float, np.float32, np.float64)):
pole_values = (pole_values, pole_values)
diff --git a/scipy/optimize/minpack.py b/scipy/optimize/minpack.py
--- a/scipy/optimize/minpack.py
+++ b/scipy/optimize/minpack.py
@@ -18,7 +18,7 @@ def _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape=Non
return shape(res)
msg = "%s: there is a mismatch between the input and output " \
"shape of the '%s' argument" % (checker, argname)
- func_name = getattr(thefunc, 'func_name', None)
+ func_name = getattr(thefunc, '__name__', None)
if func_name:
msg += " '%s'." % func_name
else:
diff --git a/scipy/signal/filter_design.py b/scipy/signal/filter_design.py
--- a/scipy/signal/filter_design.py
+++ b/scipy/signal/filter_design.py
@@ -1708,4 +1708,3 @@ def besselap(N):
'h': 'highpass'
}
-warnings.simplefilter("always", BadCoefficients)
diff --git a/scipy/sparse/coo.py b/scipy/sparse/coo.py
--- a/scipy/sparse/coo.py
+++ b/scipy/sparse/coo.py
@@ -181,14 +181,14 @@ def __init__(self, arg1, shape=None, dtype=None, copy=False):
if np.rank(M) != 2:
raise TypeError('expected rank <= 2 array or matrix')
+
self.shape = M.shape
- self.row,self.col = (M != 0).nonzero()
- self.data = M[self.row,self.col]
+ self.row, self.col = M.nonzero()
+ self.data = M[self.row, self.col]
if dtype is not None:
self.data = self.data.astype(dtype)
-
self._check()
def getnnz(self):
diff --git a/scipy/weave/bytecodecompiler.py b/scipy/weave/bytecodecompiler.py
--- a/scipy/weave/bytecodecompiler.py
+++ b/scipy/weave/bytecodecompiler.py
@@ -693,7 +693,8 @@ def __init__(self,function,signature,name=None):
assert_(inspect.isfunction(function))
assert_(not function.func_defaults,
msg="Function cannot have default args (yet)")
- if name is None: name = function.func_name
+ if name is None:
+ name = function.__name__
self.name = name
self.function = function
self.signature = signature
</patch>
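The `minpack.py` and `bytecodecompiler.py` hunks above swap the Python 2-only `func_name` attribute for `__name__`. A one-line check of that equivalence on Python 3, for illustration only:

```python
# Illustration only: on Python 3, a function's name lives in __name__;
# the Python 2 alias func_name no longer exists, which is what the hunks above rely on.
def residual(x):
    return x - 1

print(getattr(residual, "__name__", None))   # -> 'residual'
print(hasattr(residual, "func_name"))        # -> False on Python 3
```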
|
[]
|
[]
| ||||
huggingface__transformers-24334
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
(Not So) Bad words list for text generation
### Feature request
Support a soft penalization logits processor in the transformers generate method (extends NoBadWordsLogitsProcessor).
### Motivation
- The [NoBadWordsLogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.NoBadWordsLogitsProcessor) forbids the generation of certain tokens _in absolute terms_ by overwriting the logits to minus infinity
- The request is to add a softer version of this, one in which certain tokens are penalized or boosted but _only mildly_
- This is in the spirit of the `logit_bias` parameter in the generate methods [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) (OpenAI) and [here](https://docs.cohere.ai/reference/generate) (Cohere); a bare-bones sketch of this additive-bias idea is shown right after this list
- Possible use cases include, but are not limited to: enhance extractiveness during document summarization by boosting tokens present in the input and style guidance by penalizing/boosting the appropriate vocabulary
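To make the `logit_bias` analogy above concrete, here is a bare-bones additive-bias sketch — a hypothetical helper, not the `BendLogitsProcessor` proposed below — that simply adds a fixed offset to the logits of selected token ids:

```py
import torch
from transformers import LogitsProcessor

class AdditiveLogitBiasProcessor(LogitsProcessor):
    """Hypothetical sketch: add a fixed bias to the logits of chosen token ids."""

    def __init__(self, logit_bias: dict):
        # logit_bias maps token_id -> bias, e.g. {7673: -10.0, 5001: 5.0}
        self.logit_bias = logit_bias

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for token_id, bias in self.logit_bias.items():
            scores[:, token_id] = scores[:, token_id] + bias
        return scores
```

Such a processor could be passed through a `LogitsProcessorList` exactly like the example further down in this issue.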
### Your contribution
**Overview**
- A new class is defined as `BendLogitsProcessor` based on the current `NoBadWordsLogitsProcessor` class
- The current argument `bad_words_ids` is enriched to include a float value per list of tokens_ids, aka the penalization/boosting score. Positive large values encourage the token to be generated while negative large values do the opposite
- Penalization/boosting scores are unbounded but could be later scaled as it seems to be the case in the implementations referenced above, e.g. `logit bias` is in [-10,10] [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) and [-100,100] [here](https://docs.cohere.ai/reference/generate)
- Observe that `NoBadWordsLogitsProcessor` behavior could be recovered just by explicitly setting penalization/boosting scores to float(“-Inf”)
**The new class**
This is very much the same as `NoBadWordsLogitsProcessor`, I tried to keep as much as possible intact. There might be a more efficient implementation.
```py
class BendLogitsProcessor(LogitsProcessor):
"""
[`LogitsProcessor`] that softly penalizes or boosts certain token/s
Args:
bend_list (`List[Union[float, List[int]]]`):
List of list of lists with penalization/boosting coefficients and list of token ids.
In order to get the token ids of the words, use `tokenizer(bad_words, add_prefix_space=True,
add_special_tokens=False).input_ids`.
eos_token_id (`int`):
The id of the *end-of-sequence* token.
"""
def __init__(self, bend_list: List[Union[float, List[int]]], eos_token_id: int):
self.bend_list = bend_list
coefs = [coef for coef,tok in self.bend_list]
words_ids = [tok for coef,tok in self.bend_list]
if not isinstance(bend_list, List) or len(bend_list) == 0:
raise ValueError(f"`bend_list` has to be a non-empty list, but is {bend_list}.")
if any(not isinstance(word_ids, list) for word_ids in words_ids):
raise ValueError(f"`words_ids` has to be a list of lists, but is {words_ids}.")
if any(
any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in word_ids)
for word_ids in words_ids
):
raise ValueError(
f"Each list in `words_ids` has to be a list of positive integers, but is {words_ids}."
)
if any(not isinstance(coef, float) for coef in coefs):
raise ValueError(f"`coefs` has to be a float, but is {coefs}.")
words_ids = list(filter(lambda token_seq: token_seq != [eos_token_id], words_ids))
self.words_id_length_1, self.coefs_length_1 = [],[]
self.words_id_length_greater_than_1, self.coefs_length_greater_than_1 = [],[]
for coef,word in zip(coefs,words_ids):
if len(word) == 1:
self.words_id_length_1.append(word[0])
self.coefs_length_1.append(coef)
else:
self.words_id_length_greater_than_1.append(word)
self.coefs_length_greater_than_1.append(coef)
for token_seq in self.words_id_length_greater_than_1:
if len(token_seq) == 0:
raise ValueError(f"Words token sequences {words_ids} cannot have an empty list")
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
masks_length_1, scores_length_1 = [], torch.zeros_like(scores)
masks_length_greater_than_1, scores_length_greater_than_1 = [], torch.zeros_like(scores)
if len(self.words_id_length_1) > 0:
for word_id,coef in zip(self.words_id_length_1,self.coefs_length_1):
mask = self._get_mask_length_1(scores,word_id)
masks_length_1.append(mask)
if coef >= 0:
score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \
scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef)
if coef < 0:
score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \
scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef))
scores_length_1 += score
if len(self.words_id_length_greater_than_1) > 0:
for word_ids,coef in zip(self.words_id_length_greater_than_1,self.coefs_length_greater_than_1):
mask = self._get_mask_length_greater_than_1(input_ids.tolist(),scores,word_ids)
masks_length_greater_than_1.append(mask)
if coef >= 0:
score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \
scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef)
if coef < 0:
score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \
scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef))
scores_length_greater_than_1 += score
masks_all_lengths = masks_length_1 + masks_length_greater_than_1
one_large_mask = torch.zeros_like(scores).bool()
for mask in masks_all_lengths:
one_large_mask = torch.bitwise_or(one_large_mask,mask)
base_scores = scores.masked_fill(one_large_mask,0.)
new_scores = base_scores + scores_length_1 + scores_length_greater_than_1
return new_scores
def _get_mask_length_1(self, scores: torch.FloatTensor, word_id:List[int]) -> torch.BoolTensor:
mask = torch.zeros(scores.shape[1])
mask[word_id] = 1
return mask.unsqueeze(0).to(scores.device).bool()
def _tokens_match(self, prev_tokens: List[int], tokens: List[int]) -> bool:
if len(tokens) == 0:
return True
elif len(tokens) > len(prev_tokens):
return False
else:
return prev_tokens[-len(tokens) :] == tokens
def _calc_word_ids(self, prev_input_ids: List[List[int]], word_ids:List[int]) -> Iterable[int]:
tokens = []
for prev_input_ids_slice in prev_input_ids:
tokens_slice = []
if self._tokens_match(prev_input_ids_slice, word_ids[:-1]):
tokens_slice.append(word_ids[-1])
tokens.append(tokens_slice)
return tokens
def _get_mask_length_greater_than_1(self, input_ids: list, scores: torch.FloatTensor, word_ids:List[int]) -> torch.BoolTensor:
dynamic_tokens = self._calc_word_ids(input_ids, word_ids)
mask_list = []
for idx, batch_tokens in enumerate(dynamic_tokens):
for token in batch_tokens:
# Eliminates invalid bad word IDs that are over the vocabulary size.
if token <= scores.shape[1]:
mask_list.append([idx, token])
else:
logger.error(
f"An invalid bad word ID is defined: {token}. This ID is not contained in the "
"vocabulary, and is therefore ignored."
)
if not mask_list:
mask = torch.zeros_like(scores).bool()
else:
mask = torch.LongTensor(mask_list)
indices = torch.ones(len(mask))
mask = (
torch.sparse.LongTensor(mask.t(), indices, scores.size())
.to(scores.device)
.to_dense()
.bool()
)
return mask
```
**An example**
Take the summarization example in BART documentation [here](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration.forward.example). Set `add_prefix_space=True` in the tokenizer and remove the `max_length = 20` in the generate method call.
```py
from transformers import AutoTokenizer, BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", add_prefix_space=True)
ARTICLE_TO_SUMMARIZE = (
"PG&E stated it scheduled the blackouts in response to forecasts for high winds "
"amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
"scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
This yields the following summary:
> Nearly 800 thousand customers were scheduled to be affected by the shutoffs. PG&E stated it scheduled the blackouts in response to forecasts for high winds.
At this point the new logits processor class is applied. The objective will be to make the model output the number of customers affected as digits and replace the word “shutoffs”. We do so by penalizing the token ids for “thousand” and “shutoffs” while boosting the ones for “shutdowns”.
```py
logits_processor = LogitsProcessorList(
[
BendLogitsProcessor(
bend_list = [[-10000.,[7673]], # thousand
[1000.,[5001, 29]], # shutdowns
[-1000000.,[2572, 10816]], # shutoffs
[-1000000.,[2572, 1529]], # shutoffs
],
eos_token_id=model.config.eos_token_id
)
]
)
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, logits_processor=logits_processor, renormalize_logits=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
If we call the summary generation again, this time including the logits processor and renormalizing, we get:
> Nearly 800,000 customers were scheduled to be affected by the shutdowns. PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <picture>
19 <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
20 <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
21 <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
22 </picture>
23 <br/>
24 <br/>
25 </p>
26
27 <p align="center">
28 <a href="https://circleci.com/gh/huggingface/transformers">
29 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
30 </a>
31 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
32 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
33 </a>
34 <a href="https://huggingface.co/docs/transformers/index">
35 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
36 </a>
37 <a href="https://github.com/huggingface/transformers/releases">
38 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
39 </a>
40 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
41 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
42 </a>
43 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
44 </p>
45
46 <h4 align="center">
47 <p>
48 <b>English</b> |
49 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
50 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
51 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
52 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
53 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
54 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
55 <p>
56 </h4>
57
58 <h3 align="center">
59 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
60 </h3>
61
62 <h3 align="center">
63 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
64 </h3>
65
66 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
67
68 These models can be applied on:
69
70 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.
71 * 🖼️ Images, for tasks like image classification, object detection, and segmentation.
72 * 🗣️ Audio, for tasks like speech recognition and audio classification.
73
74 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
75
76 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
77
78 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
79
80 ## Online demos
81
82 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
83
84 Here are a few examples:
85
86 In Natural Language Processing:
87 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
88 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
89 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
90 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
91 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
92 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
93 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
94
95 In Computer Vision:
96 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
97 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
98 - [Semantic Segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
99 - [Panoptic Segmentation with MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
100 - [Depth Estimation with DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
101 - [Video Classification with VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
102 - [Universal Segmentation with OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
103
104 In Audio:
105 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
106 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
107 - [Audio Classification with Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
108
109 In Multimodal tasks:
110 - [Table Question Answering with TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
111 - [Visual Question Answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
112 - [Zero-shot Image Classification with CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
113 - [Document Question Answering with LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
114 - [Zero-shot Video Classification with X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
115
116 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
117
118
119 ## 100 projects using Transformers
120
121 Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the
122 Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone
123 else to build their dream projects.
124
125 In order to celebrate the 100,000 stars of transformers, we have decided to put the spotlight on the
126 community, and we have created the [awesome-transformers](./awesome-transformers.md) page which lists 100
127 incredible projects built in the vicinity of transformers.
128
129 If you own or use a project that you believe should be part of the list, please open a PR to add it!
130
131 ## If you are looking for custom support from the Hugging Face team
132
133 <a target="_blank" href="https://huggingface.co/support">
134 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
135 </a><br>
136
137 ## Quick tour
138
139 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
140
141 ```python
142 >>> from transformers import pipeline
143
144 # Allocate a pipeline for sentiment-analysis
145 >>> classifier = pipeline('sentiment-analysis')
146 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
147 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
148 ```
149
150 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
151
152 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
153
154 ``` python
155 >>> import requests
156 >>> from PIL import Image
157 >>> from transformers import pipeline
158
159 # Download an image with cute cats
160 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
161 >>> image_data = requests.get(url, stream=True).raw
162 >>> image = Image.open(image_data)
163
164 # Allocate a pipeline for object detection
165 >>> object_detector = pipeline('object-detection')
166 >>> object_detector(image)
167 [{'score': 0.9982201457023621,
168 'label': 'remote',
169 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
170 {'score': 0.9960021376609802,
171 'label': 'remote',
172 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
173 {'score': 0.9954745173454285,
174 'label': 'couch',
175 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
176 {'score': 0.9988006353378296,
177 'label': 'cat',
178 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
179 {'score': 0.9986783862113953,
180 'label': 'cat',
181 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
182 ```
183
184 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right:
185
186 <h3 align="center">
187 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
188 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
189 </h3>
190
191 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
192
193 In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
194 ```python
195 >>> from transformers import AutoTokenizer, AutoModel
196
197 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
198 >>> model = AutoModel.from_pretrained("bert-base-uncased")
199
200 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
201 >>> outputs = model(**inputs)
202 ```
203
204 And here is the equivalent code for TensorFlow:
205 ```python
206 >>> from transformers import AutoTokenizer, TFAutoModel
207
208 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
209 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
210
211 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
212 >>> outputs = model(**inputs)
213 ```
214
215 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator.
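For example, passing a list of sentences returns a dictionary of batched tensors that can be unpacked straight into the model — a short sketch in the spirit of the snippets above; padding is enabled because the sentences differ in length, and the exact keys depend on the checkpoint:

```python
>>> batch = tokenizer(["Hello world!", "Transformers is great."], padding=True, return_tensors="pt")
>>> sorted(batch.keys())
['attention_mask', 'input_ids', 'token_type_ids']
>>> outputs = model(**batch)
```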
216
217 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
218
219 ## Why should I use transformers?
220
221 1. Easy-to-use state-of-the-art models:
222 - High performance on natural language understanding & generation, computer vision, and audio tasks.
223 - Low barrier to entry for educators and practitioners.
224 - Few user-facing abstractions with just three classes to learn.
225 - A unified API for using all our pretrained models.
226
227 1. Lower compute costs, smaller carbon footprint:
228 - Researchers can share trained models instead of always retraining.
229 - Practitioners can reduce compute time and production costs.
230 - Dozens of architectures with over 60,000 pretrained models across all modalities.
231
232 1. Choose the right framework for every part of a model's lifetime:
233 - Train state-of-the-art models in 3 lines of code.
234 - Move a single model between TF2.0/PyTorch/JAX frameworks at will.
235 - Seamlessly pick the right framework for training, evaluation and production.
236
237 1. Easily customize a model or an example to your needs:
238 - We provide examples for each architecture to reproduce the results published by its original authors.
239 - Model internals are exposed as consistently as possible.
240 - Model files can be used independently of the library for quick experiments.
241
242 ## Why shouldn't I use transformers?
243
244 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
245 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)).
246 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
247
248 ## Installation
249
250 ### With pip
251
252 This repository is tested on Python 3.7+, Flax 0.4.1+, PyTorch 1.9+ and TensorFlow 2.4+.
253
254 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
255
256 First, create a virtual environment with the version of Python you're going to use and activate it.
257
258 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
259 Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages for the specific installation command for your platform.
260
261 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
262
263 ```bash
264 pip install transformers
265 ```
266
267 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
268
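For example, the current development version can usually be installed straight from GitHub:

```bash
pip install git+https://github.com/huggingface/transformers
```
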
269 ### With conda
270
271 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
272
273 🤗 Transformers can be installed using conda as follows:
274
275 ```bash
276 conda install -c huggingface transformers
277 ```
278
279 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
280
281 > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).
282
283 ## Model architectures
284
285 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co/models) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
286
287 Current number of checkpoints: 
288
289 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
290
291 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
292 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
293 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
294 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
295 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
296 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
297 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
298 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
299 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
300 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
301 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
302 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
303 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
304 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
305 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
306 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
307 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
308 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
309 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
310 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (from Salesforce) released with the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.
311 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
312 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
313 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
314 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
315 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
316 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
317 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
318 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
319 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
320 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
321 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
322 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
323 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
324 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
325 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
326 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
327 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
328 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
329 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
330 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
331 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
332 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
333 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
334 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
335 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
336 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
337 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
338 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
339 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
340 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
341 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
342 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
343 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
344 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
345 1. **[DPT](https://huggingface.co/docs/transformers/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
346 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
347 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
348 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
349 1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (from Meta AI) released with the paper [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
350 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
351 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
352 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
353 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
354 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
355 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
356 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
357 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
358 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
359 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
360 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
361 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
362 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
363 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
364 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
365 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
366 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
367 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
368 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
369 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
370 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
371 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto (tanreinama).
372 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
373 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
374 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
375 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
376 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
377 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
378 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
379 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
380 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
381 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
382 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
383 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
384 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
385 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
386 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
387 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
388 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
389 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
390 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
391 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
392 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
393 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
394 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
395 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
396 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
397 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
398 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
399 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
400 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (from Meta/USC/CMU/SJTU) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
401 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
402 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
403 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao.
404 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
405 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
406 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
407 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
408 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
409 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
410 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
411 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
412 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
413 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
414 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
415 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
416 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
417 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
418 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
419 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
420 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
421 1. **[OPT](https://huggingface.co/docs/transformers/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
422 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
423 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
424 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
425 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from DeepMind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
426 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
427 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
428 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
429 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
430 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
431 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
432 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
433 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
434 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
435 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
436 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
437 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
438 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
439 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
440 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
441 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
442 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (from Bo Peng), released on [this repo](https://github.com/BlinkDL/RWKV-LM) by Bo Peng.
443 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
444 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
445 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
446 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
447 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
448 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
449 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
450 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
451 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
452 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
453 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
454 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
455 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
456 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
457 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
458 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
459 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
460 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
461 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
462 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
463 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
464 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
465 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
466 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
467 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
468 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
469 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
470 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
471 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
472 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
473 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
474 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
475 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
476 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
477 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
478 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
479 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
480 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
481 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
482 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
483 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
484 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
485 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
486 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
487 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
488 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
489 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
490 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
491 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
492 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
493 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
494 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
495 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
496 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
497 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
498 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
499
500 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
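
As a quick illustration (the `bert-base-uncased` checkpoint below is used purely as an example), you can check at runtime whether a loaded tokenizer is backed by the 🤗 Tokenizers library via its `is_fast` attribute:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> tokenizer.is_fast  # True when the tokenizer is backed by the 🤗 Tokenizers library
True
```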
501
502 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples).
503
504
505 ## Learn more
506
507 | Section | Description |
508 |-|-|
509 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
510 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
511 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
512 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
513 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
514 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
515 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
516
517 ## Citation
518
519 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
520 ```bibtex
521 @inproceedings{wolf-etal-2020-transformers,
522 title = "Transformers: State-of-the-Art Natural Language Processing",
523 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
524 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
525 month = oct,
526 year = "2020",
527 address = "Online",
528 publisher = "Association for Computational Linguistics",
529 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
530 pages = "38--45"
531 }
532 ```
533
[end of README.md]
[start of README_es.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
47 <b>Español</b> |
48 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
49 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
50 <p>
51 </h4>
52
53 <h3 align="center">
54 <p>Lo último de Machine Learning para JAX, PyTorch y TensorFlow</p>
55 </h3>
56
57 <h3 align="center">
58 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
59 </h3>
60
61 🤗 Transformers aporta miles de modelos preentrenados para realizar tareas en diferentes modalidades como texto, visión y audio.
62
63 Estos modelos pueden ser aplicados en:
64
65 * 📝 Texto, para tareas como clasificación de texto, extracción de información, respuesta a preguntas, resumen, traducción y generación de texto, en más de 100 idiomas.
66 * 🖼️ Imágenes, para tareas como clasificación de imágenes, detección de objetos y segmentación.
67 * 🗣️ Audio, para tareas como reconocimiento de voz y clasificación de audio.
68
69 Los modelos de Transformer también pueden realizar tareas en **muchas modalidades combinadas**, como responder preguntas, reconocimiento óptico de caracteres, extracción de información de documentos escaneados, clasificación de video y respuesta a preguntas visuales.
70
71 🤗 Transformers aporta APIs para descargar rápidamente y usar estos modelos preentrenados en un texto dado, afinarlos en tus propios sets de datos y compartirlos con la comunidad en nuestro [centro de modelos](https://huggingface.co/models). Al mismo tiempo, cada módulo de Python que define una arquitectura es completamente independiente y se puede modificar para permitir experimentos de investigación rápidos.
72
73 🤗 Transformers está respaldado por las tres bibliotecas de deep learning más populares — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) y [TensorFlow](https://www.tensorflow.org/) — con una integración perfecta entre ellas. Es sencillo entrenar tus modelos con una antes de cargarlos para la inferencia con la otra.
74
75 ## Demostraciones en línea
76
77 Puedes probar la mayoría de nuestros modelos directamente en sus páginas desde el [centro de modelos](https://huggingface.co/models). También ofrecemos [alojamiento de modelos privados, control de versiones y una API de inferencia](https://huggingface.co/pricing) para modelos públicos y privados.
78
79 Aquí hay algunos ejemplos:
80
81 En procesamiento del lenguaje natural:
82 - [Terminación de palabras enmascaradas con BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
83 - [Reconocimiento de entidades nombradas con Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
84 - [Generación de texto con GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
85 - [Inferencia del lenguaje natural con RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
86 - [Resumen con BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
87 - [Responder a preguntas con DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
88 - [Traducción con T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
89
90 En visión por ordenador:
91 - [Clasificación de imágenes con ViT](https://huggingface.co/google/vit-base-patch16-224)
92 - [Detección de objetos con DETR](https://huggingface.co/facebook/detr-resnet-50)
93 - [Segmentación semántica con SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
94 - [Segmentación panóptica con DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
95 - [Segmentación Universal con OneFormer (Segmentación Semántica, de Instancia y Panóptica con un solo modelo)](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
96
97 En Audio:
98 - [Reconocimiento de voz automático con Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
99 - [Detección de palabras clave con Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
100
101 En tareas multimodales:
102 - [Respuesta visual a preguntas con ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
103
104 **[Escribe con Transformer](https://transformer.huggingface.co)**, construido por el equipo de Hugging Face, es la demostración oficial de las capacidades de generación de texto de este repositorio.
105
106 ## Si está buscando soporte personalizado del equipo de Hugging Face
107
108 <a target="_blank" href="https://huggingface.co/support">
109 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
110 </a><br>
111
112 ## Tour rápido
113
114 Para usar inmediatamente un modelo en una entrada determinada (texto, imagen, audio, ...), proporcionamos la API de `pipeline`. Los pipelines agrupan un modelo previamente entrenado con el preprocesamiento que se usó durante el entrenamiento de ese modelo. Aquí se explica cómo usar rápidamente un pipeline para clasificar textos positivos frente a negativos:
115
116 ```python
117 >>> from transformers import pipeline
118
119 # Allocate a pipeline for sentiment-analysis
120 >>> classifier = pipeline('sentiment-analysis')
121 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
122 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
123 ```
124
125 La segunda línea de código descarga y almacena en caché el modelo preentrenado que usa el pipeline, mientras que la tercera lo evalúa en el texto dado. Aquí la respuesta es "positiva" con una confianza del 99,97%.
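
El mismo `pipeline` también acepta una lista de frases y devuelve un resultado por cada una. Un esbozo mínimo (las frases son meramente ilustrativas):

```python
>>> from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> results = classifier(["We are very happy.", "We are quite sad."])
>>> # `results` is a list with one {'label': ..., 'score': ...} dict per sentence
```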
126
127 Muchas tareas tienen un `pipeline` preentrenado listo para funcionar, en NLP pero también en visión por ordenador y habla. Por ejemplo, podemos extraer fácilmente los objetos detectados en una imagen:
128
129 ``` python
130 >>> import requests
131 >>> from PIL import Image
132 >>> from transformers import pipeline
133
134 # Download an image with cute cats
135 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
136 >>> image_data = requests.get(url, stream=True).raw
137 >>> image = Image.open(image_data)
138
139 # Allocate a pipeline for object detection
140 >>> object_detector = pipeline('object_detection')
141 >>> object_detector(image)
142 [{'score': 0.9982201457023621,
143 'label': 'remote',
144 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
145 {'score': 0.9960021376609802,
146 'label': 'remote',
147 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
148 {'score': 0.9954745173454285,
149 'label': 'couch',
150 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
151 {'score': 0.9988006353378296,
152 'label': 'cat',
153 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
154 {'score': 0.9986783862113953,
155 'label': 'cat',
156 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
157 ```
158
159 Aquí obtenemos una lista de objetos detectados en la imagen, con un cuadro que rodea cada objeto y una puntuación de confianza. Aquí está la imagen original a la izquierda, con las predicciones mostradas a la derecha:
160
161 <h3 align="center">
162 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
163 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
164 </h3>
165
166 Puedes obtener más información sobre las tareas admitidas por la API de `pipeline` en [este tutorial](https://huggingface.co/docs/transformers/task_summary).
167
168 Además de `pipeline`, para descargar y usar cualquiera de los modelos previamente entrenados en su tarea dada, todo lo que necesita son tres líneas de código. Aquí está la versión de PyTorch:
169 ```python
170 >>> from transformers import AutoTokenizer, AutoModel
171
172 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
173 >>> model = AutoModel.from_pretrained("bert-base-uncased")
174
175 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
176 >>> outputs = model(**inputs)
177 ```
178
179 Y aquí está el código equivalente para TensorFlow:
180 ```python
181 >>> from transformers import AutoTokenizer, TFAutoModel
182
183 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
184 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
185
186 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
187 >>> outputs = model(**inputs)
188 ```
189
190 El tokenizador es responsable de todo el preprocesamiento que espera el modelo preentrenado y se puede llamar directamente sobre una sola cadena (como en los ejemplos anteriores) o sobre una lista. Dará como resultado un diccionario que puedes usar en tu propio código o pasar directamente a tu modelo usando el operador de desempaquetado de argumentos `**`.
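
Como esbozo mínimo (con frases puramente ilustrativas) de cómo el tokenizador procesa una lista y de cómo el diccionario resultante se pasa al modelo con `**`:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# Calling the tokenizer on a list returns a dictionary of padded tensors
# (input_ids, attention_mask, ...) ready to be fed to the model
>>> batch = tokenizer(["Hello world!", "Transformers are great."], padding=True, truncation=True, return_tensors="pt")

# The ** operator unpacks that dictionary as keyword arguments
>>> outputs = model(**batch)
```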
191
192 El modelo en sí es un [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) normal o un [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (dependiendo de tu backend) que puedes usar de forma habitual. [Este tutorial](https://huggingface.co/docs/transformers/training) explica cómo integrar un modelo de este tipo en un ciclo de entrenamiento clásico de PyTorch o TensorFlow, o cómo usar nuestra API `Trainer` para ajustarlo rápidamente en un nuevo conjunto de datos.
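
Como esbozo mínimo de un único paso de entrenamiento en PyTorch (el checkpoint y la etiqueta usados aquí son solo ilustrativos; para un ejemplo completo consulta el tutorial enlazado arriba):

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

>>> inputs = tokenizer("Hello world!", return_tensors="pt")
# When `labels` are provided, the model output also contains the loss
>>> outputs = model(**inputs, labels=torch.tensor([1]))
>>> outputs.loss.backward() # standard PyTorch backpropagation
>>> optimizer.step()
```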
193
194 ## ¿Por qué debo usar transformers?
195
196 1. Modelos de última generación fáciles de usar:
197 - Alto rendimiento en comprensión y generación de lenguaje natural, visión artificial y tareas de audio.
198 - Baja barrera de entrada para educadores y profesionales.
199 - Pocas abstracciones de cara al usuario con solo tres clases para aprender.
200 - Una API unificada para usar todos nuestros modelos preentrenados.
201
202 1. Menores costes de cómputo, menor huella de carbono:
203 - Los investigadores pueden compartir modelos entrenados en lugar de siempre volver a entrenar.
204 - Los profesionales pueden reducir el tiempo de cómputo y los costos de producción.
205 - Docenas de arquitecturas con más de 60 000 modelos preentrenados en todas las modalidades.
206
207 1. Elija el marco adecuado para cada parte de la vida útil de un modelo:
208 - Entrene modelos de última generación en 3 líneas de código.
209 - Mueva un solo modelo entre los marcos TF2.0/PyTorch/JAX a voluntad.
210 - Elija sin problemas el marco adecuado para la formación, la evaluación y la producción.
211
212 1. Personalice fácilmente un modelo o un ejemplo según sus necesidades:
213 - Proporcionamos ejemplos de cada arquitectura para reproducir los resultados publicados por sus autores originales.
214 - Las partes internas del modelo están expuestas de la forma más consistente posible.
215 - Los archivos modelo se pueden usar independientemente de la biblioteca para experimentos rápidos.
216
217 ## ¿Por qué no debería usar transformers?
218
219 - Esta biblioteca no es una caja de herramientas modular de bloques de construcción para redes neuronales. El código en los archivos del modelo no se refactoriza con abstracciones adicionales a propósito, de modo que los investigadores puedan iterar rápidamente en cada uno de los modelos sin sumergirse en abstracciones/archivos adicionales.
220 - La API de entrenamiento no está diseñada para funcionar con cualquier modelo, sino que está optimizada para funcionar con los modelos proporcionados por la biblioteca. Para bucles genéricos de aprendizaje automático, debes usar otra biblioteca (posiblemente, [Accelerate](https://huggingface.co/docs/accelerate)).
221 - Si bien nos esforzamos por presentar tantos casos de uso como sea posible, los scripts de nuestra [carpeta de ejemplos](https://github.com/huggingface/transformers/tree/main/examples) son solo eso: ejemplos. No se espera que funcionen de forma inmediata en tu problema específico y tendrás que cambiar algunas líneas de código para adaptarlos a tus necesidades.
222
223 ## Instalación
224
225 ### Con pip
226
227 Este repositorio está probado en Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ y TensorFlow 2.3+.
228
229 Deberías instalar 🤗 Transformers en un [entorno virtual](https://docs.python.org/3/library/venv.html). Si no estás familiarizado con los entornos virtuales de Python, consulta la [guía de usuario](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
230
231 Primero, crea un entorno virtual con la versión de Python que vas a usar y actívalo.
232
233 Luego, deberás instalar al menos uno de Flax, PyTorch o TensorFlow.
234 Por favor, ve a la [página de instalación de TensorFlow](https://www.tensorflow.org/install/), [página de instalación de PyTorch](https://pytorch.org/get-started/locally/#start-locally) y/o las páginas de instalación de [Flax](https://github.com/google/flax#quick-install) y [Jax](https://github.com/google/jax#installation) con respecto al comando de instalación específico para tu plataforma.
235
236 Cuando se ha instalado uno de esos backends, 🤗 Transformers se puede instalar usando pip de la siguiente manera:
237
238 ```bash
239 pip install transformers
240 ```
241
242 Si deseas jugar con los ejemplos o necesitas la última versión del código y no puedes esperar a una nueva versión, tienes que [instalar la biblioteca desde el código fuente](https://huggingface.co/docs/transformers/installation#installing-from-source).
243
244 ### Con conda
245
246 Desde la versión v4.0.0 de Transformers, ahora tenemos un canal conda: `huggingface`.
247
248 🤗 Transformers se puede instalar usando conda de la siguiente manera:
249
250 ```bash
251 conda install -c huggingface transformers
252 ```
253
254 Sigue las páginas de instalación de Flax, PyTorch o TensorFlow para ver cómo instalarlos con conda.
255
256 > **_NOTA:_** En Windows, es posible que se le pida que active el modo de desarrollador para beneficiarse del almacenamiento en caché. Si esta no es una opción para usted, háganoslo saber en [esta issue](https://github.com/huggingface/huggingface_hub/issues/1062).
257
258 ## Arquitecturas modelo
259
260 **[Todos los puntos de control de modelos](https://huggingface.co/models)** aportados por 🤗 Transformers están perfectamente integrados con el [Centro de modelos](https://huggingface.co) de huggingface.co, donde son subidos directamente por los [usuarios](https://huggingface.co/users) y las [organizaciones](https://huggingface.co/organizations).
261
262 Número actual de puntos de control: 
263
264 🤗 Transformers actualmente proporciona las siguientes arquitecturas (ver [aquí](https://huggingface.co/docs/transformers/model_summary) para un resumen de alto nivel de cada una de ellas):
265
266 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
267 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
268 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
269 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
270 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
271 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
272 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
273 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
274 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
275 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
276 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
277 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
278 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
279 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
280 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
281 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
282 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
283 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
284 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
285 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (from Salesforce) released with the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.
286 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
287 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
288 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
289 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
290 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
291 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
292 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
293 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
294 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
295 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
296 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
297 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
298 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
299 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
300 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
301 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
302 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
303 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
304 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
305 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
306 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
307 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
308 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
309 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
310 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
311 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
312 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
313 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
314 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
315 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
316 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
317 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
318 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
319 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
320 1. **[DPT](https://huggingface.co/docs/transformers/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
321 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
322 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
323 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
324 1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (from Meta AI) released with the paper [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
325 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
326 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
327 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
328 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
329 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
330 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
331 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
332 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
333 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
334 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
335 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
336 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
337 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
338 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
339 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
340 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
341 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
342 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
343 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
344 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
345 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
346 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto (tanreinama).
347 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
348 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
349 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
350 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
351 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
352 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
353 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
354 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
355 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
356 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
357 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
358 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
359 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
360 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
361 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
362 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
363 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
364 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
365 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
366 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
367 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
368 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
369 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
370 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
371 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
372 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
373 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
374 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
375 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (from Facebook) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
376 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
377 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
378 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao.
379 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
380 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
381 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
382 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
383 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
384 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
385 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
386 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
387 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
388 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
389 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
390 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
391 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
392 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
393 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
394 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
395 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
396 1. **[OPT](https://huggingface.co/docs/transformers/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
397 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
398 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
399 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
400 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
401 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
402 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
403 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
404 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan.
405 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
406 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
407 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
408 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
409 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
410 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from Meta Platforms) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
411 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
412 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
413 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
414 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
415 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
416 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
417 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (from Bo Peng) released in the repository [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) by Bo Peng.
418 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
419 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
420 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
421 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
422 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
423 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
424 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
425 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
426 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
427 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
428 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
429 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
430 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
431 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
432 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
433 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
434 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
435 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
436 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
437 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
438 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
439 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
440 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
441 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
442 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
443 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
444 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
445 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
446 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
447 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
448 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
449 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
450 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
451 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
452 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
453 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
454 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
455 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
456 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
457 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
458 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
459 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
460 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
461 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
462 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
463 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
464 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
465 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
466 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
467 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
468 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
469 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
470 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
471 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
472 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
473 1. ¿Quieres aportar un nuevo modelo? Hemos agregado una **guía detallada y plantillas** para guiarte en el proceso de agregar un nuevo modelo. Puedes encontrarlas en la carpeta [`templates`](./templates) del repositorio. Asegúrate de revisar las [pautas de contribución](./CONTRIBUTING.md) y de comunicarte con los mantenedores o abrir un problema (issue) para recopilar comentarios antes de comenzar tu PR.
474
475 Para comprobar si cada modelo tiene una implementación en Flax, PyTorch o TensorFlow, o tiene un tokenizador asociado respaldado por la librería 🤗 Tokenizers, ve a [esta tabla](https://huggingface.co/docs/transformers/index#supported-frameworks).
476
477 Estas implementaciones se han probado en varios conjuntos de datos (consulta los scripts de ejemplo) y deberían coincidir con el rendimiento de las implementaciones originales. Puedes encontrar más detalles sobre el rendimiento en la sección Examples de la [documentación](https://github.com/huggingface/transformers/tree/main/examples).
478
479
480 ## Aprender más
481
482 | Sección | Descripción |
483 |-|-|
484 | [Documentación](https://huggingface.co/docs/transformers/) | Toda la documentación de la API y tutoriales |
485 | [Resumen de tareas](https://huggingface.co/docs/transformers/task_summary) | Tareas soportadas por 🤗 Transformers |
486 | [Tutorial de preprocesamiento](https://huggingface.co/docs/transformers/preprocessing) | Usando la clase `Tokenizer` para preparar datos para los modelos |
487 | [Entrenamiento y puesta a punto](https://huggingface.co/docs/transformers/training) | Usando los modelos aportados por 🤗 Transformers en un bucle de entrenamiento de PyTorch/TensorFlow y la API de `Trainer` |
488 | [Recorrido rápido: secuencias de comandos de ajuste/uso](https://github.com/huggingface/transformers/tree/main/examples) | Scripts de ejemplo para ajustar modelos en una amplia gama de tareas |
489 | [Compartir y subir modelos](https://huggingface.co/docs/transformers/model_sharing) | Carga y comparte tus modelos perfeccionados con la comunidad |
490 | [Migración](https://huggingface.co/docs/transformers/migration) | Migra a 🤗 Transformers desde `pytorch-transformers` o `pytorch-pretrained-bert` |
491
492 ## Citación
493
494 Ahora tenemos un [artículo](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que puedes citar para la librería 🤗 Transformers:
495 ```bibtex
496 @inproceedings{wolf-etal-2020-transformers,
497 title = "Transformers: State-of-the-Art Natural Language Processing",
498 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
499 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
500 month = oct,
501 year = "2020",
502 address = "Online",
503 publisher = "Association for Computational Linguistics",
504 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
505 pages = "38--45"
506 }
507 ```
508
[end of README_es.md]
[start of README_hd.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Hindi translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Hindi characters. E.g., कुल मिलाकर 100 से अधिक भाषाएँ; ट्रांसफॉर्मर लाइब्रेरी का उपयोग करता है।
20 - वर्गाकार उद्धरणों का प्रयोग करें, जैसे, "उद्धरण"
21
22 Dictionary
23
24 Hugging Face: गले लगाओ चेहरा
25 token: शब्द (और मूल अंग्रेजी को कोष्ठक में चिह्नित करें)
26 tokenize: टोकननाइज़ करें (और मूल अंग्रेज़ी को चिह्नित करने के लिए कोष्ठक का उपयोग करें)
27 tokenizer: Tokenizer (मूल अंग्रेजी में कोष्ठक के साथ)
28 transformer: transformer
29 pipeline: समनुक्रम
30 API: API (अनुवाद के बिना)
31 inference: विचार
32 Trainer: प्रशिक्षक। कक्षा के नाम के रूप में प्रस्तुत किए जाने पर अनुवादित नहीं किया गया।
33 pretrained/pretrain: पूर्व प्रशिक्षण
34 finetune: फ़ाइन ट्यूनिंग
35 community: समुदाय
36 example: जब विशिष्ट गोदाम example कैटलॉग करते समय "केस केस" के रूप में अनुवादित
37 Python data structures (e.g., list, set, dict): मूल अंग्रेजी को चिह्नित करने के लिए सूचियों, सेटों, शब्दकोशों में अनुवाद करें और कोष्ठक का उपयोग करें
38 NLP/Natural Language Processing: द्वारा NLP अनुवाद के बिना प्रकट होते हैं Natural Language Processing प्रस्तुत किए जाने पर प्राकृतिक भाषा संसाधन में अनुवाद करें
39 checkpoint: जाँच बिंदु
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
72 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
73 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
74 <b>हिन्दी</b> |
75 <p>
76 </h4>
77
78 <h3 align="center">
79 <p>Jax, PyTorch और TensorFlow के लिए उन्नत मशीन लर्निंग</p>
80 </h3>
81
82 <h3 align="center">
83 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
84 </h3>
85
86 🤗 Transformers 100 से अधिक भाषाओं में पाठ वर्गीकरण, सूचना निष्कर्षण, प्रश्न उत्तर, सारांशीकरण, अनुवाद, पाठ निर्माण का समर्थन करने के लिए हजारों पूर्व-प्रशिक्षित मॉडल प्रदान करता है। इसका उद्देश्य सबसे उन्नत एनएलपी तकनीक को सभी के लिए सुलभ बनाना है।
87
88 🤗 Transformers त्वरित डाउनलोड और उपयोग के लिए एक एपीआई प्रदान करता है, जिससे आप किसी दिए गए पाठ पर एक पूर्व-प्रशिक्षित मॉडल ले सकते हैं, इसे अपने डेटासेट पर फ़ाइन ट्यून कर सकते हैं और इसे [मॉडल हब](https://huggingface.co/models) के माध्यम से समुदाय के साथ साझा कर सकते हैं। इसी समय, प्रत्येक परिभाषित पायथन मॉड्यूल पूरी तरह से स्वतंत्र है, जो संशोधन और तेजी से अनुसंधान प्रयोगों के लिए सुविधाजनक है।
89
90 🤗 Transformers तीन सबसे लोकप्रिय गहन शिक्षण पुस्तकालयों का समर्थन करता है: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) और [TensorFlow](https://www.tensorflow.org/) — और इनके साथ निर्बाध रूप से एकीकृत होता है। आप अपने मॉडल को सीधे एक ढांचे के साथ प्रशिक्षित कर सकते हैं और दूसरे के साथ लोड और अनुमान लगा सकते हैं।
91
92 ## ऑनलाइन डेमो
93
94 आप अधिकांश मॉडलों का परीक्षण सीधे उनके [मॉडल हब](https://huggingface.co/models) पृष्ठ पर कर सकते हैं। हम [निजी मॉडल होस्टिंग, मॉडल संस्करण, और अनुमान एपीआई](https://huggingface.co/pricing) भी प्रदान करते हैं।
95
96 यहाँ कुछ उदाहरण हैं:
97 - [शब्द को भरने के लिए मास्क के रूप में BERT का प्रयोग करें](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
98 - [इलेक्ट्रा के साथ नामित इकाई पहचान](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
99 - [जीपीटी-2 के साथ टेक्स्ट जनरेशन](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
100 - [रॉबर्टा के साथ प्राकृतिक भाषा निष्कर्ष](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
101 - [बार्ट के साथ पाठ सारांश](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
102 - [डिस्टिलबर्ट के साथ प्रश्नोत्तर](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
103 - [अनुवाद के लिए T5 का प्रयोग करें](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
104
105 **[Write With Transformer](https://transformer.huggingface.co)**, हगिंग फेस टीम द्वारा बनाया गया, पाठ निर्माण (text generation) का आधिकारिक डेमो है।
106
107 ## यदि आप हगिंग फेस टीम से बीस्पोक समर्थन की तलाश कर रहे हैं
108
109 <a target="_blank" href="https://huggingface.co/support">
110 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
111 </a><br>
112
113 ## जल्दी शुरू करें
114
115 हम त्वरित उपयोग के लिए `pipeline` (पाइपलाइन) एपीआई प्रदान करते हैं। पाइपलाइन पूर्व-प्रशिक्षित मॉडल और संबंधित पाठ प्रीप्रोसेसिंग को एक साथ जोड़ती है। सकारात्मक और नकारात्मक भावना को निर्धारित करने के लिए पाइपलाइनों का उपयोग करने का एक त्वरित उदाहरण यहां दिया गया है:
116
117 ```python
118 >>> from transformers import pipeline
119
120 # भावना विश्लेषण पाइपलाइन का उपयोग करना
121 >>> classifier = pipeline('sentiment-analysis')
122 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
123 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
124 ```
125
126 कोड की दूसरी पंक्ति पाइपलाइन द्वारा उपयोग किए गए पूर्व-प्रशिक्षित मॉडल को डाउनलोड और कैश करती है, जबकि कोड की तीसरी पंक्ति दिए गए पाठ पर मूल्यांकन करती है। यहां उत्तर 99.97% आत्मविश्वास के स्तर के साथ "सकारात्मक" है।
127
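उसी पाइपलाइन को एक साथ कई वाक्यों की सूची भी दी जा सकती है; तब यह हर वाक्य के लिए अलग-अलग परिणाम लौटाती है। नीचे केवल एक छोटा सा उदाहरण है — वाक्य हमारे अपने बनाए हुए हैं और दिखाए गए स्कोर केवल संकेत के लिए `...` से चिह्नित हैं:

```python
>>> from transformers import pipeline

>>> classifier = pipeline('sentiment-analysis')
# एक साथ कई वाक्य देने पर हर वाक्य के लिए एक अलग परिणाम मिलता है
>>> classifier(["I love this!", "I hate this!"])
[{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]
```
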
128 कई एनएलपी कार्यों के लिए पहले से प्रशिक्षित, आउट-ऑफ़-द-बॉक्स पाइपलाइनें उपलब्ध हैं। उदाहरण के लिए, हम किसी दिए गए पाठ से किसी प्रश्न का उत्तर आसानी से निकाल सकते हैं:
129
130 ``` python
131 >>> from transformers import pipeline
132
133 # प्रश्नोत्तर पाइपलाइन का उपयोग करना
134 >>> question_answerer = pipeline('question-answering')
135 >>> question_answerer({
136 ... 'question': 'What is the name of the repository ?',
137 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
138 ... })
139 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
140
141 ```
142
143 उत्तर के अलावा, पूर्व-प्रशिक्षित मॉडल संगत आत्मविश्वास स्कोर भी देता है, साथ ही यह भी बताता है कि उत्तर टोकनयुक्त पाठ में कहाँ शुरू और समाप्त होता है। आप [इस ट्यूटोरियल](https://huggingface.co/docs/transformers/task_summary) से पाइपलाइन एपीआई द्वारा समर्थित कार्यों के बारे में अधिक जान सकते हैं।
144
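प्रश्नोत्तर के अलावा पाइपलाइन एपीआई कई और कार्यों का समर्थन करती है। उदाहरण के तौर पर, नामित इकाई पहचान (NER) का एक छोटा सा स्केच नीचे दिया गया है; दिखाया गया आउटपुट केवल संकेत के लिए है, वास्तविक स्कोर और मान डिफ़ॉल्ट मॉडल पर निर्भर करेंगे:

```python
>>> from transformers import pipeline

# नामित इकाई पहचान के लिए पाइपलाइन; यह भी एक डिफ़ॉल्ट पूर्व-प्रशिक्षित मॉडल डाउनलोड और कैश करती है
>>> ner = pipeline('ner', aggregation_strategy="simple")
>>> ner("My name is Sarah and I live in London")
[{'entity_group': 'PER', 'score': ..., 'word': 'Sarah', 'start': 11, 'end': 16},
 {'entity_group': 'LOC', 'score': ..., 'word': 'London', 'start': 31, 'end': 37}]
```
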
145 अपने कार्य पर किसी भी पूर्व-प्रशिक्षित मॉडल को डाउनलोड करना और उसका उपयोग करना भी कोड की तीन पंक्तियों की तरह सरल है। यहाँ PyTorch संस्करण के लिए एक उदाहरण दिया गया है:
146 ```python
147 >>> from transformers import AutoTokenizer, AutoModel
148
149 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
150 >>> model = AutoModel.from_pretrained("bert-base-uncased")
151
152 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
153 >>> outputs = model(**inputs)
154 ```
155 यहाँ इसके समकक्ष TensorFlow कोड है:
156 ```python
157 >>> from transformers import AutoTokenizer, TFAutoModel
158
159 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
160 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
161
162 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
163 >>> outputs = model(**inputs)
164 ```
165
166 टोकननाइज़र सभी पूर्व-प्रशिक्षित मॉडलों के लिए प्रीप्रोसेसिंग प्रदान करता है और इसे सीधे एक स्ट्रिंग (जैसे ऊपर दिए गए उदाहरण में) या किसी सूची पर बुलाया जा सकता है। यह एक डिक्शनरी (dict) आउटपुट करता है जिसे आप डाउनस्ट्रीम कोड में उपयोग कर सकते हैं या `**` अनपैकिंग एक्सप्रेशन के माध्यम से सीधे मॉडल को पास कर सकते हैं।
167
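टोकननाइज़र को वाक्यों की सूची पर बुलाते समय, असमान लंबाई वाले वाक्यों को एक बैच में लाने के लिए `padding` और `truncation` आर्ग्युमेंट दिए जा सकते हैं। नीचे केवल एक छोटा सा स्केच है (वाक्य सिर्फ़ उदाहरण के तौर पर हैं):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# अलग-अलग लंबाई के वाक्यों को एक ही बैच में लाने के लिए padding और truncation का उपयोग करें
>>> batch = tokenizer(["Hello world!", "Transformers is a library."], padding=True, truncation=True, return_tensors="pt")
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask']
```
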
168 मॉडल स्वयं एक नियमित [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) या [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) है (आपके बैकएंड के आधार पर), जिसे सामान्य तरीके से उपयोग किया जा सकता है। [यह ट्यूटोरियल](https://huggingface.co/transformers/training.html) बताता है कि इस तरह के मॉडल को क्लासिक PyTorch या TensorFlow प्रशिक्षण लूप में कैसे एकीकृत किया जाए, या हमारे `ट्रेनर` एपीआई का उपयोग करके इसे किसी नए डेटासेट पर जल्दी से फ़ाइन ट्यून कैसे किया जाए।
169
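`ट्रेनर` एपीआई का एक न्यूनतम स्केच नीचे दिया गया है। ध्यान दें कि यहाँ दो वाक्यों का छोटा सा डेटासेट केवल प्रदर्शन के लिए है; व्यवहार में आप अपना खुद का (टोकननाइज़ किया हुआ) डेटासेट इस्तेमाल करेंगे, और `output_dir` जैसे पैरामीटर अपनी ज़रूरत के अनुसार चुनेंगे:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# वर्गीकरण हेड के साथ पूर्व-प्रशिक्षित मॉडल लोड करें (यहाँ 2 लेबल मान लिए गए हैं)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# केवल प्रदर्शन के लिए दो उदाहरणों का एक खिलौना डेटासेट
texts = ["I love this!", "I hate this!"]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, truncation=True)

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# बुनियादी प्रशिक्षण पैरामीटर; output_dir सिर्फ़ एक उदाहरण है
training_args = TrainingArguments(output_dir="test_trainer", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```
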
170 ## ट्रांसफार्मर का उपयोग क्यों करें?
171
172 1. उपयोग में आसानी के लिए उन्नत मॉडल:
173 - एनएलयू और एनएलजी पर बेहतर प्रदर्शन
174 - प्रवेश के लिए कम बाधाओं के साथ शिक्षण और अभ्यास के अनुकूल
175 - उपयोगकर्ता-सामना करने वाले सार तत्व, केवल तीन वर्गों को जानने की जरूरत है
176 - सभी मॉडलों के लिए एकीकृत एपीआई
177
178 1. कम कम्प्यूटेशनल ओवरहेड और कम कार्बन उत्सर्जन:
179 - शोधकर्ता हर बार नए सिरे से प्रशिक्षण देने के बजाय प्रशिक्षित मॉडल साझा कर सकते हैं
180 - इंजीनियर गणना समय और उत्पादन ओवरहेड को कम कर सकते हैं
181 - दर्जनों मॉडल आर्किटेक्चर, 2,000 से अधिक पूर्व-प्रशिक्षित मॉडल, 100 से अधिक भाषाओं का समर्थन
182
183 1. मॉडल जीवनचक्र के हर हिस्से को शामिल करता है:
184 - कोड की केवल 3 पंक्तियों में उन्नत मॉडलों को प्रशिक्षित करें
185 - मॉडल को मनमाने ढंग से विभिन्न डीप लर्निंग फ्रेमवर्क के बीच स्थानांतरित किया जा सकता है, जैसा आप चाहते हैं
186 - निर्बाध रूप से प्रशिक्षण, मूल्यांकन और उत्पादन के लिए सबसे उपयुक्त ढांचा चुनें
187
188 1. अपनी आवश्यकताओं के अनुसार मॉडल और उपयोग के मामलों को आसानी से अनुकूलित करें:
189 - हम मूल पेपर परिणामों को पुन: पेश करने के लिए प्रत्येक मॉडल आर्किटेक्चर के लिए कई उपयोग के मामले प्रदान करते हैं
190 - मॉडल की आंतरिक संरचना पारदर्शी और सुसंगत रहती है
191 - मॉडल फ़ाइल को अलग से इस्तेमाल किया जा सकता है, जो संशोधन और त्वरित प्रयोग के लिए सुविधाजनक है
192
193 ## मुझे ट्रांसफॉर्मर का उपयोग कब नहीं करना चाहिए?
194
195 - यह लाइब्रेरी मॉड्यूलर न्यूरल नेटवर्क टूलबॉक्स नहीं है। मॉडल फ़ाइल में कोड जानबूझकर अल्पविकसित है, बिना अतिरिक्त सार इनकैप्सुलेशन के, ताकि शोधकर्ता अतिरिक्त अमूर्तता और फ़ाइल जंपिंग में शामिल हुए बिना जल्दी से पुनरावृत्ति कर सकें।
196 - `ट्रेनर` एपीआई हर मॉडल के साथ संगत नहीं है; यह केवल इस पुस्तकालय के मॉडलों के लिए अनुकूलित है। यदि आप सामान्य मशीन लर्निंग के लिए उपयुक्त प्रशिक्षण लूप कार्यान्वयन की तलाश में हैं, तो कृपया कोई अन्य लाइब्रेरी देखें।
197 - हमारे सर्वोत्तम प्रयासों के बावजूद, [उदाहरण निर्देशिका](https://github.com/huggingface/transformers/tree/main/examples) की स्क्रिप्ट केवल उदाहरण हैं। आपकी विशिष्ट समस्या पर वे जरूरी नहीं कि बिना बदलाव के काम करें, और आपको अपनी आवश्यकता के अनुसार कोड की कुछ पंक्तियाँ बदलनी पड़ सकती हैं।
198
199 ## स्थापित करना
200
201 ### पिप का उपयोग करना
202
203 इस रिपॉजिटरी का परीक्षण Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ और TensorFlow 2.3+ के तहत किया गया है।
204
205 आप 🤗 ट्रांसफॉर्मर को [वर्चुअल एनवायरनमेंट](https://docs.python.org/3/library/venv.html) में इंस्टॉल कर सकते हैं। यदि आप अभी तक पायथन के वर्चुअल एनवायरनमेंट से परिचित नहीं हैं, तो कृपया [उपयोगकर्ता निर्देश](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) पढ़ें।
206
207 सबसे पहले, पायथन के उस संस्करण के साथ एक वर्चुअल एनवायरनमेंट बनाएं जिसका आप उपयोग करने की योजना बना रहे हैं, और उसे सक्रिय करें।
208
209 फिर, आपको Flax, PyTorch या TensorFlow में से किसी एक को स्थापित करने की आवश्यकता है। अपने प्लेटफ़ॉर्म पर इन फ़्रेमवर्क को स्थापित करने के लिए [TensorFlow स्थापना पृष्ठ](https://www.tensorflow.org/install/), [PyTorch स्थापना पृष्ठ](https://pytorch.org/get-started/locally/#start-locally) या [Flax स्थापना पृष्ठ](https://github.com/google/flax#quick-install) देखें।
210
211 जब इनमें से कोई एक बैकएंड सफलतापूर्वक स्थापित हो जाता है, तो ट्रांसफॉर्मर निम्नानुसार स्थापित किए जा सकते हैं:
212
213 ```bash
214 pip install transformers
215 ```
216
217 यदि आप उपयोग के मामलों को आज़माना चाहते हैं या आधिकारिक रिलीज़ से पहले नवीनतम इन-डेवलपमेंट कोड का उपयोग करना चाहते हैं, तो आपको [सोर्स से इंस्टॉल करना होगा](https://huggingface.co/docs/transformers/installation#installing-from-source)।
218
219 ### कोंडा का उपयोग करना
220
221 ट्रांसफॉर्मर संस्करण 4.0.0 के बाद से, हमारे पास एक कोंडा चैनल है: `huggingface`।
222
223 ट्रांसफॉर्मर कोंडा के माध्यम से निम्नानुसार स्थापित किया जा सकता है:
224
225 ```shell script
226 conda install -c huggingface transformers
227 ```
228
229 कोंडा के माध्यम से Flax, PyTorch, या TensorFlow में से किसी एक को स्थापित करने के लिए, निर्देशों के लिए उनके संबंधित स्थापना पृष्ठ देखें।
230
231 ## मॉडल आर्किटेक्चर
232 🤗 ट्रांसफॉर्मर द्वारा समर्थित [**सभी मॉडल चेकपॉइंट**](https://huggingface.co/models) हगिंगफेस.को [मॉडल हब](https://huggingface.co/models) के साथ बिना किसी बाधा के एकीकृत हैं, जहाँ इन्हें [उपयोगकर्ताओं](https://huggingface.co/users) और [संगठनों](https://huggingface.co/organizations) द्वारा सीधे अपलोड किया जाता है।
233
234 चौकियों की वर्तमान संख्या: 
235
236 🤗 ट्रांसफॉर्मर वर्तमान में निम्नलिखित आर्किटेक्चर का समर्थन करते हैं (प्रत्येक मॉडल के अवलोकन के लिए [यहां](https://huggingface.co/docs/transformers/model_summary) देखें):
237
238 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago) साथ थीसिस [ALBERT: A Lite BERT for Self-supervised भाषा प्रतिनिधित्व सीखना](https://arxiv.org/abs/1909.11942), झेंझोंग लैन, मिंगदा चेन, सेबेस्टियन गुडमैन, केविन गिम्पेल, पीयूष शर्मा, राडू सोरिकट
239 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research से) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. द्वाराअनुसंधान पत्र [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) के साथ जारी किया गया
240 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
241 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
242 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
243 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (फेसबुक) साथ थीसिस [बार्ट: प्राकृतिक भाषा निर्माण, अनुवाद के लिए अनुक्रम-से-अनुक्रम पूर्व प्रशिक्षण , और समझ] (https://arxiv.org/pdf/1910.13461.pdf) पर निर्भर माइक लुईस, यिनहान लियू, नमन गोयल, मार्जन ग़ज़विनिनेजाद, अब्देलरहमान मोहम्मद, ओमर लेवी, वेस स्टोयानोव और ल्यूक ज़ेटलमॉयर
244 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (से École polytechnique) साथ थीसिस [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) पर निर्भर Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis रिहाई।
245 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research से) साथ में पेपर [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)गुयेन लुओंग ट्रान, डुओंग मिन्ह ले और डाट क्वोक गुयेन द्वारा पोस्ट किया गया।
246 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft से) साथ में कागज [BEiT: BERT इमेज ट्रांसफॉर्मर्स का प्री-ट्रेनिंग](https://arxiv.org/abs/2106.08254) Hangbo Bao, Li Dong, Furu Wei द्वारा।
247 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (गूगल से) साथ वाला पेपर [बीईआरटी: प्री-ट्रेनिंग ऑफ डीप बिडायरेक्शनल ट्रांसफॉर्मर्स फॉर लैंग्वेज अंडरस्टैंडिंग](https://arxiv.org/abs/1810.04805) जैकब डेवलिन, मिंग-वेई चांग, केंटन ली और क्रिस्टीना टौटानोवा द्वारा प्रकाशित किया गया था।
248 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (गूगल से) साथ देने वाला पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https://arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा।
249 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research से) साथ में पेपर [BERTweet: अंग्रेजी ट्वीट्स के लिए एक पूर्व-प्रशिक्षित भाषा मॉडल] (https://aclanthology.org/2020.emnlp-demos.2/) डाट क्वोक गुयेन, थान वु और अन्ह तुआन गुयेन द्वारा प्रकाशित।
250 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (गूगल रिसर्च से) साथ वाला पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv.org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानोन, फिलिप फाम, अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा।
251 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (गूगल रिसर्च से) साथ में पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv.org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानन, फिलिप फाम द्वारा , अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा पोस्ट किया गया।
252 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
253 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
254 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (फेसबुक से) साथ में कागज [एक ओपन-डोमेन चैटबॉट बनाने की विधि](https://arxiv.org/abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम. स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा।
255 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (फेसबुक से) साथ में पेपर [एक ओपन-डोमेन चैटबॉट बनाने की रेसिपी](https://arxiv.org/abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा।
256 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
257 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce से) Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. द्वाराअनुसंधान पत्र [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) के साथ जारी किया गया
258 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
259 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (एलेक्सा से) कागज के साथ [बीईआरटी के लिए ऑप्टिमल सबआर्किटेक्चर एक्सट्रैक्शन](https://arxiv.org/abs/2010.10499) एड्रियन डी विंटर और डैनियल जे पेरी द्वारा।
260 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (हरबिन इंस्टिट्यूट ऑफ़ टेक्नोलॉजी/माइक्रोसॉफ्ट रिसर्च एशिया/इंटेल लैब्स से) कागज के साथ [ब्रिजटॉवर: विजन-लैंग्वेज रिप्रेजेंटेशन लर्निंग में एनकोडर्स के बीच ब्रिज बनाना](<https://arxiv.org/abs/2206.08657>) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
261 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google अनुसंधान से) साथ में कागज [ByT5: पूर्व-प्रशिक्षित बाइट-टू-बाइट मॉडल के साथ एक टोकन-मुक्त भविष्य की ओर] (https://arxiv.org/abs/2105.13626) Linting Xue, Aditya Barua, Noah Constant, रामी अल-रफू, शरण नारंग, मिहिर काले, एडम रॉबर्ट्स, कॉलिन रैफेल द्वारा पोस्ट किया गया।
262 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (इनरिया/फेसबुक/सोरबोन से) साथ में कागज [CamemBERT: एक टेस्टी फ्रेंच लैंग्वेज मॉडल](https://arxiv.org/abs/1911.03894) लुई मार्टिन*, बेंजामिन मुलर*, पेड्रो जेवियर ऑर्टिज़ सुआरेज़*, योआन ड्यूपॉन्ट, लॉरेंट रोमरी, एरिक विलेमोन्टे डे ला क्लर्जरी, जैमे सेडाह और बेनोइट सगोट द्वारा।
263 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google रिसर्च से) साथ में दिया गया पेपर [कैनाइन: प्री-ट्रेनिंग ए एफिशिएंट टोकनाइजेशन-फ्री एनकोडर फॉर लैंग्वेज रिप्रेजेंटेशन]( https://arxiv.org/abs/2103.06874) जोनाथन एच क्लार्क, डैन गैरेट, यूलिया टर्क, जॉन विएटिंग द्वारा।
264 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
265 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI से) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. द्वाराअनुसंधान पत्र [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) के साथ जारी किया गया
266 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI से) साथ वाला पेपर [लर्निंग ट्रांसफरेबल विजुअल मॉडल फ्रॉम नेचुरल लैंग्वेज सुपरविजन](https://arxiv.org/abs/2103.00020) एलेक रैडफोर्ड, जोंग वूक किम, क्रिस हैलासी, आदित्य रमेश, गेब्रियल गोह, संध्या अग्रवाल, गिरीश शास्त्री, अमांडा एस्केल, पामेला मिश्किन, जैक क्लार्क, ग्रेचेन क्रुएगर, इल्या सुत्स्केवर द्वारा।
267 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
268 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (सेल्सफोर्स से) साथ में पेपर [प्रोग्राम सिंथेसिस के लिए एक संवादात्मक प्रतिमान](https://arxiv.org/abs/2203.13474) एरिक निजकैंप, बो पैंग, हिरोआकी हयाशी, लिफू तू, हुआन वांग, यिंगबो झोउ, सिल्वियो सावरेस, कैमिंग जिओंग रिलीज।
269 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (माइक्रोसॉफ्ट रिसर्च एशिया से) कागज के साथ [फास्ट ट्रेनिंग कन्वर्जेंस के लिए सशर्त डीईटीआर](https://arxiv.org/abs/2108.06152) डेपू मेंग, ज़ियाओकांग चेन, ज़ेजिया फैन, गैंग ज़ेंग, होउकियांग ली, युहुई युआन, लेई सन, जिंगडोंग वांग द्वारा।
270 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech से) साथ में कागज [ConvBERT: स्पैन-आधारित डायनेमिक कनवल्शन के साथ BERT में सुधार](https://arxiv.org/abs/2008.02496) जिहांग जियांग, वीहाओ यू, डाकान झोउ, युनपेंग चेन, जियाशी फेंग, शुइचेंग यान द्वारा।
271 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI से) साथ वाला पेपर [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) ज़ुआंग लियू, हेंज़ी माओ, चाओ-युआन वू, क्रिस्टोफ़ फीचटेनहोफ़र, ट्रेवर डेरेल, सैनिंग ज़ी द्वारा।
272 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
273 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (सिंघुआ यूनिवर्सिटी से) साथ में पेपर [सीपीएम: ए लार्ज-स्केल जेनेरेटिव चाइनीज प्री-ट्रेंड लैंग्वेज मॉडल](https://arxiv.org/abs/2012.00413) झेंग्यान झांग, जू हान, हाओ झोउ, पेई के, युक्सियन गु, डेमिंग ये, युजिया किन, युशेंग सु, हाओझे जी, जियान गुआन, फैंचाओ क्यूई, ज़ियाओझी वांग, यानान झेंग, गुओयांग ज़ेंग, हुआनकी काओ, शेंगकी चेन, डाइक्सुआन ली, ज़ेनबो सन, ज़ियुआन लियू, मिनली हुआंग, वेंटाओ हान, जी तांग, जुआनज़ी ली, ज़ियाओयान झू, माओसोंग सन द्वारा।
274 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
275 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (सेल्सफोर्स से) साथ में पेपर [CTRL: ए कंडिशनल ट्रांसफॉर्मर लैंग्वेज मॉडल फॉर कंट्रोलेबल जेनरेशन](https://arxiv.org/abs/1909.05858) नीतीश शिरीष केसकर*, ब्रायन मैककैन*, लव आर. वार्ष्णेय, कैमिंग जिओंग और रिचर्ड द्वारा सोचर द्वारा जारी किया गया।
276 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft से) साथ में दिया गया पेपर [CvT: इंट्रोड्यूसिंग कनवॉल्यूशन टू विजन ट्रांसफॉर्मर्स](https://arxiv.org/abs/2103.15808) हैपिंग वू, बिन जिओ, नोएल कोडेला, मेंगचेन लियू, जियांग दाई, लू युआन, लेई झांग द्वारा।
277 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (फेसबुक से) साथ में कागज [Data2Vec: भाषण, दृष्टि और भाषा में स्व-पर्यवेक्षित सीखने के लिए एक सामान्य ढांचा] (https://arxiv.org/abs/2202.03555) एलेक्सी बाएव्स्की, वेई-निंग सू, कियानटोंग जू, अरुण बाबू, जियाताओ गु, माइकल औली द्वारा पोस्ट किया गया।
278 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft से) साथ में दिया गया पेपर [DeBERta: डिकोडिंग-एन्हांस्ड BERT विद डिसेंटैंगल्ड अटेंशन](https://arxiv. org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा।
279 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft से) साथ में दिया गया पेपर [DeBERTa: डिकोडिंग-एन्हांस्ड BERT विथ डिसेंन्गल्ड अटेंशन](https: //arxiv.org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा पोस्ट किया गया।
280 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (बर्कले/फेसबुक/गूगल से) पेपर के साथ [डिसीजन ट्रांसफॉर्मर: रीनफोर्समेंट लर्निंग वाया सीक्वेंस मॉडलिंग](https : //arxiv.org/abs/2106.01345) लिली चेन, केविन लू, अरविंद राजेश्वरन, किमिन ली, आदित्य ग्रोवर, माइकल लास्किन, पीटर एबील, अरविंद श्रीनिवास, इगोर मोर्डच द्वारा पोस्ट किया गया।
281 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (सेंसटाइम रिसर्च से) साथ में पेपर [डिफॉर्मेबल डीईटीआर: डिफॉर्मेबल ट्रांसफॉर्मर्स फॉर एंड-टू-एंड ऑब्जेक्ट डिटेक्शन] (https://arxiv.org/abs/2010.04159) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, जिफेंग दाई द्वारा पोस्ट किया गया।
282 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (फेसबुक से) साथ में पेपर [ट्रेनिंग डेटा-एफिशिएंट इमेज ट्रांसफॉर्मर और डिस्टिलेशन थ्रू अटेंशन](https://arxiv.org/abs/2012.12877) ह्यूगो टौव्रोन, मैथ्यू कॉर्ड, मैथिज्स डूज़, फ़्रांसिस्को मस्सा, एलेक्ज़ेंडर सबलेरोल्स, हर्वे जेगौ द्वारा।
283 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (Google AI से) Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. द्वाराअनुसंधान पत्र [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) के साथ जारी किया गया
284 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
285 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (फेसबुक से) साथ में कागज [ट्रांसफॉर्मर्स के साथ एंड-टू-एंड ऑब्जेक्ट डिटेक्शन](https://arxiv.org/abs/2005.12872) निकोलस कैरियन, फ़्रांसिस्को मस्सा, गेब्रियल सिनेव, निकोलस उसुनियर, अलेक्जेंडर किरिलोव, सर्गेई ज़ागोरुयको द्वारा।
286 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [DialoGPT: बड़े पैमाने पर जनरेटिव प्री-ट्रेनिंग फॉर कन्वर्सेशनल रिस्पांस जेनरेशन](https://arxiv.org/abs/1911.00536) यिज़े झांग, सिकी सन, मिशेल गैली, येन-चुन चेन, क्रिस ब्रोकेट, जियांग गाओ, जियानफेंग गाओ, जिंगजिंग लियू, बिल डोलन द्वारा।
287 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
288 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (हगिंगफेस से), साथ में कागज [डिस्टिलबर्ट, बीईआरटी का डिस्टिल्ड वर्जन: छोटा, तेज, सस्ता और हल्का](https://arxiv.org/abs/1910.01108) विक्टर सनह, लिसांड्रे डेब्यू और थॉमस वुल्फ द्वारा पोस्ट किया गया। यही तरीका GPT-2 को [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation) में, RoBERTa को [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation) में और बहुभाषी BERT को [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) में कंप्रेस करने के लिए, तथा डिस्टिलबर्ट के एक जर्मन संस्करण के लिए भी लागू किया गया है।
289 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [DiT: सेल्फ सुपरवाइज्ड प्री-ट्रेनिंग फॉर डॉक्यूमेंट इमेज ट्रांसफॉर्मर](https://arxiv.org/abs/2203.02378) जुनलॉन्ग ली, यिहेंग जू, टेंगचाओ लव, लेई कुई, चा झांग द्वारा फुरु वेई द्वारा पोस्ट किया गया।
290 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER से) साथ में कागज [OCR-मुक्त डॉक्यूमेंट अंडरस्टैंडिंग ट्रांसफॉर्मर](https://arxiv.org/abs/2111.15664) गीवूक किम, टीकग्यू होंग, मूनबिन यिम, जियोंग्योन नाम, जिनयॉन्ग पार्क, जिनयॉन्ग यिम, वोनसेओक ह्वांग, सांगडू यूं, डोंगयून हान, सेउंग्युन पार्क द्वारा।
291 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (फेसबुक से) साथ में पेपर [ओपन-डोमेन क्वेश्चन आंसरिंग के लिए डेंस पैसेज रिट्रीवल](https://arxiv.org/abs/2004.04906) व्लादिमीर करपुखिन, बरलास ओज़ुज़, सेवन मिन, पैट्रिक लुईस, लेडेल वू, सर्गेई एडुनोव, डैनकी चेन, और वेन-ताऊ यिह द्वारा।
292 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (इंटेल लैब्स से) साथ में कागज [विज़न ट्रांसफॉर्मर्स फॉर डेंस प्रेडिक्शन](https://arxiv.org/abs/2103.13413) रेने रैनफ्टल, एलेक्सी बोचकोवस्की, व्लादलेन कोल्टन द्वारा।
293 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
294 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
295 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google रिसर्च/स्टैनफोर्ड यूनिवर्सिटी से) साथ में दिया गया पेपर [इलेक्ट्रा: जेनरेटर के बजाय भेदभाव करने वाले के रूप में टेक्स्ट एन्कोडर्स का पूर्व-प्रशिक्षण] (https://arxiv.org/abs/2003.10555) केविन क्लार्क, मिन्ह-थांग लुओंग, क्वोक वी. ले, क्रिस्टोफर डी. मैनिंग द्वारा पोस्ट किया गया।
296 1. **[EnCodec](https://huggingface.co/docs/transformers/main/model_doc/encodec)** (Meta AI से) Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. द्वाराअनुसंधान पत्र [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) के साथ जारी किया गया
297 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google रिसर्च से) साथ में दिया गया पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https://arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा।
298 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)**(Baidu से) साथ देने वाला पेपर [ERNIE: एन्हांस्ड रिप्रेजेंटेशन थ्रू नॉलेज इंटीग्रेशन](https://arxiv.org/abs/1904.09223) यू सन, शुओहुआन वांग, युकुन ली, शिकुन फेंग, ज़ुई चेन, हान झांग, शिन तियान, डैनक्सियांग झू, हाओ तियान, हुआ वू द्वारा पोस्ट किया गया।
299 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (Baidu से) Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. द्वाराअनुसंधान पत्र [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) के साथ जारी किया गया
300 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (मेटा AI से) ट्रांसफॉर्मर प्रोटीन भाषा मॉडल हैं। **ESM-1b** पेपर के साथ जारी किया गया था [ अलेक्जेंडर राइव्स, जोशुआ मेयर, टॉम सर्कु, सिद्धार्थ गोयल, ज़ेमिंग लिन द्वारा जैविक संरचना और कार्य असुरक्षित सीखने को 250 मिलियन प्रोटीन अनुक्रमों तक स्केल करने से उभरता है] (https://www.pnas.org/content/118/15/e2016239118) जेसन लियू, डेमी गुओ, मायल ओट, सी. लॉरेंस ज़िटनिक, जेरी मा और रॉब फर्गस। **ESM-1v** को पेपर के साथ जारी किया गया था [भाषा मॉडल प्रोटीन फ़ंक्शन पर उत्परिवर्तन के प्रभावों की शून्य-शॉट भविष्यवाणी को सक्षम करते हैं] (https://doi.org/10.1101/2021.07.09.450648) जोशुआ मेयर, रोशन राव, रॉबर्ट वेरकुइल, जेसन लियू, टॉम सर्कु और अलेक्जेंडर राइव्स द्वारा। **ESM-2** को पेपर के साथ जारी किया गया था [भाषा मॉडल विकास के पैमाने पर प्रोटीन अनुक्रम सटीक संरचना भविष्यवाणी को सक्षम करते हैं](https://doi.org/10.1101/2022.07.20.500902) ज़ेमिंग लिन, हलील अकिन, रोशन राव, ब्रायन ही, झोंगकाई झू, वेंटिंग लू, ए द्वारा लान डॉस सैंटोस कोस्टा, मरियम फ़ज़ल-ज़रंडी, टॉम सर्कू, साल कैंडिडो, अलेक्जेंडर राइव्स।
301 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
302 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
303 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (CNRS से) साथ वाला पेपर [FlauBERT: Unsupervised Language Model Pre-training for फ़्रेंच](https://arxiv.org/abs/1912.05372) Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, बेंजामिन लेकोउटेक्स, अलेक्जेंड्रे अल्लाउज़ेन, बेनोइट क्रैबे, लॉरेंट बेसेसियर, डिडिएर श्वाब द्वारा।
304 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (Facebook AI से) साथ वाला पेपर [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) अमनप्रीत सिंह, रोंगहांग हू, वेदानुज गोस्वामी, गुइल्यूम कुएरॉन, वोज्शिएक गालुबा, मार्कस रोहरबैक, और डौवे कीला द्वारा।
305 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (गूगल रिसर्च से) साथ वाला पेपर [FNet: मिक्सिंग टोकन विद फूरियर ट्रांसफॉर्म्स](https://arxiv.org/abs/2105.03824) जेम्स ली-थॉर्प, जोशुआ आइंस्ली, इल्या एकस्टीन, सैंटियागो ओंटानन द्वारा।
306 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (Microsoft Research से) Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. द्वाराअनुसंधान पत्र [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) के साथ जारी किया गया
307 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [फ़नल-ट्रांसफॉर्मर: कुशल भाषा प्रसंस्करण के लिए अनुक्रमिक अतिरेक को छानना](https://arxiv.org/abs/2006.03236) जिहांग दाई, गुओकुन लाई, यिमिंग यांग, क्वोक वी. ले द्वारा रिहाई।
308 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
309 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (KAIST से) साथ वाला पेपर [वर्टिकल कटडेप्थ के साथ मोनोकुलर डेप्थ एस्टीमेशन के लिए ग्लोबल-लोकल पाथ नेटवर्क्स](https://arxiv.org/abs/2201.07436) डोयोन किम, वूंगह्युन गा, प्युंगवान आह, डोंगग्यू जू, सेहवान चुन, जुनमो किम द्वारा।
310 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (OpenAI से) साथ में दिया गया पेपर [जेनरेटिव प्री-ट्रेनिंग द्वारा भाषा की समझ में सुधार](https://blog.openai.com/language-unsupervised/) एलेक रैडफोर्ड, कार्तिक नरसिम्हन, टिम सालिमन्स और इल्या सुत्स्केवर द्वारा।
311 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (EleutherAI से) रिपॉजिटरी के साथ [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) रिलीज। सिड ब्लैक, स्टेला बिडरमैन, लियो गाओ, फिल वांग और कॉनर लेही द्वारा पोस्ट किया गया।
312 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (EleutherAI से) पेपर के साथ जारी किया गया [GPT-NeoX-20B: एक ओपन-सोर्स ऑटोरेग्रेसिव लैंग्वेज मॉडल] (https://arxiv.org/abs/2204.06745) सिड ब्लैक, स्टेला बिडरमैन, एरिक हैलाहन, क्वेंटिन एंथोनी, लियो गाओ, लॉरेंस गोल्डिंग, होरेस हे, कॉनर लेही, काइल मैकडोनेल, जेसन फांग, माइकल पाइलर, यूएसवीएसएन साई प्रशांत द्वारा , शिवांशु पुरोहित, लारिया रेनॉल्ड्स, जोनाथन टो, बेन वांग, सैमुअल वेनबैक
313 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (अबेजा के जरिए) शिन्या ओटानी, ताकायोशी मकाबे, अनुज अरोड़ा, क्यो हटोरी द्वारा।
314 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (ओपनएआई से) साथ में पेपर [लैंग्वेज मॉडल्स अनसुपरवाइज्ड मल्टीटास्क लर्नर्स हैं](https://blog.openai.com/better-language-models/) एलेक रैडफोर्ड*, जेफरी वू*, रेवन चाइल्ड, डेविड लुआन, डारियो एमोडी* द्वारा * और इल्या सुत्सकेवर** ने पोस्ट किया।
315 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI से) रिपॉजिटरी के साथ [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) बेन वांग और अरन कोमात्सुजाकी द्वारा।
316 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
317 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode से) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. द्वाराअनुसंधान पत्र [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) के साथ जारी किया गया
318 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
319 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
320 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA से) साथ में कागज [GroupViT: टेक्स्ट सुपरविजन से सिमेंटिक सेगमेंटेशन इमर्जेस](https://arxiv.org/abs/2202.11094) जियारुई जू, शालिनी डी मेलो, सिफ़ी लियू, वोनमिन बायन, थॉमस ब्रेउएल, जान कौट्ज़, ज़ियाओलोंग वांग द्वारा।
321 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (फेसबुक से) साथ में पेपर [ह्यूबर्ट: सेल्फ सुपरवाइज्ड स्पीच रिप्रेजेंटेशन लर्निंग बाय मास्क्ड प्रेडिक्शन ऑफ हिडन यूनिट्स](https://arxiv.org/abs/2106.07447) वेई-निंग सू, बेंजामिन बोल्टे, याओ-हंग ह्यूबर्ट त्साई, कुशाल लखोटिया, रुस्लान सालाखुतदीनोव, अब्देलरहमान मोहम्मद द्वारा।
322 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (बर्कले से) साथ में कागज [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) सेहून किम, अमीर घोलमी, ज़ेवेई याओ, माइकल डब्ल्यू महोनी, कर्ट केटज़र द्वारा।
323 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
324 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
325 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
326 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
327 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
328 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ देने वाला पेपर [लेआउटएलएमवी3: यूनिफाइड टेक्स्ट और इमेज मास्किंग के साथ दस्तावेज़ एआई के लिए पूर्व-प्रशिक्षण](https://arxiv.org/abs/2204.08387) युपन हुआंग, टेंगचाओ लव, लेई कुई, युटोंग लू, फुरु वेई द्वारा पोस्ट किया गया।
329 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
330 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
331 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (मेटा AI से) साथ वाला पेपर [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) बेन ग्राहम, अलाएल्डिन एल-नौबी, ह्यूगो टौवरन, पियरे स्टॉक, आर्मंड जौलिन, हर्वे जेगौ, मैथिज डूज़ द्वारा।
332 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (दक्षिण चीन प्रौद्योगिकी विश्वविद्यालय से) साथ में कागज [LiLT: एक सरल लेकिन प्रभावी भाषा-स्वतंत्र लेआउट ट्रांसफार्मर संरचित दस्तावेज़ समझ के लिए](https://arxiv.org/abs/2202.13669) जियापेंग वांग, लियानवेन जिन, काई डिंग द्वारा पोस्ट किया गया।
333 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (The FAIR team of Meta AI से) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. द्वाराअनुसंधान पत्र [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) के साथ जारी किया गया
334 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
335 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
336 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
337 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
338 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
339 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
340 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
341 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ में पेपर [मार्कअपएलएम: विजुअली-रिच डॉक्यूमेंट अंडरस्टैंडिंग के लिए टेक्स्ट और मार्कअप लैंग्वेज का प्री-ट्रेनिंग] (https://arxiv.org/abs/2110.08518) जुनलॉन्ग ली, यिहेंग जू, लेई कुई, फुरु द्वारा वी द्वारा पोस्ट किया गया।
342 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (FAIR and UIUC से) Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. द्वाराअनुसंधान पत्र [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) के साथ जारी किया गया
343 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
344 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (Google AI से) Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. द्वाराअनुसंधान पत्र [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) के साथ जारी किया गया
345 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
346 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
347 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (Facebook से) Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. द्वाराअनुसंधान पत्र [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) के साथ जारी किया गया
348 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA से) कागज के साथ [Megatron-LM: मॉडल का उपयोग करके बहु-अरब पैरामीटर भाषा मॉडल का प्रशिक्षण Parallelism](https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा।
349 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA से) साथ वाला पेपर [Megatron-LM: ट्रेनिंग मल्टी-बिलियन पैरामीटर लैंग्वेज मॉडल्स यूजिंग मॉडल पैरेललिज़्म] (https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा पोस्ट किया गया।
350 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (Alibaba Research से) Peng Wang, Cheng Da, and Cong Yao. द्वाराअनुसंधान पत्र [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) के साथ जारी किया गया
351 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (फ्रॉम Studio Ousia) साथ में पेपर [mLUKE: द पावर ऑफ एंटिटी रिप्रेजेंटेशन इन मल्टीलिंगुअल प्रीट्रेन्ड लैंग्वेज मॉडल्स](https://arxiv.org/abs/2110.08151) रयोकन री, इकुया यामाडा, और योशिमासा त्सुरोका द्वारा।
352 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (Facebook से) Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli. द्वाराअनुसंधान पत्र [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) के साथ जारी किया गया
353 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [मोबाइलबर्ट: संसाधन-सीमित उपकरणों के लिए एक कॉम्पैक्ट टास्क-अज्ञेय बीईआरटी] (https://arxiv.org/abs/2004.02984) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, और Denny Zhou द्वारा पोस्ट किया गया।
354 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
355 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
356 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple से) साथ में कागज [MobileViT: लाइट-वेट, जनरल-पर्पस, और मोबाइल-फ्रेंडली विजन ट्रांसफॉर्मर] (https://arxiv.org/abs/2110.02178) सचिन मेहता और मोहम्मद रस्तगरी द्वारा पोस्ट किया गया।
357 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (Apple से) Sachin Mehta and Mohammad Rastegari. द्वाराअनुसंधान पत्र [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) के साथ जारी किया गया
358 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
359 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI से) साथ वाला पेपर [mT5: एक व्यापक बहुभाषी पूर्व-प्रशिक्षित टेक्स्ट-टू-टेक्स्ट ट्रांसफॉर्मर]( https://arxiv.org/abs/2010.11934) लिंटिंग ज़ू, नोआ कॉन्सटेंट, एडम रॉबर्ट्स, मिहिर काले, रामी अल-रफू, आदित्य सिद्धांत, आदित्य बरुआ, कॉलिन रैफेल द्वारा पोस्ट किया गया।
360 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
361 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
362 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah's Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
363 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (फ्रॉम मेटा) साथ में पेपर [नो लैंग्वेज लेफ्ट बिहाइंड: स्केलिंग ह्यूमन-सेंटेड मशीन ट्रांसलेशन] (https://arxiv.org/abs/2207.04672) एनएलएलबी टीम द्वारा प्रकाशित।
364 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (Meta से) the NLLB team. द्वाराअनुसंधान पत्र [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) के साथ जारी किया गया
365 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (विस्कॉन्सिन विश्वविद्यालय - मैडिसन से) साथ में कागज [Nyströmformer: A Nyström- आधारित एल्गोरिथम आत्म-ध्यान का अनुमान लगाने के लिए ](https://arxiv.org/abs/2102.03902) युनयांग ज़िओंग, झानपेंग ज़ेंग, रुद्रसिस चक्रवर्ती, मिंगक्सिंग टैन, ग्लेन फंग, यिन ली, विकास सिंह द्वारा पोस्ट किया गया।
366 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (SHI Labs से) पेपर [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) जितेश जैन, जिआचेन ली, मांगटिक चिउ, अली हसनी, निकिता ओरलोव, हम्फ्री शि के द्वारा जारी किया गया है।
367 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
368 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
369 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
370 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
371 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
372 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
373 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
374 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (Google से) Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. द्वाराअनुसंधान पत्र [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) के साथ जारी किया गया
375 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
376 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
377 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू-सीक्वेंस प्री-ट्रेनिंग ](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा पोस्ट किया गया।
378 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
379 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
380 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google अनुसंधान से) केल्विन गु, केंटन ली, ज़ोरा तुंग, पानुपोंग पसुपत और मिंग-वेई चांग द्वारा साथ में दिया गया पेपर [REALM: रिट्रीवल-ऑगमेंटेड लैंग्वेज मॉडल प्री-ट्रेनिंग](https://arxiv.org/abs/2002.08909)।
381 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
382 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
383 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
384 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
385 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
386 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
387 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
388 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (झुईई टेक्नोलॉजी से), साथ में पेपर [रोफॉर्मर: रोटरी पोजिशन एंबेडिंग के साथ एन्हांस्ड ट्रांसफॉर्मर] (https://arxiv.org/pdf/2104.09864v1.pdf) जियानलिन सु और यू लू और शेंगफेंग पैन और बो वेन और युनफेंग लियू द्वारा प्रकाशित।
389 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (Bo Peng से) Bo Peng. द्वाराअनुसंधान पत्र [this repo](https://github.com/BlinkDL/RWKV-LM) के साथ जारी किया गया
390 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
391 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (Meta AI से) Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. द्वाराअनुसंधान पत्र [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) के साथ जारी किया गया
392 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
393 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP से) साथ में पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स] (https://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योआव आर्टज़ी द्वारा पोस्ट किया गया।
394 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
395 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
396 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
397 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
398 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
399 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (MBZUAI से) Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. द्वाराअनुसंधान पत्र [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) के साथ जारी किया गया
400 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
401 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
402 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
403 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
404 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
405 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
406 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [पबटेबल्स-1एम: टूवर्ड्स कॉम्प्रिहेंसिव टेबल एक्सट्रैक्शन फ्रॉम अनस्ट्रक्चर्ड डॉक्यूमेंट्स ](https://arxiv.org/abs/2110.00061) ब्रैंडन स्मॉक, रोहित पेसाला, रॉबिन अब्राहम द्वारा पोस्ट किया गया।
407 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
408 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
409 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
410 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
411 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
412 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
413 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
414 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
415 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
416 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
417 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [UNISPEECH-SAT: यूनिवर्सल स्पीच रिप्रेजेंटेशन लर्निंग विद स्पीकर अवेयर प्री-ट्रेनिंग ](https://arxiv.org/abs/2110.05752) सानयुआन चेन, यू वू, चेंग्यी वांग, झेंगयांग चेन, झूओ चेन, शुजी लियू, जियान वू, याओ कियान, फुरु वेई, जिन्यु ली, जियांगज़ान यू द्वारा पोस्ट किया गया।
418 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
419 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
420 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
421 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain से) साथ में कागज [ViLT: Vision-and-Language Transformer बिना कनवल्शन या रीजन सुपरविजन](https://arxiv.org/abs/2102.03334) वोनजे किम, बोक्यूंग सोन, इल्डू किम द्वारा पोस्ट किया गया।
422 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (गूगल एआई से) कागज के साथ [एक इमेज इज़ वर्थ 16x16 वर्ड्स: ट्रांसफॉर्मर्स फॉर इमेज रिकॉग्निशन एट स्केल](https://arxiv.org/abs/2010.11929) एलेक्सी डोसोवित्स्की, लुकास बेयर, अलेक्जेंडर कोलेसनिकोव, डिर्क वीसेनबोर्न, शियाओहुआ झाई, थॉमस अनटरथिनर, मुस्तफा देहघानी, मैथियास मिंडरर, जॉर्ज हेगोल्ड, सिल्वेन गेली, जैकब उस्ज़कोरेइट द्वारा हॉल्सबी द्वारा पोस्ट किया गया।
423 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
424 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
425 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
426 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
427 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (फेसबुक एआई से) साथ में पेपर [wav2vec 2.0: ए फ्रेमवर्क फॉर सेल्फ-सुपरवाइज्ड लर्निंग ऑफ स्पीच रिप्रेजेंटेशन] (https://arxiv.org/abs/2006.11477) एलेक्सी बेवस्की, हेनरी झोउ, अब्देलरहमान मोहम्मद, माइकल औली द्वारा।
428 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI से) साथ वाला पेपर [FAIRSEQ S2T: FAIRSEQ के साथ फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग ](https://arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, सरव्या पोपुरी, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया।
429 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
430 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (माइक्रोसॉफ्ट रिसर्च से) पेपर के साथ जारी किया गया [WavLM: फुल स्टैक के लिए बड़े पैमाने पर स्व-पर्यवेक्षित पूर्व-प्रशिक्षण स्पीच प्रोसेसिंग] (https://arxiv.org/abs/2110.13900) सानयुआन चेन, चेंगयी वांग, झेंगयांग चेन, यू वू, शुजी लियू, ज़ुओ चेन, जिन्यु ली, नाओयुकी कांडा, ताकुया योशियोका, ज़िओंग जिओ, जियान वू, लॉन्ग झोउ, शुओ रेन, यानमिन कियान, याओ कियान, जियान वू, माइकल ज़ेंग, फुरु वेई।
431 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
432 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
433 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (Meta AI से) Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe. द्वाराअनुसंधान पत्र [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) के साथ जारी किया गया
434 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
435 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (फेसबुक से) साथ में पेपर [क्रॉस-लिंगुअल लैंग्वेज मॉडल प्रीट्रेनिंग] (https://arxiv.org/abs/1901.07291) गिलाउम लैम्पल और एलेक्सिस कोनो द्वारा।
436 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में कागज [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू- सीक्वेंस प्री-ट्रेनिंग](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा।
437 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (फेसबुक एआई से), साथ में पेपर [अनसुपरवाइज्ड क्रॉस-लिंगुअल रिप्रेजेंटेशन लर्निंग एट स्केल] (https://arxiv.org/abs/1911.02116) एलेक्सिस कोन्यू*, कार्तिकेय खंडेलवाल*, नमन गोयल, विश्रव चौधरी, गिलाउम वेनज़ेक, फ्रांसिस्को गुज़मैन द्वारा , एडौर्ड ग्रेव, मायल ओट, ल्यूक ज़ेटलमॉयर और वेसेलिन स्टोयानोव द्वारा।
438 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
439 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
440 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
441 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI से) साथ वाला पेपर [XLS-R: सेल्फ सुपरवाइज्ड क्रॉस-लिंगुअल स्पीच रिप्रेजेंटेशन लर्निंग एट स्केल](https://arxiv.org/abs/2111.09296) अरुण बाबू, चांगहान वांग, एंड्रोस तजंद्रा, कुशाल लखोटिया, कियानटोंग जू, नमन गोयल, कृतिका सिंह, पैट्रिक वॉन प्लैटन, याथार्थ सराफ, जुआन पिनो, एलेक्सी बेवस्की, एलेक्सिस कोन्यू, माइकल औली द्वारा पोस्ट किया गया।
442 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (फेसबुक एआई से) साथ में पेपर [अनसुपरवाइज्ड क्रॉस-लिंगुअल रिप्रेजेंटेशन लर्निंग फॉर स्पीच रिकग्निशन] (https://arxiv.org/abs/2006.13979) एलेक्सिस कोन्यू, एलेक्सी बेवस्की, रोनन कोलोबर्ट, अब्देलरहमान मोहम्मद, माइकल औली द्वारा।
443 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (हुआझोंग यूनिवर्सिटी ऑफ साइंस एंड टेक्नोलॉजी से) साथ में पेपर [यू ओनली लुक एट वन सीक्वेंस: रीथिंकिंग ट्रांसफॉर्मर इन विज़न थ्रू ऑब्जेक्ट डिटेक्शन](https://arxiv.org/abs/2106.00666) युक्सिन फेंग, बेनचेंग लियाओ, जिंगगैंग वांग, जेमिन फेंग, जियांग क्यूई, रुई वू, जियानवेई नीयू, वेन्यू लियू द्वारा पोस्ट किया गया।
444 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
445 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
446
447 To check whether each model already has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
448
449 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on their behavior in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
450
451
452 ## अधिक समझें
453
454 |अध्याय | विवरण |
455 |-|-|
456 | [दस्तावेज़ीकरण](https://huggingface.co/transformers/) | पूरा एपीआई दस्तावेज़ीकरण और ट्यूटोरियल |
457 | [कार्य सारांश](https://huggingface.co/docs/transformers/task_summary) | ट्रांसफॉर्मर समर्थित कार्य |
458 | [प्रीप्रोसेसिंग ट्यूटोरियल](https://huggingface.co/docs/transformers/preprocessing) | मॉडल के लिए डेटा तैयार करने के लिए `टोकनाइज़र` का उपयोग करना |
459 | [प्रशिक्षण और फाइन-ट्यूनिंग](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlow के ट्रेनिंग लूप या `ट्रेनर` API में ट्रांसफॉर्मर द्वारा दिए गए मॉडल का उपयोग करें |
460 | [क्विक स्टार्ट: ट्वीकिंग एंड यूज़ केस स्क्रिप्ट्स](https://github.com/huggingface/transformers/tree/main/examples) | विभिन्न कार्यों के लिए केस स्क्रिप्ट का उपयोग करें |
461 | [मॉडल साझा करना और अपलोड करना](https://huggingface.co/docs/transformers/model_sharing) | समुदाय के साथ अपने फाइन टूनड मॉडल अपलोड और साझा करें |
462 | [माइग्रेशन](https://huggingface.co/docs/transformers/migration) | `पाइटोरच-ट्रांसफॉर्मर्स` या `पाइटोरच-प्रीट्रेनड-बर्ट` से ट्रांसफॉर्मर में माइग्रेट करना |
463
464 ## उद्धरण
465
466 हमने आधिकारिक तौर पर इस लाइब्रेरी का [पेपर](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) प्रकाशित किया है, अगर आप ट्रान्सफ़ॉर्मर्स लाइब्रेरी का उपयोग करते हैं, तो कृपया उद्धृत करें:
467 ```bibtex
468 @inproceedings{wolf-etal-2020-transformers,
469 title = "Transformers: State-of-the-Art Natural Language Processing",
470 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
471 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
472 month = oct,
473 year = "2020",
474 address = "Online",
475 publisher = "Association for Computational Linguistics",
476 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
477 pages = "38--45"
478 }
479 ```
480
[end of README_hd.md]
[start of README_ja.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Japanese translation of Hugging Face documentation
19 - Use square quotes, e.g.,「引用」
20
21 Dictionary
22
23 API: API(翻訳しない)
24 add: 追加
25 checkpoint: チェックポイント
26 code: コード
27 community: コミュニティ
28 confidence: 信頼度
29 dataset: データセット
30 documentation: ドキュメント
31 example: 例
32 finetune: 微調整
33 Hugging Face: Hugging Face(翻訳しない)
34 implementation: 実装
35 inference: 推論
36 library: ライブラリ
37 module: モジュール
38 NLP/Natural Language Processing: NLPと表示される場合は翻訳されず、Natural Language Processingと表示される場合は翻訳される
39 online demos: オンラインデモ
40 pipeline: pipeline(翻訳しない)
41 pretrained/pretrain: 学習済み
42 Python data structures (e.g., list, set, dict): リスト、セット、ディクショナリと訳され、括弧内は原文英語
43 repository: repository(翻訳しない)
44 summary: 概要
45 token-: token-(翻訳しない)
46 Trainer: Trainer(翻訳しない)
47 transformer: transformer(翻訳しない)
48 tutorial: チュートリアル
49 user: ユーザ
50 -->
51
52 <p align="center">
53 <br>
54 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
55 <br>
56 <p>
57 <p align="center">
58 <a href="https://circleci.com/gh/huggingface/transformers">
59 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
60 </a>
61 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
62 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
63 </a>
64 <a href="https://huggingface.co/docs/transformers/index">
65 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
66 </a>
67 <a href="https://github.com/huggingface/transformers/releases">
68 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
69 </a>
70 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
71 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
72 </a>
73 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
74 </p>
75
76 <h4 align="center">
77 <p>
78 <a href="https://github.com/huggingface/transformers/">English</a> |
79 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
80 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
82 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
83 <b>日本語</b> |
84 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
85 <p>
86 </h4>
87
88 <h3 align="center">
89 <p>JAX、PyTorch、TensorFlowのための最先端機械学習</p>
90 </h3>
91
92 <h3 align="center">
93 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
94 </h3>
95
96 🤗Transformersは、テキスト、視覚、音声などの異なるモダリティに対してタスクを実行するために、事前に学習させた数千のモデルを提供します。
97
98 これらのモデルは次のような場合に適用できます:
99
100 * 📝 テキストは、テキストの分類、情報抽出、質問応答、要約、翻訳、テキスト生成などのタスクのために、100以上の言語に対応しています。
101 * 🖼️ 画像分類、物体検出、セグメンテーションなどのタスクのための画像。
102 * 🗣️ 音声は、音声認識や音声分類などのタスクに使用します。
103
104 トランスフォーマーモデルは、テーブル質問応答、光学文字認識、スキャン文書からの情報抽出、ビデオ分類、視覚的質問応答など、**複数のモダリティを組み合わせた**タスクも実行可能です。
105
106 🤗Transformersは、与えられたテキストに対してそれらの事前学習されたモデルを素早くダウンロードして使用し、あなた自身のデータセットでそれらを微調整し、私たちの[model hub](https://huggingface.co/models)でコミュニティと共有するためのAPIを提供します。同時に、アーキテクチャを定義する各Pythonモジュールは完全にスタンドアロンであり、迅速な研究実験を可能にするために変更することができます。
107
108 🤗Transformersは[Jax](https://jax.readthedocs.io/en/latest/)、[PyTorch](https://pytorch.org/)、[TensorFlow](https://www.tensorflow.org/)という3大ディープラーニングライブラリーに支えられ、それぞれのライブラリをシームレスに統合しています。片方でモデルを学習してから、もう片方で推論用にロードするのは簡単なことです。
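
As a small illustration of this interoperability (a sketch only; the checkpoint name and the local path below are arbitrary examples), weights saved from a PyTorch model can be reloaded in TensorFlow by passing `from_pt=True`:

```python
>>> from transformers import AutoModel, TFAutoModel

# Save a PyTorch model locally, then load the same weights in TensorFlow
>>> pt_model = AutoModel.from_pretrained("bert-base-uncased")
>>> pt_model.save_pretrained("./my-bert")
>>> tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```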
109
110 ## オンラインデモ
111
112 [model hub](https://huggingface.co/models)から、ほとんどのモデルのページで直接テストすることができます。また、パブリックモデル、プライベートモデルに対して、[プライベートモデルのホスティング、バージョニング、推論API](https://huggingface.co/pricing)を提供しています。
113
114 以下はその一例です:
115
116 自然言語処理にて:
117 - [BERTによるマスクドワード補完](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
118 - [Electraによる名前実体認識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
119 - [GPT-2によるテキスト生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
120 - [RoBERTaによる自然言語推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
121 - [BARTによる要約](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
122 - [DistilBERTによる質問応答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
123 - [T5による翻訳](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
124
125 コンピュータビジョンにて:
126 - [ViTによる画像分類](https://huggingface.co/google/vit-base-patch16-224)
127 - [DETRによる物体検出](https://huggingface.co/facebook/detr-resnet-50)
128 - [SegFormerによるセマンティックセグメンテーション](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
129 - [DETRによるパノプティックセグメンテーション](https://huggingface.co/facebook/detr-resnet-50-panoptic)
130
131 オーディオにて:
132 - [Wav2Vec2による自動音声認識](https://huggingface.co/facebook/wav2vec2-base-960h)
133 - [Wav2Vec2によるキーワード検索](https://huggingface.co/superb/wav2vec2-base-superb-ks)
134
135 マルチモーダルなタスクにて:
136 - [ViLTによる視覚的質問応答](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
137
138 Hugging Faceチームによって作られた **[トランスフォーマーを使った書き込み](https://transformer.huggingface.co)** は、このリポジトリのテキスト生成機能の公式デモである。
139
140 ## Hugging Faceチームによるカスタム・サポートをご希望の場合
141
142 <a target="_blank" href="https://huggingface.co/support">
143 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
144 </a><br>
145
146 ## クイックツアー
147
148 与えられた入力(テキスト、画像、音声、...)に対してすぐにモデルを使うために、我々は`pipeline`というAPIを提供しております。pipelineは、学習済みのモデルと、そのモデルの学習時に使用された前処理をグループ化したものです。以下は、肯定的なテキストと否定的なテキストを分類するためにpipelineを使用する方法です:
149
150 ```python
151 >>> from transformers import pipeline
152
153 # Allocate a pipeline for sentiment-analysis
154 >>> classifier = pipeline('sentiment-analysis')
155 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
156 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
157 ```
158
159 2行目のコードでは、pipelineで使用される事前学習済みモデルをダウンロードしてキャッシュし、3行目では与えられたテキストに対してそのモデルを評価します。ここでは、答えは99.97%の信頼度で「ポジティブ」です。
160
161 自然言語処理だけでなく、コンピュータビジョンや音声処理においても、多くのタスクにはあらかじめ訓練された`pipeline`が用意されている。例えば、画像から検出された物体を簡単に抽出することができる:
162
163 ``` python
164 >>> import requests
165 >>> from PIL import Image
166 >>> from transformers import pipeline
167
168 # Download an image with cute cats
169 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
170 >>> image_data = requests.get(url, stream=True).raw
171 >>> image = Image.open(image_data)
172
173 # Allocate a pipeline for object detection
174 >>> object_detector = pipeline('object-detection')
175 >>> object_detector(image)
176 [{'score': 0.9982201457023621,
177 'label': 'remote',
178 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
179 {'score': 0.9960021376609802,
180 'label': 'remote',
181 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
182 {'score': 0.9954745173454285,
183 'label': 'couch',
184 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
185 {'score': 0.9988006353378296,
186 'label': 'cat',
187 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
188 {'score': 0.9986783862113953,
189 'label': 'cat',
190 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
191 ```
192
193 ここでは、画像から検出されたオブジェクトのリストが得られ、オブジェクトを囲むボックスと信頼度スコアが表示されます。左側が元画像、右側が予測結果を表示したものです:
194
195 <h3 align="center">
196 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
197 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
198 </h3>
199
200 [このチュートリアル](https://huggingface.co/docs/transformers/task_summary)では、`pipeline`APIでサポートされているタスクについて詳しく説明しています。
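
As a quick illustration (a sketch only; the task and checkpoint below are arbitrary examples, not a recommendation), a pipeline can also be created for another task with an explicitly chosen checkpoint:

```python
>>> from transformers import pipeline

# Explicitly pick a checkpoint for a fill-mask pipeline
>>> unmasker = pipeline("fill-mask", model="bert-base-uncased")
>>> unmasker("Paris is the [MASK] of France.")
```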
201
202 `pipeline`に加えて、与えられたタスクに学習済みのモデルをダウンロードして使用するために必要なのは、3行のコードだけです。以下はPyTorchのバージョンです:
203 ```python
204 >>> from transformers import AutoTokenizer, AutoModel
205
206 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
207 >>> model = AutoModel.from_pretrained("bert-base-uncased")
208
209 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
210 >>> outputs = model(**inputs)
211 ```
212
213 And here is the equivalent code for TensorFlow:
214 ```python
215 >>> from transformers import AutoTokenizer, TFAutoModel
216
217 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
218 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
219
220 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
221 >>> outputs = model(**inputs)
222 ```
223
224 トークナイザは学習済みモデルが期待するすべての前処理を担当し、単一の文字列 (上記の例のように) またはリストに対して直接呼び出すことができます。これは下流のコードで使用できる辞書を出力します。また、単純に ** 引数展開演算子を使用してモデルに直接渡すこともできます。
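
For example (a minimal sketch reusing the `tokenizer` and `model` from the PyTorch snippet above; the sentences are arbitrary), a whole batch of strings can be tokenized at once and the resulting dictionary unpacked directly into the model:

```python
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."], padding=True, truncation=True, return_tensors="pt"
... )
>>> outputs = model(**batch)
```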
225
226 モデル自体は通常の[Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) または [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (バックエンドによって異なる)で、通常通り使用することが可能です。[このチュートリアル](https://huggingface.co/docs/transformers/training)では、このようなモデルを従来のPyTorchやTensorFlowの学習ループに統合する方法や、私たちの`Trainer`APIを使って新しいデータセットで素早く微調整を行う方法について説明します。
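
Below is a rough sketch of what fine-tuning with the `Trainer` API can look like (the checkpoint name, the two-sentence toy dataset and the hyperparameters are illustrative assumptions only, not recommendations):

```python
import torch
from torch.utils.data import Dataset

from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy in-memory dataset, only to show the expected format (replace with your own data)
encodings = tokenizer(["I love this!", "This is terrible."], padding=True, truncation=True)
labels = [1, 0]


class ToyDataset(Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(values[idx]) for key, values in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy_output", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=ToyDataset(encodings, labels),
)
trainer.train()
```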
227
228 ## なぜtransformersを使う必要があるのでしょうか?
229
230 1. 使いやすい最新モデル:
231 - 自然言語理解・生成、コンピュータビジョン、オーディオの各タスクで高いパフォーマンスを発揮します。
232 - 教育者、実務者にとっての低い参入障壁。
233 - 学習するクラスは3つだけで、ユーザが直面する抽象化はほとんどありません。
234 - 学習済みモデルを利用するための統一されたAPI。
235
236 1. 低い計算コスト、少ないカーボンフットプリント:
237 - 研究者は、常に再トレーニングを行うのではなく、トレーニングされたモデルを共有することができます。
238 - 実務家は、計算時間や生産コストを削減することができます。
239 - すべてのモダリティにおいて、60,000以上の事前学習済みモデルを持つ数多くのアーキテクチャを提供します。
240
241 1. モデルのライフタイムのあらゆる部分で適切なフレームワークを選択可能:
242 - 3行のコードで最先端のモデルをトレーニング。
243 - TF2.0/PyTorch/JAXフレームワーク間で1つのモデルを自在に移動させる。
244 - 学習、評価、生産に適したフレームワークをシームレスに選択できます。
245
246 1. モデルやサンプルをニーズに合わせて簡単にカスタマイズ可能:
247 - 原著者が発表した結果を再現するために、各アーキテクチャの例を提供しています。
248 - モデル内部は可能な限り一貫して公開されています。
249 - モデルファイルはライブラリとは独立して利用することができ、迅速な実験が可能です。
250
251 ## なぜtransformersを使ってはいけないのでしょうか?
252
253 - このライブラリは、ニューラルネットのためのビルディングブロックのモジュール式ツールボックスではありません。モデルファイルのコードは、研究者が追加の抽象化/ファイルに飛び込むことなく、各モデルを素早く反復できるように、意図的に追加の抽象化でリファクタリングされていません。
254 - 学習APIはどのようなモデルでも動作するわけではなく、ライブラリが提供するモデルで動作するように最適化されています。一般的な機械学習のループには、別のライブラリ(おそらく[Accelerate](https://huggingface.co/docs/accelerate))を使用する必要があります。
255 - 私たちはできるだけ多くの使用例を紹介するよう努力していますが、[examples フォルダ](https://github.com/huggingface/transformers/tree/main/examples) にあるスクリプトはあくまで例です。あなたの特定の問題に対してすぐに動作するわけではなく、あなたのニーズに合わせるために数行のコードを変更する必要があることが予想されます。
256
257 ## インストール
258
259 ### pipにて
260
261 このリポジトリは、Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+ でテストされています。
262
263 🤗Transformersは[仮想環境](https://docs.python.org/3/library/venv.html)にインストールする必要があります。Pythonの仮想環境に慣れていない場合は、[ユーザーガイド](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)を確認してください。
264
265 まず、使用するバージョンのPythonで仮想環境を作成し、アクティベートします。
266
267 その後、Flax, PyTorch, TensorFlowのうち少なくとも1つをインストールする必要があります。
268 [TensorFlowインストールページ](https://www.tensorflow.org/install/)、[PyTorchインストールページ](https://pytorch.org/get-started/locally/#start-locally)、[Flax](https://github.com/google/flax#quick-install)、[Jax](https://github.com/google/jax#installation)インストールページで、お使いのプラットフォーム別のインストールコマンドを参照してください。
269
270 これらのバックエンドのいずれかがインストールされている場合、🤗Transformersは以下のようにpipを使用してインストールすることができます:
271
272 ```bash
273 pip install transformers
274 ```
275
276 もしサンプルを試したい、またはコードの最先端が必要で、新しいリリースを待てない場合は、[ライブラリをソースからインストール](https://huggingface.co/docs/transformers/installation#installing-from-source)する必要があります。
277
278 ### condaにて
279
280 Transformersバージョン4.0.0から、condaチャンネルを搭載しました: `huggingface`。
281
282 🤗Transformersは以下のようにcondaを使って設置することができます:
283
284 ```shell script
285 conda install -c huggingface transformers
286 ```
287
288 Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それぞれのインストールページに従ってください。
289
290 > **_注意:_** Windowsでは、キャッシュの恩恵を受けるために、デベロッパーモードを有効にするよう促されることがあります。このような場合は、[このissue](https://github.com/huggingface/huggingface_hub/issues/1062)でお知らせください。
291
292 ## モデルアーキテクチャ
293
294 🤗Transformersが提供する **[全モデルチェックポイント](https://huggingface.co/models)** は、[ユーザー](https://huggingface.co/users)や[組織](https://huggingface.co/organizations)によって直接アップロードされるhuggingface.co [model hub](https://huggingface.co)からシームレスに統合されています。
295
296 Current number of checkpoints: 
297
298 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
299
300 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago から) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut から公開された研究論文: [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942)
301 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research から) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. から公開された研究論文 [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918)
302 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (BAAI から) Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell から公開された研究論文: [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679)
303 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (MIT から) Yuan Gong, Yu-An Chung, James Glass から公開された研究論文: [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778)
304 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
305 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (Facebook から) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer から公開された研究論文: [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461)
306 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (École polytechnique から) Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis から公開された研究論文: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
307 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research から) Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen から公開された研究論文: [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)
308 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft から) Hangbo Bao, Li Dong, Furu Wei から公開された研究論文: [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254)
309 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (Google から) Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova から公開された研究論文: [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805)
310 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (Google から) Sascha Rothe, Shashi Narayan, Aliaksei Severyn から公開された研究論文: [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461)
311 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research から) Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen から公開された研究論文: [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/)
312 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (Google Research から) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed から公開された研究論文: [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062)
313 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (Google Research から) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed から公開された研究論文: [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062)
314 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (Microsoft Research AI4Science から) Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu から公開された研究論文: [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9)
315 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (Google AI から) Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby から公開された研究論文: [Big Transfer (BiT)](https://arxiv.org/abs/1912.11370)
316 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (Facebook から) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston から公開された研究論文: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637)
317 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (Facebook から) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston から公開された研究論文: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637)
318 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (Salesforce から) Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi から公開された研究論文: [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086)
319 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce から) Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. から公開された研究論文 [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597)
320 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (BigScience workshop から) [BigScience Workshop](https://bigscience.huggingface.co/) から公開されました.
321 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (Alexa から) Adrian de Wynter and Daniel J. Perry から公開された研究論文: [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499)
322 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (Harbin Institute of Technology/Microsoft Research Asia/Intel Labs から) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
323 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google Research から) Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel から公開された研究論文: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
324 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (Inria/Facebook/Sorbonne から) Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot から公開された研究論文: [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894)
325 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google Research から) Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting から公開された研究論文: [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874)
326 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (OFA-Sys から) An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou から公開された研究論文: [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335)
327 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI から) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. から公開された研究論文 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687)
328 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI から) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever から公開された研究論文: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
329 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (University of Göttingen から) Timo Lüddecke and Alexander Ecker から公開された研究論文: [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003)
330 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (Salesforce から) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong から公開された研究論文: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474)
331 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (Microsoft Research Asia から) Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang から公開された研究論文: [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152)
332 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech から) Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan から公開された研究論文: [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496)
333 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI から) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie から公開された研究論文: [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545)
334 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
335 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (Tsinghua University から) Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun から公開された研究論文: [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413)
336 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (OpenBMB から) [OpenBMB](https://www.openbmb.org/) から公開されました.
337 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (Salesforce から) Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher から公開された研究論文: [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858)
338 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft から) Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang から公開された研究論文: [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808)
339 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (Facebook から) Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli から公開された研究論文: [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555)
340 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft から) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen から公開された研究論文: [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)
341 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft から) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen から公開された研究論文: [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)
342 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (Berkeley/Facebook/Google から) Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch から公開された研究論文: [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345)
343 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (SenseTime Research から) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai から公開された研究論文: [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159)
344 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (Facebook から) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou から公開された研究論文: [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877)
345 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (Google AI から) Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. から公開された研究論文 [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505)
346 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (The University of Texas at Austin から) Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. から公開された研究論文 [NMS Strikes Back](https://arxiv.org/abs/2212.06137)
347 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (Facebook から) Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko から公開された研究論文: [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872)
348 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (Microsoft Research から) Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan から公開された研究論文: [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536)
349 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (SHI Labs から) Ali Hassani and Humphrey Shi から公開された研究論文: [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001)
350 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (HuggingFace から), Victor Sanh, Lysandre Debut and Thomas Wolf. 同じ手法で GPT2, RoBERTa と Multilingual BERT の圧縮を行いました.圧縮されたモデルはそれぞれ [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation)、[DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation)、[DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) と名付けられました. 公開された研究論文: [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108)
351 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (Microsoft Research から) Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei から公開された研究論文: [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378)
352 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER から), Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park から公開された研究論文: [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664)
353 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (Facebook から) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih から公開された研究論文: [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906)
354 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (Intel Labs から) René Ranftl, Alexey Bochkovskiy, Vladlen Koltun から公開された研究論文: [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413)
355 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (Snap Research から) Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. から公開された研究論文 [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191)
356 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
357 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google Research/Stanford University から) Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning から公開された研究論文: [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555)
358 1. **[EnCodec](https://huggingface.co/docs/transformers/main/model_doc/encodec)** (Meta AI から) Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. から公開された研究論文 [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438)
359 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google Research から) Sascha Rothe, Shashi Narayan, Aliaksei Severyn から公開された研究論文: [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461)
360 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (Baidu から) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu から公開された研究論文: [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223)
361 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (Baidu から) Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. から公開された研究論文 [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674)
362 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (Meta AI から) はトランスフォーマープロテイン言語モデルです. **ESM-1b** は Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus から公開された研究論文: [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118). **ESM-1v** は Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives から公開された研究論文: [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648). **ESM-2** と **ESMFold** は Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives から公開された研究論文: [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902)
363 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (Google AI から) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei から公開されたレポジトリー [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
364 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
365 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (CNRS から) Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab から公開された研究論文: [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372)
366 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (Facebook AI から) Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela から公開された研究論文: [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482)
367 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (Google Research から) James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon から公開された研究論文: [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824)
368 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (Microsoft Research から) Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. から公開された研究論文 [Focal Modulation Networks](https://arxiv.org/abs/2203.11926)
369 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (CMU/Google Brain から) Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le から公開された研究論文: [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236)
370 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (Microsoft Research から) Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. から公開された研究論文 [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100)
371 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (KAIST から) Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim から公開された研究論文: [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436)
372 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (OpenAI から) Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever から公開された研究論文: [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/)
373 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (EleutherAI から) Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy から公開されたレポジトリー : [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo)
374 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (EleutherAI から) Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach から公開された研究論文: [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745)
375 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (ABEJA から) Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori からリリース.
376 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (OpenAI から) Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** から公開された研究論文: [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/)
377 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI から) Ben Wang and Aran Komatsuzaki から公開されたレポジトリー [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/)
378 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (AI-Sweden から) Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren から公開された研究論文: [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf)
379 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode から) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. から公開された研究論文 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988)
380 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) 坂本俊之(tanreinama)からリリースされました.
381 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (Microsoft から) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu から公開された研究論文: [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234).
382 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA から) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang から公開された研究論文: [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094)
383 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook から) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed から公開された研究論文: [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447)
384 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (Berkeley から) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer から公開された研究論文: [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321)
385 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (OpenAI から) Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever から公開された研究論文: [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/)
386 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
387 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (OpenAI から) Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever から公開された研究論文: [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf)
388 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (Microsoft Research Asia から) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou から公開された研究論文: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
389 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (Microsoft Research Asia から) Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou から公開された研究論文: [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740)
390 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (Microsoft Research Asia から) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei から公開された研究論文: [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387)
391 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (Microsoft Research Asia から) Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei から公開された研究論文: [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836)
392 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (AllenAI から) Iz Beltagy, Matthew E. Peters, Arman Cohan から公開された研究論文: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150)
393 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (Meta AI から) Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze から公開された研究論文: [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136)
394 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (South China University of Technology から) Jiapeng Wang, Lianwen Jin, Kai Ding から公開された研究論文: [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669)
395 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (The FAIR team of Meta AI から) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. から公開された研究論文 [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
396 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (AllenAI から) Iz Beltagy, Matthew E. Peters, Arman Cohan から公開された研究論文: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150)
397 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (Google AI から) Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang から公開された研究論文: [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916)
398 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (Studio Ousia から) Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto から公開された研究論文: [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057)
399 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (UNC Chapel Hill から) Hao Tan and Mohit Bansal から公開された研究論文: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490)
400 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (Facebook から) Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert から公開された研究論文: [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161)
401 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (Facebook から) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin から公開された研究論文: [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125)
402 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Jörg Tiedemann から. [OPUS](http://opus.nlpl.eu/) を使いながら学習された "Machine translation" (マシントランスレーション) モデル. [Marian Framework](https://marian-nmt.github.io/) はMicrosoft Translator Team が現在開発中です.
403 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (Microsoft Research Asia から) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei から公開された研究論文: [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518)
404 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (FAIR and UIUC から) Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. から公開された研究論文 [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527)
405 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (Meta and UIUC から) Bowen Cheng, Alexander G. Schwing, Alexander Kirillov から公開された研究論文: [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278)
406 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (Google AI から) Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. から公開された研究論文 [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662)
407 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook から) Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer から公開された研究論文: [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210)
408 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook から) Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan から公開された研究論文: [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401)
409 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (Facebook から) Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. から公開された研究論文 [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655)
410 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA から) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro から公開された研究論文: [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053)
411 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA から) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro から公開された研究論文: [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053)
412 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (Alibaba Research から) Peng Wang, Cheng Da, and Cong Yao. から公開された研究論文 [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592)
413 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (Studio Ousia から) Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka から公開された研究論文: [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151)
414 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (Facebook から) Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli. から公開された研究論文 [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516)
415 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (CMU/Google Brain から) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou から公開された研究論文: [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984)
416 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (Google Inc. から) Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam から公開された研究論文: [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
417 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (Google Inc. から) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen から公開された研究論文: [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
418 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple から) Sachin Mehta and Mohammad Rastegari から公開された研究論文: [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178)
419 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (Apple から) Sachin Mehta and Mohammad Rastegari. から公開された研究論文 [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680)
420 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (Microsoft Research から) Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu から公開された研究論文: [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297)
421 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI から) Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel から公開された研究論文: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
422 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (RUC AI Box から) Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen から公開された研究論文: [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131)
423 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (SHI Labs から) Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi から公開された研究論文: [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143)
424 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (Huawei Noah’s Ark Lab から) Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu から公開された研究論文: [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
425 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (Meta から) the NLLB team から公開された研究論文: [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672)
426 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (Meta から) the NLLB team. から公開された研究論文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672)
427 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (the University of Wisconsin - Madison から) Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh から公開された研究論文: [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902)
428 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (SHI Labs から) Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi から公開された研究論文: [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220)
429 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
430 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (Meta AI から) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al から公開された研究論文: [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068)
431 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI から) Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby から公開された研究論文: [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230)
432 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (Google から) Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu から公開された研究論文: [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777)
433 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google から) Jason Phang, Yao Zhao, and Peter J. Liu から公開された研究論文: [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347)
434 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (Deepmind から) Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira から公開された研究論文: [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795)
435 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (VinAI Research から) Dat Quoc Nguyen and Anh Tuan Nguyen から公開された研究論文: [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/)
436 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (Google から) Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. から公開された研究論文 [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347)
437 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (UCLA NLP から) Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang から公開された研究論文: [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333)
438 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (Sea AI Labs から) Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng から公開された研究論文: [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418)
439 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (Microsoft Research から) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou から公開された研究論文: [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063)
440 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (NVIDIA から) Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius から公開された研究論文: [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602)
441 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (Facebook から) Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela から公開された研究論文: [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401)
442 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google Research から) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang から公開された研究論文: [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909)
443 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (Google Research から) Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya から公開された研究論文: [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451)
444 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (META Platforms から) Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár から公開された研究論文: [Designing Network Design Space](https://arxiv.org/abs/2003.13678)
445 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (Google Research から) Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder から公開された研究論文: [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821)
446 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (Microsoft Research から) Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun から公開された研究論文: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
447 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (Facebook から), Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov から公開された研究論文: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)
448 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (Facebook から) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli から公開された研究論文: [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038)
449 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (WeChatAI から) HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou から公開された研究論文: [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf)
450 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (ZhuiyiTechnology から), Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu から公開された研究論文: [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864)
451 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (Bo Peng から) Bo Peng から公開されたレポジトリー [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM)
452 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (NVIDIA から) Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo から公開された研究論文: [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203)
453 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (Meta AI から) Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. から公開された研究論文 [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf)
454 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP から) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi から公開された研究論文: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
455 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP から) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi から公開された研究論文: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
456 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (Microsoft Research から) Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. から公開された研究論文 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205)
457 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (Facebook から), Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino から公開された研究論文: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
458 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (Facebook から), Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau から公開された研究論文: [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678)
459 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (Tel Aviv University から), Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy から公開された研究論文: [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438)
460 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (Berkeley から) Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer から公開された研究論文: [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316)
461 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (MBZUAI から) Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. から公開された研究論文 [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446)
462 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (Microsoft から) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo から公開された研究論文: [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
463 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft から) Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo から公開された研究論文: [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883)
464 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (University of Würzburg から) Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte から公開された研究論文: [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
465 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (Google から) William Fedus, Barret Zoph, Noam Shazeer から公開された研究論文: [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961)
466 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (Google AI から) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu から公開された研究論文: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683)
467 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (Google AI から) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu から公開されたレポジトリー [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511)
468 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (Microsoft Research から) Brandon Smock, Rohith Pesala, Robin Abraham から公開された研究論文: [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061)
469 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI から) Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos から公開された研究論文: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
470 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (Microsoft Research から) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou から公開された研究論文: [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653)
471 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (HuggingFace から).
472 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (Facebook から) Gedas Bertasius, Heng Wang, Lorenzo Torresani から公開された研究論文: [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095)
473 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (the University of California at Berkeley から) Michael Janner, Qiyang Li, Sergey Levine から公開された研究論文: [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039)
474 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU から) Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov から公開された研究論文: [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
475 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (Microsoft から), Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei から公開された研究論文: [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282)
476 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (UNC Chapel Hill から), Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal から公開された研究論文: [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156)
477 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (Google Research から) Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler から公開された研究論文: [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1)
478 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (Microsoft Research から) Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang から公開された研究論文: [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
479 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (Microsoft Research から) Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu から公開された研究論文: [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
480 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (Peking University から) Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. から公開された研究論文 [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221)
481 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (Tsinghua University and Nankai University から) Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu から公開された研究論文: [Visual Attention Network](https://arxiv.org/abs/2202.09741)
482 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (Multimedia Computing Group, Nanjing University から) Zhan Tong, Yibing Song, Jue Wang, Limin Wang から公開された研究論文: [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602)
483 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain から) Wonjae Kim, Bokyung Son, Ildoo Kim から公開された研究論文: [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334)
484 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (Google AI から) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby から公開された研究論文: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
485 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP から) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang から公開された研究論文: [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557)
486 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (Google AI から) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby から公開された研究論文: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
487 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (Meta AI から) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick から公開された研究論文: [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)
488 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (Meta AI から) Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas から公開された研究論文: [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141)
489 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (Facebook AI から) Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli から公開された研究論文: [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477)
490 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI から) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino から公開された研究論文: [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171)
491 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI から) Qiantong Xu, Alexei Baevski, Michael Auli から公開された研究論文: [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680)
492 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (Microsoft Research から) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei から公開された研究論文: [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
493 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI から) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever から公開された研究論文: [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
494 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (Microsoft Research から) Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling から公開された研究論文: [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816)
495 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (Meta AI から) Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe. から公開された研究論文 [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255)
496 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (Facebook AI から) Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li から公開された研究論文: [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668)
497 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (Facebook から) Guillaume Lample and Alexis Conneau から公開された研究論文: [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291)
498 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (Microsoft Research から) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou から公開された研究論文: [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063)
499 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (Facebook AI から), Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov から公開された研究論文: [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116)
500 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI から), Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau から公開された研究論文: [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572)
501 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (Meta AI から) Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa から公開された研究論文: [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
502 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU から) Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le から公開された研究論文: [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237)
503 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI から) Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli から公開された研究論文: [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296)
504 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (Facebook AI から) Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli から公開された研究論文: [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979)
505 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (Huazhong University of Science & Technology から) Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu から公開された研究論文: [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666)
506 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (the University of Wisconsin - Madison から) Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh から公開された研究論文: [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714)
507 1. 新しいモデルを投稿したいですか?新しいモデルを追加するためのガイドとして、**詳細なガイドとテンプレート**が追加されました。これらはリポジトリの[`templates`](./templates)フォルダにあります。PRを始める前に、必ず[コントリビューションガイド](./CONTRIBUTING.md)を確認し、メンテナに連絡するか、フィードバックを収集するためにissueを開いてください。
508
509 各モデルがFlax、PyTorch、TensorFlowで実装されているか、🤗Tokenizersライブラリに支えられた関連トークナイザを持っているかは、[この表](https://huggingface.co/docs/transformers/index#supported-frameworks)を参照してください。
510
511 これらの実装はいくつかのデータセットでテストされており(サンプルスクリプトを参照)、オリジナルの実装の性能と一致するはずです。性能の詳細は[documentation](https://github.com/huggingface/transformers/tree/main/examples)のExamplesセクションで見ることができます。
512
513
514 ## さらに詳しく
515
516 | セクション | 概要 |
517 |-|-|
518 | [ドキュメント](https://huggingface.co/docs/transformers/) | 完全なAPIドキュメントとチュートリアル |
519 | [タスク概要](https://huggingface.co/docs/transformers/task_summary) | 🤗Transformersがサポートするタスク |
520 | [前処理チュートリアル](https://huggingface.co/docs/transformers/preprocessing) | モデル用のデータを準備するために`Tokenizer`クラスを使用 |
521 | [トレーニングと微調整](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlowの学習ループと`Trainer`APIで🤗Transformersが提供するモデルを使用 |
522 | [クイックツアー: 微調整/使用方法スクリプト](https://github.com/huggingface/transformers/tree/main/examples) | 様々なタスクでモデルの微調整を行うためのスクリプト例 |
523 | [モデルの共有とアップロード](https://huggingface.co/docs/transformers/model_sharing) | 微調整したモデルをアップロードしてコミュニティで共有する |
524 | [マイグレーション](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`または`pytorch-pretrained-bert`から🤗Transformers に移行する |
525
526 ## 引用
527
528 🤗 トランスフォーマーライブラリに引用できる[論文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)が出来ました:
529 ```bibtex
530 @inproceedings{wolf-etal-2020-transformers,
531 title = "Transformers: State-of-the-Art Natural Language Processing",
532 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
533 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
534 month = oct,
535 year = "2020",
536 address = "Online",
537 publisher = "Association for Computational Linguistics",
538 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
539 pages = "38--45"
540 }
541 ```
542
[end of README_ja.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <b>한국어</b> |
47 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
48 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
49 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
50 <p>
51 </h4>
52
53 <h3 align="center">
54 <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p>
55 </h3>
56
57 <h3 align="center">
58 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
59 </h3>
60
61 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다.
62
63 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다.
64
65 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다.
66
67 ## 온라인 데모
68
69 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다.
70
71 예시:
72 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
73 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
74 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
75 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
76 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
77 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
78 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
79
80 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다.
81
82 ## Hugging Face 팀의 커스텀 지원을 원한다면
83
84 <a target="_blank" href="https://huggingface.co/support">
85 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
86 </a><br>
87
88 ## 퀵 투어
89
90 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다:
91
92 ```python
93 >>> from transformers import pipeline
94
95 # Allocate a pipeline for sentiment-analysis
96 >>> classifier = pipeline('sentiment-analysis')
97 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
98 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
99 ```
100
101 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다.
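
파이프라인은 문자열 하나뿐만 아니라 문장 리스트도 입력으로 받아 각 문장마다 결과를 반환합니다. 아래는 임의로 가정한 예시 문장들로 여러 문장을 한 번에 분류하는 간단한 스케치입니다:

```python
>>> from transformers import pipeline

# Allocate the same sentiment-analysis pipeline
>>> classifier = pipeline('sentiment-analysis')

# Passing a list of (illustrative) sentences returns one prediction per input
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.",
...                       "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
```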
102
103 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다:
104
105 ``` python
106 >>> from transformers import pipeline
107
108 # Allocate a pipeline for question-answering
109 >>> question_answerer = pipeline('question-answering')
110 >>> question_answerer({
111 ... 'question': 'What is the name of the repository ?',
112 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
113 ... })
114 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
115
116 ```
117
118 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도(confidence score)와 함께 문맥(context) 문자열 안에서 답변이 시작하고 끝나는 위치까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다.
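
반환된 `start`와 `end` 값은 문맥(context) 문자열 안에서의 문자 단위 위치이므로, 슬라이싱으로 답변 구간을 직접 확인할 수 있습니다. 아래는 위 예시의 질문과 문맥을 그대로 사용한다고 가정한 간단한 스케치입니다:

```python
>>> from transformers import pipeline

# Reuse the question-answering pipeline with the same (illustrative) question and context
>>> question_answerer = pipeline('question-answering')
>>> context = 'Pipeline has been included in the huggingface/transformers repository'
>>> result = question_answerer(question='What is the name of the repository ?', context=context)

# 'start' and 'end' are character offsets into the context string
>>> context[result['start']:result['end']]
'huggingface/transformers'
```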
119
120 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다:
121 ```python
122 >>> from transformers import AutoTokenizer, AutoModel
123
124 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
125 >>> model = AutoModel.from_pretrained("bert-base-uncased")
126
127 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
128 >>> outputs = model(**inputs)
129 ```
130 다음은 TensorFlow 버전입니다:
131 ```python
132 >>> from transformers import AutoTokenizer, TFAutoModel
133
134 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
135 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
136
137 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
138 >>> outputs = model(**inputs)
139 ```
140
141 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다.
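
아래는 임의로 가정한 예시 문장들로 여러 문장을 한 번에 토크나이즈하는 간단한 스케치입니다. 패딩과 잘라내기(truncation)를 적용한 뒤, 반환된 딕셔너리를 `**` 연산자로 모델에 바로 전달할 수 있습니다:

```python
>>> from transformers import AutoModel, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize a batch of (illustrative) sentences with padding and truncation
>>> batch = tokenizer(["Hello world!", "Transformers provides pretrained models."],
...                   padding=True, truncation=True, return_tensors="pt")
>>> sorted(batch.keys())
['attention_mask', 'input_ids', 'token_type_ids']

# The dictionary can be unpacked directly into the model
>>> outputs = model(**batch)
```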
142
143 모델 자체는 일반적으로 사용되는 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)이나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/docs/transformers/training)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다.
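
아래는 🤗 Transformers 모델을 표준적인 PyTorch 학습 단계에서 사용하는 간단한 스케치입니다. 예시 문장, 레이블, 하이퍼파라미터는 설명을 위해 임의로 가정한 값입니다:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2 is an illustrative assumption for a binary classification task
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Illustrative mini-batch: two sentences with made-up labels
inputs = tokenizer(["I love this movie.", "I did not enjoy this at all."],
                   padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# When labels are passed, the model also returns the loss,
# which plugs into a standard PyTorch backward/step
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```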
144
145 ## 왜 transformers를 사용해야 할까요?
146
147 1. 손쉽게 사용할 수 있는 최첨단 모델:
148 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다.
149   - 교육자와 실무자에게 진입 장벽이 낮습니다.
150 - 3개의 클래스만 배우면 바로 사용할 수 있습니다.
151 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다.
152
153 1. 더 적은 계산 비용, 더 적은 탄소 발자국:
154 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다.
155 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다.
156 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등.
157
158 1. 모델의 각 생애주기에 적합한 프레임워크:
159 - 코드 3줄로 최첨단 모델을 학습하세요.
160 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요.
161 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요.
162
163 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요:
164 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다.
165 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다.
166 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다.
167
168 ## 왜 transformers를 사용하지 말아야 할까요?
169
170 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다.
171 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요.
172 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/main/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다.
173
174 ## 설치
175
176 ### pip로 설치하기
177
178 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다.
179
180 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요.
181
182 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요.
183
184 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다.
185 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요.
186
187 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다:
188
189 ```bash
190 pip install transformers
191 ```
192
193 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다.
194
195 ### conda로 설치하기
196
197 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`.
198
199 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다:
200
201 ```shell script
202 conda install -c huggingface transformers
203 ```
204
205 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요.
206
207 ## 모델 구조
208
209 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다.
210
211 현재 사용 가능한 모델 체크포인트의 개수: 
212
213 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요):
214
215 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
216 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research 에서 제공)은 Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.의 [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918)논문과 함께 발표했습니다.
217 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
218 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
219 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
220 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
221 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
222 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
223 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
224 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
225 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
226 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
227 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
228 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
229 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
230 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
231 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
232 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
233 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
234 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce 에서 제공)은 Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.의 [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597)논문과 함께 발표했습니다.
235 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
236 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (Alexa 에서) Adrian de Wynter and Daniel J. Perry 의 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 논문과 함께 발표했습니다.
237 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
238 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google Research 에서) Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 의 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 논문과 함께 발표했습니다.
239 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (Inria/Facebook/Sorbonne 에서) Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 의 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 논문과 함께 발표했습니다.
240 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google Research 에서) Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 의 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 논문과 함께 발표했습니다.
241 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (OFA-Sys 에서) An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou 의 [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) 논문과 함께 발표했습니다.
242 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI 에서 제공)은 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.의 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687)논문과 함께 발표했습니다.
243 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI 에서) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 의 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 논문과 함께 발표했습니다.
244 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (University of Göttingen 에서) Timo Lüddecke and Alexander Ecker 의 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 논문과 함께 발표했습니다.
245 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (Salesforce 에서) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 의 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 논문과 함께 발표했습니다.
246 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (Microsoft Research Asia 에서) Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 의 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 논문과 함께 발표했습니다.
247 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech 에서) Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 의 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 논문과 함께 발표했습니다.
248 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI 에서) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 의 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 논문과 함께 발표했습니다.
249 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
250 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (Tsinghua University 에서) Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 의 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 논문과 함께 발표했습니다.
251 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
252 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (Salesforce 에서) Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 의 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 논문과 함께 발표했습니다.
253 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft 에서) Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 의 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 논문과 함께 발표했습니다.
254 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (Facebook 에서) Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 의 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 논문과 함께 발표했습니다.
255 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft 에서) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 의 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 논문과 함께 발표했습니다.
256 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft 에서) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 의 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 논문과 함께 발표했습니다.
257 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (Berkeley/Facebook/Google 에서) Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 의 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 논문과 함께 발표했습니다.
258 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (SenseTime Research 에서) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai 의 [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) 논문과 함께 발표했습니다.
259 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (Facebook 에서) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 의 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 논문과 함께 발표했습니다.
260 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (Google AI 에서 제공)은 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.의 [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505)논문과 함께 발표했습니다.
261 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (The University of Texas at Austin 에서 제공)은 Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.의 [NMS Strikes Back](https://arxiv.org/abs/2212.06137)논문과 함께 발표했습니다.
262 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (Facebook 에서) Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 의 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 논문과 함께 발표했습니다.
263 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (Microsoft Research 에서) Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 의 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 논문과 함께 발표했습니다.
264 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (SHI Labs 에서) Ali Hassani and Humphrey Shi 의 [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) 논문과 함께 발표했습니다.
265 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (HuggingFace 에서) Victor Sanh, Lysandre Debut, Thomas Wolf 의 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 논문과 함께 발표했습니다. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
266 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (Microsoft Research 에서) Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 의 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 논문과 함께 발표했습니다.
267 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER 에서) Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 의 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 논문과 함께 발표했습니다.
268 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (Facebook 에서) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 의 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 논문과 함께 발표했습니다.
269 1. **[DPT](https://huggingface.co/docs/transformers/model_doc/dpt)** (Intel Labs 에서) René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 의 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 논문과 함께 발표했습니다.
270 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
271 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
272 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google Research/Stanford University 에서) Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 의 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 논문과 함께 발표했습니다.
273 1. **[EnCodec](https://huggingface.co/docs/transformers/main/model_doc/encodec)** (Meta AI 에서 제공)은 Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.의 [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438)논문과 함께 발표했습니다.
274 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google Research 에서) Sascha Rothe, Shashi Narayan, Aliaksei Severyn 의 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 논문과 함께 발표했습니다.
275 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (Baidu 에서) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu 의 [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) 논문과 함께 발표했습니다.
276 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (Baidu 에서 제공)은 Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.의 [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674)논문과 함께 발표했습니다.
277 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
278 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
279 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
280 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
281 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
282 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
283 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
284 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
285 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
286 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
287 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
288 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
289 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (EleutherAI 에서) Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 의 [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) 논문과 함께 발표했습니다.
290 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
291 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (OpenAI 에서) Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 의 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 논문과 함께 발표했습니다.
292 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
293 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (AI-Sweden 에서) Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. 의 [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) 논문과 함께 발표했습니다.
294 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode 에서 제공)은 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.의 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988)논문과 함께 발표했습니다.
295 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
296 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu 의 [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) 논문과 함께 발표했습니다.
297 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA 에서) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 의 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 논문과 함께 발표했습니다.
298 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook 에서) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 의 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 논문과 함께 발표했습니다.
299 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (Berkeley 에서) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 의 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 논문과 함께 발표했습니다.
300 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (OpenAI 에서) Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 의 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 논문과 함께 발표했습니다.
301 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
302 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (OpenAI 에서) Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever 의 [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) 논문과 함께 발표했습니다.
303 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (Microsoft Research Asia 에서) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 의 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 논문과 함께 발표했습니다.
304 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (Microsoft Research Asia 에서) Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 의 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 논문과 함께 발표했습니다.
305 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (Microsoft Research Asia 에서) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 의 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 논문과 함께 발표했습니다.
306 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (Microsoft Research Asia 에서) Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 의 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 논문과 함께 발표했습니다.
307 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (AllenAI 에서) Iz Beltagy, Matthew E. Peters, Arman Cohan 의 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 논문과 함께 발표했습니다.
308 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (Meta AI 에서) Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 의 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 논문과 함께 발표했습니다.
309 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (South China University of Technology 에서) Jiapeng Wang, Lianwen Jin, Kai Ding 의 [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) 논문과 함께 발표했습니다.
310 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (The FAIR team of Meta AI 에서 제공)은 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.의 [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)논문과 함께 발표했습니다.
311 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (AllenAI 에서) Iz Beltagy, Matthew E. Peters, Arman Cohan 의 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 논문과 함께 발표했습니다.
312 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (Google AI 에서) Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 의 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 논문과 함께 발표했습니다.
313 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (Studio Ousia 에서) Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 의 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 논문과 함께 발표했습니다.
314 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (UNC Chapel Hill 에서) Hao Tan and Mohit Bansal 의 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 논문과 함께 발표했습니다.
315 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (Facebook 에서) Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 의 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 논문과 함께 발표했습니다.
316 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (Facebook 에서) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 의 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 논문과 함께 발표했습니다.
317 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
318 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (Microsoft Research Asia 에서) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei 의 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) 논문과 함께 발표했습니다.
319 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (FAIR and UIUC 에서 제공)은 Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.의 [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527)논문과 함께 발표했습니다.
320 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (Meta and UIUC 에서) Bowen Cheng, Alexander G. Schwing, Alexander Kirillov 의 [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) 논문과 함께 발표했습니다.
321 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (Google AI 에서 제공)은 Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.의 [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662)논문과 함께 발표했습니다.
322 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook 에서) Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 의 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 논문과 함께 발표했습니다.
323 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook 에서) Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 의 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 논문과 함께 발표했습니다.
324 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (Facebook 에서 제공)은 Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.의 [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655)논문과 함께 발표했습니다.
325 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA 에서) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 의 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 논문과 함께 발표했습니다.
326 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA 에서) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 의 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 논문과 함께 발표했습니다.
327 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (Alibaba Research 에서 제공)은 Peng Wang, Cheng Da, and Cong Yao.의 [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592)논문과 함께 발표했습니다.
328 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (Studio Ousia 에서) Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 의 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 논문과 함께 발표했습니다.
329 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (Facebook 에서 제공)은 Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.의 [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516)논문과 함께 발표했습니다.
330 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (CMU/Google Brain 에서) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 의 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 논문과 함께 발표했습니다.
331 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (Google Inc. 에서) Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam 의 [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) 논문과 함께 발표했습니다.
332 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (Google Inc. 에서) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen 의 [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) 논문과 함께 발표했습니다.
333 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple 에서) Sachin Mehta and Mohammad Rastegari 의 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 논문과 함께 발표했습니다.
334 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (Apple 에서 제공)은 Sachin Mehta and Mohammad Rastegari.의 [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680)논문과 함께 발표했습니다.
335 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (Microsoft Research 에서) Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 의 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 논문과 함께 발표했습니다.
336 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI 에서) Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 의 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 논문과 함께 발표했습니다.
337 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (RUC AI Box 에서) Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 의 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 논문과 함께 발표했습니다.
338 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (SHI Labs 에서) Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi 의 [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) 논문과 함께 발표했습니다.
339 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (Huawei Noah’s Ark Lab 에서) Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 의 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 논문과 함께 발표했습니다.
340 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (Meta 에서) the NLLB team 의 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 논문과 함께 발표했습니다.
341 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (Meta 에서 제공)은 the NLLB team.의 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672)논문과 함께 발표했습니다.
342 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (the University of Wisconsin - Madison 에서) Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 의 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 논문과 함께 발표했습니다.
343 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (SHI Labs 에서) Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi 의 [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) 논문과 함께 발표했습니다.
344 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
345 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (Meta AI 에서) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 의 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 논문과 함께 발표했습니다.
346 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI 에서) Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 의 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 논문과 함께 발표했습니다.
347 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (Google 에서) Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 의 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 논문과 함께 발표했습니다.
348 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google 에서) Jason Phang, Yao Zhao, Peter J. Liu 의 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 논문과 함께 발표했습니다.
349 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (DeepMind 에서) Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 의 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 논문과 함께 발표했습니다.
350 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (VinAI Research 에서) Dat Quoc Nguyen and Anh Tuan Nguyen 의 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 논문과 함께 발표했습니다.
351 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (Google 에서 제공)은 Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.의 [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347)논문과 함께 발표했습니다.
352 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (UCLA NLP 에서) Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 의 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 논문과 함께 발표했습니다.
353 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (Sea AI Labs 에서) Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 의 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 논문과 함께 발표했습니다.
354 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (Microsoft Research 에서) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 의 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 논문과 함께 발표했습니다.
355 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (NVIDIA 에서) Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 의 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 논문과 함께 발표했습니다.
356 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (Facebook 에서) Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 의 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 논문과 함께 발표했습니다.
357 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google Research 에서) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 의 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 논문과 함께 발표했습니다.
358 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (Google Research 에서) Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 의 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 논문과 함께 발표했습니다.
359 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (META Research 에서) Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár 의 [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) 논문과 함께 발표했습니다.
360 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (Google Research 에서) Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 의 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 논문과 함께 발표했습니다.
361 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (Microsoft Research 에서) Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun 의 [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) 논문과 함께 발표했습니다.
362 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (Facebook 에서) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 의 [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 논문과 함께 발표했습니다.
363 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (Facebook 에서) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli 의 [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) 논문과 함께 발표했습니다.
364 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (WeChatAI 에서) HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou 의 [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) 논문과 함께 발표했습니다.
365 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (ZhuiyiTechnology 에서) Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 의 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 논문과 함께 발표했습니다.
366 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (Bo Peng 에서 제공)은 Bo Peng 의 [이 저장소](https://github.com/BlinkDL/RWKV-LM)에서 공개되었습니다.
367 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (NVIDIA 에서) Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 의 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 논문과 함께 발표했습니다.
368 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (Meta AI 에서 제공)은 Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.의 [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf)논문과 함께 발표했습니다.
369 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다.
370 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다.
371 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (Microsoft Research 에서 제공)은 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.의 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205)논문과 함께 발표했습니다.
372 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (Facebook 에서) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 의 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 논문과 함께 발표했습니다.
373 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (Facebook 에서) Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 의 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 논문과 함께 발표했습니다.
374 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (Tel Aviv University 에서) Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 의 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 논문과 함께 발표했습니다.
375 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (Berkeley 에서) Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 의 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 논문과 함께 발표했습니다.
376 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (MBZUAI 에서 제공)은 Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.의 [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446)논문과 함께 발표했습니다.
377 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (Microsoft 에서) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 의 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 논문과 함께 발표했습니다.
378 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft 에서) Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 의 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 논문과 함께 발표했습니다.
379 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (University of Würzburg 에서) Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte 의 [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) 논문과 함께 발표했습니다.
380 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (Google 에서) William Fedus, Barret Zoph, Noam Shazeer. 의 [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) 논문과 함께 발표했습니다.
381 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (Google AI 에서) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 의 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 논문과 함께 발표했습니다.
382 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
383 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (Microsoft Research 에서) Brandon Smock, Rohith Pesala, Robin Abraham 의 [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) 논문과 함께 발표했습니다.
384 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI 에서) Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 의 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 논문과 함께 발표했습니다.
385 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (Microsoft Research 에서) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 의 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 논문과 함께 발표했습니다.
386 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
387 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (Facebook 에서) Gedas Bertasius, Heng Wang, Lorenzo Torresani 의 [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) 논문과 함께 발표했습니다.
388 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (the University of California at Berkeley 에서) Michael Janner, Qiyang Li, Sergey Levine 의 [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) 논문과 함께 발표했습니다.
389 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU 에서) Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 의 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 논문과 함께 발표했습니다.
390 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (Microsoft 에서) Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 의 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 논문과 함께 발표했습니다.
391 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (UNC Chapel Hill 에서) Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal 의 [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) 논문과 함께 발표했습니다.
392 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (Google Research 에서) Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 의 [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) 논문과 함께 발표했습니다.
393 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (Microsoft Research 에서) Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 의 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 논문과 함께 발표했습니다.
394 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (Microsoft Research 에서) Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 의 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 논문과 함께 발표했습니다.
395 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (Peking University 에서 제공)은 Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.의 [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221)논문과 함께 발표했습니다.
396 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (Tsinghua University and Nankai University 에서) Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 의 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 논문과 함께 발표했습니다.
397 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (Multimedia Computing Group, Nanjing University 에서) Zhan Tong, Yibing Song, Jue Wang, Limin Wang 의 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 논문과 함께 발표했습니다.
398 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain 에서) Wonjae Kim, Bokyung Son, Ildoo Kim 의 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 논문과 함께 발표했습니다.
399 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (Google AI 에서) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 의 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 논문과 함께 발표했습니다.
400 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP 에서) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 의 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 논문과 함께 발표했습니다.
401 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (Google AI 에서) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 의 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 논문과 함께 발표했습니다.
402 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (Meta AI 에서) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 의 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 논문과 함께 발표했습니다.
403 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (Meta AI 에서) Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas 의 [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) 논문과 함께 발표했습니다.
404 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (Facebook AI 에서) Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 의 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 논문과 함께 발표했습니다.
405 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI 에서) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 의 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 논문과 함께 발표했습니다.
406 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI 에서) Qiantong Xu, Alexei Baevski, Michael Auli 의 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 논문과 함께 발표했습니다.
407 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (Microsoft Research 에서) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei 의 [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) 논문과 함께 발표했습니다.
408 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI 에서) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever 의 [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 논문과 함께 발표했습니다.
409 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (Microsoft Research 에서) Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling 의 [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 논문과 함께 발표했습니다.
410 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (Meta AI 에서 제공)은 Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.의 [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255)논문과 함께 발표했습니다.
411 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (Facebook AI 에서 제공) Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li 의 [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) 논문과 함께 발표했습니다.
412 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (Facebook 에서) Guillaume Lample and Alexis Conneau 의 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 논문과 함께 발표했습니다.
413 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (Microsoft Research 에서) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 의 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 논문과 함께 발표했습니다.
414 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (Facebook AI 에서) Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 의 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 논문과 함께 발표했습니다.
415 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI 에서) Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 의 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 논문과 함께 발표했습니다.
416 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (Meta AI 에서) Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa 의 [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) 논문과 함께 발표했습니다.
417 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU 에서) Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 의 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 논문과 함께 발표했습니다.
418 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI 에서) Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 의 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 논문과 함께 발표했습니다.
419 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (Facebook AI 에서) Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 의 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 논문과 함께 발표했습니다.
420 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (Huazhong University of Science & Technology 에서) Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 의 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 논문과 함께 발표했습니다.
421 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (the University of Wisconsin - Madison 에서) Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 의 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 논문과 함께 발표했습니다.
422 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다.
423
424 각 모델이 Flax, PyTorch, TensorFlow로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요.
425
426 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
427
428 ## 더 알아보기
429
430 | 섹션 | 설명 |
431 |-|-|
432 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 |
433 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 |
434 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 |
435 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델을 PyTorch/TensorFlow 학습 루프와 `Trainer` API에서 사용하기 |
436 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 |
437 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 |
438 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기|
439
440 ## 인용
441
442 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요:
443 ```bibtex
444 @inproceedings{wolf-etal-2020-transformers,
445 title = "Transformers: State-of-the-Art Natural Language Processing",
446 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
447 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
448 month = oct,
449 year = "2020",
450 address = "Online",
451 publisher = "Association for Computational Linguistics",
452 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
453 pages = "38--45"
454 }
455 ```
456
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
72 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
73 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
74 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
75 <p>
76 </h4>
77
78 <h3 align="center">
79 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p>
80 </h3>
81
82 <h3 align="center">
83 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
84 </h3>
85
86 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨是让最先进的 NLP 技术人人易用。
87
88 🤗 Transformers 提供了便于快速下载和使用的 API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。
89
90 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。
91
92 ## 在线演示
93
94 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。
95
96 这里是一些例子:
97 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
98 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
99 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
100 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
101 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
102 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
103 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
104
105 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。
106
107 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务
108
109 <a target="_blank" href="https://huggingface.co/support">
110 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
111 </a><br>
112
113 ## 快速上手
114
115 我们为快速使用模型提供了 `pipeline`(流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子:
116
117 ```python
118 >>> from transformers import pipeline
119
120 # 使用情绪分析流水线
121 >>> classifier = pipeline('sentiment-analysis')
122 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
123 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
124 ```
125
126 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99.97% 的置信度。
127
128 许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案:
129
130 ``` python
131 >>> from transformers import pipeline
132
133 # 使用问答流水线
134 >>> question_answerer = pipeline('question-answering')
135 >>> question_answerer({
136 ... 'question': 'What is the name of the repository ?',
137 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
138 ... })
139 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
140
141 ```
142
143 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线 API 支持的任务。
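
作为补充,流水线也可以显式指定要使用的检查点(不指定 `model` 时会使用该任务的默认模型)。下面是一个最小示意(这里以 `text-generation` 任务和 `gpt2` 检查点为例,仅作参考):

```python
>>> from transformers import pipeline

>>> # 指定任务和具体检查点;生成相关的参数会透传给底层的 generate 方法
>>> generator = pipeline("text-generation", model="gpt2")
>>> generator("A long time ago,", max_length=30, num_return_sequences=1)
```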
144
145 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例:
146 ```python
147 >>> from transformers import AutoTokenizer, AutoModel
148
149 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
150 >>> model = AutoModel.from_pretrained("bert-base-uncased")
151
152 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
153 >>> outputs = model(**inputs)
154 ```
155 这里是等效的 TensorFlow 代码:
156 ```python
157 >>> from transformers import AutoTokenizer, TFAutoModel
158
159 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
160 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
161
162 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
163 >>> outputs = model(**inputs)
164 ```
165
166 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。
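
下面是一个最小示例(沿用上文的 `bert-base-uncased` 检查点,仅作示意),演示对字符串列表 (list) 的调用,以及用 `**` 解包把返回的词典 (dict) 传给模型:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # 对字符串列表 (list) 调用;padding=True 将批内句子补齐到相同长度
>>> batch = tokenizer(["Hello world!", "Hello transformers!"], padding=True, truncation=True, return_tensors="pt")
>>> # 词符化器返回词典 (dict),用 ** 解包后直接传给模型
>>> outputs = model(**batch)
```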
167
168 模型本身是一个常规的 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以按常规方式使用。[这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer`(训练器)API 来在一个新的数据集上快速微调。
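
下面是一个示意性的最小草图(并非完整教程;这里额外假设安装了 🤗 Datasets,并以 GLUE 的 sst2 子集为例),展示用 `Trainer` 在新数据集上快速微调的大致流程,具体参数请以上面的教程为准:

```python
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

>>> dataset = load_dataset("glue", "sst2")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> # 对整个数据集做词符化(批处理以加速)
>>> def tokenize(batch):
...     return tokenizer(batch["sentence"], padding="max_length", truncation=True)
>>> encoded = dataset.map(tokenize, batched=True)

>>> # 示意用的训练配置;实际训练请按需调整超参数
>>> training_args = TrainingArguments(output_dir="test_trainer", num_train_epochs=1)
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded["train"],
...     eval_dataset=encoded["validation"],
... )
>>> trainer.train()
```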
169
170 ## 为什么要用 transformers?
171
172 1. 便于使用的先进模型:
173 - NLU 和 NLG 上表现优越
174 - 对教学和实践友好且低门槛
175 - 高级抽象,只需了解三个类
176 - 对所有模型统一的API
177
178 1. 更低计算开销,更少的碳排放:
179 - 研究人员可以分享已训练的模型而非每次从头开始训练
180 - 工程师可以减少计算用时和生产环境开销
181 - 数十种模型架构、两千多个预训练模型、100多种语言支持
182
183 1. 对于模型生命周期的每一个部分都面面俱到:
184 - 训练先进的模型,只需 3 行代码
185 - 模型在不同深度学习框架间任意转移,随你心意
186 - 为训练、评估和生产选择最适合的框架,衔接无缝
187
188 1. 为你的需求轻松定制专属模型和用例:
189 - 我们为每种模型架构提供了多个用例来复现原论文结果
190 - 模型内部结构保持透明一致
191 - 模型文件可单独使用,方便魔改和快速实验
192
193 ## 什么情况下我不该用 transformers?
194
195 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。
196 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。
197 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。
198
199 ## 安装
200
201 ### 使用 pip
202
203 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。
204
205 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
206
207 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。
208
209 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。
210
211 当这些后端之一安装成功后, 🤗 Transformers 可依此安装:
212
213 ```bash
214 pip install transformers
215 ```
216
217 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。
218
219 ### 使用 conda
220
221 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。
222
223 🤗 Transformers 可以通过 conda 依此安装:
224
225 ```shell script
226 conda install -c huggingface transformers
227 ```
228
229 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。
230
231 ## 模型架构
232
233 🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。
234
235 目前的检查点数量: 
236
237 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)):
238
239 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
240 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (来自 Google Research) 伴随论文 [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) 由 Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig 发布。
241 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (来自 BAAI) 伴随论文 [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) 由 Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell 发布。
242 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (来自 MIT) 伴随论文 [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) 由 Yuan Gong, Yu-An Chung, James Glass 发布。
243 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
244 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
245 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
246 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
247 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
248 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
249 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
250 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
251 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
252 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
253 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (来自 Microsoft Research AI4Science) 伴随论文 [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) 由 Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu 发布。
254 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (来自 Google AI) 伴随论文 [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) 由 Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby 发布。
255 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
256 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
257 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (来自 Salesforce) 伴随论文 [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) 由 Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi 发布。
258 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (来自 Salesforce) 伴随论文 [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) 由 Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi 发布。
259 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
260 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
261 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
264 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
265 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (来自 OFA-Sys) 伴随论文 [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) 由 An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou 发布。
266 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (来自 LAION-AI) 伴随论文 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) 由 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov 发布。
267 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
268 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (来自 University of Göttingen) 伴随论文 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 由 Timo Lüddecke and Alexander Ecker 发布。
269 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
270 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (来自 Microsoft Research Asia) 伴随论文 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 由 Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 发布。
271 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
272 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。
273 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
274 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
275 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
276 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
277 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。
278 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。
279 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
280 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
281 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。
282 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (来自 SenseTime Research) 伴随论文 [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) 由 Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai 发布。
283 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
284 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (来自 Google AI) 伴随论文 [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) 由 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun 发布。
285 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (来自 The University of Texas at Austin) 伴随论文 [NMS Strikes Back](https://arxiv.org/abs/2212.06137) 由 Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl 发布。
286 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
287 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
288 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (来自 SHI Labs) 伴随论文 [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) 由 Ali Hassani and Humphrey Shi 发布。
289 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。
290 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。
291 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。
292 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
293 1. **[DPT](https://huggingface.co/docs/transformers/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。
294 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (来自 Snap Research) 伴随论文 [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) 由 Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren 发布。
295 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
296 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
297 1. **[EnCodec](https://huggingface.co/docs/transformers/main/model_doc/encodec)** (来自 Meta AI) 伴随论文 [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) 由 Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi 发布。
298 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
299 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (来自 Baidu) 伴随论文 [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) 由 Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu 发布。
300 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (来自 Baidu) 伴随论文 [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) 由 Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang 发布。
301 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
302 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
303 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
304 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
305 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。
306 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
307 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (来自 Microsoft Research) 伴随论文 [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) 由 Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao 发布。
308 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
309 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (来自 Microsoft Research) 伴随论文 [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) 由 Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang 发布。
310 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。
311 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
312 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang 和 Connor Leahy。
313 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
314 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (来自 ABEJA) 由 Shinya Otani, Takayoshi Makabe, Anuj Arora, Kyo Hattori 发布。
315 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
316 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 随仓库 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 发布,作者为 Ben Wang 和 Aran Komatsuzaki。
317 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
318 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (来自 BigCode) 伴随论文 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) 由 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra 发布。
319 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
320 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
321 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
322 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
323 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
324 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
325 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
326 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
327 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
328 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
329 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。
330 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
331 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
332 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。
333 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (来自 South China University of Technology) 伴随论文 [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) 由 Jiapeng Wang, Lianwen Jin, Kai Ding 发布。
334 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (来自 The FAIR team of Meta AI) 伴随论文 [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) 由 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample 发布。
335 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
336 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。
337 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
338 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
339 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。
340 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
341 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
342 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (来自 Microsoft Research Asia) 伴随论文 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) 由 Junlong Li, Yiheng Xu, Lei Cui, Furu Wei 发布。
343 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (来自 FAIR and UIUC) 伴随论文 [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) 由 Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar 发布。
344 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
345 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (来自 Google AI) 伴随论文 [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) 由 Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos 发布。
346 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
347 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
348 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (来自 Facebook) 伴随论文 [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) 由 Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer 发布。
349 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
350 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
351 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (来自 Alibaba Research) 伴随论文 [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) 由 Peng Wang, Cheng Da, and Cong Yao 发布。
352 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。
353 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (来自 Facebook) 伴随论文 [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) 由 Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli 发布。
354 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。
355 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (来自 Google Inc.) 伴随论文 [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) 由 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam 发布。
356 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (来自 Google Inc.) 伴随论文 [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) 由 Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen 发布。
357 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。
358 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (来自 Apple) 伴随论文 [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) 由 Sachin Mehta and Mohammad Rastegari 发布。
359 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
360 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
361 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。
362 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (来自 SHI Labs) 伴随论文 [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) 由 Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi 发布。
363 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。
364 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
365 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
366 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。
367 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (来自 SHI Labs) 伴随论文 [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) 由 Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi 发布。
368 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (来自 [s-JoL](https://huggingface.co/s-JoL)) 由 [Open-Llama](https://github.com/s-JoL/Open-Llama) 发布。
369 1. **[OPT](https://huggingface.co/docs/transformers/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
370 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
371 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
372 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (来自 Google) 伴随论文 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 由 Jason Phang, Yao Zhao, Peter J. Liu 发布。
373 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
374 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
375 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (来自 Google) 伴随论文 [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) 由 Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova 发布。
376 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。
377 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。
378 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
379 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
380 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。
381 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。
382 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
383 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
384 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
385 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
386 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
387 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (来自 Facebook) 伴随论文 [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) 由 Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli 发布。
388 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (来自 WeChatAI), 伴随论文 [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) 由 HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou 发布。
389 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
390 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (来自 Bo Peng) 随仓库 [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) 由 Bo Peng 发布。
391 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
392 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (来自 Meta AI) 伴随论文 [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) 由 Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick 发布。
393 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
394 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
395 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (来自 Microsoft Research) 伴随论文 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) 由 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei 发布。
396 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
397 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
398 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
399 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
400 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (来自 MBZUAI) 伴随论文 [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) 由 Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan 发布。
401 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。
402 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。
403 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (来自 University of Würzburg) 伴随论文 [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) 由 Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte 发布。
404 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
405 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
406 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
407 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (来自 Microsoft Research) 伴随论文 [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) 由 Brandon Smock, Rohith Pesala, Robin Abraham 发布。
408 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
409 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。
410 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
411 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
412 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
413 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
414 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
415 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (来自 UNC Chapel Hill) 伴随论文 [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) 由 Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal 发布。
416 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
417 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
418 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
419 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (来自 Peking University) 伴随论文 [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) 由 Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun 发布。
420 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。
421 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。
422 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。
423 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
424 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
425 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
426 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。
427 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (来自 Meta AI) 伴随论文 [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) 由 Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas 发布。
428 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
429 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。
430 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。
431 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
432 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (来自 OpenAI) 伴随论文 [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 由 Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever 发布。
433 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (来自 Microsoft Research) 伴随论文 [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 由 Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling 发布。
434 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (来自 Meta AI) 伴随论文 [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) 由 Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe 发布。
435 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
436 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
437 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
438 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
439 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
440 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (来自 Meta AI) 伴随论文 [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) 由 Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa 发布。
441 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
442 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
443 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
444 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
445 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
446 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到它们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
447
448 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
449
450 这些实现均已在多个数据集上测试(请参看用例脚本),并应与原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
451
452
453 ## 了解更多
454
455 | 章节 | 描述 |
456 |-|-|
457 | [文档](https://huggingface.co/docs/transformers/) | 完整的 API 文档和教程 |
458 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
459 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
460 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
461 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 |
462 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 向社区上传并分享你微调的模型 |
463 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
464
465 ## 引用
466
467 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
468 ```bibtex
469 @inproceedings{wolf-etal-2020-transformers,
470 title = "Transformers: State-of-the-Art Natural Language Processing",
471 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
472 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
473 month = oct,
474 year = "2020",
475 address = "Online",
476 publisher = "Association for Computational Linguistics",
477 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
478 pages = "38--45"
479 }
480 ```
481
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
57 <br>
58 <p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
84 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
85 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
86 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
87 <p>
88 </h4>
89
90 <h3 align="center">
91 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
92 </h3>
93
94 <h3 align="center">
95 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
96 </h3>
97
98 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
99
100 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
101
102 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
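
下面是一個簡單的示意範例(假設同時安裝了 PyTorch 與 TensorFlow,並以 `bert-base-uncased` 檢查點為例),展示如何以 PyTorch 儲存模型後,再於 TensorFlow 中載入同一份權重:

```python
>>> from transformers import AutoModel, TFAutoModel

# 以 PyTorch 下載並儲存預訓練模型
>>> pt_model = AutoModel.from_pretrained("bert-base-uncased")
>>> pt_model.save_pretrained("./my-bert")

# 在 TensorFlow 中載入同一份權重(from_pt=True 代表從 PyTorch 權重轉換)
>>> tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```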
103
104 ## 線上Demo
105
106 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。
107
108 這裡是一些範例:
109 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
110 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
111 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
112 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
113 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
114 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
115 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
116
117 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
118
119 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
120
121 <a target="_blank" href="https://huggingface.co/support">
122 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
123 </a><br>
124
125 ## 快速上手
126
127 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
128
129 ```python
130 >>> from transformers import pipeline
131
132 # 使用情緒分析 pipeline
133 >>> classifier = pipeline('sentiment-analysis')
134 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
135 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
136 ```
137
138 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案「正面」(positive) 具有 99.97% 的信賴度。
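
作為補充(僅為示意範例),`pipeline` 也接受串列 (list) 輸入,可一次對多個句子進行推論:

```python
>>> from transformers import pipeline

>>> classifier = pipeline('sentiment-analysis')
# 傳入串列 (list) 時,每個句子都會回傳一個包含 label 與 score 的字典 (dict)
>>> results = classifier(["We are very happy.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, score: {round(result['score'], 4)}")
```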
139
140 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
141
142 ``` python
143 >>> from transformers import pipeline
144
145 # 使用問答 pipeline
146 >>> question_answerer = pipeline('question-answering')
147 >>> question_answerer({
148 ... 'question': 'What is the name of the repository ?',
149 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
150 ... })
151 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
152
153 ```
154
155 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
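
舉例來說(僅為示意範例),你也可以在建立 `pipeline` 時同時指定任務與 model hub 上的特定檢查點,例如用 `t5-base` 將英文翻譯成法文:

```python
>>> from transformers import pipeline

# 指定任務名稱與 model hub 上的檢查點(此處以 t5-base 為例)
>>> translator = pipeline("translation_en_to_fr", model="t5-base")
>>> translator("Hugging Face is a technology company based in New York and Paris.")
```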
156
157 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
158 ```python
159 >>> from transformers import AutoTokenizer, AutoModel
160
161 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
162 >>> model = AutoModel.from_pretrained("bert-base-uncased")
163
164 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
165 >>> outputs = model(**inputs)
166 ```
167 這裡是對應的 TensorFlow 程式碼:
168 ```python
169 >>> from transformers import AutoTokenizer, TFAutoModel
170
171 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
172 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
173
174 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
175 >>> outputs = model(**inputs)
176 ```
177
178 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
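
以下是一個簡單的示意(以 `bert-base-uncased` 為例),展示 tokenizer 輸出的字典 (dict) 內容:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# 輸出是一個字典 (dict),包含 input_ids、token_type_ids、attention_mask 等鍵
>>> encoded = tokenizer(["Hello world!", "Hello transformers!"], padding=True, return_tensors="pt")
>>> list(encoded.keys())
['input_ids', 'token_type_ids', 'attention_mask']
```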
179
180 模型本身是一個常規的 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。[這個教學](https://huggingface.co/docs/transformers/training)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
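
以下是一個極簡的 PyTorch 訓練步驟示意(假設做二元句子分類;僅為說明模型可以直接放進一般的訓練迴圈,並非完整的微調流程):

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> batch = tokenizer(["I love this.", "I hate this."], padding=True, return_tensors="pt")
>>> labels = torch.tensor([1, 0])

# 傳入 labels 時,模型輸出會直接包含 loss,可用於反向傳播
>>> outputs = model(**batch, labels=labels)
>>> outputs.loss.backward()
>>> optimizer.step()
```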
181
182 ## 為什麼要用 transformers?
183
184 1. 便於使用的先進模型:
185 - NLU 和 NLG 上性能卓越
186 - 對教學和實作友好且低門檻
187 - 高度抽象,使用者只須學習 3 個類別
188 - 對所有模型使用的制式化 API
189
190 1. 更低的運算成本,更少的碳排放:
191 - 研究人員可以分享已訓練的模型而非每次從頭開始訓練
192 - 工程師可以減少計算時間以及生產成本
193 - 數十種模型架構、兩千多個預訓練模型、100 多種語言支援
194
195 1. 對於模型生命週期的每一個部分都面面俱到:
196 - 訓練先進的模型,只需 3 行程式碼
197 - 模型可以在不同深度學習框架之間任意轉換
198 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
199
200 1. 為你的需求輕鬆客製化專屬模型和範例:
201 - 我們為每種模型架構提供了多個範例來重現原論文結果
202 - 一致的模型內部架構
203 - 模型檔案可單獨使用,便於修改和快速實驗
204
205 ## 什麼情況下我不該用 transformers?
206
207 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
208 - `Trainer` API 並不相容所有模型,它只針對本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
209 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
210
211 ## 安裝
212
213 ### 使用 pip
214
215 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
216
217 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
218
219 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
220
221 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
222
223 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
224
225 ```bash
226 pip install transformers
227 ```
228
229 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
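
舉例來說(僅為示意,實際步驟請以上方連結的安裝文件為準),可以直接從 GitHub 安裝開發中的版本:

```bash
pip install git+https://github.com/huggingface/transformers
```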
230
231 ### 使用 conda
232
233 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
234
235 🤗 Transformers 可以藉由 conda 依此安裝:
236
237 ```shell script
238 conda install -c huggingface transformers
239 ```
240
241 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
242
243 ## 模型架構
244
245 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
246
247 目前的檢查點數量: 
248
249 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
250
251 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
252 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
253 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
254 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
255 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
256 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
257 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
258 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
259 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
260 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
261 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
262 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
263 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
264 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
265 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
266 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
267 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
268 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
269 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
270 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (from Salesforce) released with the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.
271 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
272 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
273 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
274 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
275 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
276 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
277 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
278 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
279 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
280 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
281 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
282 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
283 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
284 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
285 1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
286 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
287 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
288 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
289 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
290 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
291 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
292 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
293 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
294 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
295 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
296 1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
297 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
298 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
299 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
300 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
301 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
302 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
303 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
304 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
305 1. **[DPT](https://huggingface.co/docs/transformers/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
306 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
307 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
308 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
309 1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (from Meta AI) released with the paper [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
310 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
311 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
312 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
313 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
314 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
315 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
316 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
317 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
318 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
319 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
320 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
321 1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
322 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
323 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
324 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
325 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
326 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
327 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
328 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
329 1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
330 1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
331 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
332 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
333 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
334 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
335 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
336 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
337 1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
338 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
339 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
340 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
341 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
342 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
343 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
344 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
345 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
346 1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
347 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
348 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
349 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
350 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
351 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
352 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
353 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
354 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
355 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
356 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
357 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
358 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
359 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
360 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (from Facebook) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
361 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
362 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
363 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao.
364 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
365 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
366 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
367 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
368 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
369 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
370 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
371 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
372 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
373 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
374 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
375 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
376 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
377 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
378 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
379 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
380 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama).
381 1. **[OPT](https://huggingface.co/docs/transformers/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
382 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
383 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
384 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
385 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
386 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
387 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
388 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
389 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
390 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
391 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
392 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
393 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
394 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
395 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
396 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
397 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
398 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
399 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
400 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
401 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
402 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (from Bo Peng) released in the repository [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) by Bo Peng.
403 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
404 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
405 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
406 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
407 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
408 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
409 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
410 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
411 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
412 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
413 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
414 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
415 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
416 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
417 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
418 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
419 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
420 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
421 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
422 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
423 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
424 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
425 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
426 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
427 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
428 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
429 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
430 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
431 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
432 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
433 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
434 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
435 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
436 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
437 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
438 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
439 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
440 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
441 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
442 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
443 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
444 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
445 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
446 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
447 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
448 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
449 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
450 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
451 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
452 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
453 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
454 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
455 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
456 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
457 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
458 1. Want to contribute a new model? We have a **detailed guide and templates** to walk you through the process of adding a new model. You can find them in the [`templates`](./templates) directory. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open a new issue to get feedback before starting your PR.
459
460 To check whether a model already has a Flax, PyTorch or TensorFlow implementation, or whether it has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
461
462 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on the implementations in [this section](https://huggingface.co/docs/transformers/examples) of the examples documentation.
463
464
465 ## Learn more
466
467 | Section | Description |
468 |-|-|
469 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
470 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
471 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
472 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop or with the `Trainer` API |
473 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for a wide range of tasks |
474 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
475 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrating from `pytorch-transformers` or `pytorch-pretrained-bert` to 🤗 Transformers |
476
477 ## Citation
478
479 We have officially published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for the 🤗 Transformers library. If you use the library, you can cite it as follows:
480 ```bibtex
481 @inproceedings{wolf-etal-2020-transformers,
482 title = "Transformers: State-of-the-Art Natural Language Processing",
483 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
484 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
485 month = oct,
486 year = "2020",
487 address = "Online",
488 publisher = "Association for Computational Linguistics",
489 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
490 pages = "38--45"
491 }
492 ```
493
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: huggingface/transformers
base_commit: 45f71d793d944fe44cd30edf1bb2a250f7ccbe77

(Not So) Bad words list for text generation
### Feature request
Support a soft penalization logits processor in the transformers generate method (extends NoBadWordsLogitsProcessor).
### Motivation
- The [NoBadWordsLogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.NoBadWordsLogitsProcessor) forbids the generation of certain tokens _in absolute terms_ by overwriting the logits to minus infinity
- The request is to add a softer version of this, one in which certain tokens are penalized or boosted but _only mildly_
- This is in the spirit of the `logit_bias` parameter in the generate methods [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) (OpenAI) and [here](https://docs.cohere.ai/reference/generate) (Cohere); a minimal additive-bias sketch in that spirit is shown right after this list
- Possible use cases include, but are not limited to: enhance extractiveness during document summarization by boosting tokens present in the input and style guidance by penalizing/boosting the appropriate vocabulary
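
To make the `logit_bias` idea concrete, here is a minimal additive-bias sketch (the class name, token ids and bias range are hypothetical; the concrete proposal below uses a multiplicative scheme instead):

```py
from typing import Dict

import torch
from transformers import LogitsProcessor


class LogitBiasProcessor(LogitsProcessor):
    """Adds a fixed bias to the logits of selected tokens instead of banning or forcing them."""

    def __init__(self, logit_bias: Dict[int, float]):
        self.logit_bias = logit_bias  # token id -> additive bias, e.g. kept within [-10, 10]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for token_id, bias in self.logit_bias.items():
            scores[:, token_id] += bias  # nudge the token up or down rather than hard-ban it
        return scores
```

Such a processor can be passed to `generate` through `logits_processor=LogitsProcessorList([...])`, exactly like the multiplicative variant proposed below.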
### Your contribution
**Overview**
- A new class is defined as `BendLogitsProcessor` based on the current `NoBadWordsLogitsProcessor` class
- The current argument `bad_words_ids` is enriched to include a float value per list of tokens_ids, aka the penalization/boosting score. Positive large values encourage the token to be generated while negative large values do the opposite
- Penalization/boosting scores are unbounded but could be later scaled as it seems to be the case in the implementations referenced above, e.g. `logit bias` is in [-10,10] [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) and [-100,100] [here](https://docs.cohere.ai/reference/generate)
- Observe that `NoBadWordsLogitsProcessor` behavior could be recovered just by explicitly setting penalization/boosting scores to float(“-Inf”)
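
As a short sketch, the proposed argument could look as follows (the token ids and scores here are placeholders):

```py
# Each entry pairs a penalization/boosting score with a list of token ids
bend_list = [
    [5.0, [1234]],          # boost a (hypothetical) single token
    [-2.5, [42, 99]],       # mildly discourage a two-token sequence
    [float("-inf"), [7]],   # hard-forbid, recovering the NoBadWordsLogitsProcessor behavior noted above
]
```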
**The new class**
This is very much the same as `NoBadWordsLogitsProcessor`; I tried to keep as much as possible intact. There might be a more efficient implementation.
```py
# Imports added here so the snippet is self-contained (the original issue assumed them from context)
from typing import Iterable, List, Union

import numpy as np
import torch

from transformers import LogitsProcessor
from transformers.utils import logging

logger = logging.get_logger(__name__)


class BendLogitsProcessor(LogitsProcessor):
    """
    [`LogitsProcessor`] that softly penalizes or boosts certain token/s

    Args:
        bend_list (`List[Union[float, List[int]]]`):
            List of lists, each holding a penalization/boosting coefficient and a list of token ids.
            In order to get the token ids of the words, use `tokenizer(bad_words, add_prefix_space=True,
            add_special_tokens=False).input_ids`.
        eos_token_id (`int`):
            The id of the *end-of-sequence* token.
    """

    def __init__(self, bend_list: List[Union[float, List[int]]], eos_token_id: int):
        self.bend_list = bend_list
        coefs = [coef for coef, tok in self.bend_list]
        words_ids = [tok for coef, tok in self.bend_list]

        if not isinstance(bend_list, List) or len(bend_list) == 0:
            raise ValueError(f"`bend_list` has to be a non-empty list, but is {bend_list}.")
        if any(not isinstance(word_ids, list) for word_ids in words_ids):
            raise ValueError(f"`words_ids` has to be a list of lists, but is {words_ids}.")
        if any(
            any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in word_ids)
            for word_ids in words_ids
        ):
            raise ValueError(
                f"Each list in `words_ids` has to be a list of positive integers, but is {words_ids}."
            )
        if any(not isinstance(coef, float) for coef in coefs):
            raise ValueError(f"`coefs` has to be a float, but is {coefs}.")

        words_ids = list(filter(lambda token_seq: token_seq != [eos_token_id], words_ids))

        # Split single-token entries from multi-token sequences; they are masked differently below
        self.words_id_length_1, self.coefs_length_1 = [], []
        self.words_id_length_greater_than_1, self.coefs_length_greater_than_1 = [], []
        for coef, word in zip(coefs, words_ids):
            if len(word) == 1:
                self.words_id_length_1.append(word[0])
                self.coefs_length_1.append(coef)
            else:
                self.words_id_length_greater_than_1.append(word)
                self.coefs_length_greater_than_1.append(coef)

        for token_seq in self.words_id_length_greater_than_1:
            if len(token_seq) == 0:
                raise ValueError(f"Words token sequences {words_ids} cannot have an empty list")

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        masks_length_1, scores_length_1 = [], torch.zeros_like(scores)
        masks_length_greater_than_1, scores_length_greater_than_1 = [], torch.zeros_like(scores)

        if len(self.words_id_length_1) > 0:
            for word_id, coef in zip(self.words_id_length_1, self.coefs_length_1):
                mask = self._get_mask_length_1(scores, word_id)
                masks_length_1.append(mask)
                if coef >= 0:
                    score = scores.masked_fill(scores.masked_fill(~mask, 0) < 0, 0) * (1 + coef) + \
                        scores.masked_fill(scores.masked_fill(~mask, 0) >= 0, 0) / (1 + coef)
                if coef < 0:
                    score = scores.masked_fill(scores.masked_fill(~mask, 0) < 0, 0) / (1 + abs(coef)) + \
                        scores.masked_fill(scores.masked_fill(~mask, 0) >= 0, 0) * (1 + abs(coef))
                scores_length_1 += score

        if len(self.words_id_length_greater_than_1) > 0:
            for word_ids, coef in zip(self.words_id_length_greater_than_1, self.coefs_length_greater_than_1):
                mask = self._get_mask_length_greater_than_1(input_ids.tolist(), scores, word_ids)
                masks_length_greater_than_1.append(mask)
                if coef >= 0:
                    score = scores.masked_fill(scores.masked_fill(~mask, 0) < 0, 0) * (1 + coef) + \
                        scores.masked_fill(scores.masked_fill(~mask, 0) >= 0, 0) / (1 + coef)
                if coef < 0:
                    score = scores.masked_fill(scores.masked_fill(~mask, 0) < 0, 0) / (1 + abs(coef)) + \
                        scores.masked_fill(scores.masked_fill(~mask, 0) >= 0, 0) * (1 + abs(coef))
                scores_length_greater_than_1 += score

        # Zero out the original scores at every penalized/boosted position, then add the adjusted scores back
        masks_all_lengths = masks_length_1 + masks_length_greater_than_1
        one_large_mask = torch.zeros_like(scores).bool()
        for mask in masks_all_lengths:
            one_large_mask = torch.bitwise_or(one_large_mask, mask)
        base_scores = scores.masked_fill(one_large_mask, 0.)

        new_scores = base_scores + scores_length_1 + scores_length_greater_than_1
        return new_scores

    def _get_mask_length_1(self, scores: torch.FloatTensor, word_id: List[int]) -> torch.BoolTensor:
        mask = torch.zeros(scores.shape[1])
        mask[word_id] = 1
        return mask.unsqueeze(0).to(scores.device).bool()

    def _tokens_match(self, prev_tokens: List[int], tokens: List[int]) -> bool:
        if len(tokens) == 0:
            return True
        elif len(tokens) > len(prev_tokens):
            return False
        else:
            return prev_tokens[-len(tokens):] == tokens

    def _calc_word_ids(self, prev_input_ids: List[List[int]], word_ids: List[int]) -> Iterable[int]:
        tokens = []
        for prev_input_ids_slice in prev_input_ids:
            tokens_slice = []
            if self._tokens_match(prev_input_ids_slice, word_ids[:-1]):
                tokens_slice.append(word_ids[-1])
            tokens.append(tokens_slice)
        return tokens

    def _get_mask_length_greater_than_1(self, input_ids: list, scores: torch.FloatTensor, word_ids: List[int]) -> torch.BoolTensor:
        dynamic_tokens = self._calc_word_ids(input_ids, word_ids)
        mask_list = []
        for idx, batch_tokens in enumerate(dynamic_tokens):
            for token in batch_tokens:
                # Eliminates invalid bad word IDs that are over the vocabulary size.
                if token <= scores.shape[1]:
                    mask_list.append([idx, token])
                else:
                    logger.error(
                        f"An invalid bad word ID is defined: {token}. This ID is not contained in the "
                        "vocabulary, and is therefore ignored."
                    )
        if not mask_list:
            mask = torch.zeros_like(scores).bool()
        else:
            mask = torch.LongTensor(mask_list)
            indices = torch.ones(len(mask))
            mask = (
                torch.sparse.LongTensor(mask.t(), indices, scores.size())
                .to(scores.device)
                .to_dense()
                .bool()
            )
        return mask
```
**An example**
Take the summarization example in the BART documentation [here](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration.forward.example). Set `add_prefix_space=True` in the tokenizer and remove `max_length = 20` from the generate method call.
```py
from transformers import AutoTokenizer, BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", add_prefix_space=True)
ARTICLE_TO_SUMMARIZE = (
"PG&E stated it scheduled the blackouts in response to forecasts for high winds "
"amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
"scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
This yields the following summary:
> Nearly 800 thousand customers were scheduled to be affected by the shutoffs. PG&E stated it scheduled the blackouts in response to forecasts for high winds.
At this point the new logits processor class is applied. The objective will be to make the model output the number of customers affected as digits and replace the word “shutoffs”. We do so by penalizing the token ids for “thousand” and “shutoffs” while boosting the ones for “shutdowns”.
```py
from transformers import LogitsProcessorList

logits_processor = LogitsProcessorList(
    [
        BendLogitsProcessor(
            bend_list=[
                [-10000., [7673]],           # thousand
                [1000., [5001, 29]],         # shutdowns
                [-1000000., [2572, 10816]],  # shutoffs
                [-1000000., [2572, 1529]],   # shutoffs
            ],
            eos_token_id=model.config.eos_token_id,
        )
    ]
)
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, logits_processor=logits_processor, renormalize_logits=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
If we call the summary generation again, this time including the logits processor and renormalizing, we get:
> Nearly 800,000 customers were scheduled to be affected by the shutdowns. PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.
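
For reference, the token ids used in `bend_list` above can be looked up directly from the tokenizer created earlier (the exact ids depend on the `facebook/bart-large-cnn` vocabulary, so treat the hard-coded values above as illustrative):

```py
# Look up the token ids for the penalized/boosted words with the tokenizer defined above
for word in ["thousand", "shutdowns", "shutoffs"]:
    print(word, tokenizer(word, add_special_tokens=False).input_ids)
```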
---
cc @gante
Hey @iiglesias-asapp 👋 Thank you for the suggestion!
Before we dive into adding code, a disclaimer -- one of the current problems with `.generate()` is that there are too many options, scaring users away from the docs. This means that I will be conservative before giving the green light to add more options 🤗
We do have an option to have control over extractive vs abstractive summarization, the `encoder_repetition_penalty` ([docs](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.encoder_repetition_penalty)). This is a multiplicative factor on the logits that increases/decreases the odds of reusing the tokens in the input.
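
A hedged sketch of that knob, reusing the BART summarization example from the issue body (the 1.2 value is arbitrary):

```py
# Values above 1.0 increase the odds of reusing input tokens (more extractive),
# values below 1.0 decrease them; 1.0 leaves the logits untouched.
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=2,
    min_length=0,
    encoder_repetition_penalty=1.2,
)
```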
Do you have more use cases in mind, where your suggestion would be critical?
Hi @gante! Thanks for the reply.
I agree that there are many options already 😅 I wasn't thinking of this as an additional option but more like an "upgrade" of the existing feature, since it gives the user a bit more flexibility while keeping the previous functionality, i.e. tokens are boosted/penalized instead of forced/forbidden, and users who want to forbid the appearance of certain tokens can still pass `float("-inf")` as the score.
The main use case I had in mind was cheap model customization via a set of (score, [tokens]) pairs. More generally, it is desirable to allow the model to generate a certain token if there is no natural replacement for it and to discourage it otherwise; the sort of soft penalization that other APIs allow.
@iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗
Let's not close this issue and do the following. If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)
@iiglesias-asapp does it sound good to you?
Sounds good! Thanks for considering it @gante
Please add this because I have an Alpaca model that was trained on a bad dataset with many cases of the input and output fields containing "<noinput" and "nooutput>" text, which causes my LLM to constantly respond with those words :/
@teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available
> @teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available
Oh can you point me to where/how I can use the bad_words_list
edit: nvm found it ty
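For anyone landing here with the same question, a minimal sketch of how `bad_words_ids` can be wired up to ban strings such as `<noinput` and `nooutput>` (the `gpt2` checkpoint and the prompt are placeholders; substitute your own fine-tuned model):
```py
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder checkpoint -- substitute your own fine-tuned model
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)

# Token ids of the strings that should never appear in the output
bad_words = ["<noinput", "nooutput>"]
bad_words_ids = tokenizer(bad_words, add_special_tokens=False).input_ids

inputs = tokenizer("Instruction: summarize the following text.", return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_new_tokens=32, bad_words_ids=bad_words_ids)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```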
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
> custom logits processors
> @iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗
>
> Let's not close this issue and do the following. If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)
>
> @iiglesias-asapp does it sound good to you?
@gante
There are many use cases:
1) Increase the length of the generated text by making the **end of text** token less probable.
2) If you use few-shot learning and you have a problem with the labels that are used, you can increase the probability of a label.
for example:
instruction: write me a joke about cars
answer: some response
instruction: write me a joke about [subject2]
answer: some response
instruction: write me a joke about [subject3]
answer: some response
then in some cases, when not everything works as it should, you need to increase the probability of `answer:`.
No-repeat n-gram settings are one option, but they sometimes generate strange text.
2a) The same applies if you use few-shot learning to generate HTML text. For example, when you want the text not to repeat and you set parameters for that, the HTML tags won't be repeated either and the text ends up strangely formatted. If you instead increase the probability of the HTML tags, you get much better output.
3) Paraphrasing for dataset augmentation: to get more unique paraphrases, it is good to lower the probability of the original words.
4) OpenAI has this feature; I really doubt they would implement something and write documentation for it if they did not think that some users would use it. (A minimal sketch of this kind of token-level bias is shown right after this list.)
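A minimal sketch of the kind of token-level bias being asked for, written as a custom logits processor (the class name, token ids, and bias values are made up for illustration):
```py
import torch
from transformers import LogitsProcessor, LogitsProcessorList


class LogitBiasProcessor(LogitsProcessor):
    """Adds a fixed additive bias to the logits of selected token ids (OpenAI-style logit_bias)."""

    def __init__(self, logit_bias: dict):
        # logit_bias maps token_id -> bias (negative discourages, positive encourages)
        self.logit_bias = logit_bias

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for token_id, bias in self.logit_bias.items():
            scores[:, token_id] += bias
        return scores


# e.g. discourage GPT-2's end-of-text token (50256) and mildly encourage token 318
logits_processor = LogitsProcessorList([LogitBiasProcessor({50256: -5.0, 318: 2.0})])
# output_ids = model.generate(input_ids, logits_processor=logits_processor)
```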
@gante
Here comes the 10th reaction!
Thanks for considering adding this feature. Really need this since I'm currently working on building APIs similar to [OpenAI API](https://platform.openai.com/docs/api-reference/completions/create#completions/create-logit_bias). It would be convenient if it is officially supported!
As promised, I've added it to my queue! 🫡
|
2023-06-17T18:20:09Z
|
<patch>
diff --git a/src/transformers/__init__.py b/src/transformers/__init__.py
--- a/src/transformers/__init__.py
+++ b/src/transformers/__init__.py
@@ -970,6 +970,7 @@
"PhrasalConstraint",
"PrefixConstrainedLogitsProcessor",
"RepetitionPenaltyLogitsProcessor",
+ "SequenceBiasLogitsProcessor",
"StoppingCriteria",
"StoppingCriteriaList",
"TemperatureLogitsWarper",
@@ -4733,6 +4734,7 @@
PhrasalConstraint,
PrefixConstrainedLogitsProcessor,
RepetitionPenaltyLogitsProcessor,
+ SequenceBiasLogitsProcessor,
StoppingCriteria,
StoppingCriteriaList,
TemperatureLogitsWarper,
diff --git a/src/transformers/generation/__init__.py b/src/transformers/generation/__init__.py
--- a/src/transformers/generation/__init__.py
+++ b/src/transformers/generation/__init__.py
@@ -56,6 +56,7 @@
"NoRepeatNGramLogitsProcessor",
"PrefixConstrainedLogitsProcessor",
"RepetitionPenaltyLogitsProcessor",
+ "SequenceBiasLogitsProcessor",
"EncoderRepetitionPenaltyLogitsProcessor",
"TemperatureLogitsWarper",
"TopKLogitsWarper",
@@ -182,6 +183,7 @@
NoRepeatNGramLogitsProcessor,
PrefixConstrainedLogitsProcessor,
RepetitionPenaltyLogitsProcessor,
+ SequenceBiasLogitsProcessor,
TemperatureLogitsWarper,
TopKLogitsWarper,
TopPLogitsWarper,
diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -142,11 +142,8 @@ class GenerationConfig(PushToHubMixin):
no_repeat_ngram_size (`int`, *optional*, defaults to 0):
If set to int > 0, all ngrams of that size can only occur once.
bad_words_ids(`List[List[int]]`, *optional*):
- List of token ids that are not allowed to be generated. In order to get the token ids of the words that
- should not appear in the generated text, make sure to set `add_prefix_space=True` when initializing the
- tokenizer, and use `tokenizer(bad_words, add_special_tokens=False).input_ids`. The `add_prefix_space`
- argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours come from
- `pre tokenizers`. Read more [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers).
+ List of list of token ids that are not allowed to be generated. Check
+ [`~generation.NoBadWordsLogitsProcessor`] for further documentation and examples.
force_words_ids(`List[List[int]]` or `List[List[List[int]]]`, *optional*):
List of token ids that must be generated. If given a `List[List[int]]`, this is treated as a simple list of
words that must be included, the opposite to `bad_words_ids`. If given `List[List[List[int]]]`, this
@@ -183,6 +180,10 @@ class GenerationConfig(PushToHubMixin):
A list of pairs of integers which indicates a mapping from generation indices to token indices that will be
forced before sampling. For example, `[[1, 123]]` means the second generated token will always be a token
of index 123.
+ sequence_bias (`Dict[Tuple[int], float]`, *optional*)):
+ Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the
+ sequence being selected, while negative biases do the opposite. Check
+ [`~generation.SequenceBiasLogitsProcessor`] for further documentation and examples.
> Parameters that define the output variables of `generate`
@@ -262,6 +263,7 @@ def __init__(self, **kwargs):
self.suppress_tokens = kwargs.pop("suppress_tokens", None)
self.begin_suppress_tokens = kwargs.pop("begin_suppress_tokens", None)
self.forced_decoder_ids = kwargs.pop("forced_decoder_ids", None)
+ self.sequence_bias = kwargs.pop("sequence_bias", None)
# Parameters that define the output variables of `generate`
self.num_return_sequences = kwargs.pop("num_return_sequences", 1)
diff --git a/src/transformers/generation/logits_process.py b/src/transformers/generation/logits_process.py
--- a/src/transformers/generation/logits_process.py
+++ b/src/transformers/generation/logits_process.py
@@ -15,7 +15,7 @@
import inspect
import math
-from typing import Callable, Iterable, List, Optional, Tuple, Union
+from typing import Callable, Dict, Iterable, List, Tuple, Union
import numpy as np
import torch
@@ -539,140 +539,218 @@ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> to
return scores
-class NoBadWordsLogitsProcessor(LogitsProcessor):
+class SequenceBiasLogitsProcessor(LogitsProcessor):
"""
- [`LogitsProcessor`] that enforces that specified sequences will never be sampled.
+ [`LogitsProcessor`] that applies an additive bias on sequences. The bias is applied to the last token of a sequence
+ when the next generated token can complete it. Consequently, to take the most of biasing sequences with more than
+ one token, consider using beam methods (to gracefully work around partially completed sequences that have a
+ negative bias) and applying the bias to their prefixes (to ensure the bias is applied earlier).
+
+ <Tip>
+
+ In order to get the token ids of the sequences that you want to bias, make sure to set `add_prefix_space=True` when
+ initializing the tokenizer, and use `tokenizer(bad_words, add_special_tokens=False).input_ids`. The
+ `add_prefix_space` argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours
+ come from `pre tokenizers`. Read more [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers).
+
+ </Tip>
Args:
- bad_words_ids (`List[List[int]]`):
- List of list of token ids that are not allowed to be generated. In order to get the token ids of the words
- that should not appear in the generated text, make sure to set `add_prefix_space=True` when initializing
- the tokenizer, and use `tokenizer(bad_words, add_special_tokens=False).input_ids`. The `add_prefix_space`
- argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours come from
- `pre tokenizers`. Read more [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers).
- eos_token_id (`Union[int, List[int]]`):
- The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
+ sequence_bias (`Dict[Tuple[int], float]`):
+ Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the
+ sequence being selected, while negative biases do the opposite. If a sequence has a length of 1, its bias
+ will always be applied. Otherwise, the bias will only be applied if the sequence in question is about to be
+ completed (in the token selection step after this processor is applied).
+
+ Examples:
+
+ ```python
+ >>> from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
+ >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ >>> inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt")
+
+ >>> summary_ids = model.generate(inputs["input_ids"], max_new_tokens=4)
+ >>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
+ The full name of Donald is Donald J. Trump Jr
+
+ >>> # Now let's control generation through a bias. Please note that the tokenizer is initialized differently!
+ >>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
+
+
+ >>> def get_tokens_as_tuple(word):
+ ... return tuple(tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0])
+
+
+ >>> # If we add a negative bias without beam search, it may become "stuck" in a prefix without good continuations
+ >>> sequence_bias = {get_tokens_as_tuple("Trump"): -10.0}
+ >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, sequence_bias=sequence_bias)
+ >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
+ The full name of Donald is Donald J. Donald,
+
+ >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias)
+ >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
+ The full name of Donald is Donald Rumsfeld,
+
+ >>> # We can also add a positive bias to nudge the model towards specific tokens or continuations
+ >>> sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0}
+ >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias)
+ >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
+ The full name of Donald is Donald Duck.
+ ```
"""
- def __init__(self, bad_words_ids: List[List[int]], eos_token_id: Union[int, List[int]]):
- if not isinstance(bad_words_ids, List) or len(bad_words_ids) == 0:
- raise ValueError(f"`bad_words_ids` has to be a non-empty list, but is {bad_words_ids}.")
- if any(not isinstance(bad_word_ids, list) for bad_word_ids in bad_words_ids):
- raise ValueError(f"`bad_words_ids` has to be a list of lists, but is {bad_words_ids}.")
+ def __init__(self, sequence_bias: Dict[Tuple[int], float]):
+ self.sequence_bias = sequence_bias
+ self._validate_arguments()
+
+ # Bias variables that will be populated on the first call (for retrocompatibility purposes, the vocabulary size
+ # is infered in the first usage, which inhibits initializing here)
+ self.sequences_length_greater_than_1 = []
+ self.length_1_bias = None
+ self.length_greather_than_1_bias = None
+ self.prepared_bias_variables = False
+
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
+ # 1 - Prepares the bias tensors. This is only needed the first time the logit processor is called.
+ if not self.prepared_bias_variables:
+ self._prepare_bias_variables(scores)
+
+ # 2 - prepares an empty bias to add
+ bias = torch.zeros_like(scores)
+
+ # 3 - include the bias from length = 1
+ bias += self.length_1_bias
+
+ # 4 - include the bias from length > 1, after determining which biased sequences may be completed.
+ # `matching_mask` is a (batch_size, vocab_size) boolean mask that is True for all tokens whose corresponding
+ # bias should be applied. The bias is applied on the last token of the sequence, if (and only if) the sequence
+ # may become complete this iteration.
+ matching_mask = torch.zeros_like(scores, dtype=torch.bool)
+ for sequence_ids in self.sequences_length_greater_than_1:
+ if len(sequence_ids) > input_ids.shape[1]: # the sequence is longer than the context, ignore
+ continue
+ prefix_length = len(sequence_ids) - 1
+ last_token = sequence_ids[-1]
+ matching_rows = torch.eq(
+ input_ids[:, -prefix_length:],
+ torch.tensor(sequence_ids[:-1], dtype=input_ids.dtype, device=input_ids.device),
+ ).prod(dim=1)
+ matching_mask[:, last_token] |= matching_rows.bool()
+ bias += torch.where(matching_mask, self.length_greather_than_1_bias, 0.0)
+
+ # 5 - apply the bias to the scores
+ scores = scores + bias
+ return scores
+
+ def _prepare_bias_variables(self, scores: torch.FloatTensor):
+ vocabulary_size = scores.shape[-1]
+ sequence_bias = self.sequence_bias
+ tokens_with_bias = []
+
+ # Check biased tokens out of bounds
+ invalid_biases = []
+ for sequence_ids in sequence_bias:
+ for token_id in sequence_ids:
+ if token_id >= vocabulary_size:
+ invalid_biases.append(token_id)
+ if len(invalid_biases) > 0:
+ raise ValueError(
+ f"The model vocabulary size is {vocabulary_size}, but the following tokens were being biased: "
+ f"{invalid_biases}"
+ )
+
+ # Precompute the bias tensors to be applied. Sequences of length 1 are kept separately, as they can be applied
+ # with simpler logic.
+ self.length_1_bias = torch.zeros((vocabulary_size,), dtype=torch.float).to(scores.device)
+ self.length_greather_than_1_bias = torch.zeros((vocabulary_size,), dtype=torch.float).to(scores.device)
+ for sequence_ids, bias in sequence_bias.items():
+ if len(sequence_ids) == 1:
+ self.length_1_bias[sequence_ids[-1]] = bias
+ else:
+ self.sequences_length_greater_than_1.append(sequence_ids)
+ if self.length_greather_than_1_bias[sequence_ids[-1]] != 0.0:
+ raise ValueError(
+ "Setting a bias on sequences that share a common token termination is not yet supported. "
+ "Please open an issue if you see this error message (after checking that it doesn't already "
+ "exist)."
+ )
+ self.length_greather_than_1_bias[sequence_ids[-1]] = bias
+ tokens_with_bias.append(sequence_ids[-1])
+
+ self.prepared_bias_variables = True
+
+ def _validate_arguments(self):
+ sequence_bias = self.sequence_bias
+ if not isinstance(sequence_bias, dict) or len(sequence_bias) == 0:
+ raise ValueError(f"`sequence_bias` has to be a non-empty dictionary, but is {sequence_bias}.")
+ if any(not isinstance(sequence_ids, tuple) for sequence_ids in sequence_bias.keys()):
+ raise ValueError(f"`sequence_bias` has to be a dict with tuples as keys, but is {sequence_bias}.")
if any(
- any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in bad_word_ids)
- for bad_word_ids in bad_words_ids
+ any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in sequence_ids)
+ or len(sequence_ids) == 0
+ for sequence_ids in sequence_bias.keys()
):
raise ValueError(
- f"Each list in `bad_words_ids` has to be a list of positive integers, but is {bad_words_ids}."
+ f"Each key in `sequence_bias` has to be a non-empty tuple of positive integers, but is "
+ f"{sequence_bias}."
)
+ if any(not isinstance(bias, float) for bias in sequence_bias.values()):
+ raise ValueError(f"`sequence_bias` has to be a dict with floats as values, but is {sequence_bias}.")
- if eos_token_id is None:
- eos_token_id = []
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- bad_words_ids = list(
- filter(lambda bad_token_seq: all([bad_token_seq != [i] for i in eos_token_id]), bad_words_ids)
- )
- self.bad_words_id_length_1 = []
- self.bad_words_id_length_greater_than_1 = []
- for word in bad_words_ids:
- if len(word) == 1:
- self.bad_words_id_length_1.append(word[0])
- else:
- self.bad_words_id_length_greater_than_1.append(word)
+class NoBadWordsLogitsProcessor(SequenceBiasLogitsProcessor):
+ """
+ [`LogitsProcessor`] that enforces that specified sequences will never be selected.
- self.static_bad_words_mask: Optional[torch.LongTensor] = None
+ <Tip>
- for banned_token_seq in self.bad_words_id_length_greater_than_1:
- if len(banned_token_seq) == 0:
- raise ValueError(f"Banned words token sequences {bad_words_ids} cannot have an empty list")
+ In order to get the token ids of the words that should not appear in the generated text, make sure to set
+ `add_prefix_space=True` when initializing the tokenizer, and use `tokenizer(bad_words,
+ add_special_tokens=False).input_ids`. The `add_prefix_space` argument is only supported for some slow tokenizers,
+ as fast tokenizers' prefixing behaviours come from `pre tokenizers`. Read more
+ [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers).
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
- if self.static_bad_words_mask is None and len(self.bad_words_id_length_1) > 0:
- self.static_bad_words_mask = self._calc_static_bad_word_mask(scores)
+ </Tip>
- dynamic_banned_tokens = self._calc_banned_bad_words_ids(input_ids.tolist())
- scores = self._set_scores_to_inf_for_banned_tokens(scores, dynamic_banned_tokens)
+ Args:
+ bad_words_ids (`List[List[int]]`):
+ List of list of token ids that are not allowed to be generated.
+ eos_token_id (`Union[int, List[int]]`):
+ The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
+ """
- return scores
+ def __init__(self, bad_words_ids: List[List[int]], eos_token_id: Union[int, List[int]]):
+ self.bad_word_ids = bad_words_ids
+ self._validate_arguments()
- def _calc_static_bad_word_mask(self, scores: torch.FloatTensor) -> torch.BoolTensor:
- static_bad_words_mask = torch.zeros(scores.shape[1])
- static_bad_words_mask[self.bad_words_id_length_1] = 1
- return static_bad_words_mask.unsqueeze(0).to(scores.device).bool()
-
- def _tokens_match(self, prev_tokens: List[int], tokens: List[int]) -> bool:
- if len(tokens) == 0:
- # if bad word tokens is just one token always ban it
- return True
- elif len(tokens) > len(prev_tokens):
- # if bad word tokens are longer then prev input_ids they can't be equal
- return False
- else:
- return prev_tokens[-len(tokens) :] == tokens
-
- def _calc_banned_bad_words_ids(self, prev_input_ids: List[List[int]]) -> Iterable[int]:
- banned_tokens = []
- for prev_input_ids_slice in prev_input_ids:
- banned_tokens_slice = []
- for banned_token_seq in self.bad_words_id_length_greater_than_1:
- if self._tokens_match(prev_input_ids_slice, banned_token_seq[:-1]):
- banned_tokens_slice.append(banned_token_seq[-1])
-
- banned_tokens.append(banned_tokens_slice)
-
- return banned_tokens
-
- def _set_scores_to_inf_for_banned_tokens(
- self, scores: torch.Tensor, banned_tokens: List[List[int]]
- ) -> torch.Tensor:
- """
- Modifies the scores in place by setting the banned token positions to `-inf`. Banned token is expected to be a
- list of list of banned tokens to ban in the format [[batch index, vocabulary position],...
-
- Args:
- scores: logits distribution of shape (batch size, vocabulary size)
- banned_tokens: list of list of tokens to ban of length (batch_size)
- """
- banned_mask_list = []
- for idx, batch_banned_tokens in enumerate(banned_tokens):
- for token in batch_banned_tokens:
- # Eliminates invalid bad word IDs that are over the vocabulary size.
- if token <= scores.shape[1]:
- banned_mask_list.append([idx, token])
- else:
- logger.error(
- f"An invalid bad word ID is defined: {token}. This ID is not contained in the "
- "vocabulary, and is therefore ignored."
- )
- if not banned_mask_list and self.static_bad_words_mask is None:
- return scores
+ # Filter EOS token from bad_words_ids
+ if eos_token_id is None:
+ eos_token_id = []
+ if isinstance(eos_token_id, int):
+ eos_token_id = [eos_token_id]
+ bad_words_ids = list(
+ filter(lambda bad_token_seq: all([bad_token_seq != [i] for i in eos_token_id]), bad_words_ids)
+ )
- else:
- if banned_mask_list:
- indices = torch.ones(len(banned_mask_list))
- banned_mask = torch.LongTensor(banned_mask_list, device=indices.device)
- # A sparse tensor is generated from a list of coordinates: [[0, 1], [0, 2], [2, 0]]. A conversion to dense tensor generates:
- # [ 0 1 1 ]
- # [ 0 0 0 ]
- # [ 1 0 0 ]
-
- banned_mask = (
- torch.sparse.LongTensor(banned_mask.t(), indices, scores.size())
- .to(scores.device)
- .to_dense()
- .bool()
- )
-
- if self.static_bad_words_mask is not None:
- banned_mask = torch.bitwise_or(banned_mask, self.static_bad_words_mask)
- else:
- banned_mask = self.static_bad_words_mask
+ # Forbidding a sequence is equivalent to setting its bias to -inf
+ sequence_bias = {tuple(sequence): float("-inf") for sequence in bad_words_ids}
+ super().__init__(sequence_bias=sequence_bias)
- scores = scores.masked_fill(banned_mask, -float("inf"))
- return scores
+ def _validate_arguments(self):
+ bad_words_ids = self.bad_word_ids
+ if not isinstance(bad_words_ids, list) or len(bad_words_ids) == 0:
+ raise ValueError(f"`bad_words_ids` has to be a non-empty list, but is {bad_words_ids}.")
+ if any(not isinstance(bad_word_ids, list) for bad_word_ids in bad_words_ids):
+ raise ValueError(f"`bad_words_ids` has to be a list of lists, but is {bad_words_ids}.")
+ if any(
+ any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in bad_word_ids)
+ for bad_word_ids in bad_words_ids
+ ):
+ raise ValueError(
+ f"Each list in `bad_words_ids` has to be a list of positive integers, but is {bad_words_ids}."
+ )
class PrefixConstrainedLogitsProcessor(LogitsProcessor):
diff --git a/src/transformers/generation/utils.py b/src/transformers/generation/utils.py
--- a/src/transformers/generation/utils.py
+++ b/src/transformers/generation/utils.py
@@ -56,6 +56,7 @@
NoRepeatNGramLogitsProcessor,
PrefixConstrainedLogitsProcessor,
RepetitionPenaltyLogitsProcessor,
+ SequenceBiasLogitsProcessor,
SuppressTokensAtBeginLogitsProcessor,
SuppressTokensLogitsProcessor,
TemperatureLogitsWarper,
@@ -842,8 +843,9 @@ def _get_logits_processor(
# instantiate processors list
processors = LogitsProcessorList()
- # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files
- # all samplers can be found in `generation_utils_samplers.py`
+ if generation_config.sequence_bias is not None:
+ processors.append(SequenceBiasLogitsProcessor(sequence_bias=generation_config.sequence_bias))
+
if generation_config.diversity_penalty is not None and generation_config.diversity_penalty > 0.0:
processors.append(
HammingDiversityLogitsProcessor(
diff --git a/src/transformers/utils/dummy_pt_objects.py b/src/transformers/utils/dummy_pt_objects.py
--- a/src/transformers/utils/dummy_pt_objects.py
+++ b/src/transformers/utils/dummy_pt_objects.py
@@ -240,6 +240,13 @@ def __init__(self, *args, **kwargs):
requires_backends(self, ["torch"])
+class SequenceBiasLogitsProcessor(metaclass=DummyObject):
+ _backends = ["torch"]
+
+ def __init__(self, *args, **kwargs):
+ requires_backends(self, ["torch"])
+
+
class StoppingCriteria(metaclass=DummyObject):
_backends = ["torch"]
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-369
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in Rz gate
There is a bug in this line and the line above it (rz gate):
https://github.com/QISKit/qiskit-sdk-py/blob/4f15c99c1dc4b91abd5d54ccc6b94c8e38ac4997/qiskit/extensions/standard/rz.py#L61
It is replicated in rx and ry as well.
</issue>
<code>
[start of README.md]
1 # Quantum Information Software Kit (QISKit)
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/QISKit/qiskit-sdk-py)
5
6 The Quantum Information Software Kit (**QISKit** for short) is a software development kit (SDK) for
7 working with [OpenQASM](https://github.com/QISKit/qiskit-openqasm) and the
8 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/).
9
10 Use **QISKit** to create quantum computing programs, compile them, and execute them on one of
11 several backends (online Real quantum processors, online simulators, and local simulators). For
12 the online backends, QISKit uses our [python API client](https://github.com/QISKit/qiskit-api-py)
13 to connect to the IBM Q Experience.
14
15 **We use GitHub issues for tracking requests and bugs. Please see the**
16 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for
17 questions and discussion.**
18
19 **If you'd like to contribute to QISKit, please take a look at our**
20 [contribution guidelines](CONTRIBUTING.rst).
21
22 Links to Sections:
23
24 * [Installation](#installation)
25 * [Creating your first Quantum Program](#creating-your-first-quantum-program)
26 * [More Information](#more-information)
27 * [Authors](#authors-alphabetical)
28 * [License](#license)
29
30 ## Installation
31
32 ### Dependencies
33
34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using QISKit. In
35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
36 for interacting with the tutorials.
37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
38 python distribution, as it comes with all of these dependencies pre-installed.
39
40 In addition, a basic understanding of quantum information is very helpful when interacting with
41 QISKit. If you're new to quantum, start with our
42 [User Guides](https://github.com/QISKit/ibmqx-user-guides)!
43
44 ### Installation
45
46 We encourage installing QISKit via the PIP tool (a Python package manager):
47
48 ```
49 pip install qiskit
50 ```
51
52 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version.
53
54 PIP package comes with prebuilt binaries for these platforms:
55
56 * Linux x86_64
57 * Darwin
58 * Win64
59
60 If your platform is not in the list, PIP will try to build from the sources at installation time. It will require CMake 3.5 or higher to be pre-installed, plus at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html).
61
62 If PIP doesn't succeed in building during the installation, don't worry: you will still have QISKit installed at the end, but you probably won't be able to take advantage of some of the high-performance components. In any case, we always provide a pure-Python, not-so-fast alternative as a fallback.
63
64
65 #### Setup your environment
66
67 We recommend using python virtual environments to improve your experience. Refer to our
68 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information.
69
70 ## Creating your first Quantum Program
71
72 Now that the SDK is installed, it's time to begin working with QISKit.
73
74 We are ready to try out a quantum circuit example, which runs via the local simulator.
75
76 This is a simple example that makes an entangled state.
77
78 ```python
79 # Import the QISKit SDK
80 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
81 from qiskit.wrapper import available_backends, execute
82
83 # Create a Quantum Register with 2 qubits
84 q = QuantumRegister(2)
85 # Create a Classical Register with 2 bits.
86 c = ClassicalRegister(2)
87 # Create a Quantum Circuit
88 qc = QuantumCircuit(q, c)
89
90 # Add a H gate on qubit 0, putting this qubit in superposition.
91 qc.h(q[0])
92 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
93 # the qubits in a Bell state.
94 qc.cx(q[0], q[1])
95 # Add a Measure gate to see the state.
96 qc.measure(q, c)
97
98 # See a list of available local simulators
99 print("Local backends: ", available_backends({'local': True}))
100
101 # Compile and run the Quantum circuit on a simulator backend
102 sim_result = execute(qc, 'local_qasm_simulator')
103
104 # Show the results
105 print("simulation: ", sim_result)
106 print(sim_result.get_counts(qc))
107 ```
108
109 In this case, the output will be:
110
111 ```
112 COMPLETED
113 {'counts': {'00': 512, '11': 512}}
114 ```
115
116 This script is available [here](examples/python/hello_quantum.py), where we also show how to
117 run the same program on a real quantum computer.
118
119 ### Executing your code on a real Quantum chip
120
121 You can also use QISKit to execute your code on a
122 [real quantum chip](https://github.com/QISKit/ibmqx-backend-information).
123 In order to do so, you need to configure the SDK for using the credentials in
124 your IBM Q Experience account:
125
126
127 #### Configure your API token and QX credentials
128
129
130 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
131 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst).
132 3. We are going to create a new file called `Qconfig.py` and insert the API token into it. This file must have these contents:
133
134 ```python
135 APItoken = 'MY_API_TOKEN'
136
137 config = {
138 'url': 'https://quantumexperience.ng.bluemix.net/api',
139 # The following should only be needed for IBM Q Network users.
140 'hub': 'MY_HUB',
141 'group': 'MY_GROUP',
142 'project': 'MY_PROJECT'
143 }
144 ```
145
146 4. Substitute `MY_API_TOKEN` with your real API Token extracted in step 2.
147
148 5. If you have access to the IBM Q Network features, you also need to setup the
149 values for your hub, group, and project. You can do so by filling the
150 `config` variable with the values you can find on your IBM Q account
151 page.
152
153 Once the `Qconfig.py` file is set up, you have to move it under the same directory/folder where your program/tutorial resides, so it can be imported and be used to authenticate with the `set_api()` function. For example:
154
155 ```python
156 from qiskit import QuantumProgram
157 import Qconfig
158
159 # Creating Programs create your first QuantumProgram object instance.
160 qp = QuantumProgram()
161 qp.set_api(Qconfig.APItoken, Qconfig.config["url"],
162 hub=Qconfig.config["hub"],
163 group=Qconfig.config["group"],
164 project=Qconfig.config["project"])
165 ```
166
167 For more details on this and more information see
168 [our QISKit documentation](https://www.qiskit.org/documentation/).
169
170
171 ### Next Steps
172
173 Now you're set up and ready to check out some of the other examples from our
174 [Tutorial](https://github.com/QISKit/qiskit-tutorial) repository. Start with the
175 [index tutorial](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and then go to
176 the [‘Getting Started’ example](https://github.com/QISKit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb).
177 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html),
178 you can copy and modify the notebooks to create your own experiments.
179
180 To install the tutorials as part of the QISKit SDK, see the following
181 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK
182 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in
183 [the official QISKit site](https://www.qiskit.org/documentation).
184
185 ## More Information
186
187 For more information on how to use QISKit, tutorial examples, and other helpful links, take a look
188 at these resources:
189
190 * **[User Guides](https://github.com/QISKit/ibmqx-user-guides)**,
191 a good starting place for learning about quantum information and computing
192 * **[Tutorials](https://github.com/QISKit/qiskit-tutorial)**,
193 for example notebooks, start with the [index](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/QISKit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb)
194 * **[OpenQASM](https://github.com/QISKit/openqasm)**,
195 for additional information and examples of QASM code
196 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**,
197 a GUI for interacting with real and simulated quantum computers
198 * **[QISkit Python API](https://github.com/QISKit/qiskit-api-py)**, an API to use the IBM Quantum
199 Experience in Python
200
201 QISKit was originally developed by researchers and developers on the
202 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/),
203 with the aim of offering a high level development kit to work with quantum computers.
204
205 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for
206 questions and discussions on QISKit and quantum computing more broadly. If you'd like to
207 contribute to QISKit, please take a look at our [contribution guidelines](CONTRIBUTING.rst).
208
209 ## Multilanguage guide
210
211 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean.
212 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese.
213
214 ## Authors (alphabetical)
215
216 QISKit was originally authored by
217 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez,
218 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood.
219
220 And continues to grow with the help and work of [many people](CONTRIBUTORS.md) who contribute
221 to the project at different levels.
222
223 ## License
224
225 This project uses the [Apache License Version 2.0 software license](https://www.apache.org/licenses/LICENSE-2.0).
226
[end of README.md]
[start of qiskit/backends/local/qasm_simulator_projectq.py]
1 # -*- coding: utf-8 -*-
2 # pylint: disable=unused-import
3
4 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 # =============================================================================
18
19 """Backend for the Project Q C++ simulator."""
20
21
22 import time
23 import itertools
24 import operator
25 import random
26 import logging
27 import warnings
28 from collections import OrderedDict, Counter
29 import numpy as np
30 from qiskit._result import Result
31 from qiskit.backends import BaseBackend
32 from qiskit.backends.local._simulatorerror import SimulatorError
33 try:
34 from projectq.backends._sim._cppsim import Simulator as CppSim
35 except ImportError:
36 CppSim = None
37 else:
38 from projectq import MainEngine
39 from projectq.backends import Simulator
40 from projectq.ops import (H,
41 X,
42 Y,
43 Z,
44 S,
45 T,
46 Rx,
47 Ry,
48 Rz,
49 CX,
50 Toffoli,
51 Measure,
52 BasicGate,
53 BasicMathGate,
54 QubitOperator,
55 TimeEvolution,
56 All)
57 logger = logging.getLogger(__name__)
58
59
60 class QasmSimulatorProjectQ(BaseBackend):
61 """Python interface to Project Q simulator"""
62
63 DEFAULT_CONFIGURATION = {
64 'name': 'local_qasm_simulator_projectq',
65 'url': 'https://projectq.ch',
66 'simulator': True,
67 'local': True,
68 'description': 'ProjectQ C++ simulator',
69 'coupling_map': 'all-to-all',
70 'basis_gates': 'h,s,t,cx,id'
71 }
72
73 def __init__(self, configuration=None):
74 """
75 Args:
76 configuration (dict): backend configuration
77 Raises:
78 ImportError: if the Project Q simulator is not available.
79 """
80 super().__init__(configuration or self.DEFAULT_CONFIGURATION.copy())
81 if CppSim is None:
82 logger.info('Project Q C++ simulator unavailable.')
83 raise ImportError('Project Q C++ simulator unavailable.')
84
85 # Define the attributes inside __init__.
86 self._number_of_qubits = 0
87 self._number_of_clbits = 0
88 self._statevector = 0
89 self._classical_state = 0
90 self._seed = None
91 self._shots = 0
92 self._sim = None
93
94 def run(self, q_job):
95 """Run circuits in q_job"""
96 # Generating a string id for the job
97 result_list = []
98 qobj = q_job.qobj
99 self._validate(qobj)
100 self._sim = Simulator(gate_fusion=True)
101 if 'seed' in qobj['config']:
102 self._seed = qobj['config']['seed']
103 self._sim._simulator = CppSim(self._seed)
104 else:
105 self._seed = random.getrandbits(32)
106 self._shots = qobj['config']['shots']
107 start = time.time()
108 for circuit in qobj['circuits']:
109 result_list.append(self.run_circuit(circuit))
110 end = time.time()
111 result = {'backend': self._configuration['name'],
112 'id': qobj['id'],
113 'result': result_list,
114 'status': 'COMPLETED',
115 'success': True,
116 'time_taken': (end - start)}
117 return Result(result, qobj)
118
119 def run_circuit(self, circuit):
120 """Run a circuit and return a single Result.
121
122 Args:
123 circuit (dict): JSON circuit from qobj circuits list
124
125 Returns:
126 dict: A dictionary of results which looks something like::
127
128 {
129 "data":
130 { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND ####
131 "counts": {'00000': XXXX, '00001': XXXXX},
132 "time" : xx.xxxxxxxx
133 },
134 "status": --status (string)--
135 }
136 Raises:
137 SimulatorError: if an error occurred.
138 """
139 # pylint: disable=expression-not-assigned,pointless-statement
140 ccircuit = circuit['compiled_circuit']
141 self._number_of_qubits = ccircuit['header']['number_of_qubits']
142 self._number_of_clbits = ccircuit['header']['number_of_clbits']
143 self._statevector = 0
144 self._classical_state = 0
145 cl_reg_index = [] # starting bit index of classical register
146 cl_reg_nbits = [] # number of bits in classical register
147 clbit_index = 0
148 qobj_quregs = OrderedDict(_get_register_specs(
149 ccircuit['header']['qubit_labels']))
150 eng = MainEngine(backend=self._sim)
151 for cl_reg in ccircuit['header']['clbit_labels']:
152 cl_reg_nbits.append(cl_reg[1])
153 cl_reg_index.append(clbit_index)
154 clbit_index += cl_reg[1]
155 # let circuit seed override qobj default
156 if 'config' in circuit:
157 if 'seed' in circuit['config']:
158 if circuit['config']['seed'] is not None:
159 self._sim._simulator = CppSim(circuit['config']['seed'])
160 outcomes = []
161 start = time.time()
162 for _ in range(self._shots):
163 self._statevector = np.zeros(1 << self._number_of_qubits,
164 dtype=complex)
165 self._statevector[0] = 1
166 # initialize starting state
167 self._classical_state = 0
168 unmeasured_qubits = list(range(self._number_of_qubits))
169 projq_qureg_dict = OrderedDict(((key, eng.allocate_qureg(size))
170 for key, size in
171 qobj_quregs.items()))
172 qureg = [qubit for sublist in projq_qureg_dict.values()
173 for qubit in sublist]
174 # Do each operation in this shot
175 for operation in ccircuit['operations']:
176 if 'conditional' in operation:
177 mask = int(operation['conditional']['mask'], 16)
178 if mask > 0:
179 value = self._classical_state & mask
180 while (mask & 0x1) == 0:
181 mask >>= 1
182 value >>= 1
183 if value != int(operation['conditional']['val'], 16):
184 continue
185 # Check if single gate
186 if operation['name'] in ['U', 'u3']:
187 params = operation['params']
188 qubit = qureg[operation['qubits'][0]]
189 Rz(params[2]) | qubit
190 Ry(params[0]) | qubit
191 Rz(params[1]) | qubit
192 elif operation['name'] in ['u1']:
193 params = operation['params']
194 qubit = qureg[operation['qubits'][0]]
195 Rz(params[0]) | qubit
196 elif operation['name'] in ['u2']:
197 params = operation['params']
198 qubit = qureg[operation['qubits'][0]]
199 Rz(params[1] - np.pi/2) | qubit
200 Rx(np.pi/2) | qubit
201 Rz(params[0] + np.pi/2) | qubit
202 elif operation['name'] == 't':
203 qubit = qureg[operation['qubits'][0]]
204 T | qubit
205 elif operation['name'] == 'h':
206 qubit = qureg[operation['qubits'][0]]
207 H | qubit
208 elif operation['name'] == 's':
209 qubit = qureg[operation['qubits'][0]]
210 S | qubit
211 elif operation['name'] in ['CX', 'cx']:
212 qubit0 = qureg[operation['qubits'][0]]
213 qubit1 = qureg[operation['qubits'][1]]
214 CX | (qubit0, qubit1)
215 elif operation['name'] in ['id', 'u0']:
216 pass
217 # Check if measure
218 elif operation['name'] == 'measure':
219 qubit_index = operation['qubits'][0]
220 qubit = qureg[qubit_index]
221 clbit = operation['clbits'][0]
222 Measure | qubit
223 bit = 1 << clbit
224 self._classical_state = (
225 self._classical_state & (~bit)) | (int(qubit)
226 << clbit)
227 # check whether we already measured this qubit
228 if operation['qubits'][0] in unmeasured_qubits:
229 unmeasured_qubits.remove(operation['qubits'][0])
230 # Check if reset
231 elif operation['name'] == 'reset':
232 qubit = operation['qubits'][0]
233 raise SimulatorError('Reset operation not yet implemented '
234 'for ProjectQ C++ backend')
235 elif operation['name'] == 'barrier':
236 pass
237 else:
238 backend = self._configuration['name']
239 err_msg = '{0} encountered unrecognized operation "{1}"'
240 raise SimulatorError(err_msg.format(backend,
241 operation['name']))
242 for ind in unmeasured_qubits:
243 qubit = qureg[ind]
244 Measure | qubit
245 eng.flush()
246 # Turn classical_state (int) into bit string
247 state = format(self._classical_state, 'b')
248 outcomes.append(state.zfill(self._number_of_clbits))
249 # Return the results
250 counts = dict(Counter(outcomes))
251 data = {'counts': _format_result(
252 counts, cl_reg_index, cl_reg_nbits)}
253 if self._shots == 1:
254 # TODO: deprecated -- remove in v0.6
255 data['statevector'] = self._statevector
256 data['quantum_state'] = self._statevector
257 data['classical_state'] = self._classical_state
258 end = time.time()
259 return {'name': circuit['name'],
260 'seed': self._seed,
261 'shots': self._shots,
262 'data': data,
263 'status': 'DONE',
264 'success': True,
265 'time_taken': (end-start)}
266
267 def _validate(self, qobj):
268 if qobj['config']['shots'] == 1:
269 warnings.warn('The behavior of getting statevector from simulators '
270 'by setting shots=1 is deprecated and will be removed. '
271 'Use the local_statevector_simulator instead, or place '
272 'explicit snapshot instructions.',
273 DeprecationWarning)
274 for circ in qobj['circuits']:
275 if 'measure' not in [op['name'] for
276 op in circ['compiled_circuit']['operations']]:
277 logger.warning("no measurements in circuit '%s', "
278 "classical register will remain all zeros.", circ['name'])
279 return
280
281
282 def _get_register_specs(bit_labels):
283 """
284 Get the number and size of unique registers from bit_labels list with an
285 iterator of register_name:size pairs.
286
287 Args:
288 bit_labels (list): this list is of the form::
289
290 [['reg1', 0], ['reg1', 1], ['reg2', 0]]
291
292 which indicates a register named "reg1" of size 2
293 and a register named "reg2" of size 1. This is the
294 format of classic and quantum bit labels in qobj
295 header.
296 Yields:
297 tuple: pairs of (register_name, size)
298 """
299 iterator = itertools.groupby(bit_labels, operator.itemgetter(0))
300 for register_name, sub_it in iterator:
301 yield register_name, max(ind[1] for ind in sub_it) + 1
302
303
304 def _format_result(counts, cl_reg_index, cl_reg_nbits):
305 """Format the result bit string.
306
307 This formats the result bit strings such that spaces are inserted
308 at register divisions.
309
310 Args:
311 counts (dict): dictionary of counts e.g. {'1111': 1000, '0000':5}
312 cl_reg_index (list): starting bit index of classical register
313 cl_reg_nbits (list): total amount of bits in classical register
314 Returns:
315 dict: spaces inserted into dictionary keys at register boundaries.
316 """
317 fcounts = {}
318 for key, value in counts.items():
319 new_key = [key[-cl_reg_nbits[0]:]]
320 for index, nbits in zip(cl_reg_index[1:],
321 cl_reg_nbits[1:]):
322 new_key.insert(0, key[-(index+nbits):-index])
323 fcounts[' '.join(new_key)] = value
324 return fcounts
325
[end of qiskit/backends/local/qasm_simulator_projectq.py]
[start of qiskit/extensions/quantum_initializer/_initializer.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17 """
18 Initialize qubit registers to desired arbitrary state.
19 """
20
21 import math
22 import numpy
23 import scipy
24
25 from qiskit import CompositeGate
26 from qiskit import Gate
27 from qiskit import QISKitError
28 from qiskit import QuantumCircuit
29 from qiskit.extensions.standard.cx import CnotGate
30 from qiskit.extensions.standard.ry import RYGate
31 from qiskit.extensions.standard.rz import RZGate
32
33 _EPS = 1e-10 # global variable used to chop very small numbers to zero
34
35
36 class InitializeGate(CompositeGate):
37 """Complex amplitude initialization.
38
39 Class that implements the (complex amplitude) initialization of some
40 flexible collection of qubit registers (assuming the qubits are in the
41 zero state).
42
43 Implements a recursive initialization algorithm including optimizations
44 from "Synthesis of Quantum Logic Circuits" Shende, Bullock, Markov
45 https://arxiv.org/abs/quant-ph/0406176v5
46
47 Additionally implements some extra optimizations: remove zero rotations and
48 double cnots.`
49
50 It inherits from CompositeGate in the same way that the Fredkin (cswap)
51 gate does. Therefore self.data is the list of gates (in order) that must
52 be applied to implement this meta-gate.
53
54 param = list of complex amplitudes
55 arg = list of qubits
56 circ = QuantumCircuit or CompositeGate containing this gate
57 """
58 def __init__(self, param, arg, circ=None):
59 """Create new initialize composite gate."""
60 num_qubits = math.log2(len(param))
61
62 # Check if param is a power of 2
63 if num_qubits == 0 or not num_qubits.is_integer():
64 raise QISKitError("Desired vector not a positive power of 2.")
65
66 self.num_qubits = int(num_qubits)
67
68 # Check if number of desired qubits agrees with available qubits
69 if len(arg) != self.num_qubits:
70 raise QISKitError("Number of complex amplitudes do not correspond "
71 "to the number of qubits.")
72
73 # Check if probabilities (amplitudes squared) sum to 1
74 if not math.isclose(sum(numpy.absolute(param) ** 2), 1.0,
75 abs_tol=_EPS):
76 raise QISKitError("Sum of amplitudes-squared does not equal one.")
77
78 super().__init__("init", param, arg, circ)
79
80 # call to generate the circuit that takes the desired vector to zero
81 self.gates_to_uncompute()
82 # remove zero rotations and double cnots
83 self.optimize_gates()
84 # invert the circuit to create the desired vector from zero (assuming
85 # the qubits are in the zero state)
86 self.inverse()
87
88 def nth_qubit_from_least_sig_qubit(self, nth):
89 """
90 Return the qubit that is nth away from the least significant qubit
91 (LSB), so n=0 corresponds to the LSB.
92 """
93 # if LSB is first (as is the case with the IBM QE) and significance is
94 # in order:
95 return self.arg[nth]
96 # if MSB is first: return self.arg[self.num_qubits - 1 - n]
97 # equivalent to self.arg[-(n+1)]
98 # to generalize any mapping could be placed here or even taken from
99 # the user
100
101 def reapply(self, circ):
102 """Reapply this gate to the corresponding qubits in circ."""
103 self._modifiers(circ.initialize(self.name, self.param, self.arg))
104
105 def gates_to_uncompute(self):
106 """
107 Call to populate the self.data list with gates that takes the
108 desired vector to zero.
109 """
110 # kick start the peeling loop
111 remaining_param = self.param
112
113 for i in range(self.num_qubits):
114 # work out which rotations must be done to disentangle the LSB
115 # qubit (we peel away one qubit at a time)
116 (remaining_param,
117 thetas,
118 phis) = InitializeGate._rotations_to_disentangle(remaining_param)
119
120 # perform the required rotations to decouple the LSB qubit (so that
121 # it can be "factored" out, leaving a
122 # shorter amplitude vector to peel away)
123 self._attach(self._multiplex(RZGate, i, phis))
124 self._attach(self._multiplex(RYGate, i, thetas))
125
126 @staticmethod
127 def _rotations_to_disentangle(local_param):
128 """
129 Static internal method to work out Ry and Rz rotation angles used
130 to disentangle the LSB qubit.
131 These rotations make up the block diagonal matrix U (i.e. multiplexor)
132 that disentangles the LSB.
133
134 [[Ry(theta_1).Rz(phi_1) 0 . . 0],
135 [0 Ry(theta_2).Rz(phi_2) . 0],
136 .
137 .
138 0 0 Ry(theta_2^n).Rz(phi_2^n)]]
139 """
140 remaining_vector = []
141 thetas = []
142 phis = []
143
144 param_len = len(local_param)
145
146 for i in range(param_len // 2):
147 # Ry and Rz rotations to move bloch vector from 0 to "imaginary"
148 # qubit
149 # (imagine a qubit state signified by the amplitudes at index 2*i
150 # and 2*(i+1), corresponding to the select qubits of the
151 # multiplexor being in state |i>)
152 (remains,
153 add_theta,
154 add_phi) = InitializeGate._bloch_angles(
155 local_param[2*i: 2*(i + 1)])
156
157 remaining_vector.append(remains)
158
159 # rotations for all imaginary qubits of the full vector
160 # to move from where it is to zero, hence the negative sign
161 thetas.append(-add_theta)
162 phis.append(-add_phi)
163
164 return remaining_vector, thetas, phis
165
166 @staticmethod
167 def _bloch_angles(pair_of_complex):
168 """
169 Static internal method to work out rotation to create the passed in
170 qubit from the zero vector.
171 """
172 [a_complex, b_complex] = pair_of_complex
173 # Force a and b to be complex, as otherwise numpy.angle might fail.
174 a_complex = complex(a_complex)
175 b_complex = complex(b_complex)
176 mag_a = numpy.absolute(a_complex)
177 final_r = float(numpy.sqrt(mag_a ** 2 + numpy.absolute(b_complex) ** 2))
178 if final_r < _EPS:
179 theta = 0
180 phi = 0
181 final_r = 0
182 final_t = 0
183 else:
184 theta = float(2 * numpy.arccos(mag_a / final_r))
185 a_arg = numpy.angle(a_complex)
186 b_arg = numpy.angle(b_complex)
187 final_t = a_arg + b_arg
188 phi = b_arg - a_arg
189
190 return final_r * numpy.exp(1.J * final_t/2), theta, phi
191
192 def _multiplex(self, bottom_gate, bottom_qubit_index, list_of_angles):
193 """
194 Internal recursive method to create gates to perform rotations on the
195 imaginary qubits: works by rotating LSB (and hence ALL imaginary
196 qubits) by combo angle and then flipping sign (by flipping the bit,
197 hence moving the complex amplitudes) of half the imaginary qubits
198 (CNOT) followed by another combo angle on LSB, therefore executing
199 conditional (on MSB) rotations, thereby disentangling LSB.
200 """
201 list_len = len(list_of_angles)
202 target_qubit = self.nth_qubit_from_least_sig_qubit(bottom_qubit_index)
203
204 # Case of no multiplexing = base case for recursion
205 if list_len == 1:
206 return bottom_gate(list_of_angles[0], target_qubit)
207
208 local_num_qubits = int(math.log2(list_len)) + 1
209 control_qubit = self.nth_qubit_from_least_sig_qubit(
210 local_num_qubits - 1 + bottom_qubit_index)
211
212 # calc angle weights, assuming recursion (that is the lower-level
213 # requested angles have been correctly implemented by recursion
214 angle_weight = scipy.kron([[0.5, 0.5], [0.5, -0.5]],
215 numpy.identity(2 ** (local_num_qubits - 2)))
216
217 # calc the combo angles
218 list_of_angles = (angle_weight * numpy.matrix(
219 list_of_angles).transpose()).reshape(-1).tolist()[0]
220
221 combine_composite_gates = CompositeGate(
222 "multiplex" + local_num_qubits.__str__(), [], self.arg)
223
224 # recursive step on half the angles fulfilling the above assumption
225 combine_composite_gates._attach(
226 self._multiplex(bottom_gate, bottom_qubit_index,
227 list_of_angles[0:(list_len // 2)]))
228
229 # combine_composite_gates.cx(control_qubit,target_qubit) -> does not
230 # work as expected because checks circuit
231 # so attach CNOT as follows, thereby flipping the LSB qubit
232 combine_composite_gates._attach(CnotGate(control_qubit, target_qubit))
233
234 # implement extra efficiency from the paper of cancelling adjacent
235 # CNOTs (by leaving out last CNOT and reversing (NOT inverting) the
236 # second lower-level multiplex)
237 sub_gate = self._multiplex(
238 bottom_gate, bottom_qubit_index, list_of_angles[(list_len // 2):])
239 if isinstance(sub_gate, CompositeGate):
240 combine_composite_gates._attach(sub_gate.reverse())
241 else:
242 combine_composite_gates._attach(sub_gate)
243
244 # outer multiplex keeps final CNOT, because no adjacent CNOT to cancel
245 # with
246 if self.num_qubits == local_num_qubits + bottom_qubit_index:
247 combine_composite_gates._attach(CnotGate(control_qubit,
248 target_qubit))
249
250 return combine_composite_gates
251
252 @staticmethod
253 def chop_num(numb):
254 """
255 Set very small numbers (as defined by global variable _EPS) to zero.
256 """
257 return 0 if abs(numb) < _EPS else numb
258
259
260 # ###############################################################
261 # Add needed functionality to other classes (it feels
262 # weird following the QISKit convention of adding functionality to other
263 # classes like this ;),
264 # TODO: multiple inheritance might be better?)
265
266
267 def reverse(self):
268 """
269 Reverse (recursively) the sub-gates of this CompositeGate. Note this does
270 not invert the gates!
271 """
272 new_data = []
273 for gate in reversed(self.data):
274 if isinstance(gate, CompositeGate):
275 new_data.append(gate.reverse())
276 else:
277 new_data.append(gate)
278 self.data = new_data
279
280 # not just a high-level reverse:
281 # self.data = [gate for gate in reversed(self.data)]
282
283 return self
284
285
286 QuantumCircuit.reverse = reverse
287 CompositeGate.reverse = reverse
288
289
290 def optimize_gates(self):
291 """Remove Zero rotations and Double CNOTS."""
292 self.remove_zero_rotations()
293 while self.remove_double_cnots_once():
294 pass
295
296
297 QuantumCircuit.optimize_gates = optimize_gates
298 CompositeGate.optimize_gates = optimize_gates
299
300
301 def remove_zero_rotations(self):
302 """
303 Remove Zero Rotations by looking (recursively) at rotation gates at the
304 leaf ends.
305 """
306 # Removed at least one zero rotation.
307 zero_rotation_removed = False
308 new_data = []
309 for gate in self.data:
310 if isinstance(gate, CompositeGate):
311 zero_rotation_removed |= gate.remove_zero_rotations()
312 if gate.data:
313 new_data.append(gate)
314 else:
315 if ((not isinstance(gate, Gate)) or
316 (not (gate.name == "rz" or gate.name == "ry" or
317 gate.name == "rx") or
318 (InitializeGate.chop_num(gate.param[0]) != 0))):
319 new_data.append(gate)
320 else:
321 zero_rotation_removed = True
322
323 self.data = new_data
324
325 return zero_rotation_removed
326
327
328 QuantumCircuit.remove_zero_rotations = remove_zero_rotations
329 CompositeGate.remove_zero_rotations = remove_zero_rotations
330
331
332 def number_atomic_gates(self):
333 """Count the number of leaf gates. """
334 num = 0
335 for gate in self.data:
336 if isinstance(gate, CompositeGate):
337 num += gate.number_atomic_gates()
338 else:
339 if isinstance(gate, Gate):
340 num += 1
341 return num
342
343
344 QuantumCircuit.number_atomic_gates = number_atomic_gates
345 CompositeGate.number_atomic_gates = number_atomic_gates
346
347
348 def remove_double_cnots_once(self):
349 """
350 Remove Double CNOTS paying attention that gates may be neighbours across
351 Composite Gate boundaries.
352 """
353 num_high_level_gates = len(self.data)
354
355 if num_high_level_gates == 0:
356 return False
357 else:
358 if num_high_level_gates == 1 and isinstance(self.data[0],
359 CompositeGate):
360 return self.data[0].remove_double_cnots_once()
361
362 # Removed at least one double cnot.
363 double_cnot_removed = False
364
365 # last gate might be composite
366 if isinstance(self.data[num_high_level_gates - 1], CompositeGate):
367 double_cnot_removed = \
368 double_cnot_removed or\
369 self.data[num_high_level_gates - 1].remove_double_cnots_once()
370
371 # don't start with last gate, using reversed so that can del on the go
372 for i in reversed(range(num_high_level_gates - 1)):
373 if isinstance(self.data[i], CompositeGate):
374 double_cnot_removed =\
375 double_cnot_removed \
376 or self.data[i].remove_double_cnots_once()
377 left_gate_host = self.data[i].last_atomic_gate_host()
378 left_gate_index = -1
379 # TODO: consider adding if semantics needed:
380 # to remove empty composite gates
381 # if left_gate_host == None: del self.data[i]
382 else:
383 left_gate_host = self.data
384 left_gate_index = i
385
386 if ((left_gate_host is not None) and
387 left_gate_host[left_gate_index].name == "cx"):
388 if isinstance(self.data[i + 1], CompositeGate):
389 right_gate_host = self.data[i + 1].first_atomic_gate_host()
390 right_gate_index = 0
391 else:
392 right_gate_host = self.data
393 right_gate_index = i + 1
394
395 if (right_gate_host is not None) \
396 and right_gate_host[right_gate_index].name == "cx" \
397 and (left_gate_host[left_gate_index].arg ==
398 right_gate_host[right_gate_index].arg):
399 del right_gate_host[right_gate_index]
400 del left_gate_host[left_gate_index]
401 double_cnot_removed = True
402
403 return double_cnot_removed
404
405
406 QuantumCircuit.remove_double_cnots_once = remove_double_cnots_once
407 CompositeGate.remove_double_cnots_once = remove_double_cnots_once
408
409
410 def first_atomic_gate_host(self):
411 """Return the host list of the leaf gate on the left edge."""
412 if self.data:
413 if isinstance(self.data[0], CompositeGate):
414 return self.data[0].first_atomic_gate_host()
415 return self.data
416
417 return None
418
419
420 QuantumCircuit.first_atomic_gate_host = first_atomic_gate_host
421 CompositeGate.first_atomic_gate_host = first_atomic_gate_host
422
423
424 def last_atomic_gate_host(self):
425 """Return the host list of the leaf gate on the right edge."""
426 if self.data:
427 if isinstance(self.data[-1], CompositeGate):
428 return self.data[-1].last_atomic_gate_host()
429 return self.data
430
431 return None
432
433
434 QuantumCircuit.last_atomic_gate_host = last_atomic_gate_host
435 CompositeGate.last_atomic_gate_host = last_atomic_gate_host
436
437
438 def initialize(self, params, qubits):
439 """Apply initialize to circuit."""
440 self._check_dups(qubits)
441 for i in qubits:
442 self._check_qubit(i)
443 # TODO: make initialize an Instruction, and insert reset
444 # TODO: avoid explicit reset if compiler determines a |0> state
445
446 return self._attach(InitializeGate(params, qubits, self))
447
448
449 QuantumCircuit.initialize = initialize
450 CompositeGate.initialize = initialize
451
[end of qiskit/extensions/quantum_initializer/_initializer.py]
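As a rough usage sketch of the initializer extension above — assuming the 2018-era QISKit API shown in this listing (gate helpers monkey-patched onto `QuantumCircuit`, qubits addressed as `(register, index)` tuples); the name-first `QuantumRegister` constructor arguments and the package import that triggers the monkey-patching are assumptions, not confirmed by the listing:

```python
from qiskit import QuantumRegister, QuantumCircuit
import qiskit.extensions.quantum_initializer  # assumed to pull in _initializer and attach initialize()

qr = QuantumRegister("qr", 2)       # constructor arguments (name, size) are assumed
circuit = QuantumCircuit(qr)

# Target two-qubit state; amplitudes are normalized to 1.
desired_vector = [0.5, 0.5, 0.5, 0.5]

# initialize() attaches an InitializeGate that synthesizes this state from |00>.
circuit.initialize(desired_vector, [(qr, 0), (qr, 1)])

# Clean up the synthesized sequence: drop zero rotations, cancel adjacent CNOTs.
circuit.optimize_gates()
print("leaf gates after optimization:", circuit.number_atomic_gates())
```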
[start of qiskit/extensions/standard/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17
18 """Standard gates."""
19 from .barrier import barrier
20 from .ccx import ccx
21 from .cswap import cswap
22 from .cx import cx
23 from .cxbase import cx_base
24 from .cy import cy
25 from .cz import cz
26 from .swap import swap
27 from .h import h
28 from .iden import iden
29 from .s import s
30 from .t import t
31 from .u0 import u0
32 from .u1 import u1
33 from .u2 import u2
34 from .u3 import u3
35 from .ubase import u_base
36 from .x import x
37 from .y import y
38 from .z import z
39 from .rx import rx
40 from .ry import ry
41 from .rz import rz
42 from .cu1 import cu1
43 from .ch import ch
44 from .crz import crz
45 from .cu3 import cu3
46 from .rzz import rzz
47
[end of qiskit/extensions/standard/__init__.py]
[start of qiskit/extensions/standard/crz.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17
18 """
19 controlled-rz gate.
20 """
21 from qiskit import CompositeGate
22 from qiskit import Gate
23 from qiskit import QuantumCircuit
24 from qiskit._instructionset import InstructionSet
25 from qiskit._quantumregister import QuantumRegister
26 from qiskit.extensions.standard import header # pylint: disable=unused-import
27
28
29 class CrzGate(Gate):
30 """controlled-rz gate."""
31
32 def __init__(self, theta, ctl, tgt, circ=None):
33 """Create new crz gate."""
34 super().__init__("crz", [theta], [ctl, tgt], circ)
35
36 def qasm(self):
37 """Return OPENQASM string."""
38 ctl = self.arg[0]
39 tgt = self.arg[1]
40 theta = self.param[0]
41 return self._qasmif("crz(%s) %s[%d],%s[%d];" % (theta, ctl[0].openqasm_name, ctl[1],
42 tgt[0].openqasm_name, tgt[1]))
43
44 def inverse(self):
45 """Invert this gate."""
46 self.param[0] = -self.param[0]
47 return self
48
49 def reapply(self, circ):
50 """Reapply this gate to corresponding qubits in circ."""
51 self._modifiers(circ.crz(self.param[0], self.arg[0], self.arg[1]))
52
53
54 def crz(self, theta, ctl, tgt):
55 """Apply crz from ctl to tgt with angle theta."""
56 if isinstance(ctl, QuantumRegister) and \
57 isinstance(tgt, QuantumRegister) and len(ctl) == len(tgt):
58 instructions = InstructionSet()
59 for i in range(ctl.size):
60 instructions.add(self.crz(theta, (ctl, i), (tgt, i)))
61 return instructions
62
63 self._check_qubit(ctl)
64 self._check_qubit(tgt)
65 self._check_dups([ctl, tgt])
66 return self._attach(CrzGate(theta, ctl, tgt, self))
67
68
69 QuantumCircuit.crz = crz
70 CompositeGate.crz = crz
71
[end of qiskit/extensions/standard/crz.py]
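Like the initializer, `crz` is attached to `QuantumCircuit`, and the register branch at the top of `crz()` broadcasts the gate over two equal-length registers. A small sketch of both call styles, under the same API assumptions as the previous sketch:

```python
from qiskit import QuantumRegister, QuantumCircuit
import qiskit.extensions.standard  # attaches crz() and the other standard gate helpers

ctl = QuantumRegister("c", 2)       # constructor arguments (name, size) are assumed
tgt = QuantumRegister("t", 2)
circuit = QuantumCircuit(ctl, tgt)

# One control/target pair: attaches a single CrzGate.
circuit.crz(0.3, (ctl, 0), (tgt, 0))

# Two equal-length registers: the loop in crz() attaches CrzGate(0.3, (ctl, i), (tgt, i))
# for i = 0, 1 and returns them bundled in an InstructionSet.
circuit.crz(0.3, ctl, tgt)
```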
[start of qiskit/extensions/standard/rz.py]
1 # -*- coding: utf-8 -*-
2 # pylint: disable=invalid-name
3
4 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 # =============================================================================
18
19 """
20 Rotation around the z-axis.
21 """
22 from qiskit import CompositeGate
23 from qiskit import Gate
24 from qiskit import InstructionSet
25 from qiskit import QuantumCircuit
26 from qiskit import QuantumRegister
27 from qiskit.extensions.standard import header # pylint: disable=unused-import
28
29
30 class RZGate(Gate):
31 """rotation around the z-axis."""
32
33 def __init__(self, phi, qubit, circ=None):
34 """Create new rz single qubit gate."""
35 super().__init__("rz", [phi], [qubit], circ)
36
37 def qasm(self):
38 """Return OPENQASM string."""
39 qubit = self.arg[0]
40 phi = self.param[0]
41 return self._qasmif("rz(%s) %s[%d];" % (phi, qubit[0].openqasm_name, qubit[1]))
42
43 def inverse(self):
44 """Invert this gate.
45
46 rz(phi)^dagger = rz(-phi)
47 """
48 self.param[0] = -self.param[0]
49 return self
50
51 def reapply(self, circ):
52 """Reapply this gate to corresponding qubits in circ."""
53 self._modifiers(circ.rz(self.param[0], self.arg[0]))
54
55
56 def rz(self, phi, q):
57 """Apply Rz to q."""
58 if isinstance(q, QuantumRegister):
59 instructions = InstructionSet()
60 for j in range(q.size):
61 instructions.add(self.rz(phi, (q, j)))
62 return instructions
63
64 self._check_qubit(q)
65 return self._attach(RZGate(phi, q, self))
66
67
68 QuantumCircuit.rz = rz
69 CompositeGate.rz = rz
70
[end of qiskit/extensions/standard/rz.py]
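`rz` follows the same pattern: a single qubit gets one `RZGate`, while passing a whole `QuantumRegister` expands qubit by qubit into an `InstructionSet`. A sketch, with the same API assumptions as above:

```python
from qiskit import QuantumRegister, QuantumCircuit
import qiskit.extensions.standard  # attaches rz() to QuantumCircuit

q = QuantumRegister("q", 3)         # constructor arguments (name, size) are assumed
circuit = QuantumCircuit(q)

circuit.rz(0.1, (q, 0))   # single qubit: one RZGate
circuit.rz(0.1, q)        # whole register: rz(0.1, (q, j)) for j = 0, 1, 2
```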
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>

Repository: Qiskit/qiskit
Base commit: 6b6c370725dec7b61618c1d6f1cc62b8cf1a0907

Problem statement:
Error in Rz gate
There is a bug in this line and the line above it (rz gate):
https://github.com/QISKit/qiskit-sdk-py/blob/4f15c99c1dc4b91abd5d54ccc6b94c8e38ac4997/qiskit/extensions/standard/rz.py#L61
It is replicated in rx and ry as well.

Created at: 2018-03-26T20:22:30Z

Fix patch:
<patch>
diff --git a/qiskit/_quantumcircuit.py b/qiskit/_quantumcircuit.py
--- a/qiskit/_quantumcircuit.py
+++ b/qiskit/_quantumcircuit.py
@@ -94,7 +94,7 @@ def has_register(self, register):
def get_qregs(self):
"""Get the qregs from the registers."""
- qregs = {}
+ qregs = OrderedDict()
for name, register in self.regs.items():
if isinstance(register, QuantumRegister):
qregs[name] = register
@@ -102,7 +102,7 @@ def get_qregs(self):
def get_cregs(self):
"""Get the cregs from the registers."""
- cregs = {}
+ cregs = OrderedDict()
for name, register in self.regs.items():
if isinstance(register, ClassicalRegister):
cregs[name] = register
@@ -202,6 +202,14 @@ def _check_qreg(self, register):
def _check_qubit(self, qubit):
"""Raise exception if qubit is not in this circuit or bad format."""
+ if not isinstance(qubit, tuple):
+ raise QISKitError("%s is not a tuple."
+ "A qubit should be formated as a tuple." % str(qubit))
+ if not len(qubit) == 2:
+ raise QISKitError("%s is not a tuple with two elements, but %i instead" % len(qubit))
+ if not isinstance(qubit[1], int):
+ raise QISKitError("The second element of a tuple defining a qubit should be an int:"
+ "%s was found instead" % type(qubit[1]).__name__)
self._check_qreg(qubit[0])
qubit[0].check_range(qubit[1])
diff --git a/qiskit/extensions/standard/barrier.py b/qiskit/extensions/standard/barrier.py
--- a/qiskit/extensions/standard/barrier.py
+++ b/qiskit/extensions/standard/barrier.py
@@ -22,7 +22,6 @@
from qiskit import QuantumCircuit
from qiskit import CompositeGate
from qiskit import QuantumRegister
-from qiskit.extensions._extensionerror import ExtensionError
from qiskit.extensions.standard import header # pylint: disable=unused-import
@@ -55,26 +54,28 @@ def reapply(self, circ):
self._modifiers(circ.barrier(*self.arg))
-def barrier(self, *tuples):
- """Apply barrier to tuples (reg, idx)."""
- tuples = list(tuples)
- if not tuples: # TODO: implement this for all single qubit gates
- if isinstance(self, QuantumCircuit):
- for register in self.regs.values():
- if isinstance(register, QuantumRegister):
- tuples.append(register)
- if not tuples:
- raise ExtensionError("no arguments passed")
+def barrier(self, *args):
+ """Apply barrier to circuit.
+ If args is None, applies to all the qbits.
+ Args is a list of QuantumRegister or single qubits.
+ For QuantumRegister, applies barrier to all the qbits in that register."""
qubits = []
- for tuple_element in tuples:
- if isinstance(tuple_element, QuantumRegister):
- for j in range(tuple_element.size):
- self._check_qubit((tuple_element, j))
- qubits.append((tuple_element, j))
+
+ if not args: # None
+ for qreg in self.get_qregs().values():
+ for j in range(qreg.size):
+ qubits.append((qreg, j))
+
+ for arg in args:
+ if isinstance(arg, QuantumRegister):
+ for j in range(arg.size):
+ qubits.append((arg, j))
else:
- self._check_qubit(tuple_element)
- qubits.append(tuple_element)
+ qubits.append(arg)
+
self._check_dups(qubits)
+ for qubit in qubits:
+ self._check_qubit(qubit)
return self._attach(Barrier(qubits, self))
diff --git a/qiskit/extensions/standard/ch.py b/qiskit/extensions/standard/ch.py
--- a/qiskit/extensions/standard/ch.py
+++ b/qiskit/extensions/standard/ch.py
@@ -59,6 +59,18 @@ def ch(self, ctl, tgt):
instructions.add(self.ch((ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(ctl.size):
+ gs.add(self.ch((ctl, j), tgt))
+ return gs
+
+ if isinstance(tgt, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(tgt.size):
+ gs.add(self.ch(ctl, (tgt, j)))
+ return gs
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/crz.py b/qiskit/extensions/standard/crz.py
--- a/qiskit/extensions/standard/crz.py
+++ b/qiskit/extensions/standard/crz.py
@@ -60,6 +60,18 @@ def crz(self, theta, ctl, tgt):
instructions.add(self.crz(theta, (ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(ctl.size):
+ instructions.add(self.crz(theta, (ctl, j), tgt))
+ return instructions
+
+ if isinstance(tgt, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(tgt.size):
+ instructions.add(self.crz(theta, ctl, (tgt, j)))
+ return instructions
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cu1.py b/qiskit/extensions/standard/cu1.py
--- a/qiskit/extensions/standard/cu1.py
+++ b/qiskit/extensions/standard/cu1.py
@@ -60,6 +60,18 @@ def cu1(self, theta, ctl, tgt):
instructions.add(self.cu1(theta, (ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(ctl.size):
+ instructions.add(self.cu1(theta, (ctl, j), tgt))
+ return instructions
+
+ if isinstance(tgt, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(tgt.size):
+ instructions.add(self.cu1(theta, ctl, (tgt, j)))
+ return instructions
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py
--- a/qiskit/extensions/standard/cu3.py
+++ b/qiskit/extensions/standard/cu3.py
@@ -67,6 +67,18 @@ def cu3(self, theta, phi, lam, ctl, tgt):
instructions.add(self.cu3(theta, phi, lam, (ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(ctl.size):
+ instructions.add(self.cu3(theta, phi, lam, (ctl, j), tgt))
+ return instructions
+
+ if isinstance(tgt, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(tgt.size):
+ instructions.add(self.cu3(theta, phi, lam, ctl, (tgt, j)))
+ return instructions
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cx.py b/qiskit/extensions/standard/cx.py
--- a/qiskit/extensions/standard/cx.py
+++ b/qiskit/extensions/standard/cx.py
@@ -51,7 +51,7 @@ def reapply(self, circ):
def cx(self, ctl, tgt):
- """Apply CNOT from ctl to tgt."""
+ """Apply CX from ctl to tgt."""
if isinstance(ctl, QuantumRegister) and \
isinstance(tgt, QuantumRegister) and len(ctl) == len(tgt):
instructions = InstructionSet()
@@ -59,6 +59,18 @@ def cx(self, ctl, tgt):
instructions.add(self.cx((ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(ctl.size):
+ gs.add(self.cx((ctl, j), tgt))
+ return gs
+
+ if isinstance(tgt, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(tgt.size):
+ gs.add(self.cx(ctl, (tgt, j)))
+ return gs
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cxbase.py b/qiskit/extensions/standard/cxbase.py
--- a/qiskit/extensions/standard/cxbase.py
+++ b/qiskit/extensions/standard/cxbase.py
@@ -21,6 +21,8 @@
from qiskit import CompositeGate
from qiskit import Gate
from qiskit import QuantumCircuit
+from qiskit._instructionset import InstructionSet
+from qiskit._quantumregister import QuantumRegister
from qiskit.extensions.standard import header # pylint: disable=unused-import
@@ -49,6 +51,27 @@ def reapply(self, circ):
def cx_base(self, ctl, tgt):
"""Apply CX ctl, tgt."""
+
+ if isinstance(ctl, QuantumRegister) and \
+ isinstance(tgt, QuantumRegister) and len(ctl) == len(tgt):
+ # apply CX to qubits between two registers
+ instructions = InstructionSet()
+ for i in range(ctl.size):
+ instructions.add(self.cx_base((ctl, i), (tgt, i)))
+ return instructions
+
+ if isinstance(ctl, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(ctl.size):
+ instructions.add(self.cx_base((ctl, j), tgt))
+ return instructions
+
+ if isinstance(tgt, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(tgt.size):
+ instructions.add(self.cx_base(ctl, (tgt, j)))
+ return instructions
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cy.py b/qiskit/extensions/standard/cy.py
--- a/qiskit/extensions/standard/cy.py
+++ b/qiskit/extensions/standard/cy.py
@@ -59,6 +59,18 @@ def cy(self, ctl, tgt):
instructions.add(self.cy((ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(ctl.size):
+ gs.add(self.cy((ctl, j), tgt))
+ return gs
+
+ if isinstance(tgt, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(tgt.size):
+ gs.add(self.cy(ctl, (tgt, j)))
+ return gs
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/cz.py b/qiskit/extensions/standard/cz.py
--- a/qiskit/extensions/standard/cz.py
+++ b/qiskit/extensions/standard/cz.py
@@ -58,6 +58,18 @@ def cz(self, ctl, tgt):
instructions.add(self.cz((ctl, i), (tgt, i)))
return instructions
+ if isinstance(ctl, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(ctl.size):
+ instructions.add(self.cz((ctl, j), tgt))
+ return instructions
+
+ if isinstance(tgt, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(tgt.size):
+ instructions.add(self.cz(ctl, (tgt, j)))
+ return instructions
+
self._check_qubit(ctl)
self._check_qubit(tgt)
self._check_dups([ctl, tgt])
diff --git a/qiskit/extensions/standard/h.py b/qiskit/extensions/standard/h.py
--- a/qiskit/extensions/standard/h.py
+++ b/qiskit/extensions/standard/h.py
@@ -56,6 +56,12 @@ def h(self, q):
instructions.add(self.h((q, j)))
return instructions
+ if isinstance(q, QuantumRegister):
+ instructions = InstructionSet()
+ for j in range(q.size):
+ instructions.add(self.h(q))
+ return instructions
+
self._check_qubit(q)
return self._attach(HGate(q, self))
diff --git a/qiskit/extensions/standard/swap.py b/qiskit/extensions/standard/swap.py
--- a/qiskit/extensions/standard/swap.py
+++ b/qiskit/extensions/standard/swap.py
@@ -53,10 +53,10 @@ def reapply(self, circ):
def swap(self, ctl, tgt):
"""Apply SWAP from ctl to tgt."""
if isinstance(ctl, QuantumRegister) and \
- isinstance(tgt, QuantumRegister) and len(ctl) == len(tgt):
+ isinstance(tgt, QuantumRegister) and len(ctl) == len(tgt):
instructions = InstructionSet()
- for i in range(ctl.size):
- instructions.add(self.swap((ctl, i), (tgt, i)))
+ for j in range(ctl.size):
+ instructions.add(self.swap((ctl, j), (tgt, j)))
return instructions
self._check_qubit(ctl)
diff --git a/qiskit/extensions/standard/ubase.py b/qiskit/extensions/standard/ubase.py
--- a/qiskit/extensions/standard/ubase.py
+++ b/qiskit/extensions/standard/ubase.py
@@ -21,7 +21,9 @@
"""
from qiskit import CompositeGate
from qiskit import Gate
+from qiskit import InstructionSet
from qiskit import QuantumCircuit
+from qiskit import QuantumRegister
from qiskit.extensions.standard import header # pylint: disable=unused-import
@@ -59,6 +61,12 @@ def reapply(self, circ):
def u_base(self, theta, phi, lam, q):
"""Apply U to q."""
+ if isinstance(q, QuantumRegister):
+ gs = InstructionSet()
+ for j in range(q.size):
+ gs.add(self.u_base(theta, phi, lam, (q, j)))
+ return gs
+
self._check_qubit(q)
return self._attach(UBase(theta, phi, lam, q, self))
</patch>
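Roughly, the patch above lets two-qubit helpers accept a register for just one of their arguments (broadcasting over it), tightens `_check_qubit`, and makes a bare `barrier()` cover every qubit in the circuit. An illustrative sketch of the resulting call styles, with the same API caveats as the earlier sketches:

```python
from qiskit import QuantumRegister, QuantumCircuit
import qiskit.extensions.standard  # attaches the patched gate helpers

ctl = QuantumRegister("c", 3)       # constructor arguments (name, size) are assumed
tgt = QuantumRegister("t", 3)
circuit = QuantumCircuit(ctl, tgt)

circuit.cx(ctl, (tgt, 0))   # register control, single target: cx((ctl, j), (tgt, 0)) for each j
circuit.cz((ctl, 0), tgt)   # single control, register target: cz((ctl, 0), (tgt, j)) for each j
circuit.barrier()           # no arguments: barrier over every qubit of every register
```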

Instance: Qiskit__qiskit-4621

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pulse jobs fail schema validation
Every pulse job is failing schema validation. This happened in https://github.com/Qiskit/qiskit-terra/pull/4539.
```py
from qiskit import *
IBMQ.load_account()
p = IBMQ.get_provider(project='......')
backend = p.get_backend('ibmq_armonk')
c = QuantumCircuit(1, 1)
c.measure(0, 0)
c = transpile(c, backend)
s = schedule(c, backend)
qobj = assemble(s, backend=backend, shots=100)
job = backend.run(qobj)
```
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/zt-e4sscbg2-p8NHTezPVkC3r8nV6BIUVw)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, You can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of examples/python/ibmq/using_qiskit_terra_level_1.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Example showing how to use Qiskit at level 1 (intermediate).
17
18 This example shows how an intermediate user interacts with Terra.
19 It builds some circuits and transpiles them with transpile options.
20 It then makes a qobj object which is just a container to be run on a backend.
21 The same qobj can be submitted to many backends (as shown).
22 It is the user's responsibility to make sure it can be run (i.e. it conforms
23 to the restrictions of the backend, if any).
24 This is useful when you want to compare the same
25 circuit on different backends without recompiling the whole circuit,
26 or just want to change some runtime parameters.
27
28 To control the passes that transform the circuit, we have a pass manager
29 for the level 2 user.
30 """
31
32 # Import the Qiskit modules
33 from qiskit import IBMQ, BasicAer
34 from qiskit.circuit import QuantumCircuit
35 from qiskit.compiler import transpile, assemble
36 from qiskit.providers.ibmq import least_busy
37 from qiskit.tools.monitor import job_monitor
38
39 provider = IBMQ.load_account()
40
41 # Making first circuit: bell state
42 qc1 = QuantumCircuit(2, 2, name="bell")
43 qc1.h(0)
44 qc1.cx(0, 1)
45 qc1.measure([0,1], [0,1])
46
47 # Making another circuit: superpositions
48 qc2 = QuantumCircuit(2, 2, name="superposition")
49 qc2.h([0,1])
50 qc2.measure([0,1], [0,1])
51
52 # Setting up the backend
53 print("(Aer Backends)")
54 for backend in BasicAer.backends():
55 print(backend.status())
56 qasm_simulator = BasicAer.get_backend('qasm_simulator')
57
58
59 # Compile and run the circuit on a real device backend
60 # See a list of available remote backends
61 print("\n(IBMQ Backends)")
62 for backend in provider.backends():
63 print(backend.status())
64
65 try:
66 # select least busy available device and execute.
67 least_busy_device = least_busy(provider.backends(simulator=False))
68 except:
69 print("All devices are currently unavailable.")
70
71 print("Running on current least busy device: ", least_busy_device)
72
73 # Transpile the circuits to make them compatible with the experimental backend
74 [qc1_new, qc2_new] = transpile(circuits=[qc1, qc2], backend=least_busy_device)
75
76 print("Bell circuit before transpile:")
77 print(qc1)
78 print("Bell circuit after transpile:")
79 print(qc1_new)
80 print("Superposition circuit before transpile:")
81 print(qc2)
82 print("Superposition circuit after transpile:")
83 print(qc2_new)
84
85 # Assemble the two circuits into a runnable qobj
86 qobj = assemble([qc1_new, qc2_new], shots=1000)
87
88 # Running qobj on the simulator
89 sim_job = qasm_simulator.run(qobj)
90
91 # Getting the result
92 sim_result=sim_job.result()
93
94 # Show the results
95 print(sim_result.get_counts(qc1))
96 print(sim_result.get_counts(qc2))
97
98 # Running the job.
99 exp_job = least_busy_device.run(qobj)
100
101 job_monitor(exp_job)
102 exp_result = exp_job.result()
103
104 # Show the results
105 print(exp_result.get_counts(qc1))
106 print(exp_result.get_counts(qc2))
107
[end of examples/python/ibmq/using_qiskit_terra_level_1.py]
[start of examples/python/ibmq/using_qiskit_terra_level_2.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Example showing how to use Qiskit at level 2 (advanced).
17
18 This example shows how an advanced user interacts with Terra.
19 It builds some circuits and transpiles them with the pass_manager.
20 """
21
22 # Import the Qiskit modules
23 from qiskit import IBMQ, BasicAer
24 from qiskit.circuit import QuantumCircuit
25 from qiskit.circuit.library.standard_gates import SwapGate
26 from qiskit.compiler import assemble
27 from qiskit.providers.ibmq import least_busy
28 from qiskit.tools.monitor import job_monitor
29
30 from qiskit.transpiler import PassManager
31 from qiskit.transpiler import CouplingMap
32 from qiskit.transpiler.passes import Unroller
33 from qiskit.transpiler.passes import FullAncillaAllocation
34 from qiskit.transpiler.passes import EnlargeWithAncilla
35 from qiskit.transpiler.passes import TrivialLayout
36 from qiskit.transpiler.passes import Decompose
37 from qiskit.transpiler.passes import CXDirection
38 from qiskit.transpiler.passes import LookaheadSwap
39
40
41 provider = IBMQ.load_account()
42
43 # Making first circuit: bell state
44 qc1 = QuantumCircuit(4, 4)
45 qc1.h(0)
46 qc1.cx(0, 1)
47 qc1.measure([0, 1], [0, 1])
48
49 # Making another circuit: GHZ State
50 qc2 = QuantumCircuit(4, 4)
51 qc2.h([0, 1, 2, 3])
52 qc2.cx(0, 1)
53 qc2.cx(0, 2)
54 qc2.cx(0, 3)
55 qc2.measure([0, 1, 2, 3], [0, 1, 2, 3])
56
57 # Setting up the backend
58 print("(Aer Backends)")
59 for backend in BasicAer.backends():
60 print(backend.status())
61 qasm_simulator = BasicAer.get_backend('qasm_simulator')
62
63
64 # Compile and run the circuit on a real device backend
65 # See a list of available remote backends
66 print("\n(IBMQ Backends)")
67 for backend in provider.backends():
68 print(backend.status())
69
70 try:
71 # select least busy available device and execute.
72 least_busy_device = least_busy(provider.backends(simulator=False))
73 except:
74 print("All devices are currently unavailable.")
75
76 print("Running on current least busy device: ", least_busy_device)
77
78
79 # making a pass manager to compile the circuits
80 coupling_map = CouplingMap(least_busy_device.configuration().coupling_map)
81 print("coupling map: ", coupling_map)
82
83 pm = PassManager()
84
85 # Use the trivial layout
86 pm.append(TrivialLayout(coupling_map))
87
88 # Extend the dag/layout with ancillas using the full coupling map
89 pm.append(FullAncillaAllocation(coupling_map))
90 pm.append(EnlargeWithAncilla())
91
92 # Swap mapper
93 pm.append(LookaheadSwap(coupling_map))
94
95 # Expand swaps
96 pm.append(Decompose(SwapGate))
97
98 # Simplify CXs
99 pm.append(CXDirection(coupling_map))
100
101 # unroll to single qubit gates
102 pm.append(Unroller(['u1', 'u2', 'u3', 'id', 'cx']))
103 qc1_new = pm.run(qc1)
104 qc2_new = pm.run(qc2)
105
106 print("Bell circuit before passes:")
107 print(qc1)
108 print("Bell circuit after passes:")
109 print(qc1_new)
110 print("Superposition circuit before passes:")
111 print(qc2)
112 print("Superposition circuit after passes:")
113 print(qc2_new)
114
115 # Assemble the two circuits into a runnable qobj
116 qobj = assemble([qc1_new, qc2_new], shots=1000)
117
118 # Running qobj on the simulator
119 print("Running on simulator:")
120 sim_job = qasm_simulator.run(qobj)
121
122 # Getting the result
123 sim_result = sim_job.result()
124
125 # Show the results
126 print(sim_result.get_counts(qc1))
127 print(sim_result.get_counts(qc2))
128
129 # Running the job.
130 print("Running on device:")
131 exp_job = least_busy_device.run(qobj)
132
133 job_monitor(exp_job)
134 exp_result = exp_job.result()
135
136 # Show the results
137 print(exp_result.get_counts(qc1))
138 print(exp_result.get_counts(qc2))
139
[end of examples/python/ibmq/using_qiskit_terra_level_2.py]
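The pipeline above is device specific, but the underlying pattern is simply: build a `PassManager`, append passes, and run it on a circuit. A minimal, backend-free sketch using only passes already imported in the example:

```python
from qiskit.circuit import QuantumCircuit
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

pm = PassManager()
pm.append(Unroller(['u1', 'u2', 'u3', 'id', 'cx']))  # rewrite into the chosen basis
unrolled = pm.run(qc)   # returns a new, transformed circuit
print(unrolled)
```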
[start of qiskit/compiler/assemble.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Assemble function for converting a list of circuits into a qobj"""
16 import uuid
17 import copy
18 import logging
19 import warnings
20 from time import time
21
22 from typing import Union, List, Dict, Optional
23 from qiskit.circuit import QuantumCircuit, Qubit, Parameter
24 from qiskit.exceptions import QiskitError
25 from qiskit.pulse import ScheduleComponent, LoConfig
26 from qiskit.assembler.run_config import RunConfig
27 from qiskit.assembler import assemble_circuits, assemble_schedules
28 from qiskit.qobj import QobjHeader, Qobj
29 from qiskit.qobj.utils import MeasLevel, MeasReturnType
30 from qiskit.validation.jsonschema import SchemaValidationError
31 from qiskit.providers import BaseBackend
32 from qiskit.pulse.channels import PulseChannel
33 from qiskit.pulse import Schedule
34
35 LOG = logging.getLogger(__name__)
36
37
38 def _log_assembly_time(start_time, end_time):
39 log_msg = "Total Assembly Time - %.5f (ms)" % ((end_time - start_time) * 1000)
40 LOG.info(log_msg)
41
42
43 # TODO: parallelize over the experiments (serialize each separately, then add global header/config)
44 def assemble(experiments: Union[QuantumCircuit, List[QuantumCircuit], Schedule, List[Schedule]],
45 backend: Optional[BaseBackend] = None,
46 qobj_id: Optional[str] = None,
47 qobj_header: Optional[Union[QobjHeader, Dict]] = None,
48 shots: Optional[int] = None, memory: Optional[bool] = False,
49 max_credits: Optional[int] = None,
50 seed_simulator: Optional[int] = None,
51 qubit_lo_freq: Optional[List[int]] = None,
52 meas_lo_freq: Optional[List[int]] = None,
53 qubit_lo_range: Optional[List[int]] = None,
54 meas_lo_range: Optional[List[int]] = None,
55 schedule_los: Optional[Union[List[Union[Dict[PulseChannel, float], LoConfig]],
56 Union[Dict[PulseChannel, float], LoConfig]]] = None,
57 meas_level: Union[int, MeasLevel] = MeasLevel.CLASSIFIED,
58 meas_return: Union[str, MeasReturnType] = MeasReturnType.AVERAGE,
59 meas_map: Optional[List[List[Qubit]]] = None,
60 memory_slot_size: int = 100,
61 rep_time: Optional[float] = None,
62 rep_delay: Optional[float] = None,
63 parameter_binds: Optional[List[Dict[Parameter, float]]] = None,
64 parametric_pulses: Optional[List[str]] = None,
65 init_qubits: bool = True,
66 **run_config: Dict) -> Qobj:
67 """Assemble a list of circuits or pulse schedules into a ``Qobj``.
68
69 This function serializes the payloads, which could be either circuits or schedules,
70 to create ``Qobj`` "experiments". It further annotates the experiment payload with
71 header and configurations.
72
73 Args:
74 experiments: Circuit(s) or pulse schedule(s) to execute
75 backend: If set, some runtime options are automatically grabbed from
76 ``backend.configuration()`` and ``backend.defaults()``.
77 If any other option is explicitly set (e.g., ``rep_time``), it
78 will override the backend's.
79 If any other options is set in the run_config, it will
80 also override the backend's.
81 qobj_id: String identifier to annotate the ``Qobj``
82 qobj_header: User input that will be inserted in ``Qobj`` header, and will also be
83 copied to the corresponding Result header. Headers do not affect the run.
84 shots: Number of repetitions of each circuit, for sampling. Default: 1024
85 or ``max_shots`` from the backend configuration, whichever is smaller
86 memory: If ``True``, per-shot measurement bitstrings are returned as well
87 (provided the backend supports it). For OpenPulse jobs, only
88 measurement level 2 supports this option.
89 max_credits: Maximum credits to spend on job. Default: 10
90 seed_simulator: Random seed to control sampling, for when backend is a simulator
91 qubit_lo_freq: List of default qubit LO frequencies in Hz. Will be overridden by
92 ``schedule_los`` if set.
93 meas_lo_freq: List of default measurement LO frequencies in Hz. Will be overridden
94 by ``schedule_los`` if set.
95 qubit_lo_range: List of drive LO ranges each of form ``[range_min, range_max]`` in Hz.
96 Used to validate the supplied qubit frequencies.
97 meas_lo_range: List of measurement LO ranges each of form ``[range_min, range_max]`` in Hz.
98 Used to validate the supplied qubit frequencies.
99 schedule_los: Experiment LO configurations, frequencies are given in Hz.
100 meas_level: Set the appropriate level of the measurement output for pulse experiments.
101 meas_return: Level of measurement data for the backend to return.
102
103 For ``meas_level`` 0 and 1:
104 * ``single`` returns information from every shot.
105 * ``avg`` returns average measurement output (averaged over number of shots).
106 meas_map: List of lists, containing qubits that must be measured together.
107 memory_slot_size: Size of each memory slot if the output is Level 0.
108 rep_time: Time per program execution in sec. Must be from the list provided
109 by the backend (``backend.configuration().rep_times``).
110 rep_delay: Delay between programs in sec. Only supported on certain
111 backends (``backend.configuration().dynamic_reprate_enabled`` ).
112 If supported, ``rep_delay`` will be used instead of ``rep_time``. Must be from the list
113 provided by the backend (``backend.configuration().rep_delays``).
114 parameter_binds: List of Parameter bindings over which the set of experiments will be
115 executed. Each list element (bind) should be of the form
116 {Parameter1: value1, Parameter2: value2, ...}. All binds will be
117 executed across all experiments; e.g., if parameter_binds is a
118 length-n list, and there are m experiments, a total of m x n
119 experiments will be run (one for each experiment/bind pair).
120 parametric_pulses: A list of pulse shapes which are supported internally on the backend.
121 Example::
122
123 ['gaussian', 'constant']
124 init_qubits: Whether to reset the qubits to the ground state for each shot.
125 Default: ``True``.
126 **run_config: Extra arguments used to configure the run (e.g., for Aer configurable
127 backends). Refer to the backend documentation for details on these
128 arguments.
129
130 Returns:
131 A ``Qobj`` that can be run on a backend. Depending on the type of input,
132 this will be either a ``QasmQobj`` or a ``PulseQobj``.
133
134 Raises:
135 QiskitError: if the input cannot be interpreted as either circuits or schedules
136 """
137 start_time = time()
138 experiments = experiments if isinstance(experiments, list) else [experiments]
139 qobj_id, qobj_header, run_config_common_dict = _parse_common_args(backend, qobj_id, qobj_header,
140 shots, memory, max_credits,
141 seed_simulator, init_qubits,
142 **run_config)
143
144 # assemble either circuits or schedules
145 if all(isinstance(exp, QuantumCircuit) for exp in experiments):
146 run_config = _parse_circuit_args(parameter_binds, **run_config_common_dict)
147
148 # If circuits are parameterized, bind parameters and remove from run_config
149 bound_experiments, run_config = _expand_parameters(circuits=experiments,
150 run_config=run_config)
151 end_time = time()
152 _log_assembly_time(start_time, end_time)
153 return assemble_circuits(circuits=bound_experiments, qobj_id=qobj_id,
154 qobj_header=qobj_header, run_config=run_config)
155
156 elif all(isinstance(exp, ScheduleComponent) for exp in experiments):
157 run_config = _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq,
158 qubit_lo_range, meas_lo_range,
159 schedule_los, meas_level, meas_return,
160 meas_map, memory_slot_size,
161 rep_time, rep_delay,
162 parametric_pulses,
163 **run_config_common_dict)
164
165 end_time = time()
166 _log_assembly_time(start_time, end_time)
167 return assemble_schedules(schedules=experiments, qobj_id=qobj_id,
168 qobj_header=qobj_header, run_config=run_config)
169
170 else:
171 raise QiskitError("bad input to assemble() function; "
172 "must be either circuits or schedules")
173
174
175 # TODO: rework to return a list of RunConfigs (one for each experiments), and a global one
176 def _parse_common_args(backend, qobj_id, qobj_header, shots,
177 memory, max_credits, seed_simulator, init_qubits,
178 **run_config):
179 """Resolve the various types of args allowed to the assemble() function through
180 duck typing, overriding args, etc. Refer to the assemble() docstring for details on
181 what types of inputs are allowed.
182
183 Here the args are resolved by converting them to standard instances, and prioritizing
184 them in case a run option is passed through multiple args (explicitly setting an arg
185 has more priority than the arg set by backend)
186
187 Returns:
188 RunConfig: a run config, which is a standardized object that configures the qobj
189 and determines the runtime environment.
190
191 Raises:
192 QiskitError: if the memory arg is True and the backend does not support
193 memory. Also if shots exceeds max_shots for the configured backend.
194 """
195 # grab relevant info from backend if it exists
196 backend_config = None
197 if backend:
198 backend_config = backend.configuration()
199 # check for memory flag applied to backend that does not support memory
200 if memory and not backend_config.memory:
201 raise QiskitError("memory not supported by backend {}"
202 .format(backend_config.backend_name))
203
204 # an identifier for the Qobj
205 qobj_id = qobj_id or str(uuid.uuid4())
206
207 # The header that goes at the top of the Qobj (and later Result)
208 # we process it as dict, then write entries that are not None to a QobjHeader object
209 qobj_header = qobj_header or {}
210 if isinstance(qobj_header, QobjHeader):
211 qobj_header = qobj_header.to_dict()
212 backend_name = getattr(backend_config, 'backend_name', None)
213 backend_version = getattr(backend_config, 'backend_version', None)
214 qobj_header = {**dict(backend_name=backend_name, backend_version=backend_version),
215 **qobj_header}
216 qobj_header = QobjHeader(**{k: v for k, v in qobj_header.items() if v is not None})
217
218 max_shots = getattr(backend_config, 'max_shots', None)
219 if shots is None:
220 if max_shots:
221 shots = min(1024, max_shots)
222 else:
223 shots = 1024
224 elif max_shots and max_shots < shots:
225 raise QiskitError(
226 'Number of shots specified: %s exceeds max_shots property of the '
227 'backend: %s.' % (shots, max_shots))
228
229 # create run configuration and populate
230 run_config_dict = dict(shots=shots,
231 memory=memory,
232 max_credits=max_credits,
233 seed_simulator=seed_simulator,
234 init_qubits=init_qubits,
235 **run_config)
236
237 return qobj_id, qobj_header, run_config_dict
238
239
240 def _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq, qubit_lo_range,
241 meas_lo_range, schedule_los, meas_level,
242 meas_return, meas_map,
243 memory_slot_size,
244 rep_time, rep_delay,
245 parametric_pulses,
246 **run_config):
247 """Build a pulse RunConfig replacing unset arguments with defaults derived from the `backend`.
248 See `assemble` for more information on the required arguments.
249
250 Returns:
251 RunConfig: a run config, which is a standardized object that configures the qobj
252 and determines the runtime environment.
253 Raises:
254 SchemaValidationError: if the given meas_level is not allowed for the given `backend`.
255 """
256 # grab relevant info from backend if it exists
257 backend_config = None
258 backend_default = None
259 if backend:
260 backend_default = backend.defaults()
261 backend_config = backend.configuration()
262
263 if meas_level not in getattr(backend_config, 'meas_levels', [MeasLevel.CLASSIFIED]):
264 raise SchemaValidationError(
265 ('meas_level = {} not supported for backend {}, only {} is supported'
266 ).format(meas_level, backend_config.backend_name, backend_config.meas_levels)
267 )
268
269 meas_map = meas_map or getattr(backend_config, 'meas_map', None)
270
271 schedule_los = schedule_los or []
272 if isinstance(schedule_los, (LoConfig, dict)):
273 schedule_los = [schedule_los]
274
275 # Convert to LoConfig if LO configuration supplied as dictionary
276 schedule_los = [lo_config if isinstance(lo_config, LoConfig) else LoConfig(lo_config)
277 for lo_config in schedule_los]
278
279 if not qubit_lo_freq and hasattr(backend_default, 'qubit_freq_est'):
280 qubit_lo_freq = backend_default.qubit_freq_est
281 if not meas_lo_freq and hasattr(backend_default, 'meas_freq_est'):
282 meas_lo_freq = backend_default.meas_freq_est
283
284 qubit_lo_range = qubit_lo_range or getattr(backend_config, 'qubit_lo_range', None)
285 meas_lo_range = meas_lo_range or getattr(backend_config, 'meas_lo_range', None)
286
287 dynamic_reprate_enabled = getattr(backend_config, 'dynamic_reprate_enabled', False)
288
289 rep_time = rep_time or getattr(backend_config, 'rep_times', None)
290 if rep_time:
291 if dynamic_reprate_enabled:
292 warnings.warn("Dynamic rep rates are supported on this backend. 'rep_delay' will be "
293 "used instead, if specified.", RuntimeWarning)
294 if isinstance(rep_time, list):
295 rep_time = rep_time[0]
296 rep_time = rep_time * 1e6 # convert sec to μs
297
298 rep_delay = rep_delay or getattr(backend_config, 'rep_delays', None)
299 if rep_delay:
300 if not dynamic_reprate_enabled:
301 warnings.warn("Dynamic rep rates not supported on this backend. 'rep_time' will be "
302 "used instead.", RuntimeWarning)
303
304 if isinstance(rep_delay, list):
305 rep_delay = rep_delay[0]
306 rep_delay = rep_delay * 1e6 # convert sec to μs
307
308 parametric_pulses = parametric_pulses or getattr(backend_config, 'parametric_pulses', [])
309
310 # create run configuration and populate
311 run_config_dict = dict(qubit_lo_freq=qubit_lo_freq,
312 meas_lo_freq=meas_lo_freq,
313 qubit_lo_range=qubit_lo_range,
314 meas_lo_range=meas_lo_range,
315 schedule_los=schedule_los,
316 meas_level=meas_level,
317 meas_return=meas_return,
318 meas_map=meas_map,
319 memory_slot_size=memory_slot_size,
320 rep_time=rep_time,
321 rep_delay=rep_delay,
322 parametric_pulses=parametric_pulses,
323 **run_config)
324 run_config = RunConfig(**{k: v for k, v in run_config_dict.items() if v is not None})
325
326 return run_config
327
328
329 def _parse_circuit_args(parameter_binds, **run_config):
330 """Build a circuit RunConfig replacing unset arguments with defaults derived from the `backend`.
331 See `assemble` for more information on the required arguments.
332
333 Returns:
334 RunConfig: a run config, which is a standardized object that configures the qobj
335 and determines the runtime environment.
336 """
337 parameter_binds = parameter_binds or []
338
339 # create run configuration and populate
340 run_config_dict = dict(parameter_binds=parameter_binds, **run_config)
341 run_config = RunConfig(**{k: v for k, v in run_config_dict.items() if v is not None})
342
343 return run_config
344
345
346 def _expand_parameters(circuits, run_config):
347 """Verifies that there is a single common set of parameters shared between
348 all circuits and all parameter binds in the run_config. Returns an expanded
349 list of circuits (if parameterized) with all parameters bound, and a copy of
350 the run_config with parameter_binds cleared.
351
352 If neither the circuits nor the run_config specify parameters, the two are
353 returned unmodified.
354
355 Raises:
356 QiskitError: if run_config parameters are not compatible with circuit parameters
357
358 Returns:
359 Tuple(List[QuantumCircuit], RunConfig):
360 - List of input circuits expanded and with parameters bound
361 - RunConfig with parameter_binds removed
362 """
363
364 parameter_binds = run_config.parameter_binds
365 if parameter_binds or \
366 any(circuit.parameters for circuit in circuits):
367
368 all_bind_parameters = [bind.keys()
369 for bind in parameter_binds]
370 all_circuit_parameters = [circuit.parameters for circuit in circuits]
371
372 # Collect set of all unique parameters across all circuits and binds
373 unique_parameters = {param
374 for param_list in all_bind_parameters + all_circuit_parameters
375 for param in param_list}
376
377 # Check that all parameters are common to all circuits and binds
378 if not all_bind_parameters \
379 or not all_circuit_parameters \
380 or any(unique_parameters != bind_params for bind_params in all_bind_parameters) \
381 or any(unique_parameters != parameters for parameters in all_circuit_parameters):
382 raise QiskitError(
383 ('Mismatch between run_config.parameter_binds and all circuit parameters. ' +
384 'Parameter binds: {} ' +
385 'Circuit parameters: {}').format(all_bind_parameters, all_circuit_parameters))
386
387 circuits = [circuit.bind_parameters(binds)
388 for circuit in circuits
389 for binds in parameter_binds]
390
391 # All parameters have been expanded and bound, so remove from run_config
392 run_config = copy.deepcopy(run_config)
393 run_config.parameter_binds = []
394
395 return circuits, run_config
396
[end of qiskit/compiler/assemble.py]
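To make the `parameter_binds` expansion in `assemble()` concrete: with m parameterized circuits and n binds, `_expand_parameters` produces m × n bound experiments and clears `parameter_binds` from the run config. A small sketch with one circuit and two binds:

```python
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.compiler import assemble

theta = Parameter('theta')
qc = QuantumCircuit(1, 1)
qc.rx(theta, 0)
qc.measure(0, 0)

# 1 circuit x 2 binds -> 2 bound experiments in the resulting QasmQobj.
qobj = assemble(qc, shots=100, parameter_binds=[{theta: 0.0}, {theta: 3.14159}])
print(len(qobj.experiments))  # 2
```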
[start of qiskit/execute.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 =============================================
17 Executing Experiments (:mod:`qiskit.execute`)
18 =============================================
19
20 .. currentmodule:: qiskit.execute
21
22 .. autofunction:: execute
23 """
24 import logging
25 from time import time
26 from qiskit.compiler import transpile, assemble, schedule
27 from qiskit.qobj.utils import MeasLevel, MeasReturnType
28 from qiskit.pulse import Schedule
29 from qiskit.exceptions import QiskitError
30
31 logger = logging.getLogger(__name__)
32
33
34 def _log_submission_time(start_time, end_time):
35 log_msg = ("Total Job Submission Time - %.5f (ms)"
36 % ((end_time - start_time) * 1000))
37 logger.info(log_msg)
38
39
40 def execute(experiments, backend,
41 basis_gates=None, coupling_map=None, # circuit transpile options
42 backend_properties=None, initial_layout=None,
43 seed_transpiler=None, optimization_level=None, pass_manager=None,
44 qobj_id=None, qobj_header=None, shots=1024, # common run options
45 memory=False, max_credits=10, seed_simulator=None,
46 default_qubit_los=None, default_meas_los=None, # schedule run options
47 schedule_los=None, meas_level=MeasLevel.CLASSIFIED,
48 meas_return=MeasReturnType.AVERAGE,
49 memory_slots=None, memory_slot_size=100, rep_time=None, rep_delay=None,
50 parameter_binds=None, schedule_circuit=False, inst_map=None, meas_map=None,
51 scheduling_method=None, init_qubits=None,
52 **run_config):
53 """Execute a list of :class:`qiskit.circuit.QuantumCircuit` or
54 :class:`qiskit.pulse.Schedule` on a backend.
55
56 The execution is asynchronous, and a handle to a job instance is returned.
57
58 Args:
59 experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
60 Circuit(s) or pulse schedule(s) to execute
61
62 backend (BaseBackend):
63 Backend to execute circuits on.
64 Transpiler options are automatically grabbed from
65 backend.configuration() and backend.properties().
66 If any other option is explicitly set (e.g. coupling_map), it
67 will override the backend's.
68
69 basis_gates (list[str]):
70 List of basis gate names to unroll to.
71 e.g: ``['u1', 'u2', 'u3', 'cx']``
72 If ``None``, do not unroll.
73
74 coupling_map (CouplingMap or list): Coupling map (perhaps custom) to
75 target in mapping. Multiple formats are supported:
76
77 #. CouplingMap instance
78 #. list
79                 Must be given as an edge list (a list of qubit pairs), where
 80                 each entry specifies a two-qubit interaction supported by the backend
81 e.g:
82 ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
83
84 backend_properties (BackendProperties):
85 Properties returned by a backend, including information on gate
86 errors, readout errors, qubit coherence times, etc. Find a backend
87 that provides this information with:
88 ``backend.properties()``
89
90 initial_layout (Layout or dict or list):
91 Initial position of virtual qubits on physical qubits.
92 If this layout makes the circuit compatible with the coupling_map
93 constraints, it will be used.
94 The final layout is not guaranteed to be the same, as the transpiler
95 may permute qubits through swaps or other means.
96
97 Multiple formats are supported:
98
99 #. :class:`qiskit.transpiler.Layout` instance
100 #. ``dict``:
101 virtual to physical::
102
103 {qr[0]: 0,
104 qr[1]: 3,
105 qr[2]: 5}
106
107 physical to virtual::
108 {0: qr[0],
109 3: qr[1],
110 5: qr[2]}
111
112 #. ``list``
113 virtual to physical::
114
115 [0, 3, 5] # virtual qubits are ordered (in addition to named)
116
117 physical to virtual::
118
119 [qr[0], None, None, qr[1], None, qr[2]]
120
121 seed_transpiler (int): Sets random seed for the stochastic parts of the transpiler
122
123 optimization_level (int): How much optimization to perform on the circuits.
124 Higher levels generate more optimized circuits,
125 at the expense of longer transpilation time.
126 #. No optimization
127 #. Light optimization
128 #. Heavy optimization
129 #. Highest optimization
130 If None, level 1 will be chosen as default.
131
132 pass_manager (PassManager): The pass manager to use during transpilation. If this
133 arg is present, auto-selection of pass manager based on the transpile options
134 will be turned off and this pass manager will be used directly.
135
136 qobj_id (str): String identifier to annotate the Qobj
137
138 qobj_header (QobjHeader or dict): User input that will be inserted in Qobj header,
139 and will also be copied to the corresponding :class:`qiskit.result.Result`
140 header. Headers do not affect the run.
141
142 shots (int): Number of repetitions of each circuit, for sampling. Default: 1024
143
144 memory (bool): If True, per-shot measurement bitstrings are returned as well
145 (provided the backend supports it). For OpenPulse jobs, only
146 measurement level 2 supports this option. Default: False
147
148 max_credits (int): Maximum credits to spend on job. Default: 10
149
150 seed_simulator (int): Random seed to control sampling, for when backend is a simulator
151
152 default_qubit_los (list): List of default qubit LO frequencies in Hz
153
154 default_meas_los (list): List of default meas LO frequencies in Hz
155
156 schedule_los (None or list or dict or LoConfig): Experiment LO
157 configurations, if specified the list is in the format::
158
159 list[Union[Dict[PulseChannel, float], LoConfig]] or
160 Union[Dict[PulseChannel, float], LoConfig]
161
162 meas_level (int or MeasLevel): Set the appropriate level of the
163 measurement output for pulse experiments.
164
165 meas_return (str or MeasReturn): Level of measurement data for the
166                 backend to return. For ``meas_level`` 0 and 1:
167 ``"single"`` returns information from every shot.
168 ``"avg"`` returns average measurement output (averaged over number
169 of shots).
170
171 memory_slots (int): Number of classical memory slots used in this job.
172
173 memory_slot_size (int): Size of each memory slot if the output is Level 0.
174
175 rep_time (list[float]): Time per program execution in sec. Must be from the list provided
176 by the backend (``backend.configuration().rep_times``).
177
178 rep_delay (list[float]): Delay between programs in sec. Only supported on certain
179 backends (``backend.configuration().dynamic_reprate_enabled`` ).
180 If supported, ``rep_delay`` will be used instead of ``rep_time``. Must be from the list
181 provided by the backend (``backend.configuration().rep_delays``).
182
183 parameter_binds (list[dict]): List of Parameter bindings over which the set of
184 experiments will be executed. Each list element (bind) should be of the form
185 ``{Parameter1: value1, Parameter2: value2, ...}``. All binds will be
186 executed across all experiments, e.g. if parameter_binds is a
187                 length-n list, and there are m experiments, a total of :math:`m \times n`
188 experiments will be run (one for each experiment/bind pair).
189
190 schedule_circuit (bool): If ``True``, ``experiments`` will be converted to
191 :class:`qiskit.pulse.Schedule` objects prior to execution.
192
193 inst_map (InstructionScheduleMap):
194 Mapping of circuit operations to pulse schedules. If None, defaults to the
195 ``instruction_schedule_map`` of ``backend``.
196
197 meas_map (list(list(int))):
198 List of sets of qubits that must be measured together. If None, defaults to
199 the ``meas_map`` of ``backend``.
200
201 scheduling_method (str or list(str)):
202 Optionally specify a particular scheduling method.
203
204 init_qubits (bool): Whether to reset the qubits to the ground state for each shot.
205 Default: ``True``.
206
207 run_config (dict):
208 Extra arguments used to configure the run (e.g. for Aer configurable backends).
209 Refer to the backend documentation for details on these arguments.
210 Note: for now, these keyword arguments will both be copied to the
211 Qobj config, and passed to backend.run()
212
213 Returns:
214 BaseJob: returns job instance derived from BaseJob
215
216 Raises:
217 QiskitError: if the execution cannot be interpreted as either circuits or schedules
218
219 Example:
220 Construct a 5-qubit GHZ circuit and execute 4321 shots on a backend.
221
222 .. jupyter-execute::
223
224 from qiskit import QuantumCircuit, execute, BasicAer
225
226 backend = BasicAer.get_backend('qasm_simulator')
227
228 qc = QuantumCircuit(5, 5)
229 qc.h(0)
230 qc.cx(0, range(1, 5))
231 qc.measure_all()
232
233 job = execute(qc, backend, shots=4321)
234 """
235 if isinstance(experiments, Schedule) or (isinstance(experiments, list) and
236 isinstance(experiments[0], Schedule)):
237 # do not transpile a schedule circuit
238 if schedule_circuit:
239 raise QiskitError("Must supply QuantumCircuit to schedule circuit.")
240 elif pass_manager is not None:
241 # transpiling using pass_manager
242 _check_conflicting_argument(optimization_level=optimization_level,
243 basis_gates=basis_gates,
244 coupling_map=coupling_map,
245 seed_transpiler=seed_transpiler,
246 backend_properties=backend_properties,
247 initial_layout=initial_layout,
248 backend=backend)
249 experiments = pass_manager.run(experiments)
250 else:
251 # transpiling the circuits using given transpile options
252 experiments = transpile(experiments,
253 basis_gates=basis_gates,
254 coupling_map=coupling_map,
255 backend_properties=backend_properties,
256 initial_layout=initial_layout,
257 seed_transpiler=seed_transpiler,
258 optimization_level=optimization_level,
259 backend=backend)
260
261 if schedule_circuit:
262 experiments = schedule(circuits=experiments,
263 backend=backend,
264 inst_map=inst_map,
265 meas_map=meas_map,
266 method=scheduling_method)
267
268 # assembling the circuits into a qobj to be run on the backend
269 qobj = assemble(experiments,
270 qobj_id=qobj_id,
271 qobj_header=qobj_header,
272 shots=shots,
273 memory=memory,
274 max_credits=max_credits,
275 seed_simulator=seed_simulator,
276 default_qubit_los=default_qubit_los,
277 default_meas_los=default_meas_los,
278 schedule_los=schedule_los,
279 meas_level=meas_level,
280 meas_return=meas_return,
281 memory_slots=memory_slots,
282 memory_slot_size=memory_slot_size,
283 rep_time=rep_time,
284 rep_delay=rep_delay,
285 parameter_binds=parameter_binds,
286 backend=backend,
287 init_qubits=init_qubits,
288 **run_config)
289
290 # executing the circuits on the backend and returning the job
291 start_time = time()
292 job = backend.run(qobj, **run_config)
293 end_time = time()
294 _log_submission_time(start_time, end_time)
295 return job
296
297
298 def _check_conflicting_argument(**kargs):
299 conflicting_args = [arg for arg, value in kargs.items() if value]
300 if conflicting_args:
301         raise QiskitError("The parameter pass_manager conflicts with the following "
302 "parameter(s): {}.".format(', '.join(conflicting_args)))
303
[end of qiskit/execute.py]
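To make the ``parameter_binds`` semantics described in the docstring above concrete, here is a small illustrative snippet (not part of the repository dump; the circuit and bind values are made up): two circuits combined with three binds are expanded into six experiments.

```python
from qiskit import QuantumCircuit, execute, BasicAer
from qiskit.circuit import Parameter

theta = Parameter('theta')
qc = QuantumCircuit(1, 1)
qc.rx(theta, 0)
qc.measure(0, 0)

backend = BasicAer.get_backend('qasm_simulator')
binds = [{theta: 0.0}, {theta: 1.57}, {theta: 3.14}]

# 2 circuits x 3 binds -> 6 experiments submitted as a single job
job = execute([qc, qc], backend, shots=256, parameter_binds=binds)
print(len(job.result().results))  # 6
```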
[start of qiskit/qobj/utils.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Qobj utilities and enums."""
16
17 from enum import Enum, IntEnum
18
19 from fastjsonschema.exceptions import JsonSchemaException
20
21 from qiskit.validation.jsonschema.exceptions import SchemaValidationError
22
23
24 class QobjType(str, Enum):
25 """Qobj.type allowed values."""
26 QASM = 'QASM'
27 PULSE = 'PULSE'
28
29
30 class MeasReturnType(str, Enum):
31 """PulseQobjConfig meas_return allowed values."""
32 AVERAGE = 'avg'
33 SINGLE = 'single'
34
35
36 class MeasLevel(IntEnum):
37 """MeasLevel allowed values."""
38 RAW = 0
39 KERNELED = 1
40 CLASSIFIED = 2
41
42
43 def validate_qobj_against_schema(qobj):
44 """Validates a QObj against the .json schema.
45
46 Args:
47 qobj (Qobj): Qobj to be validated.
48
49 Raises:
50 SchemaValidationError: if the qobj fails schema validation
51 """
52 try:
53 qobj.to_dict(validate=True)
54 except JsonSchemaException as err:
55 msg = ("Qobj validation failed. Specifically path: %s failed to fulfil"
56 " %s" % (err.path, err.definition))
57 raise SchemaValidationError(msg)
58
[end of qiskit/qobj/utils.py]
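A small hypothetical helper (not in the repository) showing how the validator above surfaces the kind of failure reported in the issue below:

```python
from qiskit.qobj.utils import validate_qobj_against_schema
from qiskit.validation.jsonschema.exceptions import SchemaValidationError

def qobj_is_valid(qobj):
    """Return True if the qobj passes schema validation, else report why not."""
    try:
        validate_qobj_against_schema(qobj)
    except SchemaValidationError as err:
        print("Qobj rejected by schema validation:", err)
        return False
    return True
```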
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit
base_commit: 5f71bf2ce3d832f396ecad2ec689975064402fba
problem_statement:
pulse jobs fail schema validation
Every pulse job is failing schema validation. This happened in https://github.com/Qiskit/qiskit-terra/pull/4539.
```py
from qiskit import *
IBMQ.load_account()
p = IBMQ.get_provider(project='......')
backend = p.get_backend('ibmq_armonk')
c = QuantumCircuit(1, 1)
c.measure(0, 0)
c = transpile(c, backend)
s = schedule(c, backend)
qobj = assemble(s, backend=backend, shots=100)
job = backend.run(qobj)
```
hints_text:
To make #4539 work with our broken system around schemas (because of the lack of versioning and the backwards relationship between iqx and terra) it needs to be reverted on the qobj and assembly side except for the qobj schema changes. Then after the schema changes get included in the next release we can look at adding back the pieces that use the changes. (Note this only works because the change is an addition only and a backwards compatible type change)
Right now the schemas used by IQX rely on the released version of terra as the source of truth for the schema files and are not versioned. So until we release terra with the updated schema, IQX doesn't know about any changes made in terra and will keep using the schema from the last release.
It seems like it may be failing because `rep_time` may be empty? We could provide a dummy value if not supplied to make this pass validation temporarily? @zachschoenfeld33
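A minimal sketch of the workaround floated above (a hypothetical helper, not taken from the discussion or from the final fix), assuming the schema expects ``rep_time`` as an integer number of microseconds while the backend reports a list of allowed values in seconds:

```python
def coerce_rep_time(rep_time):
    """Normalize rep_time so the assembled qobj can pass schema validation."""
    if rep_time is None:
        return None                      # nothing to coerce; caller may fall back to a default
    if isinstance(rep_time, (list, tuple)):
        rep_time = rep_time[0]           # take the backend's first allowed value
    return int(rep_time * 1e6)           # seconds -> integer microseconds
```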
created_at: 2020-06-26T14:43:33Z
patch:
<patch>
diff --git a/qiskit/compiler/assemble.py b/qiskit/compiler/assemble.py
--- a/qiskit/compiler/assemble.py
+++ b/qiskit/compiler/assemble.py
@@ -58,7 +58,7 @@ def assemble(experiments: Union[QuantumCircuit, List[QuantumCircuit], Schedule,
meas_return: Union[str, MeasReturnType] = MeasReturnType.AVERAGE,
meas_map: Optional[List[List[Qubit]]] = None,
memory_slot_size: int = 100,
- rep_time: Optional[float] = None,
+ rep_time: Optional[int] = None,
rep_delay: Optional[float] = None,
parameter_binds: Optional[List[Dict[Parameter, float]]] = None,
parametric_pulses: Optional[List[str]] = None,
@@ -293,7 +293,7 @@ def _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq, qubit_lo_range,
"used instead, if specified.", RuntimeWarning)
if isinstance(rep_time, list):
rep_time = rep_time[0]
- rep_time = rep_time * 1e6 # convert sec to μs
+ rep_time = int(rep_time * 1e6) # convert sec to μs
rep_delay = rep_delay or getattr(backend_config, 'rep_delays', None)
if rep_delay:
diff --git a/qiskit/execute.py b/qiskit/execute.py
--- a/qiskit/execute.py
+++ b/qiskit/execute.py
@@ -172,10 +172,10 @@ def execute(experiments, backend,
memory_slot_size (int): Size of each memory slot if the output is Level 0.
- rep_time (list[float]): Time per program execution in sec. Must be from the list provided
+ rep_time (int): Time per program execution in sec. Must be from the list provided
by the backend (``backend.configuration().rep_times``).
- rep_delay (list[float]): Delay between programs in sec. Only supported on certain
+ rep_delay (float): Delay between programs in sec. Only supported on certain
backends (``backend.configuration().dynamic_reprate_enabled`` ).
If supported, ``rep_delay`` will be used instead of ``rep_time``. Must be from the list
provided by the backend (``backend.configuration().rep_delays``).
diff --git a/qiskit/qobj/pulse_qobj.py b/qiskit/qobj/pulse_qobj.py
--- a/qiskit/qobj/pulse_qobj.py
+++ b/qiskit/qobj/pulse_qobj.py
@@ -249,7 +249,7 @@ def __init__(self, meas_level, meas_return, pulse_library,
measurement driver LO's in GHz.
memory_slot_size (int): Size of each memory slot if the output is
Level 0.
- rep_time (float): Time per program execution in sec. Must be from the list provided
+ rep_time (int): Time per program execution in sec. Must be from the list provided
by the backend (``backend.configuration().rep_times``).
rep_delay (float): Delay between programs in sec. Only supported on certain
backends (``backend.configuration().dynamic_reprate_enabled``).
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pypa__pip-2237
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
We have two similar options for index security in 6.0
Currently in develop branch we have two flags, `--no-check-certificate` which globally disables TLS verification and we have `--trusted-host <foo>` which allows non HTTPS for a particular index.
The general idea behind both of these flags is the same, pip wants a valid HTTPS index and the person doesn't have one, so they tell us to allow an invalid one. I think we can and should probably just drop `--no-check-certificate` and roll that into `--trusted-host <foo>`.
</issue>
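As a rough sketch of the consolidation the issue proposes (simplified from the approach the eventual patch takes; the function name is made up), certificate verification could be disabled only for the hosts given via ``--trusted-host`` by mounting a non-verifying transport adapter for those hosts:

```python
import requests
from requests.adapters import HTTPAdapter

class InsecureHostAdapter(HTTPAdapter):
    """Adapter that skips certificate verification for selected hosts."""
    def cert_verify(self, conn, url, verify, cert):
        conn.cert_reqs = 'CERT_NONE'
        conn.ca_certs = None

def session_with_trusted_hosts(trusted_hosts):
    session = requests.Session()
    insecure = InsecureHostAdapter()
    for host in trusted_hosts:
        # Only these hosts lose verification; every other origin keeps
        # the default, fully verified HTTPS behaviour.
        session.mount("https://{}/".format(host), insecure)
    return session
```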
<code>
[start of README.rst]
1 pip
2 ===
3
4 The `PyPA recommended
5 <https://python-packaging-user-guide.readthedocs.org/en/latest/current.html>`_
6 tool for installing Python packages.
7
8 * `Installation <https://pip.pypa.io/en/latest/installing.html>`_
9 * `Documentation <https://pip.pypa.io/>`_
10 * `Changelog <https://pip.pypa.io/en/latest/news.html>`_
11 * `Github Page <https://github.com/pypa/pip>`_
12 * `Issue Tracking <https://github.com/pypa/pip/issues>`_
13 * `Mailing list <http://groups.google.com/group/python-virtualenv>`_
14 * User IRC: #pypa on Freenode.
15 * Dev IRC: #pypa-dev on Freenode.
16
17
18 .. image:: https://pypip.in/v/pip/badge.png
19 :target: https://pypi.python.org/pypi/pip
20
21 .. image:: https://secure.travis-ci.org/pypa/pip.png?branch=develop
22 :target: http://travis-ci.org/pypa/pip
23
[end of README.rst]
[start of pip/index.py]
1 """Routines related to PyPI, indexes"""
2 from __future__ import absolute_import
3
4 import logging
5 import cgi
6 import sys
7 import os
8 import re
9 import mimetypes
10 import posixpath
11 import warnings
12
13 from pip._vendor.six.moves.urllib import parse as urllib_parse
14 from pip._vendor.six.moves.urllib import request as urllib_request
15
16 from pip.compat import ipaddress
17 from pip.utils import Inf, cached_property, normalize_name, splitext
18 from pip.utils.deprecation import RemovedInPip7Warning, RemovedInPip8Warning
19 from pip.utils.logging import indent_log
20 from pip.exceptions import (
21 DistributionNotFound, BestVersionAlreadyInstalled, InvalidWheelFilename,
22 UnsupportedWheel,
23 )
24 from pip.download import url_to_path, path_to_url
25 from pip.models import PyPI
26 from pip.wheel import Wheel, wheel_ext
27 from pip.pep425tags import supported_tags, supported_tags_noarch, get_platform
28 from pip.req.req_requirement import InstallationCandidate
29 from pip._vendor import html5lib, requests, pkg_resources, six
30 from pip._vendor.packaging.version import parse as parse_version
31 from pip._vendor.requests.exceptions import SSLError
32
33
34 __all__ = ['PackageFinder']
35
36
37 # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
38 SECURE_ORIGINS = [
39 # protocol, hostname, port
40 ("https", "*", "*"),
41 ("*", "localhost", "*"),
42 ("*", "127.0.0.0/8", "*"),
43 ("*", "::1/128", "*"),
44 ("file", "*", None),
45 ]
46
47
48 logger = logging.getLogger(__name__)
49
50
51 class PackageFinder(object):
52 """This finds packages.
53
54 This is meant to match easy_install's technique for looking for
55 packages, by reading pages and looking for appropriate links
56 """
57
58 def __init__(self, find_links, index_urls,
59 use_wheel=True, allow_external=(), allow_unverified=(),
60 allow_all_external=False, allow_all_prereleases=False,
61 trusted_hosts=None, process_dependency_links=False,
62 session=None):
63 if session is None:
64 raise TypeError(
65 "PackageFinder() missing 1 required keyword argument: "
66 "'session'"
67 )
68
69 self.find_links = find_links
70 self.index_urls = index_urls
71 self.dependency_links = []
72
73 # These are boring links that have already been logged somehow:
74 self.logged_links = set()
75
76 self.use_wheel = use_wheel
77
78 # Do we allow (safe and verifiable) externally hosted files?
79 self.allow_external = set(normalize_name(n) for n in allow_external)
80
81 # Which names are allowed to install insecure and unverifiable files?
82 self.allow_unverified = set(
83 normalize_name(n) for n in allow_unverified
84 )
85
86 # Anything that is allowed unverified is also allowed external
87 self.allow_external |= self.allow_unverified
88
89 # Do we allow all (safe and verifiable) externally hosted files?
90 self.allow_all_external = allow_all_external
91
92 # Domains that we won't emit warnings for when not using HTTPS
93 self.secure_origins = [
94 ("*", host, "*")
95 for host in (trusted_hosts if trusted_hosts else [])
96 ]
97
98 # Stores if we ignored any external links so that we can instruct
99 # end users how to install them if no distributions are available
100 self.need_warn_external = False
101
102 # Stores if we ignored any unsafe links so that we can instruct
103 # end users how to install them if no distributions are available
104 self.need_warn_unverified = False
105
106 # Do we want to allow _all_ pre-releases?
107 self.allow_all_prereleases = allow_all_prereleases
108
109 # Do we process dependency links?
110 self.process_dependency_links = process_dependency_links
111
112 # The Session we'll use to make requests
113 self.session = session
114
115 def add_dependency_links(self, links):
116 # # FIXME: this shouldn't be global list this, it should only
117 # # apply to requirements of the package that specifies the
118 # # dependency_links value
119 # # FIXME: also, we should track comes_from (i.e., use Link)
120 if self.process_dependency_links:
121 warnings.warn(
122 "Dependency Links processing has been deprecated and will be "
123 "removed in a future release.",
124 RemovedInPip7Warning,
125 )
126 self.dependency_links.extend(links)
127
128 def _sort_locations(self, locations):
129 """
130 Sort locations into "files" (archives) and "urls", and return
131 a pair of lists (files,urls)
132 """
133 files = []
134 urls = []
135
136 # puts the url for the given file path into the appropriate list
137 def sort_path(path):
138 url = path_to_url(path)
139 if mimetypes.guess_type(url, strict=False)[0] == 'text/html':
140 urls.append(url)
141 else:
142 files.append(url)
143
144 for url in locations:
145
146 is_local_path = os.path.exists(url)
147 is_file_url = url.startswith('file:')
148 is_find_link = url in self.find_links
149
150 if is_local_path or is_file_url:
151 if is_local_path:
152 path = url
153 else:
154 path = url_to_path(url)
155 if is_find_link and os.path.isdir(path):
156 path = os.path.realpath(path)
157 for item in os.listdir(path):
158 sort_path(os.path.join(path, item))
159 elif is_file_url and os.path.isdir(path):
160 urls.append(url)
161 elif os.path.isfile(path):
162 sort_path(path)
163 else:
164 urls.append(url)
165
166 return files, urls
167
168 def _candidate_sort_key(self, candidate):
169 """
170 Function used to generate link sort key for link tuples.
171 The greater the return value, the more preferred it is.
172 If not finding wheels, then sorted by version only.
173 If finding wheels, then the sort order is by version, then:
174 1. existing installs
175 2. wheels ordered via Wheel.support_index_min()
176 3. source archives
177 Note: it was considered to embed this logic into the Link
178 comparison operators, but then different sdist links
179 with the same version, would have to be considered equal
180 """
181 if self.use_wheel:
182 support_num = len(supported_tags)
183 if candidate.location == INSTALLED_VERSION:
184 pri = 1
185 elif candidate.location.ext == wheel_ext:
186 # can raise InvalidWheelFilename
187 wheel = Wheel(candidate.location.filename)
188 if not wheel.supported():
189 raise UnsupportedWheel(
190 "%s is not a supported wheel for this platform. It "
191 "can't be sorted." % wheel.filename
192 )
193 pri = -(wheel.support_index_min())
194 else: # sdist
195 pri = -(support_num)
196 return (candidate.version, pri)
197 else:
198 return candidate.version
199
200 def _sort_versions(self, applicable_versions):
201 """
202 Bring the latest version (and wheels) to the front, but maintain the
203 existing ordering as secondary. See the docstring for `_link_sort_key`
204 for details. This function is isolated for easier unit testing.
205 """
206 return sorted(
207 applicable_versions,
208 key=self._candidate_sort_key,
209 reverse=True
210 )
211
212 def _validate_secure_origin(self, logger, location):
213 # Determine if this url used a secure transport mechanism
214 parsed = urllib_parse.urlparse(str(location))
215 origin = (parsed.scheme, parsed.hostname, parsed.port)
216
217 # Determine if our origin is a secure origin by looking through our
218 # hardcoded list of secure origins, as well as any additional ones
219 # configured on this PackageFinder instance.
220 for secure_origin in (SECURE_ORIGINS + self.secure_origins):
221 # Check to see if the protocol matches
222 if origin[0] != secure_origin[0] and secure_origin[0] != "*":
223 continue
224
225 try:
226 # We need to do this decode dance to ensure that we have a
227 # unicode object, even on Python 2.x.
228 addr = ipaddress.ip_address(
229 origin[1]
230 if (
231 isinstance(origin[1], six.text_type)
232 or origin[1] is None
233 )
234 else origin[1].decode("utf8")
235 )
236 network = ipaddress.ip_network(
237 secure_origin[1]
238 if isinstance(secure_origin[1], six.text_type)
239 else secure_origin[1].decode("utf8")
240 )
241 except ValueError:
242 # We don't have both a valid address or a valid network, so
243 # we'll check this origin against hostnames.
244 if origin[1] != secure_origin[1] and secure_origin[1] != "*":
245 continue
246 else:
247 # We have a valid address and network, so see if the address
248 # is contained within the network.
249 if addr not in network:
250 continue
251
252         # Check to see if the port matches
253 if (origin[2] != secure_origin[2]
254 and secure_origin[2] != "*"
255 and secure_origin[2] is not None):
256 continue
257
258 # If we've gotten here, then this origin matches the current
259 # secure origin and we should break out of the loop and continue
260 # on.
261 break
262 else:
263 # If the loop successfully completed without a break, that means
264 # that the origin we are testing is not a secure origin.
265 logger.warning(
266                 "This repository located at %s is not a trusted host. If "
267                 "this repository is available via HTTPS it is recommended to "
268                 "use HTTPS instead; otherwise you may silence this warning "
269 "with '--trusted-host %s'.",
270 parsed.hostname,
271 parsed.hostname,
272 )
273
274 warnings.warn(
275 "Implicitly allowing locations which are not hosted at a "
276 "secure origin is deprecated and will require the use of "
277 "--trusted-host in the future.",
278 RemovedInPip7Warning,
279 )
280
281 def find_requirement(self, req, upgrade):
282
283 def mkurl_pypi_url(url):
284 loc = posixpath.join(url, url_name)
285 # For maximum compatibility with easy_install, ensure the path
286 # ends in a trailing slash. Although this isn't in the spec
287 # (and PyPI can handle it without the slash) some other index
288 # implementations might break if they relied on easy_install's
289 # behavior.
290 if not loc.endswith('/'):
291 loc = loc + '/'
292 return loc
293
294 url_name = req.url_name
295
296 # Only check main index if index URL is given:
297 main_index_url = None
298 if self.index_urls:
299 # Check that we have the url_name correctly spelled:
300 main_index_url = Link(
301 mkurl_pypi_url(self.index_urls[0]),
302 trusted=True,
303 )
304
305 page = self._get_page(main_index_url, req)
306 if page is None and PyPI.netloc not in str(main_index_url):
307 warnings.warn(
308 "Failed to find %r at %s. It is suggested to upgrade "
309 "your index to support normalized names as the name in "
310 "/simple/{name}." % (req.name, main_index_url),
311 RemovedInPip8Warning,
312 )
313
314 url_name = self._find_url_name(
315 Link(self.index_urls[0], trusted=True),
316 url_name, req
317 ) or req.url_name
318
319 if url_name is not None:
320 locations = [
321 mkurl_pypi_url(url)
322 for url in self.index_urls] + self.find_links
323 else:
324 locations = list(self.find_links)
325
326 file_locations, url_locations = self._sort_locations(locations)
327 _flocations, _ulocations = self._sort_locations(self.dependency_links)
328 file_locations.extend(_flocations)
329
330 # We trust every url that the user has given us whether it was given
331 # via --index-url or --find-links
332 locations = [Link(url, trusted=True) for url in url_locations]
333
334 # We explicitly do not trust links that came from dependency_links
335 locations.extend([Link(url) for url in _ulocations])
336
337 logger.debug('URLs to search for versions for %s:', req)
338 for location in locations:
339 logger.debug('* %s', location)
340 self._validate_secure_origin(logger, location)
341
342 found_versions = []
343 found_versions.extend(
344 self._package_versions(
345 # We trust every directly linked archive in find_links
346 [Link(url, '-f', trusted=True) for url in self.find_links],
347 req.name.lower()
348 )
349 )
350 page_versions = []
351 for page in self._get_pages(locations, req):
352 logger.debug('Analyzing links from page %s', page.url)
353 with indent_log():
354 page_versions.extend(
355 self._package_versions(page.links, req.name.lower())
356 )
357 dependency_versions = list(self._package_versions(
358 [Link(url) for url in self.dependency_links], req.name.lower()))
359 if dependency_versions:
360 logger.debug(
361 'dependency_links found: %s',
362 ', '.join([
363 link.url for p, link, version in dependency_versions
364 ])
365 )
366 file_versions = list(
367 self._package_versions(
368 [Link(url) for url in file_locations],
369 req.name.lower()
370 )
371 )
372 if (not found_versions
373 and not page_versions
374 and not dependency_versions
375 and not file_versions):
376 logger.critical(
377 'Could not find any downloads that satisfy the requirement %s',
378 req,
379 )
380
381 if self.need_warn_external:
382 logger.warning(
383 "Some externally hosted files were ignored as access to "
384 "them may be unreliable (use --allow-external %s to "
385 "allow).",
386 req.name,
387 )
388
389 if self.need_warn_unverified:
390 logger.warning(
391 "Some insecure and unverifiable files were ignored"
392 " (use --allow-unverified %s to allow).",
393 req.name,
394 )
395
396 raise DistributionNotFound(
397 'No distributions at all found for %s' % req
398 )
399 installed_version = []
400 if req.satisfied_by is not None:
401 installed_version = [
402 InstallationCandidate(
403 req.name,
404 req.satisfied_by.version,
405 INSTALLED_VERSION,
406 ),
407 ]
408 if file_versions:
409 file_versions.sort(reverse=True)
410 logger.debug(
411 'Local files found: %s',
412 ', '.join([
413 url_to_path(candidate.location.url)
414 for candidate in file_versions
415 ])
416 )
417
418 # This is an intentional priority ordering
419 all_versions = (
420 file_versions + found_versions + page_versions
421 + dependency_versions
422 )
423
424 # Filter out anything which doesn't match our specifier
425 _versions = set(
426 req.specifier.filter(
427 [x.version for x in all_versions],
428 prereleases=(
429 self.allow_all_prereleases
430 if self.allow_all_prereleases else None
431 ),
432 )
433 )
434 all_versions = [x for x in all_versions if x.version in _versions]
435
436 # Finally add our existing versions to the front of our versions.
437 applicable_versions = installed_version + all_versions
438
439 applicable_versions = self._sort_versions(applicable_versions)
440 existing_applicable = any(
441 i.location is INSTALLED_VERSION
442 for i in applicable_versions
443 )
444
445 if not upgrade and existing_applicable:
446 if applicable_versions[0].location is INSTALLED_VERSION:
447 logger.debug(
448 'Existing installed version (%s) is most up-to-date and '
449 'satisfies requirement',
450 req.satisfied_by.version,
451 )
452 else:
453 logger.debug(
454 'Existing installed version (%s) satisfies requirement '
455 '(most up-to-date version is %s)',
456 req.satisfied_by.version,
457 applicable_versions[0][2],
458 )
459 return None
460
461 if not applicable_versions:
462 logger.critical(
463 'Could not find a version that satisfies the requirement %s '
464 '(from versions: %s)',
465 req,
466 ', '.join(
467 sorted(
468 set(str(i.version) for i in all_versions),
469 key=parse_version,
470 )
471 )
472 )
473
474 if self.need_warn_external:
475 logger.warning(
476 "Some externally hosted files were ignored as access to "
477 "them may be unreliable (use --allow-external to allow)."
478 )
479
480 if self.need_warn_unverified:
481 logger.warning(
482 "Some insecure and unverifiable files were ignored"
483 " (use --allow-unverified %s to allow).",
484 req.name,
485 )
486
487 raise DistributionNotFound(
488 'No distributions matching the version for %s' % req
489 )
490
491 if applicable_versions[0].location is INSTALLED_VERSION:
492             # We have an existing version, and it's the best version
493 logger.debug(
494                 'Installed version (%s) is most up-to-date (past versions: '
495 '%s)',
496 req.satisfied_by.version,
497 ', '.join(str(i.version) for i in applicable_versions[1:])
498 or "none",
499 )
500 raise BestVersionAlreadyInstalled
501
502 if len(applicable_versions) > 1:
503 logger.debug(
504 'Using version %s (newest of versions: %s)',
505 applicable_versions[0].version,
506 ', '.join(str(i.version) for i in applicable_versions)
507 )
508
509 selected_version = applicable_versions[0].location
510
511 if (selected_version.verifiable is not None
512 and not selected_version.verifiable):
513 logger.warning(
514 "%s is potentially insecure and unverifiable.", req.name,
515 )
516
517 if selected_version._deprecated_regex:
518 warnings.warn(
519 "%s discovered using a deprecated method of parsing, in the "
520 "future it will no longer be discovered." % req.name,
521 RemovedInPip7Warning,
522 )
523
524 return selected_version
525
526 def _find_url_name(self, index_url, url_name, req):
527 """
528 Finds the true URL name of a package, when the given name isn't quite
529 correct.
530 This is usually used to implement case-insensitivity.
531 """
532 if not index_url.url.endswith('/'):
533 # Vaguely part of the PyPI API... weird but true.
534 # FIXME: bad to modify this?
535 index_url.url += '/'
536 page = self._get_page(index_url, req)
537 if page is None:
538 logger.critical('Cannot fetch index base URL %s', index_url)
539 return
540 norm_name = normalize_name(req.url_name)
541 for link in page.links:
542 base = posixpath.basename(link.path.rstrip('/'))
543 if norm_name == normalize_name(base):
544 logger.debug(
545 'Real name of requirement %s is %s', url_name, base,
546 )
547 return base
548 return None
549
550 def _get_pages(self, locations, req):
551 """
552 Yields (page, page_url) from the given locations, skipping
553 locations that have errors, and adding download/homepage links
554 """
555 all_locations = list(locations)
556 seen = set()
557
558 while all_locations:
559 location = all_locations.pop(0)
560 if location in seen:
561 continue
562 seen.add(location)
563
564 page = self._get_page(location, req)
565 if page is None:
566 continue
567
568 yield page
569
570 for link in page.rel_links():
571 normalized = normalize_name(req.name).lower()
572
573 if (normalized not in self.allow_external
574 and not self.allow_all_external):
575 self.need_warn_external = True
576 logger.debug(
577 "Not searching %s for files because external "
578 "urls are disallowed.",
579 link,
580 )
581 continue
582
583 if (link.trusted is not None
584 and not link.trusted
585 and normalized not in self.allow_unverified):
586 logger.debug(
587 "Not searching %s for urls, it is an "
588 "untrusted link and cannot produce safe or "
589 "verifiable files.",
590 link,
591 )
592 self.need_warn_unverified = True
593 continue
594
595 all_locations.append(link)
596
597 _egg_fragment_re = re.compile(r'#egg=([^&]*)')
598 _egg_info_re = re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)
599 _py_version_re = re.compile(r'-py([123]\.?[0-9]?)$')
600
601 def _sort_links(self, links):
602 """
603 Returns elements of links in order, non-egg links first, egg links
604 second, while eliminating duplicates
605 """
606 eggs, no_eggs = [], []
607 seen = set()
608 for link in links:
609 if link not in seen:
610 seen.add(link)
611 if link.egg_fragment:
612 eggs.append(link)
613 else:
614 no_eggs.append(link)
615 return no_eggs + eggs
616
617 def _package_versions(self, links, search_name):
618 for link in self._sort_links(links):
619 v = self._link_package_versions(link, search_name)
620 if v is not None:
621 yield v
622
623 def _known_extensions(self):
624 extensions = ('.tar.gz', '.tar.bz2', '.tar', '.tgz', '.zip')
625 if self.use_wheel:
626 return extensions + (wheel_ext,)
627 return extensions
628
629 def _link_package_versions(self, link, search_name):
630 """
631 Return an iterable of triples (pkg_resources_version_key,
632 link, python_version) that can be extracted from the given
633 link.
634
635 Meant to be overridden by subclasses, not called by clients.
636 """
637 platform = get_platform()
638
639 version = None
640 if link.egg_fragment:
641 egg_info = link.egg_fragment
642 else:
643 egg_info, ext = link.splitext()
644 if not ext:
645 if link not in self.logged_links:
646 logger.debug('Skipping link %s; not a file', link)
647 self.logged_links.add(link)
648 return
649 if egg_info.endswith('.tar'):
650 # Special double-extension case:
651 egg_info = egg_info[:-4]
652 ext = '.tar' + ext
653 if ext not in self._known_extensions():
654 if link not in self.logged_links:
655 logger.debug(
656 'Skipping link %s; unknown archive format: %s',
657 link,
658 ext,
659 )
660 self.logged_links.add(link)
661 return
662 if "macosx10" in link.path and ext == '.zip':
663 if link not in self.logged_links:
664 logger.debug('Skipping link %s; macosx10 one', link)
665 self.logged_links.add(link)
666 return
667 if ext == wheel_ext:
668 try:
669 wheel = Wheel(link.filename)
670 except InvalidWheelFilename:
671 logger.debug(
672 'Skipping %s because the wheel filename is invalid',
673 link
674 )
675 return
676 if (pkg_resources.safe_name(wheel.name).lower()
677 != pkg_resources.safe_name(search_name).lower()):
678 logger.debug(
679 'Skipping link %s; wrong project name (not %s)',
680 link,
681 search_name,
682 )
683 return
684 if not wheel.supported():
685 logger.debug(
686 'Skipping %s because it is not compatible with this '
687 'Python',
688 link,
689 )
690 return
691 # This is a dirty hack to prevent installing Binary Wheels from
692 # PyPI unless it is a Windows or Mac Binary Wheel. This is
693 # paired with a change to PyPI disabling uploads for the
694 # same. Once we have a mechanism for enabling support for
695 # binary wheels on linux that deals with the inherent problems
696 # of binary distribution this can be removed.
697 comes_from = getattr(link, "comes_from", None)
698 if (
699 (
700 not platform.startswith('win')
701 and not platform.startswith('macosx')
702 and not platform == 'cli'
703 )
704 and comes_from is not None
705 and urllib_parse.urlparse(
706 comes_from.url
707 ).netloc.endswith(PyPI.netloc)):
708 if not wheel.supported(tags=supported_tags_noarch):
709 logger.debug(
710 "Skipping %s because it is a pypi-hosted binary "
711 "Wheel on an unsupported platform",
712 link,
713 )
714 return
715 version = wheel.version
716
717 if not version:
718 version = self._egg_info_matches(egg_info, search_name, link)
719 if version is None:
720 logger.debug(
721 'Skipping link %s; wrong project name (not %s)',
722 link,
723 search_name,
724 )
725 return
726
727 if (link.internal is not None
728 and not link.internal
729 and not normalize_name(search_name).lower()
730 in self.allow_external
731 and not self.allow_all_external):
732 # We have a link that we are sure is external, so we should skip
733 # it unless we are allowing externals
734 logger.debug("Skipping %s because it is externally hosted.", link)
735 self.need_warn_external = True
736 return
737
738 if (link.verifiable is not None
739 and not link.verifiable
740 and not (normalize_name(search_name).lower()
741 in self.allow_unverified)):
742 # We have a link that we are sure we cannot verify its integrity,
743 # so we should skip it unless we are allowing unsafe installs
744 # for this requirement.
745 logger.debug(
746 "Skipping %s because it is an insecure and unverifiable file.",
747 link,
748 )
749 self.need_warn_unverified = True
750 return
751
752 match = self._py_version_re.search(version)
753 if match:
754 version = version[:match.start()]
755 py_version = match.group(1)
756 if py_version != sys.version[:3]:
757 logger.debug(
758 'Skipping %s because Python version is incorrect', link
759 )
760 return
761 logger.debug('Found link %s, version: %s', link, version)
762
763 return InstallationCandidate(search_name, version, link)
764
765 def _egg_info_matches(self, egg_info, search_name, link):
766 match = self._egg_info_re.search(egg_info)
767 if not match:
768 logger.debug('Could not parse version from link: %s', link)
769 return None
770 name = match.group(0).lower()
771 # To match the "safe" name that pkg_resources creates:
772 name = name.replace('_', '-')
773 # project name and version must be separated by a dash
774 look_for = search_name.lower() + "-"
775 if name.startswith(look_for):
776 return match.group(0)[len(look_for):]
777 else:
778 return None
779
780 def _get_page(self, link, req):
781 return HTMLPage.get_page(link, req, session=self.session)
782
783
784 class HTMLPage(object):
785 """Represents one page, along with its URL"""
786
787 # FIXME: these regexes are horrible hacks:
788 _homepage_re = re.compile(b'<th>\\s*home\\s*page', re.I)
789 _download_re = re.compile(b'<th>\\s*download\\s+url', re.I)
790 _href_re = re.compile(
791 b'href=(?:"([^"]*)"|\'([^\']*)\'|([^>\\s\\n]*))',
792 re.I | re.S
793 )
794
795 def __init__(self, content, url, headers=None, trusted=None):
796 # Determine if we have any encoding information in our headers
797 encoding = None
798 if headers and "Content-Type" in headers:
799 content_type, params = cgi.parse_header(headers["Content-Type"])
800
801 if "charset" in params:
802 encoding = params['charset']
803
804 self.content = content
805 self.parsed = html5lib.parse(
806 self.content,
807 encoding=encoding,
808 namespaceHTMLElements=False,
809 )
810 self.url = url
811 self.headers = headers
812 self.trusted = trusted
813
814 def __str__(self):
815 return self.url
816
817 @classmethod
818 def get_page(cls, link, req, skip_archives=True, session=None):
819 if session is None:
820 raise TypeError(
821 "get_page() missing 1 required keyword argument: 'session'"
822 )
823
824 url = link.url
825 url = url.split('#', 1)[0]
826
827 # Check for VCS schemes that do not support lookup as web pages.
828 from pip.vcs import VcsSupport
829 for scheme in VcsSupport.schemes:
830 if url.lower().startswith(scheme) and url[len(scheme)] in '+:':
831 logger.debug('Cannot look at %s URL %s', scheme, link)
832 return None
833
834 try:
835 if skip_archives:
836 filename = link.filename
837 for bad_ext in ['.tar', '.tar.gz', '.tar.bz2', '.tgz', '.zip']:
838 if filename.endswith(bad_ext):
839 content_type = cls._get_content_type(
840 url, session=session,
841 )
842 if content_type.lower().startswith('text/html'):
843 break
844 else:
845 logger.debug(
846 'Skipping page %s because of Content-Type: %s',
847 link,
848 content_type,
849 )
850 return
851
852 logger.debug('Getting page %s', url)
853
854 # Tack index.html onto file:// URLs that point to directories
855 (scheme, netloc, path, params, query, fragment) = \
856 urllib_parse.urlparse(url)
857 if (scheme == 'file'
858 and os.path.isdir(urllib_request.url2pathname(path))):
859 # add trailing slash if not present so urljoin doesn't trim
860 # final segment
861 if not url.endswith('/'):
862 url += '/'
863 url = urllib_parse.urljoin(url, 'index.html')
864 logger.debug(' file: URL is directory, getting %s', url)
865
866 resp = session.get(
867 url,
868 headers={
869 "Accept": "text/html",
870 "Cache-Control": "max-age=600",
871 },
872 )
873 resp.raise_for_status()
874
875 # The check for archives above only works if the url ends with
876 # something that looks like an archive. However that is not a
877             # requirement of a URL. Unless we issue a HEAD request on every
878 # url we cannot know ahead of time for sure if something is HTML
879 # or not. However we can check after we've downloaded it.
880 content_type = resp.headers.get('Content-Type', 'unknown')
881 if not content_type.lower().startswith("text/html"):
882 logger.debug(
883 'Skipping page %s because of Content-Type: %s',
884 link,
885 content_type,
886 )
887 return
888
889 inst = cls(
890 resp.content, resp.url, resp.headers,
891 trusted=link.trusted,
892 )
893 except requests.HTTPError as exc:
894 level = 2 if exc.response.status_code == 404 else 1
895 cls._handle_fail(req, link, exc, url, level=level)
896 except requests.ConnectionError as exc:
897 cls._handle_fail(
898 req, link, "connection error: %s" % exc, url,
899 )
900 except requests.Timeout:
901 cls._handle_fail(req, link, "timed out", url)
902 except SSLError as exc:
903 reason = ("There was a problem confirming the ssl certificate: "
904 "%s" % exc)
905 cls._handle_fail(
906 req, link, reason, url,
907 level=2,
908 meth=logger.info,
909 )
910 else:
911 return inst
912
913 @staticmethod
914 def _handle_fail(req, link, reason, url, level=1, meth=None):
915 if meth is None:
916 meth = logger.debug
917
918 meth("Could not fetch URL %s: %s", link, reason)
919 meth("Will skip URL %s when looking for download links for %s" %
920 (link.url, req))
921
922 @staticmethod
923 def _get_content_type(url, session):
924 """Get the Content-Type of the given url, using a HEAD request"""
925 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url)
926 if scheme not in ('http', 'https'):
927 # FIXME: some warning or something?
928 # assertion error?
929 return ''
930
931 resp = session.head(url, allow_redirects=True)
932 resp.raise_for_status()
933
934 return resp.headers.get("Content-Type", "")
935
936 @cached_property
937 def api_version(self):
938 metas = [
939 x for x in self.parsed.findall(".//meta")
940 if x.get("name", "").lower() == "api-version"
941 ]
942 if metas:
943 try:
944 return int(metas[0].get("value", None))
945 except (TypeError, ValueError):
946 pass
947
948 return None
949
950 @cached_property
951 def base_url(self):
952 bases = [
953 x for x in self.parsed.findall(".//base")
954 if x.get("href") is not None
955 ]
956 if bases and bases[0].get("href"):
957 return bases[0].get("href")
958 else:
959 return self.url
960
961 @property
962 def links(self):
963 """Yields all links in the page"""
964 for anchor in self.parsed.findall(".//a"):
965 if anchor.get("href"):
966 href = anchor.get("href")
967 url = self.clean_link(
968 urllib_parse.urljoin(self.base_url, href)
969 )
970
971 # Determine if this link is internal. If that distinction
972 # doesn't make sense in this context, then we don't make
973 # any distinction.
974 internal = None
975 if self.api_version and self.api_version >= 2:
976 # Only api_versions >= 2 have a distinction between
977 # external and internal links
978 internal = bool(
979 anchor.get("rel")
980 and "internal" in anchor.get("rel").split()
981 )
982
983 yield Link(url, self, internal=internal)
984
985 def rel_links(self):
986 for url in self.explicit_rel_links():
987 yield url
988 for url in self.scraped_rel_links():
989 yield url
990
991 def explicit_rel_links(self, rels=('homepage', 'download')):
992 """Yields all links with the given relations"""
993 rels = set(rels)
994
995 for anchor in self.parsed.findall(".//a"):
996 if anchor.get("rel") and anchor.get("href"):
997 found_rels = set(anchor.get("rel").split())
998 # Determine the intersection between what rels were found and
999 # what rels were being looked for
1000 if found_rels & rels:
1001 href = anchor.get("href")
1002 url = self.clean_link(
1003 urllib_parse.urljoin(self.base_url, href)
1004 )
1005 yield Link(url, self, trusted=False)
1006
1007 def scraped_rel_links(self):
1008 # Can we get rid of this horrible horrible method?
1009 for regex in (self._homepage_re, self._download_re):
1010 match = regex.search(self.content)
1011 if not match:
1012 continue
1013 href_match = self._href_re.search(self.content, pos=match.end())
1014 if not href_match:
1015 continue
1016 url = (
1017 href_match.group(1)
1018 or href_match.group(2)
1019 or href_match.group(3)
1020 )
1021 if not url:
1022 continue
1023 try:
1024 url = url.decode("ascii")
1025 except UnicodeDecodeError:
1026 continue
1027 url = self.clean_link(urllib_parse.urljoin(self.base_url, url))
1028 yield Link(url, self, trusted=False, _deprecated_regex=True)
1029
1030 _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I)
1031
1032 def clean_link(self, url):
1033 """Makes sure a link is fully encoded. That is, if a ' ' shows up in
1034 the link, it will be rewritten to %20 (while not over-quoting
1035 % or other characters)."""
1036 return self._clean_re.sub(
1037 lambda match: '%%%2x' % ord(match.group(0)), url)
1038
1039
1040 class Link(object):
1041
1042 def __init__(self, url, comes_from=None, internal=None, trusted=None,
1043 _deprecated_regex=False):
1044
1045 # url can be a UNC windows share
1046 if url != Inf and url.startswith('\\\\'):
1047 url = path_to_url(url)
1048
1049 self.url = url
1050 self.comes_from = comes_from
1051 self.internal = internal
1052 self.trusted = trusted
1053 self._deprecated_regex = _deprecated_regex
1054
1055 def __str__(self):
1056 if self.comes_from:
1057 return '%s (from %s)' % (self.url, self.comes_from)
1058 else:
1059 return str(self.url)
1060
1061 def __repr__(self):
1062 return '<Link %s>' % self
1063
1064 def __eq__(self, other):
1065 if not isinstance(other, Link):
1066 return NotImplemented
1067 return self.url == other.url
1068
1069 def __ne__(self, other):
1070 if not isinstance(other, Link):
1071 return NotImplemented
1072 return self.url != other.url
1073
1074 def __lt__(self, other):
1075 if not isinstance(other, Link):
1076 return NotImplemented
1077 return self.url < other.url
1078
1079 def __le__(self, other):
1080 if not isinstance(other, Link):
1081 return NotImplemented
1082 return self.url <= other.url
1083
1084 def __gt__(self, other):
1085 if not isinstance(other, Link):
1086 return NotImplemented
1087 return self.url > other.url
1088
1089 def __ge__(self, other):
1090 if not isinstance(other, Link):
1091 return NotImplemented
1092 return self.url >= other.url
1093
1094 def __hash__(self):
1095 return hash(self.url)
1096
1097 @property
1098 def filename(self):
1099 _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)
1100 name = posixpath.basename(path.rstrip('/')) or netloc
1101 name = urllib_parse.unquote(name)
1102 assert name, ('URL %r produced no filename' % self.url)
1103 return name
1104
1105 @property
1106 def scheme(self):
1107 return urllib_parse.urlsplit(self.url)[0]
1108
1109 @property
1110 def netloc(self):
1111 return urllib_parse.urlsplit(self.url)[1]
1112
1113 @property
1114 def path(self):
1115 return urllib_parse.urlsplit(self.url)[2]
1116
1117 def splitext(self):
1118 return splitext(posixpath.basename(self.path.rstrip('/')))
1119
1120 @property
1121 def ext(self):
1122 return self.splitext()[1]
1123
1124 @property
1125 def url_without_fragment(self):
1126 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(self.url)
1127 return urllib_parse.urlunsplit((scheme, netloc, path, query, None))
1128
1129 _egg_fragment_re = re.compile(r'#egg=([^&]*)')
1130
1131 @property
1132 def egg_fragment(self):
1133 match = self._egg_fragment_re.search(self.url)
1134 if not match:
1135 return None
1136 return match.group(1)
1137
1138 _hash_re = re.compile(
1139 r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)'
1140 )
1141
1142 @property
1143 def hash(self):
1144 match = self._hash_re.search(self.url)
1145 if match:
1146 return match.group(2)
1147 return None
1148
1149 @property
1150 def hash_name(self):
1151 match = self._hash_re.search(self.url)
1152 if match:
1153 return match.group(1)
1154 return None
1155
1156 @property
1157 def show_url(self):
1158 return posixpath.basename(self.url.split('#', 1)[0].split('?', 1)[0])
1159
1160 @property
1161 def verifiable(self):
1162 """
1163 Returns True if this link can be verified after download, False if it
1164 cannot, and None if we cannot determine.
1165 """
1166 trusted = self.trusted or getattr(self.comes_from, "trusted", None)
1167 if trusted is not None and trusted:
1168 # This link came from a trusted source. It *may* be verifiable but
1169 # first we need to see if this page is operating under the new
1170 # API version.
1171 try:
1172 api_version = getattr(self.comes_from, "api_version", None)
1173 api_version = int(api_version)
1174 except (ValueError, TypeError):
1175 api_version = None
1176
1177 if api_version is None or api_version <= 1:
1178                 # This link is either trusted, or it came from a trusted source,
1179 # however it is not operating under the API version 2 so
1180 # we can't make any claims about if it's safe or not
1181 return
1182
1183 if self.hash:
1184 # This link came from a trusted source and it has a hash, so we
1185 # can consider it safe.
1186 return True
1187 else:
1188 # This link came from a trusted source, using the new API
1189 # version, and it does not have a hash. It is NOT verifiable
1190 return False
1191 elif trusted is not None:
1192 # This link came from an untrusted source and we cannot trust it
1193 return False
1194
1195
1196 # An object to represent the "link" for the installed version of a requirement.
1197 # Using Inf as the url makes it sort higher.
1198 INSTALLED_VERSION = Link(Inf)
1199
[end of pip/index.py]
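For illustration only (hypothetical index URL and host; a plain ``requests.Session`` stands in for pip's ``PipSession``), this is how a ``--trusted-host`` entry reaches the secure-origin check implemented above:

```python
import requests
from pip.index import PackageFinder

finder = PackageFinder(
    find_links=[],
    index_urls=["http://pypi.internal.example/simple/"],  # plain-HTTP index
    trusted_hosts=["pypi.internal.example"],              # becomes ("*", host, "*") in secure_origins
    session=requests.Session(),
)
# _validate_secure_origin() now treats pypi.internal.example as a secure
# origin and will not emit the "not a trusted host" warning for it.
```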
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pypa/pip
base_commit: 962ffa876e5361e584b0845c7e4728f357a3b546
problem_statement:
We have two similar options for index security in 6.0
Currently in develop branch we have two flags, `--no-check-certificate` which globally disables TLS verification and we have `--trusted-host <foo>` which allows non HTTPS for a particular index.
The general idea behind both of these flags is the same, pip wants a valid HTTPS index and the person doesn't have one, so they tell us to allow an invalid one. I think we can and should probably just drop `--no-check-certificate` and roll that into `--trusted-host <foo>`.
created_at: 2014-12-20T22:43:07Z
patch:
<patch>
diff --git a/pip/__init__.py b/pip/__init__.py
--- a/pip/__init__.py
+++ b/pip/__init__.py
@@ -16,6 +16,9 @@
from pip.baseparser import ConfigOptionParser, UpdatingDefaultsHelpFormatter
from pip.commands import get_summaries, get_similar_commands
from pip.commands import commands_dict
+from pip._vendor.requests.packages.urllib3.exceptions import (
+ InsecureRequestWarning,
+)
# assignment for flake8 to be happy
@@ -32,6 +35,9 @@
logger = logging.getLogger(__name__)
+# Hide the InsecureRequestWarning from urllib3
+warnings.filterwarnings("ignore", category=InsecureRequestWarning)
+
def autocomplete():
"""Command and option completion for the main option parser (and options)
diff --git a/pip/basecommand.py b/pip/basecommand.py
--- a/pip/basecommand.py
+++ b/pip/basecommand.py
@@ -70,13 +70,12 @@ def _build_session(self, options):
if options.cache_dir else None
),
retries=options.retries,
+ insecure_hosts=options.trusted_hosts,
)
# Handle custom ca-bundles from the user
if options.cert:
session.verify = options.cert
- elif options.no_check_certificate:
- session.verify = False
# Handle SSL client certificate
if options.client_cert:
diff --git a/pip/cmdoptions.py b/pip/cmdoptions.py
--- a/pip/cmdoptions.py
+++ b/pip/cmdoptions.py
@@ -181,14 +181,6 @@ def make(self):
help="Path to SSL client certificate, a single file containing the "
"private key and the certificate in PEM format.")
-no_check_certificate = OptionMaker(
- "--no-check-certificate",
- dest="no_check_certificate",
- action="store_true",
- default=False,
- help="Don't validate SSL certificates.",
-)
-
index_url = OptionMaker(
'-i', '--index-url', '--pypi-url',
dest='index_url',
@@ -260,7 +252,9 @@ def make(self):
dest="trusted_hosts",
action="append",
metavar="HOSTNAME",
- help="Mark this host as trusted, even though it does not have HTTPS.",
+ default=[],
+ help="Mark this host as trusted, even though it does not have valid or "
+ "any HTTPS.",
)
# Remove after 7.0
@@ -436,9 +430,9 @@ def make(self):
default_vcs,
skip_requirements_regex,
exists_action,
+ trusted_host,
cert,
client_cert,
- no_check_certificate,
cache_dir,
no_cache,
disable_pip_version_check,
@@ -456,7 +450,6 @@ def make(self):
mirrors,
allow_external,
allow_all_external,
- trusted_host,
no_allow_external,
allow_unsafe,
no_allow_unsafe,
diff --git a/pip/download.py b/pip/download.py
--- a/pip/download.py
+++ b/pip/download.py
@@ -253,6 +253,13 @@ def delete(self, *args, **kwargs):
pass
+class InsecureHTTPAdapter(HTTPAdapter):
+
+ def cert_verify(self, conn, url, verify, cert):
+ conn.cert_reqs = 'CERT_NONE'
+ conn.ca_certs = None
+
+
class PipSession(requests.Session):
timeout = None
@@ -260,6 +267,7 @@ class PipSession(requests.Session):
def __init__(self, *args, **kwargs):
retries = kwargs.pop("retries", 0)
cache = kwargs.pop("cache", None)
+ insecure_hosts = kwargs.pop("insecure_hosts", [])
super(PipSession, self).__init__(*args, **kwargs)
@@ -287,20 +295,35 @@ def __init__(self, *args, **kwargs):
backoff_factor=0.25,
)
+ # We want to _only_ cache responses on securely fetched origins. We do
+ # this because we can't validate the response of an insecurely fetched
+ # origin, and we don't want someone to be able to poison the cache and
+        # require manual eviction from the cache to fix it.
if cache:
- http_adapter = CacheControlAdapter(
+ secure_adapter = CacheControlAdapter(
cache=SafeFileCache(cache),
max_retries=retries,
)
else:
- http_adapter = HTTPAdapter(max_retries=retries)
+ secure_adapter = HTTPAdapter(max_retries=retries)
- self.mount("http://", http_adapter)
- self.mount("https://", http_adapter)
+ # Our Insecure HTTPAdapter disables HTTPS validation. It does not
+ # support caching (see above) so we'll use it for all http:// URLs as
+ # well as any https:// host that we've marked as ignoring TLS errors
+ # for.
+ insecure_adapter = InsecureHTTPAdapter(max_retries=retries)
+
+ self.mount("https://", secure_adapter)
+ self.mount("http://", insecure_adapter)
# Enable file:// urls
self.mount("file://", LocalFSAdapter())
+ # We want to use a non-validating adapter for any requests which are
+ # deemed insecure.
+ for host in insecure_hosts:
+ self.mount("https://{0}/".format(host), insecure_adapter)
+
def request(self, method, url, *args, **kwargs):
# Allow setting a default timeout on a session
kwargs.setdefault("timeout", self.timeout)
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-8651
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'module' object has no attribute 'open_file'
```
======================================================================
ERROR: test_frame_select_complex2 (pandas.io.tests.test_pytables.TestHDFStore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 3686, in test_frame_select_complex2
parms.to_hdf(pp,'df',mode='w',format='table',data_columns=['A'])
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/core/generic.py", line 896, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 293, in to_hdf
complib=complib) as store:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 274, in get_store
store = HDFStore(path, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 423, in __init__
self.open(mode=mode, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 553, in open
self._handle = tables.open_file(self._path, self._mode, **kwargs)
AttributeError: 'module' object has no attribute 'open_file'
```
on the same old ubuntu 13.10
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.15.0
nose: 1.3.0
Cython: 0.19
numpy: 1.7.1
scipy: 0.12.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.3.0
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.2.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.4
xlsxwriter: None
lxml: None
bs4: 4.2.0
html5lib: 0.95-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.7.0 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](https://github.com/pydata/numexpr)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146  - Additionally, if you're using [Anaconda][Anaconda] you should
147    definitely read [the gotchas about HTML parsing
148    libraries][html-gotchas]
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the PyData mailing list / Google group:
222
223 https://groups.google.com/forum/#!forum/pydata
224
[end of README.md]
[start of bench/io_roundtrip.py]
1 from __future__ import print_function
2 import time
3 import os
4 import numpy as np
5
6 import la
7 import pandas
8 from pandas.compat import range
9 from pandas import datetools, DatetimeIndex
10
11
12 def timeit(f, iterations):
13 start = time.clock()
14
15 for i in range(iterations):
16 f()
17
18 return time.clock() - start
19
20
21 def rountrip_archive(N, K=50, iterations=10):
22 # Create data
23 arr = np.random.randn(N, K)
24 # lar = la.larry(arr)
25 dma = pandas.DataFrame(arr,
26 DatetimeIndex('1/1/2000', periods=N,
27 offset=datetools.Minute()))
28 dma[201] = 'bar'
29
30 # filenames
31 filename_numpy = '/Users/wesm/tmp/numpy.npz'
32 filename_larry = '/Users/wesm/tmp/archive.hdf5'
33 filename_pandas = '/Users/wesm/tmp/pandas_tmp'
34
35 # Delete old files
36 try:
37 os.unlink(filename_numpy)
38 except:
39 pass
40 try:
41 os.unlink(filename_larry)
42 except:
43 pass
44
45 try:
46 os.unlink(filename_pandas)
47 except:
48 pass
49
50 # Time a round trip save and load
51 # numpy_f = lambda: numpy_roundtrip(filename_numpy, arr, arr)
52 # numpy_time = timeit(numpy_f, iterations) / iterations
53
54 # larry_f = lambda: larry_roundtrip(filename_larry, lar, lar)
55 # larry_time = timeit(larry_f, iterations) / iterations
56
57 pandas_f = lambda: pandas_roundtrip(filename_pandas, dma, dma)
58 pandas_time = timeit(pandas_f, iterations) / iterations
59 print('pandas (HDF5) %7.4f seconds' % pandas_time)
60
61     pickle_f = lambda: pandas_roundtrip_pickle(filename_pandas, dma, dma)
62 pickle_time = timeit(pickle_f, iterations) / iterations
63 print('pandas (pickle) %7.4f seconds' % pickle_time)
64
65 # print('Numpy (npz) %7.4f seconds' % numpy_time)
66 # print('larry (HDF5) %7.4f seconds' % larry_time)
67
68 # Delete old files
69 try:
70 os.unlink(filename_numpy)
71 except:
72 pass
73 try:
74 os.unlink(filename_larry)
75 except:
76 pass
77
78 try:
79 os.unlink(filename_pandas)
80 except:
81 pass
82
83
84 def numpy_roundtrip(filename, arr1, arr2):
85 np.savez(filename, arr1=arr1, arr2=arr2)
86 npz = np.load(filename)
87 arr1 = npz['arr1']
88 arr2 = npz['arr2']
89
90
91 def larry_roundtrip(filename, lar1, lar2):
92 io = la.IO(filename)
93 io['lar1'] = lar1
94 io['lar2'] = lar2
95 lar1 = io['lar1']
96 lar2 = io['lar2']
97
98
99 def pandas_roundtrip(filename, dma1, dma2):
100 # What's the best way to code this?
101 from pandas.io.pytables import HDFStore
102 store = HDFStore(filename)
103 store['dma1'] = dma1
104 store['dma2'] = dma2
105 dma1 = store['dma1']
106 dma2 = store['dma2']
107
108
109 def pandas_roundtrip_pickle(filename, dma1, dma2):
110 dma1.save(filename)
111 dma1 = pandas.DataFrame.load(filename)
112 dma2.save(filename)
113 dma2 = pandas.DataFrame.load(filename)
114
115 if __name__ == '__main__':
116 rountrip_archive(10000, K=200)
117
[end of bench/io_roundtrip.py]
[start of bench/serialize.py]
1 from __future__ import print_function
2 from pandas.compat import range, lrange
3 import time
4 import os
5 import numpy as np
6
7 import la
8 import pandas
9
10
11 def timeit(f, iterations):
12 start = time.clock()
13
14 for i in range(iterations):
15 f()
16
17 return time.clock() - start
18
19
20 def roundtrip_archive(N, iterations=10):
21
22 # Create data
23 arr = np.random.randn(N, N)
24 lar = la.larry(arr)
25 dma = pandas.DataFrame(arr, lrange(N), lrange(N))
26
27 # filenames
28 filename_numpy = '/Users/wesm/tmp/numpy.npz'
29 filename_larry = '/Users/wesm/tmp/archive.hdf5'
30 filename_pandas = '/Users/wesm/tmp/pandas_tmp'
31
32 # Delete old files
33 try:
34 os.unlink(filename_numpy)
35 except:
36 pass
37 try:
38 os.unlink(filename_larry)
39 except:
40 pass
41 try:
42 os.unlink(filename_pandas)
43 except:
44 pass
45
46 # Time a round trip save and load
47 numpy_f = lambda: numpy_roundtrip(filename_numpy, arr, arr)
48 numpy_time = timeit(numpy_f, iterations) / iterations
49
50 larry_f = lambda: larry_roundtrip(filename_larry, lar, lar)
51 larry_time = timeit(larry_f, iterations) / iterations
52
53 pandas_f = lambda: pandas_roundtrip(filename_pandas, dma, dma)
54 pandas_time = timeit(pandas_f, iterations) / iterations
55
56 print('Numpy (npz) %7.4f seconds' % numpy_time)
57 print('larry (HDF5) %7.4f seconds' % larry_time)
58 print('pandas (HDF5) %7.4f seconds' % pandas_time)
59
60
61 def numpy_roundtrip(filename, arr1, arr2):
62 np.savez(filename, arr1=arr1, arr2=arr2)
63 npz = np.load(filename)
64 arr1 = npz['arr1']
65 arr2 = npz['arr2']
66
67
68 def larry_roundtrip(filename, lar1, lar2):
69 io = la.IO(filename)
70 io['lar1'] = lar1
71 io['lar2'] = lar2
72 lar1 = io['lar1']
73 lar2 = io['lar2']
74
75
76 def pandas_roundtrip(filename, dma1, dma2):
77 from pandas.io.pytables import HDFStore
78 store = HDFStore(filename)
79 store['dma1'] = dma1
80 store['dma2'] = dma2
81 dma1 = store['dma1']
82 dma2 = store['dma2']
83
84
85 def pandas_roundtrip_pickle(filename, dma1, dma2):
86 dma1.save(filename)
87 dma1 = pandas.DataFrame.load(filename)
88 dma2.save(filename)
89 dma2 = pandas.DataFrame.load(filename)
90
[end of bench/serialize.py]
[start of pandas/util/print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7
8
9 def get_sys_info():
10 "Returns system information as a dict"
11
12 blob = []
13
14 # get full commit hash
15 commit = None
16 if os.path.isdir(".git") and os.path.isdir("pandas"):
17 try:
18 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
19 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
20 so, serr = pipe.communicate()
21 except:
22 pass
23 else:
24 if pipe.returncode == 0:
25 commit = so
26 try:
27 commit = so.decode('utf-8')
28 except ValueError:
29 pass
30 commit = commit.strip().strip('"')
31
32 blob.append(('commit', commit))
33
34 try:
35 sysname, nodename, release, version, machine, processor = platform.uname(
36 )
37 blob.extend([
38 ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]),
39 ("python-bits", struct.calcsize("P") * 8),
40 ("OS", "%s" % (sysname)),
41 ("OS-release", "%s" % (release)),
42 # ("Version", "%s" % (version)),
43 ("machine", "%s" % (machine)),
44 ("processor", "%s" % (processor)),
45 ("byteorder", "%s" % sys.byteorder),
46 ("LC_ALL", "%s" % os.environ.get('LC_ALL', "None")),
47 ("LANG", "%s" % os.environ.get('LANG', "None")),
48
49 ])
50 except:
51 pass
52
53 return blob
54
55
56 def show_versions(as_json=False):
57 import imp
58 sys_info = get_sys_info()
59
60 deps = [
61 # (MODULE_NAME, f(mod) -> mod version)
62 ("pandas", lambda mod: mod.__version__),
63 ("nose", lambda mod: mod.__version__),
64 ("Cython", lambda mod: mod.__version__),
65 ("numpy", lambda mod: mod.version.version),
66 ("scipy", lambda mod: mod.version.version),
67 ("statsmodels", lambda mod: mod.__version__),
68 ("IPython", lambda mod: mod.__version__),
69 ("sphinx", lambda mod: mod.__version__),
70 ("patsy", lambda mod: mod.__version__),
71 ("dateutil", lambda mod: mod.__version__),
72 ("pytz", lambda mod: mod.VERSION),
73 ("bottleneck", lambda mod: mod.__version__),
74 ("tables", lambda mod: mod.__version__),
75 ("numexpr", lambda mod: mod.__version__),
76 ("matplotlib", lambda mod: mod.__version__),
77 ("openpyxl", lambda mod: mod.__version__),
78 ("xlrd", lambda mod: mod.__VERSION__),
79 ("xlwt", lambda mod: mod.__VERSION__),
80 ("xlsxwriter", lambda mod: mod.__version__),
81 ("lxml", lambda mod: mod.etree.__version__),
82 ("bs4", lambda mod: mod.__version__),
83 ("html5lib", lambda mod: mod.__version__),
84 ("httplib2", lambda mod: mod.__version__),
85 ("apiclient", lambda mod: mod.__version__),
86 ("rpy2", lambda mod: mod.__version__),
87 ("sqlalchemy", lambda mod: mod.__version__),
88 ("pymysql", lambda mod: mod.__version__),
89 ("psycopg2", lambda mod: mod.__version__),
90 ]
91
92 deps_blob = list()
93 for (modname, ver_f) in deps:
94 try:
95 try:
96 mod = imp.load_module(modname, *imp.find_module(modname))
97 except (ImportError):
98 import importlib
99 mod = importlib.import_module(modname)
100 ver = ver_f(mod)
101 deps_blob.append((modname, ver))
102 except:
103 deps_blob.append((modname, None))
104
105 if (as_json):
106 # 2.6-safe
107 try:
108 import json
109 except:
110 import simplejson as json
111
112 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
113
114 if as_json == True:
115 print(j)
116 else:
117 with codecs.open(as_json, "wb", encoding='utf8') as f:
118 json.dump(j, f, indent=2)
119
120 else:
121
122 print("\nINSTALLED VERSIONS")
123 print("------------------")
124
125 for k, stat in sys_info:
126 print("%s: %s" % (k, stat))
127
128 print("")
129 for k, stat in deps_blob:
130 print("%s: %s" % (k, stat))
131
132
133 def main():
134 # optparse is 2.6-safe
135 from optparse import OptionParser
136 parser = OptionParser()
137 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
138 help="Save output as JSON into file, pass in '-' to output to stdout")
139
140 (options, args) = parser.parse_args()
141
142 if options.json == "-":
143 options.json = True
144
145 show_versions(as_json=options.json)
146
147 return 0
148
149 if __name__ == "__main__":
150 sys.exit(main())
151
[end of pandas/util/print_versions.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import os
10 import sys
11 import shutil
12 import warnings
13 import re
14
15 # may need to work around setuptools bug by providing a fake Pyrex
16 try:
17 import Cython
18 sys.path.insert(0, os.path.join(os.path.dirname(__file__), "fake_pyrex"))
19 except ImportError:
20 pass
21
22 # try bootstrapping setuptools if it doesn't exist
23 try:
24 import pkg_resources
25 try:
26 pkg_resources.require("setuptools>=0.6c5")
27 except pkg_resources.VersionConflict:
28 from ez_setup import use_setuptools
29 use_setuptools(version="0.6c5")
30 from setuptools import setup, Command
31 _have_setuptools = True
32 except ImportError:
33 # no setuptools installed
34 from distutils.core import setup, Command
35 _have_setuptools = False
36
37 setuptools_kwargs = {}
38 min_numpy_ver = '1.7.0'
39 if sys.version_info[0] >= 3:
40
41 setuptools_kwargs = {
42 'zip_safe': False,
43 'install_requires': ['python-dateutil >= 2',
44 'pytz >= 2011k',
45 'numpy >= %s' % min_numpy_ver],
46 'setup_requires': ['numpy >= %s' % min_numpy_ver],
47 }
48 if not _have_setuptools:
49 sys.exit("need setuptools/distribute for Py3k"
50 "\n$ pip install distribute")
51
52 else:
53 setuptools_kwargs = {
54 'install_requires': ['python-dateutil',
55 'pytz >= 2011k',
56 'numpy >= %s' % min_numpy_ver],
57 'setup_requires': ['numpy >= %s' % min_numpy_ver],
58 'zip_safe': False,
59 }
60
61 if not _have_setuptools:
62 try:
63 import numpy
64 import dateutil
65 setuptools_kwargs = {}
66 except ImportError:
67 sys.exit("install requires: 'python-dateutil < 2','numpy'."
68 " use pip or easy_install."
69 "\n $ pip install 'python-dateutil < 2' 'numpy'")
70
71 from distutils.extension import Extension
72 from distutils.command.build import build
73 from distutils.command.sdist import sdist
74 from distutils.command.build_ext import build_ext as _build_ext
75
76 try:
77 from Cython.Distutils import build_ext as _build_ext
78 # from Cython.Distutils import Extension # to get pyrex debugging symbols
79 cython = True
80 except ImportError:
81 cython = False
82
83 from os.path import join as pjoin
84
85
86 class build_ext(_build_ext):
87 def build_extensions(self):
88 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include')
89
90 for ext in self.extensions:
91 if hasattr(ext, 'include_dirs') and not numpy_incl in ext.include_dirs:
92 ext.include_dirs.append(numpy_incl)
93 _build_ext.build_extensions(self)
94
95
96 DESCRIPTION = ("Powerful data structures for data analysis, time series,"
97 "and statistics")
98 LONG_DESCRIPTION = """
99 **pandas** is a Python package providing fast, flexible, and expressive data
100 structures designed to make working with structured (tabular, multidimensional,
101 potentially heterogeneous) and time series data both easy and intuitive. It
102 aims to be the fundamental high-level building block for doing practical,
103 **real world** data analysis in Python. Additionally, it has the broader goal
104 of becoming **the most powerful and flexible open source data analysis /
105 manipulation tool available in any language**. It is already well on its way
106 toward this goal.
107
108 pandas is well suited for many different kinds of data:
109
110 - Tabular data with heterogeneously-typed columns, as in an SQL table or
111 Excel spreadsheet
112 - Ordered and unordered (not necessarily fixed-frequency) time series data.
113 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
114 column labels
115 - Any other form of observational / statistical data sets. The data actually
116 need not be labeled at all to be placed into a pandas data structure
117
118 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
119 (2-dimensional), handle the vast majority of typical use cases in finance,
120 statistics, social science, and many areas of engineering. For R users,
121 DataFrame provides everything that R's ``data.frame`` provides and much
122 more. pandas is built on top of `NumPy <http://www.numpy.org>`__ and is
123 intended to integrate well within a scientific computing environment with many
124 other 3rd party libraries.
125
126 Here are just a few of the things that pandas does well:
127
128 - Easy handling of **missing data** (represented as NaN) in floating point as
129 well as non-floating point data
130 - Size mutability: columns can be **inserted and deleted** from DataFrame and
131 higher dimensional objects
132 - Automatic and explicit **data alignment**: objects can be explicitly
133 aligned to a set of labels, or the user can simply ignore the labels and
134 let `Series`, `DataFrame`, etc. automatically align the data for you in
135 computations
136 - Powerful, flexible **group by** functionality to perform
137 split-apply-combine operations on data sets, for both aggregating and
138 transforming data
139 - Make it **easy to convert** ragged, differently-indexed data in other
140 Python and NumPy data structures into DataFrame objects
141 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
142 of large data sets
143 - Intuitive **merging** and **joining** data sets
144 - Flexible **reshaping** and pivoting of data sets
145 - **Hierarchical** labeling of axes (possible to have multiple labels per
146 tick)
147 - Robust IO tools for loading data from **flat files** (CSV and delimited),
148 Excel files, databases, and saving / loading data from the ultrafast **HDF5
149 format**
150 - **Time series**-specific functionality: date range generation and frequency
151 conversion, moving window statistics, moving window linear regressions,
152 date shifting and lagging, etc.
153
154 Many of these principles are here to address the shortcomings frequently
155 experienced using other languages / scientific research environments. For data
156 scientists, working with data is typically divided into multiple stages:
157 munging and cleaning data, analyzing / modeling it, then organizing the results
158 of the analysis into a form suitable for plotting or tabular display. pandas is
159 the ideal tool for all of these tasks.
160
161 Note
162 ----
163 Windows binaries built against NumPy 1.8.1
164 """
165
166 DISTNAME = 'pandas'
167 LICENSE = 'BSD'
168 AUTHOR = "The PyData Development Team"
169 EMAIL = "[email protected]"
170 URL = "http://pandas.pydata.org"
171 DOWNLOAD_URL = ''
172 CLASSIFIERS = [
173 'Development Status :: 4 - Beta',
174 'Environment :: Console',
175 'Operating System :: OS Independent',
176 'Intended Audience :: Science/Research',
177 'Programming Language :: Python',
178 'Programming Language :: Python :: 2',
179 'Programming Language :: Python :: 3',
180 'Programming Language :: Python :: 2.6',
181 'Programming Language :: Python :: 2.7',
182 'Programming Language :: Python :: 3.2',
183 'Programming Language :: Python :: 3.3',
184 'Programming Language :: Python :: 3.4',
185 'Programming Language :: Cython',
186 'Topic :: Scientific/Engineering',
187 ]
188
189 MAJOR = 0
190 MINOR = 15
191 MICRO = 0
192 ISRELEASED = False
193 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
194 QUALIFIER = ''
195
196 FULLVERSION = VERSION
197 write_version = True
198
199 if not ISRELEASED:
200 import subprocess
201 FULLVERSION += '.dev'
202
203 pipe = None
204 for cmd in ['git','git.cmd']:
205 try:
206 pipe = subprocess.Popen([cmd, "describe", "--always", "--match", "v[0-9]*"],
207 stdout=subprocess.PIPE)
208 (so,serr) = pipe.communicate()
209 if pipe.returncode == 0:
210 break
211 except:
212 pass
213
214 if pipe is None or pipe.returncode != 0:
215 # no git, or not in git dir
216 if os.path.exists('pandas/version.py'):
217 warnings.warn("WARNING: Couldn't get git revision, using existing pandas/version.py")
218 write_version = False
219 else:
220 warnings.warn("WARNING: Couldn't get git revision, using generic version string")
221 else:
222 # have git, in git dir, but may have used a shallow clone (travis does this)
223 rev = so.strip()
224 # makes distutils blow up on Python 2.7
225 if sys.version_info[0] >= 3:
226 rev = rev.decode('ascii')
227
228 if not rev.startswith('v') and re.match("[a-zA-Z0-9]{7,9}",rev):
229 # partial clone, manually construct version string
230 # this is the format before we started using git-describe
231 # to get an ordering on dev version strings.
232 rev ="v%s.dev-%s" % (VERSION, rev)
233
234         # Strip leading v from tags format "vx.y.z" to get the version string
235 FULLVERSION = rev.lstrip('v')
236
237 else:
238 FULLVERSION += QUALIFIER
239
240
241 def write_version_py(filename=None):
242 cnt = """\
243 version = '%s'
244 short_version = '%s'
245 """
246 if not filename:
247 filename = os.path.join(
248 os.path.dirname(__file__), 'pandas', 'version.py')
249
250 a = open(filename, 'w')
251 try:
252 a.write(cnt % (FULLVERSION, VERSION))
253 finally:
254 a.close()
255
256 if write_version:
257 write_version_py()
258
259 class CleanCommand(Command):
260 """Custom distutils command to clean the .so and .pyc files."""
261
262 user_options = [("all", "a", "")]
263
264 def initialize_options(self):
265 self.all = True
266 self._clean_me = []
267 self._clean_trees = []
268 self._clean_exclude = ['np_datetime.c',
269 'np_datetime_strings.c',
270 'period.c',
271 'tokenizer.c',
272 'io.c',
273 'ujson.c',
274 'objToJSON.c',
275 'JSONtoObj.c',
276 'ultrajsonenc.c',
277 'ultrajsondec.c',
278 ]
279
280 for root, dirs, files in os.walk('pandas'):
281 for f in files:
282 if f in self._clean_exclude:
283 continue
284
285 # XXX
286 if 'ujson' in f:
287 continue
288
289 if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o',
290 '.pyo',
291 '.pyd', '.c', '.orig'):
292 self._clean_me.append(pjoin(root, f))
293 for d in dirs:
294 if d == '__pycache__':
295 self._clean_trees.append(pjoin(root, d))
296
297 for d in ('build', 'dist'):
298 if os.path.exists(d):
299 self._clean_trees.append(d)
300
301 def finalize_options(self):
302 pass
303
304 def run(self):
305 for clean_me in self._clean_me:
306 try:
307 os.unlink(clean_me)
308 except Exception:
309 pass
310 for clean_tree in self._clean_trees:
311 try:
312 shutil.rmtree(clean_tree)
313 except Exception:
314 pass
315
316
317 class CheckSDist(sdist):
318 """Custom sdist that ensures Cython has compiled all pyx files to c."""
319
320 _pyxfiles = ['pandas/lib.pyx',
321 'pandas/hashtable.pyx',
322 'pandas/tslib.pyx',
323 'pandas/index.pyx',
324 'pandas/algos.pyx',
325 'pandas/parser.pyx',
326 'pandas/src/sparse.pyx',
327 'pandas/src/testing.pyx']
328
329 def initialize_options(self):
330 sdist.initialize_options(self)
331
332 '''
333 self._pyxfiles = []
334 for root, dirs, files in os.walk('pandas'):
335 for f in files:
336 if f.endswith('.pyx'):
337 self._pyxfiles.append(pjoin(root, f))
338 '''
339
340 def run(self):
341 if 'cython' in cmdclass:
342 self.run_command('cython')
343 else:
344 for pyxfile in self._pyxfiles:
345 cfile = pyxfile[:-3] + 'c'
346 msg = "C-source file '%s' not found." % (cfile) +\
347 " Run 'setup.py cython' before sdist."
348 assert os.path.isfile(cfile), msg
349 sdist.run(self)
350
351
352 class CheckingBuildExt(build_ext):
353 """Subclass build_ext to get clearer report if Cython is necessary."""
354
355 def check_cython_extensions(self, extensions):
356 for ext in extensions:
357 for src in ext.sources:
358 if not os.path.exists(src):
359 raise Exception("""Cython-generated file '%s' not found.
360 Cython is required to compile pandas from a development branch.
361 Please install Cython or download a release package of pandas.
362 """ % src)
363
364 def build_extensions(self):
365 self.check_cython_extensions(self.extensions)
366 build_ext.build_extensions(self)
367
368
369 class CythonCommand(build_ext):
370 """Custom distutils command subclassed from Cython.Distutils.build_ext
371 to compile pyx->c, and stop there. All this does is override the
372 C-compile method build_extension() with a no-op."""
373 def build_extension(self, ext):
374 pass
375
376
377 class DummyBuildSrc(Command):
378 """ numpy's build_src command interferes with Cython's build_ext.
379 """
380 user_options = []
381
382 def initialize_options(self):
383 self.py_modules_dict = {}
384
385 def finalize_options(self):
386 pass
387
388 def run(self):
389 pass
390
391 cmdclass = {'clean': CleanCommand,
392 'build': build,
393 'sdist': CheckSDist}
394
395 try:
396 from wheel.bdist_wheel import bdist_wheel
397
398 class BdistWheel(bdist_wheel):
399 def get_tag(self):
400 tag = bdist_wheel.get_tag(self)
401 repl = 'macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64'
402 if tag[2] == 'macosx_10_6_intel':
403 tag = (tag[0], tag[1], repl)
404 return tag
405 cmdclass['bdist_wheel'] = BdistWheel
406 except ImportError:
407 pass
408
409 if cython:
410 suffix = '.pyx'
411 cmdclass['build_ext'] = CheckingBuildExt
412 cmdclass['cython'] = CythonCommand
413 else:
414 suffix = '.c'
415 cmdclass['build_src'] = DummyBuildSrc
416 cmdclass['build_ext'] = CheckingBuildExt
417
418 lib_depends = ['reduce', 'inference', 'properties']
419
420
421 def srcpath(name=None, suffix='.pyx', subdir='src'):
422 return pjoin('pandas', subdir, name + suffix)
423
424 if suffix == '.pyx':
425 lib_depends = [srcpath(f, suffix='.pyx') for f in lib_depends]
426 lib_depends.append('pandas/src/util.pxd')
427 else:
428 lib_depends = []
429 plib_depends = []
430
431 common_include = ['pandas/src/klib', 'pandas/src']
432
433
434 def pxd(name):
435 return os.path.abspath(pjoin('pandas', name + '.pxd'))
436
437
438 lib_depends = lib_depends + ['pandas/src/numpy_helper.h',
439 'pandas/src/parse_helper.h']
440
441
442 tseries_depends = ['pandas/src/datetime/np_datetime.h',
443 'pandas/src/datetime/np_datetime_strings.h',
444 'pandas/src/period.h']
445
446
447 # some linux distros require it
448 libraries = ['m'] if 'win32' not in sys.platform else []
449
450 ext_data = dict(
451 lib={'pyxfile': 'lib',
452 'pxdfiles': [],
453 'depends': lib_depends},
454 hashtable={'pyxfile': 'hashtable',
455 'pxdfiles': ['hashtable']},
456 tslib={'pyxfile': 'tslib',
457 'depends': tseries_depends,
458 'sources': ['pandas/src/datetime/np_datetime.c',
459 'pandas/src/datetime/np_datetime_strings.c',
460 'pandas/src/period.c']},
461 index={'pyxfile': 'index',
462 'sources': ['pandas/src/datetime/np_datetime.c',
463 'pandas/src/datetime/np_datetime_strings.c']},
464 algos={'pyxfile': 'algos',
465 'depends': [srcpath('generated', suffix='.pyx'),
466 srcpath('join', suffix='.pyx')]},
467 parser=dict(pyxfile='parser',
468 depends=['pandas/src/parser/tokenizer.h',
469 'pandas/src/parser/io.h',
470 'pandas/src/numpy_helper.h'],
471 sources=['pandas/src/parser/tokenizer.c',
472 'pandas/src/parser/io.c'])
473 )
474
475 extensions = []
476
477 for name, data in ext_data.items():
478 sources = [srcpath(data['pyxfile'], suffix=suffix, subdir='')]
479 pxds = [pxd(x) for x in data.get('pxdfiles', [])]
480 if suffix == '.pyx' and pxds:
481 sources.extend(pxds)
482
483 sources.extend(data.get('sources', []))
484
485 include = data.get('include', common_include)
486
487 obj = Extension('pandas.%s' % name,
488 sources=sources,
489 depends=data.get('depends', []),
490 include_dirs=include)
491
492 extensions.append(obj)
493
494
495 sparse_ext = Extension('pandas._sparse',
496 sources=[srcpath('sparse', suffix=suffix)],
497 include_dirs=[],
498 libraries=libraries)
499
500 extensions.extend([sparse_ext])
501
502 testing_ext = Extension('pandas._testing',
503 sources=[srcpath('testing', suffix=suffix)],
504 include_dirs=[],
505 libraries=libraries)
506
507 extensions.extend([testing_ext])
508
509 #----------------------------------------------------------------------
510 # msgpack stuff here
511
512 if sys.byteorder == 'big':
513 macros = [('__BIG_ENDIAN__', '1')]
514 else:
515 macros = [('__LITTLE_ENDIAN__', '1')]
516
517 msgpack_ext = Extension('pandas.msgpack',
518 sources = [srcpath('msgpack',
519 suffix=suffix if suffix == '.pyx' else '.cpp',
520 subdir='')],
521 language='c++',
522 include_dirs=common_include,
523 define_macros=macros)
524
525 extensions.append(msgpack_ext)
526
527 # if not ISRELEASED:
528 # extensions.extend([sandbox_ext])
529
530 if suffix == '.pyx' and 'setuptools' in sys.modules:
531 # undo dumb setuptools bug clobbering .pyx sources back to .c
532 for ext in extensions:
533 if ext.sources[0].endswith(('.c','.cpp')):
534 root, _ = os.path.splitext(ext.sources[0])
535 ext.sources[0] = root + suffix
536
537 ujson_ext = Extension('pandas.json',
538 depends=['pandas/src/ujson/lib/ultrajson.h',
539 'pandas/src/numpy_helper.h'],
540 sources=['pandas/src/ujson/python/ujson.c',
541 'pandas/src/ujson/python/objToJSON.c',
542 'pandas/src/ujson/python/JSONtoObj.c',
543 'pandas/src/ujson/lib/ultrajsonenc.c',
544 'pandas/src/ujson/lib/ultrajsondec.c',
545 'pandas/src/datetime/np_datetime.c',
546 'pandas/src/datetime/np_datetime_strings.c'],
547 include_dirs=['pandas/src/ujson/python',
548 'pandas/src/ujson/lib',
549 'pandas/src/datetime'] + common_include,
550 extra_compile_args=['-D_GNU_SOURCE'])
551
552
553 extensions.append(ujson_ext)
554
555
556 if _have_setuptools:
557 setuptools_kwargs["test_suite"] = "nose.collector"
558
559 # The build cache system does string matching below this point.
560 # if you change something, be careful.
561
562 setup(name=DISTNAME,
563 version=FULLVERSION,
564 maintainer=AUTHOR,
565 packages=['pandas',
566 'pandas.compat',
567 'pandas.computation',
568 'pandas.computation.tests',
569 'pandas.core',
570 'pandas.io',
571 'pandas.rpy',
572 'pandas.sandbox',
573 'pandas.sparse',
574 'pandas.sparse.tests',
575 'pandas.stats',
576 'pandas.util',
577 'pandas.tests',
578 'pandas.tests.test_msgpack',
579 'pandas.tools',
580 'pandas.tools.tests',
581 'pandas.tseries',
582 'pandas.tseries.tests',
583 'pandas.io.tests',
584 'pandas.io.tests.test_json',
585 'pandas.stats.tests',
586 ],
587 package_data={'pandas.io': ['tests/data/legacy_hdf/*.h5',
588 'tests/data/legacy_pickle/0.10.1/*.pickle',
589 'tests/data/legacy_pickle/0.11.0/*.pickle',
590 'tests/data/legacy_pickle/0.12.0/*.pickle',
591 'tests/data/legacy_pickle/0.13.0/*.pickle',
592 'tests/data/legacy_pickle/0.14.0/*.pickle',
593 'tests/data/*.csv',
594 'tests/data/*.dta',
595 'tests/data/*.txt',
596 'tests/data/*.xls',
597 'tests/data/*.xlsx',
598 'tests/data/*.xlsm',
599 'tests/data/*.table',
600 'tests/data/*.html',
601 'tests/data/html_encoding/*.html',
602 'tests/test_json/data/*.json'],
603 'pandas.tools': ['tests/*.csv'],
604 'pandas.tests': ['data/*.pickle',
605 'data/*.csv'],
606 'pandas.tseries.tests': ['data/*.pickle',
607 'data/*.csv']
608 },
609 ext_modules=extensions,
610 maintainer_email=EMAIL,
611 description=DESCRIPTION,
612 license=LICENSE,
613 cmdclass=cmdclass,
614 url=URL,
615 download_url=DOWNLOAD_URL,
616 long_description=LONG_DESCRIPTION,
617 classifiers=CLASSIFIERS,
618 platforms='any',
619 **setuptools_kwargs)
620
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
46c52e20d8418b0668738007564f3a8669275aaf
|
AttributeError: 'module' object has no attribute 'open_file'
```
======================================================================
ERROR: test_frame_select_complex2 (pandas.io.tests.test_pytables.TestHDFStore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 3686, in test_frame_select_complex2
parms.to_hdf(pp,'df',mode='w',format='table',data_columns=['A'])
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/core/generic.py", line 896, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 293, in to_hdf
complib=complib) as store:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 274, in get_store
store = HDFStore(path, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 423, in __init__
self.open(mode=mode, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 553, in open
self._handle = tables.open_file(self._path, self._mode, **kwargs)
AttributeError: 'module' object has no attribute 'open_file'
```
on the same old ubuntu 13.10
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.15.0
nose: 1.3.0
Cython: 0.19
numpy: 1.7.1
scipy: 0.12.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.3.0
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.2.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.4
xlsxwriter: None
lxml: None
bs4: 4.2.0
html5lib: 0.95-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
|
same here, this PyTables is too old and not supported
Numexpr too
we do a delayed import of tables, so it's possible that the test machinery swallows the error
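As a minimal sketch (not the actual pandas code): a delayed import combined with an explicit minimum-version check would turn this `AttributeError` into a clear message, since PyTables 3.0 renamed `tables.openFile` to `tables.open_file` and 2.4.0 only has the old name. The `_MIN_PYTABLES` value and the `_ensure_tables` helper are assumptions for illustration.

```python
from distutils.version import LooseVersion

_MIN_PYTABLES = "3.0.0"  # assumed floor: 3.0 introduced tables.open_file
_table_mod = None


def _ensure_tables():
    """Import PyTables lazily and fail loudly if it is missing or too old."""
    global _table_mod
    if _table_mod is None:
        try:
            import tables
        except ImportError as err:
            raise ImportError("HDFStore requires PyTables: %s" % err)
        if LooseVersion(tables.__version__) < LooseVersion(_MIN_PYTABLES):
            raise ImportError(
                "PyTables >= %s is required for HDF5 support, found %s"
                % (_MIN_PYTABLES, tables.__version__)
            )
        _table_mod = tables
    return _table_mod
```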
|
2014-10-27T12:09:24Z
|
<patch>
</patch>
|
[]
|
[]
| |||
conan-io__conan-391
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conan install -u traceback
When running `conan install -u` while no previous `conan install` has been run (so, e.g., no conanbuildinfo.cmake exists yet), I get the following traceback:

</issue>
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open source, package manager.
5
6 +------------------------+-------------------------+----------------------+-----------------------+
7 | **master (linux/osx)** | **develop (linux/osx)** | **master (windows)** | **develop** (windows) |
8 +========================+=========================+======================+=======================+
9 | |Build Status1| | |Build Status2| | |Build status3| | |Build status4| |
10 +------------------------+-------------------------+----------------------+-----------------------+
11
12 Setup
13 =====
14
15 From binaries
16 -------------
17
18 We have installers for `most platforms here <http://conan.io>`__, but you
19 can also run **conan** from source if you want.
20
21
22 From pip
23 --------
24
25 Conan is compatible with Python 2 and Python 3.
26
27 - Install pip following `pip docs`_
28
29 - Install conan:
30
31 ::
32
33 $ pip install conan
34
35
36 From Homebrew (OSx)
37 -------------------
38
39 - Install Homebrew following `brew homepage`_.
40
41 ::
42
43 $ brew update
44 $ brew install conan
45
46
47
48 From source
49 -----------
50
51 You can run the **conan** client and server on Windows, macOS, and Linux.
52
53 Install *Python and pip*; search Google for instructions for your operating system.
54 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
55
56 Clone conan repository
57 ~~~~~~~~~~~~~~~~~~~~~~
58
59 ::
60
61 $ git clone https://github.com/conan-io/conan.git
62
63 Install python requirements
64 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
65
66 For running the client:
67
68 ::
69
70 $ sudo pip install -r conans/requirements.txt
71
72 Server:
73
74 ::
75
76 $ sudo apt-get install python-dev
77 $ sudo pip install -r conans/requirements_server.txt
78
79 Development:
80
81 ::
82
83 $ sudo pip install -r conans/requirements_dev.txt
84
85 Running the tests on Ubuntu
86 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
87
88 Make sure that the Python requirements have been installed.
89
90 Before you can run the tests, you need to set a few environment
91 variables first.
92
93 ::
94
95 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
96
97 Ensure that your ``cmake`` has version 2.8 or later. You can see the
98 version with the following command:
99
100 ::
101
102 $ cmake --version
103
104 The appropriate values of ``CONAN_COMPILER`` and
105 ``CONAN_COMPILER_VERSION`` depend on your operating system and your
106 requirements.
107
108 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
109
110 ::
111
112 $ export CONAN_COMPILER=gcc
113 $ export CONAN_COMPILER_VERSION=4.8
114
115 These should work for OS X:
116
117 ::
118
119 $ export CONAN_COMPILER=clang
120 $ export CONAN_COMPILER_VERSION=3.5
121
122 Finally, there are some tests that use conan to package Go-lang
123 libraries, so you would **need to install go-lang** on your computer and
124 add it to your PATH.
125
126 You can run the actual tests like this:
127
128 ::
129
130 $ nosetests .
131
132 About one minute later it should print ``OK``:
133
134 ::
135
136 ..................................................................................................................................................
137 ----------------------------------------------------------------------
138 Ran 146 tests in 50.993s
139
140 OK
141
142 Create a launcher
143 ~~~~~~~~~~~~~~~~~
144
145 The conan entry point is the "conans.conan.main" module. Fill in the absolute
146 path of the cloned repository folder:
147
148 ::
149
150 #!/usr/bin/env python
151 import sys
152 sys.path.append('/home/user/conan') # EDIT!!
153
154 from conans.conan import main
155 main(sys.argv[1:])
156
157 If you are a Windows user, you can name this file "conan.py" and create
158 a file "conan.bat" that calls the python module:
159
160 ::
161
162 CALL python C:/Users/user/conan.py %*
163
164 Then add that 'conan' file to your PATH and you are ready:
165
166 ::
167
168 $ conan --help
169
170 Conan commands. Type $conan "command" -h for help
171 build calls your project conanfile.py "build" method.
172 export copies a conanfile.py and associated (export) files to your local store,
173 install install in the local store the given requirements.
174 remove Remove any folder from your local/remote store
175 search show local/remote packages
176 test build and run your package test. Must have conanfile.py with "test"
177 upload uploads a conanfile or binary packages from the local store to any remote.
178 user shows or change the current user
179
180 License
181 -------
182
183 `MIT LICENSE <./LICENSE.md>`__
184
185 .. |Build Status1| image:: https://travis-ci.org/conan-io/conan.svg?branch=master
186 :target: https://travis-ci.org/conan-io/conan
187 .. |Build Status2| image:: https://travis-ci.org/conan-io/conan.svg?branch=develop
188 :target: https://travis-ci.org/conan-io/conan
189 .. |Build status3| image:: https://ci.appveyor.com/api/projects/status/5hedgjt9ggj1artx/branch/develop?svg=true
190 :target: https://ci.appveyor.com/project/lasote/conan-mva77/branch/master
191 .. |Build status4| image:: https://ci.appveyor.com/api/projects/status/5hedgjt9ggj1artx/branch/develop?svg=true
192 :target: https://ci.appveyor.com/project/lasote/conan-mva77/branch/develop
193 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
194 .. _`brew homepage`: http://brew.sh/
195
196
[end of README.rst]
[start of conans/client/command.py]
1 from conans.client.paths import ConanPaths
2 import sys
3 import os
4 from conans.client.output import ConanOutput, Color
5 import argparse
6 from conans.errors import ConanException
7 import inspect
8 from conans.client.remote_manager import RemoteManager
9 from conans.client.userio import UserIO
10 from conans.client.rest.auth_manager import ConanApiAuthManager
11 from conans.client.rest.rest_client import RestApiClient
12 from conans.client.store.localdb import LocalDB
13 from conans.util.log import logger
14 from conans.model.ref import ConanFileReference
15 from conans.client.manager import ConanManager
16 from conans.paths import CONANFILE, conan_expand_user
17 import requests
18 from conans.client.rest.version_checker import VersionCheckerRequester
19 from conans import __version__ as CLIENT_VERSION
20 from conans.client.conf import MIN_SERVER_COMPATIBLE_VERSION
21 from conans.model.version import Version
22 from conans.client.migrations import ClientMigrator
23 import hashlib
24 from conans.util.files import rmdir, load, save_files
25 from argparse import RawTextHelpFormatter
26 from conans.client.runner import ConanRunner
27 from conans.client.remote_registry import RemoteRegistry
28 from conans.model.scope import Scopes
29 import re
30
31
32 class Extender(argparse.Action):
33     '''Allows the same flag to be used several times in a command and creates a list with the values.
34 For example:
35 conan install MyPackage/1.2@user/channel -o qt:value -o mode:2 -s cucumber:true
36 It creates:
37 options = ['qt:value', 'mode:2']
38 settings = ['cucumber:true']
39 '''
40
41 def __call__(self, parser, namespace, values, option_strings=None): # @UnusedVariable
42         # Need None here in case `argparse.SUPPRESS` was supplied for `dest`
43 dest = getattr(namespace, self.dest, None)
44 if(not hasattr(dest, 'extend') or dest == self.default):
45 dest = []
46 setattr(namespace, self.dest, dest)
47 # if default isn't set to None, this method might be called
48 # with the default as `values` for other arguments which
49 # share this destination.
50 parser.set_defaults(**{self.dest: None})
51
52 try:
53 dest.extend(values)
54 except ValueError:
55 dest.append(values)
56
57
58 class Command(object):
59 """ A single command of the conan application, with all the first level commands.
60 Manages the parsing of parameters and delegates functionality in
61 collaborators.
62 It can also show help of the tool
63 """
64 def __init__(self, paths, user_io, runner, remote_manager):
65 assert isinstance(user_io, UserIO)
66 assert isinstance(paths, ConanPaths)
67 self._conan_paths = paths
68 self._user_io = user_io
69 self._runner = runner
70 self._manager = ConanManager(paths, user_io, runner, remote_manager)
71
72 def _parse_args(self, parser):
73 parser.add_argument("-r", "--remote", help='look for in the remote storage')
74 parser.add_argument("--options", "-o",
75 help='load options to build the package, e.g., -o with_qt=true',
76 nargs=1, action=Extender)
77 parser.add_argument("--settings", "-s",
78 help='load settings to build the package, -s compiler:gcc',
79 nargs=1, action=Extender)
80 parser.add_argument("--build", "-b", action=Extender, nargs="*",
81 help='''Optional, use it to choose if you want to build from sources:
82
83 --build Build all from sources, do not use binary packages.
84 --build=never Default option. Never build, use binary packages or fail if a binary package is not found.
85 --build=missing Build from code if a binary package is not found.
86 --build=[pattern] Build always these packages from source, but never build the others. Allows multiple --build parameters.
87 ''')
88
89 def _get_tuples_list_from_extender_arg(self, items):
90 if not items:
91 return []
92 # Validate the pairs
93 for item in items:
94 chunks = item.split("=")
95 if len(chunks) != 2:
96 raise ConanException("Invalid input '%s', use 'name=value'" % item)
97 return [(item[0], item[1]) for item in [item.split("=") for item in items]]
98
99 def _get_build_sources_parameter(self, build_param):
100 # returns True if we want to build the missing libraries
101 # False if building is forbidden
102 # A list with patterns: Will force build matching libraries,
103 # will look for the package for the rest
104
105 if isinstance(build_param, list):
106 if len(build_param) == 0: # All packages from source
107 return ["*"]
108 elif len(build_param) == 1 and build_param[0] == "never":
109 return False # Default
110 elif len(build_param) == 1 and build_param[0] == "missing":
111 return True
112 else: # A list of expressions to match (if matches, will build from source)
113 return ["%s*" % ref_expr for ref_expr in build_param]
114 else:
115 return False # Nothing is built
116
117 def _test_check(self, test_folder, test_folder_name):
118 """ To ensure that the 0.9 version new layout is detected and users warned
119 """
120 # Check old tests, format
121 test_conanfile = os.path.join(test_folder, "conanfile.py")
122 if not os.path.exists(test_conanfile):
123 raise ConanException("Test conanfile.py does not exist")
124 test_conanfile_content = load(test_conanfile)
125 if ".conanfile_directory" not in test_conanfile_content:
126 self._user_io.out.error("""******* conan test command layout has changed *******
127
128 In your "%s" folder 'conanfile.py' you should use the
129 path to the conanfile_directory, something like:
130
131 self.run('cmake %%s %%s' %% (self.conanfile_directory, cmake.command_line))
132
133 """ % (test_folder_name))
134
135 # Test the CMakeLists, if existing
136 test_cmake = os.path.join(test_folder, "CMakeLists.txt")
137 if os.path.exists(test_cmake):
138 test_cmake_content = load(test_cmake)
139 if "${CMAKE_BINARY_DIR}/conanbuildinfo.cmake" not in test_cmake_content:
140 self._user_io.out.error("""******* conan test command layout has changed *******
141
142 In your "%s" folder 'CMakeLists.txt' you should use the
143 path to the CMake binary directory, like this:
144
145 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
146
147 """ % (test_folder_name))
148
149 def new(self, *args):
150 """ create a new package template conanfile.py and other optional files
151 """
152 parser = argparse.ArgumentParser(description=self.new.__doc__, prog="conan new",
153 formatter_class=RawTextHelpFormatter)
154 parser.add_argument("name", help='Package name, e.g.: Poco/1.7.3@user/testing')
155 parser.add_argument("-t", "--test", action='store_true', default=False,
156 help='Create test_package skeleton to test package')
157 parser.add_argument("-i", "--header", action='store_true', default=False,
158 help='Create a headers only package')
159 parser.add_argument("-c", "--pure_c", action='store_true', default=False,
160 help='Create a C-language-only package (non-headers)')
161
162 args = parser.parse_args(*args)
163
164 root_folder = os.getcwd()
165 try:
166 name, version, user, channel = ConanFileReference.loads(args.name)
167 pattern = re.compile('[\W_]+')
168 package_name = pattern.sub('', name).capitalize()
169 except:
170 raise ConanException("Bad parameter, please use full package name,"
171 "e.g: MyLib/1.2.3@user/testing")
172 from conans.client.new import (conanfile, conanfile_header, test_conanfile, test_cmake,
173 test_main)
174 if args.header:
175 files = {"conanfile.py": conanfile_header.format(name=name, version=version,
176 package_name=package_name)}
177 else:
178 files = {"conanfile.py": conanfile.format(name=name, version=version,
179 package_name=package_name)}
180 if args.pure_c:
181 config = "\n def config(self):\n del self.settings.compiler.libcxx"
182 files["conanfile.py"] = files["conanfile.py"] + config
183 if args.test:
184 files["test_package/conanfile.py"] = test_conanfile.format(name=name, version=version,
185 user=user, channel=channel,
186 package_name=package_name)
187 files["test_package/CMakeLists.txt"] = test_cmake
188 files["test_package/example.cpp"] = test_main
189 save_files(root_folder, files)
190 for f in sorted(files):
191 self._user_io.out.success("File saved: %s" % f)
192
193 def test_package(self, *args):
194 """ build and run your package test. Must have conanfile.py with "test"
195 method and "test_package" subfolder with package consumer test project
196 """
197 parser = argparse.ArgumentParser(description=self.test_package.__doc__, prog="conan test_package",
198 formatter_class=RawTextHelpFormatter)
199 parser.add_argument("path", nargs='?', default="", help='path to conanfile file, '
200 'e.g. /my_project/')
201 parser.add_argument("-ne", "--not-export", default=False, action='store_true', help='Do not export the conanfile before test execution')
202 parser.add_argument("-f", "--folder", help='alternative test folder name')
203 parser.add_argument("--scope", "-sc", nargs=1, action=Extender,
204 help='Define scopes for packages')
205 parser.add_argument('--keep-source', '-k', default=False, action='store_true',
206 help='Optional. Do not remove the source folder in local store. '
207 'Use for testing purposes only')
208 self._parse_args(parser)
209
210 args = parser.parse_args(*args)
211
212 root_folder = os.path.normpath(os.path.join(os.getcwd(), args.path))
213 if args.folder:
214 test_folder_name = args.folder
215 test_folder = os.path.join(root_folder, test_folder_name)
216 test_conanfile = os.path.join(test_folder, "conanfile.py")
217 if not os.path.exists(test_conanfile):
218 raise ConanException("test folder '%s' not available, "
219 "or it doesn't have a conanfile.py" % args.folder)
220 else:
221 for name in ["test_package", "test"]:
222 test_folder_name = name
223 test_folder = os.path.join(root_folder, test_folder_name)
224 test_conanfile = os.path.join(test_folder, "conanfile.py")
225 if os.path.exists(test_conanfile):
226 break
227 else:
228 raise ConanException("test folder 'test_package' not available, "
229 "or it doesn't have a conanfile.py")
230
231 options = args.options or []
232 settings = args.settings or []
233
234 sha = hashlib.sha1("".join(options + settings).encode()).hexdigest()
235 build_folder = os.path.join(test_folder, "build", sha)
236 rmdir(build_folder)
237 # shutil.copytree(test_folder, build_folder)
238
239 options = self._get_tuples_list_from_extender_arg(args.options)
240 settings = self._get_tuples_list_from_extender_arg(args.settings)
241 scopes = Scopes.from_list(args.scope) if args.scope else None
242
243 manager = self._manager
244 loader = manager._loader(None, settings, options, scopes)
245 conanfile = loader.load_conan(test_conanfile, self._user_io.out, consumer=True)
246 try:
247 # convert to list from ItemViews required for python3
248 reqs = list(conanfile.requires.items())
249 first_dep = reqs[0][1].conan_reference
250 except Exception:
251 raise ConanException("Unable to retrieve first requirement of test conanfile.py")
252
253 # Forcing an export!
254 if not args.not_export:
255 self._user_io.out.info("Exporting package recipe")
256 user_channel = "%s/%s" % (first_dep.user, first_dep.channel)
257 self._manager.export(user_channel, root_folder, keep_source=args.keep_source)
258
259 lib_to_test = first_dep.name + "*"
260 # Get False or a list of patterns to check
261 if args.build is None and lib_to_test: # Not specified, force build the tested library
262 args.build = [lib_to_test]
263 else:
264 args.build = self._get_build_sources_parameter(args.build)
265
266 self._manager.install(reference=test_folder,
267 current_path=build_folder,
268 remote=args.remote,
269 options=options,
270 settings=settings,
271 build_mode=args.build,
272 scopes=scopes)
273 self._test_check(test_folder, test_folder_name)
274 self._manager.build(test_folder, build_folder, test=True)
275
276 # Alias to test
277 def test(self, *args):
278 """ (deprecated). Alias to test_package, use it instead
279 """
280 self.test_package(*args)
281
282 def install(self, *args):
283 """ install in the local store the given requirements.
284 Requirements can be defined in the command line or in a conanfile.
285 EX: conan install opencv/2.4.10@lasote/testing
286 """
287 parser = argparse.ArgumentParser(description=self.install.__doc__, prog="conan install",
288 formatter_class=RawTextHelpFormatter)
289 parser.add_argument("reference", nargs='?', default="",
290 help='package recipe reference, '
291 'e.g., MyPackage/1.2@user/channel or ./my_project/')
292 parser.add_argument("--package", "-p", nargs=1, action=Extender,
293 help='Force install specified package ID (ignore settings/options)')
294 parser.add_argument("--all", action='store_true', default=False,
295 help='Install all packages from the specified package recipe')
296 parser.add_argument("--integrity", "-i", action='store_true', default=False,
297 help='Check that the stored recipe or package manifests are correct')
298 parser.add_argument("--file", "-f", help="specify conanfile filename")
299 parser.add_argument("--update", "-u", action='store_true', default=False,
300 help="update with new upstream packages")
301 parser.add_argument("--scope", "-sc", nargs=1, action=Extender,
302 help='Define scopes for packages')
303 parser.add_argument("--generator", "-g", nargs=1, action=Extender,
304 help='Generators to use')
305 self._parse_args(parser)
306
307 args = parser.parse_args(*args)
308
309 current_path = os.getcwd()
310 try:
311 reference = ConanFileReference.loads(args.reference)
312 except:
313 reference = os.path.normpath(os.path.join(current_path, args.reference))
314
315 if args.all or args.package: # Install packages without settings (fixed ids or all)
316 if args.all:
317 args.package = []
318 if not args.reference or not isinstance(reference, ConanFileReference):
319 raise ConanException("Invalid package recipe reference. "
320 "e.g., MyPackage/1.2@user/channel")
321 self._manager.download(reference, args.package, remote=args.remote)
322 else: # Classic install, package chosen with settings and options
323 # Get False or a list of patterns to check
324 args.build = self._get_build_sources_parameter(args.build)
325 options = self._get_tuples_list_from_extender_arg(args.options)
326 settings = self._get_tuples_list_from_extender_arg(args.settings)
327 scopes = Scopes.from_list(args.scope) if args.scope else None
328 self._manager.install(reference=reference,
329 current_path=current_path,
330 remote=args.remote,
331 options=options,
332 settings=settings,
333 build_mode=args.build,
334 filename=args.file,
335 update=args.update,
336 integrity=args.integrity,
337 scopes=scopes,
338 generators=args.generator)
339
340 def info(self, *args):
341 """ Prints information about the requirements.
342 Requirements can be defined in the command line or in a conanfile.
343 EX: conan info opencv/2.4.10@lasote/testing
344 """
345 parser = argparse.ArgumentParser(description=self.info.__doc__, prog="conan info",
346 formatter_class=RawTextHelpFormatter)
347 parser.add_argument("reference", nargs='?', default="",
348 help='reference name or path to conanfile file, '
349 'e.g., MyPackage/1.2@user/channel or ./my_project/')
350 parser.add_argument("--file", "-f", help="specify conanfile filename")
351 parser.add_argument("-r", "--remote", help='look for in the remote storage')
352 parser.add_argument("--options", "-o",
353 help='load options to build the package, e.g., -o with_qt=true',
354 nargs=1, action=Extender)
355 parser.add_argument("--settings", "-s",
356 help='load settings to build the package, -s compiler:gcc',
357 nargs=1, action=Extender)
358 parser.add_argument("--only", "-n",
359 help='show fields only')
360 parser.add_argument("--integrity", "-i", action='store_true', default=False,
361 help='Check that the stored recipe or package manifests are correct')
362 parser.add_argument("--update", "-u", action='store_true', default=False,
363 help="check updates exist from upstream remotes")
364 parser.add_argument("--build_order", "-bo",
365 help='given a modified reference, return ordered list to build (CI)',
366 nargs=1, action=Extender)
367 parser.add_argument("--scope", "-sc", nargs=1, action=Extender,
368 help='Define scopes for packages')
369 args = parser.parse_args(*args)
370
371 options = self._get_tuples_list_from_extender_arg(args.options)
372 settings = self._get_tuples_list_from_extender_arg(args.settings)
373 current_path = os.getcwd()
374 try:
375 reference = ConanFileReference.loads(args.reference)
376 except:
377 reference = os.path.normpath(os.path.join(current_path, args.reference))
378 scopes = Scopes.from_list(args.scope) if args.scope else None
379 self._manager.info(reference=reference,
380 current_path=current_path,
381 remote=args.remote,
382 options=options,
383 settings=settings,
384 info=args.only or True,
385 check_updates=args.update,
386 integrity=args.integrity,
387 filename=args.file,
388 build_order=args.build_order,
389 scopes=scopes)
390
391 def build(self, *args):
392 """ calls your project conanfile.py "build" method.
393 EX: conan build ./my_project
394 Intended for package creators, requires a conanfile.py.
395 """
396 parser = argparse.ArgumentParser(description=self.build.__doc__, prog="conan build")
397 parser.add_argument("path", nargs="?",
398 help='path to user conanfile.py, e.g., conan build .',
399 default="")
400 parser.add_argument("--file", "-f", help="specify conanfile filename")
401 args = parser.parse_args(*args)
402 current_path = os.getcwd()
403 if args.path:
404 root_path = os.path.abspath(args.path)
405 else:
406 root_path = current_path
407 self._manager.build(root_path, current_path, filename=args.file)
408
409 def package(self, *args):
410 """ calls your conanfile.py "package" method for a specific package or
411 regenerates the existing package's manifest.
412 Intended for package creators, for regenerating a package without
413 recompiling the source.
414 e.g. conan package MyPackage/1.2@user/channel 9cf83afd07b678da9c1645f605875400847ff3
415 """
416 parser = argparse.ArgumentParser(description=self.package.__doc__, prog="conan package")
417 parser.add_argument("reference", help='package recipe reference name. '
418 'e.g., MyPackage/1.2@user/channel')
419 parser.add_argument("package", nargs="?", default="",
420 help='Package ID to regenerate. e.g., '
421 '9cf83afd07b678d38a9c1645f605875400847ff3')
422 parser.add_argument("-o", "--only-manifest", default=False, action='store_true',
423 help='Just regenerate the manifest for the existing package. '
424 'If True conan won\'t call your conanfile\'s package method.')
425 parser.add_argument("--all", action='store_true',
426 default=False, help='Package all packages from specified reference')
427
428 args = parser.parse_args(*args)
429
430 try:
431 reference = ConanFileReference.loads(args.reference)
432 except:
433 raise ConanException("Invalid package recipe reference. "
434 "e.g., MyPackage/1.2@user/channel")
435
436 if not args.all and not args.package:
437 raise ConanException("'conan package': Please specify --all or a package ID")
438
439 self._manager.package(reference, args.package, args.only_manifest, args.all)
440
441 def export(self, *args):
442 """ copies the package recipe (conanfile.py and associated files) to your local store,
443 where it can be shared and reused in other projects.
444 From that store, it can be uploaded to any remote with the "upload" command.
445 """
446 parser = argparse.ArgumentParser(description=self.export.__doc__, prog="conan export")
447 parser.add_argument("user", help='user_name[/channel]. By default, channel is '
448 '"testing", e.g., phil or phil/stable')
449 parser.add_argument('--path', '-p', default=None,
450 help='Optional. Folder with a %s. Default current directory.'
451 % CONANFILE)
452 parser.add_argument('--keep-source', '-k', default=False, action='store_true',
453 help='Optional. Do not remove the source folder in local store. '
454 'Use for testing purposes only')
455 args = parser.parse_args(*args)
456
457 current_path = args.path or os.getcwd()
458 keep_source = args.keep_source
459 self._manager.export(args.user, current_path, keep_source)
460
461 def remove(self, *args):
462 """ Remove any package recipe or package from your local/remote store
463 """
464 parser = argparse.ArgumentParser(description=self.remove.__doc__, prog="conan remove")
465 parser.add_argument('pattern', help='Pattern name, e.g., openssl/*')
466 parser.add_argument('-p', '--packages', const=[], nargs='?',
467 help='By default, remove all the packages or select one, '
468 'specifying the SHA key')
469 parser.add_argument('-b', '--builds', const=[], nargs='?',
470 help='By default, remove all the build folders or select one, '
471 'specifying the SHA key')
472 parser.add_argument('-s', '--src', default=False, action="store_true",
473 help='Remove source folders')
474 parser.add_argument('-f', '--force', default=False,
475 action='store_true', help='Remove without requesting a confirmation')
476 parser.add_argument('-r', '--remote', help='Remote origin')
477 args = parser.parse_args(*args)
478
479 if args.packages:
480 args.packages = args.packages.split(",")
481 if args.builds:
482 args.builds = args.builds.split(",")
483 self._manager.remove(args.pattern, package_ids_filter=args.packages,
484 build_ids=args.builds,
485 src=args.src, force=args.force, remote=args.remote)
486
487 def copy(self, *args):
488 """ Copy package recipe and packages to another user/channel
489 """
490 parser = argparse.ArgumentParser(description=self.copy.__doc__, prog="conan copy")
491 parser.add_argument("reference", default="",
492 help='package recipe reference, '
493 'e.g., MyPackage/1.2@user/channel')
494 parser.add_argument("user_channel", default="",
495 help='Destination user/channel, '
496 'e.g., lasote/testing')
497 parser.add_argument("--package", "-p", nargs=1, action=Extender,
498 help='copy specified package ID')
499 parser.add_argument("--all", action='store_true',
500 default=False,
501 help='Copy all packages from the specified package recipe')
502 parser.add_argument("--force", action='store_true',
503 default=False,
504 help='Override destination packages and the package recipe')
505 args = parser.parse_args(*args)
506
507 reference = ConanFileReference.loads(args.reference)
508 new_ref = ConanFileReference.loads("%s/%s@%s" % (reference.name,
509 reference.version,
510 args.user_channel))
511 if args.all:
512 args.package = []
513 self._manager.copy(reference, args.package, new_ref.user, new_ref.channel, args.force)
514
515 def user(self, *parameters):
516 """ shows or change the current user """
517 parser = argparse.ArgumentParser(description=self.user.__doc__, prog="conan user")
518 parser.add_argument("name", nargs='?', default=None,
519 help='Username you want to use. '
520 'If no name is provided it will show the current user.')
521 parser.add_argument("-p", "--password", help='User password. Use double quotes '
522 'if the password contains spaces, and escape any existing quotes')
523 parser.add_argument("--remote", "-r", help='look for in the remote storage')
524 parser.add_argument('-c', '--clean', default=False,
525 action='store_true', help='Remove user and tokens for all remotes')
526 args = parser.parse_args(*parameters) # To enable -h
527
528 if args.clean:
529 localdb = LocalDB(self._conan_paths.localdb)
530 localdb.init(clean=True)
531 self._user_io.out.success("Deleted user data")
532 return
533 self._manager.user(args.remote, args.name, args.password)
534
535 def search(self, *args):
536 """ show local/remote packages
537 """
538 parser = argparse.ArgumentParser(description=self.search.__doc__, prog="conan search")
539 parser.add_argument('pattern', nargs='?', help='Pattern name, e.g., openssl/*')
540 parser.add_argument('--case-sensitive', default=False,
541 action='store_true', help='Make a case-sensitive search')
542 parser.add_argument('-r', '--remote', help='Remote origin')
543 parser.add_argument('-v', '--verbose', default=False,
544 action='store_true', help='Show packages')
545 parser.add_argument('-x', '--extra-verbose', default=False,
546 action='store_true', help='Show packages options and settings')
547 parser.add_argument('-p', '--package', help='Package ID pattern. EX: 23*', default=None)
548 args = parser.parse_args(*args)
549
550 self._manager.search(args.pattern,
551 args.remote,
552 ignorecase=not args.case_sensitive,
553 verbose=args.verbose,
554 extra_verbose=args.extra_verbose,
555 package_pattern=args.package)
556
557 def upload(self, *args):
558 """ uploads a conanfile or binary packages from the local store to any remote.
559 To upload something, it should be "exported" first.
560 """
561 parser = argparse.ArgumentParser(description=self.upload.__doc__,
562 prog="conan upload")
563 parser.add_argument("reference",
564 help='package recipe reference, e.g., MyPackage/1.2@user/channel')
565 # TODO: packageparser.add_argument('package', help='user name')
566 parser.add_argument("--package", "-p", default=None, help='package ID to upload')
567 parser.add_argument("--remote", "-r", help='upload to this specific remote')
568 parser.add_argument("--all", action='store_true',
569 default=False, help='Upload both package recipe and packages')
570 parser.add_argument("--force", action='store_true',
571 default=False,
572 help='Do not check conan recipe date, override remote with local')
573
574 args = parser.parse_args(*args)
575
576 conan_ref = ConanFileReference.loads(args.reference)
577 package_id = args.package
578
579 if not conan_ref and not package_id:
580 raise ConanException("Enter conan reference or package id")
581
582 self._manager.upload(conan_ref, package_id,
583 args.remote, all_packages=args.all, force=args.force)
584
585 def remote(self, *args):
586 """ manage remotes
587 """
588 parser = argparse.ArgumentParser(description=self.remote.__doc__, prog="conan remote")
589 subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
590
591 # create the parsers for the subcommands
592 subparsers.add_parser('list', help='list current remotes')
593 parser_add = subparsers.add_parser('add', help='add a remote')
594 parser_add.add_argument('remote', help='name of the remote')
595 parser_add.add_argument('url', help='url of the remote')
596 parser_rm = subparsers.add_parser('remove', help='remove a remote')
597 parser_rm.add_argument('remote', help='name of the remote')
598 parser_upd = subparsers.add_parser('update', help='update the remote url')
599 parser_upd.add_argument('remote', help='name of the remote')
600 parser_upd.add_argument('url', help='url')
601 subparsers.add_parser('list_ref',
602 help='list the package recipes and their associated remotes')
603 parser_padd = subparsers.add_parser('add_ref',
604 help="associate a recipe's reference to a remote")
605 parser_padd.add_argument('reference', help='package recipe reference')
606 parser_padd.add_argument('remote', help='name of the remote')
607 parser_prm = subparsers.add_parser('remove_ref',
608 help="dissociate a recipe's reference and its remote")
609 parser_prm.add_argument('reference', help='package recipe reference')
610 parser_pupd = subparsers.add_parser('update_ref', help="update the remote associated "
611 "with a package recipe")
612 parser_pupd.add_argument('reference', help='package recipe reference')
613 parser_pupd.add_argument('remote', help='name of the remote')
614 args = parser.parse_args(*args)
615
616 registry = RemoteRegistry(self._conan_paths.registry, self._user_io.out)
617 if args.subcommand == "list":
618 for r in registry.remotes:
619 self._user_io.out.info("%s: %s" % (r.name, r.url))
620 elif args.subcommand == "add":
621 registry.add(args.remote, args.url)
622 elif args.subcommand == "remove":
623 registry.remove(args.remote)
624 elif args.subcommand == "update":
625 registry.update(args.remote, args.url)
626 elif args.subcommand == "list_ref":
627 for ref, remote in registry.refs.items():
628 self._user_io.out.info("%s: %s" % (ref, remote))
629 elif args.subcommand == "add_ref":
630 registry.add_ref(args.reference, args.remote)
631 elif args.subcommand == "remove_ref":
632 registry.remove_ref(args.reference)
633 elif args.subcommand == "update_ref":
634 registry.update_ref(args.reference, args.remote)
635
636 def _show_help(self):
637 """ prints a summary of all commands
638 """
639 self._user_io.out.writeln('Conan commands. Type $conan "command" -h for help',
640 Color.BRIGHT_YELLOW)
641 commands = self._commands()
642 for name in sorted(self._commands()):
643 self._user_io.out.write(' %-10s' % name, Color.GREEN)
644 self._user_io.out.writeln(commands[name].__doc__.split('\n', 1)[0])
645
646 def _commands(self):
647 """ returns a list of available commands
648 """
649 result = {}
650 for m in inspect.getmembers(self, predicate=inspect.ismethod):
651 method_name = m[0]
652 if not method_name.startswith('_'):
653 method = m[1]
654 if method.__doc__ and not method.__doc__.startswith('HIDDEN'):
655 result[method_name] = method
656 return result
657
658 def run(self, *args):
659 """HIDDEN: entry point for executing commands, dispatcher to class
660 methods
661 """
662 errors = False
663 try:
664 try:
665 command = args[0][0]
666 commands = self._commands()
667 method = commands[command]
668 except KeyError as exc:
669 if command in ["-v", "--version"]:
670 self._user_io.out.success("Conan version %s" % CLIENT_VERSION)
671 return False
672 self._show_help()
673 if command in ["-h", "--help"]:
674 return False
675 raise ConanException("Unknown command %s" % str(exc))
676 except IndexError as exc: # No parameters
677 self._show_help()
678 return False
679 method(args[0][1:])
680 except (KeyboardInterrupt, SystemExit) as exc:
681 logger.error(exc)
682 errors = True
683 except ConanException as exc:
684 logger.error(exc)
685 # import traceback
686 # logger.debug(traceback.format_exc())
687 errors = True
688 self._user_io.out.error(str(exc))
689
690 return errors
691
692
693 def migrate_and_get_paths(base_folder, out, manager, storage_folder=None):
694 # Init paths
695 paths = ConanPaths(base_folder, storage_folder, out)
696
697 # Migration system
698 migrator = ClientMigrator(paths, Version(CLIENT_VERSION), out, manager)
699 migrator.migrate()
700
701 # Init again paths, migration could change config
702 paths = ConanPaths(base_folder, storage_folder, out)
703 return paths
704
705
706 def get_command():
707
708 def instance_remote_manager(paths):
709 requester = requests.Session()
710 requester.proxies = paths.conan_config.proxies
711 # Verify client version against remotes
712 version_checker_requester = VersionCheckerRequester(requester, Version(CLIENT_VERSION),
713 Version(MIN_SERVER_COMPATIBLE_VERSION),
714 out)
715 # To handle remote connections
716 rest_api_client = RestApiClient(out, requester=version_checker_requester)
717 # To store user and token
718 localdb = LocalDB(paths.localdb)
719 # Wraps RestApiClient to add authentication support (same interface)
720 auth_manager = ConanApiAuthManager(rest_api_client, user_io, localdb)
721 # Handle remote connections
722 remote_manager = RemoteManager(paths, auth_manager, out)
723 return remote_manager
724
725 if hasattr(sys.stdout, "isatty") and sys.stdout.isatty():
726 import colorama
727 colorama.init()
728 color = True
729 else:
730 color = False
731 out = ConanOutput(sys.stdout, color)
732 user_io = UserIO(out=out)
733
734 user_folder = os.getenv("CONAN_USER_HOME", conan_expand_user("~"))
735
736 try:
737 # To capture exceptions in conan.conf parsing
738 paths = ConanPaths(user_folder, None, out)
739 # obtain a temp ConanManager instance to execute the migrations
740 remote_manager = instance_remote_manager(paths)
741 manager = ConanManager(paths, user_io, ConanRunner(), remote_manager)
742 paths = migrate_and_get_paths(user_folder, out, manager)
743 except Exception as e:
744 out.error(str(e))
745 sys.exit(True)
746
747 # Get the new command instance after migrations have been done
748 manager = instance_remote_manager(paths)
749 command = Command(paths, user_io, ConanRunner(), manager)
750 return command
751
752
753 def main(args):
754 """ main entry point of the conan application, using a Command to
755 parse parameters
756 """
757 command = get_command()
758 current_dir = os.getcwd()
759 try:
760 import signal
761
762 def sigint_handler(signal, frame): # @UnusedVariable
763 print('You pressed Ctrl+C!')
764 sys.exit(0)
765
766 signal.signal(signal.SIGINT, sigint_handler)
767 error = command.run(args)
768 finally:
769 os.chdir(current_dir)
770 sys.exit(error)
771
[end of conans/client/command.py]
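Note: the `Extender` action near the top of `conans/client/command.py` is what turns repeated `-o`/`-s`/`--build` flags into lists. The following is a minimal, self-contained sketch of that accumulation behaviour (simplified: it skips the `argparse.SUPPRESS` handling and uses made-up option values, so it is an illustration rather than the exact class above):

```
import argparse

class Extender(argparse.Action):
    """Accumulate every occurrence of a flag into one list on the namespace."""
    def __call__(self, parser, namespace, values, option_string=None):
        current = list(getattr(namespace, self.dest, None) or [])  # copy: never mutate the default
        if isinstance(values, list):
            current.extend(values)
        else:
            current.append(values)
        setattr(namespace, self.dest, current)

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--options", nargs=1, action=Extender, default=[])
parser.add_argument("-s", "--settings", nargs=1, action=Extender, default=[])
ns = parser.parse_args("-o qt:value -o mode:2 -s cucumber:true".split())
print(ns.options)   # ['qt:value', 'mode:2']
print(ns.settings)  # ['cucumber:true']
```

Copying the list before extending avoids the shared-default problem that the `pip/cmdoptions.py` docstring further below also warns about.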
[start of conans/server/rest/bottle_plugins/return_handler.py]
1 from bottle import HTTPResponse
2 from conans.errors import ConanException
3 from conans.util.log import logger
4 import traceback
5
6
7 class ReturnHandlerPlugin(object):
8 ''' The ReturnHandlerPlugin plugin unifies REST returns and exception management'''
9
10 name = 'ReturnHandlerPlugin'
11 api = 2
12
13 def __init__(self, exception_mapping):
14 self.exception_mapping = exception_mapping
15
16 def setup(self, app):
17 ''' Make sure that other installed plugins don't affect the same
18 keyword argument.'''
19 for other in app.plugins:
20 if not isinstance(other, ReturnHandlerPlugin):
21 continue
22
23 def apply(self, callback, _):
24 '''Apply plugin'''
25 def wrapper(*args, **kwargs):
26 '''Capture possible exceptions to manage the return'''
27 try:
28 # The encoding from browsers is utf-8, so we assume it
29 for key, value in kwargs.items():
30 if isinstance(value, str):
31 kwargs[key] = value
32 return callback(*args, **kwargs) # kwargs has :xxx variables from url
33 except HTTPResponse:
34 raise
35 except ConanException as excep:
36 return get_response_from_exception(excep, self.exception_mapping)
37 except Exception as e:
38 logger.error(e)
39 logger.error(traceback.print_exc())
40 return get_response_from_exception(e, self.exception_mapping)
41
42 return wrapper
43
44
45 def get_response_from_exception(excep, exception_mapping):
46 status = exception_mapping.get(excep.__class__, None)
47 if status is None:
48 status = 500
49 ret = HTTPResponse(status=status, body=str(excep))
50 ret.add_header("Content-Type", "text/plain")
51 return ret
52
[end of conans/server/rest/bottle_plugins/return_handler.py]
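The plugin above funnels every error through `get_response_from_exception`, which looks the exception class up in the `exception_mapping` passed to the constructor and falls back to HTTP 500. A tiny sketch with hypothetical exception classes and status codes (the real mapping is supplied by the server when the plugin is installed) illustrates the lookup rule:

```
class NotFoundException(Exception):
    pass

class ForbiddenException(Exception):
    pass

# Hypothetical mapping from exception class to HTTP status code.
exception_mapping = {NotFoundException: 404, ForbiddenException: 403}

def status_for(exc):
    # Same fallback rule as get_response_from_exception: unknown errors -> 500
    return exception_mapping.get(exc.__class__, 500)

print(status_for(NotFoundException("no such recipe")))  # 404
print(status_for(RuntimeError("unexpected")))           # 500
```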
[start of pyinstaller.py]
1 from __future__ import print_function
2 import os
3 import platform
4 import subprocess
5 import shutil
6 from distutils import dir_util
7
8
9 def _install_pyinstaller(pyinstaller_path):
10 # try to install pyinstaller if not installed
11 if not os.path.exists(pyinstaller_path):
12 subprocess.call('git clone https://github.com/pyinstaller/pyinstaller.git',
13 cwd=os.path.curdir, shell=True)
14 subprocess.call('git checkout v3.1.1', cwd=pyinstaller_path, shell=True)
15
16
17 def _run_bin(pyinstaller_path):
18 # run the binary to test if working
19 conan_bin = os.path.join(pyinstaller_path, 'conan', 'dist', 'conan', 'conan')
20 if platform.system() == 'Windows':
21 conan_bin += '.exe'
22 retcode = os.system(conan_bin)
23 if retcode != 0:
24 raise Exception("Binary not working")
25
26
27 def pyinstall(source_folder):
28 pyinstaller_path = os.path.join(os.path.curdir, 'pyinstaller')
29 _install_pyinstaller(pyinstaller_path)
30
31 try:
32 shutil.rmtree(os.path.join(pyinstaller_path, 'conan'))
33 except Exception as e:
34 print("Unable to remove old folder", e)
35 try:
36 shutil.rmtree(os.path.join(pyinstaller_path, 'conan_server'))
37 except Exception as e:
38 print("Unable to remove old server folder", e)
39
40 conan_path = os.path.join(source_folder, 'conans', 'conan.py')
41 conan_server_path = os.path.join(source_folder, 'conans', 'conan_server.py')
42 subprocess.call('python pyinstaller.py -y -p %s --console %s' % (source_folder, conan_path),
43 cwd=pyinstaller_path, shell=True)
44 _run_bin(pyinstaller_path)
45
46 subprocess.call('python pyinstaller.py -y -p %s --console %s'
47 % (source_folder, conan_server_path),
48 cwd=pyinstaller_path, shell=True)
49
50 conan_bin = os.path.join(pyinstaller_path, 'conan', 'dist', 'conan')
51 conan_server_folder = os.path.join(pyinstaller_path, 'conan_server', 'dist', 'conan_server')
52 dir_util.copy_tree(conan_server_folder, conan_bin)
53 _run_bin(pyinstaller_path)
54
55 return os.path.abspath(os.path.join(pyinstaller_path, 'conan', 'dist', 'conan'))
56
57
58 if __name__ == "__main__":
59 source_folder = os.path.abspath(os.path.dirname(os.path.abspath(__file__)))
60 output_folder = pyinstall(source_folder)
61 print("\n**************Conan binaries created!******************\n \
62 \nAppend this folder to your system PATH: '%s'\nFeel free to move the whole folder to another location." % output_folder)
63
64
[end of pyinstaller.py]
[start of setup.py]
1 """A setuptools based setup module.
2 See:
3 https://packaging.python.org/en/latest/distributing.html
4 https://github.com/pypa/sampleproject
5 """
6
7 # Always prefer setuptools over distutils
8 from setuptools import setup, find_packages
9 # To use a consistent encoding
10 from codecs import open
11 from os import path
12 import os
13 import re
14 import platform
15
16
17 here = path.abspath(path.dirname(__file__))
18
19
20 def get_requires(filename):
21 requirements = []
22 with open(filename, "rt") as req_file:
23 for line in req_file.read().splitlines():
24 if not line.strip().startswith("#"):
25 requirements.append(line)
26 return requirements
27
28 project_requirements = get_requires("conans/requirements.txt")
29 if platform.system() == "Darwin":
30 project_requirements.extend(get_requires("conans/requirements_osx.txt"))
31 project_requirements.extend(get_requires("conans/requirements_server.txt"))
32 dev_requirements = get_requires("conans/requirements_dev.txt")
33
34
35 def load_version():
36 '''Loads the package version from conans/__init__.py'''
37 filename = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)),
38 "conans", "__init__.py"))
39 with open(filename, "rt") as version_file:
40 conan_init = version_file.read()
41 version = re.search("__version__ = '([0-9a-z.]+)'", conan_init).group(1)
42 return version
43
44
45 # def generate_long_description_file():
46 # import pypandoc
47 #
48 # output = pypandoc.convert('README.md', 'rst')
49 # return output
50
51 setup(
52 name='conan',
53 # Versions should comply with PEP440. For a discussion on single-sourcing
54 # the version across setup.py and the project code, see
55 # https://packaging.python.org/en/latest/single_source_version.html
56 version=load_version(), # + ".rc5",
57
58 description='Conan C/C++ package manager',
59 # long_description="An open source, decentralized package manager, to automate building and sharing of packages",
60 # long_description=generate_long_description_file(),
61
62 # The project's main homepage.
63 url='https://conan.io',
64
65 # Author details
66 author='Luis Martinez de Bartolome',
67 author_email='[email protected]',
68
69 # Choose your license
70 license='MIT',
71
72 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
73 classifiers=[
74 'Development Status :: 4 - Beta',
75 'Intended Audience :: Developers',
76 'Topic :: Software Development :: Build Tools',
77 'License :: OSI Approved :: MIT License',
78 'Programming Language :: Python :: 2',
79 'Programming Language :: Python :: 2.7',
80 ],
81
82 # What does your project relate to?
83 keywords=['C/C++', 'package', 'libraries', 'developer', 'manager',
84 'dependency', 'tool', 'c', 'c++', 'cpp'],
85
86 # You can just specify the packages manually here if your project is
87 # simple. Or you can use find_packages().
88 packages=find_packages(),
89
90 # Alternatively, if you want to distribute just a my_module.py, uncomment
91 # this:
92 # py_modules=["my_module"],
93
94 # List run-time dependencies here. These will be installed by pip when
95 # your project is installed. For an analysis of "install_requires" vs pip's
96 # requirements files see:
97 # https://packaging.python.org/en/latest/requirements.html
98 install_requires=project_requirements,
99
100 # List additional groups of dependencies here (e.g. development
101 # dependencies). You can install these using the following syntax,
102 # for example:
103 # $ pip install -e .[dev,test]
104 extras_require={
105 'dev': dev_requirements,
106 'test': dev_requirements,
107 },
108
109 # If there are data files included in your packages that need to be
110 # installed, specify them here. If using Python 2.6 or less, then these
111 # have to be included in MANIFEST.in as well.
112 package_data={
113 'conans': ['*.txt'],
114 },
115
116 # Although 'package_data' is the preferred approach, in some case you may
117 # need to place data files outside of your packages. See:
118 # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
119 # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
120 # data_files=[('my_data', ['data/data_file'])],
121
122 # To provide executable scripts, use entry points in preference to the
123 # "scripts" keyword. Entry points provide cross-platform support and allow
124 # pip to create the appropriate form of executable for the target platform.
125 entry_points={
126 'console_scripts': [
127 'conan=conans.conan:run',
128 'conan_server=conans.conan_server:run',
129 ],
130 },
131 )
132
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conan-io/conan
|
d2eb169a66586d1d8727ad37b720c77f11790bf5
|
conan install -u traceback
When running `conan install -u` before any previous `conan install` has been run (so that, e.g., no conanbuildinfo.cmake exists yet), I get the following traceback:

|
I can't reproduce the error. Do you have a conanfile.py? conanfile.txt? Windows right?
It's Windows, yes. Conan version is 0.10.1. I have a simple repo with a simple conanfile.txt like this:
```
[requires]
googlemock/1.7.0@image/stable
Qt/5.7.0@image/stable
[generators]
cmake
[imports]
bin, *.dll -> ./bin # Copies all dll files from the package "bin" folder to my project "bin" folder
```
When I create a new folder and run `conan install -u {SOURCE_DIR}` I always get the shown traceback.
Actually, the Qt / short_paths.conf might be the culprit. I have tried with a different repository that does not have a Qt dependency (and thus, does not use short_paths.conf) and it is working properly.
Have you modified the short_paths.conf? Could you paste here?
There is one entry in my short_paths.conf for the Qt package:
`Qt/5.7.0@image/stable: C:/Qt/5.7.0/image/stable`
|
2016-07-29T13:42:46Z
|
<patch>
diff --git a/conans/client/proxy.py b/conans/client/proxy.py
--- a/conans/client/proxy.py
+++ b/conans/client/proxy.py
@@ -140,12 +140,13 @@ def update_available(self, conan_reference):
if not conan_reference:
return 0
read_manifest, _ = self._paths.conan_manifests(conan_reference)
- try: # get_conan_digest can fail, not in server
- upstream_manifest = self.get_conan_digest(conan_reference)
- if upstream_manifest.file_sums != read_manifest.file_sums:
- return 1 if upstream_manifest.time > read_manifest.time else -1
- except ConanException:
- pass
+ if read_manifest:
+ try: # get_conan_digest can fail, not in server
+ upstream_manifest = self.get_conan_digest(conan_reference)
+ if upstream_manifest.file_sums != read_manifest.file_sums:
+ return 1 if upstream_manifest.time > read_manifest.time else -1
+ except ConanException:
+ pass
return 0
</patch>
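The guard added by the patch matters because, before any successful `conan install`, there is no local manifest, so `read_manifest` comes back as `None` and the old code called `.file_sums` on it, producing the reported traceback. A simplified sketch of the guarded check (hypothetical helper names, omitting the `ConanException` handling the real method keeps):

```
def update_available(read_manifest, get_upstream_manifest):
    if read_manifest is None:           # nothing installed locally yet
        return 0                        # "no update information" instead of crashing
    upstream = get_upstream_manifest()  # may fail if the recipe is not on the server
    if upstream.file_sums != read_manifest.file_sums:
        return 1 if upstream.time > read_manifest.time else -1
    return 0
```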
|
[]
|
[]
| |||
pypa__pip-3075
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pip wheel doesn't properly do ABI tag for py2.7
It appears that wheels built with a debug build of Python 2.7 aren't marked in any special way.
Pip would install them on a regular Python 2.7 (if they were on a package index), and imports of their C extensions would then fail.
ref @dstufft wheel caching
</issue>
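For context on what a "properly tagged" wheel would look like here: a CPython 2.7 debug build should produce an ABI tag such as `cp27dmu` rather than a bare `cp27`/`none` tag, so a release interpreter will not pick the wheel up. The following is a rough sketch (not pip's actual implementation) of deriving such a tag from build-time configuration; the config vars may be `None` on some platforms (e.g. Windows), in which case a real implementation needs other fallbacks:

```
import sys
import sysconfig

def guess_cpython2_abi_tag():
    abi = "cp" + "".join(map(str, sys.version_info[:2]))   # e.g. "cp27"
    if sysconfig.get_config_var("Py_DEBUG"):
        abi += "d"                                          # debug build
    if sysconfig.get_config_var("WITH_PYMALLOC"):
        abi += "m"                                          # pymalloc enabled
    if sysconfig.get_config_var("Py_UNICODE_SIZE") == 4:
        abi += "u"                                          # wide unicode (Python 2 only)
    return abi

print(guess_cpython2_abi_tag())   # e.g. "cp27dmu" on a wide-unicode debug build
```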
<code>
[start of README.rst]
1 pip
2 ===
3
4 The `PyPA recommended
5 <https://python-packaging-user-guide.readthedocs.org/en/latest/current.html>`_
6 tool for installing Python packages.
7
8 * `Installation <https://pip.pypa.io/en/stable/installing.html>`_
9 * `Documentation <https://pip.pypa.io/>`_
10 * `Changelog <https://pip.pypa.io/en/stable/news.html>`_
11 * `Github Page <https://github.com/pypa/pip>`_
12 * `Issue Tracking <https://github.com/pypa/pip/issues>`_
13 * `User mailing list <http://groups.google.com/group/python-virtualenv>`_
14 * `Dev mailing list <http://groups.google.com/group/pypa-dev>`_
15 * User IRC: #pypa on Freenode.
16 * Dev IRC: #pypa-dev on Freenode.
17
18
19 .. image:: https://img.shields.io/pypi/v/pip.svg
20 :target: https://pypi.python.org/pypi/pip
21
22 .. image:: https://img.shields.io/travis/pypa/pip/develop.svg
23 :target: http://travis-ci.org/pypa/pip
24
25
26 Code of Conduct
27 ---------------
28
29 Everyone interacting in the pip project's codebases, issue trackers, chat
30 rooms, and mailing lists is expected to follow the `PyPA Code of Conduct`_.
31
32 .. _PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/
33
[end of README.rst]
[start of pip/cmdoptions.py]
1 """
2 shared options and groups
3
4 The principle here is to define options once, but *not* instantiate them
5 globally. One reason being that options with action='append' can carry state
6 between parses. pip parses general options twice internally, and shouldn't
7 pass on state. To be consistent, all options will follow this design.
8
9 """
10 from __future__ import absolute_import
11
12 from functools import partial
13 from optparse import OptionGroup, SUPPRESS_HELP, Option
14 import warnings
15
16 from pip.index import (
17 PyPI, FormatControl, fmt_ctl_handle_mutual_exclude, fmt_ctl_no_binary,
18 fmt_ctl_no_use_wheel)
19 from pip.locations import CA_BUNDLE_PATH, USER_CACHE_DIR, src_prefix
20
21
22 def make_option_group(group, parser):
23 """
24 Return an OptionGroup object
25 group -- assumed to be dict with 'name' and 'options' keys
26 parser -- an optparse Parser
27 """
28 option_group = OptionGroup(parser, group['name'])
29 for option in group['options']:
30 option_group.add_option(option())
31 return option_group
32
33
34 def resolve_wheel_no_use_binary(options):
35 if not options.use_wheel:
36 control = options.format_control
37 fmt_ctl_no_use_wheel(control)
38
39
40 def check_install_build_global(options, check_options=None):
41 """Disable wheels if per-setup.py call options are set.
42
43 :param options: The OptionParser options to update.
44 :param check_options: The options to check, if not supplied defaults to
45 options.
46 """
47 if check_options is None:
48 check_options = options
49
50 def getname(n):
51 return getattr(check_options, n, None)
52 names = ["build_options", "global_options", "install_options"]
53 if any(map(getname, names)):
54 control = options.format_control
55 fmt_ctl_no_binary(control)
56 warnings.warn(
57 'Disabling all use of wheels due to the use of --build-options '
58 '/ --global-options / --install-options.', stacklevel=2)
59
60
61 ###########
62 # options #
63 ###########
64
65 help_ = partial(
66 Option,
67 '-h', '--help',
68 dest='help',
69 action='help',
70 help='Show help.')
71
72 isolated_mode = partial(
73 Option,
74 "--isolated",
75 dest="isolated_mode",
76 action="store_true",
77 default=False,
78 help=(
79 "Run pip in an isolated mode, ignoring environment variables and user "
80 "configuration."
81 ),
82 )
83
84 require_virtualenv = partial(
85 Option,
86 # Run only if inside a virtualenv, bail if not.
87 '--require-virtualenv', '--require-venv',
88 dest='require_venv',
89 action='store_true',
90 default=False,
91 help=SUPPRESS_HELP)
92
93 verbose = partial(
94 Option,
95 '-v', '--verbose',
96 dest='verbose',
97 action='count',
98 default=0,
99 help='Give more output. Option is additive, and can be used up to 3 times.'
100 )
101
102 version = partial(
103 Option,
104 '-V', '--version',
105 dest='version',
106 action='store_true',
107 help='Show version and exit.')
108
109 quiet = partial(
110 Option,
111 '-q', '--quiet',
112 dest='quiet',
113 action='count',
114 default=0,
115 help='Give less output.')
116
117 log = partial(
118 Option,
119 "--log", "--log-file", "--local-log",
120 dest="log",
121 metavar="path",
122 help="Path to a verbose appending log."
123 )
124
125 no_input = partial(
126 Option,
127 # Don't ask for input
128 '--no-input',
129 dest='no_input',
130 action='store_true',
131 default=False,
132 help=SUPPRESS_HELP)
133
134 proxy = partial(
135 Option,
136 '--proxy',
137 dest='proxy',
138 type='str',
139 default='',
140 help="Specify a proxy in the form [user:passwd@]proxy.server:port.")
141
142 retries = partial(
143 Option,
144 '--retries',
145 dest='retries',
146 type='int',
147 default=5,
148 help="Maximum number of retries each connection should attempt "
149 "(default %default times).")
150
151 timeout = partial(
152 Option,
153 '--timeout', '--default-timeout',
154 metavar='sec',
155 dest='timeout',
156 type='float',
157 default=15,
158 help='Set the socket timeout (default %default seconds).')
159
160 default_vcs = partial(
161 Option,
162 # The default version control system for editables, e.g. 'svn'
163 '--default-vcs',
164 dest='default_vcs',
165 type='str',
166 default='',
167 help=SUPPRESS_HELP)
168
169 skip_requirements_regex = partial(
170 Option,
171 # A regex to be used to skip requirements
172 '--skip-requirements-regex',
173 dest='skip_requirements_regex',
174 type='str',
175 default='',
176 help=SUPPRESS_HELP)
177
178
179 def exists_action():
180 return Option(
181 # Option when path already exist
182 '--exists-action',
183 dest='exists_action',
184 type='choice',
185 choices=['s', 'i', 'w', 'b'],
186 default=[],
187 action='append',
188 metavar='action',
189 help="Default action when a path already exists: "
190 "(s)witch, (i)gnore, (w)ipe, (b)ackup.")
191
192
193 cert = partial(
194 Option,
195 '--cert',
196 dest='cert',
197 type='str',
198 default=CA_BUNDLE_PATH,
199 metavar='path',
200 help="Path to alternate CA bundle.")
201
202 client_cert = partial(
203 Option,
204 '--client-cert',
205 dest='client_cert',
206 type='str',
207 default=None,
208 metavar='path',
209 help="Path to SSL client certificate, a single file containing the "
210 "private key and the certificate in PEM format.")
211
212 index_url = partial(
213 Option,
214 '-i', '--index-url', '--pypi-url',
215 dest='index_url',
216 metavar='URL',
217 default=PyPI.simple_url,
218 help='Base URL of Python Package Index (default %default).')
219
220
221 def extra_index_url():
222 return Option(
223 '--extra-index-url',
224 dest='extra_index_urls',
225 metavar='URL',
226 action='append',
227 default=[],
228 help='Extra URLs of package indexes to use in addition to --index-url.'
229 )
230
231
232 no_index = partial(
233 Option,
234 '--no-index',
235 dest='no_index',
236 action='store_true',
237 default=False,
238 help='Ignore package index (only looking at --find-links URLs instead).')
239
240
241 def find_links():
242 return Option(
243 '-f', '--find-links',
244 dest='find_links',
245 action='append',
246 default=[],
247 metavar='url',
248 help="If a url or path to an html file, then parse for links to "
249 "archives. If a local path or file:// url that's a directory,"
250 "then look for archives in the directory listing.")
251
252
253 def allow_external():
254 return Option(
255 "--allow-external",
256 dest="allow_external",
257 action="append",
258 default=[],
259 metavar="PACKAGE",
260 help=SUPPRESS_HELP,
261 )
262
263
264 allow_all_external = partial(
265 Option,
266 "--allow-all-external",
267 dest="allow_all_external",
268 action="store_true",
269 default=False,
270 help=SUPPRESS_HELP,
271 )
272
273
274 def trusted_host():
275 return Option(
276 "--trusted-host",
277 dest="trusted_hosts",
278 action="append",
279 metavar="HOSTNAME",
280 default=[],
281 help="Mark this host as trusted, even though it does not have valid "
282 "or any HTTPS.",
283 )
284
285
286 # Remove after 7.0
287 no_allow_external = partial(
288 Option,
289 "--no-allow-external",
290 dest="allow_all_external",
291 action="store_false",
292 default=False,
293 help=SUPPRESS_HELP,
294 )
295
296
297 # Remove --allow-insecure after 7.0
298 def allow_unsafe():
299 return Option(
300 "--allow-unverified", "--allow-insecure",
301 dest="allow_unverified",
302 action="append",
303 default=[],
304 metavar="PACKAGE",
305 help=SUPPRESS_HELP,
306 )
307
308 # Remove after 7.0
309 no_allow_unsafe = partial(
310 Option,
311 "--no-allow-insecure",
312 dest="allow_all_insecure",
313 action="store_false",
314 default=False,
315 help=SUPPRESS_HELP
316 )
317
318 # Remove after 1.5
319 process_dependency_links = partial(
320 Option,
321 "--process-dependency-links",
322 dest="process_dependency_links",
323 action="store_true",
324 default=False,
325 help="Enable the processing of dependency links.",
326 )
327
328
329 def constraints():
330 return Option(
331 '-c', '--constraint',
332 dest='constraints',
333 action='append',
334 default=[],
335 metavar='file',
336 help='Constrain versions using the given constraints file. '
337 'This option can be used multiple times.')
338
339
340 def requirements():
341 return Option(
342 '-r', '--requirement',
343 dest='requirements',
344 action='append',
345 default=[],
346 metavar='file',
347 help='Install from the given requirements file. '
348 'This option can be used multiple times.')
349
350
351 def editable():
352 return Option(
353 '-e', '--editable',
354 dest='editables',
355 action='append',
356 default=[],
357 metavar='path/url',
358 help=('Install a project in editable mode (i.e. setuptools '
359 '"develop mode") from a local project path or a VCS url.'),
360 )
361
362 src = partial(
363 Option,
364 '--src', '--source', '--source-dir', '--source-directory',
365 dest='src_dir',
366 metavar='dir',
367 default=src_prefix,
368 help='Directory to check out editable projects into. '
369 'The default in a virtualenv is "<venv path>/src". '
370 'The default for global installs is "<current dir>/src".'
371 )
372
373 # XXX: deprecated, remove in 9.0
374 use_wheel = partial(
375 Option,
376 '--use-wheel',
377 dest='use_wheel',
378 action='store_true',
379 default=True,
380 help=SUPPRESS_HELP,
381 )
382
383 # XXX: deprecated, remove in 9.0
384 no_use_wheel = partial(
385 Option,
386 '--no-use-wheel',
387 dest='use_wheel',
388 action='store_false',
389 default=True,
390 help=('Do not find and prefer wheel archives when searching indexes and '
391 'find-links locations. DEPRECATED in favour of --no-binary.'),
392 )
393
394
395 def _get_format_control(values, option):
396 """Get a format_control object."""
397 return getattr(values, option.dest)
398
399
400 def _handle_no_binary(option, opt_str, value, parser):
401 existing = getattr(parser.values, option.dest)
402 fmt_ctl_handle_mutual_exclude(
403 value, existing.no_binary, existing.only_binary)
404
405
406 def _handle_only_binary(option, opt_str, value, parser):
407 existing = getattr(parser.values, option.dest)
408 fmt_ctl_handle_mutual_exclude(
409 value, existing.only_binary, existing.no_binary)
410
411
412 def no_binary():
413 return Option(
414 "--no-binary", dest="format_control", action="callback",
415 callback=_handle_no_binary, type="str",
416 default=FormatControl(set(), set()),
417 help="Do not use binary packages. Can be supplied multiple times, and "
418 "each time adds to the existing value. Accepts either :all: to "
419 "disable all binary packages, :none: to empty the set, or one or "
420 "more package names with commas between them. Note that some "
421 "packages are tricky to compile and may fail to install when "
422 "this option is used on them.")
423
424
425 def only_binary():
426 return Option(
427 "--only-binary", dest="format_control", action="callback",
428 callback=_handle_only_binary, type="str",
429 default=FormatControl(set(), set()),
430 help="Do not use source packages. Can be supplied multiple times, and "
431 "each time adds to the existing value. Accepts either :all: to "
432 "disable all source packages, :none: to empty the set, or one or "
433 "more package names with commas between them. Packages without "
434 "binary distributions will fail to install when this option is "
435 "used on them.")
436
437
438 cache_dir = partial(
439 Option,
440 "--cache-dir",
441 dest="cache_dir",
442 default=USER_CACHE_DIR,
443 metavar="dir",
444 help="Store the cache data in <dir>."
445 )
446
447 no_cache = partial(
448 Option,
449 "--no-cache-dir",
450 dest="cache_dir",
451 action="store_false",
452 help="Disable the cache.",
453 )
454
455 no_deps = partial(
456 Option,
457 '--no-deps', '--no-dependencies',
458 dest='ignore_dependencies',
459 action='store_true',
460 default=False,
461 help="Don't install package dependencies.")
462
463 build_dir = partial(
464 Option,
465 '-b', '--build', '--build-dir', '--build-directory',
466 dest='build_dir',
467 metavar='dir',
468 help='Directory to unpack packages into and build in.'
469 )
470
471 install_options = partial(
472 Option,
473 '--install-option',
474 dest='install_options',
475 action='append',
476 metavar='options',
477 help="Extra arguments to be supplied to the setup.py install "
478 "command (use like --install-option=\"--install-scripts=/usr/local/"
479 "bin\"). Use multiple --install-option options to pass multiple "
480 "options to setup.py install. If you are using an option with a "
481 "directory path, be sure to use absolute path.")
482
483 global_options = partial(
484 Option,
485 '--global-option',
486 dest='global_options',
487 action='append',
488 metavar='options',
489 help="Extra global options to be supplied to the setup.py "
490 "call before the install command.")
491
492 no_clean = partial(
493 Option,
494 '--no-clean',
495 action='store_true',
496 default=False,
497 help="Don't clean up build directories.")
498
499 pre = partial(
500 Option,
501 '--pre',
502 action='store_true',
503 default=False,
504 help="Include pre-release and development versions. By default, "
505 "pip only finds stable versions.")
506
507 disable_pip_version_check = partial(
508 Option,
509 "--disable-pip-version-check",
510 dest="disable_pip_version_check",
511 action="store_true",
512 default=False,
513 help="Don't periodically check PyPI to determine whether a new version "
514 "of pip is available for download. Implied with --no-index.")
515
516 # Deprecated, Remove later
517 always_unzip = partial(
518 Option,
519 '-Z', '--always-unzip',
520 dest='always_unzip',
521 action='store_true',
522 help=SUPPRESS_HELP,
523 )
524
525
526 ##########
527 # groups #
528 ##########
529
530 general_group = {
531 'name': 'General Options',
532 'options': [
533 help_,
534 isolated_mode,
535 require_virtualenv,
536 verbose,
537 version,
538 quiet,
539 log,
540 no_input,
541 proxy,
542 retries,
543 timeout,
544 default_vcs,
545 skip_requirements_regex,
546 exists_action,
547 trusted_host,
548 cert,
549 client_cert,
550 cache_dir,
551 no_cache,
552 disable_pip_version_check,
553 ]
554 }
555
556 non_deprecated_index_group = {
557 'name': 'Package Index Options',
558 'options': [
559 index_url,
560 extra_index_url,
561 no_index,
562 find_links,
563 process_dependency_links,
564 ]
565 }
566
567 index_group = {
568 'name': 'Package Index Options (including deprecated options)',
569 'options': non_deprecated_index_group['options'] + [
570 allow_external,
571 allow_all_external,
572 no_allow_external,
573 allow_unsafe,
574 no_allow_unsafe,
575 ]
576 }
577
[end of pip/cmdoptions.py]
[start of pip/wheel.py]
1 """
2 Support for installing and building the "wheel" binary package format.
3 """
4 from __future__ import absolute_import
5
6 import compileall
7 import csv
8 import errno
9 import functools
10 import hashlib
11 import logging
12 import os
13 import os.path
14 import re
15 import shutil
16 import stat
17 import sys
18 import tempfile
19 import warnings
20
21 from base64 import urlsafe_b64encode
22 from email.parser import Parser
23
24 from pip._vendor.six import StringIO
25
26 import pip
27 from pip.compat import expanduser
28 from pip.download import path_to_url, unpack_url
29 from pip.exceptions import (
30 InstallationError, InvalidWheelFilename, UnsupportedWheel)
31 from pip.locations import distutils_scheme, PIP_DELETE_MARKER_FILENAME
32 from pip import pep425tags
33 from pip.utils import (
34 call_subprocess, ensure_dir, captured_stdout, rmtree, canonicalize_name)
35 from pip.utils.logging import indent_log
36 from pip._vendor.distlib.scripts import ScriptMaker
37 from pip._vendor import pkg_resources
38 from pip._vendor.six.moves import configparser
39
40
41 wheel_ext = '.whl'
42
43 VERSION_COMPATIBLE = (1, 0)
44
45
46 logger = logging.getLogger(__name__)
47
48
49 class WheelCache(object):
50 """A cache of wheels for future installs."""
51
52 def __init__(self, cache_dir, format_control):
53 """Create a wheel cache.
54
55 :param cache_dir: The root of the cache.
56 :param format_control: A pip.index.FormatControl object to limit
57 binaries being read from the cache.
58 """
59 self._cache_dir = expanduser(cache_dir) if cache_dir else None
60 self._format_control = format_control
61
62 def cached_wheel(self, link, package_name):
63 return cached_wheel(
64 self._cache_dir, link, self._format_control, package_name)
65
66
67 def _cache_for_link(cache_dir, link):
68 """
69 Return a directory to store cached wheels in for link.
70
71 Because there are M wheels for any one sdist, we provide a directory
72 to cache them in, and then consult that directory when looking up
73 cache hits.
74
75 We only insert things into the cache if they have plausible version
76 numbers, so that we don't contaminate the cache with things that were not
77 unique. E.g. ./package might have dozens of installs done for it and build
78 a version of 0.0...and if we built and cached a wheel, we'd end up using
79 the same wheel even if the source has been edited.
80
81 :param cache_dir: The cache_dir being used by pip.
82 :param link: The link of the sdist for which this will cache wheels.
83 """
84
85 # We want to generate an url to use as our cache key, we don't want to just
86 # re-use the URL because it might have other items in the fragment and we
87 # don't care about those.
88 key_parts = [link.url_without_fragment]
89 if link.hash_name is not None and link.hash is not None:
90 key_parts.append("=".join([link.hash_name, link.hash]))
91 key_url = "#".join(key_parts)
92
93 # Encode our key url with sha224, we'll use this because it has similar
94 # security properties to sha256, but with a shorter total output (and thus
95 # less secure). However the differences don't make a lot of difference for
96 # our use case here.
97 hashed = hashlib.sha224(key_url.encode()).hexdigest()
98
99 # We want to nest the directories some to prevent having a ton of top level
100 # directories where we might run out of sub directories on some FS.
101 parts = [hashed[:2], hashed[2:4], hashed[4:6], hashed[6:]]
102
103 # Inside of the base location for cached wheels, expand our parts and join
104 # them all together.
105 return os.path.join(cache_dir, "wheels", *parts)
106
107
108 def cached_wheel(cache_dir, link, format_control, package_name):
109 if not cache_dir:
110 return link
111 if not link:
112 return link
113 if link.is_wheel:
114 return link
115 if not link.is_artifact:
116 return link
117 if not package_name:
118 return link
119 canonical_name = canonicalize_name(package_name)
120 formats = pip.index.fmt_ctl_formats(format_control, canonical_name)
121 if "binary" not in formats:
122 return link
123 root = _cache_for_link(cache_dir, link)
124 try:
125 wheel_names = os.listdir(root)
126 except OSError as e:
127 if e.errno in (errno.ENOENT, errno.ENOTDIR):
128 return link
129 raise
130 candidates = []
131 for wheel_name in wheel_names:
132 try:
133 wheel = Wheel(wheel_name)
134 except InvalidWheelFilename:
135 continue
136 if not wheel.supported():
137 # Built for a different python/arch/etc
138 continue
139 candidates.append((wheel.support_index_min(), wheel_name))
140 if not candidates:
141 return link
142 candidates.sort()
143 path = os.path.join(root, candidates[0][1])
144 return pip.index.Link(path_to_url(path))
145
146
147 def rehash(path, algo='sha256', blocksize=1 << 20):
148 """Return (hash, length) for path using hashlib.new(algo)"""
149 h = hashlib.new(algo)
150 length = 0
151 with open(path, 'rb') as f:
152 block = f.read(blocksize)
153 while block:
154 length += len(block)
155 h.update(block)
156 block = f.read(blocksize)
157 digest = 'sha256=' + urlsafe_b64encode(
158 h.digest()
159 ).decode('latin1').rstrip('=')
160 return (digest, length)
161
162
163 def open_for_csv(name, mode):
164 if sys.version_info[0] < 3:
165 nl = {}
166 bin = 'b'
167 else:
168 nl = {'newline': ''}
169 bin = ''
170 return open(name, mode + bin, **nl)
171
172
173 def fix_script(path):
174 """Replace #!python with #!/path/to/python
175 Return True if file was changed."""
176 # XXX RECORD hashes will need to be updated
177 if os.path.isfile(path):
178 with open(path, 'rb') as script:
179 firstline = script.readline()
180 if not firstline.startswith(b'#!python'):
181 return False
182 exename = sys.executable.encode(sys.getfilesystemencoding())
183 firstline = b'#!' + exename + os.linesep.encode("ascii")
184 rest = script.read()
185 with open(path, 'wb') as script:
186 script.write(firstline)
187 script.write(rest)
188 return True
189
190 dist_info_re = re.compile(r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>\d.+?))?)
191 \.dist-info$""", re.VERBOSE)
192
193
194 def root_is_purelib(name, wheeldir):
195 """
196 Return True if the extracted wheel in wheeldir should go into purelib.
197 """
198 name_folded = name.replace("-", "_")
199 for item in os.listdir(wheeldir):
200 match = dist_info_re.match(item)
201 if match and match.group('name') == name_folded:
202 with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:
203 for line in wheel:
204 line = line.lower().rstrip()
205 if line == "root-is-purelib: true":
206 return True
207 return False
208
209
210 def get_entrypoints(filename):
211 if not os.path.exists(filename):
212 return {}, {}
213
214 # This is done because you can pass a string to entry_points wrappers which
215 # means that they may or may not be valid INI files. The attempt here is to
216 # strip leading and trailing whitespace in order to make them valid INI
217 # files.
218 with open(filename) as fp:
219 data = StringIO()
220 for line in fp:
221 data.write(line.strip())
222 data.write("\n")
223 data.seek(0)
224
225 cp = configparser.RawConfigParser()
226 cp.readfp(data)
227
228 console = {}
229 gui = {}
230 if cp.has_section('console_scripts'):
231 console = dict(cp.items('console_scripts'))
232 if cp.has_section('gui_scripts'):
233 gui = dict(cp.items('gui_scripts'))
234 return console, gui
235
236
237 def move_wheel_files(name, req, wheeldir, user=False, home=None, root=None,
238 pycompile=True, scheme=None, isolated=False):
239 """Install a wheel"""
240
241 if not scheme:
242 scheme = distutils_scheme(
243 name, user=user, home=home, root=root, isolated=isolated
244 )
245
246 if root_is_purelib(name, wheeldir):
247 lib_dir = scheme['purelib']
248 else:
249 lib_dir = scheme['platlib']
250
251 info_dir = []
252 data_dirs = []
253 source = wheeldir.rstrip(os.path.sep) + os.path.sep
254
255 # Record details of the files moved
256 # installed = files copied from the wheel to the destination
257 # changed = files changed while installing (scripts #! line typically)
258 # generated = files newly generated during the install (script wrappers)
259 installed = {}
260 changed = set()
261 generated = []
262
263 # Compile all of the pyc files that we're going to be installing
264 if pycompile:
265 with captured_stdout() as stdout:
266 with warnings.catch_warnings():
267 warnings.filterwarnings('ignore')
268 compileall.compile_dir(source, force=True, quiet=True)
269 logger.debug(stdout.getvalue())
270
271 def normpath(src, p):
272 return os.path.relpath(src, p).replace(os.path.sep, '/')
273
274 def record_installed(srcfile, destfile, modified=False):
275 """Map archive RECORD paths to installation RECORD paths."""
276 oldpath = normpath(srcfile, wheeldir)
277 newpath = normpath(destfile, lib_dir)
278 installed[oldpath] = newpath
279 if modified:
280 changed.add(destfile)
281
282 def clobber(source, dest, is_base, fixer=None, filter=None):
283 ensure_dir(dest) # common for the 'include' path
284
285 for dir, subdirs, files in os.walk(source):
286 basedir = dir[len(source):].lstrip(os.path.sep)
287 destdir = os.path.join(dest, basedir)
288 if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):
289 continue
290 for s in subdirs:
291 destsubdir = os.path.join(dest, basedir, s)
292 if is_base and basedir == '' and destsubdir.endswith('.data'):
293 data_dirs.append(s)
294 continue
295 elif (is_base and
296 s.endswith('.dist-info') and
297 # is self.req.project_name case preserving?
298 s.lower().startswith(
299 req.project_name.replace('-', '_').lower())):
300 assert not info_dir, 'Multiple .dist-info directories'
301 info_dir.append(destsubdir)
302 for f in files:
303 # Skip unwanted files
304 if filter and filter(f):
305 continue
306 srcfile = os.path.join(dir, f)
307 destfile = os.path.join(dest, basedir, f)
308 # directory creation is lazy and after the file filtering above
309 # to ensure we don't install empty dirs; empty dirs can't be
310 # uninstalled.
311 ensure_dir(destdir)
312
313 # We use copyfile (not move, copy, or copy2) to be extra sure
314 # that we are not moving directories over (copyfile fails for
315 # directories) as well as to ensure that we are not copying
316 # over any metadata because we want more control over what
317 # metadata we actually copy over.
318 shutil.copyfile(srcfile, destfile)
319
320 # Copy over the metadata for the file, currently this only
321 # includes the atime and mtime.
322 st = os.stat(srcfile)
323 if hasattr(os, "utime"):
324 os.utime(destfile, (st.st_atime, st.st_mtime))
325
326 # If our file is executable, then make our destination file
327 # executable.
328 if os.access(srcfile, os.X_OK):
329 st = os.stat(srcfile)
330 permissions = (
331 st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
332 )
333 os.chmod(destfile, permissions)
334
335 changed = False
336 if fixer:
337 changed = fixer(destfile)
338 record_installed(srcfile, destfile, changed)
339
340 clobber(source, lib_dir, True)
341
342 assert info_dir, "%s .dist-info directory not found" % req
343
344 # Get the defined entry points
345 ep_file = os.path.join(info_dir[0], 'entry_points.txt')
346 console, gui = get_entrypoints(ep_file)
347
348 def is_entrypoint_wrapper(name):
349 # EP, EP.exe and EP-script.py are scripts generated for
350 # entry point EP by setuptools
351 if name.lower().endswith('.exe'):
352 matchname = name[:-4]
353 elif name.lower().endswith('-script.py'):
354 matchname = name[:-10]
355 elif name.lower().endswith(".pya"):
356 matchname = name[:-4]
357 else:
358 matchname = name
359 # Ignore setuptools-generated scripts
360 return (matchname in console or matchname in gui)
361
362 for datadir in data_dirs:
363 fixer = None
364 filter = None
365 for subdir in os.listdir(os.path.join(wheeldir, datadir)):
366 fixer = None
367 if subdir == 'scripts':
368 fixer = fix_script
369 filter = is_entrypoint_wrapper
370 source = os.path.join(wheeldir, datadir, subdir)
371 dest = scheme[subdir]
372 clobber(source, dest, False, fixer=fixer, filter=filter)
373
374 maker = ScriptMaker(None, scheme['scripts'])
375
376 # Ensure old scripts are overwritten.
377 # See https://github.com/pypa/pip/issues/1800
378 maker.clobber = True
379
380 # Ensure we don't generate any variants for scripts because this is almost
381 # never what somebody wants.
382 # See https://bitbucket.org/pypa/distlib/issue/35/
383 maker.variants = set(('', ))
384
385 # This is required because otherwise distlib creates scripts that are not
386 # executable.
387 # See https://bitbucket.org/pypa/distlib/issue/32/
388 maker.set_mode = True
389
390 # Simplify the script and fix the fact that the default script swallows
391 # every single stack trace.
392 # See https://bitbucket.org/pypa/distlib/issue/34/
393 # See https://bitbucket.org/pypa/distlib/issue/33/
394 def _get_script_text(entry):
395 if entry.suffix is None:
396 raise InstallationError(
397 "Invalid script entry point: %s for req: %s - A callable "
398 "suffix is required. Cf https://packaging.python.org/en/"
399 "latest/distributing.html#console-scripts for more "
400 "information." % (entry, req)
401 )
402 return maker.script_template % {
403 "module": entry.prefix,
404 "import_name": entry.suffix.split(".")[0],
405 "func": entry.suffix,
406 }
407
408 maker._get_script_text = _get_script_text
409 maker.script_template = """# -*- coding: utf-8 -*-
410 import re
411 import sys
412
413 from %(module)s import %(import_name)s
414
415 if __name__ == '__main__':
416 sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
417 sys.exit(%(func)s())
418 """
419
420 # Special case pip and setuptools to generate versioned wrappers
421 #
422 # The issue is that some projects (specifically, pip and setuptools) use
423 # code in setup.py to create "versioned" entry points - pip2.7 on Python
424 # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
425 # the wheel metadata at build time, and so if the wheel is installed with
426 # a *different* version of Python the entry points will be wrong. The
427 # correct fix for this is to enhance the metadata to be able to describe
428 # such versioned entry points, but that won't happen till Metadata 2.0 is
429 # available.
430 # In the meantime, projects using versioned entry points will either have
431 # incorrect versioned entry points, or they will not be able to distribute
432 # "universal" wheels (i.e., they will need a wheel per Python version).
433 #
434 # Because setuptools and pip are bundled with _ensurepip and virtualenv,
435 # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
436 # override the versioned entry points in the wheel and generate the
437 # correct ones. This code is purely a short-term measure until Metadat 2.0
438 # is available.
439 #
440 # To add the level of hack in this section of code, in order to support
441 # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment
442 # variable which will control which version scripts get installed.
443 #
444 # ENSUREPIP_OPTIONS=altinstall
445 # - Only pipX.Y and easy_install-X.Y will be generated and installed
446 # ENSUREPIP_OPTIONS=install
447 # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note
448 # that this option is technically if ENSUREPIP_OPTIONS is set and is
449 # not altinstall
450 # DEFAULT
451 # - The default behavior is to install pip, pipX, pipX.Y, easy_install
452 # and easy_install-X.Y.
453 pip_script = console.pop('pip', None)
454 if pip_script:
455 if "ENSUREPIP_OPTIONS" not in os.environ:
456 spec = 'pip = ' + pip_script
457 generated.extend(maker.make(spec))
458
459 if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall":
460 spec = 'pip%s = %s' % (sys.version[:1], pip_script)
461 generated.extend(maker.make(spec))
462
463 spec = 'pip%s = %s' % (sys.version[:3], pip_script)
464 generated.extend(maker.make(spec))
465 # Delete any other versioned pip entry points
466 pip_ep = [k for k in console if re.match(r'pip(\d(\.\d)?)?$', k)]
467 for k in pip_ep:
468 del console[k]
469 easy_install_script = console.pop('easy_install', None)
470 if easy_install_script:
471 if "ENSUREPIP_OPTIONS" not in os.environ:
472 spec = 'easy_install = ' + easy_install_script
473 generated.extend(maker.make(spec))
474
475 spec = 'easy_install-%s = %s' % (sys.version[:3], easy_install_script)
476 generated.extend(maker.make(spec))
477 # Delete any other versioned easy_install entry points
478 easy_install_ep = [
479 k for k in console if re.match(r'easy_install(-\d\.\d)?$', k)
480 ]
481 for k in easy_install_ep:
482 del console[k]
483
484 # Generate the console and GUI entry points specified in the wheel
485 if len(console) > 0:
486 generated.extend(
487 maker.make_multiple(['%s = %s' % kv for kv in console.items()])
488 )
489 if len(gui) > 0:
490 generated.extend(
491 maker.make_multiple(
492 ['%s = %s' % kv for kv in gui.items()],
493 {'gui': True}
494 )
495 )
496
497 record = os.path.join(info_dir[0], 'RECORD')
498 temp_record = os.path.join(info_dir[0], 'RECORD.pip')
499 with open_for_csv(record, 'r') as record_in:
500 with open_for_csv(temp_record, 'w+') as record_out:
501 reader = csv.reader(record_in)
502 writer = csv.writer(record_out)
503 for row in reader:
504 row[0] = installed.pop(row[0], row[0])
505 if row[0] in changed:
506 row[1], row[2] = rehash(row[0])
507 writer.writerow(row)
508 for f in generated:
509 h, l = rehash(f)
510 writer.writerow((f, h, l))
511 for f in installed:
512 writer.writerow((installed[f], '', ''))
513 shutil.move(temp_record, record)
514
515
516 def _unique(fn):
517 @functools.wraps(fn)
518 def unique(*args, **kw):
519 seen = set()
520 for item in fn(*args, **kw):
521 if item not in seen:
522 seen.add(item)
523 yield item
524 return unique
525
526
527 # TODO: this goes somewhere besides the wheel module
528 @_unique
529 def uninstallation_paths(dist):
530 """
531 Yield all the uninstallation paths for dist based on RECORD-without-.pyc
532
533 Yield paths to all the files in RECORD. For each .py file in RECORD, add
534 the .pyc in the same directory.
535
536 UninstallPathSet.add() takes care of the __pycache__ .pyc.
537 """
538 from pip.utils import FakeFile # circular import
539 r = csv.reader(FakeFile(dist.get_metadata_lines('RECORD')))
540 for row in r:
541 path = os.path.join(dist.location, row[0])
542 yield path
543 if path.endswith('.py'):
544 dn, fn = os.path.split(path)
545 base = fn[:-3]
546 path = os.path.join(dn, base + '.pyc')
547 yield path
548
549
550 def wheel_version(source_dir):
551 """
552 Return the Wheel-Version of an extracted wheel, if possible.
553
554 Otherwise, return False if we couldn't parse / extract it.
555 """
556 try:
557 dist = [d for d in pkg_resources.find_on_path(None, source_dir)][0]
558
559 wheel_data = dist.get_metadata('WHEEL')
560 wheel_data = Parser().parsestr(wheel_data)
561
562 version = wheel_data['Wheel-Version'].strip()
563 version = tuple(map(int, version.split('.')))
564 return version
565 except:
566 return False
567
568
569 def check_compatibility(version, name):
570 """
571 Raises errors or warns if called with an incompatible Wheel-Version.
572
573 Pip should refuse to install a Wheel-Version that's a major series
574 ahead of what it's compatible with (e.g 2.0 > 1.1); and warn when
575 installing a version only minor version ahead (e.g 1.2 > 1.1).
576
577 version: a 2-tuple representing a Wheel-Version (Major, Minor)
578 name: name of wheel or package to raise exception about
579
580 :raises UnsupportedWheel: when an incompatible Wheel-Version is given
581 """
582 if not version:
583 raise UnsupportedWheel(
584 "%s is in an unsupported or invalid wheel" % name
585 )
586 if version[0] > VERSION_COMPATIBLE[0]:
587 raise UnsupportedWheel(
588 "%s's Wheel-Version (%s) is not compatible with this version "
589 "of pip" % (name, '.'.join(map(str, version)))
590 )
591 elif version > VERSION_COMPATIBLE:
592 logger.warning(
593 'Installing from a newer Wheel-Version (%s)',
594 '.'.join(map(str, version)),
595 )
596
597
598 class Wheel(object):
599 """A wheel file"""
600
601 # TODO: maybe move the install code into this class
602
603 wheel_file_re = re.compile(
604 r"""^(?P<namever>(?P<name>.+?)-(?P<ver>\d.*?))
605 ((-(?P<build>\d.*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)
606 \.whl|\.dist-info)$""",
607 re.VERBOSE
608 )
609
610 def __init__(self, filename):
611 """
612 :raises InvalidWheelFilename: when the filename is invalid for a wheel
613 """
614 wheel_info = self.wheel_file_re.match(filename)
615 if not wheel_info:
616 raise InvalidWheelFilename(
617 "%s is not a valid wheel filename." % filename
618 )
619 self.filename = filename
620 self.name = wheel_info.group('name').replace('_', '-')
621 # we'll assume "_" means "-" due to wheel naming scheme
622 # (https://github.com/pypa/pip/issues/1150)
623 self.version = wheel_info.group('ver').replace('_', '-')
624 self.pyversions = wheel_info.group('pyver').split('.')
625 self.abis = wheel_info.group('abi').split('.')
626 self.plats = wheel_info.group('plat').split('.')
627
628 # All the tag combinations from this file
629 self.file_tags = set(
630 (x, y, z) for x in self.pyversions
631 for y in self.abis for z in self.plats
632 )
633
634 def support_index_min(self, tags=None):
635 """
636 Return the lowest index that one of the wheel's file_tag combinations
637 achieves in the supported_tags list e.g. if there are 8 supported tags,
638 and one of the file tags is first in the list, then return 0. Returns
639 None is the wheel is not supported.
640 """
641 if tags is None: # for mock
642 tags = pep425tags.supported_tags
643 indexes = [tags.index(c) for c in self.file_tags if c in tags]
644 return min(indexes) if indexes else None
645
646 def supported(self, tags=None):
647 """Is this wheel supported on this system?"""
648 if tags is None: # for mock
649 tags = pep425tags.supported_tags
650 return bool(set(tags).intersection(self.file_tags))
651
652
653 class WheelBuilder(object):
654 """Build wheels from a RequirementSet."""
655
656 def __init__(self, requirement_set, finder, build_options=None,
657 global_options=None):
658 self.requirement_set = requirement_set
659 self.finder = finder
660 self._cache_root = requirement_set._wheel_cache._cache_dir
661 self._wheel_dir = requirement_set.wheel_download_dir
662 self.build_options = build_options or []
663 self.global_options = global_options or []
664
665 def _build_one(self, req, output_dir):
666 """Build one wheel.
667
668 :return: The filename of the built wheel, or None if the build failed.
669 """
670 tempd = tempfile.mkdtemp('pip-wheel-')
671 try:
672 if self.__build_one(req, tempd):
673 try:
674 wheel_name = os.listdir(tempd)[0]
675 wheel_path = os.path.join(output_dir, wheel_name)
676 shutil.move(os.path.join(tempd, wheel_name), wheel_path)
677 logger.info('Stored in directory: %s', output_dir)
678 return wheel_path
679 except:
680 pass
681 # Ignore return, we can't do anything else useful.
682 self._clean_one(req)
683 return None
684 finally:
685 rmtree(tempd)
686
687 def _base_setup_args(self, req):
688 return [
689 sys.executable, '-c',
690 "import setuptools;__file__=%r;"
691 "exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), "
692 "__file__, 'exec'))" % req.setup_py
693 ] + list(self.global_options)
694
695 def __build_one(self, req, tempd):
696 base_args = self._base_setup_args(req)
697
698 logger.info('Running setup.py bdist_wheel for %s', req.name)
699 logger.debug('Destination directory: %s', tempd)
700 wheel_args = base_args + ['bdist_wheel', '-d', tempd] \
701 + self.build_options
702 try:
703 call_subprocess(wheel_args, cwd=req.source_dir, show_stdout=False)
704 return True
705 except:
706 logger.error('Failed building wheel for %s', req.name)
707 return False
708
709 def _clean_one(self, req):
710 base_args = self._base_setup_args(req)
711
712 logger.info('Running setup.py clean for %s', req.name)
713 clean_args = base_args + ['clean', '--all']
714 try:
715 call_subprocess(clean_args, cwd=req.source_dir, show_stdout=False)
716 return True
717 except:
718 logger.error('Failed cleaning build dir for %s', req.name)
719 return False
720
721 def build(self, autobuilding=False):
722 """Build wheels.
723
724 :param unpack: If True, replace the sdist we built from the with the
725 newly built wheel, in preparation for installation.
726 :return: True if all the wheels built correctly.
727 """
728 assert self._wheel_dir or (autobuilding and self._cache_root)
729 # unpack sdists and constructs req set
730 self.requirement_set.prepare_files(self.finder)
731
732 reqset = self.requirement_set.requirements.values()
733
734 buildset = []
735 for req in reqset:
736 if req.constraint:
737 continue
738 if req.is_wheel:
739 if not autobuilding:
740 logger.info(
741 'Skipping %s, due to already being wheel.', req.name)
742 elif req.editable:
743 if not autobuilding:
744 logger.info(
745 'Skipping bdist_wheel for %s, due to being editable',
746 req.name)
747 elif autobuilding and req.link and not req.link.is_artifact:
748 pass
749 elif autobuilding and not req.source_dir:
750 pass
751 else:
752 if autobuilding:
753 link = req.link
754 base, ext = link.splitext()
755 if pip.index.egg_info_matches(base, None, link) is None:
756 # Doesn't look like a package - don't autobuild a wheel
757 # because we'll have no way to lookup the result sanely
758 continue
759 if "binary" not in pip.index.fmt_ctl_formats(
760 self.finder.format_control,
761 canonicalize_name(req.name)):
762 logger.info(
763 "Skipping bdist_wheel for %s, due to binaries "
764 "being disabled for it.", req.name)
765 continue
766 buildset.append(req)
767
768 if not buildset:
769 return True
770
771 # Build the wheels.
772 logger.info(
773 'Building wheels for collected packages: %s',
774 ', '.join([req.name for req in buildset]),
775 )
776 with indent_log():
777 build_success, build_failure = [], []
778 for req in buildset:
779 if autobuilding:
780 output_dir = _cache_for_link(self._cache_root, req.link)
781 try:
782 ensure_dir(output_dir)
783 except OSError as e:
784 logger.warn("Building wheel for %s failed: %s",
785 req.name, e)
786 build_failure.append(req)
787 continue
788 else:
789 output_dir = self._wheel_dir
790 wheel_file = self._build_one(req, output_dir)
791 if wheel_file:
792 build_success.append(req)
793 if autobuilding:
794 # XXX: This is mildly duplicative with prepare_files,
795 # but not close enough to pull out to a single common
796 # method.
797 # The code below assumes temporary source dirs -
798 # prevent it doing bad things.
799 if req.source_dir and not os.path.exists(os.path.join(
800 req.source_dir, PIP_DELETE_MARKER_FILENAME)):
801 raise AssertionError(
802 "bad source dir - missing marker")
803 # Delete the source we built the wheel from
804 req.remove_temporary_source()
805 # set the build directory again - name is known from
806 # the work prepare_files did.
807 req.source_dir = req.build_location(
808 self.requirement_set.build_dir)
809 # Update the link for this.
810 req.link = pip.index.Link(
811 path_to_url(wheel_file))
812 assert req.link.is_wheel
813 # extract the wheel into the dir
814 unpack_url(
815 req.link, req.source_dir, None, False,
816 session=self.requirement_set.session)
817 else:
818 build_failure.append(req)
819
820 # notify success/failure
821 if build_success:
822 logger.info(
823 'Successfully built %s',
824 ' '.join([req.name for req in build_success]),
825 )
826 if build_failure:
827 logger.info(
828 'Failed to build %s',
829 ' '.join([req.name for req in build_failure]),
830 )
831 # Return True if all builds were successful
832 return len(build_failure) == 0
833
[end of pip/wheel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
13d43e3af8b5c8e8c7c66dde8a9f76b340697c3e
|
Pip wheel doesn't properly do ABI tag for py2.7
It appears that using a debug build of python 2.7 doesn't mark the wheels built using it in any special way.
Pip would install them on a regular python 2.7 (if they were on a package index) and then later on imports for C extensions would fail.
ref @dstufft wheel caching
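For context, a minimal sketch (standard library only, not pip code) of how a debug CPython 2.7 build can be identified at runtime; the `Py_DEBUG` config var and the `sys.gettotalrefcount` fallback are the usual signals:

```python
import sys
import sysconfig

def is_debug_build():
    # sysconfig may not expose Py_DEBUG on every 2.7 build, so fall back to
    # sys.gettotalrefcount, which only exists on debug interpreters.
    flag = sysconfig.get_config_var("Py_DEBUG")
    if flag is None:
        return hasattr(sys, "gettotalrefcount")
    return bool(flag)

print(is_debug_build())
```

A wheel built on such an interpreter typically still gets the same `cp27-none-<platform>` style tag as one built on a release interpreter, which is why pip can't tell them apart.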
|
This is up to bdist_wheel to do, not pip IMO.
(unless we don't _honour_ markers for debug python). We need to honour it, of course.
The SOABI config variable is new in Python 3; the wheel spec uses it (as do the wheel tool and pip) and that works fine. However, there is no SOABI API in Python 2, so nothing currently creates or looks for ABI-specific wheels there (instead we look for CPython X.Y-specific wheels). The outcome of this ticket would be to find a way to emulate the SOABI API on Python 2.x so we can start looking for ABI-tagged wheels on Python 2.x. It would also involve coordination with wheel to add the same emulation there and get it to generate wheels tagged with an ABI.
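To illustrate the gap being described (a standalone sketch, not project code):

```python
# SOABI is populated on CPython 3 (at least on Linux) but absent on CPython 2,
# which is why an emulation layer is needed before ABI-tagged wheels can be
# produced or matched there.
import sysconfig

print(sysconfig.get_config_var("SOABI"))
# CPython 3.4 on Linux -> 'cpython-34m' (for example)
# CPython 2.7          -> None, so no ABI tag can be derived without emulating it
```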
Ok. So _should_ this be a release blocker? It's no worse than the behaviour with pip wheel.
It's no worse than `pip wheel`, except using `pip wheel` is opt in while using the new auto wheels is opt out. Whether it's a release blocker or not would, in my opinion, depend on how likely it is that you're going to have two Pythons of the same minor version with different ABIs installed on the same machine.
In the IRC discussion that led to this, I believe it was said (at least that's how I understood it) that it's not uncommon to run two Pythons with different ABIs when it comes to debug builds, so I marked it as a release-blocker. If my understanding was wrong we can drop the release-blocker part of this.
Of course the above is really two minor versions, _in the 2.x line_, installed on the same machine. It should already work fine on 3.x.
## ok so - python2.7-dbg on ubuntu says this:
Description-en: Debug Build of the Python Interpreter (version 2.7)
The package holds two things:
.
- A Python interpreter configured with --pydebug. Dynamically loaded modules
are searched as <foo>_d.so first. Third party extensions need a separate
build to be used by this interpreter.
- Debug information for standard python interpreter and extensions.
---
So having that installed doesn't magically get folk _using_ the debug build.
My argument is that anyone using the debug python2 build knows what they're doing. They might not know that the C extensions can't be automatically discriminated between, but they won't accidentally switch from python to python-dbg.
As well as debug builds, there's also 32-bit and 64-bit Pythons on the same Windows machine. I know this can be done (for example, Appveyor includes this) but I don't know if it would ever be a problem in practice (Appveyor shouldn't be an issue because it runs tests in isolated VMs).
The element of surprise is still there. You should have seen my face when I finally realized the import issue was caused by me building wheels with a debug python a couple of days before.
Sure. I'm just at a fairly big loss about what to do here.
My assumptions:
- most (call it 99%) of devs won't install the debug version of Python.
- the benefit to the ones that don't is unambiguous
- the ones that do are in a grey area that CPython doesn't really help with, and which is documented as having dragons.
I might be the only person who understands wheel tagging in bdist_wheel. If that doesn't change then you are going to have to wait for me to implement the debug / unicode detection so that bdist_wheel can generate a better Python ABI tag for Python 2.7. Otherwise someone else should figure it out and submit a pull request to bdist_wheel.
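For the unicode half of that detection, a sketch of how it can be approximated on 2.x (illustrative only; not bdist_wheel or pip code):

```python
import sys
import sysconfig

def is_wide_unicode():
    # Py_UNICODE_SIZE is 4 on --enable-unicode=ucs4 ("wide") builds; fall back to
    # sys.maxunicode, which is 0x10FFFF on wide builds and 0xFFFF on narrow ones.
    size = sysconfig.get_config_var("Py_UNICODE_SIZE")
    if size is None:
        return sys.maxunicode == 0x10FFFF
    return size == 4
```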
So - my concern here is that this is tagged release-blocker. I don't think it should be.
|
2015-09-03T16:42:01Z
|
<patch>
diff --git a/pip/pep425tags.py b/pip/pep425tags.py
--- a/pip/pep425tags.py
+++ b/pip/pep425tags.py
@@ -15,6 +15,14 @@
_osx_arch_pat = re.compile(r'(.+)_(\d+)_(\d+)_(.+)')
+def get_config_var(var):
+ try:
+ return sysconfig.get_config_var(var)
+ except IOError as e: # Issue #1074
+ warnings.warn("{0}".format(e), RuntimeWarning)
+ return None
+
+
def get_abbr_impl():
"""Return abbreviated implementation name."""
if hasattr(sys, 'pypy_version_info'):
@@ -30,7 +38,67 @@ def get_abbr_impl():
def get_impl_ver():
"""Return implementation version."""
- return ''.join(map(str, sys.version_info[:2]))
+ impl_ver = get_config_var("py_version_nodot")
+ if not impl_ver or get_abbr_impl() == 'pp':
+ impl_ver = ''.join(map(str, get_impl_version_info()))
+ return impl_ver
+
+
+def get_impl_version_info():
+ """Return sys.version_info-like tuple for use in decrementing the minor
+ version."""
+ if get_abbr_impl() == 'pp':
+ # as per https://github.com/pypa/pip/issues/2882
+ return (sys.version_info[0], sys.pypy_version_info.major,
+ sys.pypy_version_info.minor)
+ else:
+ return sys.version_info[0], sys.version_info[1]
+
+
+def get_flag(var, fallback, expected=True, warn=True):
+ """Use a fallback method for determining SOABI flags if the needed config
+ var is unset or unavailable."""
+ val = get_config_var(var)
+ if val is None:
+ if warn:
+ warnings.warn("Config variable '{0}' is unset, Python ABI tag may "
+ "be incorrect".format(var), RuntimeWarning, 2)
+ return fallback()
+ return val == expected
+
+
+def get_abi_tag():
+ """Return the ABI tag based on SOABI (if available) or emulate SOABI
+ (CPython 2, PyPy)."""
+ soabi = get_config_var('SOABI')
+ impl = get_abbr_impl()
+ if not soabi and impl in ('cp', 'pp') and hasattr(sys, 'maxunicode'):
+ d = ''
+ m = ''
+ u = ''
+ if get_flag('Py_DEBUG',
+ lambda: hasattr(sys, 'gettotalrefcount'),
+ warn=(impl == 'cp')):
+ d = 'd'
+ if get_flag('WITH_PYMALLOC',
+ lambda: impl == 'cp',
+ warn=(impl == 'cp')):
+ m = 'm'
+ if get_flag('Py_UNICODE_SIZE',
+ lambda: sys.maxunicode == 0x10ffff,
+ expected=4,
+ warn=(impl == 'cp' and
+ sys.version_info < (3, 3))) \
+ and sys.version_info < (3, 3):
+ u = 'u'
+ abi = '%s%s%s%s%s' % (impl, get_impl_ver(), d, m, u)
+ elif soabi and soabi.startswith('cpython-'):
+ abi = 'cp' + soabi.split('-')[1]
+ elif soabi:
+ abi = soabi.replace('.', '_').replace('-', '_')
+ else:
+ abi = None
+ return abi
def get_platform():
@@ -51,23 +119,19 @@ def get_supported(versions=None, noarch=False):
# Versions must be given with respect to the preference
if versions is None:
versions = []
- major = sys.version_info[0]
+ version_info = get_impl_version_info()
+ major = version_info[:-1]
# Support all previous minor Python versions.
- for minor in range(sys.version_info[1], -1, -1):
- versions.append(''.join(map(str, (major, minor))))
+ for minor in range(version_info[-1], -1, -1):
+ versions.append(''.join(map(str, major + (minor,))))
impl = get_abbr_impl()
abis = []
- try:
- soabi = sysconfig.get_config_var('SOABI')
- except IOError as e: # Issue #1074
- warnings.warn("{0}".format(e), RuntimeWarning)
- soabi = None
-
- if soabi and soabi.startswith('cpython-'):
- abis[0:0] = ['cp' + soabi.split('-')[1]]
+ abi = get_abi_tag()
+ if abi:
+ abis[0:0] = [abi]
abi3s = set()
import imp
</patch>
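For reference, a rough sketch of what the emulated tag from this change would produce on a debug, pymalloc, wide-unicode CPython 2.7 (illustrative output, not part of the patch):

```python
from pip import pep425tags

print(pep425tags.get_abi_tag())  # expected to be something like 'cp27dmu'
```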
|
[]
|
[]
| |||
wagtail__wagtail-8648
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModelAdmin title-column always links to "edit" page
### Issue Summary
Even if a user does not have the "change" permission, the `ModelAdmin` still renders a link to the "edit" view, which then results in a permission error for this user. For example, we often use ModelAdmins with `inspect_view_enabled = True` that only allow inspect and delete.
This worked before https://github.com/wagtail/wagtail/pull/7408 (wagtail < 2.15), because the title column was not linked and the "Edit" button did not appear if the user didn't have the permission.
I don't know what the best way to solve it is. Falling back to the inspect view only works if it has been enabled. Skipping the link when the permission is missing may re-create wagtail/wagtail#7333 in some cases, or at least needs some extra work to keep the HTML friendly for screen readers.
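One possible direction (a sketch only, not the fix adopted upstream) would be to pick the row's link target based on the user's permissions. `get_title_link_url` is a made-up name; the permission helper methods are the ones visible in the code below, while `get_action_url` is assumed from the URL helper API:

```python
# Hypothetical helper: choose where the title column should link for a given user.
def get_title_link_url(permission_helper, url_helper, user, obj):
    if permission_helper.user_can_edit_obj(user, obj):
        return url_helper.get_action_url("edit", obj.pk)
    if permission_helper.user_can_inspect_obj(user, obj):
        return url_helper.get_action_url("inspect", obj.pk)
    return None  # caller renders plain text (with appropriate markup for screen readers)
```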
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Create a simple model and a `ModelAdmin` for it (a minimal example is sketched after this list)
3. Create a user in the "Moderators" group and only add the "view" permission for the new model to the group.
4. Create an instance with a superuser, then find the linked item as the newly created moderator.
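A minimal, hypothetical reproduction (the `Book` model, app name and admin class are made up for illustration):

```python
# myapp/models.py
from django.db import models


class Book(models.Model):
    title = models.CharField(max_length=255)

    def __str__(self):
        return self.title


# myapp/wagtail_hooks.py
from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register

from myapp.models import Book


class BookAdmin(ModelAdmin):
    model = Book
    menu_label = "Books"
    inspect_view_enabled = True  # inspect + delete are intended; editing is not


modeladmin_register(BookAdmin)
```

With only the "view" (and optionally "delete") permission granted to the Moderators group, the index view still links the title to the edit view, which then raises a permission error.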
### Technical details
* Wagtail version: >= 2.15
</issue>
<code>
[start of README.md]
1 <h1 align="center">
2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail">
3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail">
4 </h1>
5 <p align="center">
6 <br>
7 <a href="https://github.com/wagtail/wagtail/actions">
8 <img src="https://github.com/wagtail/wagtail/workflows/Wagtail%20CI/badge.svg" alt="Build Status" />
9 </a>
10 <a href="https://opensource.org/licenses/BSD-3-Clause">
11 <img src="https://img.shields.io/badge/license-BSD-blue.svg" alt="License" />
12 </a>
13 <a href="https://pypi.python.org/pypi/wagtail/">
14 <img src="https://img.shields.io/pypi/v/wagtail.svg" alt="Version" />
15 </a>
16 <a href="https://lgtm.com/projects/g/wagtail/wagtail/alerts/">
17 <img src="https://img.shields.io/lgtm/alerts/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Total alerts" />
18 </a>
19 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:python">
20 <img src="https://img.shields.io/lgtm/grade/python/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: Python" />
21 </a>
22 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:javascript">
23 <img src="https://img.shields.io/lgtm/grade/javascript/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: JavaScript" />
24 </a>
25 <a href="https://pypi.python.org/pypi/wagtail/">
26 <img src="https://img.shields.io/pypi/dm/wagtail?logo=Downloads" alt="Monthly downloads" />
27 </a>
28 </p>
29
30 Wagtail is an open source content management system built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers.
31
32 
33
34 ### 🔥 Features
35
36 - A fast, attractive interface for authors
37 - Complete control over front-end design and structure
38 - Scales to millions of pages and thousands of editors
39 - Fast out of the box, cache-friendly when you need it
40 - Content API for 'headless' sites with de-coupled front-end
41 - Runs on a Raspberry Pi or a multi-datacenter cloud platform
42 - StreamField encourages flexible content without compromising structure
43 - Powerful, integrated search, using Elasticsearch or PostgreSQL
44 - Excellent support for images and embedded content
45 - Multi-site and multi-language ready
46 - Embraces and extends Django
47
48 Find out more at [wagtail.org](https://wagtail.org/).
49
50 ### 👉 Getting started
51
52 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform.
53
54 To get started with using Wagtail, run the following in a virtual environment:
55
56 
57
58 ```bash
59 pip install wagtail
60 wagtail start mysite
61 cd mysite
62 pip install -r requirements.txt
63 python manage.py migrate
64 python manage.py createsuperuser
65 python manage.py runserver
66 ```
67
68 For detailed installation and setup docs, see [the getting started tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html).
69
70 ### 👨👩👧👦 Who’s using it?
71
72 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org).
73
74 ### 📖 Documentation
75
76 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap.
77
78 For those who are **new to Wagtail**, the [Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html) will help you understand what Wagtail is, and what Wagtail is _not_.
79
80 **For developers** who are ready to jump in to their first Wagtail website the [Getting Started Tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) will guide you through creating and editing your first page.
81
82 **Do you have an existing Django project?** The [Wagtail Integration documentation](https://docs.wagtail.org/en/stable/getting_started/integrating_into_django.html) is the best place to start.
83
84 ### 📌 Compatibility
85
86 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_
87
88 Wagtail supports:
89
90 - Django 3.2.x and 4.0.x
91 - Python 3.7, 3.8, 3.9 and 3.10
92 - PostgreSQL, MySQL and SQLite (with JSON1) as database backends
93
94 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions.
95
96 ---
97
98 ### 📢 Community Support
99
100 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail".
101
102 For topics and discussions that do not fit Stack Overflow's question and answer format we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). Please respect the time and effort of volunteers by not asking the same question in multiple places.
103
104 [](https://github.com/wagtail/wagtail/wiki/Slack)
105
106 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project.
107
108 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail).
109
110 ### 🧑💼 Commercial Support
111
112 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world.
113
114 ### 🔐 Security
115
116 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue.
117
118 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers.
119
120 ### 🕒 Release schedule
121
122 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule).
123
124 #### 🕛 Nightly releases
125
126 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html
127
128 ### 🙋🏽 Contributing
129
130 If you're a Python or Django developer, fork the repo and get stuck in! We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack).
131
132 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label.
133
134 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://www.transifex.com/projects/p/wagtail/).
135
136 ### 🔓 License
137
138 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE) - Free to use and modify for any purpose, including both open and closed-source code.
139
140 ### 👏 Thanks
141
142 We thank the following organisations for their services used in Wagtail's development:
143
144 [](https://www.browserstack.com/)<br>
145 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing.
146
147 [](https://www.squash.io/)<br>
148 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests.
149
150 [](https://assistivlabs.com/)<br>
151 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies.
152
[end of README.md]
[start of wagtail/contrib/modeladmin/helpers/permission.py]
1 from functools import lru_cache
2
3 from django.contrib.auth import get_permission_codename
4 from django.contrib.auth.models import Permission
5 from django.contrib.contenttypes.models import ContentType
6 from django.utils.functional import cached_property
7
8 from wagtail.models import Page, UserPagePermissionsProxy
9
10
11 class PermissionHelper:
12 """
13 Provides permission-related helper functions to help determine what a
14 user can do with a 'typical' model (where permissions are granted
15 model-wide), and to a specific instance of that model.
16 """
17
18 def __init__(self, model, inspect_view_enabled=False):
19 self.model = model
20 self.opts = model._meta
21 self.inspect_view_enabled = inspect_view_enabled
22
23 def get_all_model_permissions(self):
24 """
25 Return a queryset of all Permission objects pertaining to the `model`
26 specified at initialisation.
27 """
28
29 return Permission.objects.filter(
30 content_type__app_label=self.opts.app_label,
31 content_type__model=self.opts.model_name,
32 )
33
34 @cached_property
35 def all_permission_codenames(self):
36 return list(
37 self.get_all_model_permissions()
38 .values_list("codename", flat=True)
39 .distinct()
40 )
41
42 def get_perm_codename(self, action):
43 return get_permission_codename(action, self.opts)
44
45 def user_has_specific_permission(self, user, perm_codename):
46 """
47 Combine `perm_codename` with `self.opts.app_label` to call the provided
48 Django user's built-in `has_perm` method.
49 """
50
51 return user.has_perm("%s.%s" % (self.opts.app_label, perm_codename))
52
53 @lru_cache(maxsize=128)
54 def user_has_any_permissions(self, user):
55 """
56 Return a boolean to indicate whether `user` has any model-wide
57 permissions
58 """
59 for perm_codename in self.all_permission_codenames:
60 if self.user_has_specific_permission(user, perm_codename):
61 return True
62 return False
63
64 def user_can_list(self, user):
65 """
66 Return a boolean to indicate whether `user` is permitted to access the
67 list view for self.model
68 """
69 return self.user_has_any_permissions(user)
70
71 def user_can_create(self, user):
72 """
73 Return a boolean to indicate whether `user` is permitted to create new
74 instances of `self.model`
75 """
76 perm_codename = self.get_perm_codename("add")
77 return self.user_has_specific_permission(user, perm_codename)
78
79 def user_can_inspect_obj(self, user, obj):
80 """
81 Return a boolean to indicate whether `user` is permitted to 'inspect'
82 a specific `self.model` instance.
83 """
84 return self.inspect_view_enabled and self.user_has_any_permissions(user)
85
86 def user_can_edit_obj(self, user, obj):
87 """
88 Return a boolean to indicate whether `user` is permitted to 'change'
89 a specific `self.model` instance.
90 """
91 perm_codename = self.get_perm_codename("change")
92 return self.user_has_specific_permission(user, perm_codename)
93
94 def user_can_delete_obj(self, user, obj):
95 """
96 Return a boolean to indicate whether `user` is permitted to 'delete'
97 a specific `self.model` instance.
98 """
99 perm_codename = self.get_perm_codename("delete")
100 return self.user_has_specific_permission(user, perm_codename)
101
102 def user_can_unpublish_obj(self, user, obj):
103 return False
104
105 def user_can_copy_obj(self, user, obj):
106 return False
107
108
109 class PagePermissionHelper(PermissionHelper):
110 """
111 Provides permission-related helper functions to help determine what
112 a user can do with a model extending Wagtail's Page model. It differs
113 from `PermissionHelper`, because model-wide permissions aren't really
114 relevant. We generally need to determine permissions on an
115 object-specific basis.
116 """
117
118 def get_valid_parent_pages(self, user):
119 """
120 Identifies possible parent pages for the current user by first looking
121 at allowed_parent_page_models() on self.model to limit options to the
122 correct type of page, then checking permissions on those individual
123 pages to make sure we have permission to add a subpage to it.
124 """
125 # Get queryset of pages where this page type can be added
126 allowed_parent_page_content_types = list(
127 ContentType.objects.get_for_models(
128 *self.model.allowed_parent_page_models()
129 ).values()
130 )
131 allowed_parent_pages = Page.objects.filter(
132 content_type__in=allowed_parent_page_content_types
133 )
134
135 # Get queryset of pages where the user has permission to add subpages
136 if user.is_superuser:
137 pages_where_user_can_add = Page.objects.all()
138 else:
139 pages_where_user_can_add = Page.objects.none()
140 user_perms = UserPagePermissionsProxy(user)
141
142 for perm in user_perms.permissions.filter(permission_type="add"):
143 # user has add permission on any subpage of perm.page
144 # (including perm.page itself)
145 pages_where_user_can_add |= Page.objects.descendant_of(
146 perm.page, inclusive=True
147 )
148
149 # Combine them
150 return allowed_parent_pages & pages_where_user_can_add
151
152 def user_can_list(self, user):
153 """
154 For models extending Page, permitted actions are determined by
155 permissions on individual objects. Rather than check for change
156 permissions on every object individually (which would be quite
157 resource intensive), we simply always allow the list view to be
158 viewed, and limit further functionality when relevant.
159 """
160 return True
161
162 def user_can_create(self, user):
163 """
164 For models extending Page, whether or not a page of this type can be
165 added somewhere in the tree essentially determines the add permission,
166 rather than actual model-wide permissions
167 """
168 return self.get_valid_parent_pages(user).exists()
169
170 def user_can_edit_obj(self, user, obj):
171 perms = obj.permissions_for_user(user)
172 return perms.can_edit()
173
174 def user_can_delete_obj(self, user, obj):
175 perms = obj.permissions_for_user(user)
176 return perms.can_delete()
177
178 def user_can_publish_obj(self, user, obj):
179 perms = obj.permissions_for_user(user)
180 return obj.live and perms.can_unpublish()
181
182 def user_can_copy_obj(self, user, obj):
183 parent_page = obj.get_parent()
184 return parent_page.permissions_for_user(user).can_publish_subpage()
185
[end of wagtail/contrib/modeladmin/helpers/permission.py]
[start of wagtail/contrib/modeladmin/options.py]
1 from warnings import warn
2
3 from django.conf import settings
4 from django.contrib.admin import site as default_django_admin_site
5 from django.contrib.auth.models import Permission
6 from django.core import checks
7 from django.core.exceptions import ImproperlyConfigured
8 from django.db.models import Model
9 from django.urls import re_path
10 from django.utils.safestring import mark_safe
11
12 from wagtail import hooks
13 from wagtail.admin.admin_url_finder import register_admin_url_finder
14 from wagtail.admin.checks import check_panels_in_model
15 from wagtail.admin.panels import ObjectList, extract_panel_definitions_from_model_class
16 from wagtail.coreutils import accepts_kwarg
17 from wagtail.models import Page, TranslatableMixin
18 from wagtail.utils.deprecation import RemovedInWagtail50Warning
19
20 from .helpers import (
21 AdminURLHelper,
22 ButtonHelper,
23 DjangoORMSearchHandler,
24 ModelAdminURLFinder,
25 PageAdminURLHelper,
26 PageButtonHelper,
27 PagePermissionHelper,
28 PermissionHelper,
29 )
30 from .menus import GroupMenuItem, ModelAdminMenuItem, SubMenu
31 from .mixins import ThumbnailMixin # NOQA
32 from .views import (
33 ChooseParentView,
34 CreateView,
35 DeleteView,
36 EditView,
37 HistoryView,
38 IndexView,
39 InspectView,
40 )
41
42
43 class WagtailRegisterable:
44 """
45 Base class, providing a more convenient way for ModelAdmin or
46 ModelAdminGroup instances to be registered with Wagtail's admin area.
47 """
48
49 add_to_settings_menu = False
50 add_to_admin_menu = True
51 exclude_from_explorer = False
52
53 def register_with_wagtail(self):
54 @hooks.register("register_permissions")
55 def register_permissions():
56 return self.get_permissions_for_registration()
57
58 @hooks.register("register_admin_urls")
59 def register_admin_urls():
60 return self.get_admin_urls_for_registration()
61
62 if self.add_to_settings_menu:
63 menu_hook = "register_settings_menu_item"
64 elif self.add_to_admin_menu:
65 menu_hook = "register_admin_menu_item"
66 else:
67 menu_hook = None
68
69 if menu_hook:
70
71 @hooks.register(menu_hook)
72 def register_admin_menu_item():
73 return self.get_menu_item()
74
75 # Overriding the explorer page queryset is a somewhat 'niche' / experimental
76 # operation, so only attach that hook if we specifically opt into it
77 # by returning True from will_modify_explorer_page_queryset
78 if self.will_modify_explorer_page_queryset():
79
80 @hooks.register("construct_explorer_page_queryset")
81 def construct_explorer_page_queryset(parent_page, queryset, request):
82 return self.modify_explorer_page_queryset(
83 parent_page, queryset, request
84 )
85
86 self.register_admin_url_finders()
87
88 def register_admin_url_finders(self):
89 pass
90
91 def will_modify_explorer_page_queryset(self):
92 return False
93
94
95 class ModelAdmin(WagtailRegisterable):
96 """
97 The core modeladmin class. It provides an alternative means to
98 list and manage instances of a given 'model' within Wagtail's admin area.
99 It is essentially comprised of attributes and methods that allow a degree
100 of control over how the data is represented, and other methods to make the
101 additional functionality available via various Wagtail hooks.
102 """
103
104 model = None
105 menu_label = None
106 menu_icon = None
107 menu_order = None
108 list_display = ("__str__",)
109 list_display_add_buttons = None
110 list_export = ()
111 inspect_view_fields = []
112 inspect_view_fields_exclude = []
113 inspect_view_enabled = False
114 history_view_enabled = True
115 empty_value_display = "-"
116 list_filter = ()
117 list_select_related = False
118 list_per_page = 100
119 search_fields = None
120 ordering = None
121 parent = None
122 prepopulated_fields = {}
123 index_view_class = IndexView
124 create_view_class = CreateView
125 edit_view_class = EditView
126 inspect_view_class = InspectView
127 delete_view_class = DeleteView
128 history_view_class = HistoryView
129 choose_parent_view_class = ChooseParentView
130 index_template_name = ""
131 create_template_name = ""
132 edit_template_name = ""
133 inspect_template_name = ""
134 delete_template_name = ""
135 history_template_name = ""
136 choose_parent_template_name = ""
137 search_handler_class = DjangoORMSearchHandler
138 extra_search_kwargs = {}
139 permission_helper_class = None
140 url_helper_class = None
141 button_helper_class = None
142 index_view_extra_css = []
143 index_view_extra_js = []
144 inspect_view_extra_css = []
145 inspect_view_extra_js = []
146 form_view_extra_css = []
147 form_view_extra_js = []
148 form_fields_exclude = []
149 base_url_path = None
150
151 def __init__(self, parent=None):
152 """
153 Don't allow initialisation unless self.model is set to a valid model
154 """
155 if not self.model or not issubclass(self.model, Model):
156 raise ImproperlyConfigured(
157 "The model attribute on your '%s' class must be set, and "
158 "must be a valid Django model." % self.__class__.__name__
159 )
160 self.opts = self.model._meta
161 self.is_pagemodel = issubclass(self.model, Page)
162 self.parent = parent
163 self.permission_helper = self.get_permission_helper_class()(
164 self.model, self.inspect_view_enabled
165 )
166 url_helper_class = self.get_url_helper_class()
167 if accepts_kwarg(url_helper_class, "base_url_path"):
168 self.url_helper = url_helper_class(
169 self.model, base_url_path=self.base_url_path
170 )
171 else:
172 warn(
173 "%s.__init__ needs to be updated to accept a `base_url_path` keyword argument"
174 % url_helper_class.__name__,
175 category=RemovedInWagtail50Warning,
176 )
177 self.url_helper = url_helper_class(self.model)
178
179 # Needed to support RelatedFieldListFilter
180 # See: https://github.com/wagtail/wagtail/issues/5105
181 self.admin_site = default_django_admin_site
182
183 def get_permission_helper_class(self):
184 """
185 Returns a permission_helper class to help with permission-based logic
186 for the given model.
187 """
188 if self.permission_helper_class:
189 return self.permission_helper_class
190 if self.is_pagemodel:
191 return PagePermissionHelper
192 return PermissionHelper
193
194 def get_url_helper_class(self):
195 if self.url_helper_class:
196 return self.url_helper_class
197 if self.is_pagemodel:
198 return PageAdminURLHelper
199 return AdminURLHelper
200
201 def get_button_helper_class(self):
202 """
203 Returns a ButtonHelper class to help generate buttons for the given
204 model.
205 """
206 if self.button_helper_class:
207 return self.button_helper_class
208 if self.is_pagemodel:
209 return PageButtonHelper
210 return ButtonHelper
211
212 def get_menu_label(self):
213 """
214 Returns the label text to be used for the menu item.
215 """
216 return self.menu_label or self.opts.verbose_name_plural.title()
217
218 def get_menu_icon(self):
219 """
220 Returns the icon to be used for the menu item. The value is prepended
221 with 'icon-' to create the full icon class name. For design
222 consistency, the same icon is also applied to the main heading for
223 views called by this class.
224 """
225 if self.menu_icon:
226 return self.menu_icon
227 if self.is_pagemodel:
228 return "doc-full-inverse"
229 return "snippet"
230
231 def get_menu_order(self):
232 """
233 Returns the 'order' to be applied to the menu item. 000 being first
234 place. Where ModelAdminGroup is used, the menu_order value should be
235 applied to that, and any ModelAdmin classes added to the 'items'
236 attribute will be ordered automatically, based on their order in that
237 sequence.
238 """
239 return self.menu_order or 999
240
241 def get_list_display(self, request):
242 """
243 Return a sequence containing the fields/method output to be displayed
244 in the list view.
245 """
246 return self.list_display
247
248 def get_list_display_add_buttons(self, request):
249 """
250 Return the name of the field/method from list_display where action
251 buttons should be added. Defaults to the first item from
252 get_list_display()
253 """
254 return self.list_display_add_buttons or self.get_list_display(request)[0]
255
256 def get_list_export(self, request):
257 """
258 Return a sequence containing the fields/method output to be displayed
259 in spreadsheet exports.
260 """
261 return self.list_export
262
263 def get_empty_value_display(self, field_name=None):
264 """
265 Return the empty_value_display value defined on ModelAdmin
266 """
267 return mark_safe(self.empty_value_display)
268
269 def get_list_filter(self, request):
270 """
271 Returns a sequence containing the fields to be displayed as filters in
272 the right sidebar in the list view.
273 """
274 list_filter = self.list_filter
275
276 if (
277 getattr(settings, "WAGTAIL_I18N_ENABLED", False)
278 and issubclass(self.model, TranslatableMixin)
279 and "locale" not in list_filter
280 ):
281 list_filter += ("locale",)
282
283 return list_filter
284
285 def get_ordering(self, request):
286 """
287 Returns a sequence defining the default ordering for results in the
288 list view.
289 """
290 return self.ordering or ()
291
292 def get_queryset(self, request):
293 """
294 Returns a QuerySet of all model instances that can be edited by the
295 admin site.
296 """
297 qs = self.model._default_manager.get_queryset()
298 ordering = self.get_ordering(request)
299 if ordering:
300 qs = qs.order_by(*ordering)
301 if self.is_pagemodel:
302 # If we're listing pages, exclude the root page
303 qs = qs.exclude(depth=1)
304 return qs
305
306 def get_search_fields(self, request):
307 """
308 Returns a sequence defining which fields on a model should be searched
309 when a search is initiated from the list view.
310 """
311 return self.search_fields or ()
312
313 def get_search_handler(self, request, search_fields=None):
314 """
315 Returns an instance of ``self.search_handler_class`` that can be used by
316 ``IndexView``.
317 """
318 return self.search_handler_class(
319 search_fields or self.get_search_fields(request)
320 )
321
322 def get_extra_search_kwargs(self, request, search_term):
323 """
324 Returns a dictionary of additional kwargs to be sent to
325 ``SearchHandler.search_queryset()``.
326 """
327 return self.extra_search_kwargs
328
329 def get_extra_attrs_for_row(self, obj, context):
330 """
331 Return a dictionary of HTML attributes to be added to the `<tr>`
332 element for the supplied `obj` when rendering the results table in
333 `index_view`. `data-object-pk` is already added by default.
334 """
335 return {}
336
337 def get_extra_class_names_for_field_col(self, obj, field_name):
338 """
339 Return a list of additional CSS class names to be added to the table
340 cell's `class` attribute when rendering the output of `field_name` for
341 `obj` in `index_view`.
342
343 Must always return a list.
344 """
345 return []
346
347 def get_extra_attrs_for_field_col(self, obj, field_name):
348 """
349 Return a dictionary of additional HTML attributes to be added to a
350 table cell when rendering the output of `field_name` for `obj` in
351 `index_view`.
352
353 Must always return a dictionary.
354 """
355 return {}
356
357 def get_prepopulated_fields(self, request):
358 """
359 Returns a dict specifying custom prepopulated field slugs on Create/Edit pages.
360 """
361 return self.prepopulated_fields or {}
362
363 # RemovedInWagtail50Warning - remove request arg, included here so that old-style super()
364 # calls will still work
365 def get_form_fields_exclude(self, request=None):
366 """
367 Returns a list or tuple of field names to be excluded from Create/Edit pages.
368 """
369 return self.form_fields_exclude
370
371 def get_index_view_extra_css(self):
372 return self.index_view_extra_css
373
374 def get_index_view_extra_js(self):
375 return self.index_view_extra_js
376
377 def get_form_view_extra_css(self):
378 return self.form_view_extra_css
379
380 def get_form_view_extra_js(self):
381 return self.form_view_extra_js
382
383 def get_inspect_view_extra_css(self):
384 return self.inspect_view_extra_css
385
386 def get_inspect_view_extra_js(self):
387 return self.inspect_view_extra_js
388
389 def get_inspect_view_fields(self):
390 """
391 Return a list of field names, indicating the model fields that
392 should be displayed in the 'inspect' view. Returns the value of the
393 'inspect_view_fields' attribute if populated, otherwise a sensible
394 list of fields is generated automatically, with any field named in
395 'inspect_view_fields_exclude' not being included.
396 """
397 if not self.inspect_view_fields:
398 found_fields = []
399 for f in self.model._meta.get_fields():
400 if f.name not in self.inspect_view_fields_exclude:
401 if f.concrete and (
402 not f.is_relation or (not f.auto_created and f.related_model)
403 ):
404 found_fields.append(f.name)
405 return found_fields
406 return self.inspect_view_fields
407
408 def index_view(self, request):
409 """
410 Instantiates a class-based view to provide listing functionality for
411 the assigned model. The view class used can be overridden by changing
412 the 'index_view_class' attribute.
413 """
414 kwargs = {"model_admin": self}
415 view_class = self.index_view_class
416 return view_class.as_view(**kwargs)(request)
417
418 def create_view(self, request):
419 """
420 Instantiates a class-based view to provide 'creation' functionality for
421 the assigned model, or redirect to Wagtail's create view if the
422 assigned model extends 'Page'. The view class used can be overridden by
423 changing the 'create_view_class' attribute.
424 """
425 kwargs = {"model_admin": self}
426 view_class = self.create_view_class
427 return view_class.as_view(**kwargs)(request)
428
429 def choose_parent_view(self, request):
430 """
431 Instantiates a class-based view to allow a parent page to be chosen
432 for a new object, where the assigned model extends Wagtail's Page
433 model, and there is more than one potential parent for new instances.
434 The view class used can be overridden by changing the
435 'choose_parent_view_class' attribute.
436 """
437 kwargs = {"model_admin": self}
438 view_class = self.choose_parent_view_class
439 return view_class.as_view(**kwargs)(request)
440
441 def inspect_view(self, request, instance_pk):
442 """
443 Instantiates a class-based view to provide 'inspect' functionality for
444 the assigned model. The view class used can be overridden by changing
445 the 'inspect_view_class' attribute.
446 """
447 kwargs = {"model_admin": self, "instance_pk": instance_pk}
448 view_class = self.inspect_view_class
449 return view_class.as_view(**kwargs)(request)
450
451 def edit_view(self, request, instance_pk):
452 """
453 Instantiates a class-based view to provide 'edit' functionality for the
454 assigned model, or redirect to Wagtail's edit view if the assigned
455 model extends 'Page'. The view class used can be overridden by changing
456 the 'edit_view_class' attribute.
457 """
458 kwargs = {"model_admin": self, "instance_pk": instance_pk}
459 view_class = self.edit_view_class
460 return view_class.as_view(**kwargs)(request)
461
462 def delete_view(self, request, instance_pk):
463 """
464 Instantiates a class-based view to provide 'delete confirmation'
465 functionality for the assigned model, or redirect to Wagtail's delete
466 confirmation view if the assigned model extends 'Page'. The view class
467 used can be overridden by changing the 'delete_view_class'
468 attribute.
469 """
470 kwargs = {"model_admin": self, "instance_pk": instance_pk}
471 view_class = self.delete_view_class
472 return view_class.as_view(**kwargs)(request)
473
474 def history_view(self, request, instance_pk):
475 kwargs = {"model_admin": self, "instance_pk": instance_pk}
476 view_class = self.history_view_class
477 return view_class.as_view(**kwargs)(request)
478
479 # RemovedInWagtail50Warning - remove instance and request args, included here so that
480 # old-style super() calls will still work
481 def get_edit_handler(self, instance=None, request=None):
482 """
483 Returns the appropriate edit_handler for this modeladmin class.
484 edit_handlers can be defined either on the model itself or on the
485 modeladmin (as property edit_handler or panels). Falls back to
486 extracting panel / edit handler definitions from the model class.
487 """
488 if hasattr(self, "edit_handler"):
489 edit_handler = self.edit_handler
490 elif hasattr(self, "panels"):
491 panels = self.panels
492 edit_handler = ObjectList(panels)
493 elif hasattr(self.model, "edit_handler"):
494 edit_handler = self.model.edit_handler
495 elif hasattr(self.model, "panels"):
496 panels = self.model.panels
497 edit_handler = ObjectList(panels)
498 else:
499 try:
500 fields_to_exclude = self.get_form_fields_exclude()
501 except TypeError:
502 fields_to_exclude = self.get_form_fields_exclude(request=None)
503 warn(
504 "%s.get_form_fields_exclude should not accept a request argument"
505 % type(self).__name__,
506 category=RemovedInWagtail50Warning,
507 )
508
509 panels = extract_panel_definitions_from_model_class(
510 self.model, exclude=fields_to_exclude
511 )
512 edit_handler = ObjectList(panels)
513 return edit_handler
514
515 def get_templates(self, action="index"):
516 """
517 Utility function that provides a list of templates to try for a given
518 view, when the template isn't overridden by one of the template
519 attributes on the class.
520 """
521 app_label = self.opts.app_label.lower()
522 model_name = self.opts.model_name.lower()
523 return [
524 "modeladmin/%s/%s/%s.html" % (app_label, model_name, action),
525 "modeladmin/%s/%s.html" % (app_label, action),
526 "modeladmin/%s.html" % (action,),
527 ]
528
529 def get_index_template(self):
530 """
531 Returns a template to be used when rendering 'index_view'. If a
532 template is specified by the 'index_template_name' attribute, that will
533 be used. Otherwise, a list of preferred template names are returned.
534 """
535 return self.index_template_name or self.get_templates("index")
536
537 def get_choose_parent_template(self):
538 """
539 Returns a template to be used when rendering 'choose_parent_view'. If a
540 template is specified by the 'choose_parent_template_name' attribute,
541 that will be used. Otherwise, a list of preferred template names are
542 returned.
543 """
544 return self.choose_parent_template_name or self.get_templates("choose_parent")
545
546 def get_inspect_template(self):
547 """
548 Returns a template to be used when rendering 'inspect_view'. If a
549 template is specified by the 'inspect_template_name' attribute, that
550 will be used. Otherwise, a list of preferred template names are
551 returned.
552 """
553 return self.inspect_template_name or self.get_templates("inspect")
554
555 def get_history_template(self):
556 """
557 Returns a template to be used when rendering 'history_view'. If a
558 template is specified by the 'history_template_name' attribute, that
559 will be used. Otherwise, a list of preferred template names are
560 returned.
561 """
562 return self.history_template_name or self.get_templates("history")
563
564 def get_create_template(self):
565 """
566 Returns a template to be used when rendering 'create_view'. If a
567 template is specified by the 'create_template_name' attribute,
568 that will be used. Otherwise, a list of preferred template names are
569 returned.
570 """
571 return self.create_template_name or self.get_templates("create")
572
573 def get_edit_template(self):
574 """
575 Returns a template to be used when rendering 'edit_view'. If a template
576 is specified by the 'edit_template_name' attribute, that will be used.
577 Otherwise, a list of preferred template names are returned.
578 """
579 return self.edit_template_name or self.get_templates("edit")
580
581 def get_delete_template(self):
582 """
583 Returns a template to be used when rendering 'delete_view'. If
584 a template is specified by the 'delete_template_name'
585 attribute, that will be used. Otherwise, a list of preferred template
586 names are returned.
587 """
588 return self.delete_template_name or self.get_templates("delete")
589
590 def get_menu_item(self, order=None):
591 """
592 Utilised by Wagtail's 'register_menu_item' hook to create a menu item
593 to access the listing view, or can be called by ModelAdminGroup
594 to create a SubMenu
595 """
596 return ModelAdminMenuItem(self, order or self.get_menu_order())
597
598 def get_permissions_for_registration(self):
599 """
600 Utilised by Wagtail's 'register_permissions' hook to allow permissions
601 for a model to be assigned to groups in settings. This is only required
602 if the model isn't a Page model, and isn't registered as a Snippet
603 """
604 from wagtail.snippets.models import SNIPPET_MODELS
605
606 if not self.is_pagemodel and self.model not in SNIPPET_MODELS:
607 return self.permission_helper.get_all_model_permissions()
608 return Permission.objects.none()
609
610 def get_admin_urls_for_registration(self):
611 """
612 Utilised by Wagtail's 'register_admin_urls' hook to register urls for
613 the views that this class offers.
614 """
615 urls = (
616 re_path(
617 self.url_helper.get_action_url_pattern("index"),
618 self.index_view,
619 name=self.url_helper.get_action_url_name("index"),
620 ),
621 re_path(
622 self.url_helper.get_action_url_pattern("create"),
623 self.create_view,
624 name=self.url_helper.get_action_url_name("create"),
625 ),
626 re_path(
627 self.url_helper.get_action_url_pattern("edit"),
628 self.edit_view,
629 name=self.url_helper.get_action_url_name("edit"),
630 ),
631 re_path(
632 self.url_helper.get_action_url_pattern("delete"),
633 self.delete_view,
634 name=self.url_helper.get_action_url_name("delete"),
635 ),
636 )
637 if self.inspect_view_enabled:
638 urls = urls + (
639 re_path(
640 self.url_helper.get_action_url_pattern("inspect"),
641 self.inspect_view,
642 name=self.url_helper.get_action_url_name("inspect"),
643 ),
644 )
645 if self.history_view_enabled:
646 urls = urls + (
647 re_path(
648 self.url_helper.get_action_url_pattern("history"),
649 self.history_view,
650 name=self.url_helper.get_action_url_name("history"),
651 ),
652 )
653 if self.is_pagemodel:
654 urls = urls + (
655 re_path(
656 self.url_helper.get_action_url_pattern("choose_parent"),
657 self.choose_parent_view,
658 name=self.url_helper.get_action_url_name("choose_parent"),
659 ),
660 )
661 return urls
662
663 def will_modify_explorer_page_queryset(self):
664 return self.is_pagemodel and self.exclude_from_explorer
665
666 def modify_explorer_page_queryset(self, parent_page, queryset, request):
667 if self.is_pagemodel and self.exclude_from_explorer:
668 queryset = queryset.not_type(self.model)
669 return queryset
670
671 def register_with_wagtail(self):
672 super().register_with_wagtail()
673
674 @checks.register("panels")
675 def modeladmin_model_check(app_configs, **kwargs):
676 errors = check_panels_in_model(self.model, "modeladmin")
677 return errors
678
679 def register_admin_url_finders(self):
680 if not self.is_pagemodel:
681 finder_class = type(
682 "_ModelAdminURLFinder",
683 (ModelAdminURLFinder,),
684 {
685 "permission_helper": self.permission_helper,
686 "url_helper": self.url_helper,
687 },
688 )
689 register_admin_url_finder(self.model, finder_class)
690
691
692 class ModelAdminGroup(WagtailRegisterable):
693 """
694 Acts as a container for grouping together multiple PageModelAdmin and
695 SnippetModelAdmin instances. Creates a menu item with a SubMenu for
696 accessing the listing pages of those instances
697 """
698
699 items = ()
700 menu_label = None
701 menu_order = None
702 menu_icon = None
703
704 def __init__(self):
705 """
706 When initialising, instantiate the classes within 'items', and assign
707 the instances to a 'modeladmin_instances' attribute for convenient
708 access later
709 """
710 self.modeladmin_instances = []
711 for ModelAdminClass in self.items:
712 self.modeladmin_instances.append(ModelAdminClass(parent=self))
713
714 def get_menu_label(self):
715 return self.menu_label or self.get_app_label_from_subitems()
716
717 def get_app_label_from_subitems(self):
718 for instance in self.modeladmin_instances:
719 return instance.opts.app_label.title()
720 return ""
721
722 def get_menu_icon(self):
723 return self.menu_icon or "folder-open-inverse"
724
725 def get_menu_order(self):
726 return self.menu_order or 999
727
728 def get_menu_item(self):
729 """
730 Utilised by Wagtail's 'register_menu_item' hook to create a menu
731 for this group with a SubMenu linking to listing pages for any
732 associated ModelAdmin instances
733 """
734 if self.modeladmin_instances:
735 submenu = SubMenu(self.get_submenu_items())
736 return GroupMenuItem(self, self.get_menu_order(), submenu)
737
738 def get_submenu_items(self):
739 menu_items = []
740 item_order = 1
741 for modeladmin in self.modeladmin_instances:
742 menu_items.append(modeladmin.get_menu_item(order=item_order))
743 item_order += 1
744 return menu_items
745
746 def get_permissions_for_registration(self):
747 """
748 Utilised by Wagtail's 'register_permissions' hook to allow permissions
749 for all models grouped by this class to be assigned to Groups in
750 settings.
751 """
752 qs = Permission.objects.none()
753 for instance in self.modeladmin_instances:
754 qs = qs | instance.get_permissions_for_registration()
755 return qs
756
757 def get_admin_urls_for_registration(self):
758 """
759 Utilised by Wagtail's 'register_admin_urls' hook to register urls
760 used by any associated ModelAdmin instances
761 """
762 urls = ()
763 for instance in self.modeladmin_instances:
764 urls += instance.get_admin_urls_for_registration()
765 return urls
766
767 def will_modify_explorer_page_queryset(self):
768 return any(
769 instance.will_modify_explorer_page_queryset()
770 for instance in self.modeladmin_instances
771 )
772
773 def modify_explorer_page_queryset(self, parent_page, queryset, request):
774 for instance in self.modeladmin_instances:
775 queryset = instance.modify_explorer_page_queryset(
776 parent_page, queryset, request
777 )
778 return queryset
779
780 def register_with_wagtail(self):
781 super().register_with_wagtail()
782
783 @checks.register("panels")
784 def modeladmin_model_check(app_configs, **kwargs):
785 errors = []
786 for modeladmin_class in self.items:
787 errors.extend(check_panels_in_model(modeladmin_class.model))
788 return errors
789
790 def register_admin_url_finders(self):
791 for instance in self.modeladmin_instances:
792 instance.register_admin_url_finders()
793
794
795 def modeladmin_register(modeladmin_class):
796 """
797 Method for registering ModelAdmin or ModelAdminGroup classes with Wagtail.
798 """
799 instance = modeladmin_class()
800 instance.register_with_wagtail()
801 return modeladmin_class
802
[end of wagtail/contrib/modeladmin/options.py]
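As a rough illustration of how the classes in this file are typically wired together, the sketch below subclasses `ModelAdmin`, overrides one of the hooks defined above and registers the result with `modeladmin_register`. The `Event` model and its fields are hypothetical and not part of this repository; only the modeladmin API shown above is assumed.

    # Hypothetical wagtail_hooks.py in a project app -- a sketch, not repository code.
    from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register

    from .models import Event  # hypothetical model


    class EventAdmin(ModelAdmin):
        model = Event
        menu_icon = "date"                      # any registered Wagtail icon name
        list_display = ("title", "start_date")
        list_filter = ("start_date",)
        search_fields = ("title",)

        def get_queryset(self, request):
            # Narrow the listing without touching the model's default manager.
            qs = super().get_queryset(request)
            return qs.filter(is_published=True)  # hypothetical field


    modeladmin_register(EventAdmin)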
[start of wagtail/migrations/0002_initial_data.py]
1 # -*- coding: utf-8 -*-
2 from django.db import migrations
3
4
5 def initial_data(apps, schema_editor):
6 ContentType = apps.get_model("contenttypes.ContentType")
7 Group = apps.get_model("auth.Group")
8 Page = apps.get_model("wagtailcore.Page")
9 Site = apps.get_model("wagtailcore.Site")
10 GroupPagePermission = apps.get_model("wagtailcore.GroupPagePermission")
11
12 # Create page content type
13 page_content_type, created = ContentType.objects.get_or_create(
14 model="page", app_label="wagtailcore"
15 )
16
17 # Create root page
18 root = Page.objects.create(
19 title="Root",
20 slug="root",
21 content_type=page_content_type,
22 path="0001",
23 depth=1,
24 numchild=1,
25 url_path="/",
26 )
27
28 # Create homepage
29 homepage = Page.objects.create(
30 title="Welcome to your new Wagtail site!",
31 slug="home",
32 content_type=page_content_type,
33 path="00010001",
34 depth=2,
35 numchild=0,
36 url_path="/home/",
37 )
38
39 # Create default site
40 Site.objects.create(
41 hostname="localhost", root_page_id=homepage.id, is_default_site=True
42 )
43
44 # Create auth groups
45 moderators_group = Group.objects.create(name="Moderators")
46 editors_group = Group.objects.create(name="Editors")
47
48 # Create group permissions
49 GroupPagePermission.objects.create(
50 group=moderators_group,
51 page=root,
52 permission_type="add",
53 )
54 GroupPagePermission.objects.create(
55 group=moderators_group,
56 page=root,
57 permission_type="edit",
58 )
59 GroupPagePermission.objects.create(
60 group=moderators_group,
61 page=root,
62 permission_type="publish",
63 )
64
65 GroupPagePermission.objects.create(
66 group=editors_group,
67 page=root,
68 permission_type="add",
69 )
70 GroupPagePermission.objects.create(
71 group=editors_group,
72 page=root,
73 permission_type="edit",
74 )
75
76
77 def remove_initial_data(apps, schema_editor):
78 """This function does nothing. The below code is commented out together
79 with an explanation of why we don't need to bother reversing any of the
80 initial data"""
81 pass
82 # This does not need to be deleted, Django takes care of it.
83 # page_content_type = ContentType.objects.get(
84 # model='page',
85 # app_label='wagtailcore',
86 # )
87
88 # Page objects: Do nothing, the table will be deleted when reversing 0001
89
90 # Do not reverse Site creation since other models might depend on it
91
92 # Remove auth groups -- is this safe? External objects might depend
93 # on these groups... seems unsafe.
94 # Group.objects.filter(
95 # name__in=('Moderators', 'Editors')
96 # ).delete()
97 #
98 # Likewise, we're leaving all GroupPagePermission unchanged as users may
99 # have been assigned such permissions and it's harmless to leave them.
100
101
102 class Migration(migrations.Migration):
103
104 dependencies = [
105 ("wagtailcore", "0001_initial"),
106 ]
107
108 operations = [
109 migrations.RunPython(initial_data, remove_initial_data),
110 ]
111
[end of wagtail/migrations/0002_initial_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
73c8178d0d03226ea0b335ffedd07ff4e7ccbcd1
|
ModelAdmin title-column always links to "edit" page
### Issue Summary
Even if a user does not have the "change" permission, the `ModelAdmin` still renders a link to the "edit"-view, which then results in a permission error for this user. For example, we often use ModelAdmins which use `inspect_view_enabled = True` and only allow inspect and delete.
This worked before https://github.com/wagtail/wagtail/pull/7408 (wagtail < 2.15), because the title column was not linked and the "Edit" button did not appear if the user didn't have the permission.
I'm not sure what the best way to solve this is. Falling back to the inspect view only works if it has been enabled. Skipping the link if the permission is missing may re-create the issue wagtail/wagtail#7333 in some cases, or at least needs some extra work to improve the HTML for screen readers.
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Create a simple model and a `ModelAdmin` for it (a minimal sketch is shown after this list)
3. Create a user in the "Moderators" group and only add the "view" permission for the new model to the group.
4. Create an instance with a superuser and find the linked item with the newly created moderator.
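For step 2, a minimal setup along these lines is enough to reproduce the behaviour (the `Book` model and app layout are made up for illustration; only the `ModelAdmin` API from this repository is assumed):

    # myapp/wagtail_hooks.py -- minimal repro sketch
    from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register

    from .models import Book  # any simple Django model


    @modeladmin_register
    class BookAdmin(ModelAdmin):
        model = Book
        inspect_view_enabled = True  # users should only inspect/delete, never edit

With only the "view" permission assigned in step 3, the index listing still wraps the title column in a link to the edit view, which raises a permission error when followed.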
### Technical details
* Wagtail version: >= 2.15
|
After looking at the code again, I would propose the following solution:
Instead of only using the "edit"-URL here
https://github.com/wagtail/wagtail/blob/820c27fca9abec82406dad0ca6cbbf66c7e9ed7d/wagtail/contrib/modeladmin/templatetags/modeladmin_tags.py#L74-L84
I would instead use the `ModelAdmin`'s `button_helper.get_buttons_for_obj()` and use the URL of the first button that defines a non-empty "url" (so it won't use buttons that only define classes for JS-side actions).
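A rough sketch of that selection logic (only the idea, not the final implementation; the buttons returned by `get_buttons_for_obj()` are plain dicts whose keys include "url" and "title"):

    buttons = view.button_helper.get_buttons_for_obj(result)
    primary = next((btn for btn in buttons if btn.get("url")), None)
    if primary is not None:
        link_url = primary["url"]
        link_title = primary.get("title", "")
    # else: render the cell without a link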
With that the developer can easily change the "default" action by reordering the buttons returned by the button helper.
If that's ok, I could make a PR for that.
|
2022-06-08T13:40:37Z
|
<patch>
diff --git a/wagtail/contrib/modeladmin/helpers/button.py b/wagtail/contrib/modeladmin/helpers/button.py
--- a/wagtail/contrib/modeladmin/helpers/button.py
+++ b/wagtail/contrib/modeladmin/helpers/button.py
@@ -107,6 +107,15 @@ def get_buttons_for_obj(
btns.append(self.delete_button(pk, classnames_add, classnames_exclude))
return btns
+ def get_primary_button(self, obj):
+ ph = self.permission_helper
+ usr = self.request.user
+ pk = getattr(obj, self.opts.pk.attname)
+ if ph.user_can_edit_obj(usr, obj):
+ return self.edit_button(pk)
+ if ph.user_can_inspect_obj(usr, obj):
+ return self.inspect_button(pk)
+
class PageButtonHelper(ButtonHelper):
diff --git a/wagtail/contrib/modeladmin/templatetags/modeladmin_tags.py b/wagtail/contrib/modeladmin/templatetags/modeladmin_tags.py
--- a/wagtail/contrib/modeladmin/templatetags/modeladmin_tags.py
+++ b/wagtail/contrib/modeladmin/templatetags/modeladmin_tags.py
@@ -71,13 +71,15 @@ def items_for_result(view, result, request):
row_attrs = modeladmin.get_extra_attrs_for_field_col(result, field_name)
row_attrs["class"] = " ".join(row_classes)
row_attrs_flat = flatatt(row_attrs)
+ primary_button = None
if field_name == modeladmin.get_list_display_add_buttons(request):
- edit_url = view.url_helper.get_action_url("edit", result.pk)
+ primary_button = view.button_helper.get_primary_button(result)
+ if primary_button is not None and primary_button.get("url"):
yield format_html(
'<td{}><div class="title-wrapper"><a href="{}" title="{}">{}</a></div></td>',
row_attrs_flat,
- edit_url,
- _("Edit this %s") % view.verbose_name,
+ primary_button["url"],
+ primary_button.get("title", ""),
result_repr,
)
else:
</patch>
|
[]
|
[]
| |||
ytdl-org__youtube-dl-2725
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in subtitle download error reporting
Exceptions during subtitle downloading erroneously try to report errors using the description filename instead of the subtitle filename. Depending on whether `--write-description` is set or not, this causes either a misleading error message or an `UnboundLocalError` exception.
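For illustration, the failure mode is roughly of the following shape; names are schematic and not copied from `youtube_dl/YoutubeDL.py`:

    # Schematic repro of the reported behaviour, not the actual youtube-dl code.
    def write_files(write_description, write_subtitles):
        if write_description:
            descfn = 'video.description'   # only bound when --write-description is set

        if write_subtitles:
            sub_filename = 'video.en.srt'
            try:
                raise IOError('simulated write failure')
            except (OSError, IOError):
                # Bug: the message refers to descfn, the description filename.
                # Without --write-description this raises UnboundLocalError;
                # with it, the message points at the wrong file.
                print('Cannot write subtitles file ' + descfn)

    write_files(write_description=False, write_subtitles=True)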
</issue>
<code>
[start of README.md]
1 % YOUTUBE-DL(1)
2
3 # NAME
4 youtube-dl - download videos from youtube.com or other video platforms
5
6 # SYNOPSIS
7 **youtube-dl** [OPTIONS] URL [URL...]
8
9 # DESCRIPTION
10 **youtube-dl** is a small command-line program to download videos from
11 YouTube.com and a few more sites. It requires the Python interpreter, version
12 2.6, 2.7, or 3.3+, and it is not platform specific. It should work on
13 your Unix box, on Windows or on Mac OS X. It is released to the public domain,
14 which means you can modify it, redistribute it or use it however you like.
15
16 # OPTIONS
17 -h, --help print this help text and exit
18 --version print program version and exit
19 -U, --update update this program to latest version. Make
20 sure that you have sufficient permissions
21 (run with sudo if needed)
22 -i, --ignore-errors continue on download errors, for example to
23 skip unavailable videos in a playlist
24 --abort-on-error Abort downloading of further videos (in the
25 playlist or the command line) if an error
26 occurs
27 --dump-user-agent display the current browser identification
28 --user-agent UA specify a custom user agent
29 --referer REF specify a custom referer, use if the video
30 access is restricted to one domain
31 --add-header FIELD:VALUE specify a custom HTTP header and its value,
32 separated by a colon ':'. You can use this
33 option multiple times
34 --list-extractors List all supported extractors and the URLs
35 they would handle
36 --extractor-descriptions Output descriptions of all supported
37 extractors
38 --proxy URL Use the specified HTTP/HTTPS proxy. Pass in
39 an empty string (--proxy "") for direct
40 connection
41 --no-check-certificate Suppress HTTPS certificate validation.
42 --prefer-insecure Use an unencrypted connection to retrieve
43 information about the video. (Currently
44 supported only for YouTube)
45 --cache-dir DIR Location in the filesystem where youtube-dl
46 can store some downloaded information
47 permanently. By default $XDG_CACHE_HOME
48 /youtube-dl or ~/.cache/youtube-dl . At the
49 moment, only YouTube player files (for
50 videos with obfuscated signatures) are
51 cached, but that may change.
52 --no-cache-dir Disable filesystem caching
53 --socket-timeout None Time to wait before giving up, in seconds
54 --bidi-workaround Work around terminals that lack
55 bidirectional text support. Requires bidiv
56 or fribidi executable in PATH
57 --default-search PREFIX Use this prefix for unqualified URLs. For
58 example "gvsearch2:" downloads two videos
59 from google videos for youtube-dl "large
60 apple". By default (with value "auto")
61 youtube-dl guesses.
62 --ignore-config Do not read configuration files. When given
63 in the global configuration file /etc
64 /youtube-dl.conf: do not read the user
65 configuration in ~/.config/youtube-dl.conf
66 (%APPDATA%/youtube-dl/config.txt on
67 Windows)
68 --encoding ENCODING Force the specified encoding (experimental)
69
70 ## Video Selection:
71 --playlist-start NUMBER playlist video to start at (default is 1)
72 --playlist-end NUMBER playlist video to end at (default is last)
73 --match-title REGEX download only matching titles (regex or
74 caseless sub-string)
75 --reject-title REGEX skip download for matching titles (regex or
76 caseless sub-string)
77 --max-downloads NUMBER Abort after downloading NUMBER files
78 --min-filesize SIZE Do not download any videos smaller than
79 SIZE (e.g. 50k or 44.6m)
80 --max-filesize SIZE Do not download any videos larger than SIZE
81 (e.g. 50k or 44.6m)
82 --date DATE download only videos uploaded in this date
83 --datebefore DATE download only videos uploaded on or before
84 this date (i.e. inclusive)
85 --dateafter DATE download only videos uploaded on or after
86 this date (i.e. inclusive)
87 --min-views COUNT Do not download any videos with less than
88 COUNT views
89 --max-views COUNT Do not download any videos with more than
90 COUNT views
91 --no-playlist download only the currently playing video
92 --age-limit YEARS download only videos suitable for the given
93 age
94 --download-archive FILE Download only videos not listed in the
95 archive file. Record the IDs of all
96 downloaded videos in it.
97 --include-ads Download advertisements as well
98 (experimental)
99 --youtube-include-dash-manifest Try to download the DASH manifest on
100 YouTube videos (experimental)
101
102 ## Download Options:
103 -r, --rate-limit LIMIT maximum download rate in bytes per second
104 (e.g. 50K or 4.2M)
105 -R, --retries RETRIES number of retries (default is 10)
106 --buffer-size SIZE size of download buffer (e.g. 1024 or 16K)
107 (default is 1024)
108 --no-resize-buffer do not automatically adjust the buffer
109 size. By default, the buffer size is
110 automatically resized from an initial value
111 of SIZE.
112
113 ## Filesystem Options:
114 -t, --title use title in file name (default)
115 --id use only video ID in file name
116 -l, --literal [deprecated] alias of --title
117 -A, --auto-number number downloaded files starting from 00000
118 -o, --output TEMPLATE output filename template. Use %(title)s to
119 get the title, %(uploader)s for the
120 uploader name, %(uploader_id)s for the
121 uploader nickname if different,
122 %(autonumber)s to get an automatically
123 incremented number, %(ext)s for the
124 filename extension, %(format)s for the
125 format description (like "22 - 1280x720" or
126 "HD"), %(format_id)s for the unique id of
127 the format (like Youtube's itags: "137"),
128 %(upload_date)s for the upload date
129 (YYYYMMDD), %(extractor)s for the provider
130 (youtube, metacafe, etc), %(id)s for the
131 video id, %(playlist)s for the playlist the
132 video is in, %(playlist_index)s for the
133 position in the playlist and %% for a
134 literal percent. %(height)s and %(width)s
135 for the width and height of the video
136 format. %(resolution)s for a textual
137 description of the resolution of the video
138 format. Use - to output to stdout. Can also
139 be used to download to a different
140 directory, for example with -o '/my/downloa
141 ds/%(uploader)s/%(title)s-%(id)s.%(ext)s' .
142 --autonumber-size NUMBER Specifies the number of digits in
143 %(autonumber)s when it is present in output
144 filename template or --auto-number option
145 is given
146 --restrict-filenames Restrict filenames to only ASCII
147 characters, and avoid "&" and spaces in
148 filenames
149 -a, --batch-file FILE file containing URLs to download ('-' for
150 stdin)
151 --load-info FILE json file containing the video information
152 (created with the "--write-json" option)
153 -w, --no-overwrites do not overwrite files
154 -c, --continue force resume of partially downloaded files.
155 By default, youtube-dl will resume
156 downloads if possible.
157 --no-continue do not resume partially downloaded files
158 (restart from beginning)
159 --cookies FILE file to read cookies from and dump cookie
160 jar in
161 --no-part do not use .part files
162 --no-mtime do not use the Last-modified header to set
163 the file modification time
164 --write-description write video description to a .description
165 file
166 --write-info-json write video metadata to a .info.json file
167 --write-annotations write video annotations to a .annotation
168 file
169 --write-thumbnail write thumbnail image to disk
170
171 ## Verbosity / Simulation Options:
172 -q, --quiet activates quiet mode
173 --no-warnings Ignore warnings
174 -s, --simulate do not download the video and do not write
175 anything to disk
176 --skip-download do not download the video
177 -g, --get-url simulate, quiet but print URL
178 -e, --get-title simulate, quiet but print title
179 --get-id simulate, quiet but print id
180 --get-thumbnail simulate, quiet but print thumbnail URL
181 --get-description simulate, quiet but print video description
182 --get-duration simulate, quiet but print video length
183 --get-filename simulate, quiet but print output filename
184 --get-format simulate, quiet but print output format
185 -j, --dump-json simulate, quiet but print JSON information.
186 See --output for a description of available
187 keys.
188 --newline output progress bar as new lines
189 --no-progress do not print progress bar
190 --console-title display progress in console titlebar
191 -v, --verbose print various debugging information
192 --dump-intermediate-pages print downloaded pages to debug problems
193 (very verbose)
194 --write-pages Write downloaded intermediary pages to
195 files in the current directory to debug
196 problems
197 --print-traffic Display sent and read HTTP traffic
198
199 ## Video Format Options:
200 -f, --format FORMAT video format code, specify the order of
201 preference using slashes: "-f 22/17/18".
202 "-f mp4" and "-f flv" are also supported.
203 You can also use the special names "best",
204 "bestvideo", "bestaudio", "worst",
205 "worstvideo" and "worstaudio". By default,
206 youtube-dl will pick the best quality.
207 --all-formats download all available video formats
208 --prefer-free-formats prefer free video formats unless a specific
209 one is requested
210 --max-quality FORMAT highest quality format to download
211 -F, --list-formats list all available formats
212
213 ## Subtitle Options:
214 --write-sub write subtitle file
215 --write-auto-sub write automatic subtitle file (youtube
216 only)
217 --all-subs downloads all the available subtitles of
218 the video
219 --list-subs lists all available subtitles for the video
220 --sub-format FORMAT subtitle format (default=srt) ([sbv/vtt]
221 youtube only)
222 --sub-lang LANGS languages of the subtitles to download
223 (optional) separated by commas, use IETF
224 language tags like 'en,pt'
225
226 ## Authentication Options:
227 -u, --username USERNAME account username
228 -p, --password PASSWORD account password
229 -n, --netrc use .netrc authentication data
230 --video-password PASSWORD video password (vimeo, smotri)
231
232 ## Post-processing Options:
233 -x, --extract-audio convert video files to audio-only files
234 (requires ffmpeg or avconv and ffprobe or
235 avprobe)
236 --audio-format FORMAT "best", "aac", "vorbis", "mp3", "m4a",
237 "opus", or "wav"; best by default
238 --audio-quality QUALITY ffmpeg/avconv audio quality specification,
239 insert a value between 0 (better) and 9
240 (worse) for VBR or a specific bitrate like
241 128K (default 5)
242 --recode-video FORMAT Encode the video to another format if
243 necessary (currently supported:
244 mp4|flv|ogg|webm)
245 -k, --keep-video keeps the video file on disk after the
246 post-processing; the video is erased by
247 default
248 --no-post-overwrites do not overwrite post-processed files; the
249 post-processed files are overwritten by
250 default
251 --embed-subs embed subtitles in the video (only for mp4
252 videos)
253 --add-metadata write metadata to the video file
254 --xattrs write metadata to the video file's xattrs
255 (using dublin core and xdg standards)
256 --prefer-avconv Prefer avconv over ffmpeg for running the
257 postprocessors (default)
258 --prefer-ffmpeg Prefer ffmpeg over avconv for running the
259 postprocessors
260
261 # CONFIGURATION
262
263 You can configure youtube-dl by placing default arguments (such as `--extract-audio --no-mtime` to always extract the audio and not copy the mtime) into `/etc/youtube-dl.conf` and/or `~/.config/youtube-dl/config`. On Windows, the configuration file locations are `%APPDATA%\youtube-dl\config.txt` and `C:\Users\<Yourname>\youtube-dl.conf`.
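For example, putting the following lines into `~/.config/youtube-dl/config` makes those flags the default for every run (any option from the list above can be used in the same way; this is only an illustrative sketch):

    --extract-audio
    --no-mtime
    --restrict-filenames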
264
265 # OUTPUT TEMPLATE
266
267 The `-o` option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences have the format `%(NAME)s`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a lowercase S. Allowed names are:
268
269 - `id`: The sequence will be replaced by the video identifier.
270 - `url`: The sequence will be replaced by the video URL.
271 - `uploader`: The sequence will be replaced by the nickname of the person who uploaded the video.
272 - `upload_date`: The sequence will be replaced by the upload date in YYYYMMDD format.
273 - `title`: The sequence will be replaced by the video title.
274 - `ext`: The sequence will be replaced by the appropriate extension (like flv or mp4).
275 - `epoch`: The sequence will be replaced by the Unix epoch when creating the file.
276 - `autonumber`: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
277 - `playlist`: The name or the id of the playlist that contains the video.
278 - `playlist_index`: The index of the video in the playlist, a five-digit number.
279
280 The current default template is `%(title)s-%(id)s.%(ext)s`.
281
282 In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
283
284 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
285 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
286 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
287 youtube-dl_test_video_.mp4 # A simple file name
288
289 # VIDEO SELECTION
290
291 Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`, they accept dates in two formats:
292
293 - Absolute dates: Dates in the format `YYYYMMDD`.
294 - Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
295
296 Examples:
297
298 # Download only the videos uploaded in the last 6 months
299 $ youtube-dl --dateafter now-6months
300
301 # Download only the videos uploaded on January 1, 1970
302 $ youtube-dl --date 19700101
303
304 $ # will only download the videos uploaded in the 200x decade
305 $ youtube-dl --dateafter 20000101 --datebefore 20091231
306
307 # FAQ
308
309 ### Can you please put the -b option back?
310
311 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the `-b` option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the `-f` option and youtube-dl will try to download it.
312
313 ### I get HTTP error 402 when trying to download a video. What's this?
314
315 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering to provide a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is pointing a webbrowser to the youtube URL, solving the CAPTCHA, and restart youtube-dl.
316
317 ### I have downloaded a video but how can I play it?
318
319 Once the video is fully downloaded, use any video player, such as [vlc](http://www.videolan.org) or [mplayer](http://www.mplayerhq.hu/).
320
321 ### The links provided by youtube-dl -g are not working anymore
322
323 The URLs youtube-dl outputs require the downloader to have the correct cookies. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use `--dump-user-agent` to see the one in use by youtube-dl.
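A typical sequence would look like this (the URL is the example video used elsewhere in this document; the exact downloader invocation is only a sketch):

    $ youtube-dl --cookies cookies.txt -g "http://www.youtube.com/watch?v=BaW_jenozKc"
    $ wget --load-cookies cookies.txt --user-agent "$(youtube-dl --dump-user-agent)" "<URL printed by the previous command>"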
324
325 ### ERROR: no fmt_url_map or conn information found in video info
326
327 youtube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
328
329 ### ERROR: unable to download video ###
330
331 youtube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
332
333 ### SyntaxError: Non-ASCII character ###
334
335 The error
336
337 File "youtube-dl", line 2
338 SyntaxError: Non-ASCII character '\x93' ...
339
340 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
341
342 ### What is this binary file? Where has the code gone?
343
344 Since June 2012 (#342) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
345
346 ### The exe throws a *Runtime error from Visual C++*
347
348 To run the exe you need to install first the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/en-us/download/details.aspx?id=29).
349
350 # DEVELOPER INSTRUCTIONS
351
352 Most users do not need to build youtube-dl and can [download the builds](http://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
353
354 To run youtube-dl as a developer, you don't need to build anything either. Simply execute
355
356 python -m youtube_dl
357
358 To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
359
360 python -m unittest discover
361 python test/test_download.py
362 nosetests
363
364 If you want to create a build of youtube-dl yourself, you'll need
365
366 * python
367 * make
368 * pandoc
369 * zip
370 * nosetests
371
372 ### Adding support for a new site
373
374 If you want to add support for a new site, you can follow this quick list (assuming your service is called `yourextractor`):
375
376 1. [Fork this repository](https://github.com/rg3/youtube-dl/fork)
377 2. Check out the source code with `git clone [email protected]:YOUR_GITHUB_USERNAME/youtube-dl.git`
378 3. Start a new git branch with `cd youtube-dl; git checkout -b yourextractor`
379 4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
380
381 # coding: utf-8
382 from __future__ import unicode_literals
383
384 import re
385
386 from .common import InfoExtractor
387
388
389 class YourExtractorIE(InfoExtractor):
390 _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
391 _TEST = {
392 'url': 'http://yourextractor.com/watch/42',
393 'md5': 'TODO: md5 sum of the first 10KiB of the video file',
394 'info_dict': {
395 'id': '42',
396 'ext': 'mp4',
397 'title': 'Video title goes here',
398 # TODO more properties, either as:
399 # * A value
400 # * MD5 checksum; start the string with md5:
401 # * A regular expression; start the string with re:
402 # * Any Python type (for example int or float)
403 }
404 }
405
406 def _real_extract(self, url):
407 mobj = re.match(self._VALID_URL, url)
408 video_id = mobj.group('id')
409
410 # TODO more code goes here, for example ...
411 webpage = self._download_webpage(url, video_id)
412 title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')
413
414 return {
415 'id': video_id,
416 'title': title,
417 # TODO more properties (see youtube_dl/extractor/common.py)
418 }
419
420
421 5. Add an import in [`youtube_dl/extractor/__init__.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/__init__.py).
422 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done.
423 7. Have a look at [`youtube_dl/common/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L38). Add tests and code for as many as you want.
424 8. If you can, check the code with [pyflakes](https://pypi.python.org/pypi/pyflakes) (a good idea) and [pep8](https://pypi.python.org/pypi/pep8) (optional, ignore E501).
425 9. When the tests pass, [add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html) the new files and [commit](https://www.kernel.org/pub/software/scm/git/docs/git-commit.html) them and [push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html) the result, like this:
426
427 $ git add youtube_dl/extractor/__init__.py
428 $ git add youtube_dl/extractor/yourextractor.py
429 $ git commit -m '[yourextractor] Add new extractor'
430 $ git push origin yourextractor
431
432 10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
433
434 In any case, thank you very much for your contributions!
435
436 # BUGS
437
438 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues> . Unless you were prompted so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email.
439
440 Please include the full output of the command when run with `--verbose`. The output (including the first lines) contain important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
441
442 For discussions, join us in the irc channel #youtube-dl on freenode.
443
444 When you submit a request, please re-read it once to avoid a couple of mistakes (you can and should use this as a checklist):
445
446 ### Is the description of the issue itself sufficient?
447
448 We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
449
450 So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
451
452 - What the problem is
453 - How it could be fixed
454 - What your proposed solution would look like
455
456 If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
457
458 For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the -v flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
459
460 Site support requests must contain an example URL. An example URL is a URL you might want to download, like http://www.youtube.com/watch?v=BaW_jenozKc . There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. http://www.youtube.com/ ) is *not* an example URL.
461
462 ### Are you using the latest version?
463
464 Before reporting any issue, type youtube-dl -U. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.
465
466 ### Is the issue already documented?
467
468 Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or at https://github.com/rg3/youtube-dl/search?type=Issues . If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
469
470 ### Why are existing options not enough?
471
472 Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
473
474 ### Is there enough context in your bug report?
475
476 People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: One simple, and one impossible (or extremely complicated one).
477
478 We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
479
480 ### Does the issue involve one problem, and one problem only?
481
482 Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
483
484 In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, Whitehouse podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
485
486 ### Is anyone going to need the feature?
487
488 Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
489
490 ### Is your question about youtube-dl?
491
492 It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different or even the reporter's own application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
493
494 # COPYRIGHT
495
496 youtube-dl is released into the public domain by the copyright holders.
497
498 This README file was originally written by Daniel Bolton (<https://github.com/dbbolton>) and is likewise released into the public domain.
499
[end of README.md]
[start of devscripts/transition_helper.py]
1 #!/usr/bin/env python
2
3 import sys, os
4
5 try:
6 import urllib.request as compat_urllib_request
7 except ImportError: # Python 2
8 import urllib2 as compat_urllib_request
9
10 sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
11 sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
12 sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
13
14 try:
15 raw_input()
16 except NameError: # Python 3
17 input()
18
19 filename = sys.argv[0]
20
21 API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
22 BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
23
24 if not os.access(filename, os.W_OK):
25 sys.exit('ERROR: no write permissions on %s' % filename)
26
27 try:
28 urlh = compat_urllib_request.urlopen(BIN_URL)
29 newcontent = urlh.read()
30 urlh.close()
31 except (IOError, OSError) as err:
32 sys.exit('ERROR: unable to download latest version')
33
34 try:
35 with open(filename, 'wb') as outf:
36 outf.write(newcontent)
37 except (IOError, OSError) as err:
38 sys.exit('ERROR: unable to overwrite current version')
39
40 sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
41
[end of devscripts/transition_helper.py]
[start of youtube_dl/downloader/common.py]
1 import os
2 import re
3 import sys
4 import time
5
6 from ..utils import (
7 compat_str,
8 encodeFilename,
9 format_bytes,
10 timeconvert,
11 )
12
13
14 class FileDownloader(object):
15 """File Downloader class.
16
17 File downloader objects are the ones responsible of downloading the
18 actual video file and writing it to disk.
19
20 File downloaders accept a lot of parameters. In order not to saturate
21 the object constructor with arguments, it receives a dictionary of
22 options instead.
23
24 Available options:
25
26 verbose: Print additional info to stdout.
27 quiet: Do not print messages to stdout.
28 ratelimit: Download speed limit, in bytes/sec.
29 retries: Number of times to retry for HTTP error 5xx
30 buffersize: Size of download buffer in bytes.
31 noresizebuffer: Do not automatically resize the download buffer.
32 continuedl: Try to continue downloads if possible.
33 noprogress: Do not print the progress bar.
34 logtostderr: Log messages to stderr instead of stdout.
35 consoletitle: Display progress in console window's titlebar.
36 nopart: Do not use temporary .part files.
37 updatetime: Use the Last-modified header to set output file timestamps.
38 test: Download only first bytes to test the downloader.
39 min_filesize: Skip files smaller than this size
40 max_filesize: Skip files larger than this size
41
42 Subclasses of this one must re-define the real_download method.
43 """
44
45 params = None
46
47 def __init__(self, ydl, params):
48 """Create a FileDownloader object with the given options."""
49 self.ydl = ydl
50 self._progress_hooks = []
51 self.params = params
52
53 @staticmethod
54 def format_seconds(seconds):
55 (mins, secs) = divmod(seconds, 60)
56 (hours, mins) = divmod(mins, 60)
57 if hours > 99:
58 return '--:--:--'
59 if hours == 0:
60 return '%02d:%02d' % (mins, secs)
61 else:
62 return '%02d:%02d:%02d' % (hours, mins, secs)
63
64 @staticmethod
65 def calc_percent(byte_counter, data_len):
66 if data_len is None:
67 return None
68 return float(byte_counter) / float(data_len) * 100.0
69
70 @staticmethod
71 def format_percent(percent):
72 if percent is None:
73 return '---.-%'
74 return '%6s' % ('%3.1f%%' % percent)
75
76 @staticmethod
77 def calc_eta(start, now, total, current):
78 if total is None:
79 return None
80 dif = now - start
81 if current == 0 or dif < 0.001: # One millisecond
82 return None
83 rate = float(current) / dif
84 return int((float(total) - float(current)) / rate)
85
86 @staticmethod
87 def format_eta(eta):
88 if eta is None:
89 return '--:--'
90 return FileDownloader.format_seconds(eta)
91
92 @staticmethod
93 def calc_speed(start, now, bytes):
94 dif = now - start
95 if bytes == 0 or dif < 0.001: # One millisecond
96 return None
97 return float(bytes) / dif
98
99 @staticmethod
100 def format_speed(speed):
101 if speed is None:
102 return '%10s' % '---b/s'
103 return '%10s' % ('%s/s' % format_bytes(speed))
104
105 @staticmethod
106 def best_block_size(elapsed_time, bytes):
107 new_min = max(bytes / 2.0, 1.0)
108 new_max = min(max(bytes * 2.0, 1.0), 4194304) # Do not surpass 4 MB
109 if elapsed_time < 0.001:
110 return int(new_max)
111 rate = bytes / elapsed_time
112 if rate > new_max:
113 return int(new_max)
114 if rate < new_min:
115 return int(new_min)
116 return int(rate)
117
118 @staticmethod
119 def parse_bytes(bytestr):
120 """Parse a string indicating a byte quantity into an integer."""
121 matchobj = re.match(r'(?i)^(\d+(?:\.\d+)?)([kMGTPEZY]?)$', bytestr)
122 if matchobj is None:
123 return None
124 number = float(matchobj.group(1))
125 multiplier = 1024.0 ** 'bkmgtpezy'.index(matchobj.group(2).lower())
126 return int(round(number * multiplier))
127
128 def to_screen(self, *args, **kargs):
129 self.ydl.to_screen(*args, **kargs)
130
131 def to_stderr(self, message):
132 self.ydl.to_screen(message)
133
134 def to_console_title(self, message):
135 self.ydl.to_console_title(message)
136
137 def trouble(self, *args, **kargs):
138 self.ydl.trouble(*args, **kargs)
139
140 def report_warning(self, *args, **kargs):
141 self.ydl.report_warning(*args, **kargs)
142
143 def report_error(self, *args, **kargs):
144 self.ydl.report_error(*args, **kargs)
145
146 def slow_down(self, start_time, byte_counter):
147 """Sleep if the download speed is over the rate limit."""
148 rate_limit = self.params.get('ratelimit', None)
149 if rate_limit is None or byte_counter == 0:
150 return
151 now = time.time()
152 elapsed = now - start_time
153 if elapsed <= 0.0:
154 return
155 speed = float(byte_counter) / elapsed
156 if speed > rate_limit:
157 time.sleep((byte_counter - rate_limit * (now - start_time)) / rate_limit)
158
159 def temp_name(self, filename):
160 """Returns a temporary filename for the given filename."""
161 if self.params.get('nopart', False) or filename == u'-' or \
162 (os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
163 return filename
164 return filename + u'.part'
165
166 def undo_temp_name(self, filename):
167 if filename.endswith(u'.part'):
168 return filename[:-len(u'.part')]
169 return filename
170
171 def try_rename(self, old_filename, new_filename):
172 try:
173 if old_filename == new_filename:
174 return
175 os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
176 except (IOError, OSError) as err:
177 self.report_error(u'unable to rename file: %s' % compat_str(err))
178
179 def try_utime(self, filename, last_modified_hdr):
180 """Try to set the last-modified time of the given file."""
181 if last_modified_hdr is None:
182 return
183 if not os.path.isfile(encodeFilename(filename)):
184 return
185 timestr = last_modified_hdr
186 if timestr is None:
187 return
188 filetime = timeconvert(timestr)
189 if filetime is None:
190 return filetime
191 # Ignore obviously invalid dates
192 if filetime == 0:
193 return
194 try:
195 os.utime(filename, (time.time(), filetime))
196 except:
197 pass
198 return filetime
199
200 def report_destination(self, filename):
201 """Report destination filename."""
202 self.to_screen(u'[download] Destination: ' + filename)
203
204 def _report_progress_status(self, msg, is_last_line=False):
205 fullmsg = u'[download] ' + msg
206 if self.params.get('progress_with_newline', False):
207 self.to_screen(fullmsg)
208 else:
209 if os.name == 'nt':
210 prev_len = getattr(self, '_report_progress_prev_line_length',
211 0)
212 if prev_len > len(fullmsg):
213 fullmsg += u' ' * (prev_len - len(fullmsg))
214 self._report_progress_prev_line_length = len(fullmsg)
215 clear_line = u'\r'
216 else:
217 clear_line = (u'\r\x1b[K' if sys.stderr.isatty() else u'\r')
218 self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
219 self.to_console_title(u'youtube-dl ' + msg)
220
221 def report_progress(self, percent, data_len_str, speed, eta):
222 """Report download progress."""
223 if self.params.get('noprogress', False):
224 return
225 if eta is not None:
226 eta_str = self.format_eta(eta)
227 else:
228 eta_str = 'Unknown ETA'
229 if percent is not None:
230 percent_str = self.format_percent(percent)
231 else:
232 percent_str = 'Unknown %'
233 speed_str = self.format_speed(speed)
234
235 msg = (u'%s of %s at %s ETA %s' %
236 (percent_str, data_len_str, speed_str, eta_str))
237 self._report_progress_status(msg)
238
239 def report_progress_live_stream(self, downloaded_data_len, speed, elapsed):
240 if self.params.get('noprogress', False):
241 return
242 downloaded_str = format_bytes(downloaded_data_len)
243 speed_str = self.format_speed(speed)
244 elapsed_str = FileDownloader.format_seconds(elapsed)
245 msg = u'%s at %s (%s)' % (downloaded_str, speed_str, elapsed_str)
246 self._report_progress_status(msg)
247
248 def report_finish(self, data_len_str, tot_time):
249 """Report download finished."""
250 if self.params.get('noprogress', False):
251 self.to_screen(u'[download] Download completed')
252 else:
253 self._report_progress_status(
254 (u'100%% of %s in %s' %
255 (data_len_str, self.format_seconds(tot_time))),
256 is_last_line=True)
257
258 def report_resuming_byte(self, resume_len):
259 """Report attempt to resume at given byte."""
260 self.to_screen(u'[download] Resuming download at byte %s' % resume_len)
261
262 def report_retry(self, count, retries):
263 """Report retry in case of HTTP error 5xx"""
264 self.to_screen(u'[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries))
265
266 def report_file_already_downloaded(self, file_name):
267 """Report file has already been fully downloaded."""
268 try:
269 self.to_screen(u'[download] %s has already been downloaded' % file_name)
270 except UnicodeEncodeError:
271 self.to_screen(u'[download] The file has already been downloaded')
272
273 def report_unable_to_resume(self):
274 """Report it was impossible to resume download."""
275 self.to_screen(u'[download] Unable to resume')
276
277 def download(self, filename, info_dict):
278 """Download to a filename using the info from info_dict
279 Return True on success and False otherwise
280 """
281 # Check file already present
282 if self.params.get('continuedl', False) and os.path.isfile(encodeFilename(filename)) and not self.params.get('nopart', False):
283 self.report_file_already_downloaded(filename)
284 self._hook_progress({
285 'filename': filename,
286 'status': 'finished',
287 'total_bytes': os.path.getsize(encodeFilename(filename)),
288 })
289 return True
290
291 return self.real_download(filename, info_dict)
292
293 def real_download(self, filename, info_dict):
294 """Real download process. Redefine in subclasses."""
295         raise NotImplementedError(u'This method must be implemented by subclasses')
296
297 def _hook_progress(self, status):
298 for ph in self._progress_hooks:
299 ph(status)
300
301 def add_progress_hook(self, ph):
302 """ ph gets called on download progress, with a dictionary with the entries
303 * filename: The final filename
304 * status: One of "downloading" and "finished"
305
306 It can also have some of the following entries:
307
308 * downloaded_bytes: Bytes on disks
309 * total_bytes: Total bytes, None if unknown
310 * tmpfilename: The filename we're currently writing to
311 * eta: The estimated time in seconds, None if unknown
312 * speed: The download speed in bytes/second, None if unknown
313
314 Hooks are guaranteed to be called at least once (with status "finished")
315 if the download is successful.
316 """
317 self._progress_hooks.append(ph)
318
[end of youtube_dl/downloader/common.py]
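
As an aside on the hook API documented in `add_progress_hook` above: a hook is simply a callable that receives the status dictionary described in that docstring. The following is a minimal, hypothetical sketch (the hook name and the `fd` variable are illustrative, not part of the code base):

    def print_progress(status):
        # 'filename' and 'status' are always present; the byte counts,
        # 'eta' and 'speed' may be missing or None, as documented above.
        if status['status'] == 'downloading':
            total = status.get('total_bytes')
            done = status.get('downloaded_bytes', 0)
            if total:
                print('%s: %.1f%%' % (status['filename'], 100.0 * done / total))
        elif status['status'] == 'finished':
            print('%s: finished' % status['filename'])

    # ``fd`` stands for an already-constructed FileDownloader subclass
    # (for example the HttpFD defined in the next file):
    # fd.add_progress_hook(print_progress)
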
[start of youtube_dl/downloader/http.py]
1 import os
2 import time
3
4 from .common import FileDownloader
5 from ..utils import (
6 compat_urllib_request,
7 compat_urllib_error,
8 ContentTooShortError,
9
10 encodeFilename,
11 sanitize_open,
12 format_bytes,
13 )
14
15
16 class HttpFD(FileDownloader):
17 def real_download(self, filename, info_dict):
18 url = info_dict['url']
19 tmpfilename = self.temp_name(filename)
20 stream = None
21
22 # Do not include the Accept-Encoding header
23 headers = {'Youtubedl-no-compression': 'True'}
24 if 'user_agent' in info_dict:
25 headers['Youtubedl-user-agent'] = info_dict['user_agent']
26 if 'http_referer' in info_dict:
27 headers['Referer'] = info_dict['http_referer']
28 basic_request = compat_urllib_request.Request(url, None, headers)
29 request = compat_urllib_request.Request(url, None, headers)
30
31 if self.params.get('test', False):
32 request.add_header('Range', 'bytes=0-10240')
33
34 # Establish possible resume length
35 if os.path.isfile(encodeFilename(tmpfilename)):
36 resume_len = os.path.getsize(encodeFilename(tmpfilename))
37 else:
38 resume_len = 0
39
40 open_mode = 'wb'
41 if resume_len != 0:
42 if self.params.get('continuedl', False):
43 self.report_resuming_byte(resume_len)
44 request.add_header('Range', 'bytes=%d-' % resume_len)
45 open_mode = 'ab'
46 else:
47 resume_len = 0
48
49 count = 0
50 retries = self.params.get('retries', 0)
51 while count <= retries:
52 # Establish connection
53 try:
54 data = self.ydl.urlopen(request)
55 break
56 except (compat_urllib_error.HTTPError, ) as err:
57 if (err.code < 500 or err.code >= 600) and err.code != 416:
58 # Unexpected HTTP error
59 raise
60 elif err.code == 416:
61 # Unable to resume (requested range not satisfiable)
62 try:
63 # Open the connection again without the range header
64 data = self.ydl.urlopen(basic_request)
65 content_length = data.info()['Content-Length']
66 except (compat_urllib_error.HTTPError, ) as err:
67 if err.code < 500 or err.code >= 600:
68 raise
69 else:
70 # Examine the reported length
71 if (content_length is not None and
72 (resume_len - 100 < int(content_length) < resume_len + 100)):
73 # The file had already been fully downloaded.
74 # Explanation to the above condition: in issue #175 it was revealed that
75 # YouTube sometimes adds or removes a few bytes from the end of the file,
76 # changing the file size slightly and causing problems for some users. So
77 # I decided to implement a suggested change and consider the file
78 # completely downloaded if the file size differs less than 100 bytes from
79 # the one in the hard drive.
80 self.report_file_already_downloaded(filename)
81 self.try_rename(tmpfilename, filename)
82 self._hook_progress({
83 'filename': filename,
84 'status': 'finished',
85 })
86 return True
87 else:
88 # The length does not match, we start the download over
89 self.report_unable_to_resume()
90 resume_len = 0
91 open_mode = 'wb'
92 break
93 # Retry
94 count += 1
95 if count <= retries:
96 self.report_retry(count, retries)
97
98 if count > retries:
99 self.report_error(u'giving up after %s retries' % retries)
100 return False
101
102 data_len = data.info().get('Content-length', None)
103 if data_len is not None:
104 data_len = int(data_len) + resume_len
105 min_data_len = self.params.get("min_filesize", None)
106 max_data_len = self.params.get("max_filesize", None)
107 if min_data_len is not None and data_len < min_data_len:
108 self.to_screen(u'\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len))
109 return False
110 if max_data_len is not None and data_len > max_data_len:
111 self.to_screen(u'\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
112 return False
113
114 data_len_str = format_bytes(data_len)
115 byte_counter = 0 + resume_len
116 block_size = self.params.get('buffersize', 1024)
117 start = time.time()
118 while True:
119 # Download and write
120 before = time.time()
121 data_block = data.read(block_size)
122 after = time.time()
123 if len(data_block) == 0:
124 break
125 byte_counter += len(data_block)
126
127 # Open file just in time
128 if stream is None:
129 try:
130 (stream, tmpfilename) = sanitize_open(tmpfilename, open_mode)
131 assert stream is not None
132 filename = self.undo_temp_name(tmpfilename)
133 self.report_destination(filename)
134 except (OSError, IOError) as err:
135 self.report_error(u'unable to open for writing: %s' % str(err))
136 return False
137 try:
138 stream.write(data_block)
139 except (IOError, OSError) as err:
140 self.to_stderr(u"\n")
141 self.report_error(u'unable to write data: %s' % str(err))
142 return False
143 if not self.params.get('noresizebuffer', False):
144 block_size = self.best_block_size(after - before, len(data_block))
145
146 # Progress message
147 speed = self.calc_speed(start, time.time(), byte_counter - resume_len)
148 if data_len is None:
149 eta = percent = None
150 else:
151 percent = self.calc_percent(byte_counter, data_len)
152 eta = self.calc_eta(start, time.time(), data_len - resume_len, byte_counter - resume_len)
153 self.report_progress(percent, data_len_str, speed, eta)
154
155 self._hook_progress({
156 'downloaded_bytes': byte_counter,
157 'total_bytes': data_len,
158 'tmpfilename': tmpfilename,
159 'filename': filename,
160 'status': 'downloading',
161 'eta': eta,
162 'speed': speed,
163 })
164
165 # Apply rate limit
166 self.slow_down(start, byte_counter - resume_len)
167
168 if stream is None:
169 self.to_stderr(u"\n")
170 self.report_error(u'Did not get any data blocks')
171 return False
172 stream.close()
173 self.report_finish(data_len_str, (time.time() - start))
174 if data_len is not None and byte_counter != data_len:
175 raise ContentTooShortError(byte_counter, int(data_len))
176 self.try_rename(tmpfilename, filename)
177
178 # Update file modification time
179 if self.params.get('updatetime', True):
180 info_dict['filetime'] = self.try_utime(filename, data.info().get('last-modified', None))
181
182 self._hook_progress({
183 'downloaded_bytes': byte_counter,
184 'total_bytes': byte_counter,
185 'filename': filename,
186 'status': 'finished',
187 })
188
189 return True
190
[end of youtube_dl/downloader/http.py]
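
The byte-count options read above (`buffersize`, `min_filesize`, `max_filesize`, and `ratelimit` in the base class) are plain integers; `FileDownloader.parse_bytes` from common.py converts human-readable sizes into such integers. A small sketch of its behaviour, assuming only what the regex and the multiplier table in that method imply:

    from youtube_dl.downloader.common import FileDownloader

    # '10.5M' -> 10.5 * 1024 ** 2 bytes; an empty suffix means plain bytes.
    assert FileDownloader.parse_bytes('10.5M') == int(round(10.5 * 1024 ** 2))
    assert FileDownloader.parse_bytes('4096') == 4096
    # Strings that do not match the pattern yield None rather than raising.
    assert FileDownloader.parse_bytes('not-a-size') is None
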
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
c47d21da80c0a616410ca6c0d61d7ccfed36e943
|
Bug in subtitle download error reporting
Exceptions during subtitle downloading erroneously try to report errors using the description filename instead of the subtitle filename. Depending on whether `--write-description` is set or not, this causes either a misleading error message or an `UnboundLocalError` exception.
|
2014-04-08T15:10:34Z
|
<patch>
diff --git a/youtube_dl/YoutubeDL.py b/youtube_dl/YoutubeDL.py
old mode 100644
new mode 100755
--- a/youtube_dl/YoutubeDL.py
+++ b/youtube_dl/YoutubeDL.py
@@ -936,7 +936,7 @@ def process_info(self, info_dict):
with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
subfile.write(sub)
except (OSError, IOError):
- self.report_error('Cannot write subtitles file ' + descfn)
+ self.report_error('Cannot write subtitles file ' + sub_filename)
return
if self.params.get('writeinfojson', False):
</patch>
|
[]
|
[]
| ||||
googleapis__google-cloud-python-3424
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IAM policy should return an empty set if the bindings do not exist
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/bd9fdfba07fac63a91847628613928a569250c0f/core/google/cloud/iam.py#L71
Should return an empty set if no bindings are set for that role.
</issue>
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic client for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 |pypi| |circleci| |appveyor| |coverage| |versions|
9
10 - `Homepage`_
11 - `API Documentation`_
12 - `Read The Docs Documentation`_
13
14 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/
15 .. _API Documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/
16 .. _Read The Docs Documentation: https://google-cloud-python.readthedocs.io/en/latest/
17
18 The following client libraries have **GA** support:
19
20 - `Google Cloud Datastore`_ (`Datastore README`_)
21 - `Stackdriver Logging`_ (`Logging README`_)
22 - `Google Cloud Storage`_ (`Storage README`_)
23
24 **GA** (general availability) indicates that the client library for a
25 particular service is stable, and that the code surface will not change in
26 backwards-incompatible ways unless either absolutely necessary (e.g. because
27 of critical security issues) or with an extensive deprecation period.
28 Issues and requests against GA libraries are addressed with the highest
29 priority.
30
31 The following client libraries have **beta** support:
32
33 - `Google BigQuery`_ (`BigQuery README`_)
34 - `Google Cloud Vision`_ (`Vision README`_)
35 - `Google Cloud Natural Language`_ (`Natural Language README`_)
36 - `Google Cloud Translation`_ (`Translation README`_)
37
38 **Beta** indicates that the client library for a particular service is
39 mostly stable and is being prepared for release. Issues and requests
40 against beta libraries are addressed with a higher priority.
41
42 This client library has **alpha** support for the following Google
43 Cloud Platform services:
44
45 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_)
46 - `Google Cloud Resource Manager`_ (`Resource Manager README`_)
47 - `Stackdriver Monitoring`_ (`Monitoring README`_)
48 - `Google Cloud Bigtable`_ (`Bigtable README`_)
49 - `Google Cloud DNS`_ (`DNS README`_)
50 - `Stackdriver Error Reporting`_ (`Error Reporting README`_)
51 - `Google Cloud Speech`_ (`Speech README`_)
52 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_)
53 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_)
54 - `Cloud Spanner`_ (`Cloud Spanner README`_)
55
56 **Alpha** indicates that the client library for a particular service is
57 still a work-in-progress and is more likely to get backwards-incompatible
58 updates. See `versioning`_ for more details.
59
60 .. _Google Cloud Datastore: https://pypi.python.org/pypi/google-cloud-datastore
61 .. _Datastore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/datastore
62 .. _Google Cloud Storage: https://pypi.python.org/pypi/google-cloud-storage
63 .. _Storage README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/storage
64 .. _Google Cloud Pub/Sub: https://pypi.python.org/pypi/google-cloud-pubsub
65 .. _Pub/Sub README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/pubsub
66 .. _Google BigQuery: https://pypi.python.org/pypi/google-cloud-bigquery
67 .. _BigQuery README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery
68 .. _Google Cloud Resource Manager: https://pypi.python.org/pypi/google-cloud-resource-manager
69 .. _Resource Manager README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/resource_manager
70 .. _Stackdriver Logging: https://pypi.python.org/pypi/google-cloud-logging
71 .. _Logging README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/logging
72 .. _Stackdriver Monitoring: https://pypi.python.org/pypi/google-cloud-monitoring
73 .. _Monitoring README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
74 .. _Google Cloud Bigtable: https://pypi.python.org/pypi/google-cloud-bigtable
75 .. _Bigtable README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigtable
76 .. _Google Cloud DNS: https://pypi.python.org/pypi/google-cloud-dns
77 .. _DNS README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/dns
78 .. _Stackdriver Error Reporting: https://pypi.python.org/pypi/google-cloud-error-reporting
79 .. _Error Reporting README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/error_reporting
80 .. _Google Cloud Natural Language: https://pypi.python.org/pypi/google-cloud-language
81 .. _Natural Language README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/language
82 .. _Google Cloud Translation: https://pypi.python.org/pypi/google-cloud-translate
83 .. _Translation README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/translate
84 .. _Google Cloud Speech: https://pypi.python.org/pypi/google-cloud-speech
85 .. _Speech README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/speech
86 .. _Google Cloud Vision: https://pypi.python.org/pypi/google-cloud-vision
87 .. _Vision README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/vision
88 .. _Google Cloud Bigtable - HappyBase: https://pypi.python.org/pypi/google-cloud-happybase/
89 .. _HappyBase README: https://github.com/GoogleCloudPlatform/google-cloud-python-happybase
90 .. _Google Cloud Runtime Configuration: https://cloud.google.com/deployment-manager/runtime-configurator/
91 .. _Runtime Config README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/runtimeconfig
92 .. _Cloud Spanner: https://cloud.google.com/spanner/
93 .. _Cloud Spanner README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/spanner
94 .. _versioning: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
95
96 If you need support for other Google APIs, check out the
97 `Google APIs Python Client library`_.
98
99 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
100
101 Quick Start
102 -----------
103
104 .. code-block:: console
105
106 $ pip install --upgrade google-cloud
107
108 Example Applications
109 --------------------
110
111 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
112 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
113
114 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
115 .. _tutorial: https://cloud.google.com/python
116 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
117
118 Authentication
119 --------------
120
121 With ``google-cloud-python`` we try to make authentication as painless as possible.
122 Check out the `Authentication section`_ in our documentation to learn more.
123 You may also find the `authentication document`_ shared by all the
124 ``google-cloud-*`` libraries to be helpful.
125
126 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/google-cloud-auth.html
127 .. _authentication document: https://github.com/GoogleCloudPlatform/gcloud-common/tree/master/authentication
128
129 Contributing
130 ------------
131
132 Contributions to this library are always welcome and highly encouraged.
133
134 See the `CONTRIBUTING doc`_ for more information on how to get started.
135
136 .. _CONTRIBUTING doc: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst
137
138 Community
139 ---------
140
141 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
142 channel, click here to `get an invitation`_.
143
144
145 .. _Slack: https://googlecloud-community.slack.com
146 .. _get an invitation: https://gcp-slack.appspot.com/
147
148 License
149 -------
150
151 Apache 2.0 - See `the LICENSE`_ for more information.
152
153 .. _the LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE
154
155 .. |circleci| image:: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python.svg?style=shield
156 :target: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python
157 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true
158 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python
159 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.svg?branch=master
160 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master
161 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg
162 :target: https://pypi.python.org/pypi/google-cloud
163 .. |versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg
164 :target: https://pypi.python.org/pypi/google-cloud
165
[end of README.rst]
[start of core/google/cloud/credentials.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A simple wrapper around the OAuth2 credentials library."""
16
17 import base64
18 import datetime
19 import six
20 from six.moves.urllib.parse import urlencode
21
22 import google.auth
23 import google.auth.credentials
24
25 from google.cloud._helpers import UTC
26 from google.cloud._helpers import _NOW
27 from google.cloud._helpers import _microseconds_from_datetime
28
29
30 def get_credentials():
31 """Gets credentials implicitly from the current environment.
32
33 Uses :func:`google.auth.default()`.
34
35 :rtype: :class:`google.auth.credentials.Credentials`,
36 :returns: A new credentials instance corresponding to the implicit
37 environment.
38 """
39 credentials, _ = google.auth.default()
40 return credentials
41
42
43 def _get_signed_query_params(credentials, expiration, string_to_sign):
44 """Gets query parameters for creating a signed URL.
45
46 :type credentials: :class:`google.auth.credentials.Signer`
47 :param credentials: The credentials used to create a private key
48 for signing text.
49
50 :type expiration: int or long
51 :param expiration: When the signed URL should expire.
52
53 :type string_to_sign: str
54 :param string_to_sign: The string to be signed by the credentials.
55
56     :raises AttributeError: If :meth:`sign_bytes` is unavailable.
57
58 :rtype: dict
59 :returns: Query parameters matching the signing credentials with a
60 signed payload.
61 """
62 if not isinstance(credentials, google.auth.credentials.Signing):
63 auth_uri = ('http://google-cloud-python.readthedocs.io/en/latest/'
64 'google-cloud-auth.html#setting-up-a-service-account')
65 raise AttributeError('you need a private key to sign credentials.'
66 'the credentials you are currently using %s '
67 'just contains a token. see %s for more '
68 'details.' % (type(credentials), auth_uri))
69
70 signature_bytes = credentials.sign_bytes(string_to_sign)
71 signature = base64.b64encode(signature_bytes)
72 service_account_name = credentials.signer_email
73 return {
74 'GoogleAccessId': service_account_name,
75 'Expires': str(expiration),
76 'Signature': signature,
77 }
78
79
80 def _get_expiration_seconds(expiration):
81 """Convert 'expiration' to a number of seconds in the future.
82
83 :type expiration: int, long, datetime.datetime, datetime.timedelta
84 :param expiration: When the signed URL should expire.
85
86 :raises TypeError: When expiration is not an integer.
87
88 :rtype: int
89 :returns: a timestamp as an absolute number of seconds.
90 """
91 # If it's a timedelta, add it to `now` in UTC.
92 if isinstance(expiration, datetime.timedelta):
93 now = _NOW().replace(tzinfo=UTC)
94 expiration = now + expiration
95
96 # If it's a datetime, convert to a timestamp.
97 if isinstance(expiration, datetime.datetime):
98 micros = _microseconds_from_datetime(expiration)
99 expiration = micros // 10**6
100
101 if not isinstance(expiration, six.integer_types):
102 raise TypeError('Expected an integer timestamp, datetime, or '
103 'timedelta. Got %s' % type(expiration))
104 return expiration
105
106
107 def generate_signed_url(credentials, resource, expiration,
108 api_access_endpoint='',
109 method='GET', content_md5=None,
110 content_type=None, response_type=None,
111 response_disposition=None, generation=None):
112 """Generate signed URL to provide query-string auth'n to a resource.
113
114 .. note::
115
116 Assumes ``credentials`` implements the
117 :class:`google.auth.credentials.Signing` interface. Also assumes
118 ``credentials`` has a ``service_account_email`` property which
119 identifies the credentials.
120
121 .. note::
122
123 If you are on Google Compute Engine, you can't generate a signed URL.
124 Follow `Issue 922`_ for updates on this. If you'd like to be able to
125 generate a signed URL from GCE, you can use a standard service account
126 from a JSON file rather than a GCE service account.
127
128 See headers `reference`_ for more details on optional arguments.
129
130 .. _Issue 922: https://github.com/GoogleCloudPlatform/\
131 google-cloud-python/issues/922
132 .. _reference: https://cloud.google.com/storage/docs/reference-headers
133
134 :type credentials: :class:`google.auth.credentials.Signing`
135 :param credentials: Credentials object with an associated private key to
136 sign text.
137
138 :type resource: str
139 :param resource: A pointer to a specific resource
140 (typically, ``/bucket-name/path/to/blob.txt``).
141
142 :type expiration: :class:`int`, :class:`long`, :class:`datetime.datetime`,
143 :class:`datetime.timedelta`
144 :param expiration: When the signed URL should expire.
145
146 :type api_access_endpoint: str
147 :param api_access_endpoint: Optional URI base. Defaults to empty string.
148
149 :type method: str
150 :param method: The HTTP verb that will be used when requesting the URL.
151 Defaults to ``'GET'``.
152
153 :type content_md5: str
154 :param content_md5: (Optional) The MD5 hash of the object referenced by
155 ``resource``.
156
157 :type content_type: str
158 :param content_type: (Optional) The content type of the object referenced
159 by ``resource``.
160
161 :type response_type: str
162 :param response_type: (Optional) Content type of responses to requests for
163 the signed URL. Used to over-ride the content type of
164 the underlying resource.
165
166 :type response_disposition: str
167 :param response_disposition: (Optional) Content disposition of responses to
168 requests for the signed URL.
169
170 :type generation: str
171 :param generation: (Optional) A value that indicates which generation of
172 the resource to fetch.
173
174 :rtype: str
175 :returns: A signed URL you can use to access the resource
176 until expiration.
177 """
178 expiration = _get_expiration_seconds(expiration)
179
180 # Generate the string to sign.
181 string_to_sign = '\n'.join([
182 method,
183 content_md5 or '',
184 content_type or '',
185 str(expiration),
186 resource])
187
188 # Set the right query parameters.
189 query_params = _get_signed_query_params(credentials,
190 expiration,
191 string_to_sign)
192 if response_type is not None:
193 query_params['response-content-type'] = response_type
194 if response_disposition is not None:
195 query_params['response-content-disposition'] = response_disposition
196 if generation is not None:
197 query_params['generation'] = generation
198
199 # Return the built URL.
200 return '{endpoint}{resource}?{querystring}'.format(
201 endpoint=api_access_endpoint, resource=resource,
202 querystring=urlencode(query_params))
203
[end of core/google/cloud/credentials.py]
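
A hedged usage sketch for `generate_signed_url`: the key file path and the endpoint value below are assumptions for illustration; the docstring only requires credentials implementing `google.auth.credentials.Signing` and a `/bucket/object` resource path.

    import datetime

    from google.oauth2 import service_account
    from google.cloud.credentials import generate_signed_url

    # Assumed: a service-account JSON key on disk; any Signing credentials work.
    signing_credentials = service_account.Credentials.from_service_account_file(
        'key.json')

    url = generate_signed_url(
        signing_credentials,
        resource='/bucket-name/path/to/blob.txt',
        expiration=datetime.timedelta(hours=1),  # normalized by _get_expiration_seconds
        api_access_endpoint='https://storage.googleapis.com',  # assumed endpoint
        method='GET')
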
[start of core/google/cloud/iam.py]
1 # Copyright 2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Non-API-specific IAM policy definitions
15
16 For allowed roles / permissions, see:
17 https://cloud.google.com/iam/docs/understanding-roles
18 """
19
20 import collections
21 import warnings
22
23 # Generic IAM roles
24
25 OWNER_ROLE = 'roles/owner'
26 """Generic role implying all rights to an object."""
27
28 EDITOR_ROLE = 'roles/editor'
29 """Generic role implying rights to modify an object."""
30
31 VIEWER_ROLE = 'roles/viewer'
32 """Generic role implying rights to access an object."""
33
34 _ASSIGNMENT_DEPRECATED_MSG = """\
35 Assigning to '{}' is deprecated. Replace with 'policy[{}] = members'."""
36
37
38 class Policy(collections.MutableMapping):
39 """IAM Policy
40
41 See:
42 https://cloud.google.com/iam/reference/rest/v1/Policy
43
44 :type etag: str
45     :param etag: ETag used to identify a unique version of the policy
46
47 :type version: int
48 :param version: unique version of the policy
49 """
50 _OWNER_ROLES = (OWNER_ROLE,)
51 """Roles mapped onto our ``owners`` attribute."""
52
53 _EDITOR_ROLES = (EDITOR_ROLE,)
54 """Roles mapped onto our ``editors`` attribute."""
55
56 _VIEWER_ROLES = (VIEWER_ROLE,)
57 """Roles mapped onto our ``viewers`` attribute."""
58
59 def __init__(self, etag=None, version=None):
60 self.etag = etag
61 self.version = version
62 self._bindings = {}
63
64 def __iter__(self):
65 return iter(self._bindings)
66
67 def __len__(self):
68 return len(self._bindings)
69
70 def __getitem__(self, key):
71 return self._bindings[key]
72
73 def __setitem__(self, key, value):
74 self._bindings[key] = set(value)
75
76 def __delitem__(self, key):
77 del self._bindings[key]
78
79 @property
80 def owners(self):
81 """Legacy access to owner role."""
82 result = set()
83 for role in self._OWNER_ROLES:
84 for member in self._bindings.get(role, ()):
85 result.add(member)
86 return frozenset(result)
87
88 @owners.setter
89 def owners(self, value):
90 """Update owners."""
91 warnings.warn(
92 _ASSIGNMENT_DEPRECATED_MSG.format('owners', OWNER_ROLE),
93 DeprecationWarning)
94 self[OWNER_ROLE] = value
95
96 @property
97 def editors(self):
98 """Legacy access to editor role."""
99 result = set()
100 for role in self._EDITOR_ROLES:
101 for member in self._bindings.get(role, ()):
102 result.add(member)
103 return frozenset(result)
104
105 @editors.setter
106 def editors(self, value):
107 """Update editors."""
108 warnings.warn(
109 _ASSIGNMENT_DEPRECATED_MSG.format('editors', EDITOR_ROLE),
110 DeprecationWarning)
111 self[EDITOR_ROLE] = value
112
113 @property
114 def viewers(self):
115 """Legacy access to viewer role."""
116 result = set()
117 for role in self._VIEWER_ROLES:
118 for member in self._bindings.get(role, ()):
119 result.add(member)
120 return frozenset(result)
121
122 @viewers.setter
123 def viewers(self, value):
124 """Update viewers."""
125 warnings.warn(
126 _ASSIGNMENT_DEPRECATED_MSG.format('viewers', VIEWER_ROLE),
127 DeprecationWarning)
128 self[VIEWER_ROLE] = value
129
130 @staticmethod
131 def user(email):
132 """Factory method for a user member.
133
134 :type email: str
135 :param email: E-mail for this particular user.
136
137 :rtype: str
138 :returns: A member string corresponding to the given user.
139 """
140 return 'user:%s' % (email,)
141
142 @staticmethod
143 def service_account(email):
144 """Factory method for a service account member.
145
146 :type email: str
147 :param email: E-mail for this particular service account.
148
149 :rtype: str
150 :returns: A member string corresponding to the given service account.
151 """
152 return 'serviceAccount:%s' % (email,)
153
154 @staticmethod
155 def group(email):
156 """Factory method for a group member.
157
158 :type email: str
159 :param email: An id or e-mail for this particular group.
160
161 :rtype: str
162 :returns: A member string corresponding to the given group.
163 """
164 return 'group:%s' % (email,)
165
166 @staticmethod
167 def domain(domain):
168 """Factory method for a domain member.
169
170 :type domain: str
171 :param domain: The domain for this member.
172
173 :rtype: str
174 :returns: A member string corresponding to the given domain.
175 """
176 return 'domain:%s' % (domain,)
177
178 @staticmethod
179 def all_users():
180 """Factory method for a member representing all users.
181
182 :rtype: str
183 :returns: A member string representing all users.
184 """
185 return 'allUsers'
186
187 @staticmethod
188 def authenticated_users():
189 """Factory method for a member representing all authenticated users.
190
191 :rtype: str
192 :returns: A member string representing all authenticated users.
193 """
194 return 'allAuthenticatedUsers'
195
196 @classmethod
197 def from_api_repr(cls, resource):
198 """Create a policy from the resource returned from the API.
199
200 :type resource: dict
201 :param resource: resource returned from the ``getIamPolicy`` API.
202
203 :rtype: :class:`Policy`
204 :returns: the parsed policy
205 """
206 version = resource.get('version')
207 etag = resource.get('etag')
208 policy = cls(etag, version)
209 for binding in resource.get('bindings', ()):
210 role = binding['role']
211 members = sorted(binding['members'])
212 policy[role] = members
213 return policy
214
215 def to_api_repr(self):
216 """Construct a Policy resource.
217
218 :rtype: dict
219 :returns: a resource to be passed to the ``setIamPolicy`` API.
220 """
221 resource = {}
222
223 if self.etag is not None:
224 resource['etag'] = self.etag
225
226 if self.version is not None:
227 resource['version'] = self.version
228
229 if len(self._bindings) > 0:
230 bindings = resource['bindings'] = []
231 for role, members in sorted(self._bindings.items()):
232 if len(members) > 0:
233 bindings.append(
234 {'role': role, 'members': sorted(set(members))})
235
236 if len(bindings) == 0:
237 del resource['bindings']
238
239 return resource
240
241
242 collections.MutableMapping.register(Policy)
243
[end of core/google/cloud/iam.py]
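
To make the issue above concrete, here is a short sketch of the current `__getitem__` behaviour versus what the issue asks for; the `setdefault`-based line is only one hypothetical way to implement it, not necessarily the change that was merged.

    from google.cloud.iam import Policy, VIEWER_ROLE

    policy = Policy()

    # Current behaviour: a role with no binding raises KeyError.
    try:
        policy[VIEWER_ROLE]
    except KeyError:
        pass

    # Requested behaviour: return an empty set instead, so callers could write
    # ``policy[VIEWER_ROLE].add(Policy.user('phred@example.com'))`` without
    # seeding the role first. One hypothetical implementation:
    #     def __getitem__(self, key):
    #         return self._bindings.setdefault(key, set())
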
[start of pubsub/google/cloud/pubsub/iam.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """PubSub API IAM policy definitions
15
16 For allowed roles / permissions, see:
17 https://cloud.google.com/pubsub/access_control#permissions
18 """
19
20 import warnings
21
22 # pylint: disable=unused-import
23 from google.cloud.iam import OWNER_ROLE # noqa - backward compat
24 from google.cloud.iam import EDITOR_ROLE # noqa - backward compat
25 from google.cloud.iam import VIEWER_ROLE # noqa - backward compat
26 # pylint: enable=unused-import
27 from google.cloud.iam import Policy as _BasePolicy
28 from google.cloud.iam import _ASSIGNMENT_DEPRECATED_MSG
29
30 # Pubsub-specific IAM roles
31
32 PUBSUB_ADMIN_ROLE = 'roles/pubsub.admin'
33 """Role implying all rights to an object."""
34
35 PUBSUB_EDITOR_ROLE = 'roles/pubsub.editor'
36 """Role implying rights to modify an object."""
37
38 PUBSUB_VIEWER_ROLE = 'roles/pubsub.viewer'
39 """Role implying rights to access an object."""
40
41 PUBSUB_PUBLISHER_ROLE = 'roles/pubsub.publisher'
42 """Role implying rights to publish to a topic."""
43
44 PUBSUB_SUBSCRIBER_ROLE = 'roles/pubsub.subscriber'
45 """Role implying rights to subscribe to a topic."""
46
47
48 # Pubsub-specific permissions
49
50 PUBSUB_TOPICS_CONSUME = 'pubsub.topics.consume'
51 """Permission: consume events from a subscription."""
52
53 PUBSUB_TOPICS_CREATE = 'pubsub.topics.create'
54 """Permission: create topics."""
55
56 PUBSUB_TOPICS_DELETE = 'pubsub.topics.delete'
57 """Permission: delete topics."""
58
59 PUBSUB_TOPICS_GET = 'pubsub.topics.get'
60 """Permission: retrieve topics."""
61
62 PUBSUB_TOPICS_GET_IAM_POLICY = 'pubsub.topics.getIamPolicy'
63 """Permission: retrieve topic IAM policies."""
64
65 PUBSUB_TOPICS_LIST = 'pubsub.topics.list'
66 """Permission: list topics."""
67
68 PUBSUB_TOPICS_SET_IAM_POLICY = 'pubsub.topics.setIamPolicy'
69 """Permission: update topic IAM policies."""
70
71 PUBSUB_SUBSCRIPTIONS_CONSUME = 'pubsub.subscriptions.consume'
72 """Permission: consume events from a subscription."""
73
74 PUBSUB_SUBSCRIPTIONS_CREATE = 'pubsub.subscriptions.create'
75 """Permission: create subscriptions."""
76
77 PUBSUB_SUBSCRIPTIONS_DELETE = 'pubsub.subscriptions.delete'
78 """Permission: delete subscriptions."""
79
80 PUBSUB_SUBSCRIPTIONS_GET = 'pubsub.subscriptions.get'
81 """Permission: retrieve subscriptions."""
82
83 PUBSUB_SUBSCRIPTIONS_GET_IAM_POLICY = 'pubsub.subscriptions.getIamPolicy'
84 """Permission: retrieve subscription IAM policies."""
85
86 PUBSUB_SUBSCRIPTIONS_LIST = 'pubsub.subscriptions.list'
87 """Permission: list subscriptions."""
88
89 PUBSUB_SUBSCRIPTIONS_SET_IAM_POLICY = 'pubsub.subscriptions.setIamPolicy'
90 """Permission: update subscription IAM policies."""
91
92 PUBSUB_SUBSCRIPTIONS_UPDATE = 'pubsub.subscriptions.update'
93 """Permission: update subscriptions."""
94
95
96 class Policy(_BasePolicy):
97 """IAM Policy / Bindings.
98
99 See:
100 https://cloud.google.com/pubsub/docs/reference/rest/Shared.Types/Policy
101 https://cloud.google.com/pubsub/docs/reference/rest/Shared.Types/Binding
102 """
103 _OWNER_ROLES = (OWNER_ROLE, PUBSUB_ADMIN_ROLE)
104 """Roles mapped onto our ``owners`` attribute."""
105
106 _EDITOR_ROLES = (EDITOR_ROLE, PUBSUB_EDITOR_ROLE)
107 """Roles mapped onto our ``editors`` attribute."""
108
109 _VIEWER_ROLES = (VIEWER_ROLE, PUBSUB_VIEWER_ROLE)
110 """Roles mapped onto our ``viewers`` attribute."""
111
112 @property
113 def publishers(self):
114         """Legacy access to publisher role."""
115 return frozenset(self._bindings.get(PUBSUB_PUBLISHER_ROLE, ()))
116
117 @publishers.setter
118 def publishers(self, value):
119 """Update publishers."""
120 warnings.warn(
121 _ASSIGNMENT_DEPRECATED_MSG.format(
122 'publishers', PUBSUB_PUBLISHER_ROLE),
123 DeprecationWarning)
124 self[PUBSUB_PUBLISHER_ROLE] = value
125
126 @property
127 def subscribers(self):
128         """Legacy access to subscriber role."""
129 return frozenset(self._bindings.get(PUBSUB_SUBSCRIBER_ROLE, ()))
130
131 @subscribers.setter
132 def subscribers(self, value):
133 """Update subscribers."""
134 warnings.warn(
135 _ASSIGNMENT_DEPRECATED_MSG.format(
136 'subscribers', PUBSUB_SUBSCRIBER_ROLE),
137 DeprecationWarning)
138 self[PUBSUB_SUBSCRIBER_ROLE] = value
139
[end of pubsub/google/cloud/pubsub/iam.py]
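
A short sketch, using only the classes shown above, of the mapping-style access versus the deprecated attribute setters (the e-mail address is just an example value):

    import warnings

    from google.cloud.pubsub.iam import Policy, PUBSUB_PUBLISHER_ROLE

    policy = Policy()

    # Preferred style: treat the policy as a mapping from role to members.
    policy[PUBSUB_PUBLISHER_ROLE] = [Policy.user('phred@example.com')]

    # Legacy attribute assignment still works, but warns and redirects to the
    # mapping style via _ASSIGNMENT_DEPRECATED_MSG.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        policy.publishers = [Policy.user('phred@example.com')]
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
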
[start of spanner/google/cloud/spanner/pool.py]
1 # Copyright 2016 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Pools managing shared Session objects."""
16
17 import datetime
18
19 from six.moves import queue
20 from six.moves import xrange
21
22 from google.cloud.exceptions import NotFound
23
24
25 _NOW = datetime.datetime.utcnow # unit tests may replace
26
27
28 class AbstractSessionPool(object):
29 """Specifies required API for concrete session pool implementations."""
30
31 _database = None
32
33 def bind(self, database):
34 """Associate the pool with a database.
35
36 :type database: :class:`~google.cloud.spanner.database.Database`
37 :param database: database used by the pool: used to create sessions
38 when needed.
39
40 Concrete implementations of this method may pre-fill the pool
41 using the database.
42 """
43 raise NotImplementedError()
44
45 def get(self):
46 """Check a session out from the pool.
47
48 Concrete implementations of this method are allowed to raise an
49 error to signal that the pool is exhausted, or to block until a
50 session is available.
51 """
52 raise NotImplementedError()
53
54 def put(self, session):
55 """Return a session to the pool.
56
57 :type session: :class:`~google.cloud.spanner.session.Session`
58 :param session: the session being returned.
59
60 Concrete implementations of this method are allowed to raise an
61 error to signal that the pool is full, or to block until it is
62 not full.
63 """
64 raise NotImplementedError()
65
66 def clear(self):
67 """Delete all sessions in the pool.
68
69 Concrete implementations of this method are allowed to raise an
70 error to signal that the pool is full, or to block until it is
71 not full.
72 """
73 raise NotImplementedError()
74
75 def session(self, **kwargs):
76 """Check out a session from the pool.
77
78 :type kwargs: dict
79 :param kwargs: (optional) keyword arguments, passed through to
80 the returned checkout.
81
82 :rtype: :class:`~google.cloud.spanner.session.SessionCheckout`
83 :returns: a checkout instance, to be used as a context manager for
84 accessing the session and returning it to the pool.
85 """
86 return SessionCheckout(self, **kwargs)
87
88
89 class FixedSizePool(AbstractSessionPool):
90 """Concrete session pool implementation:
91
92 - Pre-allocates / creates a fixed number of sessions.
93
94 - "Pings" existing sessions via :meth:`session.exists` before returning
95 them, and replaces expired sessions.
96
97 - Blocks, with a timeout, when :meth:`get` is called on an empty pool.
98 Raises after timing out.
99
100 - Raises when :meth:`put` is called on a full pool. That error is
101 never expected in normal practice, as users should be calling
102 :meth:`get` followed by :meth:`put` whenever in need of a session.
103
104 :type size: int
105 :param size: fixed pool size
106
107 :type default_timeout: int
108 :param default_timeout: default timeout, in seconds, to wait for
109 a returned session.
110 """
111 DEFAULT_SIZE = 10
112 DEFAULT_TIMEOUT = 10
113
114 def __init__(self, size=DEFAULT_SIZE, default_timeout=DEFAULT_TIMEOUT):
115 self.size = size
116 self.default_timeout = default_timeout
117 self._sessions = queue.Queue(size)
118
119 def bind(self, database):
120 """Associate the pool with a database.
121
122 :type database: :class:`~google.cloud.spanner.database.Database`
123 :param database: database used by the pool: used to create sessions
124 when needed.
125 """
126 self._database = database
127
128 while not self._sessions.full():
129 session = database.session()
130 session.create()
131 self._sessions.put(session)
132
133 def get(self, timeout=None): # pylint: disable=arguments-differ
134 """Check a session out from the pool.
135
136 :type timeout: int
137 :param timeout: seconds to block waiting for an available session
138
139 :rtype: :class:`~google.cloud.spanner.session.Session`
140 :returns: an existing session from the pool, or a newly-created
141 session.
142 :raises: :exc:`six.moves.queue.Empty` if the queue is empty.
143 """
144 if timeout is None:
145 timeout = self.default_timeout
146
147 session = self._sessions.get(block=True, timeout=timeout)
148
149 if not session.exists():
150 session = self._database.session()
151 session.create()
152
153 return session
154
155 def put(self, session):
156 """Return a session to the pool.
157
158 Never blocks: if the pool is full, raises.
159
160 :type session: :class:`~google.cloud.spanner.session.Session`
161 :param session: the session being returned.
162
163 :raises: :exc:`six.moves.queue.Full` if the queue is full.
164 """
165 self._sessions.put_nowait(session)
166
167 def clear(self):
168 """Delete all sessions in the pool."""
169
170 while True:
171 try:
172 session = self._sessions.get(block=False)
173 except queue.Empty:
174 break
175 else:
176 session.delete()
177
178
179 class BurstyPool(AbstractSessionPool):
180 """Concrete session pool implementation:
181
182 - "Pings" existing sessions via :meth:`session.exists` before returning
183 them.
184
185 - Creates a new session, rather than blocking, when :meth:`get` is called
186 on an empty pool.
187
188 - Discards the returned session, rather than blocking, when :meth:`put`
189 is called on a full pool.
190
191 :type target_size: int
192 :param target_size: max pool size
193 """
194
195 def __init__(self, target_size=10):
196 self.target_size = target_size
197 self._database = None
198 self._sessions = queue.Queue(target_size)
199
200 def bind(self, database):
201 """Associate the pool with a database.
202
203 :type database: :class:`~google.cloud.spanner.database.Database`
204 :param database: database used by the pool: used to create sessions
205 when needed.
206 """
207 self._database = database
208
209 def get(self):
210 """Check a session out from the pool.
211
212 :rtype: :class:`~google.cloud.spanner.session.Session`
213 :returns: an existing session from the pool, or a newly-created
214 session.
215 """
216 try:
217 session = self._sessions.get_nowait()
218 except queue.Empty:
219 session = self._database.session()
220 session.create()
221 else:
222 if not session.exists():
223 session = self._database.session()
224 session.create()
225 return session
226
227 def put(self, session):
228 """Return a session to the pool.
229
230 Never blocks: if the pool is full, the returned session is
231 discarded.
232
233 :type session: :class:`~google.cloud.spanner.session.Session`
234 :param session: the session being returned.
235 """
236 try:
237 self._sessions.put_nowait(session)
238 except queue.Full:
239 try:
240 session.delete()
241 except NotFound:
242 pass
243
244 def clear(self):
245 """Delete all sessions in the pool."""
246
247 while True:
248 try:
249 session = self._sessions.get(block=False)
250 except queue.Empty:
251 break
252 else:
253 session.delete()
254
255
256 class PingingPool(AbstractSessionPool):
257 """Concrete session pool implementation:
258
259 - Pre-allocates / creates a fixed number of sessions.
260
261 - Sessions are used in "round-robin" order (LRU first).
262
263 - "Pings" existing sessions in the background after a specified interval
264 via an API call (``session.exists()``).
265
266 - Blocks, with a timeout, when :meth:`get` is called on an empty pool.
267 Raises after timing out.
268
269 - Raises when :meth:`put` is called on a full pool. That error is
270 never expected in normal practice, as users should be calling
271 :meth:`get` followed by :meth:`put` whenever in need of a session.
272
273 The application is responsible for calling :meth:`ping` at appropriate
274 times, e.g. from a background thread.
275
276 :type size: int
277 :param size: fixed pool size
278
279 :type default_timeout: int
280 :param default_timeout: default timeout, in seconds, to wait for
281 a returned session.
282
283 :type ping_interval: int
284 :param ping_interval: interval at which to ping sessions.
285 """
286
287 def __init__(self, size=10, default_timeout=10, ping_interval=3000):
288 self.size = size
289 self.default_timeout = default_timeout
290 self._delta = datetime.timedelta(seconds=ping_interval)
291 self._sessions = queue.PriorityQueue(size)
292
293 def bind(self, database):
294 """Associate the pool with a database.
295
296 :type database: :class:`~google.cloud.spanner.database.Database`
297 :param database: database used by the pool: used to create sessions
298 when needed.
299 """
300 self._database = database
301
302 for _ in xrange(self.size):
303 session = database.session()
304 session.create()
305 self.put(session)
306
307 def get(self, timeout=None): # pylint: disable=arguments-differ
308 """Check a session out from the pool.
309
310 :type timeout: int
311 :param timeout: seconds to block waiting for an available session
312
313 :rtype: :class:`~google.cloud.spanner.session.Session`
314 :returns: an existing session from the pool, or a newly-created
315 session.
316 :raises: :exc:`six.moves.queue.Empty` if the queue is empty.
317 """
318 if timeout is None:
319 timeout = self.default_timeout
320
321 ping_after, session = self._sessions.get(block=True, timeout=timeout)
322
323 if _NOW() > ping_after:
324 if not session.exists():
325 session = self._database.session()
326 session.create()
327
328 return session
329
330 def put(self, session):
331 """Return a session to the pool.
332
333 Never blocks: if the pool is full, raises.
334
335 :type session: :class:`~google.cloud.spanner.session.Session`
336 :param session: the session being returned.
337
338 :raises: :exc:`six.moves.queue.Full` if the queue is full.
339 """
340 self._sessions.put_nowait((_NOW() + self._delta, session))
341
342 def clear(self):
343 """Delete all sessions in the pool."""
344 while True:
345 try:
346 _, session = self._sessions.get(block=False)
347 except queue.Empty:
348 break
349 else:
350 session.delete()
351
352 def ping(self):
353 """Refresh maybe-expired sessions in the pool.
354
355 This method is designed to be called from a background thread,
356 or during the "idle" phase of an event loop.
357 """
358 while True:
359 try:
360 ping_after, session = self._sessions.get(block=False)
361 except queue.Empty: # all sessions in use
362 break
363 if ping_after > _NOW(): # oldest session is fresh
364 # Re-add to queue with existing expiration
365 self._sessions.put((ping_after, session))
366 break
367 if not session.exists(): # stale
368 session = self._database.session()
369 session.create()
370 # Re-add to queue with new expiration
371 self.put(session)
372
373
374 class TransactionPingingPool(PingingPool):
375 """Concrete session pool implementation:
376
377 In addition to the features of :class:`PingingPool`, this class
378 creates and begins a transaction for each of its sessions at startup.
379
380 When a session is returned to the pool, if its transaction has been
381 committed or rolled back, the pool creates a new transaction for the
382 session and pushes the transaction onto a separate queue of "transactions
383 to begin." The application is responsible for flushing this queue
384 as appropriate via the pool's :meth:`begin_pending_transactions` method.
385
386 :type size: int
387 :param size: fixed pool size
388
389 :type default_timeout: int
390 :param default_timeout: default timeout, in seconds, to wait for
391 a returned session.
392
393 :type ping_interval: int
394 :param ping_interval: interval at which to ping sessions.
395 """
396
397 def __init__(self, size=10, default_timeout=10, ping_interval=3000):
398 self._pending_sessions = queue.Queue()
399
400 super(TransactionPingingPool, self).__init__(
401 size, default_timeout, ping_interval)
402
403 self.begin_pending_transactions()
404
405 def bind(self, database):
406 """Associate the pool with a database.
407
408 :type database: :class:`~google.cloud.spanner.database.Database`
409 :param database: database used by the pool: used to create sessions
410 when needed.
411 """
412 super(TransactionPingingPool, self).bind(database)
413 self.begin_pending_transactions()
414
415 def put(self, session):
416 """Return a session to the pool.
417
418 Never blocks: if the pool is full, raises.
419
420 :type session: :class:`~google.cloud.spanner.session.Session`
421 :param session: the session being returned.
422
423 :raises: :exc:`six.moves.queue.Full` if the queue is full.
424 """
425 if self._sessions.full():
426 raise queue.Full
427
428 txn = session._transaction
429 if txn is None or txn.committed() or txn._rolled_back:
430 session.transaction()
431 self._pending_sessions.put(session)
432 else:
433 super(TransactionPingingPool, self).put(session)
434
435 def begin_pending_transactions(self):
436 """Begin all transactions for sessions added to the pool."""
437 while not self._pending_sessions.empty():
438 session = self._pending_sessions.get()
439 session._transaction.begin()
440 super(TransactionPingingPool, self).put(session)
441
442
443 class SessionCheckout(object):
444 """Context manager: hold session checked out from a pool.
445
446 :type pool: concrete subclass of
447 :class:`~google.cloud.spanner.session.AbstractSessionPool`
448 :param pool: Pool from which to check out a session.
449
450 :type kwargs: dict
451 :param kwargs: extra keyword arguments to be passed to :meth:`pool.get`.
452 """
453 _session = None # Not checked out until '__enter__'.
454
455 def __init__(self, pool, **kwargs):
456 self._pool = pool
457 self._kwargs = kwargs.copy()
458
459 def __enter__(self):
460 self._session = self._pool.get(**self._kwargs)
461 return self._session
462
463 def __exit__(self, *ignored):
464 self._pool.put(self._session)
465
[end of spanner/google/cloud/spanner/pool.py]
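For orientation, a minimal usage sketch of the pool classes defined above; it is not part of the module, and the client, instance, and database identifiers are placeholders.

```python
# Hedged usage sketch of the pool classes above; not part of the module.
# The instance and database identifiers are placeholders.
from google.cloud import spanner
from google.cloud.spanner.pool import BurstyPool, SessionCheckout

client = spanner.Client()                     # assumes default credentials/project
instance = client.instance("my-instance")
database = instance.database("my-database")

# BurstyPool creates sessions on demand and discards extras on put().
pool = BurstyPool(target_size=10)
pool.bind(database)

# SessionCheckout hands the session back to the pool when the block exits.
with SessionCheckout(pool) as session:
    assert session.exists()
```

A `PingingPool` would be used the same way, with `pool.ping()` called periodically from a background thread, as its docstring notes.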
[start of storage/google/cloud/storage/iam.py]
1 # Copyright 2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Storage API IAM policy definitions
15
16 For allowed roles / permissions, see:
17 https://cloud.google.com/storage/docs/access-control/iam
18 """
19
20 # Storage-specific IAM roles
21
22 STORAGE_OBJECT_CREATOR_ROLE = 'roles/storage.objectCreator'
23 """Role implying rights to create objects, but not delete or overwrite them."""
24
25 STORAGE_OBJECT_VIEWER_ROLE = 'roles/storage.objectViewer'
26 """Role implying rights to view object properties, excluding ACLs."""
27
 28 STORAGE_OBJECT_ADMIN_ROLE = 'roles/storage.objectAdmin'
29 """Role implying full control of objects."""
30
31 STORAGE_ADMIN_ROLE = 'roles/storage.admin'
32 """Role implying full control of objects and buckets."""
33
34 STORAGE_VIEWER_ROLE = 'Viewer'
35 """Can list buckets."""
36
37 STORAGE_EDITOR_ROLE = 'Editor'
38 """Can create, list, and delete buckets."""
39
40 STORAGE_OWNER_ROLE = 'Owners'
41 """Can create, list, and delete buckets."""
42
43
44 # Storage-specific permissions
45
46 STORAGE_BUCKETS_CREATE = 'storage.buckets.create'
47 """Permission: create buckets."""
48
49 STORAGE_BUCKETS_DELETE = 'storage.buckets.delete'
50 """Permission: delete buckets."""
51
52 STORAGE_BUCKETS_GET = 'storage.buckets.get'
53 """Permission: read bucket metadata, excluding ACLs."""
54
55 STORAGE_BUCKETS_GET_IAM_POLICY = 'storage.buckets.getIamPolicy'
56 """Permission: read bucket ACLs."""
57
58 STORAGE_BUCKETS_LIST = 'storage.buckets.list'
59 """Permission: list buckets."""
60
61 STORAGE_BUCKETS_SET_IAM_POLICY = 'storage.buckets.setIamPolicy'
62 """Permission: update bucket ACLs."""
63
 64 STORAGE_BUCKETS_UPDATE = 'storage.buckets.update'
 65 """Permission: update buckets, excluding ACLs."""
66
67 STORAGE_OBJECTS_CREATE = 'storage.objects.create'
68 """Permission: add new objects to a bucket."""
69
70 STORAGE_OBJECTS_DELETE = 'storage.objects.delete'
71 """Permission: delete objects."""
72
73 STORAGE_OBJECTS_GET = 'storage.objects.get'
74 """Permission: read object data / metadata, excluding ACLs."""
75
76 STORAGE_OBJECTS_GET_IAM_POLICY = 'storage.objects.getIamPolicy'
77 """Permission: read object ACLs."""
78
79 STORAGE_OBJECTS_LIST = 'storage.objects.list'
80 """Permission: list objects in a bucket."""
81
82 STORAGE_OBJECTS_SET_IAM_POLICY = 'storage.objects.setIamPolicy'
83 """Permission: update object ACLs."""
84
85 STORAGE_OBJECTS_UPDATE = 'storage.objects.update'
86 """Permission: update object metadat, excluding ACLs."""
87
[end of storage/google/cloud/storage/iam.py]
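As a rough illustration only, the constants above can be passed to the bucket-level IAM helpers that accompany this module; the client setup, the bucket name, and the availability of `Bucket.test_iam_permissions` are assumptions of this sketch, not part of the file.

```python
# Hedged sketch, not part of the module: checking which of the permissions
# defined above the caller actually holds on a bucket.
from google.cloud import storage
from google.cloud.storage.iam import STORAGE_OBJECTS_GET, STORAGE_OBJECTS_LIST

client = storage.Client()                # assumes default credentials/project
bucket = client.bucket("my-bucket")      # placeholder bucket name

# Returns the subset of the requested permissions granted to the caller.
granted = bucket.test_iam_permissions([STORAGE_OBJECTS_GET, STORAGE_OBJECTS_LIST])
print(sorted(granted))
```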
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: googleapis/google-cloud-python
base_commit: 00e8bff0a789dbc2f0992e4c8c3504385f8ba70c
problem_statement:
IAM policy should return an empty set if the bindings do not exist
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/bd9fdfba07fac63a91847628613928a569250c0f/core/google/cloud/iam.py#L71
Should return an empty set if no bindings are set for that role.
hints_text:
Current behavior leads to `KeyError`, yes?
ya, I'd like to be able to do:
```python
policy = something.get_iam_policy()
policy['roles/editor'].add('whatever')
something.set_iam_policy(policy)
```
and it'll work regardless of if any editors had been previously set.
@jonparrott ISTM like you want a `collections.defaultdict(set)`?
In #3325 a similar snippet was **failing** because `policy.owners` returned a `frozenset`. In the `__getitem__` case, the raw binding will be returned, which we [guarantee][1] is a `set`. Is it a bad thing that the two different means of access produce different behavior?
[1]: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/bd9fdfba07fac63a91847628613928a569250c0f/core/google/cloud/iam.py#L73-L74
Defaultdict is pretty much what I'm after, in terms of behavior.
I'm fine with the old/deprecated methods returning a frozenset.
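The `defaultdict(set)` idea discussed above can be illustrated with a small standalone sketch (plain standard library, independent of the google-cloud code; the role and member strings are placeholders):

```python
import collections

plain = {}
try:
    plain["roles/editor"].add("user:[email protected]")
except KeyError:
    print("a plain dict raises KeyError for a role with no bindings")

bindings = collections.defaultdict(set)
bindings["roles/editor"].add("user:[email protected]")  # set is created on first access
print(bindings["roles/editor"])  # {'user:[email protected]'}
print(bindings["roles/viewer"])  # set(): empty, no KeyError
```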
created_at: 2017-05-16T18:46:56Z
patch:
<patch>
diff --git a/core/google/cloud/iam.py b/core/google/cloud/iam.py
--- a/core/google/cloud/iam.py
+++ b/core/google/cloud/iam.py
@@ -59,7 +59,7 @@ class Policy(collections.MutableMapping):
def __init__(self, etag=None, version=None):
self.etag = etag
self.version = version
- self._bindings = {}
+ self._bindings = collections.defaultdict(set)
def __iter__(self):
return iter(self._bindings)
</patch>
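A hedged sketch of the behavior this patch aims for, assuming `Policy` is importable from `google.cloud.iam` as in the diff; the member strings are placeholders:

```python
# Sketch of the post-fix behavior; Policy() defaults (etag=None, version=None)
# are taken from the diff context above.
from google.cloud.iam import Policy

policy = Policy()
policy["roles/editor"].add("user:[email protected]")   # no KeyError any more
assert policy["roles/editor"] == {"user:[email protected]"}
assert policy["roles/viewer"] == set()                 # unset role -> empty set
```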
FAIL_TO_PASS: []
PASS_TO_PASS: []

instance_id: pandas-dev__pandas-36176
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NumPy 1.18 behavior for sorting NaT
Followup to https://github.com/pandas-dev/pandas/pull/29877. cc @jbrockmendel if you could fill out the details.
* Adopt NaT at the end for Index? (we already do for Series)
* Report timedelta NaT-sorting issue to NumPy?
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/core/reshape/tile.py]
1 """
2 Quantilization functions and related stuff
3 """
4 import numpy as np
5
6 from pandas._libs import Timedelta, Timestamp
7 from pandas._libs.lib import infer_dtype
8
9 from pandas.core.dtypes.common import (
10 DT64NS_DTYPE,
11 ensure_int64,
12 is_bool_dtype,
13 is_categorical_dtype,
14 is_datetime64_dtype,
15 is_datetime64tz_dtype,
16 is_datetime_or_timedelta_dtype,
17 is_extension_array_dtype,
18 is_integer,
19 is_integer_dtype,
20 is_list_like,
21 is_scalar,
22 is_timedelta64_dtype,
23 )
24 from pandas.core.dtypes.generic import ABCSeries
25 from pandas.core.dtypes.missing import isna
26
27 from pandas import Categorical, Index, IntervalIndex, to_datetime, to_timedelta
28 import pandas.core.algorithms as algos
29 import pandas.core.nanops as nanops
30
31
32 def cut(
33 x,
34 bins,
35 right: bool = True,
36 labels=None,
37 retbins: bool = False,
38 precision: int = 3,
39 include_lowest: bool = False,
40 duplicates: str = "raise",
41 ordered: bool = True,
42 ):
43 """
44 Bin values into discrete intervals.
45
46 Use `cut` when you need to segment and sort data values into bins. This
47 function is also useful for going from a continuous variable to a
48 categorical variable. For example, `cut` could convert ages to groups of
49 age ranges. Supports binning into an equal number of bins, or a
50 pre-specified array of bins.
51
52 Parameters
53 ----------
54 x : array-like
55 The input array to be binned. Must be 1-dimensional.
56 bins : int, sequence of scalars, or IntervalIndex
57 The criteria to bin by.
58
59 * int : Defines the number of equal-width bins in the range of `x`. The
60 range of `x` is extended by .1% on each side to include the minimum
61 and maximum values of `x`.
62 * sequence of scalars : Defines the bin edges allowing for non-uniform
63 width. No extension of the range of `x` is done.
64 * IntervalIndex : Defines the exact bins to be used. Note that
65 IntervalIndex for `bins` must be non-overlapping.
66
67 right : bool, default True
68 Indicates whether `bins` includes the rightmost edge or not. If
69 ``right == True`` (the default), then the `bins` ``[1, 2, 3, 4]``
70 indicate (1,2], (2,3], (3,4]. This argument is ignored when
71 `bins` is an IntervalIndex.
72 labels : array or False, default None
73 Specifies the labels for the returned bins. Must be the same length as
74 the resulting bins. If False, returns only integer indicators of the
75 bins. This affects the type of the output container (see below).
76 This argument is ignored when `bins` is an IntervalIndex. If True,
77 raises an error. When `ordered=False`, labels must be provided.
78 retbins : bool, default False
79 Whether to return the bins or not. Useful when bins is provided
80 as a scalar.
81 precision : int, default 3
82 The precision at which to store and display the bins labels.
83 include_lowest : bool, default False
84 Whether the first interval should be left-inclusive or not.
85 duplicates : {default 'raise', 'drop'}, optional
86 If bin edges are not unique, raise ValueError or drop non-uniques.
87
88 .. versionadded:: 0.23.0
89 ordered : bool, default True
90 Whether the labels are ordered or not. Applies to returned types
91 Categorical and Series (with Categorical dtype). If True,
92 the resulting categorical will be ordered. If False, the resulting
93 categorical will be unordered (labels must be provided).
94
95 .. versionadded:: 1.1.0
96
97 Returns
98 -------
99 out : Categorical, Series, or ndarray
100 An array-like object representing the respective bin for each value
101 of `x`. The type depends on the value of `labels`.
102
103 * True (default) : returns a Series for Series `x` or a
104 Categorical for all other inputs. The values stored within
105 are Interval dtype.
106
107 * sequence of scalars : returns a Series for Series `x` or a
108 Categorical for all other inputs. The values stored within
109 are whatever the type in the sequence is.
110
111 * False : returns an ndarray of integers.
112
113 bins : numpy.ndarray or IntervalIndex.
114 The computed or specified bins. Only returned when `retbins=True`.
115 For scalar or sequence `bins`, this is an ndarray with the computed
116 bins. If set `duplicates=drop`, `bins` will drop non-unique bin. For
117 an IntervalIndex `bins`, this is equal to `bins`.
118
119 See Also
120 --------
121 qcut : Discretize variable into equal-sized buckets based on rank
122 or based on sample quantiles.
123 Categorical : Array type for storing data that come from a
124 fixed set of values.
125 Series : One-dimensional array with axis labels (including time series).
126 IntervalIndex : Immutable Index implementing an ordered, sliceable set.
127
128 Notes
129 -----
130 Any NA values will be NA in the result. Out of bounds values will be NA in
131 the resulting Series or Categorical object.
132
133 Examples
134 --------
135 Discretize into three equal-sized bins.
136
137 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3)
138 ... # doctest: +ELLIPSIS
139 [(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
140 Categories (3, interval[float64]): [(0.994, 3.0] < (3.0, 5.0] ...
141
142 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, retbins=True)
143 ... # doctest: +ELLIPSIS
144 ([(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
145 Categories (3, interval[float64]): [(0.994, 3.0] < (3.0, 5.0] ...
146 array([0.994, 3. , 5. , 7. ]))
147
148 Discovers the same bins, but assign them specific labels. Notice that
149 the returned Categorical's categories are `labels` and is ordered.
150
151 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]),
152 ... 3, labels=["bad", "medium", "good"])
153 ['bad', 'good', 'medium', 'medium', 'good', 'bad']
154 Categories (3, object): ['bad' < 'medium' < 'good']
155
156 ``ordered=False`` will result in unordered categories when labels are passed.
157 This parameter can be used to allow non-unique labels:
158
159 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3,
160 ... labels=["B", "A", "B"], ordered=False)
161 ['B', 'B', 'A', 'A', 'B', 'B']
162 Categories (2, object): ['A', 'B']
163
164 ``labels=False`` implies you just want the bins back.
165
166 >>> pd.cut([0, 1, 1, 2], bins=4, labels=False)
167 array([0, 1, 1, 3])
168
169 Passing a Series as an input returns a Series with categorical dtype:
170
171 >>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
172 ... index=['a', 'b', 'c', 'd', 'e'])
173 >>> pd.cut(s, 3)
174 ... # doctest: +ELLIPSIS
175 a (1.992, 4.667]
176 b (1.992, 4.667]
177 c (4.667, 7.333]
178 d (7.333, 10.0]
179 e (7.333, 10.0]
180 dtype: category
181 Categories (3, interval[float64]): [(1.992, 4.667] < (4.667, ...
182
183 Passing a Series as an input returns a Series with mapping value.
184 It is used to map numerically to intervals based on bins.
185
186 >>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
187 ... index=['a', 'b', 'c', 'd', 'e'])
188 >>> pd.cut(s, [0, 2, 4, 6, 8, 10], labels=False, retbins=True, right=False)
189 ... # doctest: +ELLIPSIS
190 (a 1.0
191 b 2.0
192 c 3.0
193 d 4.0
194 e NaN
195 dtype: float64,
196 array([ 0, 2, 4, 6, 8, 10]))
197
198     Use the `drop` option when bin edges are not unique
199
200 >>> pd.cut(s, [0, 2, 4, 6, 10, 10], labels=False, retbins=True,
201 ... right=False, duplicates='drop')
202 ... # doctest: +ELLIPSIS
203 (a 1.0
204 b 2.0
205 c 3.0
206 d 3.0
207 e NaN
208 dtype: float64,
209 array([ 0, 2, 4, 6, 10]))
210
211 Passing an IntervalIndex for `bins` results in those categories exactly.
212 Notice that values not covered by the IntervalIndex are set to NaN. 0
213 is to the left of the first bin (which is closed on the right), and 1.5
214 falls between two bins.
215
216 >>> bins = pd.IntervalIndex.from_tuples([(0, 1), (2, 3), (4, 5)])
217 >>> pd.cut([0, 0.5, 1.5, 2.5, 4.5], bins)
218 [NaN, (0.0, 1.0], NaN, (2.0, 3.0], (4.0, 5.0]]
219 Categories (3, interval[int64]): [(0, 1] < (2, 3] < (4, 5]]
220 """
221 # NOTE: this binning code is changed a bit from histogram for var(x) == 0
222
223 original = x
224 x = _preprocess_for_cut(x)
225 x, dtype = _coerce_to_type(x)
226
227 if not np.iterable(bins):
228 if is_scalar(bins) and bins < 1:
229 raise ValueError("`bins` should be a positive integer.")
230
231 try: # for array-like
232 sz = x.size
233 except AttributeError:
234 x = np.asarray(x)
235 sz = x.size
236
237 if sz == 0:
238 raise ValueError("Cannot cut empty array")
239
240 rng = (nanops.nanmin(x), nanops.nanmax(x))
241 mn, mx = [mi + 0.0 for mi in rng]
242
243 if np.isinf(mn) or np.isinf(mx):
244 # GH 24314
245 raise ValueError(
246 "cannot specify integer `bins` when input data contains infinity"
247 )
248 elif mn == mx: # adjust end points before binning
249 mn -= 0.001 * abs(mn) if mn != 0 else 0.001
250 mx += 0.001 * abs(mx) if mx != 0 else 0.001
251 bins = np.linspace(mn, mx, bins + 1, endpoint=True)
252 else: # adjust end points after binning
253 bins = np.linspace(mn, mx, bins + 1, endpoint=True)
254 adj = (mx - mn) * 0.001 # 0.1% of the range
255 if right:
256 bins[0] -= adj
257 else:
258 bins[-1] += adj
259
260 elif isinstance(bins, IntervalIndex):
261 if bins.is_overlapping:
262 raise ValueError("Overlapping IntervalIndex is not accepted.")
263
264 else:
265 if is_datetime64tz_dtype(bins):
266 bins = np.asarray(bins, dtype=DT64NS_DTYPE)
267 else:
268 bins = np.asarray(bins)
269 bins = _convert_bin_to_numeric_type(bins, dtype)
270
271 # GH 26045: cast to float64 to avoid an overflow
272 if (np.diff(bins.astype("float64")) < 0).any():
273 raise ValueError("bins must increase monotonically.")
274
275 fac, bins = _bins_to_cuts(
276 x,
277 bins,
278 right=right,
279 labels=labels,
280 precision=precision,
281 include_lowest=include_lowest,
282 dtype=dtype,
283 duplicates=duplicates,
284 ordered=ordered,
285 )
286
287 return _postprocess_for_cut(fac, bins, retbins, dtype, original)
288
289
290 def qcut(
291 x,
292 q,
293 labels=None,
294 retbins: bool = False,
295 precision: int = 3,
296 duplicates: str = "raise",
297 ):
298 """
299 Quantile-based discretization function.
300
301 Discretize variable into equal-sized buckets based on rank or based
302 on sample quantiles. For example 1000 values for 10 quantiles would
303 produce a Categorical object indicating quantile membership for each data point.
304
305 Parameters
306 ----------
307 x : 1d ndarray or Series
308 q : int or list-like of float
309 Number of quantiles. 10 for deciles, 4 for quartiles, etc. Alternately
310 array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles.
311 labels : array or False, default None
312 Used as labels for the resulting bins. Must be of the same length as
313 the resulting bins. If False, return only integer indicators of the
314 bins. If True, raises an error.
315 retbins : bool, optional
316 Whether to return the (bins, labels) or not. Can be useful if bins
317 is given as a scalar.
318 precision : int, optional
319 The precision at which to store and display the bins labels.
320 duplicates : {default 'raise', 'drop'}, optional
321 If bin edges are not unique, raise ValueError or drop non-uniques.
322
323 Returns
324 -------
325 out : Categorical or Series or array of integers if labels is False
326 The return type (Categorical or Series) depends on the input: a Series
327 of type category if input is a Series else Categorical. Bins are
328 represented as categories when categorical data is returned.
329 bins : ndarray of floats
330 Returned only if `retbins` is True.
331
332 Notes
333 -----
334 Out of bounds values will be NA in the resulting Categorical object
335
336 Examples
337 --------
338 >>> pd.qcut(range(5), 4)
339 ... # doctest: +ELLIPSIS
340 [(-0.001, 1.0], (-0.001, 1.0], (1.0, 2.0], (2.0, 3.0], (3.0, 4.0]]
341 Categories (4, interval[float64]): [(-0.001, 1.0] < (1.0, 2.0] ...
342
343 >>> pd.qcut(range(5), 3, labels=["good", "medium", "bad"])
344 ... # doctest: +SKIP
345 [good, good, medium, bad, bad]
346 Categories (3, object): [good < medium < bad]
347
348 >>> pd.qcut(range(5), 4, labels=False)
349 array([0, 0, 1, 2, 3])
350 """
351 original = x
352 x = _preprocess_for_cut(x)
353 x, dtype = _coerce_to_type(x)
354
355 if is_integer(q):
356 quantiles = np.linspace(0, 1, q + 1)
357 else:
358 quantiles = q
359 bins = algos.quantile(x, quantiles)
360 fac, bins = _bins_to_cuts(
361 x,
362 bins,
363 labels=labels,
364 precision=precision,
365 include_lowest=True,
366 dtype=dtype,
367 duplicates=duplicates,
368 )
369
370 return _postprocess_for_cut(fac, bins, retbins, dtype, original)
371
372
373 def _bins_to_cuts(
374 x,
375 bins,
376 right: bool = True,
377 labels=None,
378 precision: int = 3,
379 include_lowest: bool = False,
380 dtype=None,
381 duplicates: str = "raise",
382 ordered: bool = True,
383 ):
384 if not ordered and not labels:
385 raise ValueError("'labels' must be provided if 'ordered = False'")
386
387 if duplicates not in ["raise", "drop"]:
388 raise ValueError(
389 "invalid value for 'duplicates' parameter, valid options are: raise, drop"
390 )
391
392 if isinstance(bins, IntervalIndex):
393 # we have a fast-path here
394 ids = bins.get_indexer(x)
395 result = Categorical.from_codes(ids, categories=bins, ordered=True)
396 return result, bins
397
398 unique_bins = algos.unique(bins)
399 if len(unique_bins) < len(bins) and len(bins) != 2:
400 if duplicates == "raise":
401 raise ValueError(
402 f"Bin edges must be unique: {repr(bins)}.\n"
403 f"You can drop duplicate edges by setting the 'duplicates' kwarg"
404 )
405 else:
406 bins = unique_bins
407
408 side = "left" if right else "right"
409 ids = ensure_int64(bins.searchsorted(x, side=side))
410
411 if include_lowest:
412 ids[x == bins[0]] = 1
413
414 na_mask = isna(x) | (ids == len(bins)) | (ids == 0)
415 has_nas = na_mask.any()
416
417 if labels is not False:
418 if not (labels is None or is_list_like(labels)):
419 raise ValueError(
420 "Bin labels must either be False, None or passed in as a "
421 "list-like argument"
422 )
423
424 elif labels is None:
425 labels = _format_labels(
426 bins, precision, right=right, include_lowest=include_lowest, dtype=dtype
427 )
428 elif ordered and len(set(labels)) != len(labels):
429 raise ValueError(
430 "labels must be unique if ordered=True; pass ordered=False for duplicate labels" # noqa
431 )
432 else:
433 if len(labels) != len(bins) - 1:
434 raise ValueError(
435 "Bin labels must be one fewer than the number of bin edges"
436 )
437 if not is_categorical_dtype(labels):
438 labels = Categorical(
439 labels,
440 categories=labels if len(set(labels)) == len(labels) else None,
441 ordered=ordered,
442 )
443 # TODO: handle mismatch between categorical label order and pandas.cut order.
444 np.putmask(ids, na_mask, 0)
445 result = algos.take_nd(labels, ids - 1)
446
447 else:
448 result = ids - 1
449 if has_nas:
450 result = result.astype(np.float64)
451 np.putmask(result, na_mask, np.nan)
452
453 return result, bins
454
455
456 def _coerce_to_type(x):
457 """
458 if the passed data is of datetime/timedelta, bool or nullable int type,
459 this method converts it to numeric so that cut or qcut method can
460 handle it
461 """
462 dtype = None
463
464 if is_datetime64tz_dtype(x.dtype):
465 dtype = x.dtype
466 elif is_datetime64_dtype(x.dtype):
467 x = to_datetime(x)
468 dtype = np.dtype("datetime64[ns]")
469 elif is_timedelta64_dtype(x.dtype):
470 x = to_timedelta(x)
471 dtype = np.dtype("timedelta64[ns]")
472 elif is_bool_dtype(x.dtype):
473 # GH 20303
474 x = x.astype(np.int64)
475 # To support cut and qcut for IntegerArray we convert to float dtype.
476 # Will properly support in the future.
477 # https://github.com/pandas-dev/pandas/pull/31290
478 # https://github.com/pandas-dev/pandas/issues/31389
479 elif is_extension_array_dtype(x.dtype) and is_integer_dtype(x.dtype):
480 x = x.to_numpy(dtype=np.float64, na_value=np.nan)
481
482 if dtype is not None:
483 # GH 19768: force NaT to NaN during integer conversion
484 x = np.where(x.notna(), x.view(np.int64), np.nan)
485
486 return x, dtype
487
488
489 def _convert_bin_to_numeric_type(bins, dtype):
490 """
491 if the passed bin is of datetime/timedelta type,
492 this method converts it to integer
493
494 Parameters
495 ----------
496 bins : list-like of bins
497 dtype : dtype of data
498
499 Raises
500 ------
501 ValueError if bins are not of a compat dtype to dtype
502 """
503 bins_dtype = infer_dtype(bins, skipna=False)
504 if is_timedelta64_dtype(dtype):
505 if bins_dtype in ["timedelta", "timedelta64"]:
506 bins = to_timedelta(bins).view(np.int64)
507 else:
508 raise ValueError("bins must be of timedelta64 dtype")
509 elif is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype):
510 if bins_dtype in ["datetime", "datetime64"]:
511 bins = to_datetime(bins).view(np.int64)
512 else:
513 raise ValueError("bins must be of datetime64 dtype")
514
515 return bins
516
517
518 def _convert_bin_to_datelike_type(bins, dtype):
519 """
520 Convert bins to a DatetimeIndex or TimedeltaIndex if the original dtype is
521 datelike
522
523 Parameters
524 ----------
525 bins : list-like of bins
526 dtype : dtype of data
527
528 Returns
529 -------
530 bins : Array-like of bins, DatetimeIndex or TimedeltaIndex if dtype is
531 datelike
532 """
533 if is_datetime64tz_dtype(dtype):
534 bins = to_datetime(bins.astype(np.int64), utc=True).tz_convert(dtype.tz)
535 elif is_datetime_or_timedelta_dtype(dtype):
536 bins = Index(bins.astype(np.int64), dtype=dtype)
537 return bins
538
539
540 def _format_labels(
541 bins, precision: int, right: bool = True, include_lowest: bool = False, dtype=None
542 ):
543 """ based on the dtype, return our labels """
544 closed = "right" if right else "left"
545
546 if is_datetime64tz_dtype(dtype):
547 formatter = lambda x: Timestamp(x, tz=dtype.tz)
548 adjust = lambda x: x - Timedelta("1ns")
549 elif is_datetime64_dtype(dtype):
550 formatter = Timestamp
551 adjust = lambda x: x - Timedelta("1ns")
552 elif is_timedelta64_dtype(dtype):
553 formatter = Timedelta
554 adjust = lambda x: x - Timedelta("1ns")
555 else:
556 precision = _infer_precision(precision, bins)
557 formatter = lambda x: _round_frac(x, precision)
558 adjust = lambda x: x - 10 ** (-precision)
559
560 breaks = [formatter(b) for b in bins]
561 if right and include_lowest:
562 # adjust lhs of first interval by precision to account for being right closed
563 breaks[0] = adjust(breaks[0])
564
565 return IntervalIndex.from_breaks(breaks, closed=closed)
566
567
568 def _preprocess_for_cut(x):
569 """
570 handles preprocessing for cut where we convert passed
571 input to array, strip the index information and store it
572 separately
573 """
574 # Check that the passed array is a Pandas or Numpy object
575 # We don't want to strip away a Pandas data-type here (e.g. datetimetz)
576 ndim = getattr(x, "ndim", None)
577 if ndim is None:
578 x = np.asarray(x)
579 if x.ndim != 1:
580 raise ValueError("Input array must be 1 dimensional")
581
582 return x
583
584
585 def _postprocess_for_cut(fac, bins, retbins: bool, dtype, original):
586 """
587 handles post processing for the cut method where
588 we combine the index information if the originally passed
589 datatype was a series
590 """
591 if isinstance(original, ABCSeries):
592 fac = original._constructor(fac, index=original.index, name=original.name)
593
594 if not retbins:
595 return fac
596
597 bins = _convert_bin_to_datelike_type(bins, dtype)
598
599 return fac, bins
600
601
602 def _round_frac(x, precision: int):
603 """
604 Round the fractional part of the given number
605 """
606 if not np.isfinite(x) or x == 0:
607 return x
608 else:
609 frac, whole = np.modf(x)
610 if whole == 0:
611 digits = -int(np.floor(np.log10(abs(frac)))) - 1 + precision
612 else:
613 digits = precision
614 return np.around(x, digits)
615
616
617 def _infer_precision(base_precision: int, bins) -> int:
618 """
619 Infer an appropriate precision for _round_frac
620 """
621 for precision in range(base_precision, 20):
622 levels = [_round_frac(b, precision) for b in bins]
623 if algos.unique(levels).size == bins.size:
624 return precision
625 return base_precision # default
626
[end of pandas/core/reshape/tile.py]
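For orientation, a small hedged example of the datetime path handled by `_coerce_to_type` and `_convert_bin_to_datelike_type` above; the exact interval formatting in the output may vary by pandas version.

```python
import pandas as pd

s = pd.Series(pd.to_datetime(["2020-01-01", "2020-01-10", "2020-01-20", pd.NaT]))

# Datetime values are viewed as int64 internally, binned, and the bin edges
# are converted back to datetime-like labels; NaT stays missing in the result.
print(pd.cut(s, bins=3))

# Explicit datetime bin edges go through the same conversion helpers.
edges = pd.to_datetime(["2019-12-31", "2020-01-05", "2020-01-31"])
print(pd.cut(s, bins=edges))
```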
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pandas-dev/pandas
base_commit: c1d7bbddf945de315995ed84663d672697f0185c
problem_statement:
NumPy 1.18 behavior for sorting NaT
Followup to https://github.com/pandas-dev/pandas/pull/29877. cc @jbrockmendel if you could fill out the details.
* Adopt NaT at the end for Index? (we already do for Series)
* Report timedelta NaT-sorting issue to NumPy?
hints_text:
I reported the timedelta sorting to numpy, see https://github.com/numpy/numpy/issues/15063, I think the plan is to fix this as well for numpy 1.18
I would propose we follow this change for Index.
xref #30460.
So the remaining item here is sorting NaT at the end for DatetimeIndex and TimedeltaIndex? So this would change from
```python
In [4]: pd.DatetimeIndex(['2000', None, '2001']).sort_values()
Out[4]: DatetimeIndex(['NaT', '2000-01-01', '2001-01-01'], dtype='datetime64[ns]', freq=None)
```
to
```python
DatetimeIndex(['2000-01-01', '2001-01-01', 'NaT'], dtype='datetime64[ns]', freq=None)
```
Is this a blocker for 1.0? If so, do we need a deprecation cycle?
Need to double check if this affects searchsorted. IIRC we use m8 values
in some places and i8 in others.
Since other index types sort the NaNs last, I would maybe consider this a bug and just change it (that's also what numpy did, IIUC). Eg:
```
In [22]: pd.Index([1, np.nan, 2]).sort_values()
Out[22]: Float64Index([1.0, 2.0, nan], dtype='float64')
```
If we want to do a deprecation cycle, we probably need to add a `na_position` keyword to Index.sort_values? (Series.sort_values has that, but Index not)
> If we want to do a deprecation cycle, we probably need to add a na_position keyword to Index.sort_values?
if we're calling it a bugfix we can probably do without a deprecation cycle
Do we have consensus that long-term we want NaT to be sorted last for each of DTA/TDA/PA/DTI/TDI/PI? If so, we should be able to update the DTA/TDA/PA versions without a deprecation cycle.
Following NumPy seems fine to me.
I think this is a blocker for the RC ...
I quickly looked at this. Fixing it for the latest numpy is easy (as we then just need to sort with the M8 values instead of i8 values), but we probably want to have it consistent for all numpy versions? (ensuring that might involve a bit more trickery)
The change to enable it for latest numpy would be something like:
```patch
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -191,10 +191,7 @@ class DatetimeIndexOpsMixin(ExtensionIndex):
sorted_index = self.take(_as)
return sorted_index, _as
else:
- # NB: using asi8 instead of _ndarray_values matters in numpy 1.18
- # because the treatment of NaT has been changed to put NaT last
- # instead of first.
- sorted_values = np.sort(self.asi8)
+ sorted_values = np.sort(self.values)
attribs = self._get_attributes_dict()
freq = attribs["freq"]
```
I'm OK with this missing the RC (or we can include it in a second RC if we're doing that).
Otherwise, we'll likely need a keyword to control the na position, which can be done anytime.
I am thinking we should maybe just change it, still for 1.0.0. Not ideal that it was not in the RC, but it's a clear inconsistency within pandas, and it would also be good to be consistent with latest numpy.
@TomAugspurger @jbrockmendel thoughts on this?
Matching numpy seems OK
@jbrockmendel do you have time to look at this?
I'm looking into it today.
Just FYI, I started https://github.com/pandas-dev/pandas/pull/31210, but am unsure how to proceed. The fact that `DatetimeIndex.sort_values().asi8` will no longer be sorted makes this pretty complex to get right.
Pushing this from 1.0 unfortunately. We'll need to re-evaluate how to proceed, but I think an option to control the `na_position` in `sort_values` makes the most sense.
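For reference, a short sketch of the NumPy-level behavior the thread is converging on, assuming NumPy >= 1.18 (where NaT sorts last for datetime64, while on the raw int64 view NaT is the minimum int64 and therefore sorts first):

```python
import numpy as np

values = np.array(["2001-01-01", "NaT", "2000-01-01"], dtype="datetime64[ns]")

# NumPy >= 1.18 sorts NaT to the end of datetime64 data.
print(np.sort(values))

# On the raw int64 view, NaT is the minimum int64 value, so it sorts first;
# this is why sorting asi8 and sorting the M8 values give different orders.
print(np.sort(values.view("i8")).view("datetime64[ns]"))
```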
created_at: 2020-09-06T23:14:17Z
patch:
<patch>
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -228,6 +228,7 @@ Datetimelike
- Bug in :class:`DateOffset` where attributes reconstructed from pickle files differ from original objects when input values exceed normal ranges (e.g months=12) (:issue:`34511`)
- Bug in :meth:`DatetimeIndex.get_slice_bound` where ``datetime.date`` objects were not accepted or naive :class:`Timestamp` with a tz-aware :class:`DatetimeIndex` (:issue:`35690`)
- Bug in :meth:`DatetimeIndex.slice_locs` where ``datetime.date`` objects were not accepted (:issue:`34077`)
+- Bug in :meth:`DatetimeIndex.searchsorted`, :meth:`TimedeltaIndex.searchsorted`, and :meth:`Series.searchsorted` with ``datetime64`` or ``timedelta64`` dtype placement of ``NaT`` values being inconsistent with ``NumPy`` (:issue:`36176`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -858,7 +858,8 @@ def _validate_searchsorted_value(self, value):
# TODO: cast_str? we accept it for scalar
value = self._validate_listlike(value, "searchsorted")
- return self._unbox(value)
+ rv = self._unbox(value)
+ return self._rebox_native(rv)
def _validate_setitem_value(self, value):
msg = (
@@ -937,9 +938,7 @@ def searchsorted(self, value, side="left", sorter=None):
Array of insertion points with the same shape as `value`.
"""
value = self._validate_searchsorted_value(value)
-
- # TODO: Use datetime64 semantics for sorting, xref GH#29844
- return self.asi8.searchsorted(value, side=side, sorter=sorter)
+ return self._data.searchsorted(value, side=side, sorter=sorter)
def value_counts(self, dropna=False):
"""
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []

instance_id: numpy__numpy-5636
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cryptic SystemError when creating array with weird structured-but-empty dtype
Trying to create an array with a weird structured but empty dtype results in a `SystemError`:
```
In [214]: array([(), (), (), (), ()], dtype={'names':[], 'formats':[], 'offsets':[], 'itemsize':12})
---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
<ipython-input-214-1b509554769d> in <module>()
----> 1 array([(), (), (), (), ()], dtype={'names':[], 'formats':[], 'offsets':[], 'itemsize':12})
SystemError: error return without exception set
```
This should be caught somewhere and a suitable exception should be raised instead of `SystemError`.
</issue>
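For contrast with the failing construction in the issue, a brief illustrative sketch (not part of the issue itself): the same offsets/itemsize mechanism is accepted once the dtype has at least one field, which suggests the empty-fields case simply needs a clear error instead of a `SystemError`.

```python
import numpy as np

# A structured dtype with explicit offsets and a padded itemsize is accepted
# as long as it has at least one field.
dt = np.dtype({"names": ["x"], "formats": ["i4"], "offsets": [0], "itemsize": 12})
a = np.zeros(5, dtype=dt)
print(a.itemsize)  # 12

# The variant from the issue, with no fields at all, is what triggers the
# bare SystemError instead of a meaningful exception:
#   np.array([()] * 5,
#            dtype={'names': [], 'formats': [], 'offsets': [], 'itemsize': 12})
```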
<code>
[start of README.txt]
1 NumPy is the fundamental package needed for scientific computing with Python.
2 This package contains:
3
4 * a powerful N-dimensional array object
5 * sophisticated (broadcasting) functions
6 * tools for integrating C/C++ and Fortran code
7 * useful linear algebra, Fourier transform, and random number capabilities.
8
9 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
10
11 More information can be found at the website:
12
13 http://www.numpy.org
14
15 After installation, tests can be run with:
16
17 python -c 'import numpy; numpy.test()'
18
19 The most current development version is always available from our
20 git repository:
21
22 http://github.com/numpy/numpy
23
[end of README.txt]
[start of numpy/core/records.py]
1 """
2 Record Arrays
3 =============
4 Record arrays expose the fields of structured arrays as properties.
5
6 Most commonly, ndarrays contain elements of a single type, e.g. floats,
7 integers, bools etc. However, it is possible for elements to be combinations
8 of these using structured types, such as::
9
10 >>> a = np.array([(1, 2.0), (1, 2.0)], dtype=[('x', int), ('y', float)])
11 >>> a
12 array([(1, 2.0), (1, 2.0)],
13 dtype=[('x', '<i4'), ('y', '<f8')])
14
 15 Here, each element consists of two fields: x (an int), and y (a float).
16 This is known as a structured array. The different fields are analogous
17 to columns in a spread-sheet. The different fields can be accessed as
18 one would a dictionary::
19
20 >>> a['x']
21 array([1, 1])
22
23 >>> a['y']
24 array([ 2., 2.])
25
26 Record arrays allow us to access fields as properties::
27
28 >>> ar = np.rec.array(a)
29
30 >>> ar.x
31 array([1, 1])
32
33 >>> ar.y
34 array([ 2., 2.])
35
36 """
37 from __future__ import division, absolute_import, print_function
38
39 import sys
40 import os
41
42 from . import numeric as sb
43 from . import numerictypes as nt
44 from numpy.compat import isfileobj, bytes, long
45
46 # All of the functions allow formats to be a dtype
47 __all__ = ['record', 'recarray', 'format_parser']
48
49
50 ndarray = sb.ndarray
51
52 _byteorderconv = {'b':'>',
53 'l':'<',
54 'n':'=',
55 'B':'>',
56 'L':'<',
57 'N':'=',
58 'S':'s',
59 's':'s',
60 '>':'>',
61 '<':'<',
62 '=':'=',
63 '|':'|',
64 'I':'|',
65 'i':'|'}
66
67 # formats regular expression
68 # allows multidimension spec with a tuple syntax in front
69 # of the letter code '(2,3)f4' and ' ( 2 , 3 ) f4 '
70 # are equally allowed
71
72 numfmt = nt.typeDict
73
74 def find_duplicate(list):
75 """Find duplication in a list, return a list of duplicated elements"""
76 dup = []
77 for i in range(len(list)):
78 if (list[i] in list[i + 1:]):
79 if (list[i] not in dup):
80 dup.append(list[i])
81 return dup
82
83 class format_parser:
84 """
85 Class to convert formats, names, titles description to a dtype.
86
87 After constructing the format_parser object, the dtype attribute is
88 the converted data-type:
89 ``dtype = format_parser(formats, names, titles).dtype``
90
91 Attributes
92 ----------
93 dtype : dtype
94 The converted data-type.
95
96 Parameters
97 ----------
98 formats : str or list of str
99 The format description, either specified as a string with
100 comma-separated format descriptions in the form ``'f8, i4, a5'``, or
101 a list of format description strings in the form
102 ``['f8', 'i4', 'a5']``.
103 names : str or list/tuple of str
104 The field names, either specified as a comma-separated string in the
105 form ``'col1, col2, col3'``, or as a list or tuple of strings in the
106 form ``['col1', 'col2', 'col3']``.
107 An empty list can be used, in that case default field names
108 ('f0', 'f1', ...) are used.
109 titles : sequence
110 Sequence of title strings. An empty list can be used to leave titles
111 out.
112 aligned : bool, optional
113 If True, align the fields by padding as the C-compiler would.
114 Default is False.
115 byteorder : str, optional
116 If specified, all the fields will be changed to the
117 provided byte-order. Otherwise, the default byte-order is
118 used. For all available string specifiers, see `dtype.newbyteorder`.
119
120 See Also
121 --------
122 dtype, typename, sctype2char
123
124 Examples
125 --------
126 >>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'],
127 ... ['T1', 'T2', 'T3']).dtype
128 dtype([(('T1', 'col1'), '<f8'), (('T2', 'col2'), '<i4'),
129 (('T3', 'col3'), '|S5')])
130
131 `names` and/or `titles` can be empty lists. If `titles` is an empty list,
132 titles will simply not appear. If `names` is empty, default field names
133 will be used.
134
135 >>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'],
136 ... []).dtype
137 dtype([('col1', '<f8'), ('col2', '<i4'), ('col3', '|S5')])
138 >>> np.format_parser(['f8', 'i4', 'a5'], [], []).dtype
139 dtype([('f0', '<f8'), ('f1', '<i4'), ('f2', '|S5')])
140
141 """
142 def __init__(self, formats, names, titles, aligned=False, byteorder=None):
143 self._parseFormats(formats, aligned)
144 self._setfieldnames(names, titles)
145 self._createdescr(byteorder)
146 self.dtype = self._descr
147
148 def _parseFormats(self, formats, aligned=0):
149 """ Parse the field formats """
150
151 if formats is None:
152 raise ValueError("Need formats argument")
153 if isinstance(formats, list):
154 if len(formats) < 2:
155 formats.append('')
156 formats = ','.join(formats)
157 dtype = sb.dtype(formats, aligned)
158 fields = dtype.fields
159 if fields is None:
160 dtype = sb.dtype([('f1', dtype)], aligned)
161 fields = dtype.fields
162 keys = dtype.names
163 self._f_formats = [fields[key][0] for key in keys]
164 self._offsets = [fields[key][1] for key in keys]
165 self._nfields = len(keys)
166
167 def _setfieldnames(self, names, titles):
168 """convert input field names into a list and assign to the _names
169 attribute """
170
171 if (names):
172 if (type(names) in [list, tuple]):
173 pass
174 elif isinstance(names, str):
175 names = names.split(',')
176 else:
177 raise NameError("illegal input names %s" % repr(names))
178
179 self._names = [n.strip() for n in names[:self._nfields]]
180 else:
181 self._names = []
182
183 # if the names are not specified, they will be assigned as
184 # "f0, f1, f2,..."
185 # if not enough names are specified, they will be assigned as "f[n],
186 # f[n+1],..." etc. where n is the number of specified names..."
187 self._names += ['f%d' % i for i in range(len(self._names),
188 self._nfields)]
189 # check for redundant names
190 _dup = find_duplicate(self._names)
191 if _dup:
192 raise ValueError("Duplicate field names: %s" % _dup)
193
194 if (titles):
195 self._titles = [n.strip() for n in titles[:self._nfields]]
196 else:
197 self._titles = []
198 titles = []
199
200 if (self._nfields > len(titles)):
201 self._titles += [None] * (self._nfields - len(titles))
202
203 def _createdescr(self, byteorder):
204 descr = sb.dtype({'names':self._names,
205 'formats':self._f_formats,
206 'offsets':self._offsets,
207 'titles':self._titles})
208 if (byteorder is not None):
209 byteorder = _byteorderconv[byteorder[0]]
210 descr = descr.newbyteorder(byteorder)
211
212 self._descr = descr
213
214 class record(nt.void):
215 """A data-type scalar that allows field access as attribute lookup.
216 """
217
218 # manually set name and module so that this class's type shows up
219 # as numpy.record when printed
220 __name__ = 'record'
221 __module__ = 'numpy'
222
223 def __repr__(self):
224 return self.__str__()
225
226 def __str__(self):
227 return str(self.item())
228
229 def __getattribute__(self, attr):
230 if attr in ['setfield', 'getfield', 'dtype']:
231 return nt.void.__getattribute__(self, attr)
232 try:
233 return nt.void.__getattribute__(self, attr)
234 except AttributeError:
235 pass
236 fielddict = nt.void.__getattribute__(self, 'dtype').fields
237 res = fielddict.get(attr, None)
238 if res:
239 obj = self.getfield(*res[:2])
240 # if it has fields return a record,
241 # otherwise return the object
242 try:
243 dt = obj.dtype
244 except AttributeError:
245 #happens if field is Object type
246 return obj
247 if dt.fields:
248 return obj.view((self.__class__, obj.dtype.fields))
249 return obj
250 else:
251 raise AttributeError("'record' object has no "
252 "attribute '%s'" % attr)
253
254 def __setattr__(self, attr, val):
255 if attr in ['setfield', 'getfield', 'dtype']:
256 raise AttributeError("Cannot set '%s' attribute" % attr)
257 fielddict = nt.void.__getattribute__(self, 'dtype').fields
258 res = fielddict.get(attr, None)
259 if res:
260 return self.setfield(val, *res[:2])
261 else:
262 if getattr(self, attr, None):
263 return nt.void.__setattr__(self, attr, val)
264 else:
265 raise AttributeError("'record' object has no "
266 "attribute '%s'" % attr)
267
268 def __getitem__(self, indx):
269 obj = nt.void.__getitem__(self, indx)
270
271 # copy behavior of record.__getattribute__,
272 if isinstance(obj, nt.void) and obj.dtype.fields:
273 return obj.view((self.__class__, obj.dtype.fields))
274 else:
275 # return a single element
276 return obj
277
278 def pprint(self):
279 """Pretty-print all fields."""
280 # pretty-print all fields
281 names = self.dtype.names
282 maxlen = max(len(name) for name in names)
283 rows = []
284 fmt = '%% %ds: %%s' % maxlen
285 for name in names:
286 rows.append(fmt % (name, getattr(self, name)))
287 return "\n".join(rows)
288
289 # The recarray is almost identical to a standard array (which supports
290 # named fields already) The biggest difference is that it can use
291 # attribute-lookup to find the fields and it is constructed using
292 # a record.
293
294 # If byteorder is given it forces a particular byteorder on all
295 # the fields (and any subfields)
296
297 class recarray(ndarray):
298 """
299 Construct an ndarray that allows field access using attributes.
300
301 Arrays may have a data-type containing fields, analogous
302 to columns in a spread sheet. An example is ``[(x, int), (y, float)]``,
303 where each entry in the array is a pair of ``(int, float)``. Normally,
304 these attributes are accessed using dictionary lookups such as ``arr['x']``
305 and ``arr['y']``. Record arrays allow the fields to be accessed as members
306 of the array, using ``arr.x`` and ``arr.y``.
307
308 Parameters
309 ----------
310 shape : tuple
311 Shape of output array.
312 dtype : data-type, optional
313 The desired data-type. By default, the data-type is determined
314 from `formats`, `names`, `titles`, `aligned` and `byteorder`.
315 formats : list of data-types, optional
316 A list containing the data-types for the different columns, e.g.
317 ``['i4', 'f8', 'i4']``. `formats` does *not* support the new
318 convention of using types directly, i.e. ``(int, float, int)``.
319 Note that `formats` must be a list, not a tuple.
320 Given that `formats` is somewhat limited, we recommend specifying
321 `dtype` instead.
322 names : tuple of str, optional
323 The name of each column, e.g. ``('x', 'y', 'z')``.
324 buf : buffer, optional
325 By default, a new array is created of the given shape and data-type.
326 If `buf` is specified and is an object exposing the buffer interface,
327 the array will use the memory from the existing buffer. In this case,
328 the `offset` and `strides` keywords are available.
329
330 Other Parameters
331 ----------------
332 titles : tuple of str, optional
333 Aliases for column names. For example, if `names` were
334 ``('x', 'y', 'z')`` and `titles` is
335 ``('x_coordinate', 'y_coordinate', 'z_coordinate')``, then
336 ``arr['x']`` is equivalent to both ``arr.x`` and ``arr.x_coordinate``.
337 byteorder : {'<', '>', '='}, optional
338 Byte-order for all fields.
339 aligned : bool, optional
340 Align the fields in memory as the C-compiler would.
341 strides : tuple of ints, optional
342 Buffer (`buf`) is interpreted according to these strides (strides
343 define how many bytes each array element, row, column, etc.
344 occupy in memory).
345 offset : int, optional
346 Start reading buffer (`buf`) from this offset onwards.
347 order : {'C', 'F'}, optional
348 Row-major or column-major order.
349
350 Returns
351 -------
352 rec : recarray
353 Empty array of the given shape and type.
354
355 See Also
356 --------
357 rec.fromrecords : Construct a record array from data.
358 record : fundamental data-type for `recarray`.
359 format_parser : determine a data-type from formats, names, titles.
360
361 Notes
362 -----
363 This constructor can be compared to ``empty``: it creates a new record
364 array but does not fill it with data. To create a record array from data,
365 use one of the following methods:
366
367 1. Create a standard ndarray and convert it to a record array,
368 using ``arr.view(np.recarray)``
369 2. Use the `buf` keyword.
370 3. Use `np.rec.fromrecords`.
371
372 Examples
373 --------
374 Create an array with two fields, ``x`` and ``y``:
375
376 >>> x = np.array([(1.0, 2), (3.0, 4)], dtype=[('x', float), ('y', int)])
377 >>> x
378 array([(1.0, 2), (3.0, 4)],
379 dtype=[('x', '<f8'), ('y', '<i4')])
380
381 >>> x['x']
382 array([ 1., 3.])
383
384 View the array as a record array:
385
386 >>> x = x.view(np.recarray)
387
388 >>> x.x
389 array([ 1., 3.])
390
391 >>> x.y
392 array([2, 4])
393
394 Create a new, empty record array:
395
396 >>> np.recarray((2,),
397 ... dtype=[('x', int), ('y', float), ('z', int)]) #doctest: +SKIP
398 rec.array([(-1073741821, 1.2249118382103472e-301, 24547520),
399 (3471280, 1.2134086255804012e-316, 0)],
400 dtype=[('x', '<i4'), ('y', '<f8'), ('z', '<i4')])
401
402 """
403
404 # manually set name and module so that this class's type shows
405 # up as "numpy.recarray" when printed
406 __name__ = 'recarray'
407 __module__ = 'numpy'
408
409 def __new__(subtype, shape, dtype=None, buf=None, offset=0, strides=None,
410 formats=None, names=None, titles=None,
411 byteorder=None, aligned=False, order='C'):
412
413 if dtype is not None:
414 descr = sb.dtype(dtype)
415 else:
416 descr = format_parser(formats, names, titles, aligned, byteorder)._descr
417
418 if buf is None:
419 self = ndarray.__new__(subtype, shape, (record, descr), order=order)
420 else:
421 self = ndarray.__new__(subtype, shape, (record, descr),
422 buffer=buf, offset=offset,
423 strides=strides, order=order)
424 return self
425
426 def __getattribute__(self, attr):
427 # See if ndarray has this attr, and return it if so. (note that this
428 # means a field with the same name as an ndarray attr cannot be
429 # accessed by attribute).
430 try:
431 return object.__getattribute__(self, attr)
432 except AttributeError: # attr must be a fieldname
433 pass
434
435 # look for a field with this name
436 fielddict = ndarray.__getattribute__(self, 'dtype').fields
437 try:
438 res = fielddict[attr][:2]
439 except (TypeError, KeyError):
440 raise AttributeError("recarray has no attribute %s" % attr)
441 obj = self.getfield(*res)
442
443 # At this point obj will always be a recarray, since (see
444 # PyArray_GetField) the type of obj is inherited. Next, if obj.dtype is
445 # non-structured, convert it to an ndarray. If obj is structured leave
446 # it as a recarray, but make sure to convert to the same dtype.type (eg
447 # to preserve numpy.record type if present), since nested structured
448 # fields do not inherit type.
449 if obj.dtype.fields:
450 return obj.view(dtype=(self.dtype.type, obj.dtype.fields))
451 else:
452 return obj.view(ndarray)
453
454 # Save the dictionary.
455 # If the attr is a field name and not in the saved dictionary
456 # Undo any "setting" of the attribute and do a setfield
457 # Thus, you can't create attributes on-the-fly that are field names.
458 def __setattr__(self, attr, val):
459 newattr = attr not in self.__dict__
460 try:
461 ret = object.__setattr__(self, attr, val)
462 except:
463 fielddict = ndarray.__getattribute__(self, 'dtype').fields or {}
464 if attr not in fielddict:
465 exctype, value = sys.exc_info()[:2]
466 raise exctype(value)
467 else:
468 fielddict = ndarray.__getattribute__(self, 'dtype').fields or {}
469 if attr not in fielddict:
470 return ret
471 if newattr: # We just added this one
472 try: # or this setattr worked on an internal
473 # attribute.
474 object.__delattr__(self, attr)
475 except:
476 return ret
477 try:
478 res = fielddict[attr][:2]
479 except (TypeError, KeyError):
480 raise AttributeError("record array has no attribute %s" % attr)
481 return self.setfield(val, *res)
482
483 def __getitem__(self, indx):
484 obj = ndarray.__getitem__(self, indx)
485
486 # copy behavior of getattr, except that here
487 # we might also be returning a single element
488 if isinstance(obj, ndarray):
489 if obj.dtype.fields:
490 return obj.view(dtype=(self.dtype.type, obj.dtype.fields))
491 else:
492 return obj.view(type=ndarray)
493 else:
494 # return a single element
495 return obj
496
497 def __repr__(self):
498 # get data/shape string. logic taken from numeric.array_repr
499 if self.size > 0 or self.shape==(0,):
500 lst = sb.array2string(self, separator=', ')
501 else:
502 # show zero-length shape unless it is (0,)
503 lst = "[], shape=%s" % (repr(self.shape),)
504
505 if self.dtype.type is record:
506 # If this is a full record array (has numpy.record dtype),
507 # represent it using the rec.array function. Since rec.array
508 # converts dtype to a numpy.record for us, use only dtype.descr,
509 # not repr(dtype).
510 lf = '\n'+' '*len("rec.array(")
511 return ('rec.array(%s, %sdtype=%s)' %
512 (lst, lf, repr(self.dtype.descr)))
513 else:
514 # otherwise represent it using np.array plus a view
515 # (There is currently (v1.10) no other easy way to create it)
516 lf = '\n'+' '*len("array(")
517 return ('array(%s, %sdtype=%s).view(numpy.recarray)' %
518 (lst, lf, str(self.dtype)))
519
520 def field(self, attr, val=None):
521 if isinstance(attr, int):
522 names = ndarray.__getattribute__(self, 'dtype').names
523 attr = names[attr]
524
525 fielddict = ndarray.__getattribute__(self, 'dtype').fields
526
527 res = fielddict[attr][:2]
528
529 if val is None:
530 obj = self.getfield(*res)
531 if obj.dtype.fields:
532 return obj
533 return obj.view(ndarray)
534 else:
535 return self.setfield(val, *res)
536
537 def view(self, dtype=None, type=None):
538 if dtype is None:
539 return ndarray.view(self, type)
540 elif type is None:
541 try:
542 if issubclass(dtype, ndarray):
543 return ndarray.view(self, dtype)
544 except TypeError:
545 pass
546 dtype = sb.dtype(dtype)
547 if dtype.fields is None:
548 return self.__array__().view(dtype)
549 return ndarray.view(self, dtype)
550 else:
551 return ndarray.view(self, dtype, type)
552
553
554 def fromarrays(arrayList, dtype=None, shape=None, formats=None,
555 names=None, titles=None, aligned=False, byteorder=None):
556 """ create a record array from a (flat) list of arrays
557
558 >>> x1=np.array([1,2,3,4])
559 >>> x2=np.array(['a','dd','xyz','12'])
560 >>> x3=np.array([1.1,2,3,4])
561 >>> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c')
562 >>> print r[1]
563 (2, 'dd', 2.0)
564 >>> x1[1]=34
565 >>> r.a
566 array([1, 2, 3, 4])
567 """
568
569 arrayList = [sb.asarray(x) for x in arrayList]
570
571 if shape is None or shape == 0:
572 shape = arrayList[0].shape
573
574 if isinstance(shape, int):
575 shape = (shape,)
576
577 if formats is None and dtype is None:
578 # go through each object in the list to see if it is an ndarray
579 # and determine the formats.
580 formats = []
581 for obj in arrayList:
582 if not isinstance(obj, ndarray):
583 raise ValueError("item in the array list must be an ndarray.")
584 formats.append(obj.dtype.str)
585 formats = ','.join(formats)
586
587 if dtype is not None:
588 descr = sb.dtype(dtype)
589 _names = descr.names
590 else:
591 parsed = format_parser(formats, names, titles, aligned, byteorder)
592 _names = parsed._names
593 descr = parsed._descr
594
595 # Determine shape from data-type.
596 if len(descr) != len(arrayList):
597 raise ValueError("mismatch between the number of fields "
598 "and the number of arrays")
599
600 d0 = descr[0].shape
601 nn = len(d0)
602 if nn > 0:
603 shape = shape[:-nn]
604
605 for k, obj in enumerate(arrayList):
606 nn = len(descr[k].shape)
607 testshape = obj.shape[:len(obj.shape) - nn]
608 if testshape != shape:
609 raise ValueError("array-shape mismatch in array %d" % k)
610
611 _array = recarray(shape, descr)
612
613 # populate the record array (makes a copy)
614 for i in range(len(arrayList)):
615 _array[_names[i]] = arrayList[i]
616
617 return _array
618
619 # shape must be 1-d if you use list of lists...
620 def fromrecords(recList, dtype=None, shape=None, formats=None, names=None,
621 titles=None, aligned=False, byteorder=None):
622 """ create a recarray from a list of records in text form
623
624 The data in the same field can be heterogeneous; they will be promoted
625 to the highest data type. This method is intended for creating
626 smaller record arrays. If used to create a large array without formats
627 defined,
628
629 r=fromrecords([(2,3.,'abc')]*100000)
630
631 it can be slow.
632
633 If formats is None, then this will auto-detect formats. Use list of
634 tuples rather than list of lists for faster processing.
635
636 >>> r=np.core.records.fromrecords([(456,'dbe',1.2),(2,'de',1.3)],
637 ... names='col1,col2,col3')
638 >>> print r[0]
639 (456, 'dbe', 1.2)
640 >>> r.col1
641 array([456, 2])
642 >>> r.col2
643 array(['dbe', 'de'],
644 dtype='|S3')
645 >>> import pickle
646 >>> print pickle.loads(pickle.dumps(r))
647 [(456, 'dbe', 1.2) (2, 'de', 1.3)]
648 """
649
650 nfields = len(recList[0])
651 if formats is None and dtype is None: # slower
652 obj = sb.array(recList, dtype=object)
653 arrlist = [sb.array(obj[..., i].tolist()) for i in range(nfields)]
654 return fromarrays(arrlist, formats=formats, shape=shape, names=names,
655 titles=titles, aligned=aligned, byteorder=byteorder)
656
657 if dtype is not None:
658 descr = sb.dtype((record, dtype))
659 else:
660 descr = format_parser(formats, names, titles, aligned, byteorder)._descr
661
662 try:
663 retval = sb.array(recList, dtype=descr)
664 except TypeError: # list of lists instead of list of tuples
665 if (shape is None or shape == 0):
666 shape = len(recList)
667 if isinstance(shape, (int, long)):
668 shape = (shape,)
669 if len(shape) > 1:
670 raise ValueError("Can only deal with 1-d array.")
671 _array = recarray(shape, descr)
672 for k in range(_array.size):
673 _array[k] = tuple(recList[k])
674 return _array
675 else:
676 if shape is not None and retval.shape != shape:
677 retval.shape = shape
678
679 res = retval.view(recarray)
680
681 return res
682
683
684 def fromstring(datastring, dtype=None, shape=None, offset=0, formats=None,
685 names=None, titles=None, aligned=False, byteorder=None):
686 """ create a (read-only) record array from binary data contained in
687 a string"""
688
689
690 if dtype is None and formats is None:
691 raise ValueError("Must have dtype= or formats=")
692
693 if dtype is not None:
694 descr = sb.dtype(dtype)
695 else:
696 descr = format_parser(formats, names, titles, aligned, byteorder)._descr
697
698 itemsize = descr.itemsize
699 if (shape is None or shape == 0 or shape == -1):
700 shape = (len(datastring) - offset) / itemsize
701
702 _array = recarray(shape, descr, buf=datastring, offset=offset)
703 return _array
704
705 def get_remaining_size(fd):
706 try:
707 fn = fd.fileno()
708 except AttributeError:
709 return os.path.getsize(fd.name) - fd.tell()
710 st = os.fstat(fn)
711 size = st.st_size - fd.tell()
712 return size
713
714 def fromfile(fd, dtype=None, shape=None, offset=0, formats=None,
715 names=None, titles=None, aligned=False, byteorder=None):
716 """Create an array from binary file data
717
718 If file is a string then that file is opened, else it is assumed
719 to be a file object.
720
721 >>> from tempfile import TemporaryFile
722 >>> a = np.empty(10,dtype='f8,i4,a5')
723 >>> a[5] = (0.5,10,'abcde')
724 >>>
725 >>> fd=TemporaryFile()
726 >>> a = a.newbyteorder('<')
727 >>> a.tofile(fd)
728 >>>
729 >>> fd.seek(0)
730 >>> r=np.core.records.fromfile(fd, formats='f8,i4,a5', shape=10,
731 ... byteorder='<')
732 >>> print r[5]
733 (0.5, 10, 'abcde')
734 >>> r.shape
735 (10,)
736 """
737
738 if (shape is None or shape == 0):
739 shape = (-1,)
740 elif isinstance(shape, (int, long)):
741 shape = (shape,)
742
743 name = 0
744 if isinstance(fd, str):
745 name = 1
746 fd = open(fd, 'rb')
747 if (offset > 0):
748 fd.seek(offset, 1)
749 size = get_remaining_size(fd)
750
751 if dtype is not None:
752 descr = sb.dtype(dtype)
753 else:
754 descr = format_parser(formats, names, titles, aligned, byteorder)._descr
755
756 itemsize = descr.itemsize
757
758 shapeprod = sb.array(shape).prod()
759 shapesize = shapeprod * itemsize
760 if shapesize < 0:
761 shape = list(shape)
762 shape[ shape.index(-1) ] = size / -shapesize
763 shape = tuple(shape)
764 shapeprod = sb.array(shape).prod()
765
766 nbytes = shapeprod * itemsize
767
768 if nbytes > size:
769 raise ValueError(
770 "Not enough bytes left in file for specified shape and type")
771
772 # create the array
773 _array = recarray(shape, descr)
774 nbytesread = fd.readinto(_array.data)
775 if nbytesread != nbytes:
776 raise IOError("Didn't read as many bytes as expected")
777 if name:
778 fd.close()
779
780 return _array
781
782 def array(obj, dtype=None, shape=None, offset=0, strides=None, formats=None,
783 names=None, titles=None, aligned=False, byteorder=None, copy=True):
784 """Construct a record array from a wide-variety of objects.
785 """
786
787 if (isinstance(obj, (type(None), str)) or isfileobj(obj)) \
788 and (formats is None) \
789 and (dtype is None):
790 raise ValueError("Must define formats (or dtype) if object is "\
791 "None, string, or an open file")
792
793 kwds = {}
794 if dtype is not None:
795 dtype = sb.dtype(dtype)
796 elif formats is not None:
797 dtype = format_parser(formats, names, titles,
798 aligned, byteorder)._descr
799 else:
800 kwds = {'formats': formats,
801 'names' : names,
802 'titles' : titles,
803 'aligned' : aligned,
804 'byteorder' : byteorder
805 }
806
807 if obj is None:
808 if shape is None:
809 raise ValueError("Must define a shape if obj is None")
810 return recarray(shape, dtype, buf=obj, offset=offset, strides=strides)
811
812 elif isinstance(obj, bytes):
813 return fromstring(obj, dtype, shape=shape, offset=offset, **kwds)
814
815 elif isinstance(obj, (list, tuple)):
816 if isinstance(obj[0], (tuple, list)):
817 return fromrecords(obj, dtype=dtype, shape=shape, **kwds)
818 else:
819 return fromarrays(obj, dtype=dtype, shape=shape, **kwds)
820
821 elif isinstance(obj, recarray):
822 if dtype is not None and (obj.dtype != dtype):
823 new = obj.view(dtype)
824 else:
825 new = obj
826 if copy:
827 new = new.copy()
828 return new
829
830 elif isfileobj(obj):
831 return fromfile(obj, dtype=dtype, shape=shape, offset=offset)
832
833 elif isinstance(obj, ndarray):
834 if dtype is not None and (obj.dtype != dtype):
835 new = obj.view(dtype)
836 else:
837 new = obj
838 if copy:
839 new = new.copy()
840 res = new.view(recarray)
841 if issubclass(res.dtype.type, nt.void):
842 res.dtype = sb.dtype((record, res.dtype))
843 return res
844
845 else:
846 interface = getattr(obj, "__array_interface__", None)
847 if interface is None or not isinstance(interface, dict):
848 raise ValueError("Unknown input type")
849 obj = sb.array(obj)
850 if dtype is not None and (obj.dtype != dtype):
851 obj = obj.view(dtype)
852 res = obj.view(recarray)
853 if issubclass(res.dtype.type, nt.void):
854 res.dtype = sb.dtype((record, res.dtype))
855 return res
856
[end of numpy/core/records.py]
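As a minimal sketch of how the pieces in the file above fit together (an illustration using only NumPy's public API, not part of the file itself): `format_parser` builds a structured dtype, and `fromrecords` returns a `recarray` whose fields are reachable as attributes.

```python
import numpy as np

# format_parser converts formats/names/titles into a structured dtype.
dt = np.format_parser(['f8', 'i4'], ['x', 'y'], []).dtype

# fromrecords builds a recarray; fields can then be read as attributes.
r = np.core.records.fromrecords([(1.0, 2), (3.0, 4)], dtype=dt)
print(r.x)  # array([ 1.,  3.])
print(r.y)  # array([2, 4])
```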
[start of numpy/polynomial/polyutils.py]
1 """
2 Utility classes and functions for the polynomial modules.
3
4 This module provides: error and warning objects; a polynomial base class;
5 and some routines used in both the `polynomial` and `chebyshev` modules.
6
7 Error objects
8 -------------
9
10 .. autosummary::
11 :toctree: generated/
12
13 PolyError base class for this sub-package's errors.
14 PolyDomainError raised when domains are mismatched.
15
16 Warning objects
17 ---------------
18
19 .. autosummary::
20 :toctree: generated/
21
22 RankWarning raised in least-squares fit for rank-deficient matrix.
23
24 Base class
25 ----------
26
27 .. autosummary::
28 :toctree: generated/
29
30 PolyBase Obsolete base class for the polynomial classes. Do not use.
31
32 Functions
33 ---------
34
35 .. autosummary::
36 :toctree: generated/
37
38 as_series convert list of array_likes into 1-D arrays of common type.
39 trimseq remove trailing zeros.
40 trimcoef remove small trailing coefficients.
41 getdomain return the domain appropriate for a given set of abscissae.
42 mapdomain maps points between domains.
43 mapparms parameters of the linear map between domains.
44
45 """
46 from __future__ import division, absolute_import, print_function
47
48 import numpy as np
49
50 __all__ = [
51 'RankWarning', 'PolyError', 'PolyDomainError', 'as_series', 'trimseq',
52 'trimcoef', 'getdomain', 'mapdomain', 'mapparms', 'PolyBase']
53
54 #
55 # Warnings and Exceptions
56 #
57
58 class RankWarning(UserWarning):
59 """Issued by chebfit when the design matrix is rank deficient."""
60 pass
61
62 class PolyError(Exception):
63 """Base class for errors in this module."""
64 pass
65
66 class PolyDomainError(PolyError):
67 """Issued by the generic Poly class when two domains don't match.
68
69 This is raised when a binary operation is passed Poly objects with
70 different domains.
71
72 """
73 pass
74
75 #
76 # Base class for all polynomial types
77 #
78
79 class PolyBase(object):
80 """
81 Base class for all polynomial types.
82
83 Deprecated in numpy 1.9.0, use the abstract
84 ABCPolyBase class instead. Note that the latter
85 requires a number of virtual functions to be
86 implemented.
87
88 """
89 pass
90
91 #
92 # Helper functions to convert inputs to 1-D arrays
93 #
94 def trimseq(seq):
95 """Remove small Poly series coefficients.
96
97 Parameters
98 ----------
99 seq : sequence
100 Sequence of Poly series coefficients. This routine fails for
101 empty sequences.
102
103 Returns
104 -------
105 series : sequence
106 Subsequence with trailing zeros removed. If the resulting sequence
107 would be empty, return the first element. The returned sequence may
108 or may not be a view.
109
110 Notes
111 -----
112 Do not lose the type info if the sequence contains unknown objects.
113
114 """
115 if len(seq) == 0:
116 return seq
117 else:
118 for i in range(len(seq) - 1, -1, -1):
119 if seq[i] != 0:
120 break
121 return seq[:i+1]
122
123
124 def as_series(alist, trim=True):
125 """
126 Return argument as a list of 1-d arrays.
127
128 The returned list contains array(s) of dtype double, complex double, or
129 object. A 1-d argument of shape ``(N,)`` is parsed into ``N`` arrays of
130 size one; a 2-d argument of shape ``(M,N)`` is parsed into ``M`` arrays
131 of size ``N`` (i.e., is "parsed by row"); and a higher dimensional array
132 raises a ValueError if it is not first reshaped into either a 1-d or 2-d
133 array.
134
135 Parameters
136 ----------
137 alist : array_like
138 A 1- or 2-d array_like
139 trim : boolean, optional
140 When True, trailing zeros are removed from the inputs.
141 When False, the inputs are passed through intact.
142
143 Returns
144 -------
145 [a1, a2,...] : list of 1-D arrays
146 A copy of the input data as a list of 1-d arrays.
147
148 Raises
149 ------
150 ValueError
151 Raised when `as_series` cannot convert its input to 1-d arrays, or at
152 least one of the resulting arrays is empty.
153
154 Examples
155 --------
156 >>> from numpy import polynomial as P
157 >>> a = np.arange(4)
158 >>> P.as_series(a)
159 [array([ 0.]), array([ 1.]), array([ 2.]), array([ 3.])]
160 >>> b = np.arange(6).reshape((2,3))
161 >>> P.as_series(b)
162 [array([ 0., 1., 2.]), array([ 3., 4., 5.])]
163
164 """
165 arrays = [np.array(a, ndmin=1, copy=0) for a in alist]
166 if min([a.size for a in arrays]) == 0:
167 raise ValueError("Coefficient array is empty")
168 if any([a.ndim != 1 for a in arrays]):
169 raise ValueError("Coefficient array is not 1-d")
170 if trim:
171 arrays = [trimseq(a) for a in arrays]
172
173 if any([a.dtype == np.dtype(object) for a in arrays]):
174 ret = []
175 for a in arrays:
176 if a.dtype != np.dtype(object):
177 tmp = np.empty(len(a), dtype=np.dtype(object))
178 tmp[:] = a[:]
179 ret.append(tmp)
180 else:
181 ret.append(a.copy())
182 else:
183 try:
184 dtype = np.common_type(*arrays)
185 except:
186 raise ValueError("Coefficient arrays have no common type")
187 ret = [np.array(a, copy=1, dtype=dtype) for a in arrays]
188 return ret
189
190
191 def trimcoef(c, tol=0):
192 """
193 Remove "small" "trailing" coefficients from a polynomial.
194
195 "Small" means "small in absolute value" and is controlled by the
196 parameter `tol`; "trailing" means highest order coefficient(s), e.g., in
197 ``[0, 1, 1, 0, 0]`` (which represents ``0 + x + x**2 + 0*x**3 + 0*x**4``)
198 both the 3-rd and 4-th order coefficients would be "trimmed."
199
200 Parameters
201 ----------
202 c : array_like
203 1-d array of coefficients, ordered from lowest order to highest.
204 tol : number, optional
205 Trailing (i.e., highest order) elements with absolute value less
206 than or equal to `tol` (default value is zero) are removed.
207
208 Returns
209 -------
210 trimmed : ndarray
211 1-d array with trailing zeros removed. If the resulting series
212 would be empty, a series containing a single zero is returned.
213
214 Raises
215 ------
216 ValueError
217 If `tol` < 0
218
219 See Also
220 --------
221 trimseq
222
223 Examples
224 --------
225 >>> from numpy import polynomial as P
226 >>> P.trimcoef((0,0,3,0,5,0,0))
227 array([ 0., 0., 3., 0., 5.])
228 >>> P.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
229 array([ 0.])
230 >>> i = complex(0,1) # works for complex
231 >>> P.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
232 array([ 0.0003+0.j , 0.0010-0.001j])
233
234 """
235 if tol < 0:
236 raise ValueError("tol must be non-negative")
237
238 [c] = as_series([c])
239 [ind] = np.where(np.abs(c) > tol)
240 if len(ind) == 0:
241 return c[:1]*0
242 else:
243 return c[:ind[-1] + 1].copy()
244
245 def getdomain(x):
246 """
247 Return a domain suitable for given abscissae.
248
249 Find a domain suitable for a polynomial or Chebyshev series
250 defined at the values supplied.
251
252 Parameters
253 ----------
254 x : array_like
255 1-d array of abscissae whose domain will be determined.
256
257 Returns
258 -------
259 domain : ndarray
260 1-d array containing two values. If the inputs are complex, then
261 the two returned points are the lower left and upper right corners
262 of the smallest rectangle (aligned with the axes) in the complex
263 plane containing the points `x`. If the inputs are real, then the
264 two points are the ends of the smallest interval containing the
265 points `x`.
266
267 See Also
268 --------
269 mapparms, mapdomain
270
271 Examples
272 --------
273 >>> from numpy.polynomial import polyutils as pu
274 >>> points = np.arange(4)**2 - 5; points
275 array([-5, -4, -1, 4])
276 >>> pu.getdomain(points)
277 array([-5., 4.])
278 >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle
279 >>> pu.getdomain(c)
280 array([-1.-1.j, 1.+1.j])
281
282 """
283 [x] = as_series([x], trim=False)
284 if x.dtype.char in np.typecodes['Complex']:
285 rmin, rmax = x.real.min(), x.real.max()
286 imin, imax = x.imag.min(), x.imag.max()
287 return np.array((complex(rmin, imin), complex(rmax, imax)))
288 else:
289 return np.array((x.min(), x.max()))
290
291 def mapparms(old, new):
292 """
293 Linear map parameters between domains.
294
295 Return the parameters of the linear map ``offset + scale*x`` that maps
296 `old` to `new` such that ``old[i] -> new[i]``, ``i = 0, 1``.
297
298 Parameters
299 ----------
300 old, new : array_like
301 Domains. Each domain must (successfully) convert to a 1-d array
302 containing precisely two values.
303
304 Returns
305 -------
306 offset, scale : scalars
307 The map ``L(x) = offset + scale*x`` maps the first domain to the
308 second.
309
310 See Also
311 --------
312 getdomain, mapdomain
313
314 Notes
315 -----
316 Also works for complex numbers, and thus can be used to calculate the
317 parameters required to map any line in the complex plane to any other
318 line therein.
319
320 Examples
321 --------
322 >>> from numpy import polynomial as P
323 >>> P.mapparms((-1,1),(-1,1))
324 (0.0, 1.0)
325 >>> P.mapparms((1,-1),(-1,1))
326 (0.0, -1.0)
327 >>> i = complex(0,1)
328 >>> P.mapparms((-i,-1),(1,i))
329 ((1+1j), (1+0j))
330
331 """
332 oldlen = old[1] - old[0]
333 newlen = new[1] - new[0]
334 off = (old[1]*new[0] - old[0]*new[1])/oldlen
335 scl = newlen/oldlen
336 return off, scl
337
338 def mapdomain(x, old, new):
339 """
340 Apply linear map to input points.
341
342 The linear map ``offset + scale*x`` that maps the domain `old` to
343 the domain `new` is applied to the points `x`.
344
345 Parameters
346 ----------
347 x : array_like
348 Points to be mapped. If `x` is a subtype of ndarray the subtype
349 will be preserved.
350 old, new : array_like
351 The two domains that determine the map. Each must (successfully)
352 convert to 1-d arrays containing precisely two values.
353
354 Returns
355 -------
356 x_out : ndarray
357 Array of points of the same shape as `x`, after application of the
358 linear map between the two domains.
359
360 See Also
361 --------
362 getdomain, mapparms
363
364 Notes
365 -----
366 Effectively, this implements:
367
368 .. math ::
369 x\\_out = new[0] + m(x - old[0])
370
371 where
372
373 .. math ::
374 m = \\frac{new[1]-new[0]}{old[1]-old[0]}
375
376 Examples
377 --------
378 >>> from numpy import polynomial as P
379 >>> old_domain = (-1,1)
380 >>> new_domain = (0,2*np.pi)
381 >>> x = np.linspace(-1,1,6); x
382 array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ])
383 >>> x_out = P.mapdomain(x, old_domain, new_domain); x_out
384 array([ 0. , 1.25663706, 2.51327412, 3.76991118, 5.02654825,
385 6.28318531])
386 >>> x - P.mapdomain(x_out, new_domain, old_domain)
387 array([ 0., 0., 0., 0., 0., 0.])
388
389 Also works for complex numbers (and thus can be used to map any line in
390 the complex plane to any other line therein).
391
392 >>> i = complex(0,1)
393 >>> old = (-1 - i, 1 + i)
394 >>> new = (-1 + i, 1 - i)
395 >>> z = np.linspace(old[0], old[1], 6); z
396 array([-1.0-1.j , -0.6-0.6j, -0.2-0.2j, 0.2+0.2j, 0.6+0.6j, 1.0+1.j ])
397 >>> new_z = P.mapdomain(z, old, new); new_z
398 array([-1.0+1.j , -0.6+0.6j, -0.2+0.2j, 0.2-0.2j, 0.6-0.6j, 1.0-1.j ])
399
400 """
401 x = np.asanyarray(x)
402 off, scl = mapparms(old, new)
403 return off + scl*x
404
[end of numpy/polynomial/polyutils.py]
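A quick sanity check of the `mapparms`/`mapdomain` relationship documented above (a sketch assuming only the public `numpy.polynomial.polyutils` API):

```python
import numpy as np
from numpy.polynomial import polyutils as pu

# The linear map off + scl*x should send the old domain (-1, 1) onto (0, 2*pi).
off, scl = pu.mapparms((-1, 1), (0, 2 * np.pi))
print(np.isclose(off, np.pi), np.isclose(scl, np.pi))            # True True
print(np.allclose(pu.mapdomain([-1, 1], (-1, 1), (0, 2 * np.pi)),
                  [0, 2 * np.pi]))                               # True
```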
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
8c86a0a879a9f6d8bc9b225e95512fd7f2fca964
|
Cryptic SystemError when creating array with weird structured-but-empty dtype
Trying to create an array with a weird structured but empty dtype results in a `SystemError`:
```
In [214]: array([(), (), (), (), ()], dtype={'names':[], 'formats':[], 'offsets':[], 'itemsize':12})
---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
<ipython-input-214-1b509554769d> in <module>()
----> 1 array([(), (), (), (), ()], dtype={'names':[], 'formats':[], 'offsets':[], 'itemsize':12})
SystemError: error return without exception set
```
This should be caught somewhere and a suitable exception should be raised instead of `SystemError`.
|
2015-03-06T04:43:30Z
|
<patch>
diff --git a/numpy/core/_internal.py b/numpy/core/_internal.py
--- a/numpy/core/_internal.py
+++ b/numpy/core/_internal.py
@@ -10,7 +10,10 @@
import sys
import warnings
-from numpy.compat import asbytes, bytes
+from numpy.compat import asbytes, bytes, basestring
+from .multiarray import dtype, array, ndarray
+import ctypes
+from .numerictypes import object_
if (sys.byteorder == 'little'):
_nbo = asbytes('<')
@@ -18,7 +21,6 @@
_nbo = asbytes('>')
def _makenames_list(adict, align):
- from .multiarray import dtype
allfields = []
fnames = list(adict.keys())
for fname in fnames:
@@ -52,7 +54,6 @@ def _makenames_list(adict, align):
# a dictionary without "names" and "formats"
# fields is used as a data-type descriptor.
def _usefields(adict, align):
- from .multiarray import dtype
try:
names = adict[-1]
except KeyError:
@@ -130,7 +131,6 @@ def _array_descr(descriptor):
# so don't remove the name here, or you'll
# break backward compatibilty.
def _reconstruct(subtype, shape, dtype):
- from .multiarray import ndarray
return ndarray.__new__(subtype, shape, dtype)
@@ -194,12 +194,10 @@ def _commastring(astr):
return result
def _getintp_ctype():
- from .multiarray import dtype
val = _getintp_ctype.cache
if val is not None:
return val
char = dtype('p').char
- import ctypes
if (char == 'i'):
val = ctypes.c_int
elif char == 'l':
@@ -224,7 +222,6 @@ def c_void_p(self, num):
class _ctypes(object):
def __init__(self, array, ptr=None):
try:
- import ctypes
self._ctypes = ctypes
except ImportError:
self._ctypes = _missing_ctypes()
@@ -287,23 +284,55 @@ def _newnames(datatype, order):
return tuple(list(order) + nameslist)
raise ValueError("unsupported order value: %s" % (order,))
-# Given an array with fields and a sequence of field names
-# construct a new array with just those fields copied over
-def _index_fields(ary, fields):
- from .multiarray import empty, dtype, array
+def _index_fields(ary, names):
+ """ Given a structured array and a sequence of field names
+ construct new array with just those fields.
+
+ Parameters
+ ----------
+ ary : ndarray
+ Structured array being subscripted
+ names : string or list of strings
+ Either a single field name, or a list of field names
+
+ Returns
+ -------
+ sub_ary : ndarray
+ If `names` is a single field name, the return value is identical to
+ ary.getfield, a writeable view into `ary`. If `names` is a list of
+ field names the return value is a copy of `ary` containing only those
+ fields. This is planned to return a view in the future.
+
+ Raises
+ ------
+ ValueError
+ If `ary` does not contain a field given in `names`.
+
+ """
dt = ary.dtype
- names = [name for name in fields if name in dt.names]
- formats = [dt.fields[name][0] for name in fields if name in dt.names]
- offsets = [dt.fields[name][1] for name in fields if name in dt.names]
+ #use getfield to index a single field
+ if isinstance(names, basestring):
+ try:
+ return ary.getfield(dt.fields[names][0], dt.fields[names][1])
+ except KeyError:
+ raise ValueError("no field of name %s" % names)
+
+ for name in names:
+ if name not in dt.fields:
+ raise ValueError("no field of name %s" % name)
- view_dtype = {'names':names, 'formats':formats, 'offsets':offsets, 'itemsize':dt.itemsize}
- view = ary.view(dtype=view_dtype)
+ formats = [dt.fields[name][0] for name in names]
+ offsets = [dt.fields[name][1] for name in names]
+
+ view_dtype = {'names': names, 'formats': formats,
+ 'offsets': offsets, 'itemsize': dt.itemsize}
+
+ # return copy for now (future plan to return ary.view(dtype=view_dtype))
+ copy_dtype = {'names': view_dtype['names'],
+ 'formats': view_dtype['formats']}
+ return array(ary.view(dtype=view_dtype), dtype=copy_dtype, copy=True)
- # Return a copy for now until behavior is fully deprecated
- # in favor of returning view
- copy_dtype = {'names':view_dtype['names'], 'formats':view_dtype['formats']}
- return array(view, dtype=copy_dtype, copy=True)
def _get_all_field_offsets(dtype, base_offset=0):
""" Returns the types and offsets of all fields in a (possibly structured)
@@ -363,8 +392,6 @@ def _check_field_overlap(new_fields, old_fields):
If the new fields are incompatible with the old fields
"""
- from .numerictypes import object_
- from .multiarray import dtype
#first go byte by byte and check we do not access bytes not in old_fields
new_bytes = set()
@@ -527,8 +554,6 @@ def _view_is_safe(oldtype, newtype):
_pep3118_standard_typechars = ''.join(_pep3118_standard_map.keys())
def _dtype_from_pep3118(spec, byteorder='@', is_subdtype=False):
- from numpy.core.multiarray import dtype
-
fields = {}
offset = 0
explicit_name = False
@@ -694,8 +719,6 @@ def get_dummy_name():
def _add_trailing_padding(value, padding):
"""Inject the specified number of padding bytes at the end of a dtype"""
- from numpy.core.multiarray import dtype
-
if value.fields is None:
vfields = {'f0': (value, 0)}
else:
</patch>
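As a hedged illustration of the behaviour described in the patched `_index_fields` docstring (a single field name yields a writeable view, a list of names currently yields a copy), a small check along these lines could be run; the multi-field case is version-dependent, so this is an assumption rather than a guarantee:

```python
import numpy as np

a = np.zeros(3, dtype=[('x', 'f8'), ('y', 'i4'), ('z', 'f4')])

# Single field name: per the docstring above, a writeable view into `a`.
a['x'][:] = 1.0
print(a['x'])           # [ 1.  1.  1.]

# List of field names: per the docstring above, currently a copy of `a`
# restricted to those fields (returning a view is planned for the future).
sub = a[['x', 'z']]
print(sub.dtype.names)  # ('x', 'z')
```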
|
[]
|
[]
| ||||
Qiskit__qiskit-1141
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add tree search swap mapper
### What is the expected behavior?
Add a new tree-search-based swap mapper as a possible pass in the transpiler. The algorithm was submitted to the 2018 Qiskit Developer Challenge and is described at https://medium.com/qiskit/improving-a-quantum-compiler-48410d7a7084 .
</issue>
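The following is not the algorithm from the linked post, nor Qiskit's actual pass API; it is only a minimal, self-contained sketch of the general idea behind a swap mapper (evaluate candidate SWAPs on coupling-map edges and keep the one that most reduces the total distance of pending two-qubit gates; a tree search would explore several such steps ahead instead of one). The helper names `coupling_dist`, `cost` and `greedy_swap_step` are hypothetical.

```python
def coupling_dist(coupling_edges, n_qubits):
    """All-pairs shortest-path distances on the coupling graph (Floyd-Warshall)."""
    inf = float('inf')
    d = [[0 if i == j else inf for j in range(n_qubits)] for i in range(n_qubits)]
    for a, b in coupling_edges:
        d[a][b] = d[b][a] = 1
    for k in range(n_qubits):
        for i in range(n_qubits):
            for j in range(n_qubits):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def cost(layout, gates, dist):
    """Sum of distances between the physical qubits of each pending 2-qubit gate."""
    return sum(dist[layout[q0]][layout[q1]] for q0, q1 in gates)

def greedy_swap_step(layout, gates, coupling_edges, dist):
    """Try a SWAP on every coupling edge and return the lowest-cost layout.

    A depth-limited tree search would recurse here (explore sequences of SWAPs)
    instead of committing to a single step.
    """
    best_layout, best_cost, best_swap = layout, cost(layout, gates, dist), None
    inv = {p: q for q, p in layout.items()}  # physical -> logical
    for a, b in coupling_edges:
        trial = dict(layout)
        qa, qb = inv.get(a), inv.get(b)
        if qa is not None:
            trial[qa] = b
        if qb is not None:
            trial[qb] = a
        c = cost(trial, gates, dist)
        if c < best_cost:
            best_layout, best_cost, best_swap = trial, c, (a, b)
    return best_layout, best_swap

# Example: a 3-qubit line 0-1-2, trivial layout, one pending CX between q0 and q2.
edges = [(0, 1), (1, 2)]
dist = coupling_dist(edges, 3)
new_layout, swap = greedy_swap_step({0: 0, 1: 1, 2: 2}, [(0, 2)], edges, dist)
print(swap, new_layout)   # (0, 1) {0: 1, 1: 0, 2: 2}
```

A real pass would also insert the corresponding SWAP gates into the circuit and respect gate dependencies; the sketch only tracks the layout.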
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/Qiskit/qiskit-terra)
5 [](https://travis-ci.org/Qiskit/qiskit-terra)
6
7 **Qiskit** is a software development kit for
8 developing quantum computing applications and working with NISQ (Noisy-Intermediate Scale Quantum) computers.
9
10 Qiskit is made up of elements that each work together to enable quantum computing. This element is **Terra**
11 and is the foundation on which the rest of Qiskit is built (see this [post](https://medium.com/qiskit/qiskit-and-its-fundamental-elements-bcd7ead80492) for an overview).
12
13
14 ## Installation
15
16
17 We encourage installing Qiskit via the PIP tool (a python package manager):
18
19 ```bash
20 pip install qiskit
21 ```
22
23 PIP will handle all dependencies automatically, and you will always install the latest (and well-tested) version.
24
25 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In
26 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
27 for interacting with the tutorials.
28 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
29 python distribution, as it comes with all of these dependencies pre-installed.
30
31 See [installing](doc/install.rst) Qiskit for detailed instructions, how to build from source and using environments.
32
33
34 ## Creating your first quantum program
35
36 Now that Qiskit is installed, it's time to begin working with Terra.
37
38 We are ready to try out a quantum circuit example, which is simulated locally using
39 the Qiskit Aer element. This is a simple example that makes an entangled state.
40
41 ```
42 $ python
43 ```
44
45 ```python
46 >>> from qiskit import *
47 >>> q = QuantumRegister(2)
48 >>> c = ClassicalRegister(2)
49 >>> qc = QuantumCircuit(q, c)
50 >>> qc.h(q[0])
51 >>> qc.cx(q[0], q[1])
52 >>> qc.measure(q, c)
53 >>> backend_sim = Aer.get_backend('qasm_simulator')
54 >>> result = execute(qc, backend_sim).result()
55 >>> print(result.get_counts(qc))
56 ```
57
58 In this case, the output will be:
59
60 ```python
61 {'counts': {'00': 513, '11': 511}}
62 ```
63
64 A script is available [here](examples/python/hello_quantum.py), where we also show how to
65 run the same program on a real quantum computer via IBMQ.
66
67 ### Executing your code on a real quantum chip
68
69 You can also use Qiskit to execute your code on a
70 **real quantum chip**.
71 In order to do so, you need to configure Qiskit for using the credentials in
72 your IBM Q account:
73
74 #### Configure your IBMQ credentials
75
76 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
77
78 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_.
79
80 3. Take your token from step 2, here called `MY_API_TOKEN`, and run:
81
82 ```python
83 >>> from qiskit import IBMQ
84 >>> IBMQ.save_account('MY_API_TOKEN')
85 ```
86
87 4. If you have access to the IBM Q Network features, you also need to pass the
88 url listed on your IBM Q account page to `save_account`.
89
90 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
91 Once they are stored, at any point in the future you can load and use them
92 in your program simply via:
93
94 ```python
95 >>> from qiskit import IBMQ
96 >>> IBMQ.load_accounts()
97 ```
98
99 For those who do not want to save their credentials to disk, please use
100
101 ```python
102 >>> from qiskit import IBMQ
103 >>> IBMQ.enable_account('MY_API_TOKEN')
104 ```
105
106 and the token will only be active for the session. For examples using Terra with real
107 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
108 the levels.
109
110 ## Contribution guidelines
111
112 If you'd like to contribute to Qiskit, please take a look at our
113 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expected to uphold this code.
114
115 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs.
116 Please use our [slack](https://qiskit.slack.com) for discussion. To join our Slack community, use this [link](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk). For questions, use [Stack Overflow](https://stackoverflow.com/questions/tagged/qiskit).
117
118
119
120 ### Next Steps
121
122 Now you're set up and ready to check out some of the other examples from our
123 [Qiskit Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository.
124
125
126 ## Authors
127
128 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
129 to the project at different levels.
130
131 ## License
132
133 [Apache License 2.0](LICENSE.txt)
[end of README.md]
[start of doc/conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Qiskit documentation build configuration file, created by
5 # sphinx-quickstart on Tue Jul 25 18:13:28 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 import os
21 import sys
22 from qiskit import __version__
23 sys.path.insert(0, os.path.abspath('.'))
24
25 # Imported manually, as otherwise it will not be fully imported.
26 import qiskit.extensions.simulator
27
28 # -- General configuration ------------------------------------------------
29
30 # If your documentation needs a minimal Sphinx version, state it here.
31 #
32 # needs_sphinx = '1.0'
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = ['sphinx.ext.autodoc',
38 'sphinx.ext.autosummary',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.doctest',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinx.ext.viewcode',
44 'sphinx.ext.githubpages']
45 # Napoleon settings
46 napoleon_google_docstring = True
47 napoleon_numpy_docstring = False
48 napoleon_include_init_with_doc = True
49 napoleon_include_private_with_doc = False
50 napoleon_include_special_with_doc = False
51 napoleon_use_admonition_for_examples = False
52 napoleon_use_admonition_for_notes = False
53 napoleon_use_admonition_for_references = False
54 napoleon_use_ivar = False
55 napoleon_use_param = True
56 napoleon_use_rtype = True
57
58 autoclass_content = 'both'
59
60 # Add any paths that contain templates here, relative to this directory.
61 templates_path = ['_templates']
62
63 # The suffix(es) of source filenames.
64 # You can specify multiple suffix as a list of string:
65 #
66 # source_suffix = ['.rst', '.md']
67 source_suffix = '.rst'
68
69 # The master toctree document.
70 master_doc = 'index'
71
72 # General information about the project.
73 project = 'Qiskit Terra'
74 copyright = '2017-2018 IBM'
75 author = 'IBM'
76
77 # Add description
78 html_context = {
79 'description': 'Qiskit Terra'
80 }
81
82 # The version info for the project you're documenting, acts as replacement for
83 # |version| and |release|, also used in various other places throughout the
84 # built documents.
85 #
86 # The short X.Y version.
87 version = __version__
88 # The full version, including alpha/beta/rc tags.
89 release = version
90
91 # The language for content autogenerated by Sphinx. Refer to documentation
92 # for a list of supported languages.
93 #
94 # This is also used if you do content translation via gettext catalogs.
95 # Usually you set "language" from the command line for these cases.
96 language = None
97
98 # List of patterns, relative to source directory, that match files and
99 # directories to ignore when looking for source files.
100 # This patterns also effect to html_static_path and html_extra_path
101 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store',
102 '_autodoc/modules.rst', 'de', 'ja']
103
104 # The name of the Pygments (syntax highlighting) style to use.
105 pygments_style = 'sphinx'
106
107 # If true, `todo` and `todoList` produce output, else they produce nothing.
108 todo_include_todos = False
109
110
111 # -- Options for HTML output ----------------------------------------------
112
113 # The theme to use for HTML and HTML Help pages. See the documentation for
114 # a list of builtin themes.
115 #
116 # html_theme = 'alabaster'
117 # html_theme = 'bizstyle'
118 # html_theme = agogo
119
120 html_sidebars = {
121 '**': ['globaltoc.html']
122 }
123
124
125 html_theme = 'sphinx_materialdesign_theme' # use the theme in subdir 'theme'
126 html_theme_path = ['./'] # make sphinx search for themes in current dir
127
128
129 # Theme options are theme-specific and customize the look and feel of a theme
130 # further. For a list of options available for each theme, see the
131 # documentation.
132 #
133 html_theme_options = {
134 # Specify a list of menu in Header.
135 # Tuples forms:
136 # ('Name', 'external url or path of pages in the document', boolean, 'icon name')
137 #
138 # Third argument:
139 # True indicates an external link.
140 # False indicates path of pages in the document.
141 #
142 # Fourth argument:
143 # Specify the icon name.
144 # For details see link.
145 # https://material.io/icons/
146 'header_links' : [
147 ('Home', 'index', False, 'home'),
148 ("ExternalLink", "http://example.com", True, 'launch'),
149 ("NoIconLink", "http://example.com", True, ''),
150 ("GitHub", "https://github.com/myyasuda/sphinx_materialdesign_theme", True, 'link')
151 ],
152
153 # Customize css colors.
154 # For details see link.
155 # https://getmdl.io/customize/index.html
156 #
157 # Values: amber, blue, brown, cyan deep_orange, deep_purple, green, grey, indigo, light_blue,
158 # light_green, lime, orange, pink, purple, red, teal, yellow(Default: indigo)
159 'primary_color': 'blue',
160 # Values: Same as primary_color. (Default: pink)
161 'accent_color': 'indigo',
162
163 # Customize layout.
164 # For details see link.
165 # https://getmdl.io/components/index.html#layout-section
166 'fixed_drawer': True,
167 'fixed_header': False,
168 'header_waterfall': True,
169 'header_scroll': False,
170
171 # Render title in header.
172 # Values: True, False (Default: False)
173 'show_header_title': False,
174 # Render title in drawer.
175 # Values: True, False (Default: True)
176 'show_drawer_title': True,
177 # Render footer.
178 }
179 # Add any paths that contain custom static files (such as style sheets) here,
180 # relative to this directory. They are copied after the builtin static files,
181 # so a file named "default.css" will overwrite the builtin "default.css".
182 html_static_path = ['./theme/static/']
183
184 # The name of an image file (relative to this directory) to place at the top
185 # of the sidebar.
186 html_logo = 'theme/static/qiskit-terra-logo.png'
187
188 html_favicon = 'theme/static/favicon.ico'
189
190 html_last_updated_fmt = '%Y/%m/%d'
191
192 # -- Options for HTMLHelp output ------------------------------------------
193
194 # Output file base name for HTML help builder.
195 htmlhelp_basename = 'Qiskitdoc'
196
197
198 # -- Options for LaTeX output ---------------------------------------------
199
200 latex_elements = {
201 # The paper size ('letterpaper' or 'a4paper').
202 #
203 # 'papersize': 'letterpaper',
204
205 # The font size ('10pt', '11pt' or '12pt').
206 #
207 # 'pointsize': '10pt',
208
209 # Additional stuff for the LaTeX preamble.
210 #
211 # 'preamble': '',
212
213 # Latex figure (float) alignment
214 #
215 # 'figure_align': 'htbp',
216 }
217
218 # Grouping the document tree into LaTeX files. List of tuples
219 # (source start file, target name, title,
220 # author, documentclass [howto, manual, or own class]).
221 latex_documents = [
222 (master_doc, 'Qiskit.tex', 'Qiskit Documentation',
223 '''Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta''', 'manual'),
224 ]
225
226
227 # -- Options for manual page output ---------------------------------------
228
229 # One entry per manual page. List of tuples
230 # (source start file, name, description, authors, manual section).
231 man_pages = [
232 (master_doc, 'qiskit', 'Qiskit Documentation',
233 [author], 1)
234 ]
235
236
237 # -- Options for Texinfo output -------------------------------------------
238
239 # Grouping the document tree into Texinfo files. List of tuples
240 # (source start file, target name, title, author,
241 # dir menu entry, description, category)
242 texinfo_documents = [
243 (master_doc, 'Qiskit Terra', 'Qiskit Terra Documentation',
244 author, 'Qiskit', 'One line description of project.',
245 'Miscellaneous'),
246 ]
247
248
249 # Avoid a warning and treat the docstrings of the QasmLexer tokens as verbatim,
250 # as PLY uses docstring as a way to define the patterns the token matches.
251 def remove_module_docstring(app, what, name, obj, options, lines):
252 if name.startswith('qiskit.qasm._qasmlexer.QasmLexer.t_') and lines:
253 lines[0] = u'Token matching: ``%s``' % lines[0]
254
255
256 def setup(app):
257 app.connect('autodoc-process-docstring', remove_module_docstring)
258
[end of doc/conf.py]
[start of qiskit/mapper/_layout.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 A two-ways dict that represent a layout.
10
11 Layout is the relation between virtual (qu)bits and physical (qu)bits.
12 Virtual (qu)bits are tuples (eg, `(QuantumRegister(3, 'qr'),2)`.
13 Physical (qu)bits are numbers.
14 """
15
16 from qiskit import QiskitError
17
18
19 class Layout(dict):
20     """ Two-way dict to represent a Layout."""
21
22 def __init__(self, input_=None):
23 dict.__init__(self)
24 if isinstance(input_, dict):
25 self.from_dict(input_)
26 if isinstance(input_, list):
27 self.from_list(input_)
28
29 def from_dict(self, input_dict):
30 """
31 Populates a Layout from a dictionary.
32
33 Args:
34 input_dict (dict): For example,
35 {(QuantumRegister(3, 'qr'), 0): 0,
36 (QuantumRegister(3, 'qr'), 1): 1,
37 (QuantumRegister(3, 'qr'), 2): 2}
38 """
39 for key, value in input_dict.items():
40 self[key] = value
41
42 def from_list(self, input_list):
43 """
44 Populates a Layout from a list.
45
46 Args:
47 input_list (list): For example,
48 [(QuantumRegister(3, 'qr'), 0), None,
49 (QuantumRegister(3, 'qr'), 2), (QuantumRegister(3, 'qr'), 3)]
50 """
51 for key, value in enumerate(input_list):
52 self[key] = value
53
54 def __getitem__(self, item):
55 if isinstance(item, int) and item < len(self) and item not in self:
56 return None
57 return dict.__getitem__(self, item)
58
59 def __setitem__(self, key, value):
60 if key in self:
61 del self[key]
62 if value in self:
63 del self[value]
64 if key is not None:
65 dict.__setitem__(self, key, value)
66 if value is not None:
67 dict.__setitem__(self, value, key)
68
69 def __delitem__(self, key):
70 dict.__delitem__(self, self[key])
71 dict.__delitem__(self, key)
72
73 def __len__(self):
74 return max([key for key in self.keys() if isinstance(key, int)], default=-1) + 1
75
76 def add(self, virtual_bit, physical_bit=None):
77 """
78 Adds a map element between `bit` and `physical_bit`. If `physical_bit` is not
79 defined, `bit` will be mapped to a new physical bit (extending the length of the
80 layout by one.)
81 Args:
82 virtual_bit (tuple): A (qu)bit. For example, (QuantumRegister(3, 'qr'),2).
83 physical_bit (int): A physical bit. For example, 3.
84 """
85 if physical_bit is None:
86 physical_bit = len(self)
87 self[virtual_bit] = physical_bit
88
89 def add_register(self, reg):
90 """
91         Adds, at the end of the layout, a new physical bit mapped to each bit in reg.
92 Args:
93 reg (Register): A (qu)bit Register. For example, QuantumRegister(3, 'qr').
94 """
95 for bit in reg:
96 self.add(bit)
97
98 def set_length(self, amount_of_physical_bits):
99 """
100 Extends the layout length to `amount_of_physical_bits`.
101 Args:
102 amount_of_physical_bits (int): The amount of physical_qubits to
103 set in the layout.
104 Raises:
105             LayoutError: If amount_of_physical_bits is used to reduce the
106 length instead of extending it.
107 """
108 current_length = len(self)
109 if amount_of_physical_bits < current_length:
110             raise LayoutError('Length setting cannot be smaller than the current amount of physical'
111 ' (qu)bits.')
112 for new_physical_bit in range(current_length, amount_of_physical_bits):
113 self[new_physical_bit] = None
114
115 def idle_physical_bits(self):
116 """
117 Returns a list of physical (qu)bits that are not mapped to a virtual (qu)bit.
118 """
119 idle_physical_bit_list = []
120 for physical_bit in range(self.__len__()):
121 if self[physical_bit] is None:
122 idle_physical_bit_list.append(physical_bit)
123 return idle_physical_bit_list
124
125 def get_virtual_bits(self):
126 """
127 Returns the dictionary where the keys are virtual (qu)bits and the
128 values are physical (qu)bits.
129 """
130 return {key: value for key, value in self.items() if isinstance(key, tuple)}
131
132 def get_physical_bits(self):
133 """
134 Returns the dictionary where the keys are physical (qu)bits and the
135 values are virtual (qu)bits.
136 """
137 return {key: value for key, value in self.items() if isinstance(key, int)}
138
139 def swap(self, left, right):
140 """ Swaps the map between left and right.
141 Args:
142 left (tuple or int): Item to swap with right.
143 right (tuple or int): Item to swap with left.
144 Raises:
145             LayoutError: If left and right are not of the same type.
146 """
147 if type(left) is not type(right):
148 raise LayoutError('The method swap only works with elements of the same type.')
149 temp = self[left]
150 self[left] = self[right]
151 self[right] = temp
152
153 def combine_into_edge_map(self, another_layout):
154 """ Combines self and another_layout into an "edge map". For example
155
156 self another_layout resulting edge map
157 qr_1 -> 0 0 <- q_2 qr_1 -> q_2
158 qr_2 -> 2 2 <- q_1 qr_2 -> q_1
159 qr_3 -> 3 3 <- q_0 qr_3 -> q_0
160
161 The edge map is used to compose dags via, for example, compose_back.
162
163 Args:
164 another_layout (Layout): The other layout to combine.
165 Returns:
166             Dict: An "edge map".
167         Raises:
168             LayoutError: raised if another_layout is smaller than self (it can be bigger, but not smaller).
169 """
170 edge_map = dict()
171
172 for virtual, physical in self.get_virtual_bits().items():
173 if physical not in another_layout:
174 raise LayoutError('The wire_map_from_layouts() method does not support when the'
175 ' other layout (another_layout) is smaller.')
176 edge_map[virtual] = another_layout[physical]
177
178 return edge_map
179
180
181 class LayoutError(QiskitError):
182 """Errors raised by the layout object."""
183
184 def __init__(self, *msg):
185 """Set the error message."""
186 super().__init__(*msg)
187 self.msg = ' '.join(msg)
188
189 def __str__(self):
190 """Return the message."""
191 return repr(self.msg)
192
[end of qiskit/mapper/_layout.py]
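The Layout class listed above acts as a two-way dictionary between virtual (qu)bits and physical (qu)bits. A minimal usage sketch, illustrative only and assuming the qiskit package from this code base is importable:

from qiskit import QuantumRegister
from qiskit.mapper import Layout

qr = QuantumRegister(3, 'qr')
layout = Layout({(qr, 0): 0, (qr, 1): 2, (qr, 2): 1})

# __setitem__ stores both directions, so lookups work either way.
assert layout[(qr, 0)] == 0
assert layout[2] == (qr, 1)

# swap() exchanges two entries of the same type (here, two physical bits).
layout.swap(0, 2)
assert layout[(qr, 0)] == 2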
[start of qiskit/tools/compiler.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Helper module for simplified Qiskit usage."""
9 import warnings
10 import logging
11
12 from qiskit import transpiler
13 from qiskit.transpiler._passmanager import PassManager
14 from qiskit.converters import circuits_to_qobj
15 from qiskit import QiskitError
16
17 logger = logging.getLogger(__name__)
18
19
20 # pylint: disable=redefined-builtin
21 def compile(circuits, backend,
22 config=None, basis_gates=None, coupling_map=None, initial_layout=None,
23 shots=1024, max_credits=10, seed=None, qobj_id=None,
24 skip_transpiler=False, seed_mapper=None, pass_manager=None, memory=False):
25 """Compile a list of circuits into a qobj.
26
27 Args:
28 circuits (QuantumCircuit or list[QuantumCircuit]): circuits to compile
29 backend (BaseBackend): a backend to compile for
30 config (dict): dictionary of parameters (e.g. noise) used by runner
31 basis_gates (str): comma-separated basis gate set to compile to
32 coupling_map (list): coupling map (perhaps custom) to target in mapping
33 initial_layout (list): initial layout of qubits in mapping
34 shots (int): number of repetitions of each circuit, for sampling
35 max_credits (int): maximum credits to use
36 seed (int): random seed for simulators
37 seed_mapper (int): random seed for swapper mapper
38 qobj_id (int): identifier for the generated qobj
39         pass_manager (PassManager): a pass manager for the transpiler pipeline
40 memory (bool): if True, per-shot measurement bitstrings are returned as well
41 skip_transpiler (bool): DEPRECATED skip transpiler and create qobj directly
42
43 Returns:
44 Qobj: the qobj to be run on the backends
45
46 Raises:
47 QiskitError: if the desired options are not supported by backend
48 """
49 if skip_transpiler: # empty pass manager which does nothing
50 pass_manager = PassManager()
51 warnings.warn('The skip_transpiler option has been deprecated. '
52 'Please pass an empty PassManager() instance instead',
53 DeprecationWarning)
54
55 backend_memory = getattr(backend.configuration(), 'memory', False)
56 if memory and not backend_memory:
57 raise QiskitError("Backend %s only returns total counts, not single-shot memory." %
58 backend.name())
59
60 circuits = transpiler.transpile(circuits, backend, basis_gates, coupling_map, initial_layout,
61 seed_mapper, pass_manager)
62
63 # step 4: Making a qobj
64 qobj = circuits_to_qobj(circuits, backend_name=backend.name(),
65 config=config, shots=shots, max_credits=max_credits,
66 qobj_id=qobj_id, basis_gates=basis_gates,
67 coupling_map=coupling_map, seed=seed, memory=memory)
68
69 return qobj
70
71
72 def execute(circuits, backend, config=None, basis_gates=None, coupling_map=None,
73 initial_layout=None, shots=1024, max_credits=10, seed=None,
74 qobj_id=None, skip_transpiler=False, seed_mapper=None, pass_manager=None,
75 memory=False, **kwargs):
76 """Executes a set of circuits.
77
78 Args:
79 circuits (QuantumCircuit or list[QuantumCircuit]): circuits to execute
80 backend (BaseBackend): a backend to execute the circuits on
81 config (dict): dictionary of parameters (e.g. noise) used by runner
82 basis_gates (str): comma-separated basis gate set to compile to
83 coupling_map (list): coupling map (perhaps custom) to target in mapping
84 initial_layout (list): initial layout of qubits in mapping
85 shots (int): number of repetitions of each circuit, for sampling
86 max_credits (int): maximum credits to use
87 seed (int): random seed for simulators
88 seed_mapper (int): random seed for swapper mapper
89 qobj_id (int): identifier for the generated qobj
90         pass_manager (PassManager): a pass manager for the transpiler pipeline
91 memory (bool): if True, per-shot measurement bitstrings are returned as well.
92 skip_transpiler (bool): DEPRECATED skip transpiler and create qobj directly
93 kwargs: extra arguments used by AER for running configurable backends.
94 Refer to the backend documentation for details on these arguments
95
96 Returns:
97 BaseJob: returns job instance derived from BaseJob
98 """
99 if skip_transpiler: # empty pass manager which does nothing
100 pass_manager = PassManager()
101 warnings.warn('The skip_transpiler option has been deprecated. '
102 'Please pass an empty PassManager() instance instead',
103 DeprecationWarning)
104
105 qobj = compile(circuits, backend,
106 config, basis_gates, coupling_map, initial_layout,
107 shots, max_credits, seed, qobj_id,
108 skip_transpiler, seed_mapper, pass_manager, memory)
109
110 return backend.run(qobj, **kwargs)
111
[end of qiskit/tools/compiler.py]
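A hedged usage sketch of the compile()/execute() helpers above. The circuit construction relies on the top-level qiskit imports available in this code base; `backend` is a placeholder for a BaseBackend instance obtained from a provider (not shown), so the actual calls are left commented:

from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.tools.compiler import compile, execute

qr = QuantumRegister(2, 'qr')
cr = ClassicalRegister(2, 'cr')
circ = QuantumCircuit(qr, cr)
circ.h(qr[0])
circ.cx(qr[0], qr[1])
circ.measure(qr[0], cr[0])
circ.measure(qr[1], cr[1])

# qobj = compile(circ, backend, shots=1024)  # build a qobj for the backend
# job = execute(circ, backend, shots=1024)   # compile and run in one call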
[start of qiskit/transpiler/_transpiler.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Tools for compiling a batch of quantum circuits."""
9 import logging
10 import warnings
11 import numpy as np
12 import scipy.sparse as sp
13 import scipy.sparse.csgraph as cs
14
15 from qiskit.qiskiterror import QiskitError
16 from qiskit.circuit import QuantumCircuit
17 from qiskit.circuit import QuantumRegister
18 from qiskit.mapper import (Coupling, optimize_1q_gates, swap_mapper,
19 cx_cancellation, direction_mapper,
20 remove_last_measurements, return_last_measurements)
21 from qiskit.converters import circuit_to_dag
22 from qiskit.converters import dag_to_circuit
23 from ._parallel import parallel_map
24 from .passes.mapping.unroller import Unroller
25
26
27 logger = logging.getLogger(__name__)
28
29
30 def transpile(circuits, backend, basis_gates=None, coupling_map=None, initial_layout=None,
31 seed_mapper=None, pass_manager=None):
32 """transpile one or more circuits.
33
34 Args:
35 circuits (QuantumCircuit or list[QuantumCircuit]): circuits to compile
36 backend (BaseBackend): a backend to compile for
37 basis_gates (str): comma-separated basis gate set to compile to
38 coupling_map (list): coupling map (perhaps custom) to target in mapping
39 initial_layout (list): initial layout of qubits in mapping
40 seed_mapper (int): random seed for the swap_mapper
41 pass_manager (PassManager): a pass_manager for the transpiler stage
42
43 Returns:
44 QuantumCircuit or list[QuantumCircuit]: transpiled circuit(s).
45 """
46 return_form_is_single = False
47 if isinstance(circuits, QuantumCircuit):
48 circuits = [circuits]
49 return_form_is_single = True
50
51 # FIXME: THIS NEEDS TO BE CLEANED UP -- some things to decide for list of circuits:
52 # 1. do all circuits have same coupling map?
53     # 2. do all circuits have the same basis set?
54 # 3. do they all have same registers etc?
55 # Check for valid parameters for the experiments.
56 basis_gates = basis_gates or ','.join(backend.configuration().basis_gates)
57 coupling_map = coupling_map or getattr(backend.configuration(),
58 'coupling_map', None)
59
60 circuits = parallel_map(_transpilation, circuits,
61 task_args=(backend,),
62 task_kwargs={'basis_gates': basis_gates,
63 'coupling_map': coupling_map,
64 'initial_layout': initial_layout,
65 'seed_mapper': seed_mapper,
66 'pass_manager': pass_manager})
67 if return_form_is_single:
68 return circuits[0]
69 return circuits
70
71
72 def _transpilation(circuit, backend, basis_gates=None, coupling_map=None,
73 initial_layout=None, seed_mapper=None,
74 pass_manager=None):
75 """Perform transpilation of a single circuit.
76
77 Args:
78 circuit (QuantumCircuit): A circuit to transpile.
79 backend (BaseBackend): a backend to compile for
80 basis_gates (str): comma-separated basis gate set to compile to
81 coupling_map (list): coupling map (perhaps custom) to target in mapping
82 initial_layout (list): initial layout of qubits in mapping
83 seed_mapper (int): random seed for the swap_mapper
84 pass_manager (PassManager): a pass_manager for the transpiler stage
85
86 Returns:
87 QuantumCircuit: A transpiled circuit.
88
89 """
90 dag = circuit_to_dag(circuit)
91 if (initial_layout is None and not backend.configuration().simulator
92 and not _matches_coupling_map(dag, coupling_map)):
93 initial_layout = _pick_best_layout(dag, backend)
94
95 final_dag, final_layout = transpile_dag(dag, basis_gates=basis_gates,
96 coupling_map=coupling_map,
97 initial_layout=initial_layout,
98 get_layout=True, format='dag',
99 seed_mapper=seed_mapper,
100 pass_manager=pass_manager)
101 final_dag.layout = [[k, v]
102 for k, v in final_layout.items()] if final_layout else None
103
104 out_circuit = dag_to_circuit(final_dag)
105
106 return out_circuit
107
108
109 # pylint: disable=redefined-builtin
110 def transpile_dag(dag, basis_gates='u1,u2,u3,cx,id', coupling_map=None,
111 initial_layout=None, get_layout=False,
112 format='dag', seed_mapper=None, pass_manager=None):
113 """Transform a dag circuit into another dag circuit (transpile), through
114 consecutive passes on the dag.
115
116 Args:
117 dag (DAGCircuit): dag circuit to transform via transpilation
118 basis_gates (str): a comma separated string for the target basis gates
119 coupling_map (list): A graph of coupling::
120
121 [
122 [control0(int), target0(int)],
123 [control1(int), target1(int)],
124 ]
125
126             e.g. [[0, 2], [1, 2], [1, 3], [3, 4]]
127
128 initial_layout (dict): A mapping of qubit to qubit::
129
130 {
131 ("q", start(int)): ("q", final(int)),
132 ...
133 }
134 eg.
135 {
136 ("q", 0): ("q", 0),
137 ("q", 1): ("q", 1),
138 ("q", 2): ("q", 2),
139 ("q", 3): ("q", 3)
140 }
141 get_layout (bool): flag for returning the final layout after mapping
142 format (str): DEPRECATED The target format of the compilation: {'dag', 'json', 'qasm'}
143 seed_mapper (int): random seed_mapper for the swap mapper
144 pass_manager (PassManager): pass manager instance for the transpilation process
145             If None, a default set of passes is run.
146             Otherwise, the passes defined in it will run.
147             If it contains no passes, no dag transformations occur.
148
149 Returns:
150 DAGCircuit: transformed dag
151 DAGCircuit, dict: transformed dag along with the final layout on backend qubits
152 """
153 # TODO: `basis_gates` will be removed after we have the unroller pass.
154 # TODO: `coupling_map`, `initial_layout`, `get_layout`, `seed_mapper` removed after mapper pass.
155
156 # TODO: move this to the mapper pass
157 num_qubits = sum([qreg.size for qreg in dag.qregs.values()])
158 if num_qubits == 1 or coupling_map == "all-to-all":
159 coupling_map = None
160
161 final_layout = None
162
163 if pass_manager:
164 # run the passes specified by the pass manager
165 # TODO return the property set too. See #1086
166 dag = pass_manager.run_passes(dag)
167 else:
168 # default set of passes
169 # TODO: move each step here to a pass, and use a default passmanager below
170 basis = basis_gates.split(',') if basis_gates else []
171 dag = Unroller(basis).run(dag)
172 # if a coupling map is given compile to the map
173 if coupling_map:
174 logger.info("pre-mapping properties: %s",
175 dag.properties())
176 # Insert swap gates
177 coupling = Coupling(Coupling.coupling_list2dict(coupling_map))
178 removed_meas = remove_last_measurements(dag)
179 logger.info("measurements moved: %s", removed_meas)
180 logger.info("initial layout: %s", initial_layout)
181 dag, final_layout, last_layout = swap_mapper(
182 dag, coupling, initial_layout, trials=20, seed=seed_mapper)
183 logger.info("final layout: %s", final_layout)
184 # Expand swaps
185 dag = Unroller(basis).run(dag)
186 # Change cx directions
187 dag = direction_mapper(dag, coupling)
188 # Simplify cx gates
189 cx_cancellation(dag)
190 # Simplify single qubit gates
191 dag = optimize_1q_gates(dag)
192 return_last_measurements(dag, removed_meas,
193 last_layout)
194 logger.info("post-mapping properties: %s",
195 dag.properties())
196
197 if format != 'dag':
198 warnings.warn("transpiler no longer supports different formats. "
199 "only dag to dag transformations are supported.",
200 DeprecationWarning)
201
202 if get_layout:
203 return dag, final_layout
204 return dag
205
206
207 def _best_subset(backend, n_qubits):
208 """Computes the qubit mapping with the best
209 connectivity.
210
211 Parameters:
212 backend (BaseBackend): A Qiskit backend instance.
213 n_qubits (int): Number of subset qubits to consider.
214
215 Returns:
216 ndarray: Array of qubits to use for best
217 connectivity mapping.
218
219 Raises:
220 QiskitError: Wrong number of qubits given.
221 """
222 if n_qubits == 1:
223 return np.array([0])
224 elif n_qubits <= 0:
225 raise QiskitError('Number of qubits <= 0.')
226
227 device_qubits = backend.configuration().n_qubits
228 if n_qubits > device_qubits:
229 raise QiskitError('Number of qubits greater than device.')
230
231 cmap = np.asarray(getattr(backend.configuration(), 'coupling_map', None))
232 data = np.ones_like(cmap[:, 0])
233 sp_cmap = sp.coo_matrix((data, (cmap[:, 0], cmap[:, 1])),
234 shape=(device_qubits, device_qubits)).tocsr()
235 best = 0
236 best_map = None
237 # do bfs with each node as starting point
238 for k in range(sp_cmap.shape[0]):
239 bfs = cs.breadth_first_order(sp_cmap, i_start=k, directed=False,
240 return_predecessors=False)
241
242 connection_count = 0
243 for i in range(n_qubits):
244 node_idx = bfs[i]
245 for j in range(sp_cmap.indptr[node_idx],
246 sp_cmap.indptr[node_idx + 1]):
247 node = sp_cmap.indices[j]
248 for counter in range(n_qubits):
249 if node == bfs[counter]:
250 connection_count += 1
251 break
252
253 if connection_count > best:
254 best = connection_count
255 best_map = bfs[0:n_qubits]
256 return best_map
257
258
259 def _matches_coupling_map(dag, coupling_map):
260 """Iterate over circuit gates to check if all multi-qubit couplings
261 match the qubit coupling graph in the backend.
262
263 Parameters:
264 dag (DAGCircuit): DAG representation of circuit.
265 coupling_map (list): Backend coupling map, represented as an adjacency list.
266
267 Returns:
268 bool: True if all gates readily fit the backend coupling graph.
269 False if there's at least one gate that uses multiple qubits
270 which does not match the backend couplings.
271 """
272 match = True
273 for _, data in dag.multi_graph.nodes(data=True):
274 if data['type'] == 'op':
275 gate_map = [qr[1] for qr in data['qargs']]
276 if len(gate_map) > 1:
277 if gate_map not in coupling_map:
278 match = False
279 break
280 return match
281
282
283 def _pick_best_layout(dag, backend):
284 """Pick a convenient layout depending on the best matching qubit connectivity
285
286 Parameters:
287 dag (DAGCircuit): DAG representation of circuit.
288 backend (BaseBackend) : The backend with the coupling_map for searching
289
290 Returns:
291 dict: A special ordered initial_layout
292 """
293 num_qubits = sum([qreg.size for qreg in dag.qregs.values()])
294 best_sub = _best_subset(backend, num_qubits)
295 layout = {}
296 map_iter = 0
297 device_qubits = backend.configuration().n_qubits
298 q = QuantumRegister(device_qubits, 'q')
299 for qreg in dag.qregs.values():
300 for i in range(qreg.size):
301 layout[(qreg.name, i)] = (q, int(best_sub[map_iter]))
302 map_iter += 1
303 return layout
304
[end of qiskit/transpiler/_transpiler.py]
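For reference, the argument shapes documented in transpile_dag() above, written out as Python literals; the call itself is left commented because it needs a DAGCircuit (here the placeholder `dag`):

# Adjacency list of allowed directed cx pairs, as documented for `coupling_map`.
coupling_map = [[0, 2], [1, 2], [1, 3], [3, 4]]

# Virtual-to-physical qubit mapping, as documented for `initial_layout`.
initial_layout = {("q", 0): ("q", 0),
                  ("q", 1): ("q", 1),
                  ("q", 2): ("q", 2),
                  ("q", 3): ("q", 3)}

# mapped_dag, final_layout = transpile_dag(dag,
#                                          basis_gates='u1,u2,u3,cx,id',
#                                          coupling_map=coupling_map,
#                                          initial_layout=initial_layout,
#                                          get_layout=True)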
[start of qiskit/transpiler/_transpilererror.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Exception for errors raised by the transpiler.
10 """
11 from qiskit import QiskitError
12
13
14 class TranspilerError(QiskitError):
15 """Exceptions raised during transpilation"""
16
17
18 class TranspilerAccessError(QiskitError):
19 """ Exception of access error in the transpiler passes. """
20
21
22 class MapperError(QiskitError):
23 """ Exception for cases where a mapper pass cannot map. """
24
[end of qiskit/transpiler/_transpilererror.py]
[start of qiskit/transpiler/passes/mapping/basic_mapper.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 A pass implementing a basic mapper.
10
11 The basic mapper makes a minimum-effort insertion of swap gates to map the DAG onto a coupling map.
12 When a cx is not allowed by the coupling map, it inserts one or more swaps in front of it to make it
13 compatible.
14 """
15
16 from copy import copy
17
18 from qiskit.transpiler._basepasses import TransformationPass
19 from qiskit.dagcircuit import DAGCircuit
20 from qiskit.mapper import Layout
21 from qiskit.extensions.standard import SwapGate
22
23
24 class BasicMapper(TransformationPass):
25 """
26 Maps (with minimum effort) a DAGCircuit onto a `coupling_map` adding swap gates.
27 """
28
29 def __init__(self,
30 coupling_map,
31 initial_layout=None):
32 """
33 Maps a DAGCircuit onto a `coupling_map` using swap gates.
34 Args:
35             coupling_map (Coupling): Directed graph representing a coupling map.
36 initial_layout (Layout): initial layout of qubits in mapping
37 """
38 super().__init__()
39 self.coupling_map = coupling_map
40 self.initial_layout = initial_layout
41 self.swap_gate = SwapGate
42
43 def run(self, dag):
44 """
45 Runs the BasicMapper pass on `dag`.
46 Args:
47 dag (DAGCircuit): DAG to map.
48
49 Returns:
50 DAGCircuit: A mapped DAG.
51 """
52 new_dag = DAGCircuit()
53
54 if self.initial_layout is None:
55 # create a one-to-one layout
56 self.initial_layout = Layout()
57 physical_qubit = 0
58 for qreg in dag.qregs.values():
59 for index in range(qreg.size):
60 self.initial_layout[(qreg, index)] = physical_qubit
61 physical_qubit += 1
62 current_layout = copy(self.initial_layout)
63
64 for layer in dag.serial_layers():
65 subdag = layer['graph']
66
67 for a_cx in subdag.get_cnot_nodes():
68 physical_q0 = current_layout[a_cx['qargs'][0]]
69 physical_q1 = current_layout[a_cx['qargs'][1]]
70 if self.coupling_map.distance(physical_q0, physical_q1) != 1:
71 # Insert a new layer with the SWAP(s).
72 swap_layer = DAGCircuit()
73
74 path = self.coupling_map.shortest_undirected_path(physical_q0, physical_q1)
75 for swap in range(len(path) - 2):
76 connected_wire_1 = path[swap]
77 connected_wire_2 = path[swap + 1]
78
79 qubit_1 = current_layout[connected_wire_1]
80 qubit_2 = current_layout[connected_wire_2]
81
82 # create the involved registers
83 if qubit_1[0] not in swap_layer.qregs.values():
84 swap_layer.add_qreg(qubit_1[0])
85 if qubit_2[0] not in swap_layer.qregs.values():
86 swap_layer.add_qreg(qubit_2[0])
87
88 # create the swap operation
89 swap_layer.add_basis_element('swap', 2, 0, 0)
90 swap_layer.apply_operation_back(self.swap_gate(qubit_1, qubit_2),
91 qargs=[qubit_1, qubit_2])
92
93 # layer insertion
94 edge_map = current_layout.combine_into_edge_map(self.initial_layout)
95 new_dag.compose_back(swap_layer, edge_map)
96
97 # update current_layout
98 for swap in range(len(path) - 2):
99 current_layout.swap(path[swap], path[swap + 1])
100
101 edge_map = current_layout.combine_into_edge_map(self.initial_layout)
102 new_dag.extend_back(subdag, edge_map)
103
104 return new_dag
105
[end of qiskit/transpiler/passes/mapping/basic_mapper.py]
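To illustrate the core of BasicMapper.run() above: when a cx acts on physical qubits that are not adjacent, the number of SWAPs inserted in front of it is len(path) - 2, where path is the shortest undirected path in the coupling graph. A rough sketch using the Coupling helpers visible in this code base (the printed values are indicative, not asserted):

from qiskit.mapper import Coupling

# Linear device 0-1-2-3, given as the directed cx adjacency list used elsewhere.
coupling = Coupling(Coupling.coupling_list2dict([[0, 1], [1, 2], [2, 3]]))

path = coupling.shortest_undirected_path(0, 3)  # expected: [0, 1, 2, 3]
swaps_needed = len(path) - 2                    # expected: 2 SWAPs before the cx
print(path, swaps_needed)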
[start of qiskit/transpiler/passes/mapping/unroller.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Pass for unrolling a circuit to a given basis."""
9
10 import networkx as nx
11
12 from qiskit.circuit import QuantumRegister, ClassicalRegister
13 from qiskit.transpiler._basepasses import TransformationPass
14
15
16 class Unroller(TransformationPass):
17 """
18 Unroll (expand) non-basis, non-opaque instructions recursively
19 to a desired basis, using decomposition rules defined for each instruction.
20 """
21
22 def __init__(self, basis=None):
23 """
24 Args:
25             basis (list[str]): target basis gate names to unroll to
26 """
27 super().__init__()
28 self.basis = basis or []
29 self.basis += ['U', 'CX'] # Add default basis.
30
31 def run(self, dag):
32 """Expand all op nodes to the given basis.
33
34 If self.basis is empty, the circuit is unrolled down to
35 fundamental (opaque) gates (U, CX).
36
37 Args:
38 dag(DAGCircuit): input dag
39
40 Returns:
41 DAGCircuit: output unrolled dag
42
43 Raises:
44 TranspilerError: if no decomposition rule is found for an op
45 """
46 # Walk through the DAG and expand each non-basis node
47 for node in dag.get_gate_nodes():
48 current_node = dag.multi_graph.node[node]
49
50 if current_node["op"].name in self.basis: # If already a base, ignore.
51 continue
52
53 decomposition_rules = current_node["op"].decompositions()
54
55 # TODO: allow choosing other possible decompositions
56 decomposition_dag = self.run(decomposition_rules[0]) # recursively unroll gates
57
58 condition = current_node["condition"]
59 # the decomposition rule must be amended if used in a
60 # conditional context. delete the op nodes and replay
61 # them with the condition.
62 if condition:
63 decomposition_dag.add_creg(condition[0])
64 to_replay = []
65 for n_it in nx.topological_sort(decomposition_dag.multi_graph):
66 n = decomposition_dag.multi_graph.nodes[n_it]
67 if n["type"] == "op":
68 n["op"].control = condition
69 to_replay.append(n)
70 for n in decomposition_dag.get_op_nodes():
71 decomposition_dag._remove_op_node(n)
72 for n in to_replay:
73 decomposition_dag.apply_operation_back(n["op"], condition=condition)
74
75 # the wires for substitute_circuit_one are expected as qargs first,
76 # then cargs, then conditions
77 qwires = [w for w in decomposition_dag.wires
78 if isinstance(w[0], QuantumRegister)]
79 cwires = [w for w in decomposition_dag.wires
80 if isinstance(w[0], ClassicalRegister)]
81
82 dag.substitute_circuit_one(node,
83 decomposition_dag,
84 qwires + cwires)
85 return dag
86
[end of qiskit/transpiler/passes/mapping/unroller.py]
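A small sketch of how the Unroller pass above is configured, mirroring transpile_dag(), which splits its comma-separated basis string into a list; `dag` is a placeholder for a DAGCircuit to expand:

from qiskit.transpiler.passes import Unroller

basis = 'u1,u2,u3,cx,id'.split(',')
unroller = Unroller(basis)
print(unroller.basis)  # ['u1', 'u2', 'u3', 'cx', 'id', 'U', 'CX'] - 'U'/'CX' appended by __init__

# unrolled_dag = unroller.run(dag)  # recursively expand every non-basis op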
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit

base_commit: 42864787c40b47a1d2119e83f605125074aad58a

problem_statement:
Add tree search swap mapper
### What is the expected behavior?
Add a new tree search based swap mapper as a possible pass in the transpiler. The algorithm was submitted to the 2018 Qiskit Developer Challenge and is described at https://medium.com/qiskit/improving-a-quantum-compiler-48410d7a7084 .

created_at: 2018-10-26T18:59:16Z

patch:
<patch>
diff --git a/qiskit/mapper/_coupling.py b/qiskit/mapper/_coupling.py
--- a/qiskit/mapper/_coupling.py
+++ b/qiskit/mapper/_coupling.py
@@ -16,6 +16,7 @@
onto a device with this coupling.
"""
+import warnings
import networkx as nx
from ._couplingerror import CouplingError
@@ -68,20 +69,35 @@ def coupling_list2dict(couplinglist):
couplingdict[pair[0]] = [pair[1]]
return couplingdict
- def __init__(self, couplingdict=None):
+ def __init__(self, couplingdict=None, couplinglist=None):
"""
- Create coupling graph.
+ Create coupling graph. By default, the generated coupling has no nodes.
- By default, the coupling graph has no nodes. The optional couplingdict
- specifies the graph as an adjacency list. For example,
- couplingdict = {0: [1, 2], 1: [2]}.
+ Args:
+ couplinglist (list or None): An initial coupling graph, specified as
+ an adjacency list, e.g. [[0,1], [0,2], [1,2]].
+ couplingdict (dict or None): DEPRECATED An initial coupling graph
+ specified as an adjacency dict, e.g. {0: [1, 2], 1: [2]}.
+ Raises:
+ CouplingError: If both couplinglist and couplingdict are supplied.
"""
+ if couplingdict is not None and couplinglist is not None:
+ raise CouplingError('Cannot specify both couplingdict and couplinglist')
+
+ if couplingdict is not None:
+ warnings.warn('Initializing a coupling object through a couplingdict is deprecated. '
+ 'Use a couplinglist instead.', DeprecationWarning)
+
self.graph = nx.DiGraph()
- if isinstance(couplingdict, dict):
+ if couplingdict is not None:
for origin, destinations in couplingdict.items():
for destination in destinations:
self.add_edge(origin, destination)
+ if couplinglist is not None:
+ for source, target in couplinglist:
+ self.add_edge(source, target)
+
def size(self):
"""Return the number of physical qubits in this graph."""
return len(self.graph.nodes)
diff --git a/qiskit/mapper/_layout.py b/qiskit/mapper/_layout.py
--- a/qiskit/mapper/_layout.py
+++ b/qiskit/mapper/_layout.py
@@ -73,6 +73,10 @@ def __delitem__(self, key):
def __len__(self):
return max([key for key in self.keys() if isinstance(key, int)], default=-1) + 1
+ # Override dict's built-in copy method which would return a dict instead of a Layout.
+ def copy(self):
+ return type(self)(self)
+
def add(self, virtual_bit, physical_bit=None):
"""
Adds a map element between `bit` and `physical_bit`. If `physical_bit` is not
diff --git a/qiskit/transpiler/_transpiler.py b/qiskit/transpiler/_transpiler.py
--- a/qiskit/transpiler/_transpiler.py
+++ b/qiskit/transpiler/_transpiler.py
@@ -174,7 +174,7 @@ def transpile_dag(dag, basis_gates='u1,u2,u3,cx,id', coupling_map=None,
logger.info("pre-mapping properties: %s",
dag.properties())
# Insert swap gates
- coupling = Coupling(Coupling.coupling_list2dict(coupling_map))
+ coupling = Coupling(couplinglist=coupling_map)
removed_meas = remove_last_measurements(dag)
logger.info("measurements moved: %s", removed_meas)
logger.info("initial layout: %s", initial_layout)
diff --git a/qiskit/transpiler/passes/__init__.py b/qiskit/transpiler/passes/__init__.py
--- a/qiskit/transpiler/passes/__init__.py
+++ b/qiskit/transpiler/passes/__init__.py
@@ -13,3 +13,4 @@
from .mapping.basic_mapper import BasicMapper
from .mapping.direction_mapper import DirectionMapper
from .mapping.unroller import Unroller
+from .mapping.lookahead_mapper import LookaheadMapper
diff --git a/qiskit/transpiler/passes/mapping/lookahead_mapper.py b/qiskit/transpiler/passes/mapping/lookahead_mapper.py
new file mode 100644
--- /dev/null
+++ b/qiskit/transpiler/passes/mapping/lookahead_mapper.py
@@ -0,0 +1,289 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2018, IBM.
+#
+# This source code is licensed under the Apache License, Version 2.0 found in
+# the LICENSE.txt file in the root directory of this source tree.
+
+"""
+Implementation of Sven Jandura's swap mapper submission for the 2018 QISKit
+Developer Challenge, adapted to integrate into the transpiler architecture.
+
+The role of the mapper pass is to modify the starting circuit to be compatible
+with the target device's topology (the set of two-qubit gates available on the
+hardware.) To do this, the mapper will insert SWAP gates to relocate the virtual
+qubits for each upcoming gate onto a set of coupled physical qubits. However, as
+SWAP gates are particularly lossy, the goal is to accomplish this remapping while
+introducing the fewest possible additional SWAPs.
+
+This algorithm searches through the available combinations of SWAP gates by means
+of a narrowed best first/beam search, described as follows:
+
+- Start with a layout of virtual qubits onto physical qubits.
+- Find any gates in the input circuit which can be performed with the current
+ layout and mark them as mapped.
+- For all possible SWAP gates, calculate the layout that would result from their
+ application and rank them according to the distance of the resulting layout
+ over upcoming gates (see _calc_layout_distance.)
+- For the four (SEARCH_WIDTH) highest-ranking SWAPs, repeat the above process on
+ the layout that would be generated if they were applied.
+- Repeat this process down to a depth of four (SEARCH_DEPTH) SWAPs away from the
+ initial layout, for a total of 256 (SEARCH_WIDTH^SEARCH_DEPTH) prospective
+ layouts.
+- Choose the layout which maximizes the number of two-qubit gates which could be
+ performed. Add its mapped gates, including the SWAPs generated, to the
+ output circuit.
+- Repeat the above until all gates from the initial circuit are mapped.
+
+For more details on the algorithm, see Sven's blog post:
+https://medium.com/qiskit/improving-a-quantum-compiler-48410d7a7084
+
+"""
+
+from copy import deepcopy
+
+from qiskit import QuantumRegister
+from qiskit.dagcircuit import DAGCircuit
+from qiskit.extensions.standard import SwapGate
+from qiskit.transpiler._basepasses import TransformationPass
+from qiskit.mapper import Layout, remove_last_measurements, return_last_measurements, MapperError
+
+SEARCH_DEPTH = 4
+SEARCH_WIDTH = 4
+
+
+class LookaheadMapper(TransformationPass):
+ """Map input circuit onto a backend topology via insertion of SWAPs."""
+
+ def __init__(self, coupling_map):
+ """Initialize a LookaheadMapper instance.
+
+ Arguments:
+ coupling_map (Coupling): Coupling of the target backend.
+ """
+
+ super().__init__()
+
+ self._coupling_map = coupling_map
+
+ def run(self, dag):
+ """Run one pass of the lookahead mapper on the provided DAG.
+
+ Args:
+ dag (DAGCircuit): the directed acyclic graph to be mapped
+ Returns:
+ DAGCircuit: A dag mapped to be compatible with the coupling_map in
+ the property_set.
+ Raises:
+ MapperError: If the provided DAG has more qubits than are available
+ in the coupling map.
+
+ """
+
+ # Preserve fix for https://github.com/Qiskit/qiskit-terra/issues/674
+ removed_measures = remove_last_measurements(dag)
+
+ coupling_map = self._coupling_map
+ ordered_virtual_gates = list(dag.serial_layers())
+
+ if len(dag.get_qubits()) > len(coupling_map.physical_qubits):
+ raise MapperError('DAG contains more qubits than are present in the coupling map.')
+
+ dag_qubits = dag.get_qubits()
+ coupling_qubits = coupling_map.physical_qubits
+
+ starting_layout = [dag_qubits[i] if i < len(dag_qubits) else None
+ for i in range(len(coupling_qubits))]
+
+ mapped_gates = []
+ layout = Layout(starting_layout)
+ gates_remaining = ordered_virtual_gates.copy()
+
+ while gates_remaining:
+ best_step = _search_forward_n_swaps(layout, gates_remaining,
+ coupling_map)
+
+ layout = best_step['layout']
+ gates_mapped = best_step['gates_mapped']
+ gates_remaining = best_step['gates_remaining']
+
+ mapped_gates.extend(gates_mapped)
+
+ # Preserve input DAG's name, regs, wire_map, etc. but replace the graph.
+ mapped_dag = _copy_circuit_metadata(dag, coupling_map)
+
+ for gate in mapped_gates:
+ mapped_dag.apply_operation_back(**gate)
+
+ return_last_measurements(mapped_dag, removed_measures, layout)
+
+ return mapped_dag
+
+
+def _search_forward_n_swaps(layout, gates, coupling_map,
+ depth=SEARCH_DEPTH, width=SEARCH_WIDTH):
+ """Search for SWAPs which allow for application of largest number of gates.
+
+ Arguments:
+ layout (Layout): Map from virtual qubit index to physical qubit index.
+ gates (list): Gates to be mapped.
+ coupling_map (Coupling): Coupling of the target backend.
+ depth (int): Number of SWAP layers to search before choosing a result.
+ width (int): Number of SWAPs to consider at each layer.
+ Returns:
+ dict: Describes solution step found.
+ layout (Layout): Virtual to physical qubit map after SWAPs.
+ gates_remaining (list): Gates that could not be mapped.
+ gates_mapped (list): Gates that were mapped, including added SWAPs.
+
+ """
+
+ gates_mapped, gates_remaining = _map_free_gates(layout, gates, coupling_map)
+
+ base_step = {'layout': layout,
+ 'swaps_added': 0,
+ 'gates_mapped': gates_mapped,
+ 'gates_remaining': gates_remaining}
+
+ if not gates_remaining or depth == 0:
+ return base_step
+
+ possible_swaps = coupling_map.get_edges()
+
+ def _score_swap(swap):
+ """Calculate the relative score for a given SWAP."""
+ trial_layout = layout.copy()
+ trial_layout.swap(*swap)
+ return _calc_layout_distance(gates, coupling_map, trial_layout)
+
+ ranked_swaps = sorted(possible_swaps, key=_score_swap)
+
+ best_swap, best_step = None, None
+ for swap in ranked_swaps[:width]:
+ trial_layout = layout.copy()
+ trial_layout.swap(*swap)
+ next_step = _search_forward_n_swaps(trial_layout, gates_remaining,
+ coupling_map, depth - 1, width)
+
+ # ranked_swaps already sorted by distance, so distance is the tie-breaker.
+ if best_swap is None or _score_step(next_step) > _score_step(best_step):
+ best_swap, best_step = swap, next_step
+
+ best_swap_gate = _swap_ops_from_edge(best_swap, layout)
+ return {
+ 'layout': best_step['layout'],
+ 'swaps_added': 1 + best_step['swaps_added'],
+ 'gates_remaining': best_step['gates_remaining'],
+ 'gates_mapped': gates_mapped + best_swap_gate + best_step['gates_mapped'],
+ }
+
+
+def _map_free_gates(layout, gates, coupling_map):
+ """Map all gates that can be executed with the current layout.
+
+ Args:
+ layout (Layout): Map from virtual qubit index to physical qubit index.
+ gates (list): Gates to be mapped.
+ coupling_map (Coupling): Coupling for target device topology.
+
+ Returns:
+ tuple:
+ mapped_gates (list): ops for gates that can be executed, mapped onto layout.
+ remaining_gates (list): gates that cannot be executed on the layout.
+
+ """
+
+ blocked_qubits = set()
+
+ mapped_gates = []
+ remaining_gates = []
+
+ for gate in gates:
+ # Ignore gates which do not have associated qubits.
+ if not gate['partition']:
+ continue
+
+ qubits = gate['partition'][0]
+
+ if blocked_qubits.intersection(qubits):
+ blocked_qubits.update(qubits)
+ remaining_gates.append(gate)
+ elif len(qubits) == 1:
+ mapped_gate = _transform_gate_for_layout(gate, layout)
+ mapped_gates.append(mapped_gate)
+ elif coupling_map.distance(*[layout[q] for q in qubits]) == 1:
+ mapped_gate = _transform_gate_for_layout(gate, layout)
+ mapped_gates.append(mapped_gate)
+ else:
+ blocked_qubits.update(qubits)
+ remaining_gates.append(gate)
+
+ return mapped_gates, remaining_gates
+
+
+def _calc_layout_distance(gates, coupling_map, layout, max_gates=None):
+ """Return the sum of the distances of two-qubit pairs in each CNOT in gates
+ according to the layout and the coupling.
+ """
+
+ if max_gates is None:
+ max_gates = 50 + 10 * len(coupling_map.physical_qubits)
+
+ return sum(coupling_map.distance(*[layout[q] for q in gate['partition'][0]])
+ for gate in gates[:max_gates]
+ if len(gate['partition'][0]) == 2)
+
+
+def _score_step(step):
+ """Count the mapped two-qubit gates, less the number of added SWAPs."""
+
+ # Each added swap will add 3 ops to gates_mapped, so subtract 3.
+ return len([g for g in step['gates_mapped']
+ if len(g.get('qargs', [])) == 2]) - 3 * step['swaps_added']
+
+
+def _copy_circuit_metadata(source_dag, coupling_map):
+ """Return a copy of source_dag with metadata but without a multi_graph.
+ Generate only a single qreg in the output DAG, matching the size of the
+ coupling_map."""
+
+ target_dag = DAGCircuit()
+ target_dag.name = source_dag.name
+
+ for creg in source_dag.cregs.values():
+ target_dag.add_creg(creg)
+
+ device_qreg = QuantumRegister(len(coupling_map.physical_qubits), 'q')
+ target_dag.add_qreg(device_qreg)
+
+ for name, (num_qbits, num_cbits, num_params) in source_dag.basis.items():
+ target_dag.add_basis_element(name, num_qbits, num_cbits, num_params)
+
+ for name, gate_data in source_dag.gates.items():
+ target_dag.add_gate_data(name, gate_data)
+
+ return target_dag
+
+
+def _transform_gate_for_layout(gate, layout):
+ """Return op implementing a virtual gate on given layout."""
+
+ mapped_op = deepcopy([n for n in gate['graph'].multi_graph.nodes.values()
+ if n['type'] == 'op'][0])
+
+ device_qreg = QuantumRegister(len(layout.get_physical_bits()), 'q')
+ mapped_op['qargs'] = [(device_qreg, layout[a]) for a in mapped_op['qargs']]
+ mapped_op.pop('type')
+ mapped_op.pop('name')
+
+ return mapped_op
+
+
+def _swap_ops_from_edge(edge, layout):
+ """Generate list of ops to implement a SWAP gate along a coupling edge."""
+
+ device_qreg = QuantumRegister(len(layout.get_physical_bits()), 'q')
+ qreg_edge = [(device_qreg, i) for i in edge]
+ return [
+ {'op': SwapGate(*qreg_edge), 'qargs': qreg_edge},
+ ]
</patch>
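The module docstring in the patch above describes a narrowed best-first (beam) search over candidate SWAPs. The following is a self-contained toy sketch of that search pattern only, not the LookaheadMapper code: the integer states, +/-1 moves, and distance-to-10 score are placeholders standing in for layouts, SWAPs, and the coupling-distance/mapped-gate scoring used by the pass:

DEPTH = 2   # toy counterparts of SEARCH_DEPTH / SEARCH_WIDTH in the patch
WIDTH = 2

def beam_search(state, score, expand, depth=DEPTH, width=WIDTH):
    """Return the best state reachable within `depth` moves, expanding only the
    `width` most promising moves at each level (narrowed best-first search)."""
    if depth == 0:
        return state
    candidates = sorted(expand(state), key=score, reverse=True)[:width]
    if not candidates:
        return state
    best = max((beam_search(c, score, expand, depth - 1, width) for c in candidates), key=score)
    return best if score(best) > score(state) else state

# Toy problem: states are integers, a "move" adds or subtracts 1, and the score
# prefers states close to 10 (standing in for "most upcoming gates mappable").
result = beam_search(0, score=lambda s: -abs(10 - s), expand=lambda s: [s + 1, s - 1])
print(result)  # 2: the best state reachable within DEPTH moves of the start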
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: conan-io__conan-4667

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in revisions: Create json output doesn't return the revision
From a server with revisions, I install a package that is stored with revisions using the json output:
`conan install <ref> -r remote --json file.json`
Apparently, the "id" field of the json (recipe and packages) doesn't contain the revision, and it should.
</issue>
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open-source, C/C++ package manager.
5
6 +------------------------+-------------------------+
7 | **master** | **develop** |
8 +========================+=========================+
9 | |Build Status Master| | |Build Status Develop| |
10 +------------------------+-------------------------+
11
12
13 +------------------------+---------------------------+---------------------------------------------+
14 | **Coverage master** | **Coverage develop** | **Coverage graph** |
15 +========================+===========================+=============================================+
16 | |Master coverage| | |Develop coverage| | |Coverage graph| |
17 +------------------------+---------------------------+---------------------------------------------+
18
19
20 Setup
21 =====
22
23 From binaries
24 -------------
25
26 We have installers for `most platforms here <http://conan.io>`__ but you
27 can run **conan** from sources if you want.
28
29 From pip
30 --------
31
32 Conan is compatible with Python 2 and Python 3.
33
34 - Install pip following `pip docs`_.
35 - Install conan:
36
37 .. code-block:: bash
38
39 $ pip install conan
40
41 You can also use `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions:
42
43
44 .. code-block:: bash
45
46 $ pip install --index-url https://test.pypi.org/simple/ conan
47
48
49 From Homebrew (OSx)
50 -------------------
51
52 - Install Homebrew following `brew homepage`_.
53
54 .. code-block:: bash
55
56 $ brew update
57 $ brew install conan
58
59 From source
60 -----------
61
62 You can run **conan** client and server in Windows, MacOS, and Linux.
63
64 - **Install pip following** `pip docs`_.
65
66 - **Clone conan repository:**
67
68 .. code-block:: bash
69
70 $ git clone https://github.com/conan-io/conan.git
71
72 - **Install in editable mode**
73
74 .. code-block:: bash
75
76 $ cd conan && sudo pip install -e .
77
78 If you are on Windows, using ``sudo`` is not required.
79
80 - **You are ready, try to run conan:**
81
82 .. code-block::
83
84 $ conan --help
85
86 Consumer commands
87 install Installs the requirements specified in a conanfile (.py or .txt).
88 config Manages configuration. Edits the conan.conf or installs config files.
89 get Gets a file or list a directory of a given reference or package.
90 info Gets information about the dependency graph of a recipe.
91 search Searches package recipes and binaries in the local cache or in a remote.
92 Creator commands
93 new Creates a new package recipe template with a 'conanfile.py'.
94 create Builds a binary package for recipe (conanfile.py) located in current dir.
95 upload Uploads a recipe and binary packages to a remote.
96 export Copies the recipe (conanfile.py & associated files) to your local cache.
97 export-pkg Exports a recipe & creates a package with given files calling 'package'.
98 test Test a package, consuming it with a conanfile recipe with a test() method.
99 Package development commands
100 source Calls your local conanfile.py 'source()' method.
101 build Calls your local conanfile.py 'build()' method.
102 package Calls your local conanfile.py 'package()' method.
103 Misc commands
104 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
105 remote Manages the remote list and the package recipes associated to a remote.
106 user Authenticates against a remote with user/pass, caching the auth token.
107 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
108 copy Copies conan recipes and packages to another user/channel.
109 remove Removes packages or binaries matching pattern from local cache or remote.
110 alias Creates and exports an 'alias recipe'.
111 download Downloads recipe and binaries to the local cache, without using settings.
112
113 Conan commands. Type "conan <command> -h" for help
114
115 Contributing to the project
116 ===========================
117
118 Feedback and contribution is always welcome in this project.
119 Please read our [contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
120
121 Running the tests
122 =================
123
124 Using tox
125 ---------
126
127 .. code-block:: bash
128
129 $ tox
130
131 It will install the needed requirements and launch `nose` skipping some heavy and slow tests.
132 If you want to run the full test suite:
133
134 .. code-block:: bash
135
136 $ tox -e full
137
138 Without tox
139 -----------
140
141 **Install python requirements**
142
143 .. code-block:: bash
144
145 $ pip install -r conans/requirements.txt
146 $ pip install -r conans/requirements_server.txt
147 $ pip install -r conans/requirements_dev.txt
148
149
150 Only in OSX:
151
152 .. code-block:: bash
153
154 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
155
156
157 If you are not on Windows and you are not using a python virtual environment, you will need to run these
158 commands using `sudo`.
159
160 Before you can run the tests, you need to set a few environment variables first.
161
162 .. code-block:: bash
163
164 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
165
166 On Windows it would be (while being in the conan root directory):
167
168 .. code-block:: bash
169
170 $ set PYTHONPATH=.
171
172 Ensure that your ``cmake`` has version 2.8 or later. You can see the
173 version with the following command:
174
175 .. code-block:: bash
176
177 $ cmake --version
178
179 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
180 operating system and your requirements.
181
182 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
183
184 .. code-block:: bash
185
186 $ export CONAN_COMPILER=gcc
187 $ export CONAN_COMPILER_VERSION=4.8
188
189 These should work for OS X:
190
191 .. code-block:: bash
192
193 $ export CONAN_COMPILER=clang
194 $ export CONAN_COMPILER_VERSION=3.5
195
196 Finally, there are some tests that use conan to package Go-lang
197 libraries, so you might **need to install go-lang** on your computer and
198 add it to the path.
199
200 You can run the actual tests like this:
201
202 .. code-block:: bash
203
204 $ nosetests .
205
206
207 There are a couple of test attributes defined, as ``slow``, or ``golang`` that you can use
208 to filter the tests, and do not execute them:
209
210 .. code-block:: bash
211
212 $ nosetests . -a !golang
213
214 A few minutes later it should print ``OK``:
215
216 .. code-block:: bash
217
218 ............................................................................................
219 ----------------------------------------------------------------------
220 Ran 146 tests in 50.993s
221
222 OK
223
224 To run specific tests, you can specify the test name too, something like:
225
226 .. code-block:: bash
227
228 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
229
230 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
231
232 License
233 -------
234
235 `MIT LICENSE <./LICENSE.md>`__
236
237 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
238 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
239
240 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
242
243 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
244 :target: https://codecov.io/gh/conan-io/conan/branch/master
245
246 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
247 :target: https://codecov.io/gh/conan-io/conan/branch/develop
248
249 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
250 :height: 50px
251 :width: 50 px
252 :alt: Conan develop coverage
253
254 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
255
256 .. _`brew homepage`: http://brew.sh/
257
[end of README.rst]
[start of conans/client/cmd/uploader.py]
1 import os
2 import stat
3 import tarfile
4 import time
5 from collections import defaultdict
6
7 from conans.client.source import complete_recipe_sources
8 from conans.errors import ConanException, NotFoundException
9 from conans.model.manifest import gather_files, FileTreeManifest
10 from conans.model.ref import ConanFileReference, PackageReference, check_valid_ref
11 from conans.paths import (CONAN_MANIFEST, CONANFILE, EXPORT_SOURCES_TGZ_NAME,
12 EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME, CONANINFO)
13 from conans.search.search import search_packages, search_recipes
14 from conans.util.files import (load, clean_dirty, is_dirty,
15 gzopen_without_timestamps, set_dirty)
16 from conans.util.log import logger
17 from conans.util.tracer import (log_recipe_upload, log_compressed_files,
18 log_package_upload)
19
20
21 UPLOAD_POLICY_FORCE = "force-upload"
22 UPLOAD_POLICY_NO_OVERWRITE = "no-overwrite"
23 UPLOAD_POLICY_NO_OVERWRITE_RECIPE = "no-overwrite-recipe"
24 UPLOAD_POLICY_SKIP = "skip-upload"
25
26
27 class CmdUpload(object):
28 """ This class is responsible for uploading packages to remotes. The flow is:
29 - Collect all the data from the local cache:
30         - Collect the refs that match the given pattern _collect_refs_to_upload
31         - Collect, for every ref, all the binary IDs that have to be uploaded
32           "_collect_packages_to_upload". This may discard binaries that do not
33           belong to the current RREV
34           This collection step handles the interactivity (asking the user yes/no),
35           the errors (don't upload packages with policy=build_always), and computes
36           the full REVISIONS for everything that has to be uploaded.
37 No remote API calls are done in this step, everything is local
38 - Execute the upload. For every ref:
39 - Upload the recipe of the ref: "_upload_recipe"
40 - If not FORCE, check the date "_check_recipe_date", i.e. if there are
41 changes, do not allow uploading if the remote date is newer than the
42 local cache one
43         - Retrieve the sources (exports_sources) if they are not cached and we are
44           uploading to a different remote. "complete_recipe_sources"
45 - Gather files and create 2 .tgz (exports, exports_sources) with
46 "_compress_recipe_files"
47         - Decide which files have to be uploaded to and deleted from the server,
48           based on the diff with the remote snapshot "_recipe_files_to_upload"
49 This can raise if upload policy is not overwrite
50 - Execute the real transfer "remote_manager.upload_recipe()"
51 - For every package_id of every ref: "_upload_package"
52 - Gather files and create package.tgz. "_compress_package_files"
53 - (Optional) Do the integrity check of the package
54 - Decide which files to upload and delete from server:
55 "_package_files_to_upload". Can raise if policy is NOT overwrite
56 - Do the actual upload
57
58 All the REVISIONS are local defined, not retrieved from servers
59
60 This requires calling to the remote API methods:
61 - get_recipe_sources() to get the export_sources if they are missing
62 - get_recipe_snapshot() to do the diff and know what files to upload
63 - get_package_snapshot() to do the diff and know what files to upload
64 - get_recipe_manifest() to check the date and raise if policy requires
65 - get_package_manifest() to raise if policy!=force and manifests change
66 """
67 def __init__(self, cache, user_io, remote_manager, loader, hook_manager):
68 self._cache = cache
69 self._user_io = user_io
70 self._remote_manager = remote_manager
71 self._registry = cache.registry
72 self._loader = loader
73 self._hook_manager = hook_manager
74
75 def upload(self, upload_recorder, reference_or_pattern, package_id=None, all_packages=None,
76 confirm=False, retry=0, retry_wait=0, integrity_check=False, policy=None,
77 remote_name=None, query=None):
78 t1 = time.time()
79 refs, confirm = self._collects_refs_to_upload(package_id, reference_or_pattern, confirm)
80 refs_by_remote = self._collect_packages_to_upload(refs, confirm, remote_name, all_packages,
81 query, package_id)
82 # Do the job
83 for remote, refs in refs_by_remote.items():
84 self._user_io.out.info("Uploading to remote '{}':".format(remote.name))
85 for (ref, conanfile, prefs) in refs:
86 self._upload_ref(conanfile, ref, prefs, retry, retry_wait,
87 integrity_check, policy, remote, upload_recorder)
88
89 logger.debug("UPLOAD: Time manager upload: %f" % (time.time() - t1))
90
91 def _collects_refs_to_upload(self, package_id, reference_or_pattern, confirm):
92 """ validate inputs and compute the refs (without revisions) to be uploaded
93 """
94 if package_id and not check_valid_ref(reference_or_pattern, allow_pattern=False):
95 raise ConanException("-p parameter only allowed with a valid recipe reference, "
96 "not with a pattern")
97
98 if package_id or check_valid_ref(reference_or_pattern, allow_pattern=False):
99 # Upload package
100 ref = ConanFileReference.loads(reference_or_pattern)
101 refs = [ref, ]
102 confirm = True
103 else:
104 refs = search_recipes(self._cache, reference_or_pattern)
105 if not refs:
106 raise NotFoundException(("No packages found matching pattern '%s'" %
107 reference_or_pattern))
108 return refs, confirm
109
110 def _collect_packages_to_upload(self, refs, confirm, remote_name, all_packages, query,
111 package_id):
112 """ compute the references with revisions and the package_ids to be uploaded
113 """
114 # Group recipes by remote
115 refs_by_remote = defaultdict(list)
116 default_remote = (self._registry.remotes.get(remote_name) if remote_name else
117 self._registry.remotes.default)
118
119 for ref in refs:
120 metadata = self._cache.package_layout(ref).load_metadata()
121 ref = ref.copy_with_rev(metadata.recipe.revision)
122 if not remote_name:
123 remote = self._registry.refs.get(ref) or default_remote
124 else:
125 remote = default_remote
126
127 upload = True
128 if not confirm:
129 msg = "Are you sure you want to upload '%s' to '%s'?" % (str(ref), remote.name)
130 upload = self._user_io.request_boolean(msg)
131 if upload:
132 try:
133 conanfile_path = self._cache.conanfile(ref)
134 conanfile = self._loader.load_class(conanfile_path)
135 except NotFoundException:
136 raise NotFoundException(("There is no local conanfile exported as %s" %
137 str(ref)))
138
139 # TODO: This search of binary packages has to be improved, more robust
140 # So only real packages are retrieved
141 if all_packages or query:
142 if all_packages:
143 query = None
144 # better to do a search, that will retrieve real packages with ConanInfo
145 # Not only "package_id" folders that could be empty
146 package_layout = self._cache.package_layout(ref.copy_clear_rev())
147 packages = search_packages(package_layout, query)
148 packages_ids = list(packages.keys())
149 elif package_id:
150 packages_ids = [package_id, ]
151 else:
152 packages_ids = []
153 if packages_ids:
154 if conanfile.build_policy == "always":
155 raise ConanException("Conanfile '%s' has build_policy='always', "
156 "no packages can be uploaded" % str(ref))
157 prefs = []
158 # Gather all the complete PREFS with PREV
159 for package_id in packages_ids:
160 if package_id not in metadata.packages:
161 raise ConanException("Binary package %s:%s not found"
162 % (str(ref), package_id))
163 # Filter packages that don't match the recipe revision
164 if self._cache.config.revisions_enabled and ref.revision:
165 rec_rev = metadata.packages[package_id].recipe_revision
166 if ref.revision != rec_rev:
167 self._user_io.out.warn("Skipping package '%s', it doesn't belong to the "
168 "current recipe revision" % package_id)
169 continue
170 package_revision = metadata.packages[package_id].revision
171 assert package_revision is not None, "PREV cannot be None to upload"
172 prefs.append(PackageReference(ref, package_id, package_revision))
173 refs_by_remote[remote].append((ref, conanfile, prefs))
174
175 return refs_by_remote
176
177 def _upload_ref(self, conanfile, ref, prefs, retry, retry_wait, integrity_check, policy,
178 recipe_remote, upload_recorder):
179 """ Uploads the recipes and binaries identified by ref
180 """
181 assert (ref.revision is not None), "Cannot upload a recipe without RREV"
182 conanfile_path = self._cache.conanfile(ref)
183 # FIXME: I think it makes no sense to specify a remote to "pre_upload"
184 # FIXME: because the recipe can have one and the package a different one
185 self._hook_manager.execute("pre_upload", conanfile_path=conanfile_path,
186 reference=ref, remote=recipe_remote)
187
188 self._user_io.out.info("Uploading %s to remote '%s'" % (str(ref), recipe_remote.name))
189 self._upload_recipe(ref, conanfile, retry, retry_wait, policy, recipe_remote)
190 upload_recorder.add_recipe(ref, recipe_remote.name, recipe_remote.url)
191
192 # Now the binaries
193 if prefs:
194 total = len(prefs)
195 for index, pref in enumerate(prefs):
196 p_remote = recipe_remote
197 msg = ("Uploading package %d/%d: %s to '%s'" % (index+1, total, str(pref.id),
198 p_remote.name))
199 self._user_io.out.info(msg)
200 self._upload_package(pref, retry, retry_wait,
201 integrity_check, policy, p_remote)
202 upload_recorder.add_package(pref, p_remote.name, p_remote.url)
203
204 # FIXME: I think it makes no sense to specify a remote to "post_upload"
205 # FIXME: because the recipe can have one and the package a different one
206 self._hook_manager.execute("post_upload", conanfile_path=conanfile_path, reference=ref,
207 remote=recipe_remote)
208
209 def _upload_recipe(self, ref, conanfile, retry, retry_wait, policy, remote):
210 if policy != UPLOAD_POLICY_FORCE:
211 remote_manifest = self._check_recipe_date(ref, remote)
212 else:
213 remote_manifest = None
214
215 current_remote = self._registry.refs.get(ref)
216
217 if remote != current_remote:
218 complete_recipe_sources(self._remote_manager, self._cache, conanfile, ref)
219
220 conanfile_path = self._cache.conanfile(ref)
221 self._hook_manager.execute("pre_upload_recipe", conanfile_path=conanfile_path,
222 reference=ref, remote=remote)
223
224 t1 = time.time()
225 the_files = self._compress_recipe_files(ref)
226 if policy == UPLOAD_POLICY_SKIP:
227 return ref
228 files_to_upload, deleted = self._recipe_files_to_upload(ref, policy, the_files,
229 remote, remote_manifest)
230 if files_to_upload or deleted:
231 self._remote_manager.upload_recipe(ref, files_to_upload, deleted,
232 remote, retry, retry_wait)
233 self._upload_recipe_end_msg(ref, remote)
234 else:
235 self._user_io.out.info("Recipe is up to date, upload skipped")
236 duration = time.time() - t1
237 log_recipe_upload(ref, duration, the_files, remote.name)
238 self._hook_manager.execute("post_upload_recipe", conanfile_path=conanfile_path,
239 reference=ref, remote=remote)
240
241 # The recipe wasn't in the registry or it has changed the revision field only
242 if not current_remote:
243 self._registry.refs.set(ref, remote.name)
244
245 return ref
246
247 def _upload_package(self, pref, retry=None, retry_wait=None, integrity_check=False,
248 policy=None, p_remote=None):
249
250 assert (pref.revision is not None), "Cannot upload a package without PREV"
251 assert (pref.ref.revision is not None), "Cannot upload a package without RREV"
252
253 conanfile_path = self._cache.conanfile(pref.ref)
254 self._hook_manager.execute("pre_upload_package", conanfile_path=conanfile_path,
255 reference=pref.ref,
256 package_id=pref.id,
257 remote=p_remote)
258
259 t1 = time.time()
260 the_files = self._compress_package_files(pref, integrity_check)
261 if policy == UPLOAD_POLICY_SKIP:
262 return None
263 files_to_upload, deleted = self._package_files_to_upload(pref, policy, the_files, p_remote)
264
265 if files_to_upload or deleted:
266 self._remote_manager.upload_package(pref, files_to_upload, deleted, p_remote, retry,
267 retry_wait)
268 logger.debug("UPLOAD: Time upload package: %f" % (time.time() - t1))
269 else:
270 self._user_io.out.info("Package is up to date, upload skipped")
271
272 duration = time.time() - t1
273 log_package_upload(pref, duration, the_files, p_remote)
274 self._hook_manager.execute("post_upload_package", conanfile_path=conanfile_path,
275 reference=pref.ref, package_id=pref.id, remote=p_remote)
276
277 logger.debug("UPLOAD: Time uploader upload_package: %f" % (time.time() - t1))
278 cur_package_remote = self._registry.prefs.get(pref.copy_clear_rev())
279 if not cur_package_remote and policy != UPLOAD_POLICY_SKIP:
280 self._registry.prefs.set(pref, p_remote.name)
281
282 return pref
283
284 def _compress_recipe_files(self, ref):
285 export_folder = self._cache.export(ref)
286
287 for f in (EXPORT_TGZ_NAME, EXPORT_SOURCES_TGZ_NAME):
288 tgz_path = os.path.join(export_folder, f)
289 if is_dirty(tgz_path):
290 self._user_io.out.warn("%s: Removing %s, marked as dirty" % (str(ref), f))
291 os.remove(tgz_path)
292 clean_dirty(tgz_path)
293
294 files, symlinks = gather_files(export_folder)
295 if CONANFILE not in files or CONAN_MANIFEST not in files:
296 raise ConanException("Cannot upload corrupted recipe '%s'" % str(ref))
297 export_src_folder = self._cache.export_sources(ref, short_paths=None)
298 src_files, src_symlinks = gather_files(export_src_folder)
299 the_files = _compress_recipe_files(files, symlinks, src_files, src_symlinks, export_folder,
300 self._user_io.out)
301 return the_files
302
303 def _compress_package_files(self, pref, integrity_check):
304
305 t1 = time.time()
306 # existing package, will use short paths if defined
307 package_folder = self._cache.package(pref, short_paths=None)
308
309 if is_dirty(package_folder):
310 raise ConanException("Package %s is corrupted, aborting upload.\n"
311 "Remove it with 'conan remove %s -p=%s'"
312 % (pref, pref.ref, pref.id))
313 tgz_path = os.path.join(package_folder, PACKAGE_TGZ_NAME)
314 if is_dirty(tgz_path):
315 self._user_io.out.warn("%s: Removing %s, marked as dirty"
316 % (str(pref), PACKAGE_TGZ_NAME))
317 os.remove(tgz_path)
318 clean_dirty(tgz_path)
319 # Get all the files in that directory
320 files, symlinks = gather_files(package_folder)
321
322 if CONANINFO not in files or CONAN_MANIFEST not in files:
323 logger.error("Missing info or manifest in uploading files: %s" % (str(files)))
324 raise ConanException("Cannot upload corrupted package '%s'" % str(pref))
325
326 logger.debug("UPLOAD: Time remote_manager build_files_set : %f" % (time.time() - t1))
327 if integrity_check:
328 self._package_integrity_check(pref, files, package_folder)
329 logger.debug("UPLOAD: Time remote_manager check package integrity : %f"
330 % (time.time() - t1))
331
332 the_files = _compress_package_files(files, symlinks, package_folder, self._user_io.out)
333 return the_files
334
335 def _recipe_files_to_upload(self, ref, policy, the_files, remote, remote_manifest):
336 # Get the remote snapshot
337 remote_snapshot = self._remote_manager.get_recipe_snapshot(ref, remote)
338
339 if remote_snapshot and policy != UPLOAD_POLICY_FORCE:
340 local_manifest = FileTreeManifest.loads(load(the_files["conanmanifest.txt"]))
341
342 if remote_manifest == local_manifest:
343 return None, None
344
345 if policy in (UPLOAD_POLICY_NO_OVERWRITE, UPLOAD_POLICY_NO_OVERWRITE_RECIPE):
346 raise ConanException("Local recipe is different from the remote recipe. "
347 "Forbidden overwrite.")
348
349 files_to_upload = {filename.replace("\\", "/"): path
350 for filename, path in the_files.items()}
351 deleted = set(remote_snapshot).difference(the_files)
352 return files_to_upload, deleted
353
354 def _package_files_to_upload(self, pref, policy, the_files, remote):
355 # Get the remote snapshot
356 remote_snapshot = self._remote_manager.get_package_snapshot(pref, remote)
357
358 if remote_snapshot:
359 remote_manifest, _ = self._remote_manager.get_package_manifest(pref, remote)
360 local_manifest = FileTreeManifest.loads(load(the_files["conanmanifest.txt"]))
361
362 if remote_manifest == local_manifest:
363 return None, None
364
365 if policy == UPLOAD_POLICY_NO_OVERWRITE:
366 raise ConanException("Local package is different from the remote package. "
367 "Forbidden overwrite.")
368 files_to_upload = the_files
369 deleted = set(remote_snapshot).difference(the_files)
370
371 return files_to_upload, deleted
372
373 def _upload_recipe_end_msg(self, ref, remote):
374 msg = "Uploaded conan recipe '%s' to '%s'" % (str(ref), remote.name)
375 url = remote.url.replace("https://api.bintray.com/conan", "https://bintray.com")
376 msg += ": %s" % url
377 self._user_io.out.info(msg)
378
379 def _package_integrity_check(self, pref, files, package_folder):
380 # If package has been modified remove tgz to regenerate it
381 self._user_io.out.rewrite_line("Checking package integrity...")
382
383 # short_paths = None is enough if short_paths exist
384 layout = self._cache.package_layout(pref.ref, short_paths=None)
385 read_manifest, expected_manifest = layout.package_manifests(pref)
386
387 if read_manifest != expected_manifest:
388 self._user_io.out.writeln("")
389 diff = read_manifest.difference(expected_manifest)
390 for fname, (h1, h2) in diff.items():
391 self._user_io.out.warn("Mismatched checksum '%s' (manifest: %s, file: %s)"
392 % (fname, h1, h2))
393
394 if PACKAGE_TGZ_NAME in files:
395 try:
396 tgz_path = os.path.join(package_folder, PACKAGE_TGZ_NAME)
397 os.unlink(tgz_path)
398 except Exception:
399 pass
400 error_msg = os.linesep.join("Mismatched checksum '%s' (manifest: %s, file: %s)"
401 % (fname, h1, h2) for fname, (h1, h2) in diff.items())
402 logger.error("Manifests doesn't match!\n%s" % error_msg)
403 raise ConanException("Cannot upload corrupted package '%s'" % str(pref))
404 else:
405 self._user_io.out.rewrite_line("Package integrity OK!")
406 self._user_io.out.writeln("")
407
408 def _check_recipe_date(self, ref, remote):
409 try:
410 remote_recipe_manifest, ref = self._remote_manager.get_recipe_manifest(ref, remote)
411 except NotFoundException:
412 return # First time uploading this package
413
414 local_manifest = self._cache.package_layout(ref).recipe_manifest()
415 if (remote_recipe_manifest != local_manifest and
416 remote_recipe_manifest.time > local_manifest.time):
417 self._print_manifest_information(remote_recipe_manifest, local_manifest, ref, remote)
418 raise ConanException("Remote recipe is newer than local recipe: "
419 "\n Remote date: %s\n Local date: %s" %
420 (remote_recipe_manifest.time, local_manifest.time))
421
422 return remote_recipe_manifest
423
424 def _print_manifest_information(self, remote_recipe_manifest, local_manifest, ref, remote):
425 try:
426 self._user_io.out.info("\n%s" % ("-"*40))
427 self._user_io.out.info("Remote manifest:")
428 self._user_io.out.info(remote_recipe_manifest)
429 self._user_io.out.info("Local manifest:")
430 self._user_io.out.info(local_manifest)
431 difference = remote_recipe_manifest.difference(local_manifest)
432 if "conanfile.py" in difference:
433 contents = load(os.path.join(self._cache.export(ref), "conanfile.py"))
434 endlines = "\\r\\n" if "\r\n" in contents else "\\n"
435 self._user_io.out.info("Local 'conanfile.py' using '%s' line-ends" % endlines)
436 remote_contents = self._remote_manager.get_recipe_path(ref, path="conanfile.py",
437 remote=remote)
438 endlines = "\\r\\n" if "\r\n" in remote_contents else "\\n"
439 self._user_io.out.info("Remote 'conanfile.py' using '%s' line-ends" % endlines)
440 self._user_io.out.info("\n%s" % ("-"*40))
441 except Exception as e:
442 self._user_io.out.info("Error printing information about the diff: %s" % str(e))
443
444
445 def _compress_recipe_files(files, symlinks, src_files, src_symlinks, dest_folder, output):
446 # This is the minimum recipe
447 result = {CONANFILE: files.pop(CONANFILE),
448 CONAN_MANIFEST: files.pop(CONAN_MANIFEST)}
449
450 export_tgz_path = files.pop(EXPORT_TGZ_NAME, None)
451 sources_tgz_path = files.pop(EXPORT_SOURCES_TGZ_NAME, None)
452
453 def add_tgz(tgz_name, tgz_path, tgz_files, tgz_symlinks, msg):
454 if tgz_path:
455 result[tgz_name] = tgz_path
456 elif tgz_files:
457 output.rewrite_line(msg)
458 tgz_path = compress_files(tgz_files, tgz_symlinks, tgz_name, dest_folder, output)
459 result[tgz_name] = tgz_path
460
461 add_tgz(EXPORT_TGZ_NAME, export_tgz_path, files, symlinks, "Compressing recipe...")
462 add_tgz(EXPORT_SOURCES_TGZ_NAME, sources_tgz_path, src_files, src_symlinks,
463 "Compressing recipe sources...")
464
465 return result
466
467
468 def _compress_package_files(files, symlinks, dest_folder, output):
469 tgz_path = files.get(PACKAGE_TGZ_NAME)
470 if not tgz_path:
471 output.writeln("Compressing package...")
472 tgz_files = {f: path for f, path in files.items() if f not in [CONANINFO, CONAN_MANIFEST]}
473 tgz_path = compress_files(tgz_files, symlinks, PACKAGE_TGZ_NAME, dest_folder, output)
474
475 return {PACKAGE_TGZ_NAME: tgz_path,
476 CONANINFO: files[CONANINFO],
477 CONAN_MANIFEST: files[CONAN_MANIFEST]}
478
479
480 def compress_files(files, symlinks, name, dest_dir, output=None):
481 t1 = time.time()
482 # FIXME, better write to disk sequentially and not keep tgz contents in memory
483 tgz_path = os.path.join(dest_dir, name)
484 set_dirty(tgz_path)
485 with open(tgz_path, "wb") as tgz_handle:
486 # tgz_contents = BytesIO()
487 tgz = gzopen_without_timestamps(name, mode="w", fileobj=tgz_handle)
488
489 for filename, dest in sorted(symlinks.items()):
490 info = tarfile.TarInfo(name=filename)
491 info.type = tarfile.SYMTYPE
492 info.linkname = dest
493 tgz.addfile(tarinfo=info)
494
495 mask = ~(stat.S_IWOTH | stat.S_IWGRP)
496 i_file = 0
497 n_files = len(files)
498 last_progress = None
499 if output and n_files > 1 and not output.is_terminal:
500 output.write("[")
501 for filename, abs_path in sorted(files.items()):
502 info = tarfile.TarInfo(name=filename)
503 info.size = os.stat(abs_path).st_size
504 info.mode = os.stat(abs_path).st_mode & mask
505 if os.path.islink(abs_path):
506 info.type = tarfile.SYMTYPE
507 info.linkname = os.readlink(abs_path) # @UndefinedVariable
508 tgz.addfile(tarinfo=info)
509 else:
510 with open(abs_path, 'rb') as file_handler:
511 tgz.addfile(tarinfo=info, fileobj=file_handler)
512 if output and n_files > 1:
513 i_file = i_file + 1
514 units = min(50, int(50 * i_file / n_files))
515 if last_progress != units: # Avoid screen refresh if nothing has changed
516 if output.is_terminal:
517 text = "%s/%s files" % (i_file, n_files)
518 output.rewrite_line("[%s%s] %s" % ('=' * units, ' ' * (50 - units), text))
519 else:
520 output.write('=' * (units - (last_progress or 0)))
521 last_progress = units
522
523 if output and n_files > 1:
524 if output.is_terminal:
525 output.writeln("")
526 else:
527 output.writeln("]")
528 tgz.close()
529
530 clean_dirty(tgz_path)
531 duration = time.time() - t1
532 log_compressed_files(files, duration, tgz_path)
533
534 return tgz_path
535
[end of conans/client/cmd/uploader.py]
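The upload flow sketched in the `CmdUpload` docstring ultimately reduces to a diff between the local file set and the remote snapshot (see `_recipe_files_to_upload` / `_package_files_to_upload` above). The following standalone sketch mirrors that decision logic outside of Conan; the manifest comparison is reduced to a plain equality check and the policy constant is a simplified stand-in, so read it as an illustration of the idea rather than Conan's actual API.

```python
# Minimal sketch of the "what to upload / what to delete" decision used by
# _package_files_to_upload. local_files maps file name -> local path; the remote
# snapshot is the set of file names already present on the server.

NO_OVERWRITE = "no-overwrite"  # simplified stand-in for UPLOAD_POLICY_NO_OVERWRITE


def files_to_upload_and_delete(local_files, local_manifest, remote_snapshot,
                               remote_manifest, policy=None):
    if remote_snapshot:
        if remote_manifest == local_manifest:
            return None, None  # nothing changed on either side, skip the upload
        if policy == NO_OVERWRITE:
            raise RuntimeError("Local package is different from the remote package. "
                               "Forbidden overwrite.")
    # Upload everything we have locally; delete remote files that no longer exist locally
    deleted = set(remote_snapshot).difference(local_files)
    return dict(local_files), deleted


if __name__ == "__main__":
    local = {"conaninfo.txt": "/tmp/conaninfo.txt",
             "conan_package.tgz": "/tmp/conan_package.tgz"}
    remote = {"conaninfo.txt", "old_file.txt"}
    upload, delete = files_to_upload_and_delete(local, "manifest-a", remote, "manifest-b")
    print(sorted(upload), sorted(delete))
    # ['conan_package.tgz', 'conaninfo.txt'] ['old_file.txt']
```

The same shape appears in both the recipe and the package variants; only the policy names and the manifest lookup differ.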
[start of conans/client/conan_command_output.py]
1 import json
2 import os
3 from collections import OrderedDict
4
5 from conans.client.graph.graph import RECIPE_CONSUMER, RECIPE_VIRTUAL
6 from conans.client.graph.graph import RECIPE_EDITABLE
7 from conans.client.installer import build_id
8 from conans.client.printer import Printer
9 from conans.model.ref import ConanFileReference, PackageReference
10 from conans.paths.simple_paths import SimplePaths
11 from conans.search.binary_html_table import html_binary_graph
12 from conans.unicode import get_cwd
13 from conans.util.dates import iso8601_to_str
14 from conans.util.env_reader import get_env
15 from conans.util.files import save
16
17
18 class CommandOutputer(object):
19
20 def __init__(self, user_io, cache):
21 self.user_io = user_io
22 self.cache = cache
23
24 def writeln(self, value):
25 self.user_io.out.writeln(value)
26
27 def print_profile(self, profile, profile_text):
28 Printer(self.user_io.out).print_profile(profile, profile_text)
29
30 def profile_list(self, profiles):
31 for p in sorted(profiles):
32 self.user_io.out.info(p)
33
34 def remote_list(self, remotes, raw):
35 for r in remotes:
36 if raw:
37 self.user_io.out.info("%s %s %s" % (r.name, r.url, r.verify_ssl))
38 else:
39 self.user_io.out.info("%s: %s [Verify SSL: %s]" % (r.name, r.url, r.verify_ssl))
40
41 def remote_ref_list(self, refs):
42 for reference, remote_name in refs.items():
43 ref = ConanFileReference.loads(reference)
44 self.user_io.out.info("%s: %s" % (ref.full_repr(), remote_name))
45
46 def remote_pref_list(self, package_references):
47 for package_reference, remote_name in package_references.items():
48 pref = PackageReference.loads(package_reference)
49 self.user_io.out.info("%s: %s" % (pref.full_repr(), remote_name))
50
51 def build_order(self, info):
52 msg = ", ".join(str(s) for s in info)
53 self.user_io.out.info(msg)
54
55 def json_build_order(self, info, json_output, cwd):
56 data = {"groups": [[str(ref) for ref in group] for group in info]}
57 json_str = json.dumps(data)
58 if json_output is True: # To the output
59 self.user_io.out.write(json_str)
60 else: # Path to a file
61 cwd = os.path.abspath(cwd or get_cwd())
62 if not os.path.isabs(json_output):
63 json_output = os.path.join(cwd, json_output)
64 save(json_output, json_str)
65
66 def json_output(self, info, json_output, cwd):
67 cwd = os.path.abspath(cwd or get_cwd())
68 if not os.path.isabs(json_output):
69 json_output = os.path.join(cwd, json_output)
70
71 def date_handler(obj):
72 if hasattr(obj, 'isoformat'):
73 return obj.isoformat()
74 else:
75 raise TypeError("Unserializable object {} of type {}".format(obj, type(obj)))
76
77 save(json_output, json.dumps(info, default=date_handler))
78 self.user_io.out.writeln("")
79 self.user_io.out.info("JSON file created at '%s'" % json_output)
80
81 def _read_dates(self, deps_graph):
82 ret = {}
83 for node in sorted(deps_graph.nodes):
84 ref = node.ref
85 if node.recipe not in (RECIPE_CONSUMER, RECIPE_VIRTUAL, RECIPE_EDITABLE):
86 manifest = self.cache.package_layout(ref).recipe_manifest()
87 ret[ref] = manifest.time_str
88 return ret
89
90 def nodes_to_build(self, nodes_to_build):
91 self.user_io.out.info(", ".join(str(n) for n in nodes_to_build))
92
93 def _handle_json_output(self, data, json_output, cwd):
94 json_str = json.dumps(data)
95
96 if json_output is True:
97 self.user_io.out.write(json_str)
98 else:
99 if not os.path.isabs(json_output):
100 json_output = os.path.join(cwd, json_output)
101 save(json_output, json.dumps(data))
102 self.user_io.out.writeln("")
103 self.user_io.out.info("JSON file created at '%s'" % json_output)
104
105 def json_nodes_to_build(self, nodes_to_build, json_output, cwd):
106 data = [str(n) for n in nodes_to_build]
107 self._handle_json_output(data, json_output, cwd)
108
109 def _grab_info_data(self, deps_graph, grab_paths):
110 """ Convert 'deps_graph' into consumible information for json and cli """
111 compact_nodes = OrderedDict()
112 for node in sorted(deps_graph.nodes):
113 compact_nodes.setdefault((node.ref, node.package_id), []).append(node)
114
115 ret = []
116 for (ref, package_id), list_nodes in compact_nodes.items():
117 node = list_nodes[0]
118 if node.recipe == RECIPE_VIRTUAL:
119 continue
120
121 item_data = {}
122 conanfile = node.conanfile
123 if node.recipe == RECIPE_CONSUMER:
124 ref = str(conanfile)
125 else:
126 item_data["revision"] = str(ref.revision)
127
128 item_data["reference"] = str(ref)
129 item_data["is_ref"] = isinstance(ref, ConanFileReference)
130 item_data["display_name"] = conanfile.display_name
131 item_data["id"] = package_id
132 item_data["build_id"] = build_id(conanfile)
133
134 # Paths
135 if isinstance(ref, ConanFileReference) and grab_paths:
136 item_data["export_folder"] = self.cache.export(ref)
137 item_data["source_folder"] = self.cache.source(ref, conanfile.short_paths)
138 if isinstance(self.cache, SimplePaths):
139 # @todo: check if this is correct or if it must always be package_id
140 package_id = build_id(conanfile) or package_id
141 pref = PackageReference(ref, package_id)
142 item_data["build_folder"] = self.cache.build(pref, conanfile.short_paths)
143
144 pref = PackageReference(ref, package_id)
145 item_data["package_folder"] = self.cache.package(pref, conanfile.short_paths)
146
147 try:
148 reg_remote = self.cache.registry.refs.get(ref)
149 if reg_remote:
150 item_data["remote"] = {"name": reg_remote.name, "url": reg_remote.url}
151 except:
152 pass
153
154 def _add_if_exists(attrib, as_list=False):
155 value = getattr(conanfile, attrib, None)
156 if value:
157 if not as_list:
158 item_data[attrib] = value
159 else:
160 item_data[attrib] = list(value) if isinstance(value, (list, tuple, set)) \
161 else [value, ]
162
163 _add_if_exists("url")
164 _add_if_exists("homepage")
165 _add_if_exists("license", as_list=True)
166 _add_if_exists("author")
167 _add_if_exists("topics", as_list=True)
168
169 if isinstance(ref, ConanFileReference):
170 item_data["recipe"] = node.recipe
171
172 if get_env("CONAN_CLIENT_REVISIONS_ENABLED", False) and node.ref.revision:
173 item_data["revision"] = node.ref.revision
174
175 item_data["binary"] = node.binary
176 if node.binary_remote:
177 item_data["binary_remote"] = node.binary_remote.name
178
179 node_times = self._read_dates(deps_graph)
180 if node_times and node_times.get(ref, None):
181 item_data["creation_date"] = node_times.get(ref, None)
182
183 if isinstance(ref, ConanFileReference):
184 dependants = [n for node in list_nodes for n in node.inverse_neighbors()]
185 required = [d.conanfile for d in dependants if d.recipe != RECIPE_VIRTUAL]
186 if required:
187 item_data["required_by"] = [d.display_name for d in required]
188
189 depends = node.neighbors()
190 requires = [d for d in depends if not d.build_require]
191 build_requires = [d for d in depends if d.build_require]
192
193 if requires:
194 item_data["requires"] = [repr(d.ref) for d in requires]
195
196 if build_requires:
197 item_data["build_requires"] = [repr(d.ref) for d in build_requires]
198
199 ret.append(item_data)
200
201 return ret
202
203 def info(self, deps_graph, only, package_filter, show_paths):
204 data = self._grab_info_data(deps_graph, grab_paths=show_paths)
205 Printer(self.user_io.out).print_info(data, only, package_filter=package_filter,
206 show_paths=show_paths,
207 show_revisions=self.cache.config.revisions_enabled)
208
209 def info_graph(self, graph_filename, deps_graph, cwd):
210 if graph_filename.endswith(".html"):
211 from conans.client.graph.grapher import ConanHTMLGrapher
212 grapher = ConanHTMLGrapher(deps_graph, self.cache.conan_folder)
213 else:
214 from conans.client.graph.grapher import ConanGrapher
215 grapher = ConanGrapher(deps_graph)
216
217 cwd = os.path.abspath(cwd or get_cwd())
218 if not os.path.isabs(graph_filename):
219 graph_filename = os.path.join(cwd, graph_filename)
220 grapher.graph_file(graph_filename)
221
222 def json_info(self, deps_graph, json_output, cwd, show_paths):
223 data = self._grab_info_data(deps_graph, grab_paths=show_paths)
224 self._handle_json_output(data, json_output, cwd)
225
226 def print_search_references(self, search_info, pattern, raw, all_remotes_search):
227 printer = Printer(self.user_io.out)
228 printer.print_search_recipes(search_info, pattern, raw, all_remotes_search)
229
230 def print_search_packages(self, search_info, reference, packages_query, table,
231 outdated=False):
232 if table:
233 html_binary_graph(search_info, reference, table)
234 else:
235 printer = Printer(self.user_io.out)
236 printer.print_search_packages(search_info, reference, packages_query,
237 outdated=outdated)
238
239 def print_revisions(self, reference, revisions, remote_name=None):
240 remote_test = " at remote '%s'" % remote_name if remote_name else ""
241 self.user_io.out.info("Revisions for '%s'%s:" % (reference, remote_test))
242 lines = ["%s (%s)" % (r["revision"],
243 iso8601_to_str(r["time"]) if r["time"] else "No time")
244 for r in revisions]
245 self.user_io.out.writeln("\n".join(lines))
246
247 def print_dir_list(self, list_files, path, raw):
248 if not raw:
249 self.user_io.out.info("Listing directory '%s':" % path)
250 self.user_io.out.writeln("\n".join([" %s" % i for i in list_files]))
251 else:
252 self.user_io.out.writeln("\n".join(list_files))
253
254 def print_file_contents(self, contents, file_name, raw):
255 if raw or not self.user_io.out.is_terminal:
256 self.user_io.out.writeln(contents)
257 return
258
259 from pygments import highlight
260 from pygments.lexers import PythonLexer, IniLexer, TextLexer
261 from pygments.formatters import TerminalFormatter
262
263 if file_name.endswith(".py"):
264 lexer = PythonLexer()
265 elif file_name.endswith(".txt"):
266 lexer = IniLexer()
267 else:
268 lexer = TextLexer()
269
270 self.user_io.out.write(highlight(contents, lexer, TerminalFormatter()))
271
272 def print_user_list(self, info):
273 for remote in info["remotes"]:
274 authenticated = " [Authenticated]" if remote["authenticated"] else ""
275 anonymous = " (anonymous)" if not remote["user_name"] else ""
276 self.user_io.out.info("Current user of remote '%s' set to: '%s'%s%s" %
277 (remote["name"], str(remote["user_name"]), anonymous,
278 authenticated))
279
280 def print_user_set(self, remote_name, prev_user, user):
281 previous_username = prev_user or "None"
282 previous_anonymous = " (anonymous)" if not prev_user else ""
283 username = user or "None"
284 anonymous = " (anonymous)" if not user else ""
285
286 if prev_user == user:
287 self.user_io.out.info("User of remote '%s' is already '%s'%s" %
288 (remote_name, previous_username, previous_anonymous))
289 else:
290 self.user_io.out.info("Changed user of remote '%s' from '%s'%s to '%s'%s" %
291 (remote_name, previous_username, previous_anonymous, username,
292 anonymous))
293
[end of conans/client/conan_command_output.py]
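`json_output` above leans on a `default` handler so that `datetime` objects survive `json.dumps`. A self-contained version of that pattern, outside of `CommandOutputer` and with made-up data, looks like this:

```python
import json
from datetime import datetime


def date_handler(obj):
    # Same idea as CommandOutputer.json_output: anything exposing isoformat()
    # (datetime, date) becomes an ISO-8601 string, everything else fails loudly.
    if hasattr(obj, "isoformat"):
        return obj.isoformat()
    raise TypeError("Unserializable object {} of type {}".format(obj, type(obj)))


data = {"reference": "pkg/1.0@user/channel",
        "creation_date": datetime(2019, 3, 5, 18, 54, 43)}
print(json.dumps(data, default=date_handler))
# {"reference": "pkg/1.0@user/channel", "creation_date": "2019-03-05T18:54:43"}
```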
[start of conans/server/rest/controller/v1/delete.py]
1 import codecs
2 import json
3 import os
4
5 from bottle import request
6
7 from conans import DEFAULT_REVISION_V1
8 from conans.model.ref import ConanFileReference
9 from conans.server.rest.bottle_routes import BottleRoutes
10 from conans.server.service.v1.service import ConanService
11
12
13 class DeleteController(object):
14 """
15 Serve requests related with Conan
16 """
17 @staticmethod
18 def attach_to(app):
19
20 r = BottleRoutes()
21
22 @app.route(r.recipe, method="DELETE")
23 def remove_recipe(name, version, username, channel, auth_user):
24 """ Remove any existing recipes or its packages created.
25 Will remove all revisions, packages and package revisions (parent folder)"""
26 ref = ConanFileReference(name, version, username, channel)
27 conan_service = ConanService(app.authorizer, app.server_store, auth_user)
28 conan_service.remove_conanfile(ref)
29
30 @app.route('%s/delete' % r.packages, method="POST")
31 def remove_packages(name, version, username, channel, auth_user):
32 ref = ConanFileReference(name, version, username, channel)
33 conan_service = ConanService(app.authorizer, app.server_store, auth_user)
34 reader = codecs.getreader("utf-8")
35 payload = json.load(reader(request.body))
36 conan_service.remove_packages(ref, payload["package_ids"])
37
38 @app.route('%s/remove_files' % r.recipe, method="POST")
39 def remove_recipe_files(name, version, username, channel, auth_user):
40 # Removing files is part of the upload process, where the revision in v1 will
41 # always be DEFAULT_REVISION_V1
42 revision = DEFAULT_REVISION_V1
43 ref = ConanFileReference(name, version, username, channel, revision)
44 conan_service = ConanService(app.authorizer, app.server_store, auth_user)
45 reader = codecs.getreader("utf-8")
46 payload = json.load(reader(request.body))
47 files = [os.path.normpath(filename) for filename in payload["files"]]
48 conan_service.remove_conanfile_files(ref, files)
49
[end of conans/server/rest/controller/v1/delete.py]
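One small idiom in the controller above is worth spelling out: the request body arrives as a byte stream, so it is wrapped with `codecs.getreader("utf-8")` before `json.load`. Below is a minimal standalone Bottle app using the same idiom; the route path, payload key and port are invented for the example and are not Conan's real endpoints.

```python
import codecs
import json

from bottle import Bottle, request

app = Bottle()


@app.route("/packages/<name>/delete", method="POST")  # hypothetical route, not Conan's
def remove_packages(name):
    # request.body is a raw byte stream; wrap it in a UTF-8 reader before json.load,
    # exactly as the v1 delete controller does above.
    reader = codecs.getreader("utf-8")
    payload = json.load(reader(request.body))
    # Bottle serializes a returned dict to a JSON response automatically.
    return {"recipe": name, "removed": payload.get("package_ids", [])}


if __name__ == "__main__":
    app.run(host="localhost", port=9300)  # port picked arbitrarily for the example
```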
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conan-io/conan
|
4609d0895ca4ba07ea5a0c4eb85e11ede47b4cd7
|
Bug in revisions: Create json output doesn't return the revision
From a server with revisions, I install a package that is stored with revisions using the json output:
`conan install <ref> -r remote --json file.json`
Apparently, the "id" field of the json (recipe and packages) doesn't contain the revision, and it should.
|
2019-03-05T18:54:43Z
|
<patch>
diff --git a/conans/client/cmd/export.py b/conans/client/cmd/export.py
--- a/conans/client/cmd/export.py
+++ b/conans/client/cmd/export.py
@@ -93,11 +93,11 @@ def cmd_export(package_layout, conanfile_path, conanfile, keep_source, revisions
digest.save(package_layout.export())
# Compute the revision for the recipe
- _update_revision_in_metadata(package_layout=package_layout,
- revisions_enabled=revisions_enabled,
- output=output,
- path=os.path.dirname(conanfile_path),
- digest=digest)
+ revision = _update_revision_in_metadata(package_layout=package_layout,
+ revisions_enabled=revisions_enabled,
+ output=output,
+ path=os.path.dirname(conanfile_path),
+ digest=digest)
# FIXME: Conan 2.0 Clear the registry entry if the recipe has changed
source_folder = package_layout.source()
@@ -128,6 +128,8 @@ def cmd_export(package_layout, conanfile_path, conanfile, keep_source, revisions
remover = DiskRemover()
remover.remove_packages(package_layout, ids_filter=to_remove)
+ return package_layout.ref.copy_with_rev(revision)
+
def _capture_export_scm_data(conanfile, conanfile_dir, destination_folder, output, scm_src_file):
@@ -251,6 +253,8 @@ def _update_revision_in_metadata(package_layout, revisions_enabled, output, path
with package_layout.update_metadata() as metadata:
metadata.recipe.revision = revision
+ return revision
+
def _recreate_folders(destination_folder, destination_src_folder):
try:
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -363,11 +363,11 @@ def create(self, conanfile_path, name=None, version=None, user=None, channel=Non
if not not_export:
check_casing_conflict(cache=self._cache, ref=ref)
package_layout = self._cache.package_layout(ref, short_paths=conanfile.short_paths)
- cmd_export(package_layout, conanfile_path, conanfile, keep_source,
- self._cache.config.revisions_enabled, self._user_io.out,
- self._hook_manager)
-
- recorder.recipe_exported(ref)
+ new_ref = cmd_export(package_layout, conanfile_path, conanfile, keep_source,
+ self._cache.config.revisions_enabled, self._user_io.out,
+ self._hook_manager)
+ # The new_ref contains the revision
+ recorder.recipe_exported(new_ref)
if build_modes is None: # Not specified, force build the tested library
build_modes = [conanfile.name]
@@ -384,11 +384,11 @@ def create(self, conanfile_path, name=None, version=None, user=None, channel=Non
manifest_folder, manifest_verify, manifest_interactive, keep_build,
test_build_folder, test_folder, conanfile_path)
- return recorder.get_info()
+ return recorder.get_info(self._cache.config.revisions_enabled)
except ConanException as exc:
recorder.error = True
- exc.info = recorder.get_info()
+ exc.info = recorder.get_info(self._cache.config.revisions_enabled)
raise
@api_method
@@ -428,22 +428,24 @@ def export_pkg(self, conanfile_path, name, channel, source_folder=None, build_fo
conanfile = self._loader.load_export(conanfile_path, name, version, user, channel)
ref = ConanFileReference(conanfile.name, conanfile.version, user, channel)
- recorder.recipe_exported(ref)
- recorder.add_recipe_being_developed(ref)
check_casing_conflict(cache=self._cache, ref=ref)
package_layout = self._cache.package_layout(ref, short_paths=conanfile.short_paths)
- cmd_export(package_layout, conanfile_path, conanfile, False,
- self._cache.config.revisions_enabled, self._user_io.out,
- self._hook_manager)
+ new_ref = cmd_export(package_layout, conanfile_path, conanfile, False,
+ self._cache.config.revisions_enabled, self._user_io.out,
+ self._hook_manager)
+ # new_ref has revision
+ recorder.recipe_exported(new_ref)
+ recorder.add_recipe_being_developed(ref)
+
export_pkg(self._cache, self._graph_manager, self._hook_manager, recorder,
self._user_io.out,
ref, source_folder=source_folder, build_folder=build_folder,
package_folder=package_folder, install_folder=install_folder,
graph_info=graph_info, force=force)
- return recorder.get_info()
+ return recorder.get_info(self._cache.config.revisions_enabled)
except ConanException as exc:
recorder.error = True
- exc.info = recorder.get_info()
+ exc.info = recorder.get_info(self._cache.config.revisions_enabled)
raise
@api_method
@@ -531,10 +533,10 @@ def install_reference(self, reference, settings=None, options=None, env=None,
manifest_verify=manifest_verify,
manifest_interactive=manifest_interactive,
generators=generators)
- return recorder.get_info()
+ return recorder.get_info(self._cache.config.revisions_enabled)
except ConanException as exc:
recorder.error = True
- exc.info = recorder.get_info()
+ exc.info = recorder.get_info(self._cache.config.revisions_enabled)
raise
@api_method
@@ -568,10 +570,10 @@ def install(self, path="", name=None, version=None, user=None, channel=None,
manifest_interactive=manifest_interactive,
generators=generators,
no_imports=no_imports)
- return recorder.get_info()
+ return recorder.get_info(self._cache.config.revisions_enabled)
except ConanException as exc:
recorder.error = True
- exc.info = recorder.get_info()
+ exc.info = recorder.get_info(self._cache.config.revisions_enabled)
raise
@api_method
diff --git a/conans/client/recorder/action_recorder.py b/conans/client/recorder/action_recorder.py
--- a/conans/client/recorder/action_recorder.py
+++ b/conans/client/recorder/action_recorder.py
@@ -39,12 +39,12 @@ def _cpp_info_to_dict(cpp_info):
return doc
-class Action(namedtuple("Action", "type, doc, time")):
+class Action(namedtuple("Action", "type, full_ref, doc, time")):
- def __new__(cls, the_type, doc=None):
+ def __new__(cls, the_type, full_ref, doc=None):
doc = doc or {}
the_time = datetime.utcnow()
- return super(cls, Action).__new__(cls, the_type, doc, the_time)
+ return super(cls, Action).__new__(cls, the_type, full_ref, doc, the_time)
class ActionRecorder(object):
@@ -59,7 +59,7 @@ def __init__(self):
# ###### INSTALL METHODS ############
def add_recipe_being_developed(self, ref):
assert(isinstance(ref, ConanFileReference))
- self._inst_recipes_develop.add(ref)
+ self._inst_recipes_develop.add(ref.copy_clear_rev())
def _add_recipe_action(self, ref, action):
assert(isinstance(ref, ConanFileReference))
@@ -70,51 +70,49 @@ def _add_recipe_action(self, ref, action):
def _add_package_action(self, pref, action):
assert(isinstance(pref, PackageReference))
- pref = pref.copy_clear_rev()
+ pref = pref.copy_clear_revs()
if pref not in self._inst_packages_actions:
self._inst_packages_actions[pref] = []
self._inst_packages_actions[pref].append(action)
# RECIPE METHODS
def recipe_exported(self, ref):
- self._add_recipe_action(ref, Action(INSTALL_EXPORTED))
+ self._add_recipe_action(ref, Action(INSTALL_EXPORTED, ref))
def recipe_fetched_from_cache(self, ref):
- self._add_recipe_action(ref, Action(INSTALL_CACHE))
+ self._add_recipe_action(ref, Action(INSTALL_CACHE, ref))
def recipe_downloaded(self, ref, remote_name):
- self._add_recipe_action(ref, Action(INSTALL_DOWNLOADED, {"remote": remote_name}))
+ self._add_recipe_action(ref, Action(INSTALL_DOWNLOADED, ref, {"remote": remote_name}))
def recipe_install_error(self, ref, error_type, description, remote_name):
doc = {"type": error_type, "description": description, "remote": remote_name}
- self._add_recipe_action(ref, Action(INSTALL_ERROR, doc))
+ self._add_recipe_action(ref, Action(INSTALL_ERROR, ref, doc))
# PACKAGE METHODS
def package_exported(self, pref):
- self._add_package_action(pref, Action(INSTALL_EXPORTED))
+ self._add_package_action(pref, Action(INSTALL_EXPORTED, pref))
def package_built(self, pref):
- self._add_package_action(pref, Action(INSTALL_BUILT))
+ self._add_package_action(pref, Action(INSTALL_BUILT, pref))
def package_fetched_from_cache(self, pref):
- self._add_package_action(pref, Action(INSTALL_CACHE))
+ self._add_package_action(pref, Action(INSTALL_CACHE, pref))
def package_downloaded(self, pref, remote_name):
- self._add_package_action(pref, Action(INSTALL_DOWNLOADED, {"remote": remote_name}))
+ self._add_package_action(pref, Action(INSTALL_DOWNLOADED, pref, {"remote": remote_name}))
def package_install_error(self, pref, error_type, description, remote_name=None):
assert(isinstance(pref, PackageReference))
- pref = pref.copy_clear_rev()
if pref not in self._inst_packages_actions:
- self._inst_packages_actions[pref] = []
+ self._inst_packages_actions[pref.copy_clear_revs()] = []
doc = {"type": error_type, "description": description, "remote": remote_name}
- self._inst_packages_actions[pref].append(Action(INSTALL_ERROR, doc))
+ self._inst_packages_actions[pref.copy_clear_revs()].append(Action(INSTALL_ERROR, pref, doc))
def package_cpp_info(self, pref, cpp_info):
assert isinstance(pref, PackageReference)
- pref = pref.copy_clear_rev()
# assert isinstance(cpp_info, CppInfo)
- self._inst_packages_info[pref]['cpp_info'] = _cpp_info_to_dict(cpp_info)
+ self._inst_packages_info[pref.copy_clear_revs()]['cpp_info'] = _cpp_info_to_dict(cpp_info)
@property
def install_errored(self):
@@ -138,10 +136,10 @@ def _get_installed_packages(self, ref):
def in_development_recipe(self, ref):
return ref in self._inst_recipes_develop
- def get_info(self):
- return self.get_install_info()
+ def get_info(self, revisions_enabled):
+ return self.get_install_info(revisions_enabled)
- def get_install_info(self):
+ def get_install_info(self, revisions_enabled):
ret = {"error": self.install_errored or self.error,
"installed": []}
@@ -153,8 +151,12 @@ def get_doc_for_ref(the_ref, the_actions):
remote = None if not remotes else remotes[0]
action_types = [action.type for action in the_actions]
time = the_actions[0].time
+ if revisions_enabled and isinstance(the_ref, ConanFileReference):
+ the_id = the_actions[0].full_ref.full_repr()
+ else:
+ the_id = str(the_ref)
- doc = {"id": str(the_ref),
+ doc = {"id": the_id,
"downloaded": INSTALL_DOWNLOADED in action_types,
"exported": INSTALL_EXPORTED in action_types,
"error": error,
diff --git a/conans/model/ref.py b/conans/model/ref.py
--- a/conans/model/ref.py
+++ b/conans/model/ref.py
@@ -174,3 +174,6 @@ def copy_with_revs(self, revision, p_revision):
def copy_clear_rev(self):
ref = self.ref.copy_clear_rev()
return PackageReference(ref, self.id, revision=None)
+
+ def copy_clear_revs(self):
+ return self.copy_with_revs(None, None)
</patch>
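The heart of the patch is the change to `get_doc_for_ref` in `ActionRecorder`: when revisions are enabled, the recorded `id` comes from the full reference (revision included) instead of `str(ref)`. A stripped-down sketch of that branch, using a toy reference class in place of `ConanFileReference` and an assumed `#<revision>` suffix for the full representation:

```python
# Toy stand-in for ConanFileReference, only to show the id selection of the patch.
class Ref(object):
    def __init__(self, name, version, user, channel, revision=None):
        self.name, self.version, self.user, self.channel = name, version, user, channel
        self.revision = revision

    def __str__(self):
        return "{}/{}@{}/{}".format(self.name, self.version, self.user, self.channel)

    def full_repr(self):
        # Assumed format: the plain reference plus '#<revision>' when a revision exists.
        return str(self) + ("#%s" % self.revision if self.revision else "")


def doc_id(full_ref, revisions_enabled):
    # Mirrors the patched get_doc_for_ref: use the full representation (with the
    # revision) only when revisions are enabled and the action carries a reference.
    if revisions_enabled and isinstance(full_ref, Ref):
        return full_ref.full_repr()
    return str(full_ref)


ref = Ref("hello", "1.0", "user", "channel", revision="4609d08")
print(doc_id(ref, revisions_enabled=False))  # hello/1.0@user/channel
print(doc_id(ref, revisions_enabled=True))   # hello/1.0@user/channel#4609d08
```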
|
[]
|
[]
| ||||
dagster-io__dagster-1202
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ensure the dev_env_setup.sh and installation guides reflect new module structures
</issue>
<code>
[start of README.rst]
1 .. image:: https://user-images.githubusercontent.com/28738937/44878798-b6e17e00-ac5c-11e8-8d25-2e47e5a53418.png
2 :align: center
3
4 .. docs-include
5
6 .. image:: https://badge.fury.io/py/dagster.svg
7 :target: https://badge.fury.io/py/dagster
8 .. image:: https://coveralls.io/repos/github/dagster-io/dagster/badge.svg?branch=master
9 :target: https://coveralls.io/github/dagster-io/dagster?branch=master
10 .. image:: https://circleci.com/gh/dagster-io/dagster.svg?style=svg
11 :target: https://circleci.com/gh/dagster-io/dagster
12 .. image:: https://readthedocs.org/projects/dagster/badge/?version=master
13 :target: https://dagster.readthedocs.io/en/master/
14
15 ============
16 Introduction
17 ============
18
19 Dagster is an opinionated system and programming model for data pipelines. This process goes by
20 many names -- ETL (extract-transform-load), ELT (extract-load-transform), model production, data
21 integration, and so on -- but in essence they all describe the same activity: performing a set of
22 computations structured as a DAG (directed, acyclic graph) that end up producing data assets,
23 whether those assets be tables, files, machine-learning models, etc.
24
25 This repository contains a number of distinct subprojects:
26
27 - **dagster**: The core programming model and abstraction stack; a stateless single-node,
28 single-process execution engine; and a CLI tool for driving that engine.
29 - **dagit**: Dagit is a rich viewer for Dagster assets, including a DAG browser, a type-aware
30 config editor, and a streaming execution interface.
31 - **dagster-ge**: A Dagster integration with Great Expectations. (see
32 https://github.com/great-expectations/great_expectations)
33 - **dagster-pandas**: A Dagster integration with Pandas.
34 - **dagster-sqlalchemy**: A Dagster integration with SQLAlchemy.
35 - **dagstermill**: An experimental prototype for integrating productionized notebooks into
36 dagster pipelines. Built on the papermill library (https://github.com/nteract/papermill).
37 - **airline-demo**: A substantial demo project illustrating how these tools can be used together
38 to manage a realistic data pipeline.
39 - **js_modules/dagit** - a web app that is a UI for dagit
40 - **dagma** - An experimental execution engine for Dagster built on top of AWS Lambda.
41
42 Go to https://dagster.readthedocs.io for complete documentation, including a
43 step-by-step tutorial and notes on the demo project.
44
45 For details on contributing or running the project for development, see
46 https://dagster.readthedocs.io/en/latest/contributing.html.
47
[end of README.rst]
[start of bin/publish.py]
1 """Tools to manage tagging and publishing releases of the Dagster projects.
2
3 For detailed usage instructions, please consult the command line help,
4 available by running `python publish.py --help`.
5 """
6 import contextlib
7 import datetime
8 import distutils
9 import inspect
10 import os
11 import re
12 import subprocess
13
14 from itertools import groupby
15
16 import click
17 import packaging.version
18
19
20 from pypirc import ConfigFileError, RCParser
21
22
23 PYPIRC_EXCEPTION_MESSAGE = '''You must have credentials available to PyPI in the form of a
24 ~/.pypirc file (see: https://docs.python.org/2/distutils/packageindex.html#pypirc):
25
26 [distutils]
27 index-servers =
28 pypi
29
30 [pypi]
31 repository: https://upload.pypi.org/legacy/
32 username: <username>
33 password: <password>
34 '''
35
36
37 def script_relative_path(file_path):
38 scriptdir = inspect.stack()[1][1]
39 return os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(scriptdir)), file_path))
40
41
42 def _which(exe):
43 '''Uses distutils to look for an executable, mimicking unix which'''
44 # https://github.com/PyCQA/pylint/issues/73
45 return distutils.spawn.find_executable(exe) # pylint: disable=no-member
46
47
48 def construct_publish_comands(additional_steps=None, nightly=False):
49 '''Get the shell commands we'll use to actually build and publish a package to PyPI.'''
50 publish_commands = (
51 ['rm -rf dist']
52 + (additional_steps if additional_steps else [])
53 + [
54 'python setup.py sdist bdist_wheel{nightly}'.format(
55 nightly=' --nightly' if nightly else ''
56 ),
57 'twine upload dist/*',
58 ]
59 )
60
61 return publish_commands
62
63
64 '''For dagit, we need to build the JS assets.'''
65 DAGIT_ADDITIONAL_STEPS = [
66 'pushd ../../js_modules/dagit; yarn install && yarn build-for-python; popd'
67 ]
68
69
70 '''The modules managed by this script.'''
71 MODULE_NAMES = [
72 'dagit',
73 'dagster-airflow',
74 'dagster-ge',
75 'dagster-graphql',
76 'dagster-pandas',
77 'dagster-sqlalchemy',
78 'dagster',
79 'dagstermill',
80 ]
81
82
83 def normalize_module_name(name):
84 '''Our package convention is to find the source for module foo_bar in foo-bar/foo_bar.'''
85 return name.replace('-', '_')
86
87
88 def all_equal(iterable):
89 g = groupby(iterable)
90 return next(g, True) and not next(g, False)
91
92
93 def path_to_module(module_name):
94 return script_relative_path('../python_modules/{module_name}'.format(module_name=module_name))
95
96
97 @contextlib.contextmanager
98 def pushd_module(module_name):
99 old_cwd = os.getcwd()
100 new_cwd = path_to_module(module_name)
101 os.chdir(new_cwd)
102 try:
103 yield new_cwd
104 finally:
105 os.chdir(old_cwd)
106
107
108 def publish_module(module, nightly=False, additional_steps=''):
109 with pushd_module(module) as cwd:
110 for command in construct_publish_comands(additional_steps=additional_steps, nightly=nightly):
111 print('About to run command: {}'.format(command))
112 process = subprocess.Popen(
113 command, stderr=subprocess.PIPE, cwd=cwd, shell=True, stdout=subprocess.PIPE
114 )
115 for line in iter(process.stdout.readline, b''):
116 print(line.decode('utf-8'))
117
118
119 def publish_dagster(nightly):
120 publish_module('dagster', nightly)
121
122
123 def publish_dagit(nightly):
124 publish_module('dagit', nightly, additional_steps=DAGIT_ADDITIONAL_STEPS)
125
126
127 def publish_dagstermill(nightly):
128 publish_module('dagstermill', nightly)
129
130
131 def publish_dagster_ge(nightly):
132 publish_module('dagster-ge', nightly)
133
134
135 def publish_dagster_sqlalchemy(nightly):
136 publish_module('dagster-sqlalchemy', nightly)
137
138
139 def publish_dagster_pandas(nightly):
140 publish_module('dagster-pandas', nightly)
141
142
143 def publish_dagster_airflow(nightly):
144 publish_module('dagster-airflow', nightly)
145
146
147 def publish_dagster_graphql(nightly):
148 publish_module('dagster-graphql', nightly)
149
150
151 def publish_all(nightly):
152 publish_dagster(nightly)
153 publish_dagit(nightly)
154 publish_dagstermill(nightly)
155 publish_dagster_airflow(nightly)
156 publish_dagster_ge(nightly)
157 publish_dagster_pandas(nightly)
158 publish_dagster_sqlalchemy(nightly)
159 publish_dagster_graphql(nightly)
160
161
162 def get_most_recent_git_tag():
163 try:
164 git_tag = str(
165 subprocess.check_output(['git', 'describe', '--abbrev=0'], stderr=subprocess.STDOUT)
166 ).strip('\'b\\n')
167 except subprocess.CalledProcessError as exc_info:
168 raise Exception(str(exc_info.output))
169 return git_tag
170
171
172 def get_git_tag():
173 try:
174 git_tag = str(
175 subprocess.check_output(
176 ['git', 'describe', '--exact-match', '--abbrev=0'], stderr=subprocess.STDOUT
177 )
178 ).strip('\'b\\n')
179 except subprocess.CalledProcessError as exc_info:
180 match = re.search(
181 'fatal: no tag exactly matches \'(?P<commit>[a-z0-9]+)\'', str(exc_info.output)
182 )
183 if match:
184 raise Exception(
185 'Bailing: there is no git tag for the current commit, {commit}'.format(
186 commit=match.group('commit')
187 )
188 )
189 raise Exception(str(exc_info.output))
190
191 return git_tag
192
193
194 def set_git_tag(tag, signed=False):
195 try:
196 if signed:
197 subprocess.check_output(['git', 'tag', '-s', '-m', tag, tag], stderr=subprocess.STDOUT)
198 else:
199 subprocess.check_output(['git', 'tag', '-a', '-m', tag, tag], stderr=subprocess.STDOUT)
200 except subprocess.CalledProcessError as exc_info:
201 match = re.search('error: gpg failed to sign the data', str(exc_info.output))
202 if match:
203 raise Exception(
204 'Bailing: cannot sign tag. You may find '
205 'https://stackoverflow.com/q/39494631/324449 helpful. Original error '
206 'output:\n{output}'.format(output=str(exc_info.output))
207 )
208
209 match = re.search(
210 'fatal: tag \'(?P<tag>[\.a-z0-9]+)\' already exists', str(exc_info.output)
211 )
212 if match:
213 raise Exception(
214 'Bailing: cannot release version tag {tag}: already exists'.format(
215 tag=match.group('tag')
216 )
217 )
218 raise Exception(str(exc_info.output))
219
220
221 def format_module_versions(module_versions, nightly=False):
222 return '\n'.join(
223 [
224 ' {module_name}: {version} {nightly}'.format(
225 module_name=module_name,
226 version=module_version['__version__'],
227 nightly=module_version['__nightly__'],
228 )
229 for module_name, module_version in module_versions.items()
230 ]
231 )
232
233
234 def get_module_versions(module_name):
235 with pushd_module(module_name):
236 module_version = {}
237 with open(
238 '{module_name}/version.py'.format(module_name=normalize_module_name(module_name))
239 ) as fp:
240 exec(fp.read(), module_version) # pylint: disable=W0122
241 return module_version
242
243
244 def get_versions(modules=None):
245 if modules is None:
246 modules = MODULE_NAMES
247 module_versions = {}
248 for module_name in modules:
249 module_versions[module_name] = get_module_versions(module_name)
250 return module_versions
251
252
253 def check_versions_equal(nightly=False):
254 module_versions = get_versions()
255 assert all_equal(
256 [module_version['__version__'] for module_version in module_versions.values()]
257 ), 'Module versions must be in lockstep to release. Found:\n{versions}'.format(
258 versions=format_module_versions(module_versions)
259 )
260 if nightly:
261 assert all_equal(
262 [module_version['__nightly__'] for module_version in module_versions.values()]
263 ), 'Module versions must be in lockstep to release. Found:\n{versions}'.format(
264 versions=format_module_versions(module_versions)
265 )
266 return module_versions[MODULE_NAMES[0]]
267
268
269 def check_versions(nightly=False):
270 module_version = check_versions_equal(nightly)
271 if not nightly:
272 git_tag = get_git_tag()
273 assert (
274 module_version['__version__'] == git_tag
275 ), 'Version {version} does not match expected git tag {git_tag}'.format(
276 version=module_version['__version__'], git_tag=git_tag
277 )
278
279 return module_version
280
281
282 def set_version(module_name, version, nightly):
283 with pushd_module(module_name):
284 with open(
285 os.path.abspath(
286 '{module_name}/version.py'.format(module_name=normalize_module_name(module_name))
287 ),
288 'w',
289 ) as fd:
290 fd.write(
291 '__version__ = \'{version}\'\n'
292 '\n'
293 '__nightly__ = \'{nightly}\'\n'.format(version=version, nightly=nightly)
294 )
295
296
297 def get_nightly_version():
298 return datetime.datetime.utcnow().strftime('%Y%m%d')
299
300
301 def increment_nightly_version(module_name, module_version):
302 new_nightly = get_nightly_version()
303 set_version(module_name, module_version['__version__'], new_nightly)
304 return {'__version__': module_version['__version__'], '__nightly__': new_nightly}
305
306
307 def increment_nightly_versions():
308 versions = get_versions()
309 for module_name in MODULE_NAMES:
310 new_version = increment_nightly_version(module_name, versions[module_name])
311 return new_version
312
313
314 def set_new_version(new_version):
315 for module_name in MODULE_NAMES:
316 set_version(module_name, new_version, get_nightly_version())
317
318
319 def commit_new_version(version):
320 try:
321 for module_name in MODULE_NAMES:
322 subprocess.check_output(
323 [
324 'git',
325 'add',
326 os.path.join(
327 path_to_module(module_name),
328 normalize_module_name(module_name),
329 'version.py',
330 ),
331 ],
332 stderr=subprocess.STDOUT,
333 )
334 subprocess.check_output(
335 ['git', 'commit', '--no-verify', '-m', '{version}'.format(version=version)],
336 stderr=subprocess.STDOUT,
337 )
338 except subprocess.CalledProcessError as exc_info:
339 raise Exception(exc_info.output)
340
341
342 def check_new_version(version):
343 parsed_version = packaging.version.parse(version)
344 module_versions = get_versions()
345 if not all_equal(module_versions.values()):
346 print(
347 'Warning! Found repository in a bad state. Existing package versions were not '
348 'equal:\n{versions}'.format(versions=format_module_versions(module_versions))
349 )
350 errors = {}
351 for module_name, module_version in module_versions.items():
352 if packaging.version.parse(module_version['__version__']) >= parsed_version:
353 errors[module_name] = module_version['__version__']
354 if errors:
355 raise Exception(
356 'Bailing: Found modules with existing versions greater than or equal to the new version '
357 '{version}:\n{versions}'.format(
358 version=version, versions=format_module_versions(module_versions)
359 )
360 )
361 return True
362
363
364 def check_git_status():
365 changes = subprocess.check_output(['git', 'status', '--porcelain'])
366 if changes != b'':
367 raise Exception(
368 'Bailing: Cannot publish with changes present in git repo:\n{changes}'.format(
369 changes=changes
370 )
371 )
372
373
374 def git_push(tags=False):
375 github_token = os.getenv('GITHUB_TOKEN')
376 github_username = os.getenv('GITHUB_USERNAME')
377 if github_token and github_username:
378 if tags:
379 subprocess.check_output(
380 [
381 'git',
382 'push',
383 '--tags',
384 '-q',
385 'https://{github_username}:{github_token}@github.com/dagster-io/dagster.git'.format(
386 github_username=github_username, github_token=github_token
387 ),
388 ]
389 )
390 else:
391 subprocess.check_output(
392 [
393 'git',
394 'push',
395 '-q',
396 'https://{github_username}:{github_token}@github.com/dagster-io/dagster.git'.format(
397 github_username=github_username, github_token=github_token
398 ),
399 ]
400 )
401 else:
402 if tags:
403 subprocess.check_output(['git', 'push', '--tags'])
404 else:
405 subprocess.check_output(['git', 'push'])
406
407
408 CLI_HELP = """Tools to help tag and publish releases of the Dagster projects.
409
410 By convention, these projects live in a single monorepo, and the submodules are versioned in
411 lockstep to avoid confusion, i.e., if dagster is at 0.3.0, dagit is also expected to be at
412 0.3.0.
413
414 Versions are tracked in the version.py files present in each submodule and in the git tags
415 applied to the repository as a whole. These tools help ensure that these versions do not drift.
416 """
417
418
419 @click.group(help=CLI_HELP)
420 def cli():
421 pass
422
423
424 @cli.command()
425 @click.option('--nightly', is_flag=True)
426 def publish(nightly):
427 """Publishes (uploads) all submodules to PyPI.
428
429 Appropriate credentials must be available to twine, e.g. in a ~/.pypirc file, and users must
430 be permissioned as maintainers on the PyPI projects. Publishing will fail if versions (git
431 tags and Python versions) are not in lockstep, if the current commit is not tagged, or if
432 there are untracked changes.
433 """
434
435 try:
436 RCParser.from_file()
437 except ConfigFileError:
438 raise ConfigFileError(PYPIRC_EXCEPTION_MESSAGE)
439
440 assert '\nwheel' in subprocess.check_output(['pip', 'list']).decode('utf-8'), (
441 'You must have wheel installed in order to build packages for release -- run '
442 '`pip install wheel`.'
443 )
444
445 assert _which('twine'), (
446         'You must have twine installed in order to upload packages to PyPI -- run '
447 '`pip install twine`.'
448 )
449
450 assert _which('yarn'), (
451 'You must have yarn installed in order to build dagit for release -- see '
452 'https://yarnpkg.com/lang/en/docs/install/'
453 )
454
455 print('Checking that module versions are in lockstep')
456 check_versions(nightly=nightly)
457 if not nightly:
458 print('... and match git tag on most recent commit...')
459 check_git_status()
460
461 print('Publishing packages to PyPI...')
462
463 if nightly:
464 version = increment_nightly_versions()
465 commit_new_version(
466 'nightly: {nightly}'.format(
467 nightly=version['__nightly__']
468 )
469 )
470 set_git_tag(
471 '{nightly}'.format(
472 nightly=version['__nightly__']
473 )
474 )
475 git_push()
476 git_push(tags=True)
477 publish_all(nightly)
478
479
480 @cli.command()
481 @click.argument('version')
482 def release(version):
483 """Tags all submodules for a new release.
484
485 Ensures that git tags, as well as the version.py files in each submodule, agree and that the
486 new version is strictly greater than the current version. Will fail if the new version
487 is not an increment (following PEP 440). Creates a new git tag and commit.
488 """
489 check_new_version(version)
490 set_new_version(version)
491 commit_new_version(version)
492 set_git_tag(version)
493
494
495 @cli.command()
496 def version():
497 """Gets the most recent tagged version."""
498 print(get_most_recent_git_tag())
499
500
501 cli = click.CommandCollection(sources=[cli], help=CLI_HELP)
502
503 if __name__ == '__main__':
504 cli()
505
[end of bin/publish.py]
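
The version gating in `check_new_version` above comes down to `packaging.version` comparisons. Below is a minimal standalone sketch of that check; the module names and version strings are made up for illustration.

```python
# Minimal sketch of the comparison logic behind check_new_version in bin/publish.py.
# Module names and version strings below are illustrative, not taken from the repo.
import packaging.version

existing_versions = {"dagster": "0.3.0", "dagit": "0.3.0"}
new_version = "0.3.1"

parsed_new = packaging.version.parse(new_version)
already_at_or_above = {
    name: current
    for name, current in existing_versions.items()
    if packaging.version.parse(current) >= parsed_new
}

if already_at_or_above:
    raise Exception(
        "Bailing: found versions >= {}: {}".format(new_version, already_at_or_above)
    )
print("{} is a strict increment".format(new_version))
```

Run through the CLI defined in this file, something like `python bin/publish.py release 0.3.1` performs the same check before committing and tagging (the version number here is only an example).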
[start of python_modules/dagster/dagster/cli/pipeline.py]
1 from __future__ import print_function
2 import logging
3 import re
4 import textwrap
5 import yaml
6
7 import click
8
9 from dagster import PipelineDefinition, check
10
11 from dagster.core.definitions import Solid
12 from dagster.core.execution import execute_pipeline
13 from dagster.core.execution_plan.create import solids_in_topological_order
14 from dagster.graphviz import build_graphviz_graph
15 from dagster.utils import load_yaml_from_glob_list
16 from dagster.utils.indenting_printer import IndentingPrinter
17
18 from .config_scaffolder import scaffold_pipeline_config
19
20 from .dynamic_loader import (
21 PipelineTargetInfo,
22 load_pipeline_from_target_info,
23 load_repository_from_target_info,
24 pipeline_target_command,
25 repository_target_argument,
26 load_target_info_from_cli_args,
27 )
28
29
30 def create_pipeline_cli_group():
31 group = click.Group(name="pipeline")
32 group.add_command(pipeline_list_command)
33 group.add_command(pipeline_print_command)
34 group.add_command(pipeline_graphviz_command)
35 group.add_command(pipeline_execute_command)
36 group.add_command(pipeline_scaffold_command)
37 return group
38
39
40 REPO_TARGET_WARNING = (
41 'Can only use ONE of --repository-yaml/-y, --python-file/-f, --module-name/-m.'
42 )
43
44
45 @click.command(
46 name='list',
47 help="List the pipelines in a repository. {warning}".format(warning=REPO_TARGET_WARNING),
48 )
49 @repository_target_argument
50 def pipeline_list_command(**kwargs):
51 return execute_list_command(kwargs, click.echo)
52
53
54 def execute_list_command(cli_args, print_fn):
55 repository_target_info = load_target_info_from_cli_args(cli_args)
56 repository = load_repository_from_target_info(repository_target_info)
57
58 title = 'Repository {name}'.format(name=repository.name)
59 print_fn(title)
60 print_fn('*' * len(title))
61 first = True
62 for pipeline in repository.get_all_pipelines():
63 pipeline_title = 'Pipeline: {name}'.format(name=pipeline.name)
64
65 if not first:
66 print_fn('*' * len(pipeline_title))
67 first = False
68
69 print_fn(pipeline_title)
70 if pipeline.description:
71 print_fn('Description:')
72 print_fn(format_description(pipeline.description, indent=' ' * 4))
73 print_fn('Solids: (Execution Order)')
74 for solid in solids_in_topological_order(pipeline):
75 print_fn(' ' + solid.name)
76
77
78 def format_description(desc, indent):
79 check.str_param(desc, 'desc')
80 check.str_param(indent, 'indent')
81 desc = re.sub(r'\s+', ' ', desc)
82 dedented = textwrap.dedent(desc)
83 wrapper = textwrap.TextWrapper(initial_indent='', subsequent_indent=indent)
84 filled = wrapper.fill(dedented)
85 return filled
86
87
88 def create_pipeline_from_cli_args(kwargs):
89 check.dict_param(kwargs, 'kwargs')
90
91 pipeline_names = list(kwargs['pipeline_name'])
92
93 if not pipeline_names:
94 pipeline_name = None
95 elif len(pipeline_names) == 1:
96 pipeline_name = pipeline_names[0]
97 else:
98 check.failed(
99 'Can only handle zero or one pipeline args. Got {pipeline_names}'.format(
100 pipeline_names=repr(pipeline_names)
101 )
102 )
103
104 if (
105 kwargs['pipeline_name']
106 and kwargs['repository_yaml'] is None
107 and kwargs['module_name'] is None
108 and kwargs['python_file'] is None
109 ):
110 repository_yaml = 'repository.yml'
111 else:
112 repository_yaml = kwargs['repository_yaml']
113
114 return load_pipeline_from_target_info(
115 PipelineTargetInfo(
116 repository_yaml=repository_yaml,
117 pipeline_name=pipeline_name,
118 python_file=kwargs['python_file'],
119 module_name=kwargs['module_name'],
120 fn_name=kwargs['fn_name'],
121 )
122 )
123
124
125 def get_pipeline_instructions(command_name):
126 return (
127         'This command targets a pipeline. The pipeline can be specified in a number of ways:'
128         '\n\n1. dagster {command_name} <<pipeline_name>> (works if repository.yml exists)'
129 '\n\n2. dagster {command_name} <<pipeline_name>> -y path/to/repository.yml'
130 '\n\n3. dagster {command_name} -f /path/to/file.py -n define_some_pipeline'
131 '\n\n4. dagster {command_name} -m a_module.submodule -n define_some_pipeline'
132 '\n\n5. dagster {command_name} -f /path/to/file.py -n define_some_repo -p pipeline_name'
133 '\n\n6. dagster {command_name} -m a_module.submodule -n define_some_repo -p pipeline_name'
134 ).format(command_name=command_name)
135
136
137 @click.command(
138 name='print',
139 help='Print a pipeline.\n\n{instructions}'.format(
140 instructions=get_pipeline_instructions('print')
141 ),
142 )
143 @click.option('--verbose', is_flag=True)
144 @pipeline_target_command
145 def pipeline_print_command(verbose, **cli_args):
146 return execute_print_command(verbose, cli_args, click.echo)
147
148
149 def execute_print_command(verbose, cli_args, print_fn):
150 pipeline = create_pipeline_from_cli_args(cli_args)
151
152 if verbose:
153 print_pipeline(pipeline, full=True, print_fn=print_fn)
154 else:
155 print_solids(pipeline, print_fn=print_fn)
156
157
158 def print_solids(pipeline, print_fn):
159 check.inst_param(pipeline, 'pipeline', PipelineDefinition)
160 check.callable_param(print_fn, 'print_fn')
161
162 printer = IndentingPrinter(indent_level=2, printer=print_fn)
163 printer.line('Pipeline: {name}'.format(name=pipeline.name))
164
165 printer.line('Solids:')
166 for solid in pipeline.solids:
167 with printer.with_indent():
168 printer.line('Solid: {name}'.format(name=solid.name))
169
170
171 def print_pipeline(pipeline, full, print_fn):
172 check.inst_param(pipeline, 'pipeline', PipelineDefinition)
173 check.bool_param(full, 'full')
174 check.callable_param(print_fn, 'print_fn')
175
176 printer = IndentingPrinter(indent_level=2, printer=print_fn)
177 printer.line('Pipeline: {name}'.format(name=pipeline.name))
178 print_description(printer, pipeline.description)
179
180 if not full:
181 return
182
183 with printer.with_indent():
184 printer.line('Context Definitions:')
185
186 with printer.with_indent():
187
188 for context_name, context_definition in pipeline.context_definitions.items():
189 print_context_definition(printer, context_name, context_definition)
190
191 printer.line('Solids:')
192 for solid in pipeline.solids:
193 with printer.with_indent():
194 print_solid(printer, solid)
195
196
197 def print_description(printer, desc):
198 with printer.with_indent():
199 if desc:
200 printer.line('Description:')
201 with printer.with_indent():
202 printer.line(format_description(desc, printer.current_indent_str))
203
204
205 def print_context_definition(printer, context_name, context_definition):
206 printer.line('Name: {context_name}'.format(context_name=context_name))
207
208 print_description(printer, context_definition.description)
209
210 printer.line(
211 'Type: {runtime_type}'.format(runtime_type=context_definition.config_field.config_type.name)
212 )
213
214
215 def print_solid(printer, solid):
216 check.inst_param(solid, 'solid', Solid)
217 printer.line('Solid: {name}'.format(name=solid.name))
218
219 with printer.with_indent():
220 print_inputs(printer, solid)
221
222 printer.line('Outputs:')
223
224 for output_def in solid.definition.output_defs:
225 printer.line(output_def.name)
226
227
228 def print_inputs(printer, solid):
229 printer.line('Inputs:')
230 for input_def in solid.definition.input_defs:
231 with printer.with_indent():
232 printer.line('Input: {name}'.format(name=input_def.name))
233
234
235 def format_argument_dict(arg_def_dict):
236 return ', '.join(
237 [
238 '{name}: {type}'.format(name=name, type=arg_def.runtime_type.name)
239 for name, arg_def in arg_def_dict.items()
240 ]
241 )
242
243
244 @click.command(
245 name='graphviz',
246 help=(
247 'Visualize a pipeline using graphviz. Must be installed on your system '
248 '(e.g. homebrew install graphviz on mac). \n\n{instructions}'.format(
249 instructions=get_pipeline_instructions('graphviz')
250 )
251 ),
252 )
253 @click.option('--only-solids', is_flag=True)
254 @pipeline_target_command
255 def pipeline_graphviz_command(only_solids, **kwargs):
256 pipeline = create_pipeline_from_cli_args(kwargs)
257 build_graphviz_graph(pipeline, only_solids).view(cleanup=True)
258
259
260 LOGGING_DICT = {
261 'DEBUG': logging.DEBUG,
262 'INFO': logging.INFO,
263 'WARN': logging.WARN,
264 'ERROR': logging.ERROR,
265 'CRITICAL': logging.CRITICAL,
266 }
267
268
269 @click.command(
270 name='execute',
271 help='Execute a pipeline.\n\n{instructions}'.format(
272 instructions=get_pipeline_instructions('execute')
273 ),
274 )
275 @pipeline_target_command
276 @click.option(
277 '-e',
278 '--env',
279 type=click.STRING,
280 multiple=True,
281 help=(
282 'Specify one or more environment files. These can also be file patterns. '
283 'If more than one environment file is captured then those files are merged. '
284         'Files listed first take precedence. They will smash the values of subsequent '
285 'files at the key-level granularity. If the file is a pattern then you must '
286 'enclose it in double quotes'
287 '\n\nExample: '
288 'dagster pipeline execute pandas_hello_world -e "pandas_hello_world/*.yml"'
289         '\n\nYou can also specify multiple files:'
290 '\n\nExample: '
291 'dagster pipeline execute pandas_hello_world -e pandas_hello_world/solids.yml '
292 '-e pandas_hello_world/env.yml'
293 ),
294 )
295 def pipeline_execute_command(env, **kwargs):
296 check.invariant(isinstance(env, tuple))
297 env = list(env)
298 execute_execute_command(env, kwargs, click.echo)
299
300
301 def execute_execute_command(env, cli_args, print_fn):
302 pipeline = create_pipeline_from_cli_args(cli_args)
303 return do_execute_command(pipeline, env, print_fn)
304
305
306 def do_execute_command(pipeline, env_file_list, printer):
307 check.inst_param(pipeline, 'pipeline', PipelineDefinition)
308 env_file_list = check.opt_list_param(env_file_list, 'env_file_list', of_type=str)
309 check.callable_param(printer, 'printer')
310
311 environment_dict = load_yaml_from_glob_list(env_file_list) if env_file_list else {}
312
313 return execute_pipeline(pipeline, environment_dict=environment_dict)
314
315
316 @click.command(
317 name='scaffold_config',
318 help='Scaffold the config for a pipeline.\n\n{instructions}'.format(
319 instructions=get_pipeline_instructions('scaffold_config')
320 ),
321 )
322 @pipeline_target_command
323 @click.option('-p', '--print-only-required', default=False, is_flag=True)
324 def pipeline_scaffold_command(**kwargs):
325 execute_scaffold_command(kwargs, click.echo)
326
327
328 def execute_scaffold_command(cli_args, print_fn):
329 pipeline = create_pipeline_from_cli_args(cli_args)
330 skip_optional = cli_args['print_only_required']
331 do_scaffold_command(pipeline, print_fn, skip_optional)
332
333
334 def do_scaffold_command(pipeline, printer, skip_optional):
335 check.inst_param(pipeline, 'pipeline', PipelineDefinition)
336 check.callable_param(printer, 'printer')
337 check.bool_param(skip_optional, 'skip_optional')
338
339 config_dict = scaffold_pipeline_config(pipeline, skip_optional=skip_optional)
340 yaml_string = yaml.dump(config_dict, default_flow_style=False)
341 printer(yaml_string)
342
[end of python_modules/dagster/dagster/cli/pipeline.py]
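
As a quick smoke test of the CLI group defined above, click's test runner can render each sub-command's help text without needing a repository.yml or any pipeline on disk. This sketch assumes the dagster checkout shown here is importable.

```python
# Sketch: drive the pipeline CLI group above through click's test runner.
# Only --help output is requested, so no repository or pipeline is touched.
from click.testing import CliRunner

from dagster.cli.pipeline import create_pipeline_cli_group

runner = CliRunner()
group = create_pipeline_cli_group()

for name in ("list", "print", "graphviz", "execute", "scaffold_config"):
    result = runner.invoke(group, [name, "--help"])
    print(result.output.splitlines()[0])  # e.g. "Usage: pipeline list [OPTIONS]"
```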
[start of python_modules/dagster/dagster/core/log.py]
1 import datetime
2 import itertools
3 import logging
4 import uuid
5
6 from dagster import check, seven
7
8 DAGSTER_META_KEY = 'dagster_meta'
9 DAGSTER_DEFAULT_LOGGER = 'dagster'
10
11
12 def _dump_value(value):
13 # dump namedtuples as objects instead of arrays
14 if isinstance(value, tuple) and hasattr(value, '_asdict'):
15 return seven.json.dumps(value._asdict())
16
17 return seven.json.dumps(value)
18
19
20 def _kv_message(all_items, multiline=False):
21 sep = '\n' if multiline else ' '
22 format_str = '{key:>20} = {value}' if multiline else '{key}={value}'
23 return sep + sep.join(
24 [format_str.format(key=key, value=_dump_value(value)) for key, value in all_items]
25 )
26
27
28 class DagsterLog:
29 def __init__(self, run_id, tags, loggers):
30 self.run_id = check.str_param(run_id, 'run_id')
31 self.tags = check.dict_param(tags, 'tags')
32 self.loggers = check.list_param(loggers, 'loggers', of_type=logging.Logger)
33
34 def _log(self, method, orig_message, message_props):
35 check.str_param(method, 'method')
36 check.str_param(orig_message, 'orig_message')
37 check.dict_param(message_props, 'message_props')
38
39 check.invariant(
40 'extra' not in message_props, 'do not allow until explicit support is handled'
41 )
42 check.invariant(
43 'exc_info' not in message_props, 'do not allow until explicit support is handled'
44 )
45
46 check.invariant('orig_message' not in message_props, 'orig_message reserved value')
47 check.invariant('message' not in message_props, 'message reserved value')
48 check.invariant('log_message_id' not in message_props, 'log_message_id reserved value')
49 check.invariant('log_timestamp' not in message_props, 'log_timestamp reserved value')
50
51 log_message_id = str(uuid.uuid4())
52
53 log_timestamp = datetime.datetime.utcnow().isoformat()
54
55 synth_props = {
56 'orig_message': orig_message,
57 'log_message_id': log_message_id,
58 'log_timestamp': log_timestamp,
59 'run_id': self.run_id,
60 }
61
62 # We first generate all props for the purpose of producing the semi-structured
63         # log message via _kv_message
64 all_props = dict(
65 itertools.chain(synth_props.items(), self.tags.items(), message_props.items())
66 )
67
68 msg_with_structured_props = _kv_message(all_props.items())
69 msg_with_multiline_structured_props = _kv_message(all_props.items(), multiline=True)
70
71 # So here we use the arbitrary key DAGSTER_META_KEY to store a dictionary of
72 # all the meta information that dagster injects into log message.
73 # The python logging module, in its infinite wisdom, actually takes all the
74 # keys in extra and unconditionally smashes them into the internal dictionary
75 # of the logging.LogRecord class. We used a reserved key here to avoid naming
76 # collisions with internal variables of the LogRecord class.
77 # See __init__.py:363 (makeLogRecord) in the python 3.6 logging module source
78 # for the gory details.
79 # getattr(self.logger, method)(
80 # message_with_structured_props, extra={DAGSTER_META_KEY: all_props}
81 # )
82
83 for logger in self.loggers:
84 logger_method = check.is_callable(getattr(logger, method))
85 if logger.name == DAGSTER_DEFAULT_LOGGER:
86 logger_method(
87 msg_with_multiline_structured_props, extra={DAGSTER_META_KEY: all_props}
88 )
89 else:
90 logger_method(msg_with_structured_props, extra={DAGSTER_META_KEY: all_props})
91
92 def debug(self, msg, **kwargs):
93 '''
94         Debug level logging directive. Ends up invoking loggers at the DEBUG level.
95
96 The message will be automatically adorned with context information about the name
97         of the pipeline, the name of the solid, and so forth. The user can also add
98 context values during execution using the value() method of ExecutionContext.
99 Therefore it is generally unnecessary to include this type of information
100 (solid name, pipeline name, etc) in the log message unless it is critical
101 for the readability/fluency of the log message text itself.
102
103         You can optionally add additional context key-value pairs to an individual log
104         message using the keyword args to this method
105
106 Args:
107 msg (str): The core string
108 **kwargs (Dict[str, Any]): Additional context values for only this log message.
109 '''
110 return self._log('debug', msg, kwargs)
111
112 def info(self, msg, **kwargs):
113 '''Log at INFO level
114
115 See debug()'''
116 return self._log('info', msg, kwargs)
117
118 def warning(self, msg, **kwargs):
119 '''Log at WARNING level
120
121 See debug()'''
122 return self._log('warning', msg, kwargs)
123
124 def error(self, msg, **kwargs):
125 '''Log at ERROR level
126
127 See debug()'''
128 return self._log('error', msg, kwargs)
129
130 def critical(self, msg, **kwargs):
131 '''Log at CRITICAL level
132
133 See debug()'''
134 return self._log('critical', msg, kwargs)
135
[end of python_modules/dagster/dagster/core/log.py]
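
The long comment in `_log` above hinges on how the standard library handles `extra`: every key in that dict is copied onto the `LogRecord` itself. A tiny standalone illustration of that behaviour follows; the logger name and meta values are invented.

```python
# Standalone illustration of the `extra` behaviour described in DagsterLog._log:
# the dict passed under DAGSTER_META_KEY ends up as a plain attribute on the record.
import logging

DAGSTER_META_KEY = "dagster_meta"


class ShowMeta(logging.Filter):
    def filter(self, record):
        print("meta on LogRecord:", getattr(record, DAGSTER_META_KEY, None))
        return True


logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(ShowMeta())
logger.addHandler(handler)

logger.info("orig_message", extra={DAGSTER_META_KEY: {"run_id": "not-a-real-run-id"}})
```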
[start of python_modules/dagster/dagster/seven/__init__.py]
1 '''Internal py2/3 compatibility library. A little more than six.'''
2
3 import sys
4
5 from .json import dump, dumps, JSONDecodeError
6 from .temp_dir import get_system_temp_directory
7
8 try:
9 FileNotFoundError = FileNotFoundError # pylint:disable=redefined-builtin
10 except NameError:
11 FileNotFoundError = IOError
12
13 if sys.version_info.major >= 3:
14 from io import StringIO # pylint:disable=import-error
15 else:
16 from StringIO import StringIO # pylint:disable=import-error
17
18
19 # TODO implement a generic import by name -- see https://stackoverflow.com/questions/301134/how-to-import-a-module-given-its-name
20
21 # https://stackoverflow.com/a/67692/324449
22 def import_module_from_path(module_name, path_to_file):
23 version = sys.version_info
24 if version.major >= 3 and version.minor >= 5:
25 import importlib.util
26
27 spec = importlib.util.spec_from_file_location(module_name, path_to_file)
28 module = importlib.util.module_from_spec(spec)
29 sys.modules[spec.name] = module
30 spec.loader.exec_module(module)
31 elif version.major >= 3 and version.minor >= 3:
32 from importlib.machinery import SourceFileLoader
33
34 # pylint:disable=deprecated-method, no-value-for-parameter
35 module = SourceFileLoader(module_name, path_to_file).load_module()
36 else:
37 import imp
38
39 module = imp.load_source(module_name, path_to_file)
40
41 return module
42
43
44 # https://stackoverflow.com/a/437591/324449
45 def reload_module(module):
46 version = sys.version_info
47 if version.major >= 3 and version.minor >= 4:
48 from importlib import reload as reload_
49
50 return reload_(module)
51 elif version.major >= 3:
52 from imp import reload as reload_
53
54 return reload_(module)
55
56 return reload(module) # pylint: disable=undefined-variable
57
58
59 def is_ascii(str_):
60 if sys.version_info.major < 3:
61 try:
62 str_.decode('ascii')
63 return True
64 except UnicodeEncodeError:
65 return False
66 elif sys.version_info.major == 3 and sys.version_info.minor < 7:
67 try:
68 str_.encode('ascii')
69 return True
70 except UnicodeEncodeError:
71 return False
72 else:
73 return str_.isascii()
74
[end of python_modules/dagster/dagster/seven/__init__.py]
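
A small usage sketch for `import_module_from_path`, writing a throwaway module to a temporary directory first. The module name, path and contents are invented, and the snippet assumes Python 3 and an importable dagster checkout.

```python
# Sketch: importing a module by file path via dagster.seven.import_module_from_path.
# Requires Python 3 (for tempfile.TemporaryDirectory).
import os
import tempfile

from dagster.seven import import_module_from_path

with tempfile.TemporaryDirectory() as tmp_dir:
    module_path = os.path.join(tmp_dir, "throwaway_module.py")
    with open(module_path, "w") as f:
        f.write("ANSWER = 42\n")

    module = import_module_from_path("throwaway_module", module_path)
    print(module.ANSWER)  # 42
```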
[start of python_modules/dagstermill/dagstermill/cli.py]
1 from collections import namedtuple
2 import copy
3 import os
4 import re
5
6 import click
7 from papermill.iorw import load_notebook_node, write_ipynb
8 import nbformat
9
10 from dagster import check
11 from dagster.cli.dynamic_loader import (
12 repository_target_argument,
13 load_target_info_from_cli_args,
14 RepositoryTargetInfo,
15 entrypoint_from_module_target,
16 load_yaml_from_path,
17 InvalidRepositoryLoadingComboError,
18 )
19 from dagster.utils import safe_isfile
20
21
22 def get_acceptable_entrypoint(repo_target_info):
23 check.inst_param(repo_target_info, 'repo_target_info', RepositoryTargetInfo)
24 if repo_target_info.repository_yaml:
25 check.str_param(repo_target_info.repository_yaml, 'repository_yaml')
26 config = load_yaml_from_path(repo_target_info.repository_yaml)
27 repository_config = check.dict_elem(config, 'repository')
28 module_name = check.opt_str_elem(repository_config, 'module')
29 fn_name = check.str_elem(repository_config, 'fn')
30 if module_name:
31 return entrypoint_from_module_target(module_name, fn_name)
32 return None
33 elif repo_target_info.module_name and repo_target_info.fn_name:
34 return entrypoint_from_module_target(repo_target_info.module_name, repo_target_info.fn_name)
35 elif repo_target_info.python_file and repo_target_info.fn_name:
36 return None
37 else:
38 raise InvalidRepositoryLoadingComboError()
39
40
41 def get_notebook_scaffolding(register_repo_info):
42 if register_repo_info is None: # do not register repo
43 first_cell_source = '"import dagstermill as dm"'
44 else:
45 check.str_param(register_repo_info.import_statement, 'register_repo_info.import_statement')
46 check.str_param(
47 register_repo_info.declaration_statement, 'register_repo_info.declaration_statement'
48 )
49 first_cell_source = '''"import dagstermill as dm\\n",
50 "{import_statement}\\n",
51 "{declaration_statement}"'''.format(
52 import_statement=register_repo_info.import_statement,
53 declaration_statement=register_repo_info.declaration_statement,
54 )
55
56 starting_notebook_init = '''
57 {{
58 "cells": [
59 {{
60 "cell_type": "code",
61 "execution_count": null,
62 "metadata": {{}},
63 "outputs": [],
64 "source": [
65 {first_cell_source}
66 ]
67 }},
68 {{
69 "cell_type": "code",
70 "execution_count": null,
71 "metadata": {{
72 "tags": [
73 "parameters"
74 ]
75 }},
76 "outputs": [],
77 "source": [
78 "context = dm.get_context()"
79 ]
80 }}
81 ],
82 "metadata": {{
83 "celltoolbar": "Tags"
84 }},
85 "nbformat": 4,
86 "nbformat_minor": 2
87 }}'''
88 return starting_notebook_init.format(first_cell_source=first_cell_source)
89
90
91 @click.command(name='register-notebook', help=('Registers repository in existing notebook'))
92 @repository_target_argument
93 @click.option('--notebook', '-note', type=click.STRING, help='Path to notebook')
94 def retroactively_scaffold_notebook(notebook, **kwargs):
95 execute_retroactive_scaffold(notebook, **kwargs)
96
97
98 def execute_retroactive_scaffold(notebook_path, **kwargs):
99 nb = load_notebook_node(notebook_path)
100 new_nb = copy.deepcopy(nb)
101 register_repo_info = get_register_repo_info(kwargs, allow_none=False)
102
103 cell_source = 'import dagstermill as dm\n{import_statement}\n{declaration_statement}'.format(
104 import_statement=register_repo_info.import_statement,
105 declaration_statement=register_repo_info.declaration_statement,
106 )
107
108 newcell = nbformat.v4.new_code_cell(source=cell_source)
109 newcell.metadata['tags'] = ['injected-repo-registration']
110 new_nb.cells = [newcell] + nb.cells
111 write_ipynb(new_nb, notebook_path)
112
113
114 @click.command(name='create-notebook', help=('Creates new dagstermill notebook.'))
115 @repository_target_argument
116 @click.option('--notebook', '-note', type=click.STRING, help="Name of notebook")
117 @click.option(
118 '--force-overwrite',
119 is_flag=True,
120 help="Will force overwrite any existing notebook or file with the same name.",
121 )
122 def create_notebook(notebook, force_overwrite, **kwargs):
123 execute_create_notebook(notebook, force_overwrite, **kwargs)
124
125
126 def get_register_repo_info(cli_args, allow_none=True):
127 def all_none(kwargs):
128 for value in kwargs.values():
129 if value is not None:
130 return False
131 return True
132
133 scaffolding_with_repo = True
134 if all_none(cli_args):
135 if os.path.exists(os.path.join(os.getcwd(), 'repository.yml')):
136 cli_args['repository_yaml'] = 'repository.yml'
137 elif allow_none: # register_repo_info can remain None
138 scaffolding_with_repo = False
139
140 register_repo_info = None
141 if scaffolding_with_repo:
142 repository_target_info = load_target_info_from_cli_args(cli_args)
143 entrypoint = get_acceptable_entrypoint(repository_target_info)
144
145 if entrypoint:
146 module = entrypoint.module_name
147 fn_name = entrypoint.fn_name
148 RegisterRepoInfo = namedtuple(
149 'RegisterRepoInfo', 'import_statement declaration_statement'
150 )
151 register_repo_info = RegisterRepoInfo(
152 "from {module} import {fn_name}".format(module=module, fn_name=fn_name),
153 "dm.register_repository({fn_name}())".format(fn_name=fn_name),
154 )
155 else:
156 raise click.UsageError(
157 "Cannot instantiate notebook with repository definition given by a function from a file"
158 )
159 return register_repo_info
160
161
162 def execute_create_notebook(notebook, force_overwrite, **kwargs):
163 if not re.match(r'^[a-zA-Z0-9\-_\\/]+$', notebook):
164 raise click.BadOptionUsage(
165 notebook,
166 (
167 'Notebook name {name} is not valid, '
168 'cannot contain anything except alphanumeric characters, '
169 '-, _, \\ and / for path manipulation'
170 ).format(name=notebook),
171 )
172
173 notebook_path = os.path.join(
174 os.getcwd(), notebook if notebook.endswith('.ipynb') else notebook + ".ipynb"
175 )
176
177 notebook_dir = os.path.dirname(notebook_path)
178 if not os.path.exists(notebook_dir):
179 os.makedirs(notebook_dir)
180
181 if not force_overwrite and safe_isfile(notebook_path):
182 click.confirm(
183 (
184 'Warning, {notebook_path} already exists and continuing '
185 'will overwrite the existing notebook. '
186 'Are you sure you want to continue?'
187 ).format(notebook_path=notebook_path),
188 abort=True,
189 )
190 register_repo_info = get_register_repo_info(kwargs)
191
192 with open(notebook_path, 'w') as f:
193 f.write(get_notebook_scaffolding(register_repo_info))
194 click.echo("Created new dagstermill notebook at {path}".format(path=notebook_path))
195
196
197 def create_dagstermill_cli():
198 group = click.Group(name="dagstermill")
199 group.add_command(create_notebook)
200 group.add_command(retroactively_scaffold_notebook)
201 return group
202
203
204 def main():
205 cli = create_dagstermill_cli()
206 # click magic
207 cli(obj={}) # pylint:disable=E1120
208
[end of python_modules/dagstermill/dagstermill/cli.py]
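
The scaffolding helper above returns the notebook as a JSON string. The sketch below, which assumes this dagstermill checkout is importable, shows that the no-repository variant parses as an nbformat 4 document with two code cells, the second tagged `parameters` for papermill.

```python
# Sketch: the notebook scaffolding emitted with no repository registered is plain
# notebook JSON -- two code cells, the second carrying the "parameters" tag.
import json

from dagstermill.cli import get_notebook_scaffolding

notebook = json.loads(get_notebook_scaffolding(None))
print(notebook["nbformat"])  # 4
for cell in notebook["cells"]:
    print(cell["metadata"].get("tags", []), cell["source"])
```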
[start of python_modules/libraries/dagster-snowflake/dagster_snowflake/configs.py]
1 from dagster import Bool, Dict, Field, Int, Path, String
2
3
4 def define_snowflake_config():
5 '''Snowflake configuration.
6
7 See the Snowflake documentation for reference:
8 https://docs.snowflake.net/manuals/user-guide/python-connector-api.html
9 '''
10
11 account = Field(
12 String,
13 description='Your Snowflake account name. For more details, see https://bit.ly/2FBL320.',
14 is_optional=True,
15 )
16
17 user = Field(String, description='User login name.', is_optional=False)
18
19 password = Field(String, description='User password.', is_optional=False)
20
21 database = Field(
22 String,
23 description='''Name of the default database to use. After login, you can use USE DATABASE
24 to change the database.''',
25 is_optional=True,
26 )
27
28 schema = Field(
29 String,
30 description='''Name of the default schema to use. After login, you can use USE SCHEMA to
31 change the schema.''',
32 is_optional=True,
33 )
34
35 role = Field(
36 String,
37 description='''Name of the default role to use. After login, you can use USE ROLE to change
38 the role.''',
39 is_optional=True,
40 )
41
42 warehouse = Field(
43 String,
44 description='''Name of the default warehouse to use. After login, you can use USE WAREHOUSE
45         to change the warehouse.''',
46 is_optional=True,
47 )
48
49 autocommit = Field(
50 Bool,
51 description='''None by default, which honors the Snowflake parameter AUTOCOMMIT. Set to True
52 or False to enable or disable autocommit mode in the session, respectively.''',
53 is_optional=True,
54 )
55
56 client_prefetch_threads = Field(
57 Int,
58 description='''Number of threads used to download the results sets (4 by default).
59 Increasing the value improves fetch performance but requires more memory.''',
60 is_optional=True,
61 )
62
63 client_session_keep_alive = Field(
64 String,
65 description='''False by default. Set this to True to keep the session active indefinitely,
66 even if there is no activity from the user. Make certain to call the close method to
67 terminate the thread properly or the process may hang.''',
68 is_optional=True,
69 )
70
71 login_timeout = Field(
72 Int,
73 description='''Timeout in seconds for login. By default, 60 seconds. The login request gives
74 up after the timeout length if the HTTP response is "success".''',
75 is_optional=True,
76 )
77
78 network_timeout = Field(
79 Int,
80 description='''Timeout in seconds for all other operations. By default, none/infinite. A
81 general request gives up after the timeout length if the HTTP response is not "success"''',
82 is_optional=True,
83 )
84
85 ocsp_response_cache_filename = Field(
86 Path,
87 description='''URI for the OCSP response cache file.
88 By default, the OCSP response cache file is created in the cache directory.''',
89 is_optional=True,
90 )
91
92 validate_default_parameters = Field(
93 Bool,
94         description='''False by default. If True, raise an exception if any one of the specified
95         database, schema or warehouse doesn't exist.''',
96 is_optional=True,
97 )
98
99 paramstyle = Field(
100 # TODO should validate only against permissible values for this
101 String,
102 description='''pyformat by default for client side binding. Specify qmark or numeric to
103 change bind variable formats for server side binding.''',
104 is_optional=True,
105 )
106
107 timezone = Field(
108 String,
109 description='''None by default, which honors the Snowflake parameter TIMEZONE. Set to a
110 valid time zone (e.g. America/Los_Angeles) to set the session time zone.''',
111 is_optional=True,
112 )
113
114 return Field(
115 Dict(
116 fields={
117 'account': account,
118 'user': user,
119 'password': password,
120 'database': database,
121 'schema': schema,
122 'role': role,
123 'warehouse': warehouse,
124 'autocommit': autocommit,
125 'client_prefetch_threads': client_prefetch_threads,
126 'client_session_keep_alive': client_session_keep_alive,
127 'login_timeout': login_timeout,
128 'network_timeout': network_timeout,
129 'ocsp_response_cache_filename': ocsp_response_cache_filename,
130 'validate_default_parameters': validate_default_parameters,
131 'paramstyle': paramstyle,
132 'timezone': timezone,
133 }
134 ),
135 description='Snowflake configuration',
136 )
137
[end of python_modules/libraries/dagster-snowflake/dagster_snowflake/configs.py]
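
For reference, here is a dict shaped to satisfy the schema defined above. Every value is a placeholder, and how this block nests into a full pipeline environment config (for example under a resource key) is outside this excerpt.

```python
# Illustrative values only -- a config dict matching define_snowflake_config() above.
# `user` and `password` are the two required fields; everything else is optional.
snowflake_config = {
    "account": "my_account",
    "user": "my_user",
    "password": "my_password",
    "database": "MY_DATABASE",
    "schema": "PUBLIC",
    "warehouse": "MY_WAREHOUSE",
    "role": "MY_ROLE",
    "login_timeout": 60,
}
```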
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
dagster-io/dagster
|
4851bf18a4237c5bbb936a5e983f31a0ded71543
|
Ensure the dev_env_setup.sh and installation guides reflect new module structures
|
2019-04-10T17:23:12Z
|
<patch>
diff --git a/python_modules/dagster/dagster/core/object_store.py b/python_modules/dagster/dagster/core/object_store.py
--- a/python_modules/dagster/dagster/core/object_store.py
+++ b/python_modules/dagster/dagster/core/object_store.py
@@ -19,7 +19,7 @@ def ensure_boto_requirements():
# TODO this could be factored to check.import
try:
import boto3
- except ImportError:
+ except (ImportError, ModuleNotFoundError):
raise check.CheckError(
'boto3 must be available for import in order to make use of an S3ObjectStore'
)
</patch>
|
[]
|
[]
| ||||
wagtail__wagtail-8210
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tags over 100 characters
Found a bug? Please fill out the sections below. 👍
### Issue Summary
When adding a tag while using the ClusterTaggableManager class, if the tag name is greater than the character limit for the database column no validation error is given.
### Steps to Reproduce
1. login to admin and edit a page with a tag content panel
2. create a tag with more than 100 characters
3. save, or publish the page
### Technical details
* Python version: Python 3.5.1
* Django version: 1.11.13
* Wagtail version: 1.13.1
</issue>
<code>
[start of README.md]
1 <h1 align="center">
2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail">
3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail">
4 <br>
5 <br>
6 </h1>
7
8 Wagtail is an open source content management system built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers.
9
10 
11
12 ### Features
13
14 - A fast, attractive interface for authors
15 - Complete control over front-end design and structure
16 - Scales to millions of pages and thousands of editors
17 - Fast out of the box, cache-friendly when you need it
18 - Content API for 'headless' sites with de-coupled front-end
19 - Runs on a Raspberry Pi or a multi-datacenter cloud platform
20 - StreamField encourages flexible content without compromising structure
21 - Powerful, integrated search, using Elasticsearch or PostgreSQL
22 - Excellent support for images and embedded content
23 - Multi-site and multi-language ready
24 - Embraces and extends Django
25
26 Find out more at [wagtail.org](https://wagtail.org/).
27
28 ### Getting started
29
30 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform.
31
32 To get started with Wagtail, run the following in a virtual environment:
33
34 ```bash
35 pip install wagtail
36 wagtail start mysite
37 cd mysite
38 pip install -r requirements.txt
39 python manage.py migrate
40 python manage.py createsuperuser
41 python manage.py runserver
42 ```
43
44 For detailed installation and setup docs, see [docs.wagtail.org](https://docs.wagtail.org/).
45
46 ### Who’s using it?
47
48 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org).
49
50 ### Documentation
51
52 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap.
53
54 ### Compatibility
55
56 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_
57
58 Wagtail supports:
59
60 - Django 3.2.x and 4.0.x
61 - Python 3.7, 3.8, 3.9 and 3.10
62 - PostgreSQL, MySQL and SQLite as database backends
63
64 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions.
65
66 ---
67
68 ### Community Support
69
70 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail".
71
72 For topics and discussions that do not fit Stack Overflow's question and answer format, we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack) and a [Wagtail Support mailing list](https://groups.google.com/forum/#!forum/wagtail). Please respect the time and effort of volunteers by not asking the same question in multiple places.
73
74 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project.
75
76 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail).
77
78 ### Commercial Support
79
80 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world.
81
82 ### Security
83
84 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue.
85
86 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers.
87
88 ### Release schedule
89
90 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule).
91
92 #### Nightly releases
93
94 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html
95
96 ### Contributing
97
98 If you're a Python or Django developer, fork the repo and get stuck in! We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack).
99
100 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label.
101
102 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://www.transifex.com/projects/p/wagtail/).
103
104 ### License
105
106 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE)
107
108 ### Thanks
109
110 We thank the following organisations for their services used in Wagtail's development:
111
112 [](https://www.browserstack.com/)<br>
113 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing.
114
115 [](https://www.squash.io/)<br>
116 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests.
117
118 [](https://assistivlabs.com/)<br>
119 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies.
120
121 [](https://github.com/wagtail/wagtail/actions)
122 [](https://opensource.org/licenses/BSD-3-Clause)
123 [](https://pypi.python.org/pypi/wagtail/)
124 [](https://lgtm.com/projects/g/wagtail/wagtail/alerts/)
125 [](https://lgtm.com/projects/g/wagtail/wagtail/context:python)
126 [](https://lgtm.com/projects/g/wagtail/wagtail/context:javascript)
127
[end of README.md]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Wagtail documentation build configuration file, created by
4 # sphinx-quickstart on Tue Jan 14 17:38:55 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import os
16 import sys
17 from datetime import datetime
18
19 import django
20 import sphinx_wagtail_theme
21
22 from wagtail import VERSION, __version__
23
24 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
25 on_rtd = os.environ.get("READTHEDOCS", None) == "True"
26
27 html_theme = "sphinx_wagtail_theme"
28 html_theme_path = [sphinx_wagtail_theme.get_html_theme_path()]
29
30 html_theme_options = {
31 "project_name": "Wagtail Documentation",
32 "github_url": "https://github.com/wagtail/wagtail/blob/main/docs/",
33 }
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath(".."))
39
40 # Autodoc may need to import some models modules which require django settings
41 # be configured
42 os.environ["DJANGO_SETTINGS_MODULE"] = "wagtail.test.settings"
43 django.setup()
44
45 # Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD
46 os.environ["DATABASE_ENGINE"] = "django.db.backends.sqlite3"
47
48 # -- General configuration ------------------------------------------------
49
50 # If your documentation needs a minimal Sphinx version, state it here.
51 # needs_sphinx = '1.0'
52
53 # Add any Sphinx extension module names here, as strings. They can be
54 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
55 # ones.
56 extensions = [
57 "sphinx.ext.autodoc",
58 "sphinx.ext.intersphinx",
59 "myst_parser",
60 "sphinx_wagtail_theme",
61 ]
62
63 if not on_rtd:
64 extensions.append("sphinxcontrib.spelling")
65
66 # Add any paths that contain templates here, relative to this directory.
67 templates_path = ["_templates"]
68
69 # The suffix of source filenames.
70 source_suffix = ".rst"
71
72 # The encoding of source files.
73 # source_encoding = 'utf-8-sig'
74
75 # The master toctree document.
76 master_doc = "index"
77
78 # General information about the project.
79 project = "Wagtail Documentation"
80 copyright = f"{datetime.now().year}, Torchbox and contributors"
81
82 # The version info for the project you're documenting, acts as replacement for
83 # |version| and |release|, also used in various other places throughout the
84 # built documents.
85
86 # The short X.Y version.
87 version = "{}.{}".format(VERSION[0], VERSION[1])
88 # The full version, including alpha/beta/rc tags.
89 release = __version__
90
91 # The language for content autogenerated by Sphinx. Refer to documentation
92 # for a list of supported languages.
93 # language = None
94
95 # There are two options for replacing |today|: either, you set today to some
96 # non-false value, then it is used:
97 # today = ''
98 # Else, today_fmt is used as the format for a strftime call.
99 # today_fmt = '%B %d, %Y'
100
101 # List of patterns, relative to source directory, that match files and
102 # directories to ignore when looking for source files.
103 exclude_patterns = ["_build", "README.md"]
104
105 # The reST default role (used for this markup: `text`) to use for all
106 # documents.
107 # default_role = None
108
109 # If true, '()' will be appended to :func: etc. cross-reference text.
110 # add_function_parentheses = True
111
112 # If true, the current module name will be prepended to all description
113 # unit titles (such as .. function::).
114 # add_module_names = True
115
116 # If true, sectionauthor and moduleauthor directives will be shown in the
117 # output. They are ignored by default.
118 # show_authors = False
119
120 # The name of the Pygments (syntax highlighting) style to use.
121 pygments_style = "default"
122
123 # A list of ignored prefixes for module index sorting.
124 # modindex_common_prefix = []
125
126 # If true, keep warnings as "system message" paragraphs in the built documents.
127 # keep_warnings = False
128
129 # sphinxcontrib.spelling settings
130
131 spelling_lang = "en_GB"
132 spelling_word_list_filename = "spelling_wordlist.txt"
133
134 # sphinx.ext.intersphinx settings
135 intersphinx_mapping = {
136 "django": (
137 "https://docs.djangoproject.com/en/stable/",
138 "https://docs.djangoproject.com/en/stable/_objects/",
139 )
140 }
141
142 # -- Options for HTML output ----------------------------------------------
143
144 # Theme options are theme-specific and customize the look and feel of a theme
145 # further. For a list of options available for each theme, see the
146 # documentation.
147 # html_theme_options = {}
148
149 # The name for this set of Sphinx documents. If None, it defaults to
150 # "<project> v<release> documentation".
151 # html_title = None
152
153 # A shorter title for the navigation bar. Default is the same as html_title.
154 # html_short_title = None
155
156 # The name of an image file (relative to this directory) to place at the top
157 # of the sidebar.
158 # html_logo = 'logo.png'
159
160 # The name of an image file (within the static path) to use as favicon of the
161 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
162 # pixels large.
163 html_favicon = "favicon.ico"
164
165 # Add any paths that contain custom static files (such as style sheets) here,
166 # relative to this directory. They are copied after the builtin static files,
167 # so a file named "default.css" will overwrite the builtin "default.css".
168 html_static_path = ["_static"]
169
170 # Add any extra paths that contain custom files (such as robots.txt or
171 # .htaccess) here, relative to this directory. These files are copied
172 # directly to the root of the documentation.
173 html_extra_path = ["robots.txt"]
174
175 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
176 # using the given strftime format.
177 # html_last_updated_fmt = '%b %d, %Y'
178
179 # If true, SmartyPants will be used to convert quotes and dashes to
180 # typographically correct entities.
181 # html_use_smartypants = True
182
183 # Custom sidebar templates, maps document names to template names.
184 # html_sidebars = {}
185
186 # Additional templates that should be rendered to pages, maps page names to
187 # template names.
188 # html_additional_pages = {}
189
190 # If false, no module index is generated.
191 # html_domain_indices = True
192
193 # If false, no index is generated.
194 # Since we are implementing search with Algolia DocSearch, we do not need Sphinx to
195 # generate its own index. It might not hurt to keep the Sphinx index, but it
196 # could potentially speed up the build process.
197 html_use_index = False
198
199 # If true, the index is split into individual pages for each letter.
200 # html_split_index = False
201
202 # If true, links to the reST sources are added to the pages.
203 # html_show_sourcelink = True
204
205 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
206 # html_show_sphinx = True
207
208 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
209 # html_show_copyright = True
210
211 # If true, an OpenSearch description file will be output, and all pages will
212 # contain a <link> tag referring to it. The value of this option must be the
213 # base URL from which the finished HTML is served.
214 # html_use_opensearch = ''
215
216 # This is the file name suffix for HTML files (e.g. ".xhtml").
217 # html_file_suffix = None
218
219 # Output file base name for HTML help builder.
220 htmlhelp_basename = "Wagtaildoc"
221
222 # -- Options for LaTeX output ---------------------------------------------
223
224 latex_elements = {
225 # The paper size ('letterpaper' or 'a4paper').
226 # 'papersize': 'letterpaper',
227 # The font size ('10pt', '11pt' or '12pt').
228 # 'pointsize': '10pt',
229 # Additional stuff for the LaTeX preamble.
230 # 'preamble': '',
231 }
232
233 # Grouping the document tree into LaTeX files. List of tuples
234 # (source start file, target name, title,
235 # author, documentclass [howto, manual, or own class]).
236 latex_documents = [
237 ("index", "Wagtail.tex", "Wagtail Documentation", "Torchbox", "manual"),
238 ]
239
240 # The name of an image file (relative to this directory) to place at the top of
241 # the title page.
242 # latex_logo = None
243
244 # For "manual" documents, if this is true, then toplevel headings are parts,
245 # not chapters.
246 # latex_use_parts = False
247
248 # If true, show page references after internal links.
249 # latex_show_pagerefs = False
250
251 # If true, show URL addresses after external links.
252 # latex_show_urls = False
253
254 # Documents to append as an appendix to all manuals.
255 # latex_appendices = []
256
257 # If false, no module index is generated.
258 # latex_domain_indices = True
259
260 # -- Options for manual page output ---------------------------------------
261
262 # One entry per manual page. List of tuples
263 # (source start file, name, description, authors, manual section).
264 man_pages = [("index", "wagtail", "Wagtail Documentation", ["Torchbox"], 1)]
265
266 # If true, show URL addresses after external links.
267 # man_show_urls = False
268
269 # -- Options for Texinfo output -------------------------------------------
270
271 # Grouping the document tree into Texinfo files. List of tuples
272 # (source start file, target name, title, author,
273 # dir menu entry, description, category)
274 texinfo_documents = [
275 (
276 "index",
277 "Wagtail",
278 "Wagtail Documentation",
279 "Torchbox",
280 "Wagtail",
281 "One line description of project.",
282 "Miscellaneous",
283 ),
284 ]
285
286 # Documents to append as an appendix to all manuals.
287 # texinfo_appendices = []
288
289 # If false, no module index is generated.
290 # texinfo_domain_indices = True
291
292 # How to display URL addresses: 'footnote', 'no', or 'inline'.
293 # texinfo_show_urls = 'footnote'
294
295 # If true, do not generate a @detailmenu in the "Top" node's menu.
296 # texinfo_no_detailmenu = False
297
298
299 def setup(app):
300 app.add_js_file("js/banner.js")
301
[end of docs/conf.py]
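
With this `conf.py` in place, the docs can also be built programmatically through Sphinx's Python entry point. The paths below assume the command is run from the `docs/` directory; the project normally drives builds through its own tooling instead.

```python
# Minimal sketch: build the HTML docs using the conf.py above via Sphinx's API,
# equivalent to `sphinx-build -b html . _build/html` run from the docs/ directory.
from sphinx.cmd.build import build_main

exit_code = build_main(["-b", "html", ".", "_build/html"])
print("sphinx-build exit code:", exit_code)
```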
[start of wagtail/coreutils.py]
1 import functools
2 import inspect
3 import logging
4 import re
5 import unicodedata
6 from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
7
8 from anyascii import anyascii
9 from django.apps import apps
10 from django.conf import settings
11 from django.conf.locale import LANG_INFO
12 from django.core.exceptions import ImproperlyConfigured, SuspiciousOperation
13 from django.core.signals import setting_changed
14 from django.db.models import Model
15 from django.db.models.base import ModelBase
16 from django.dispatch import receiver
17 from django.http import HttpRequest
18 from django.utils.encoding import force_str
19 from django.utils.text import slugify
20 from django.utils.translation import check_for_language, get_supported_language_variant
21
22 if TYPE_CHECKING:
23 from wagtail.models import Site
24
25 logger = logging.getLogger(__name__)
26
27 WAGTAIL_APPEND_SLASH = getattr(settings, "WAGTAIL_APPEND_SLASH", True)
28
29
30 def camelcase_to_underscore(str):
31 # https://djangosnippets.org/snippets/585/
32 return (
33 re.sub("(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))", "_\\1", str).lower().strip("_")
34 )
35
36
37 def string_to_ascii(value):
38 """
39 Convert a string to ascii.
40 """
41
42 return str(anyascii(value))
43
44
45 def get_model_string(model):
46 """
47 Returns a string that can be used to identify the specified model.
48
49 The format is: `app_label.ModelName`
50
51     This can be reversed with the `resolve_model_string` function
52 """
53 return model._meta.app_label + "." + model.__name__
54
55
56 def resolve_model_string(model_string, default_app=None):
57 """
58 Resolve an 'app_label.model_name' string into an actual model class.
59 If a model class is passed in, just return that.
60
61 Raises a LookupError if a model can not be found, or ValueError if passed
62     something that is neither a model nor a string.
63 """
64 if isinstance(model_string, str):
65 try:
66 app_label, model_name = model_string.split(".")
67 except ValueError:
68 if default_app is not None:
69 # If we can't split, assume a model in current app
70 app_label = default_app
71 model_name = model_string
72 else:
73 raise ValueError(
74 "Can not resolve {0!r} into a model. Model names "
75 "should be in the form app_label.model_name".format(model_string),
76 model_string,
77 )
78
79 return apps.get_model(app_label, model_name)
80
81 elif isinstance(model_string, type) and issubclass(model_string, Model):
82 return model_string
83
84 else:
85 raise ValueError(
86 "Can not resolve {0!r} into a model".format(model_string), model_string
87 )
88
89
90 SCRIPT_RE = re.compile(r"<(-*)/script>")
91
92
93 def escape_script(text):
94 """
95 Escape `</script>` tags in 'text' so that it can be placed within a `<script>` block without
96 accidentally closing it. A '-' character will be inserted for each time it is escaped:
97 `<-/script>`, `<--/script>` etc.
98 """
99 return SCRIPT_RE.sub(r"<-\1/script>", text)
100
101
102 SLUGIFY_RE = re.compile(r"[^\w\s-]", re.UNICODE)
103
104
105 def cautious_slugify(value):
106 """
107 Convert a string to ASCII exactly as Django's slugify does, with the exception
108 that any non-ASCII alphanumeric characters (that cannot be ASCIIfied under Unicode
109 normalisation) are escaped into codes like 'u0421' instead of being deleted entirely.
110
111 This ensures that the result of slugifying e.g. Cyrillic text will not be an empty
112 string, and can thus be safely used as an identifier (albeit not a human-readable one).
113 """
114 value = force_str(value)
115
116 # Normalize the string to decomposed unicode form. This causes accented Latin
117 # characters to be split into 'base character' + 'accent modifier'; the latter will
118 # be stripped out by the regexp, resulting in an ASCII-clean character that doesn't
119 # need to be escaped
120 value = unicodedata.normalize("NFKD", value)
121
122 # Strip out characters that aren't letterlike, underscores or hyphens,
123 # using the same regexp that slugify uses. This ensures that non-ASCII non-letters
124 # (e.g. accent modifiers, fancy punctuation) get stripped rather than escaped
125 value = SLUGIFY_RE.sub("", value)
126
127 # Encode as ASCII, escaping non-ASCII characters with backslashreplace, then convert
128 # back to a unicode string (which is what slugify expects)
129 value = value.encode("ascii", "backslashreplace").decode("ascii")
130
131 # Pass to slugify to perform final conversion (whitespace stripping, applying
132 # mark_safe); this will also strip out the backslashes from the 'backslashreplace'
133 # conversion
134 return slugify(value)
135
136
137 def safe_snake_case(value):
138 """
139     Convert a string to ASCII, similar to Django's slugify, with cautious handling of
140 non-ASCII alphanumeric characters. See `cautious_slugify`.
141
142 Any inner whitespace, hyphens or dashes will be converted to underscores and
143 will be safe for Django template or filename usage.
144 """
145
146 slugified_ascii_string = cautious_slugify(value)
147
148 snake_case_string = slugified_ascii_string.replace("-", "_")
149
150 return snake_case_string
151
152
153 def get_content_type_label(content_type):
154 """
155 Return a human-readable label for a content type object, suitable for display in the admin
156 in place of the default 'wagtailcore | page' representation
157 """
158 model = content_type.model_class()
159 if model:
160 return model._meta.verbose_name.capitalize()
161 else:
162 # no corresponding model class found; fall back on the name field of the ContentType
163 return content_type.model.capitalize()
164
165
166 def accepts_kwarg(func, kwarg):
167 """
168 Determine whether the callable `func` has a signature that accepts the keyword argument `kwarg`
169 """
170 signature = inspect.signature(func)
171 try:
172 signature.bind_partial(**{kwarg: None})
173 return True
174 except TypeError:
175 return False
176
177
178 class InvokeViaAttributeShortcut:
179 """
180 Used to create a shortcut that allows an object's named
181 single-argument method to be invoked using a simple
182 attribute reference syntax. For example, adding the
183 following to an object:
184
185 obj.page_url = InvokeViaAttributeShortcut(obj, 'get_page_url')
186
187 Would allow you to invoke get_page_url() like so:
188
189 obj.page_url.terms_and_conditions
190
191 As well as the usual:
192
193 obj.get_page_url('terms_and_conditions')
194 """
195
196 __slots__ = "obj", "method_name"
197
198 def __init__(self, obj, method_name):
199 self.obj = obj
200 self.method_name = method_name
201
202 def __getattr__(self, name):
203 method = getattr(self.obj, self.method_name)
204 return method(name)
205
206
207 def find_available_slug(parent, requested_slug, ignore_page_id=None):
208 """
209 Finds an available slug within the specified parent.
210
211 If the requested slug is not available, this adds a number on the end, for example:
212
213 - 'requested-slug'
214 - 'requested-slug-1'
215 - 'requested-slug-2'
216
217 And so on, until an available slug is found.
218
219     The `ignore_page_id` keyword argument is useful when you are updating a page:
220     you can pass the page being updated here so that the page's current slug is not
221     treated as in use by another page.
222 """
223 pages = parent.get_children().filter(slug__startswith=requested_slug)
224
225 if ignore_page_id:
226 pages = pages.exclude(id=ignore_page_id)
227
228 existing_slugs = set(pages.values_list("slug", flat=True))
229 slug = requested_slug
230 number = 1
231
232 while slug in existing_slugs:
233 slug = requested_slug + "-" + str(number)
234 number += 1
235
236 return slug
237
238
239 @functools.lru_cache()
240 def get_content_languages():
241 """
242 Cache of settings.WAGTAIL_CONTENT_LANGUAGES in a dictionary for easy lookups by key.
243 """
244 content_languages = getattr(settings, "WAGTAIL_CONTENT_LANGUAGES", None)
245 languages = dict(settings.LANGUAGES)
246
247 if content_languages is None:
248 # Default to a single language based on LANGUAGE_CODE
249 default_language_code = get_supported_language_variant(settings.LANGUAGE_CODE)
250 try:
251 language_name = languages[default_language_code]
252 except KeyError:
253 # get_supported_language_variant on the 'null' translation backend (used for
254 # USE_I18N=False) returns settings.LANGUAGE_CODE unchanged without accounting for
255 # language variants (en-us versus en), so retry with the generic version.
256 default_language_code = default_language_code.split("-")[0]
257 try:
258 language_name = languages[default_language_code]
259 except KeyError:
260 # Can't extract a display name, so fall back on displaying LANGUAGE_CODE instead
261 language_name = settings.LANGUAGE_CODE
262 # Also need to tweak the languages dict to get around the check below
263 languages[default_language_code] = settings.LANGUAGE_CODE
264
265 content_languages = [
266 (default_language_code, language_name),
267 ]
268
269 # Check that each content language is in LANGUAGES
270 for language_code, name in content_languages:
271 if language_code not in languages:
272 raise ImproperlyConfigured(
273 "The language {} is specified in WAGTAIL_CONTENT_LANGUAGES but not LANGUAGES. "
274 "WAGTAIL_CONTENT_LANGUAGES must be a subset of LANGUAGES.".format(
275 language_code
276 )
277 )
278
279 return dict(content_languages)
280
281
282 @functools.lru_cache(maxsize=1000)
283 def get_supported_content_language_variant(lang_code, strict=False):
284 """
285 Return the language code that's listed in supported languages, possibly
286 selecting a more generic variant. Raise LookupError if nothing is found.
287 If `strict` is False (the default), look for a country-specific variant
288 when neither the language code nor its generic variant is found.
289     lru_cache should have a maxsize to prevent memory exhaustion attacks,
290 as the provided language codes are taken from the HTTP request. See also
291 <https://www.djangoproject.com/weblog/2007/oct/26/security-fix/>.
292
293     This is equivalent to Django's `django.utils.translation.get_supported_language_variant`
294     but reads the `WAGTAIL_CONTENT_LANGUAGES` setting instead.
295 """
296 if lang_code:
297 # If 'fr-ca' is not supported, try special fallback or language-only 'fr'.
298 possible_lang_codes = [lang_code]
299 try:
300 possible_lang_codes.extend(LANG_INFO[lang_code]["fallback"])
301 except KeyError:
302 pass
303 generic_lang_code = lang_code.split("-")[0]
304 possible_lang_codes.append(generic_lang_code)
305 supported_lang_codes = get_content_languages()
306
307 for code in possible_lang_codes:
308 if code in supported_lang_codes and check_for_language(code):
309 return code
310 if not strict:
311 # if fr-fr is not supported, try fr-ca.
312 for supported_code in supported_lang_codes:
313 if supported_code.startswith(generic_lang_code + "-"):
314 return supported_code
315 raise LookupError(lang_code)
316
317
318 @functools.lru_cache()
319 def get_locales_display_names() -> dict:
320 """
321 Cache of the locale id -> locale display name mapping
322 """
323 from wagtail.models import Locale # inlined to avoid circular imports
324
325 locales_map = {
326 locale.pk: locale.get_display_name() for locale in Locale.objects.all()
327 }
328 return locales_map
329
330
331 @receiver(setting_changed)
332 def reset_cache(**kwargs):
333 """
334 Clear cache when global WAGTAIL_CONTENT_LANGUAGES/LANGUAGES/LANGUAGE_CODE settings are changed
335 """
336 if kwargs["setting"] in ("WAGTAIL_CONTENT_LANGUAGES", "LANGUAGES", "LANGUAGE_CODE"):
337 get_content_languages.cache_clear()
338 get_supported_content_language_variant.cache_clear()
339
340
341 def multigetattr(item, accessor):
342 """
343 Like getattr, but accepts a dotted path as the accessor to be followed to any depth.
344 At each step, the lookup on the object can be a dictionary lookup (foo['bar']) or an attribute
345 lookup (foo.bar), and if it results in a callable, will be called (provided we can do so with
346 no arguments, and it does not have an 'alters_data' property).
347
348 Modelled on the variable resolution logic in Django templates:
349 https://github.com/django/django/blob/f331eba6d576752dd79c4b37c41d981daa537fe6/django/template/base.py#L838
350 """
351
352 current = item
353
354 for bit in accessor.split("."):
355 try: # dictionary lookup
356 current = current[bit]
357 # ValueError/IndexError are for numpy.array lookup on
358 # numpy < 1.9 and 1.9+ respectively
359 except (TypeError, AttributeError, KeyError, ValueError, IndexError):
360 try: # attribute lookup
361 current = getattr(current, bit)
362 except (TypeError, AttributeError):
363 # Reraise if the exception was raised by a @property
364 if bit in dir(current):
365 raise
366 try: # list-index lookup
367 current = current[int(bit)]
368 except (
369 IndexError, # list index out of range
370 ValueError, # invalid literal for int()
371 KeyError, # current is a dict without `int(bit)` key
372 TypeError, # unsubscriptable object
373 ):
374 raise AttributeError(
375 "Failed lookup for key [%s] in %r" % (bit, current)
376 )
377
378 if callable(current):
379 if getattr(current, "alters_data", False):
380 raise SuspiciousOperation(
381 "Cannot call %r from multigetattr" % (current,)
382 )
383
384 # if calling without arguments is invalid, let the exception bubble up
385 current = current()
386
387 return current
388
389
390 def get_dummy_request(path: str = "/", site: "Site" = None) -> HttpRequest:
391 """
392 Return a simple ``HttpRequest`` instance that can be passed to
393 ``Page.get_url()`` and other methods to benefit from improved performance
394 when no real ``HttpRequest`` instance is available.
395
396 If ``site`` is provided, the ``HttpRequest`` is made to look like it came
397 from that Wagtail ``Site``.
398 """
399 request = HttpRequest()
400 request.path = path
401 request.method = "GET"
402 SERVER_PORT = 80
403 if site:
404 SERVER_NAME = site.hostname
405 SERVER_PORT = site.port
406 elif settings.ALLOWED_HOSTS == ["*"]:
407 SERVER_NAME = "example.com"
408 else:
409 SERVER_NAME = settings.ALLOWED_HOSTS[0]
410 request.META = {"SERVER_NAME": SERVER_NAME, "SERVER_PORT": SERVER_PORT}
411 return request
412
413
414 class BatchProcessor:
415 """
416 A class to help with processing of an unknown (and potentially very
417 high) number of objects.
418
419 Just set ``max_size`` to the maximum number of instances you want
420 to be held in memory at any one time, and batches will be sent to the
421 ``process()`` method as that number is reached, without you having to
422 invoke ``process()`` regularly yourself. Just remember to invoke
423 ``process()`` when you're done adding items, otherwise the final batch
424 of objects will not be processed.
425 """
426
427 def __init__(self, max_size: int):
428 self.max_size = max_size
429 self.items = []
430 self.added_count = 0
431
432 def __len__(self):
433 return self.added_count
434
435 def add(self, item: Any) -> None:
436 self.items.append(item)
437 self.added_count += 1
438 if self.max_size and len(self.items) == self.max_size:
439 self.process()
440
441 def extend(self, iterable: Iterable[Any]) -> None:
442 for item in iterable:
443 self.add(item)
444
445 def process(self):
446 self.pre_process()
447 self._do_processing()
448 self.post_process()
449 self.items.clear()
450
451 def pre_process(self):
452 """
453 A hook to allow subclasses to do any pre-processing of the data
454 before the ``process()`` method is called.
455 """
456 pass
457
458 def _do_processing(self):
459 """
460 To be overridden by subclasses to do whatever it is
461 that needs to be done to the items in ``self.items``.
462 """
463 raise NotImplementedError
464
465 def post_process(self):
466 """
467 A hook to allow subclasses to do any post-processing
468 after the ``process()`` method is called, and before
469 ``self.items`` is cleared
470 """
471 pass
472
473
474 class BatchCreator(BatchProcessor):
475 """
476 A class to help with bulk creation of an unknown (and potentially very
477 high) number of model instances.
478
479 Just set ``max_size`` to the maximum number of instances you want
480 to be held in memory at any one time, and batches of objects will
481 be created as that number is reached, without you having to invoke
482 the ``process()`` method regularly yourself. Just remember to
483 invoke ``process()`` when you're done adding items, to ensure
484     that the final batch of items is saved.
485
486     ``BatchCreator`` is migration-friendly! Just use the ``model``
487 keyword argument when initializing to override the hardcoded model
488 class with the version from your migration.
489 """
490
491 model: ModelBase = None
492
493 def __init__(
494 self, max_size: int, *, model: ModelBase = None, ignore_conflicts=False
495 ):
496 super().__init__(max_size)
497 self.ignore_conflicts = ignore_conflicts
498 self.created_count = 0
499 if model is not None:
500 self.model = model
501
502 def initialize_instance(self, kwargs):
503 return self.model(**kwargs)
504
505 def add(self, *, instance: Model = None, **kwargs) -> None:
506 if instance is None:
507 instance = self.initialize_instance(kwargs)
508 self.items.append(instance)
509 self.added_count += 1
510 if self.max_size and len(self.items) == self.max_size:
511 self.process()
512
513 def extend(self, iterable: Iterable[Union[Model, Dict[str, Any]]]) -> None:
514 for value in iterable:
515 if isinstance(value, self.model):
516 self.add(instance=value)
517 else:
518 self.add(**value)
519
520 def _do_processing(self):
521 """
522 Use bulk_create() to save ``self.items``.
523 """
524 if not self.items:
525 return None
526 self.created_count += len(
527 self.model.objects.bulk_create(
528 self.items, ignore_conflicts=self.ignore_conflicts
529 )
530 )
531
532 def get_summary(self):
533 opts = self.model._meta
534 return f"{self.created_count}/{self.added_count} {opts.verbose_name_plural} were created successfully."
535
[end of wagtail/coreutils.py]
[start of wagtail/images/migrations/0016_deprecate_rendition_filter_relation.py]
1 # -*- coding: utf-8 -*-
2 # Generated by Django 1.9.11 on 2016-11-11 17:04
3 from django.db import migrations, models
4
5
6 class Migration(migrations.Migration):
7
8 dependencies = [
9 ("wagtailimages", "0015_fill_filter_spec_field"),
10 ]
11
12 operations = [
13 migrations.AlterField(
14 model_name="rendition",
15 name="filter_spec",
16 field=models.CharField(db_index=True, max_length=255),
17 ),
18 # New step introduced in Wagtail 1.8.1:
19 #
20 # Reduce max_length of rendition.focal_point_key to 16, from the previous value of 255
21 # which existed on Wagtail <= 1.8. MySQL has a limit of 767 (on InnoDB) or 1000 (on MyISAM)
22 # bytes; depending on the character encoding used, this limit may be reached by the
23 # original index on ['image', 'filter', 'focal_point_key'] (= 1 varchar and two FKs)
24 # or the new index on ['image', 'filter_spec', 'focal_point_key'] (= 2 varchars and one FK).
25 #
26 # To mitigate this, we reduce focal_point_key in the following places:
27 # * Retrospectively in the original migration, so that configurations that previously
28 # failed on wagtailimages/0001_initial can now run (see #2925 / #2953);
29 # * Here, so that previously-working Wagtail <=1.7 installations that failed on the
30 # AlterUniqueTogether below when upgrading to 1.8 can now succeed;
31 # * In the newly-added migration wagtailimages/0017, so that existing Wagtail 1.8 installations
32 # that successfully applied the old 1.8 version of this migration are consistent with
33 # other setups.
34 #
35 # Since Django will optimise away any AlterField operations that appear to match
36 # the current state (according to earlier migrations) - which would cause them to be
37 # skipped on installations that ran the earlier (max_length=255) versions of the
38 # migrations - we need to make them superficially different; we do this by stepping
39 # max_length down from 18 to 17 then 16.
40 #
41 # Projects with a custom image model don't have to worry about this - they'll have an existing
42 # migration with the max_length=255, and will get a new migration reducing it to max_length=16
43 # the next time they run makemigrations.
44 migrations.AlterField(
45 model_name="rendition",
46 name="focal_point_key",
47 field=models.CharField(
48 blank=True, default="", max_length=17, editable=False
49 ),
50 ),
51 migrations.AlterUniqueTogether(
52 name="rendition",
53 unique_together={("image", "filter_spec", "focal_point_key")},
54 ),
55 ]
56
[end of wagtail/images/migrations/0016_deprecate_rendition_filter_relation.py]
[start of wagtail/whitelist.py]
1 """
2 A generic HTML whitelisting engine, designed to accommodate subclassing to override
3 specific rules.
4 """
5 import re
6
7 from bs4 import BeautifulSoup, Comment, NavigableString, Tag
8 from django.utils.html import escape
9
10 ALLOWED_URL_SCHEMES = ["http", "https", "ftp", "mailto", "tel"]
11
12 PROTOCOL_RE = re.compile("^[a-z0-9][-+.a-z0-9]*:")
13
14
15 def check_url(url_string):
16 # Remove control characters and other disallowed characters
17 # Browsers sometimes ignore these, so that 'jav\tascript:alert("XSS")'
18 # is treated as a valid javascript: link
19
20 unescaped = url_string.lower()
21     unescaped = unescaped.replace("&lt;", "<")
22     unescaped = unescaped.replace("&gt;", ">")
23     unescaped = unescaped.replace("&amp;", "&")
24 unescaped = re.sub(r"[`\000-\040\177-\240\s]+", "", unescaped)
25 unescaped = unescaped.replace("\ufffd", "")
26 if PROTOCOL_RE.match(unescaped):
27 protocol = unescaped.split(":", 1)[0]
28 if protocol not in ALLOWED_URL_SCHEMES:
29 return None
30 return url_string
31
32
33 def attribute_rule(allowed_attrs):
34 """
35 Generator for functions that can be used as entries in Whitelister.element_rules.
36 These functions accept a tag, and modify its attributes by looking each attribute
37 up in the 'allowed_attrs' dict defined here:
38 * if the lookup fails, drop the attribute
39 * if the lookup returns a callable, replace the attribute with the result of calling
40 it - e.g. {'title': uppercase} will replace 'title' with the result of uppercasing
41 the title. If the callable returns None, the attribute is dropped
42 * if the lookup returns a truthy value, keep the attribute; if falsy, drop it
43 """
44
45 def fn(tag):
46 for attr, val in list(tag.attrs.items()):
47 rule = allowed_attrs.get(attr)
48 if rule:
49 if callable(rule):
50 new_val = rule(val)
51 if new_val is None:
52 del tag[attr]
53 else:
54 tag[attr] = new_val
55 else:
56 # rule is not callable, just truthy - keep the attribute
57 pass
58 else:
59 # rule is falsy or absent - remove the attribute
60 del tag[attr]
61
62 return fn
63
64
65 allow_without_attributes = attribute_rule({})
66
67
68 DEFAULT_ELEMENT_RULES = {
69 "[document]": allow_without_attributes,
70 "a": attribute_rule({"href": check_url}),
71 "b": allow_without_attributes,
72 "br": allow_without_attributes,
73 "div": allow_without_attributes,
74 "em": allow_without_attributes,
75 "h1": allow_without_attributes,
76 "h2": allow_without_attributes,
77 "h3": allow_without_attributes,
78 "h4": allow_without_attributes,
79 "h5": allow_without_attributes,
80 "h6": allow_without_attributes,
81 "hr": allow_without_attributes,
82 "i": allow_without_attributes,
83 "img": attribute_rule(
84 {"src": check_url, "width": True, "height": True, "alt": True}
85 ),
86 "li": allow_without_attributes,
87 "ol": allow_without_attributes,
88 "p": allow_without_attributes,
89 "strong": allow_without_attributes,
90 "sub": allow_without_attributes,
91 "sup": allow_without_attributes,
92 "ul": allow_without_attributes,
93 }
94
95
96 class Whitelister:
97 element_rules = DEFAULT_ELEMENT_RULES
98
99 def clean(self, html):
100 """Clean up an HTML string to contain just the allowed elements /
101 attributes"""
102 doc = BeautifulSoup(html, "html5lib")
103 self.clean_node(doc, doc)
104
105 # Pass strings through django.utils.html.escape when generating the final HTML.
106 # This differs from BeautifulSoup's default EntitySubstitution.substitute_html formatter
107         # in that it escapes " to &quot; as well as escaping < > & - if we don't do this, then
108 # BeautifulSoup will try to be clever and use single-quotes to wrap attribute values,
109 # which confuses our regexp-based db-HTML-to-real-HTML conversion.
110 return doc.decode(formatter=escape)
111
112 def clean_node(self, doc, node):
113 """Clean a BeautifulSoup document in-place"""
114 if isinstance(node, NavigableString):
115 self.clean_string_node(doc, node)
116 elif isinstance(node, Tag):
117 self.clean_tag_node(doc, node)
118 # This branch is here in case node is a BeautifulSoup object that does
119 # not inherit from NavigableString or Tag. I can't find any examples
120 # of such a thing at the moment, so this branch is untested.
121 else: # pragma: no cover
122 self.clean_unknown_node(doc, node)
123
124 def clean_string_node(self, doc, node):
125 # Remove comments
126 if isinstance(node, Comment):
127 node.extract()
128 return
129
130 # by default, nothing needs to be done to whitelist string nodes
131 pass
132
133 def clean_tag_node(self, doc, tag):
134 # first, whitelist the contents of this tag
135
136 # NB tag.contents will change while this iteration is running, so we need
137 # to capture the initial state into a static list() and iterate over that
138 # to avoid losing our place in the sequence.
139 for child in list(tag.contents):
140 self.clean_node(doc, child)
141
142 # see if there is a rule in element_rules for this tag type
143 try:
144 rule = self.element_rules[tag.name]
145 except KeyError:
146 # don't recognise this tag name, so KILL IT WITH FIRE
147 tag.unwrap()
148 return
149
150 # apply the rule
151 rule(tag)
152
153 def clean_unknown_node(self, doc, node):
154 # don't know what type of object this is, so KILL IT WITH FIRE
155 node.decompose()
156
[end of wagtail/whitelist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
9007bda6862efa0bb1782ef7087fbb3924e19ad7
|
Tags over 100 characters
Found a bug? Please fill out the sections below. 👍
### Issue Summary
When adding a tag while using the ClusterTaggableManager class, if the tag name is greater than the character limit for the database column, no validation error is given.
### Steps to Reproduce
1. Log in to the admin and edit a page with a tag content panel
2. Create a tag with more than 100 characters
3. Save or publish the page
### Technical details
* Python version: Python 3.5.1
* Django version: 1.11.13
* Wagtail version: 1.13.1
|
Hi, my friend and I are trying to fix this issue. We can confirm that we are able to reproduce the bug.

We are able to locate the createTag function. However, we are not sure how the error should be surfaced. Should we show a red message right below the tag area? This seems unnecessary, since other errors are not shown this way. We could highlight the tag with a colour (yellow/red) and not create the tag until its length is valid. Alternatively, we could keep only the first 100 characters of the tag; in that case the users may not know what is wrong with the tag, so we could include a notice next to "A comma separated list of tags", something like this:

Do you have any suggestion?
Hi, just want to check in again - I wonder if @thoang43 and I can get some feedback on our suggested solution above?
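As a rough sketch of the server-side option discussed above (illustrative only, not the final fix; it assumes a form field similar to taggit's `TagField` and taggit's default 100-character limit on `Tag.name`):

```python
from django.core.exceptions import ValidationError
from taggit.forms import TagField
from taggit.models import Tag


class LengthCheckedTagField(TagField):
    """Hypothetical TagField subclass that rejects tags longer than the DB column allows."""

    def clean(self, value):
        tags = super().clean(value)  # taggit returns a list of tag name strings
        max_length = Tag.name.field.max_length  # 100 by default
        too_long = [tag for tag in tags if len(tag) > max_length]
        if too_long:
            raise ValidationError(
                "Tag(s) %s are over %d characters" % (", ".join(too_long), max_length)
            )
        return tags
```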
|
2022-03-26T09:59:43Z
|
<patch>
diff --git a/wagtail/admin/forms/tags.py b/wagtail/admin/forms/tags.py
--- a/wagtail/admin/forms/tags.py
+++ b/wagtail/admin/forms/tags.py
@@ -1,3 +1,5 @@
+from django.core.exceptions import ValidationError
+from django.utils.translation import gettext_lazy as _
from taggit.forms import TagField as TaggitTagField
from taggit.models import Tag
@@ -31,8 +33,27 @@ def __init__(self, *args, **kwargs):
self.widget.free_tagging = self.free_tagging
def clean(self, value):
+
value = super().clean(value)
+ max_tag_length = self.tag_model.name.field.max_length
+ value_too_long = ""
+ for val in value:
+ if len(val) > max_tag_length:
+ if value_too_long:
+ value_too_long += ", "
+ value_too_long += val
+ if value_too_long:
+ raise ValidationError(
+ _(
+ "Tag(s) %(value_too_long)s are over %(max_tag_length)d characters"
+ % {
+ "value_too_long": value_too_long,
+ "max_tag_length": max_tag_length,
+ }
+ )
+ )
+
if not self.free_tagging:
# filter value to just the tags that already exist in tag_model
value = list(
</patch>
|
[]
|
[]
| |||
huggingface__transformers-10334
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Trainer.train argument resume_from_last_checkpoint
# 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
`Trainer.train` accepts a `resume_from_checkpoint` argument, which requires the user to explicitly provide the checkpoint location to continue training from.
A `resume_from_last_checkpoint` option would be useful for resuming training by picking the latest checkpoint from the `output_dir` of the `TrainingArguments` passed.
## Motivation
1. The checkpoint directory is created by the library, so the user needs to navigate to the directory to find the value to provide for `resume_from_checkpoint`
2. The user may just want to resume from the last valid checkpoint because their training was disrupted previously (a common scenario for wanting to resume training). All they know is the `output_dir` they provided initially
This motivates providing a `resume_from_last_checkpoint=True` option for the `Trainer.train(...)` call, which would pick the latest checkpoint from `args.output_dir`. FYI, the `get_last_checkpoint` function from `trainer_utils` can be used to do exactly the same.
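As an illustration, this is roughly what users can already do by hand today (a sketch that assumes `trainer` and `training_args` have been created as usual and that checkpoints are saved under `training_args.output_dir`); the proposed flag would simply fold this lookup into `Trainer.train`:

```python
from transformers.trainer_utils import get_last_checkpoint

# Path of the most recent "checkpoint-*" directory inside output_dir, or None if there is none yet
last_checkpoint = get_last_checkpoint(training_args.output_dir)

# Resume from it when it exists, otherwise start training from scratch
trainer.train(resume_from_checkpoint=last_checkpoint)
```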
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can raise a PR if it is a useful feature to have!
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/transformers/index.html">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 </p>
39
40 <h3 align="center">
41 <p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0
42 </h3>
43
44 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation, etc., in 100+ languages. Its aim is to make cutting-edge NLP easier to use for everyone.
45
46 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture can be used as a standalone and modified to enable quick research experiments.
47
48 🤗 Transformers is backed by the two most popular deep learning libraries, [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them, allowing you to train your models with one then load it for inference with the other.
49
50 ## Online demos
51
52 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) to use those models.
53
54 Here are a few examples:
55 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
56 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
57 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
58 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
59 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
60 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
61 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
62
63 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
64
65 ## Quick tour
66
67 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model training. Here is how to quickly use a pipeline to classify positive versus negative texts
68
69 ```python
70 >>> from transformers import pipeline
71
72 # Allocate a pipeline for sentiment-analysis
73 >>> classifier = pipeline('sentiment-analysis')
74 >>> classifier('We are very happy to include pipeline into the transformers repository.')
75 [{'label': 'POSITIVE', 'score': 0.9978193640708923}]
76 ```
77
78 The second line of code downloads and caches the pretrained model used by the pipeline, while the third line evaluates it on the given text. Here the answer is "positive" with a confidence of 99.8%.
79
80 Here is another example of a pipeline, this one extracting an answer to a question from some context:
81
82 ``` python
83 >>> from transformers import pipeline
84
85 # Allocate a pipeline for question-answering
86 >>> question_answerer = pipeline('question-answering')
87 >>> question_answerer({
88 ... 'question': 'What is the name of the repository ?',
89 ... 'context': 'Pipeline have been included in the huggingface/transformers repository'
90 ... })
91 {'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
92
93 ```
94
95 On top of the answer, the pretrained model used here returned its confidence score, along with the start position and its end position in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/transformers/task_summary.html).
96
97 To download and use any of the pretrained models on your given task, you just need to use these three lines of code (PyTorch version):
98 ```python
99 >>> from transformers import AutoTokenizer, AutoModel
100
101 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
102 >>> model = AutoModel.from_pretrained("bert-base-uncased")
103
104 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
105 >>> outputs = model(**inputs)
106 ```
107 or for TensorFlow:
108 ```python
109 >>> from transformers import AutoTokenizer, TFAutoModel
110
111 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
112 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
113
114 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
115 >>> outputs = model(**inputs)
116 ```
117
118 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on one (or list) of texts (as we can see on the fourth line of both code examples). It will output a dictionary you can directly pass to your model (which is done on the fifth line).
119
120 The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. For instance, [this tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model in a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
121
122 ## Why should I use transformers?
123
124 1. Easy-to-use state-of-the-art models:
125 - High performance on NLU and NLG tasks.
126 - Low barrier to entry for educators and practitioners.
127 - Few user-facing abstractions with just three classes to learn.
128 - A unified API for using all our pretrained models.
129
130 1. Lower compute costs, smaller carbon footprint:
131 - Researchers can share trained models instead of always retraining.
132 - Practitioners can reduce compute time and production costs.
133 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
134
135 1. Choose the right framework for every part of a model's lifetime:
136 - Train state-of-the-art models in 3 lines of code.
137 - Move a single model between TF2.0/PyTorch frameworks at will.
138 - Seamlessly pick the right framework for training, evaluation, production.
139
140 1. Easily customize a model or an example to your needs:
141 - Examples for each architecture to reproduce the results by the official authors of said architecture.
142     - Expose the models' internals as consistently as possible.
143 - Model files can be used independently of the library for quick experiments.
144
145 ## Why shouldn't I use transformers?
146
147 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving in additional abstractions/files.
148 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
149 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
150
151 ## Installation
152
153 ### With pip
154
155 This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for [examples](https://github.com/huggingface/transformers/tree/master/examples)) and TensorFlow 2.0.
156
157 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
158
159 First, create a virtual environment with the version of Python you're going to use and activate it.
160
161 Then, you will need to install at least one of TensorFlow 2.0, PyTorch or Flax.
162 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform and/or [Flax installation page](https://github.com/google/flax#quick-install).
163
164 When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows:
165
166 ```bash
167 pip install transformers
168 ```
169
170 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/transformers/installation.html#installing-from-source).
171
172 ### With conda
173
174 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
175
176 🤗 Transformers can be installed using conda as follows:
177
178 ```bash
179 conda install -c huggingface transformers
180 ```
181
182 Follow the installation pages of TensorFlow, PyTorch or Flax to see how to install them with conda.
183
184 ## Models architectures
185
186 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
187
188 Current number of checkpoints: 
189
190 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/transformers/model_summary.html) for a high-level summary of each of them):
191
192 1. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
193 1. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
194 1. **[BARThez](https://huggingface.co/transformers/model_doc/barthez.html)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
195 1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
196 1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
197 1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
198 1. **[BlenderbotSmall](https://huggingface.co/transformers/model_doc/blenderbot_small.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
199 1. **[BORT](https://huggingface.co/transformers/model_doc/bort.html)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
200 1. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
201 1. **[ConvBERT](https://huggingface.co/transformers/model_doc/convbert.html)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
202 1. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
203 1. **[DeBERTa](https://huggingface.co/transformers/model_doc/deberta.html)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
204 1. **[DeBERTa-v2](https://huggingface.co/transformers/master/model_doc/deberta_v2.html)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
205 1. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
206 1. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
207 1. **[DPR](https://huggingface.co/transformers/model_doc/dpr.html)** (from Facebook) released with the paper [Dense Passage Retrieval
208 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
209 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
210 1. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
211 1. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
212 1. **[Funnel Transformer](https://huggingface.co/transformers/model_doc/funnel.html)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
213 1. **[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
214 1. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
215 1. **[LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
216 1. **[LED](https://huggingface.co/transformers/model_doc/led.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
217 1. **[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
218 1. **[LXMERT](https://huggingface.co/transformers/model_doc/lxmert.html)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
219 1. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
220 1. **[MBart](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
221 1. **[MBart-50](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
222 1. **[MPNet](https://huggingface.co/transformers/model_doc/mpnet.html)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
223 1. **[MT5](https://huggingface.co/transformers/model_doc/mt5.html)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
224 1. **[Pegasus](https://huggingface.co/transformers/model_doc/pegasus.html)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
225 1. **[ProphetNet](https://huggingface.co/transformers/model_doc/prophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
226 1. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
227 1. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
228 1. **[SqueezeBert](https://huggingface.co/transformers/model_doc/squeezebert.html)** released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
229 1. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
230 1. **[TAPAS](https://huggingface.co/transformers/model_doc/tapas.html)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
231 1. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
232 1. **[Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
233 1. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
234 1. **[XLM-ProphetNet](https://huggingface.co/transformers/model_doc/xlmprophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
235 1. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
236 1. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
237 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
238
239 To check if each model has an implementation in PyTorch/TensorFlow/Flax or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#bigtable).
240
241 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
242
243
244 ## Learn more
245
246 | Section | Description |
247 |-|-|
248 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
249 | [Task summary](https://huggingface.co/transformers/task_summary.html) | Tasks supported by 🤗 Transformers |
250 | [Preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) | Using the `Tokenizer` class to prepare data for the models |
251 | [Training and fine-tuning](https://huggingface.co/transformers/training.html) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
252 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
253 | [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) | Upload and share your fine-tuned models with the community |
254 | [Migration](https://huggingface.co/transformers/migration.html) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
255
256 ## Citation
257
258 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
259 ```bibtex
260 @inproceedings{wolf-etal-2020-transformers,
261 title = "Transformers: State-of-the-Art Natural Language Processing",
262 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
263 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
264 month = oct,
265 year = "2020",
266 address = "Online",
267 publisher = "Association for Computational Linguistics",
268 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
269 pages = "38--45"
270 }
271 ```
272
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
eab0afc19ceaf9a31190777f5548312d2346cd44
|
Trainer.train argument resume_from_last_checkpoint
# 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
`Trainer.train` accepts a `resume_from_checkpoint` argument, which requires the user to explicitly provide the checkpoint location to continue training from.
A `resume_from_last_checkpoint` argument would be useful for resuming training by picking the latest checkpoint from the `output_dir` of the `TrainingArguments` that were passed.
## Motivation
1. The checkpoint directory is created by the library, so the user needs to navigate to that directory to find the value to provide for `resume_from_checkpoint`
2. The user may simply want to resume from the last valid checkpoint because their training was disrupted earlier (a common scenario for someone wanting to resume training); all they know is the `output_dir` they provided initially
This motivates providing a `resume_from_last_checkpoint=True` argument to the `Trainer.train(...)` call, which would pick the latest checkpoint from `args.output_dir`. FYI, the `get_last_checkpoint` function from `trainer_utils` can be used to do exactly this.
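To make the proposal concrete, here is a minimal sketch of what the lookup could do on the user side today, relying only on the existing `get_last_checkpoint` helper mentioned above; the function name `resolve_resume_path` and the `training_args`/`trainer` objects are illustrative, not part of the library or of this proposal:
```python
from transformers.trainer_utils import get_last_checkpoint

def resolve_resume_path(training_args):
    # get_last_checkpoint returns the checkpoint-<step> folder with the
    # highest step under output_dir, or None if no checkpoint exists.
    last_checkpoint = get_last_checkpoint(training_args.output_dir)
    if last_checkpoint is None:
        raise ValueError(
            f"No valid checkpoint found in output directory ({training_args.output_dir})"
        )
    return last_checkpoint

# trainer.train(resume_from_checkpoint=resolve_resume_path(training_args))
```
The proposed `resume_from_last_checkpoint=True` argument would simply fold this lookup into `Trainer.train` itself.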
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can raise a PR if it is a useful feature to have!
|
2021-02-22T15:43:34Z
|
<patch>
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -97,6 +97,7 @@
TrainOutput,
default_compute_objective,
default_hp_space,
+ get_last_checkpoint,
set_seed,
speed_metrics,
)
@@ -758,7 +759,7 @@ def _wrap_model(self, model, training=True):
def train(
self,
- resume_from_checkpoint: Optional[str] = None,
+ resume_from_checkpoint: Optional[Union[str, bool]] = None,
trial: Union["optuna.Trial", Dict[str, Any]] = None,
**kwargs,
):
@@ -766,9 +767,11 @@ def train(
Main training entry point.
Args:
- resume_from_checkpoint (:obj:`str`, `optional`):
- Local path to a saved checkpoint as saved by a previous instance of :class:`~transformers.Trainer`. If
- present, training will resume from the model/optimizer/scheduler states loaded here.
+ resume_from_checkpoint (:obj:`str` or :obj:`bool`, `optional`):
+ If a :obj:`str`, local path to a saved checkpoint as saved by a previous instance of
+ :class:`~transformers.Trainer`. If a :obj:`bool` and equals `True`, load the last checkpoint in
+ `args.output_dir` as saved by a previous instance of :class:`~transformers.Trainer`. If present,
+ training will resume from the model/optimizer/scheduler states loaded here.
trial (:obj:`optuna.Trial` or :obj:`Dict[str, Any]`, `optional`):
The trial run or the hyperparameter dictionary for hyperparameter search.
kwargs:
@@ -803,6 +806,11 @@ def train(
self.optimizer, self.lr_scheduler = None, None
# Load potential model checkpoint
+ if isinstance(resume_from_checkpoint, bool) and resume_from_checkpoint:
+ resume_from_checkpoint = get_last_checkpoint(self.args.output_dir)
+ if resume_from_checkpoint is None:
+ raise ValueError(f"No valid checkpoint found in output directory ({self.args.output_dir})")
+
if resume_from_checkpoint is not None and os.path.isfile(os.path.join(resume_from_checkpoint, WEIGHTS_NAME)):
logger.info(f"Loading model from {resume_from_checkpoint}).")
if isinstance(self.model, PreTrainedModel):
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-9525
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Non-monotonic-increasing DatetimeIndex claims not to __contain__ duplicate entries
This was fun to debug.
``` python
In [1]: import pandas as pd
In [2]: 0 in pd.Int64Index([0, 0, 1])
Out[2]: True
In [3]: 0 in pd.Int64Index([0, 1, 0])
Out[3]: True
In [4]: 0 in pd.Int64Index([0, 0, -1])
Out[4]: True
In [5]: pd.Timestamp(0) in pd.DatetimeIndex([0, 1, -1])
Out[5]: True
In [6]: pd.Timestamp(0) in pd.DatetimeIndex([0, 1, 0])
Out[6]: False # BAD
In [7]: pd.Timestamp(0) in pd.DatetimeIndex([0, 0, 1])
Out[7]: True
In [8]: pd.Timestamp(0) in pd.DatetimeIndex([0, 0, -1])
Out[8]: False # BAD
```
TimedeltaIndex is also broken.
The problem is in [`DatetimeIndexOpsMixin.__contains__`](https://github.com/pydata/pandas/blob/v0.15.2/pandas/tseries/base.py#L68), which checks the type of `idx.get_loc(key)` to determine whether the key was found in the index. If the index contains duplicate entries and is not monotonic increasing (for some reason, monotonic decreasing doesn't cut it), `get_loc` eventually falls back to [`Int64Engine._maybe_get_bool_indexer`](https://github.com/pydata/pandas/blob/v0.15.2/pandas/index.pyx#L376), which returns an ndarray of bools if the key is duplicated. Since the original `__contains__` method is looking for scalars or slices, it reports that the duplicated entry is not present.
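To make the failure mode concrete, here is a self-contained sketch of the containment logic described above (an illustration, not the actual pandas method; the name `index_contains` is made up for this example). The last clause is the part the current implementation is missing: a boolean mask with at least one hit should also count as "found".
```python
import numpy as np

def index_contains(index, key):
    try:
        res = index.get_loc(key)
    except (KeyError, TypeError):
        return False
    # get_loc may return an integer position, a slice, or (for duplicated
    # keys in a non-monotonic-increasing index) a boolean mask over the index.
    return bool(np.isscalar(res) or isinstance(res, slice) or np.any(res))
```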
FIX: Fix some instances where idx[0] not in idx
`DatetimeIndex.__contains__` and `TimedeltaIndex.__contains__` were failing to see duplicated elements in some circumstances.
Fixes #9512
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 ## What is it
6
7 **pandas** is a Python package providing fast, flexible, and expressive data
8 structures designed to make working with "relational" or "labeled" data both
9 easy and intuitive. It aims to be the fundamental high-level building block for
10 doing practical, **real world** data analysis in Python. Additionally, it has
11 the broader goal of becoming **the most powerful and flexible open source data
12 analysis / manipulation tool available in any language**. It is already well on
13 its way toward this goal.
14
15 ## Main Features
16 Here are just a few of the things that pandas does well:
17
18 - Easy handling of [**missing data**][missing-data] (represented as
19 `NaN`) in floating point as well as non-floating point data
20 - Size mutability: columns can be [**inserted and
21 deleted**][insertion-deletion] from DataFrame and higher dimensional
22 objects
23 - Automatic and explicit [**data alignment**][alignment]: objects can
24 be explicitly aligned to a set of labels, or the user can simply
25 ignore the labels and let `Series`, `DataFrame`, etc. automatically
26 align the data for you in computations
27 - Powerful, flexible [**group by**][groupby] functionality to perform
28 split-apply-combine operations on data sets, for both aggregating
29 and transforming data
30 - Make it [**easy to convert**][conversion] ragged,
31 differently-indexed data in other Python and NumPy data structures
32 into DataFrame objects
33 - Intelligent label-based [**slicing**][slicing], [**fancy
34 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
35 large data sets
36 - Intuitive [**merging**][merging] and [**joining**][joining] data
37 sets
38 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
39 data sets
40 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
41 labels per tick)
42 - Robust IO tools for loading data from [**flat files**][flat-files]
43 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
44 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
45 - [**Time series**][timeseries]-specific functionality: date range
46 generation and frequency conversion, moving window statistics,
47 moving window linear regressions, date shifting and lagging, etc.
48
49
50 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
51 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
52 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
53 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
54 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
55 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
56 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
57 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
58 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
59 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
60 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
61 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
62 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
63 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
64 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
65 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
66 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
67 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
68
69 ## Where to get it
70 The source code is currently hosted on GitHub at:
71 http://github.com/pydata/pandas
72
73 Binary installers for the latest released version are available at the Python
74 package index
75
76 http://pypi.python.org/pypi/pandas/
77
78 And via `easy_install`:
79
80 ```sh
81 easy_install pandas
82 ```
83
84 or `pip`:
85
86 ```sh
87 pip install pandas
88 ```
89
90 ## Dependencies
91 - [NumPy](http://www.numpy.org): 1.7.0 or higher
92 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
93 - [pytz](http://pytz.sourceforge.net)
94 - Needed for time zone support with ``pandas.date_range``
95
96 ### Highly Recommended Dependencies
97 - [numexpr](https://github.com/pydata/numexpr)
98 - Needed to accelerate some expression evaluation operations
99 - Required by PyTables
100 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
101 - Needed to accelerate certain numerical operations
102
103 ### Optional dependencies
104 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
105 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
106 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
107 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
108 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
109 - [statsmodels](http://statsmodels.sourceforge.net/)
110 - Needed for parts of `pandas.stats`
111 - For Excel I/O:
112 - [xlrd/xlwt](http://www.python-excel.org/)
113 - Excel reading (xlrd) and writing (xlwt)
114 - [openpyxl](http://packages.python.org/openpyxl/)
115 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
116 writing .xlsx files
117 - xlrd >= 0.9.0
118 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
119 - Alternative Excel writer.
120 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
121 - Needed for `pandas.io.gbq`
122 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
123 - One of the following combinations of libraries is needed to use the
124 top-level [`pandas.read_html`][read-html-docs] function:
125 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
126 recent version of [html5lib][html5lib] is okay.)
127 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
128 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
129 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
130 for reasons as to why you should probably **not** take this approach.
131
132 #### Notes about HTML parsing libraries
133 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
134 either [lxml][lxml] or [html5lib][html5lib] or both.
135 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
136 installed.
137 - You are strongly encouraged to read [HTML reading
138 gotchas][html-gotchas]. It explains issues surrounding the
139 installation and usage of the above three libraries.
140 - You may need to install an older version of
141 [BeautifulSoup4][BeautifulSoup4]:
142 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
143 32-bit Ubuntu/Debian
144 - Additionally, if you're using [Anaconda][Anaconda] you should
145 definitely read [the gotchas about HTML parsing][html-gotchas]
146 libraries
147 - If you're on a system with `apt-get` you can do
148
149 ```sh
150 sudo apt-get build-dep python-lxml
151 ```
152
153 to get the necessary dependencies for installation of [lxml][lxml].
154 This will prevent further headaches down the line.
155
156 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
157 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
158 [lxml]: http://lxml.de
159 [Anaconda]: https://store.continuum.io/cshop/anaconda
160 [NumPy]: http://numpy.scipy.org/
161 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
162 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
163
164 ## Installation from sources
165 To install pandas from source you need Cython in addition to the normal
166 dependencies above. Cython can be installed from pypi:
167
168 ```sh
169 pip install cython
170 ```
171
172 In the `pandas` directory (same one where you found this file after
173 cloning the git repo), execute:
174
175 ```sh
176 python setup.py install
177 ```
178
179 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
180
181 ```sh
182 python setup.py develop
183 ```
184
185 Alternatively, you can use `pip` if you want all the dependencies pulled
186 in automatically (the `-e` option is for installing it in [development
187 mode](http://www.pip-installer.org/en/latest/usage.html)):
188
189 ```sh
190 pip install -e .
191 ```
192
193 On Windows, you will need to install MinGW and execute:
194
195 ```sh
196 python setup.py build --compiler=mingw32
197 python setup.py install
198 ```
199
200 See http://pandas.pydata.org/ for more information.
201
202 ## License
203 BSD
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Discussion and Development
216 Since pandas development is related to a number of other scientific
217 Python projects, questions are welcome on the scipy-user mailing
218 list. Specialized discussions or design issues should take place on
219 the PyData mailing list / Google group:
220
221 https://groups.google.com/forum/#!forum/pydata
222
[end of README.md]
[start of pandas/tseries/period.py]
1 # pylint: disable=E1101,E1103,W0232
2 import operator
3
4 from datetime import datetime, date, timedelta
5 import numpy as np
6 from pandas.core.base import PandasObject
7
8 import pandas.tseries.frequencies as frequencies
9 from pandas.tseries.frequencies import get_freq_code as _gfc
10 from pandas.tseries.index import DatetimeIndex, Int64Index, Index
11 from pandas.tseries.base import DatetimeIndexOpsMixin
12 from pandas.tseries.tools import parse_time_string
13 import pandas.tseries.offsets as offsets
14
15 from pandas._period import Period
16 import pandas._period as period
17 from pandas._period import (
18 get_period_field_arr,
19 _validate_end_alias,
20 _quarter_to_myear,
21 )
22
23 import pandas.core.common as com
24 from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box,
25 _values_from_object, ABCSeries)
26 from pandas import compat
27 from pandas.lib import Timestamp, Timedelta
28 import pandas.lib as lib
29 import pandas.tslib as tslib
30 import pandas.algos as _algos
31 from pandas.compat import zip, u
32
33
34 def _field_accessor(name, alias, docstring=None):
35 def f(self):
36 base, mult = _gfc(self.freq)
37 return get_period_field_arr(alias, self.values, base)
38 f.__name__ = name
39 f.__doc__ = docstring
40 return property(f)
41
42
43 def _get_ordinals(data, freq):
44 f = lambda x: Period(x, freq=freq).ordinal
45 if isinstance(data[0], Period):
46 return period.extract_ordinals(data, freq)
47 else:
48 return lib.map_infer(data, f)
49
50
51 def dt64arr_to_periodarr(data, freq, tz):
52 if data.dtype != np.dtype('M8[ns]'):
53 raise ValueError('Wrong dtype: %s' % data.dtype)
54
55 base, mult = _gfc(freq)
56 return period.dt64arr_to_periodarr(data.view('i8'), base, tz)
57
58 # --- Period index sketch
59
60 def _period_index_cmp(opname, nat_result=False):
61 """
62 Wrap comparison operations to convert datetime-like to datetime64
63 """
64 def wrapper(self, other):
65 if isinstance(other, Period):
66 func = getattr(self.values, opname)
67 if other.freq != self.freq:
68 raise AssertionError("Frequencies must be equal")
69
70 result = func(other.ordinal)
71 elif isinstance(other, PeriodIndex):
72 if other.freq != self.freq:
73 raise AssertionError("Frequencies must be equal")
74
75 result = getattr(self.values, opname)(other.values)
76
77 mask = (com.mask_missing(self.values, tslib.iNaT) |
78 com.mask_missing(other.values, tslib.iNaT))
79 if mask.any():
80 result[mask] = nat_result
81
82 return result
83 else:
84 other = Period(other, freq=self.freq)
85 func = getattr(self.values, opname)
86 result = func(other.ordinal)
87
88 if other.ordinal == tslib.iNaT:
89 result.fill(nat_result)
90 mask = self.values == tslib.iNaT
91 if mask.any():
92 result[mask] = nat_result
93
94 return result
95 return wrapper
96
97
98 class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
99 """
100 Immutable ndarray holding ordinal values indicating regular periods in
101 time such as particular years, quarters, months, etc. A value of 1 is the
102 period containing the Gregorian proleptic datetime Jan 1, 0001 00:00:00.
103 This ordinal representation is from the scikits.timeseries project.
104
105 For instance,
106 # construct period for day 1/1/1 and get the first second
107 i = Period(year=1,month=1,day=1,freq='D').asfreq('S', 'S')
108 i.ordinal
109 ===> 1
110
111 Index keys are boxed to Period objects which carries the metadata (eg,
112 frequency information).
113
114 Parameters
115 ----------
116 data : array-like (1-dimensional), optional
117 Optional period-like data to construct index with
118 dtype : NumPy dtype (default: i8)
119 copy : bool
120 Make a copy of input ndarray
121 freq : string or period object, optional
122 One of pandas period strings or corresponding objects
123 start : starting value, period-like, optional
124 If data is None, used as the start point in generating regular
125 period data.
126 periods : int, optional, > 0
127 Number of periods to generate, if generating index. Takes precedence
128 over end argument
129 end : end value, period-like, optional
130 If periods is none, generated index will extend to first conforming
131 period on or just past end argument
132 year : int, array, or Series, default None
133 month : int, array, or Series, default None
134 quarter : int, array, or Series, default None
135 day : int, array, or Series, default None
136 hour : int, array, or Series, default None
137 minute : int, array, or Series, default None
138 second : int, array, or Series, default None
139 tz : object, default None
140 Timezone for converting datetime64 data to Periods
141
142 Examples
143 --------
144 >>> idx = PeriodIndex(year=year_arr, quarter=q_arr)
145
146 >>> idx2 = PeriodIndex(start='2000', end='2010', freq='A')
147 """
148 _box_scalars = True
149 _typ = 'periodindex'
150 _attributes = ['name','freq']
151 _datetimelike_ops = ['year','month','day','hour','minute','second',
152 'weekofyear','week','dayofweek','weekday','dayofyear','quarter', 'qyear', 'freq']
153 _is_numeric_dtype = False
154 freq = None
155
156 __eq__ = _period_index_cmp('__eq__')
157 __ne__ = _period_index_cmp('__ne__', nat_result=True)
158 __lt__ = _period_index_cmp('__lt__')
159 __gt__ = _period_index_cmp('__gt__')
160 __le__ = _period_index_cmp('__le__')
161 __ge__ = _period_index_cmp('__ge__')
162
163 def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
164 periods=None, copy=False, name=None, tz=None, **kwargs):
165
166 freq = frequencies.get_standard_freq(freq)
167
168 if periods is not None:
169 if com.is_float(periods):
170 periods = int(periods)
171 elif not com.is_integer(periods):
172 raise ValueError('Periods must be a number, got %s' %
173 str(periods))
174
175 if data is None:
176 if ordinal is not None:
177 data = np.asarray(ordinal, dtype=np.int64)
178 else:
179 data, freq = cls._generate_range(start, end, periods,
180 freq, kwargs)
181 else:
182 ordinal, freq = cls._from_arraylike(data, freq, tz)
183 data = np.array(ordinal, dtype=np.int64, copy=False)
184
185 return cls._simple_new(data, name=name, freq=freq)
186
187 @classmethod
188 def _generate_range(cls, start, end, periods, freq, fields):
189 field_count = len(fields)
190 if com._count_not_none(start, end) > 0:
191 if field_count > 0:
192 raise ValueError('Can either instantiate from fields '
193 'or endpoints, but not both')
194 subarr, freq = _get_ordinal_range(start, end, periods, freq)
195 elif field_count > 0:
196 subarr, freq = _range_from_fields(freq=freq, **fields)
197 else:
198 raise ValueError('Not enough parameters to construct '
199 'Period range')
200
201 return subarr, freq
202
203 @classmethod
204 def _from_arraylike(cls, data, freq, tz):
205
206 if not isinstance(data, (np.ndarray, PeriodIndex, DatetimeIndex, Int64Index)):
207 if np.isscalar(data) or isinstance(data, Period):
208 raise ValueError('PeriodIndex() must be called with a '
209 'collection of some kind, %s was passed'
210 % repr(data))
211
212 # other iterable of some kind
213 if not isinstance(data, (list, tuple)):
214 data = list(data)
215
216 try:
217 data = com._ensure_int64(data)
218 if freq is None:
219 raise ValueError('freq not specified')
220 data = np.array([Period(x, freq=freq).ordinal for x in data],
221 dtype=np.int64)
222 except (TypeError, ValueError):
223 data = com._ensure_object(data)
224
225 if freq is None and len(data) > 0:
226 freq = getattr(data[0], 'freq', None)
227
228 if freq is None:
229 raise ValueError('freq not specified and cannot be '
230 'inferred from first element')
231
232 data = _get_ordinals(data, freq)
233 else:
234 if isinstance(data, PeriodIndex):
235 if freq is None or freq == data.freq:
236 freq = data.freq
237 data = data.values
238 else:
239 base1, _ = _gfc(data.freq)
240 base2, _ = _gfc(freq)
241 data = period.period_asfreq_arr(data.values, base1,
242 base2, 1)
243 else:
244 if freq is None and len(data) > 0:
245 freq = getattr(data[0], 'freq', None)
246
247 if freq is None:
248 raise ValueError('freq not specified and cannot be '
249 'inferred from first element')
250
251 if data.dtype != np.int64:
252 if np.issubdtype(data.dtype, np.datetime64):
253 data = dt64arr_to_periodarr(data, freq, tz)
254 else:
255 try:
256 data = com._ensure_int64(data)
257 except (TypeError, ValueError):
258 data = com._ensure_object(data)
259 data = _get_ordinals(data, freq)
260
261 return data, freq
262
263 @classmethod
264 def _simple_new(cls, values, name=None, freq=None, **kwargs):
265 result = object.__new__(cls)
266 result._data = values
267 result.name = name
268 result.freq = freq
269 result._reset_identity()
270 return result
271
272 @property
273 def _na_value(self):
274 return self._box_func(tslib.iNaT)
275
276 def __contains__(self, key):
277 if not isinstance(key, Period) or key.freq != self.freq:
278 if isinstance(key, compat.string_types):
279 try:
280 self.get_loc(key)
281 return True
282 except Exception:
283 return False
284 return False
285 return key.ordinal in self._engine
286
287 @property
288 def _box_func(self):
289 return lambda x: Period._from_ordinal(ordinal=x, freq=self.freq)
290
291 def _to_embed(self, keep_tz=False):
292 """ return an array repr of this object, potentially casting to object """
293 return self.asobject.values
294
295 def asof_locs(self, where, mask):
296 """
297 where : array of timestamps
298 mask : array of booleans where data is not NA
299
300 """
301 where_idx = where
302 if isinstance(where_idx, DatetimeIndex):
303 where_idx = PeriodIndex(where_idx.values, freq=self.freq)
304
305 locs = self.values[mask].searchsorted(where_idx.values, side='right')
306
307 locs = np.where(locs > 0, locs - 1, 0)
308 result = np.arange(len(self))[mask].take(locs)
309
310 first = mask.argmax()
311 result[(locs == 0) & (where_idx.values < self.values[first])] = -1
312
313 return result
314
315 def _array_values(self):
316 return self.asobject
317
318 def astype(self, dtype):
319 dtype = np.dtype(dtype)
320 if dtype == np.object_:
321 return Index(np.array(list(self), dtype), dtype)
322 elif dtype == _INT64_DTYPE:
323 return Index(self.values, dtype)
324 raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
325
326 def searchsorted(self, key, side='left'):
327 if isinstance(key, Period):
328 if key.freq != self.freq:
329 raise ValueError("Different period frequency: %s" % key.freq)
330 key = key.ordinal
331 elif isinstance(key, compat.string_types):
332 key = Period(key, freq=self.freq).ordinal
333
334 return self.values.searchsorted(key, side=side)
335
336 @property
337 def is_all_dates(self):
338 return True
339
340 @property
341 def is_full(self):
342 """
343 Returns True if there are any missing periods from start to end
344 """
345 if len(self) == 0:
346 return True
347 if not self.is_monotonic:
348 raise ValueError('Index is not monotonic')
349 values = self.values
350 return ((values[1:] - values[:-1]) < 2).all()
351
352 @property
353 def freqstr(self):
354 return self.freq
355
356 def asfreq(self, freq=None, how='E'):
357 how = _validate_end_alias(how)
358
359 freq = frequencies.get_standard_freq(freq)
360
361 base1, mult1 = _gfc(self.freq)
362 base2, mult2 = _gfc(freq)
363
364 if mult2 != 1:
365 raise ValueError('Only mult == 1 supported')
366
367 end = how == 'E'
368 new_data = period.period_asfreq_arr(self.values, base1, base2, end)
369 return self._simple_new(new_data, self.name, freq=freq)
370
371 def to_datetime(self, dayfirst=False):
372 return self.to_timestamp()
373
374 year = _field_accessor('year', 0, "The year of the period")
375 month = _field_accessor('month', 3, "The month as January=1, December=12")
376 day = _field_accessor('day', 4, "The days of the period")
377 hour = _field_accessor('hour', 5, "The hour of the period")
378 minute = _field_accessor('minute', 6, "The minute of the period")
379 second = _field_accessor('second', 7, "The second of the period")
380 weekofyear = _field_accessor('week', 8, "The week ordinal of the year")
381 week = weekofyear
382 dayofweek = _field_accessor('dayofweek', 10, "The day of the week with Monday=0, Sunday=6")
383 weekday = dayofweek
384 dayofyear = day_of_year = _field_accessor('dayofyear', 9, "The ordinal day of the year")
385 quarter = _field_accessor('quarter', 2, "The quarter of the date")
386 qyear = _field_accessor('qyear', 1)
387
388 def _get_object_array(self):
389 freq = self.freq
390 return np.array([ Period._from_ordinal(ordinal=x, freq=freq) for x in self.values], copy=False)
391
392 def _mpl_repr(self):
393 # how to represent ourselves to matplotlib
394 return self._get_object_array()
395
396 def equals(self, other):
397 """
398 Determines if two Index objects contain the same elements.
399 """
400 if self.is_(other):
401 return True
402
403 if (not hasattr(other, 'inferred_type') or
404 other.inferred_type != 'int64'):
405 try:
406 other = PeriodIndex(other)
407 except:
408 return False
409
410 return np.array_equal(self.asi8, other.asi8)
411
412 def to_timestamp(self, freq=None, how='start'):
413 """
414 Cast to DatetimeIndex
415
416 Parameters
417 ----------
418 freq : string or DateOffset, default 'D' for week or longer, 'S'
419 otherwise
420 Target frequency
421 how : {'s', 'e', 'start', 'end'}
422
423 Returns
424 -------
425 DatetimeIndex
426 """
427 how = _validate_end_alias(how)
428
429 if freq is None:
430 base, mult = _gfc(self.freq)
431 freq = frequencies.get_to_timestamp_base(base)
432
433 base, mult = _gfc(freq)
434 new_data = self.asfreq(freq, how)
435
436 new_data = period.periodarr_to_dt64arr(new_data.values, base)
437 return DatetimeIndex(new_data, freq='infer', name=self.name)
438
439 def _add_delta(self, other):
440 if isinstance(other, (timedelta, np.timedelta64, offsets.Tick, Timedelta)):
441 offset = frequencies.to_offset(self.freq)
442 if isinstance(offset, offsets.Tick):
443 nanos = tslib._delta_to_nanoseconds(other)
444 offset_nanos = tslib._delta_to_nanoseconds(offset)
445 if nanos % offset_nanos == 0:
446 return self.shift(nanos // offset_nanos)
447 elif isinstance(other, offsets.DateOffset):
448 freqstr = frequencies.get_standard_freq(other)
449 base = frequencies.get_base_alias(freqstr)
450
451 if base == self.freq:
452 return self.shift(other.n)
453 raise ValueError("Input has different freq from PeriodIndex(freq={0})".format(self.freq))
454
455 def shift(self, n):
456 """
457 Specialized shift which produces an PeriodIndex
458
459 Parameters
460 ----------
461 n : int
462 Periods to shift by
463 freq : freq string
464
465 Returns
466 -------
467 shifted : PeriodIndex
468 """
469 mask = self.values == tslib.iNaT
470 values = self.values + n
471 values[mask] = tslib.iNaT
472 return PeriodIndex(data=values, name=self.name, freq=self.freq)
473
474 @property
475 def inferred_type(self):
476 # b/c data is represented as ints make sure we can't have ambiguous
477 # indexing
478 return 'period'
479
480 def get_value(self, series, key):
481 """
482 Fast lookup of value from 1-dimensional ndarray. Only use this if you
483 know what you're doing
484 """
485 s = _values_from_object(series)
486 try:
487 return _maybe_box(self, super(PeriodIndex, self).get_value(s, key), series, key)
488 except (KeyError, IndexError):
489 try:
490 asdt, parsed, reso = parse_time_string(key, self.freq)
491 grp = frequencies._infer_period_group(reso)
492 freqn = frequencies._period_group(self.freq)
493
494 vals = self.values
495
496 # if our data is higher resolution than requested key, slice
497 if grp < freqn:
498 iv = Period(asdt, freq=(grp, 1))
499 ord1 = iv.asfreq(self.freq, how='S').ordinal
500 ord2 = iv.asfreq(self.freq, how='E').ordinal
501
502 if ord2 < vals[0] or ord1 > vals[-1]:
503 raise KeyError(key)
504
505 pos = np.searchsorted(self.values, [ord1, ord2])
506 key = slice(pos[0], pos[1] + 1)
507 return series[key]
508 elif grp == freqn:
509 key = Period(asdt, freq=self.freq).ordinal
510 return _maybe_box(self, self._engine.get_value(s, key), series, key)
511 else:
512 raise KeyError(key)
513 except TypeError:
514 pass
515
516 key = Period(key, self.freq).ordinal
517 return _maybe_box(self, self._engine.get_value(s, key), series, key)
518
519 def get_loc(self, key):
520 """
521 Get integer location for requested label
522
523 Returns
524 -------
525 loc : int
526 """
527 try:
528 return self._engine.get_loc(key)
529 except KeyError:
530 if com.is_integer(key):
531 raise
532
533 try:
534 asdt, parsed, reso = parse_time_string(key, self.freq)
535 key = asdt
536 except TypeError:
537 pass
538
539 key = Period(key, self.freq)
540 try:
541 return self._engine.get_loc(key.ordinal)
542 except KeyError:
543 raise KeyError(key)
544
545 def _maybe_cast_slice_bound(self, label, side):
546 """
547 If label is a string or a datetime, cast it to Period.ordinal according to
548 resolution.
549
550 Parameters
551 ----------
552 label : object
553 side : {'left', 'right'}
554
555 Returns
556 -------
557 bound : Period or object
558
559 Notes
560 -----
561 Value of `side` parameter should be validated in caller.
562
563 """
564 if isinstance(label, datetime):
565 return Period(label, freq=self.freq)
566 elif isinstance(label, compat.string_types):
567 try:
568 _, parsed, reso = parse_time_string(label, self.freq)
569 bounds = self._parsed_string_to_bounds(reso, parsed)
570 return bounds[0 if side == 'left' else 1]
571 except Exception:
572 raise KeyError(label)
573
574 return label
575
576 def _parsed_string_to_bounds(self, reso, parsed):
577 if reso == 'year':
578 t1 = Period(year=parsed.year, freq='A')
579 elif reso == 'month':
580 t1 = Period(year=parsed.year, month=parsed.month, freq='M')
581 elif reso == 'quarter':
582 q = (parsed.month - 1) // 3 + 1
583 t1 = Period(year=parsed.year, quarter=q, freq='Q-DEC')
584 elif reso == 'day':
585 t1 = Period(year=parsed.year, month=parsed.month, day=parsed.day,
586 freq='D')
587 elif reso == 'hour':
588 t1 = Period(year=parsed.year, month=parsed.month, day=parsed.day,
589 hour=parsed.hour, freq='H')
590 elif reso == 'minute':
591 t1 = Period(year=parsed.year, month=parsed.month, day=parsed.day,
592 hour=parsed.hour, minute=parsed.minute, freq='T')
593 elif reso == 'second':
594 t1 = Period(year=parsed.year, month=parsed.month, day=parsed.day,
595 hour=parsed.hour, minute=parsed.minute, second=parsed.second,
596 freq='S')
597 else:
598 raise KeyError(key)
599 return (t1.asfreq(self.freq, how='start'),
600 t1.asfreq(self.freq, how='end'))
601
602 def _get_string_slice(self, key):
603 if not self.is_monotonic:
604 raise ValueError('Partial indexing only valid for '
605 'ordered time series')
606
607 key, parsed, reso = parse_time_string(key, self.freq)
608
609 grp = frequencies._infer_period_group(reso)
610 freqn = frequencies._period_group(self.freq)
611 if reso in ['day', 'hour', 'minute', 'second'] and not grp < freqn:
612 raise KeyError(key)
613
614 t1, t2 = self._parsed_string_to_bounds(reso, parsed)
615 return slice(self.searchsorted(t1.ordinal, side='left'),
616 self.searchsorted(t2.ordinal, side='right'))
617
618 def join(self, other, how='left', level=None, return_indexers=False):
619 """
620 See Index.join
621 """
622 self._assert_can_do_setop(other)
623
624 result = Int64Index.join(self, other, how=how, level=level,
625 return_indexers=return_indexers)
626
627 if return_indexers:
628 result, lidx, ridx = result
629 return self._apply_meta(result), lidx, ridx
630 return self._apply_meta(result)
631
632 def _assert_can_do_setop(self, other):
633 if not isinstance(other, PeriodIndex):
634 raise ValueError('can only call with other PeriodIndex-ed objects')
635
636 if self.freq != other.freq:
637 raise ValueError('Only like-indexed PeriodIndexes compatible '
638 'for join (for now)')
639
640 def _wrap_union_result(self, other, result):
641 name = self.name if self.name == other.name else None
642 result = self._apply_meta(result)
643 result.name = name
644 return result
645
646 def _apply_meta(self, rawarr):
647 if not isinstance(rawarr, PeriodIndex):
648 rawarr = PeriodIndex(rawarr, freq=self.freq)
649 return rawarr
650
651 def __getitem__(self, key):
652 getitem = self._data.__getitem__
653 if np.isscalar(key):
654 val = getitem(key)
655 return Period(ordinal=val, freq=self.freq)
656 else:
657 if com.is_bool_indexer(key):
658 key = np.asarray(key)
659
660 result = getitem(key)
661 if result.ndim > 1:
662 # MPL kludge
663 # values = np.asarray(list(values), dtype=object)
664 # return values.reshape(result.shape)
665
666 return PeriodIndex(result, name=self.name, freq=self.freq)
667
668 return PeriodIndex(result, name=self.name, freq=self.freq)
669
670 def _format_native_types(self, na_rep=u('NaT'), **kwargs):
671
672 values = np.array(list(self), dtype=object)
673 mask = isnull(self.values)
674 values[mask] = na_rep
675
676 imask = ~mask
677 values[imask] = np.array([u('%s') % dt for dt in values[imask]])
678 return values.tolist()
679
680 def __array_finalize__(self, obj):
681 if not self.ndim: # pragma: no cover
682 return self.item()
683
684 self.freq = getattr(obj, 'freq', None)
685 self.name = getattr(obj, 'name', None)
686 self._reset_identity()
687
688 def _format_footer(self):
689 tagline = 'Length: %d, Freq: %s'
690 return tagline % (len(self), self.freqstr)
691
692 def take(self, indices, axis=None):
693 """
694 Analogous to ndarray.take
695 """
696 indices = com._ensure_platform_int(indices)
697 taken = self.values.take(indices, axis=axis)
698 return self._simple_new(taken, self.name, freq=self.freq)
699
700 def append(self, other):
701 """
702 Append a collection of Index options together
703
704 Parameters
705 ----------
706 other : Index or list/tuple of indices
707
708 Returns
709 -------
710 appended : Index
711 """
712 name = self.name
713 to_concat = [self]
714
715 if isinstance(other, (list, tuple)):
716 to_concat = to_concat + list(other)
717 else:
718 to_concat.append(other)
719
720 for obj in to_concat:
721 if isinstance(obj, Index) and obj.name != name:
722 name = None
723 break
724
725 to_concat = self._ensure_compat_concat(to_concat)
726
727 if isinstance(to_concat[0], PeriodIndex):
728 if len(set([x.freq for x in to_concat])) > 1:
729 # box
730 to_concat = [x.asobject.values for x in to_concat]
731 else:
732 cat_values = np.concatenate([x.values for x in to_concat])
733 return PeriodIndex(cat_values, freq=self.freq, name=name)
734
735 to_concat = [x.values if isinstance(x, Index) else x
736 for x in to_concat]
737 return Index(com._concat_compat(to_concat), name=name)
738
739 def __setstate__(self, state):
740 """Necessary for making this object picklable"""
741
742 if isinstance(state, dict):
743 super(PeriodIndex, self).__setstate__(state)
744
745 elif isinstance(state, tuple):
746
747 # < 0.15 compat
748 if len(state) == 2:
749 nd_state, own_state = state
750 data = np.empty(nd_state[1], dtype=nd_state[2])
751 np.ndarray.__setstate__(data, nd_state)
752
753 try: # backcompat
754 self.freq = own_state[1]
755 except:
756 pass
757
758 else: # pragma: no cover
759 data = np.empty(state)
760 np.ndarray.__setstate__(self, state)
761
762 self._data = data
763
764 else:
765 raise Exception("invalid pickle state")
766 _unpickle_compat = __setstate__
767
768 def tz_convert(self, tz):
769 """
770 Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
771
772 Parameters
773 ----------
774 tz : string, pytz.timezone, dateutil.tz.tzfile or None
775 Time zone for time. Corresponding timestamps would be converted to
776 time zone of the TimeSeries.
777 None will remove timezone holding UTC time.
778
779 Returns
780 -------
781 normalized : DatetimeIndex
782
783 Note
784 ----
785 Not currently implemented for PeriodIndex
786 """
787 raise NotImplementedError("Not yet implemented for PeriodIndex")
788
789 def tz_localize(self, tz, infer_dst=False):
790 """
791 Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil),
792 or remove timezone from tz-aware DatetimeIndex
793
794 Parameters
795 ----------
796 tz : string, pytz.timezone, dateutil.tz.tzfile or None
797 Time zone for time. Corresponding timestamps would be converted to
798 time zone of the TimeSeries.
799 None will remove timezone holding local time.
800 infer_dst : boolean, default False
801 Attempt to infer fall dst-transition hours based on order
802
803 Returns
804 -------
805 localized : DatetimeIndex
806
807 Note
808 ----
809 Not currently implemented for PeriodIndex
810 """
811 raise NotImplementedError("Not yet implemented for PeriodIndex")
812
813
814 PeriodIndex._add_numeric_methods_disabled()
815 PeriodIndex._add_logical_methods_disabled()
816 PeriodIndex._add_datetimelike_methods()
817
818
819 def _get_ordinal_range(start, end, periods, freq):
820 if com._count_not_none(start, end, periods) < 2:
821 raise ValueError('Must specify 2 of start, end, periods')
822
823 if start is not None:
824 start = Period(start, freq)
825 if end is not None:
826 end = Period(end, freq)
827
828 is_start_per = isinstance(start, Period)
829 is_end_per = isinstance(end, Period)
830
831 if is_start_per and is_end_per and start.freq != end.freq:
832 raise ValueError('Start and end must have same freq')
833 if ((is_start_per and start.ordinal == tslib.iNaT) or
834 (is_end_per and end.ordinal == tslib.iNaT)):
835 raise ValueError('Start and end must not be NaT')
836
837 if freq is None:
838 if is_start_per:
839 freq = start.freq
840 elif is_end_per:
841 freq = end.freq
842 else: # pragma: no cover
843 raise ValueError('Could not infer freq from start/end')
844
845 if periods is not None:
846 if start is None:
847 data = np.arange(end.ordinal - periods + 1,
848 end.ordinal + 1,
849 dtype=np.int64)
850 else:
851 data = np.arange(start.ordinal, start.ordinal + periods,
852 dtype=np.int64)
853 else:
854 data = np.arange(start.ordinal, end.ordinal + 1, dtype=np.int64)
855
856 return data, freq
857
858
859 def _range_from_fields(year=None, month=None, quarter=None, day=None,
860 hour=None, minute=None, second=None, freq=None):
861 if hour is None:
862 hour = 0
863 if minute is None:
864 minute = 0
865 if second is None:
866 second = 0
867 if day is None:
868 day = 1
869
870 ordinals = []
871
872 if quarter is not None:
873 if freq is None:
874 freq = 'Q'
875 base = frequencies.FreqGroup.FR_QTR
876 else:
877 base, mult = _gfc(freq)
878 if mult != 1:
879 raise ValueError('Only mult == 1 supported')
880 if base != frequencies.FreqGroup.FR_QTR:
881 raise AssertionError("base must equal FR_QTR")
882
883 year, quarter = _make_field_arrays(year, quarter)
884 for y, q in zip(year, quarter):
885 y, m = _quarter_to_myear(y, q, freq)
886 val = period.period_ordinal(y, m, 1, 1, 1, 1, 0, 0, base)
887 ordinals.append(val)
888 else:
889 base, mult = _gfc(freq)
890 if mult != 1:
891 raise ValueError('Only mult == 1 supported')
892
893 arrays = _make_field_arrays(year, month, day, hour, minute, second)
894 for y, mth, d, h, mn, s in zip(*arrays):
895 ordinals.append(period.period_ordinal(y, mth, d, h, mn, s, 0, 0, base))
896
897 return np.array(ordinals, dtype=np.int64), freq
898
899
900 def _make_field_arrays(*fields):
901 length = None
902 for x in fields:
903 if isinstance(x, (list, np.ndarray, ABCSeries)):
904 if length is not None and len(x) != length:
905 raise ValueError('Mismatched Period array lengths')
906 elif length is None:
907 length = len(x)
908
909 arrays = [np.asarray(x) if isinstance(x, (np.ndarray, list, ABCSeries))
910 else np.repeat(x, length) for x in fields]
911
912 return arrays
913
914
915 def pnow(freq=None):
916 return Period(datetime.now(), freq=freq)
917
918
919 def period_range(start=None, end=None, periods=None, freq='D', name=None):
920 """
921 Return a fixed frequency datetime index, with day (calendar) as the default
922 frequency
923
924
925 Parameters
926 ----------
927 start :
928 end :
929 periods : int, default None
930 Number of periods in the index
931 freq : str/DateOffset, default 'D'
932 Frequency alias
933 name : str, default None
934 Name for the resulting PeriodIndex
935
936 Returns
937 -------
938 prng : PeriodIndex
939 """
940 return PeriodIndex(start=start, end=end, periods=periods,
941 freq=freq, name=name)
942
[end of pandas/tseries/period.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
a064c7d976f5e93ea91a23f97690f1d180a03948
|
Non-monotonic-increasing DatetimeIndex claims not to __contain__ duplicate entries
This was fun to debug.
``` python
In [1]: import pandas as pd
In [2]: 0 in pd.Int64Index([0, 0, 1])
Out[2]: True
In [3]: 0 in pd.Int64Index([0, 1, 0])
Out[3]: True
In [4]: 0 in pd.Int64Index([0, 0, -1])
Out[4]: True
In [5]: pd.Timestamp(0) in pd.DatetimeIndex([0, 1, -1])
Out[5]: True
In [6]: pd.Timestamp(0) in pd.DatetimeIndex([0, 1, 0])
Out[6]: False # BAD
In [7]: pd.Timestamp(0) in pd.DatetimeIndex([0, 0, 1])
Out[7]: True
In [8]: pd.Timestamp(0) in pd.DatetimeIndex([0, 0, -1])
Out[8]: False # BAD
```
TimedeltaIndex is also broken.
The problem is in [`DatetimeIndexOpsMixin.__contains__`](https://github.com/pydata/pandas/blob/v0.15.2/pandas/tseries/base.py#L68), which checks the type of `idx.get_loc(key)` to determine whether the key was found in the index. If the index contains duplicate entries and is not monotonic increasing (for some reason, monotonic decreasing doesn't cut it), `get_loc` eventually falls back to [`Int64Engine._maybe_get_bool_indexer`](https://github.com/pydata/pandas/blob/v0.15.2/pandas/index.pyx#L376), which returns an ndarray of bools if the key is duplicated. Since the original `__contains__` method is looking for scalars or slices, it reports that the duplicated entry is not present.
FIX: Fix some instances where idx[0] not in idx
`DatetimeIndex.__contains__` and `TimedeltaIndex.__contains__` were failing to see duplicated elements in some circumstances.
Fixes #9512
|
Thanks for the report and the debugging. A PR to fix this would be very welcome!
I'm just afraid that some of the other indexing code accidentally depends on this bug and will blow up...
The fix here might be as simple as just adding `or np.any(res)` to that line... only way to find out is to try!
Truuuue....
The embarrassing truth is that I lost my SSH key for uploading to github, and haven't gotten around to regenerating it yet. So I won't be able to squash. /sheepish
|
2015-02-19T20:58:07Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.16.0.txt b/doc/source/whatsnew/v0.16.0.txt
--- a/doc/source/whatsnew/v0.16.0.txt
+++ b/doc/source/whatsnew/v0.16.0.txt
@@ -293,7 +293,7 @@ Bug Fixes
- Bug in ``DataFrame.where`` and ``Series.where`` coerce numerics to string incorrectly (:issue:`9280`)
- Bug in ``DataFrame.where`` and ``Series.where`` raise ``ValueError`` when string list-like is passed. (:issue:`9280`)
- Accessing ``Series.str`` methods on with non-string values now raises ``TypeError`` instead of producing incorrect results (:issue:`9184`)
-
+- Bug in ``DatetimeIndex.__contains__`` when index has duplicates and is not monotonic increasing (:issue:`9512`)
- Fixed division by zero error for ``Series.kurt()`` when all values are equal (:issue:`9197`)
diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py
--- a/pandas/tseries/base.py
+++ b/pandas/tseries/base.py
@@ -65,7 +65,7 @@ def _format_with_header(self, header, **kwargs):
def __contains__(self, key):
try:
res = self.get_loc(key)
- return np.isscalar(res) or type(res) == slice
+ return np.isscalar(res) or type(res) == slice or np.any(res)
except (KeyError, TypeError):
return False
</patch>
|
[]
|
[]
| |||
pypa__pip-2699
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Proposal: Add an option to ignore sdists when installing certain packages
Some packages are difficult or even impossible to install from source for some users. For example, Windows users without a compiler cannot install C extensions from source. However, such users may have a local wheelhouse.
If such a user wants to install, say, PyYAML and has a wheel for version 3.10, that wheel will not be used for `pip install PyYAML`, because the current version is 3.11, which takes precedence but has no wheel. The older version may, however, be fine for the user.
It should be possible for a user to say that for a specified list of packages, source distributions should be ignored as they are known to be unusable. (A warning should probably be given if a newer sdist exists when an older wheel is installed, but the wheel should still be used).
Suggested implementation:
```
pip install --binary-only numpy numpy
```
Note: this has the same problem of too much repetition that plagued the `--allow-external` option. Suggestions for an alternative UI would be much appreciated!
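To make the intended behavior concrete, here is a small sketch of the selection rule (illustrative only, not pip's internal API; the function name, the `binary_only` set, and the `(version, filename)` candidate shape are made up for this example):
```python
def filter_binary_only(project_name, candidates, binary_only=frozenset()):
    """candidates: iterable of (version, filename) pairs found for a project."""
    if project_name.lower() not in binary_only:
        return list(candidates)
    # For "binary only" projects, drop sdists so an older wheel can still win
    # over a newer source-only release.
    return [(version, filename) for version, filename in candidates
            if filename.endswith(".whl")]

# With binary_only={"pyyaml"}, the PyYAML example above keeps only the 3.10
# wheel even though a newer 3.11 sdist exists:
# filter_binary_only("PyYAML",
#                    [("3.11", "PyYAML-3.11.tar.gz"),
#                     ("3.10", "PyYAML-3.10-cp27-none-win_amd64.whl")],
#                    binary_only={"pyyaml"})
```
The warning suggested above for the case where a newer sdist exists could be emitted at the point where the sdists are dropped.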
</issue>
<code>
[start of README.rst]
1 pip
2 ===
3
4 The `PyPA recommended
5 <https://python-packaging-user-guide.readthedocs.org/en/latest/current.html>`_
6 tool for installing Python packages.
7
8 * `Installation <https://pip.pypa.io/en/stable/installing.html>`_
9 * `Documentation <https://pip.pypa.io/>`_
10 * `Changelog <https://pip.pypa.io/en/stable/news.html>`_
11 * `Github Page <https://github.com/pypa/pip>`_
12 * `Issue Tracking <https://github.com/pypa/pip/issues>`_
13 * `User mailing list <http://groups.google.com/group/python-virtualenv>`_
14 * `Dev mailing list <http://groups.google.com/group/pypa-dev>`_
15 * User IRC: #pypa on Freenode.
16 * Dev IRC: #pypa-dev on Freenode.
17
18
19 .. image:: https://pypip.in/v/pip/badge.png
20 :target: https://pypi.python.org/pypi/pip
21
22 .. image:: https://secure.travis-ci.org/pypa/pip.png?branch=develop
23 :target: http://travis-ci.org/pypa/pip
24
[end of README.rst]
[start of pip/index.py]
1 """Routines related to PyPI, indexes"""
2 from __future__ import absolute_import
3
4 import logging
5 import cgi
6 from collections import namedtuple
7 import itertools
8 import sys
9 import os
10 import re
11 import mimetypes
12 import posixpath
13 import warnings
14
15 from pip._vendor.six.moves.urllib import parse as urllib_parse
16 from pip._vendor.six.moves.urllib import request as urllib_request
17
18 from pip.compat import ipaddress
19 from pip.utils import (
20 Inf, cached_property, normalize_name, splitext, normalize_path,
21 ARCHIVE_EXTENSIONS, SUPPORTED_EXTENSIONS)
22 from pip.utils.deprecation import RemovedInPip8Warning
23 from pip.utils.logging import indent_log
24 from pip.exceptions import (
25 DistributionNotFound, BestVersionAlreadyInstalled, InvalidWheelFilename,
26 UnsupportedWheel,
27 )
28 from pip.download import url_to_path, path_to_url
29 from pip.models import PyPI
30 from pip.wheel import Wheel, wheel_ext
31 from pip.pep425tags import supported_tags, supported_tags_noarch, get_platform
32 from pip._vendor import html5lib, requests, pkg_resources, six
33 from pip._vendor.packaging.version import parse as parse_version
34 from pip._vendor.requests.exceptions import SSLError
35
36
37 __all__ = ['PackageFinder']
38
39
40 # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
41 SECURE_ORIGINS = [
42 # protocol, hostname, port
43 ("https", "*", "*"),
44 ("*", "localhost", "*"),
45 ("*", "127.0.0.0/8", "*"),
46 ("*", "::1/128", "*"),
47 ("file", "*", None),
48 ]
49
50
51 logger = logging.getLogger(__name__)
52
53
54 class InstallationCandidate(object):
55
56 def __init__(self, project, version, location):
57 self.project = project
58 self.version = parse_version(version)
59 self.location = location
60 self._key = (self.project, self.version, self.location)
61
62 def __repr__(self):
63 return "<InstallationCandidate({0!r}, {1!r}, {2!r})>".format(
64 self.project, self.version, self.location,
65 )
66
67 def __hash__(self):
68 return hash(self._key)
69
70 def __lt__(self, other):
71
72 return self._compare(other, lambda s, o: s < o)
73
74 def __le__(self, other):
75 return self._compare(other, lambda s, o: s <= o)
76
77 def __eq__(self, other):
78 return self._compare(other, lambda s, o: s == o)
79
80 def __ge__(self, other):
81 return self._compare(other, lambda s, o: s >= o)
82
83 def __gt__(self, other):
84 return self._compare(other, lambda s, o: s > o)
85
86 def __ne__(self, other):
87 return self._compare(other, lambda s, o: s != o)
88
89 def _compare(self, other, method):
90 if not isinstance(other, InstallationCandidate):
91 return NotImplemented
92
93 return method(self._key, other._key)
94
95
96 class PackageFinder(object):
97 """This finds packages.
98
99 This is meant to match easy_install's technique for looking for
100 packages, by reading pages and looking for appropriate links
101 """
102
103 def __init__(self, find_links, index_urls,
104 use_wheel=True, allow_external=(), allow_unverified=(),
105 allow_all_external=False, allow_all_prereleases=False,
106 trusted_hosts=None, process_dependency_links=False,
107 session=None):
108 if session is None:
109 raise TypeError(
110 "PackageFinder() missing 1 required keyword argument: "
111 "'session'"
112 )
113
114 # Build find_links. If an argument starts with ~, it may be
115 # a local file relative to a home directory. So try normalizing
116 # it and if it exists, use the normalized version.
117 # This is deliberately conservative - it might be fine just to
118 # blindly normalize anything starting with a ~...
119 self.find_links = []
120 for link in find_links:
121 if link.startswith('~'):
122 new_link = normalize_path(link)
123 if os.path.exists(new_link):
124 link = new_link
125 self.find_links.append(link)
126
127 self.index_urls = index_urls
128 self.dependency_links = []
129
130 # These are boring links that have already been logged somehow:
131 self.logged_links = set()
132
133 self.use_wheel = use_wheel
134
135 # Do we allow (safe and verifiable) externally hosted files?
136 self.allow_external = set(normalize_name(n) for n in allow_external)
137
138 # Which names are allowed to install insecure and unverifiable files?
139 self.allow_unverified = set(
140 normalize_name(n) for n in allow_unverified
141 )
142
143 # Anything that is allowed unverified is also allowed external
144 self.allow_external |= self.allow_unverified
145
146 # Do we allow all (safe and verifiable) externally hosted files?
147 self.allow_all_external = allow_all_external
148
149 # Domains that we won't emit warnings for when not using HTTPS
150 self.secure_origins = [
151 ("*", host, "*")
152 for host in (trusted_hosts if trusted_hosts else [])
153 ]
154
155 # Stores if we ignored any external links so that we can instruct
156 # end users how to install them if no distributions are available
157 self.need_warn_external = False
158
159 # Stores if we ignored any unsafe links so that we can instruct
160 # end users how to install them if no distributions are available
161 self.need_warn_unverified = False
162
163 # Do we want to allow _all_ pre-releases?
164 self.allow_all_prereleases = allow_all_prereleases
165
166 # Do we process dependency links?
167 self.process_dependency_links = process_dependency_links
168
169 # The Session we'll use to make requests
170 self.session = session
171
172 def add_dependency_links(self, links):
173 # # FIXME: this shouldn't be global list this, it should only
174 # # apply to requirements of the package that specifies the
175 # # dependency_links value
176 # # FIXME: also, we should track comes_from (i.e., use Link)
177 if self.process_dependency_links:
178 warnings.warn(
179 "Dependency Links processing has been deprecated and will be "
180 "removed in a future release.",
181 RemovedInPip8Warning,
182 )
183 self.dependency_links.extend(links)
184
185 @staticmethod
186 def _sort_locations(locations, expand_dir=False):
187 """
188 Sort locations into "files" (archives) and "urls", and return
189 a pair of lists (files,urls)
190 """
191 files = []
192 urls = []
193
194 # puts the url for the given file path into the appropriate list
195 def sort_path(path):
196 url = path_to_url(path)
197 if mimetypes.guess_type(url, strict=False)[0] == 'text/html':
198 urls.append(url)
199 else:
200 files.append(url)
201
202 for url in locations:
203
204 is_local_path = os.path.exists(url)
205 is_file_url = url.startswith('file:')
206
207 if is_local_path or is_file_url:
208 if is_local_path:
209 path = url
210 else:
211 path = url_to_path(url)
212 if os.path.isdir(path):
213 if expand_dir:
214 path = os.path.realpath(path)
215 for item in os.listdir(path):
216 sort_path(os.path.join(path, item))
217 elif is_file_url:
218 urls.append(url)
219 elif os.path.isfile(path):
220 sort_path(path)
221 else:
222 urls.append(url)
223
224 return files, urls
225
226 def _candidate_sort_key(self, candidate):
227 """
228 Function used to generate link sort key for link tuples.
229 The greater the return value, the more preferred it is.
230 If not finding wheels, then sorted by version only.
231 If finding wheels, then the sort order is by version, then:
232 1. existing installs
233 2. wheels ordered via Wheel.support_index_min()
234 3. source archives
235 Note: it was considered to embed this logic into the Link
236 comparison operators, but then different sdist links
237 with the same version, would have to be considered equal
238 """
239 support_num = len(supported_tags)
240 if candidate.location == INSTALLED_VERSION:
241 pri = 1
242 elif candidate.location.is_wheel:
243 # can raise InvalidWheelFilename
244 wheel = Wheel(candidate.location.filename)
245 if not wheel.supported():
246 raise UnsupportedWheel(
247 "%s is not a supported wheel for this platform. It "
248 "can't be sorted." % wheel.filename
249 )
250 pri = -(wheel.support_index_min())
251 else: # sdist
252 pri = -(support_num)
253 return (candidate.version, pri)
254
255 def _sort_versions(self, applicable_versions):
256 """
257 Bring the latest version (and wheels) to the front, but maintain the
258 existing ordering as secondary. See the docstring for `_link_sort_key`
259 for details. This function is isolated for easier unit testing.
260 """
261 return sorted(
262 applicable_versions,
263 key=self._candidate_sort_key,
264 reverse=True
265 )
266
267 def _validate_secure_origin(self, logger, location):
268 # Determine if this url used a secure transport mechanism
269 parsed = urllib_parse.urlparse(str(location))
270 origin = (parsed.scheme, parsed.hostname, parsed.port)
271
272 # Determine if our origin is a secure origin by looking through our
273 # hardcoded list of secure origins, as well as any additional ones
274 # configured on this PackageFinder instance.
275 for secure_origin in (SECURE_ORIGINS + self.secure_origins):
276 # Check to see if the protocol matches
277 if origin[0] != secure_origin[0] and secure_origin[0] != "*":
278 continue
279
280 try:
281 # We need to do this decode dance to ensure that we have a
282 # unicode object, even on Python 2.x.
283 addr = ipaddress.ip_address(
284 origin[1]
285 if (
286 isinstance(origin[1], six.text_type) or
287 origin[1] is None
288 )
289 else origin[1].decode("utf8")
290 )
291 network = ipaddress.ip_network(
292 secure_origin[1]
293 if isinstance(secure_origin[1], six.text_type)
294 else secure_origin[1].decode("utf8")
295 )
296 except ValueError:
297 # We don't have both a valid address or a valid network, so
298 # we'll check this origin against hostnames.
299 if origin[1] != secure_origin[1] and secure_origin[1] != "*":
300 continue
301 else:
302 # We have a valid address and network, so see if the address
303 # is contained within the network.
304 if addr not in network:
305 continue
306
307             # Check to see if the port matches
308 if (origin[2] != secure_origin[2] and
309 secure_origin[2] != "*" and
310 secure_origin[2] is not None):
311 continue
312
313 # If we've gotten here, then this origin matches the current
314 # secure origin and we should return True
315 return True
316
317 # If we've gotten to this point, then the origin isn't secure and we
318 # will not accept it as a valid location to search. We will however
319 # log a warning that we are ignoring it.
320 logger.warning(
321 "The repository located at %s is not a trusted or secure host and "
322 "is being ignored. If this repository is available via HTTPS it "
323 "is recommended to use HTTPS instead, otherwise you may silence "
324 "this warning and allow it anyways with '--trusted-host %s'.",
325 parsed.hostname,
326 parsed.hostname,
327 )
328
329 return False
330
331 def _get_index_urls_locations(self, project_name):
332 """Returns the locations found via self.index_urls
333
334 Checks the url_name on the main (first in the list) index and
335 use this url_name to produce all locations
336 """
337
338 def mkurl_pypi_url(url):
339 loc = posixpath.join(url, project_url_name)
340 # For maximum compatibility with easy_install, ensure the path
341 # ends in a trailing slash. Although this isn't in the spec
342 # (and PyPI can handle it without the slash) some other index
343 # implementations might break if they relied on easy_install's
344 # behavior.
345 if not loc.endswith('/'):
346 loc = loc + '/'
347 return loc
348
349 project_url_name = urllib_parse.quote(project_name.lower())
350
351 if self.index_urls:
352 # Check that we have the url_name correctly spelled:
353
354 # Only check main index if index URL is given
355 main_index_url = Link(
356 mkurl_pypi_url(self.index_urls[0]),
357 trusted=True,
358 )
359
360 page = self._get_page(main_index_url)
361 if page is None and PyPI.netloc not in str(main_index_url):
362 warnings.warn(
363 "Failed to find %r at %s. It is suggested to upgrade "
364 "your index to support normalized names as the name in "
365 "/simple/{name}." % (project_name, main_index_url),
366 RemovedInPip8Warning,
367 )
368
369 project_url_name = self._find_url_name(
370 Link(self.index_urls[0], trusted=True),
371 project_url_name,
372 ) or project_url_name
373
374 if project_url_name is not None:
375 return [mkurl_pypi_url(url) for url in self.index_urls]
376 return []
377
378 def _find_all_versions(self, project_name):
379 """Find all available versions for project_name
380
381 This checks index_urls, find_links and dependency_links
382 All versions found are returned
383
384 See _link_package_versions for details on which files are accepted
385 """
386 index_locations = self._get_index_urls_locations(project_name)
387 index_file_loc, index_url_loc = self._sort_locations(index_locations)
388 fl_file_loc, fl_url_loc = self._sort_locations(
389 self.find_links, expand_dir=True)
390 dep_file_loc, dep_url_loc = self._sort_locations(self.dependency_links)
391
392 file_locations = (
393 Link(url) for url in itertools.chain(
394 index_file_loc, fl_file_loc, dep_file_loc)
395 )
396
397 # We trust every url that the user has given us whether it was given
398 # via --index-url or --find-links
399 # We explicitly do not trust links that came from dependency_links
400 # We want to filter out any thing which does not have a secure origin.
401 url_locations = [
402 link for link in itertools.chain(
403 (Link(url, trusted=True) for url in index_url_loc),
404 (Link(url, trusted=True) for url in fl_url_loc),
405 (Link(url) for url in dep_url_loc),
406 )
407 if self._validate_secure_origin(logger, link)
408 ]
409
410 logger.debug('%d location(s) to search for versions of %s:',
411 len(url_locations), project_name)
412
413 for location in url_locations:
414 logger.debug('* %s', location)
415
416 formats = set(["source"])
417 if self.use_wheel:
418 formats.add("binary")
419 search = Search(
420 project_name.lower(),
421 pkg_resources.safe_name(project_name).lower(),
422 frozenset(formats))
423 find_links_versions = self._package_versions(
424 # We trust every directly linked archive in find_links
425 (Link(url, '-f', trusted=True) for url in self.find_links),
426 search
427 )
428
429 page_versions = []
430 for page in self._get_pages(url_locations, project_name):
431 logger.debug('Analyzing links from page %s', page.url)
432 with indent_log():
433 page_versions.extend(
434 self._package_versions(page.links, search)
435 )
436
437 dependency_versions = self._package_versions(
438 (Link(url) for url in self.dependency_links), search
439 )
440 if dependency_versions:
441 logger.debug(
442 'dependency_links found: %s',
443 ', '.join([
444 version.location.url for version in dependency_versions
445 ])
446 )
447
448 file_versions = self._package_versions(file_locations, search)
449 if file_versions:
450 file_versions.sort(reverse=True)
451 logger.debug(
452 'Local files found: %s',
453 ', '.join([
454 url_to_path(candidate.location.url)
455 for candidate in file_versions
456 ])
457 )
458
459 # This is an intentional priority ordering
460 return (
461 file_versions + find_links_versions + page_versions +
462 dependency_versions
463 )
464
465 def find_requirement(self, req, upgrade):
466 """Try to find an InstallationCandidate for req
467
468 Expects req, an InstallRequirement and upgrade, a boolean
469 Returns an InstallationCandidate or None
470 May raise DistributionNotFound or BestVersionAlreadyInstalled
471 """
472 all_versions = self._find_all_versions(req.name)
473 # Filter out anything which doesn't match our specifier
474
475 _versions = set(
476 req.specifier.filter(
477 [x.version for x in all_versions],
478 prereleases=(
479 self.allow_all_prereleases
480 if self.allow_all_prereleases else None
481 ),
482 )
483 )
484 applicable_versions = [
485 x for x in all_versions if x.version in _versions
486 ]
487
488 if req.satisfied_by is not None:
489 # Finally add our existing versions to the front of our versions.
490 applicable_versions.insert(
491 0,
492 InstallationCandidate(
493 req.name,
494 req.satisfied_by.version,
495 INSTALLED_VERSION,
496 )
497 )
498 existing_applicable = True
499 else:
500 existing_applicable = False
501
502 applicable_versions = self._sort_versions(applicable_versions)
503
504 if not upgrade and existing_applicable:
505 if applicable_versions[0].location is INSTALLED_VERSION:
506 logger.debug(
507 'Existing installed version (%s) is most up-to-date and '
508 'satisfies requirement',
509 req.satisfied_by.version,
510 )
511 else:
512 logger.debug(
513 'Existing installed version (%s) satisfies requirement '
514 '(most up-to-date version is %s)',
515 req.satisfied_by.version,
516                     applicable_versions[0].version,
517 )
518 return None
519
520 if not applicable_versions:
521 logger.critical(
522 'Could not find a version that satisfies the requirement %s '
523 '(from versions: %s)',
524 req,
525 ', '.join(
526 sorted(
527 set(str(i.version) for i in all_versions),
528 key=parse_version,
529 )
530 )
531 )
532
533 if self.need_warn_external:
534 logger.warning(
535 "Some externally hosted files were ignored as access to "
536 "them may be unreliable (use --allow-external %s to "
537 "allow).",
538 req.name,
539 )
540
541 if self.need_warn_unverified:
542 logger.warning(
543 "Some insecure and unverifiable files were ignored"
544 " (use --allow-unverified %s to allow).",
545 req.name,
546 )
547
548 raise DistributionNotFound(
549 'No matching distribution found for %s' % req
550 )
551
552 if applicable_versions[0].location is INSTALLED_VERSION:
553 # We have an existing version, and its the best version
554 logger.debug(
555 'Installed version (%s) is most up-to-date (past versions: '
556 '%s)',
557 req.satisfied_by.version,
558 ', '.join(str(i.version) for i in applicable_versions[1:]) or
559 "none",
560 )
561 raise BestVersionAlreadyInstalled
562
563 if len(applicable_versions) > 1:
564 logger.debug(
565 'Using version %s (newest of versions: %s)',
566 applicable_versions[0].version,
567 ', '.join(str(i.version) for i in applicable_versions)
568 )
569
570 selected_version = applicable_versions[0].location
571
572 if (selected_version.verifiable is not None and not
573 selected_version.verifiable):
574 logger.warning(
575 "%s is potentially insecure and unverifiable.", req.name,
576 )
577
578 return selected_version
579
580 def _find_url_name(self, index_url, url_name):
581 """
582 Finds the true URL name of a package, when the given name isn't quite
583 correct.
584 This is usually used to implement case-insensitivity.
585 """
586 if not index_url.url.endswith('/'):
587 # Vaguely part of the PyPI API... weird but true.
588 # FIXME: bad to modify this?
589 index_url.url += '/'
590 page = self._get_page(index_url)
591 if page is None:
592 logger.critical('Cannot fetch index base URL %s', index_url)
593 return
594 norm_name = normalize_name(url_name)
595 for link in page.links:
596 base = posixpath.basename(link.path.rstrip('/'))
597 if norm_name == normalize_name(base):
598 logger.debug(
599 'Real name of requirement %s is %s', url_name, base,
600 )
601 return base
602 return None
603
604 def _get_pages(self, locations, project_name):
605 """
606 Yields (page, page_url) from the given locations, skipping
607 locations that have errors, and adding download/homepage links
608 """
609 all_locations = list(locations)
610 seen = set()
611 normalized = normalize_name(project_name)
612
613 while all_locations:
614 location = all_locations.pop(0)
615 if location in seen:
616 continue
617 seen.add(location)
618
619 page = self._get_page(location)
620 if page is None:
621 continue
622
623 yield page
624
625 for link in page.rel_links():
626
627 if (normalized not in self.allow_external and not
628 self.allow_all_external):
629 self.need_warn_external = True
630 logger.debug(
631 "Not searching %s for files because external "
632 "urls are disallowed.",
633 link,
634 )
635 continue
636
637 if (link.trusted is not None and not
638 link.trusted and
639 normalized not in self.allow_unverified):
640 logger.debug(
641 "Not searching %s for urls, it is an "
642 "untrusted link and cannot produce safe or "
643 "verifiable files.",
644 link,
645 )
646 self.need_warn_unverified = True
647 continue
648
649 all_locations.append(link)
650
651 _py_version_re = re.compile(r'-py([123]\.?[0-9]?)$')
652
653 def _sort_links(self, links):
654 """
655 Returns elements of links in order, non-egg links first, egg links
656 second, while eliminating duplicates
657 """
658 eggs, no_eggs = [], []
659 seen = set()
660 for link in links:
661 if link not in seen:
662 seen.add(link)
663 if link.egg_fragment:
664 eggs.append(link)
665 else:
666 no_eggs.append(link)
667 return no_eggs + eggs
668
669 def _package_versions(self, links, search):
670 result = []
671 for link in self._sort_links(links):
672 v = self._link_package_versions(link, search)
673 if v is not None:
674 result.append(v)
675 return result
676
677 def _log_skipped_link(self, link, reason):
678 if link not in self.logged_links:
679 logger.debug('Skipping link %s; %s', link, reason)
680 self.logged_links.add(link)
681
682 def _link_package_versions(self, link, search):
683 """Return an InstallationCandidate or None"""
684 platform = get_platform()
685
686 version = None
687 if link.egg_fragment:
688 egg_info = link.egg_fragment
689 else:
690 egg_info, ext = link.splitext()
691 if not ext:
692 self._log_skipped_link(link, 'not a file')
693 return
694 if ext not in SUPPORTED_EXTENSIONS:
695 self._log_skipped_link(
696 link, 'unsupported archive format: %s' % ext)
697 return
698 if "binary" not in search.formats and ext == wheel_ext:
699 self._log_skipped_link(
700 link, 'No binaries permitted for %s' % search.supplied)
701 return
702 if "macosx10" in link.path and ext == '.zip':
703 self._log_skipped_link(link, 'macosx10 one')
704 return
705 if ext == wheel_ext:
706 try:
707 wheel = Wheel(link.filename)
708 except InvalidWheelFilename:
709 self._log_skipped_link(link, 'invalid wheel filename')
710 return
711 if (pkg_resources.safe_name(wheel.name).lower() !=
712 search.canonical):
713 self._log_skipped_link(
714 link, 'wrong project name (not %s)' % search.supplied)
715 return
716 if not wheel.supported():
717 self._log_skipped_link(
718 link, 'it is not compatible with this Python')
719 return
720 # This is a dirty hack to prevent installing Binary Wheels from
721 # PyPI unless it is a Windows or Mac Binary Wheel. This is
722 # paired with a change to PyPI disabling uploads for the
723 # same. Once we have a mechanism for enabling support for
724 # binary wheels on linux that deals with the inherent problems
725 # of binary distribution this can be removed.
726 comes_from = getattr(link, "comes_from", None)
727 if (
728 (
729 not platform.startswith('win') and not
730 platform.startswith('macosx') and not
731 platform == 'cli'
732 ) and
733 comes_from is not None and
734 urllib_parse.urlparse(
735 comes_from.url
736 ).netloc.endswith(PyPI.netloc)):
737 if not wheel.supported(tags=supported_tags_noarch):
738 self._log_skipped_link(
739 link,
740 "it is a pypi-hosted binary "
741 "Wheel on an unsupported platform",
742 )
743 return
744 version = wheel.version
745
746 if not version:
747 version = egg_info_matches(egg_info, search.supplied, link)
748 if version is None:
749 self._log_skipped_link(
750 link, 'wrong project name (not %s)' % search.supplied)
751 return
752
753 if (link.internal is not None and not
754 link.internal and not
755 normalize_name(search.supplied).lower()
756 in self.allow_external and not
757 self.allow_all_external):
758 # We have a link that we are sure is external, so we should skip
759 # it unless we are allowing externals
760 self._log_skipped_link(link, 'it is externally hosted')
761 self.need_warn_external = True
762 return
763
764 if (link.verifiable is not None and not
765 link.verifiable and not
766 (normalize_name(search.supplied).lower()
767 in self.allow_unverified)):
768 # We have a link that we are sure we cannot verify its integrity,
769 # so we should skip it unless we are allowing unsafe installs
770 # for this requirement.
771 self._log_skipped_link(
772 link, 'it is an insecure and unverifiable file')
773 self.need_warn_unverified = True
774 return
775
776 match = self._py_version_re.search(version)
777 if match:
778 version = version[:match.start()]
779 py_version = match.group(1)
780 if py_version != sys.version[:3]:
781 self._log_skipped_link(
782 link, 'Python version is incorrect')
783 return
784 logger.debug('Found link %s, version: %s', link, version)
785
786 return InstallationCandidate(search.supplied, version, link)
787
788 def _get_page(self, link):
789 return HTMLPage.get_page(link, session=self.session)
790
791
792 def egg_info_matches(
793 egg_info, search_name, link,
794 _egg_info_re=re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)):
795 """Pull the version part out of a string.
796
797 :param egg_info: The string to parse. E.g. foo-2.1
798 :param search_name: The name of the package this belongs to. None to
799 infer the name. Note that this cannot unambiguously parse strings
800 like foo-2-2 which might be foo, 2-2 or foo-2, 2.
801 :param link: The link the string came from, for logging on failure.
802 """
803 match = _egg_info_re.search(egg_info)
804 if not match:
805 logger.debug('Could not parse version from link: %s', link)
806 return None
807 if search_name is None:
808 full_match = match.group(0)
809 return full_match[full_match.index('-'):]
810 name = match.group(0).lower()
811 # To match the "safe" name that pkg_resources creates:
812 name = name.replace('_', '-')
813 # project name and version must be separated by a dash
814 look_for = search_name.lower() + "-"
815 if name.startswith(look_for):
816 return match.group(0)[len(look_for):]
817 else:
818 return None
819
820
821 class HTMLPage(object):
822 """Represents one page, along with its URL"""
823
824 def __init__(self, content, url, headers=None, trusted=None):
825 # Determine if we have any encoding information in our headers
826 encoding = None
827 if headers and "Content-Type" in headers:
828 content_type, params = cgi.parse_header(headers["Content-Type"])
829
830 if "charset" in params:
831 encoding = params['charset']
832
833 self.content = content
834 self.parsed = html5lib.parse(
835 self.content,
836 encoding=encoding,
837 namespaceHTMLElements=False,
838 )
839 self.url = url
840 self.headers = headers
841 self.trusted = trusted
842
843 def __str__(self):
844 return self.url
845
846 @classmethod
847 def get_page(cls, link, skip_archives=True, session=None):
848 if session is None:
849 raise TypeError(
850 "get_page() missing 1 required keyword argument: 'session'"
851 )
852
853 url = link.url
854 url = url.split('#', 1)[0]
855
856 # Check for VCS schemes that do not support lookup as web pages.
857 from pip.vcs import VcsSupport
858 for scheme in VcsSupport.schemes:
859 if url.lower().startswith(scheme) and url[len(scheme)] in '+:':
860 logger.debug('Cannot look at %s URL %s', scheme, link)
861 return None
862
863 try:
864 if skip_archives:
865 filename = link.filename
866 for bad_ext in ARCHIVE_EXTENSIONS:
867 if filename.endswith(bad_ext):
868 content_type = cls._get_content_type(
869 url, session=session,
870 )
871 if content_type.lower().startswith('text/html'):
872 break
873 else:
874 logger.debug(
875 'Skipping page %s because of Content-Type: %s',
876 link,
877 content_type,
878 )
879 return
880
881 logger.debug('Getting page %s', url)
882
883 # Tack index.html onto file:// URLs that point to directories
884 (scheme, netloc, path, params, query, fragment) = \
885 urllib_parse.urlparse(url)
886 if (scheme == 'file' and
887 os.path.isdir(urllib_request.url2pathname(path))):
888 # add trailing slash if not present so urljoin doesn't trim
889 # final segment
890 if not url.endswith('/'):
891 url += '/'
892 url = urllib_parse.urljoin(url, 'index.html')
893 logger.debug(' file: URL is directory, getting %s', url)
894
895 resp = session.get(
896 url,
897 headers={
898 "Accept": "text/html",
899 "Cache-Control": "max-age=600",
900 },
901 )
902 resp.raise_for_status()
903
904 # The check for archives above only works if the url ends with
905 # something that looks like an archive. However that is not a
906 # requirement of an url. Unless we issue a HEAD request on every
907 # url we cannot know ahead of time for sure if something is HTML
908 # or not. However we can check after we've downloaded it.
909 content_type = resp.headers.get('Content-Type', 'unknown')
910 if not content_type.lower().startswith("text/html"):
911 logger.debug(
912 'Skipping page %s because of Content-Type: %s',
913 link,
914 content_type,
915 )
916 return
917
918 inst = cls(
919 resp.content, resp.url, resp.headers,
920 trusted=link.trusted,
921 )
922 except requests.HTTPError as exc:
923 level = 2 if exc.response.status_code == 404 else 1
924 cls._handle_fail(link, exc, url, level=level)
925 except requests.ConnectionError as exc:
926 cls._handle_fail(link, "connection error: %s" % exc, url)
927 except requests.Timeout:
928 cls._handle_fail(link, "timed out", url)
929 except SSLError as exc:
930 reason = ("There was a problem confirming the ssl certificate: "
931 "%s" % exc)
932 cls._handle_fail(link, reason, url, level=2, meth=logger.info)
933 else:
934 return inst
935
936 @staticmethod
937 def _handle_fail(link, reason, url, level=1, meth=None):
938 if meth is None:
939 meth = logger.debug
940
941 meth("Could not fetch URL %s: %s - skipping", link, reason)
942
943 @staticmethod
944 def _get_content_type(url, session):
945 """Get the Content-Type of the given url, using a HEAD request"""
946 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url)
947 if scheme not in ('http', 'https'):
948 # FIXME: some warning or something?
949 # assertion error?
950 return ''
951
952 resp = session.head(url, allow_redirects=True)
953 resp.raise_for_status()
954
955 return resp.headers.get("Content-Type", "")
956
957 @cached_property
958 def api_version(self):
959 metas = [
960 x for x in self.parsed.findall(".//meta")
961 if x.get("name", "").lower() == "api-version"
962 ]
963 if metas:
964 try:
965 return int(metas[0].get("value", None))
966 except (TypeError, ValueError):
967 pass
968
969 return None
970
971 @cached_property
972 def base_url(self):
973 bases = [
974 x for x in self.parsed.findall(".//base")
975 if x.get("href") is not None
976 ]
977 if bases and bases[0].get("href"):
978 return bases[0].get("href")
979 else:
980 return self.url
981
982 @property
983 def links(self):
984 """Yields all links in the page"""
985 for anchor in self.parsed.findall(".//a"):
986 if anchor.get("href"):
987 href = anchor.get("href")
988 url = self.clean_link(
989 urllib_parse.urljoin(self.base_url, href)
990 )
991
992 # Determine if this link is internal. If that distinction
993 # doesn't make sense in this context, then we don't make
994 # any distinction.
995 internal = None
996 if self.api_version and self.api_version >= 2:
997 # Only api_versions >= 2 have a distinction between
998 # external and internal links
999 internal = bool(
1000 anchor.get("rel") and
1001 "internal" in anchor.get("rel").split()
1002 )
1003
1004 yield Link(url, self, internal=internal)
1005
1006 def rel_links(self, rels=('homepage', 'download')):
1007 """Yields all links with the given relations"""
1008 rels = set(rels)
1009
1010 for anchor in self.parsed.findall(".//a"):
1011 if anchor.get("rel") and anchor.get("href"):
1012 found_rels = set(anchor.get("rel").split())
1013 # Determine the intersection between what rels were found and
1014 # what rels were being looked for
1015 if found_rels & rels:
1016 href = anchor.get("href")
1017 url = self.clean_link(
1018 urllib_parse.urljoin(self.base_url, href)
1019 )
1020 yield Link(url, self, trusted=False)
1021
1022 _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I)
1023
1024 def clean_link(self, url):
1025 """Makes sure a link is fully encoded. That is, if a ' ' shows up in
1026 the link, it will be rewritten to %20 (while not over-quoting
1027 % or other characters)."""
1028 return self._clean_re.sub(
1029 lambda match: '%%%2x' % ord(match.group(0)), url)
1030
1031
1032 class Link(object):
1033
1034 def __init__(self, url, comes_from=None, internal=None, trusted=None):
1035
1036 # url can be a UNC windows share
1037 if url != Inf and url.startswith('\\\\'):
1038 url = path_to_url(url)
1039
1040 self.url = url
1041 self.comes_from = comes_from
1042 self.internal = internal
1043 self.trusted = trusted
1044
1045 def __str__(self):
1046 if self.comes_from:
1047 return '%s (from %s)' % (self.url, self.comes_from)
1048 else:
1049 return str(self.url)
1050
1051 def __repr__(self):
1052 return '<Link %s>' % self
1053
1054 def __eq__(self, other):
1055 if not isinstance(other, Link):
1056 return NotImplemented
1057 return self.url == other.url
1058
1059 def __ne__(self, other):
1060 if not isinstance(other, Link):
1061 return NotImplemented
1062 return self.url != other.url
1063
1064 def __lt__(self, other):
1065 if not isinstance(other, Link):
1066 return NotImplemented
1067 return self.url < other.url
1068
1069 def __le__(self, other):
1070 if not isinstance(other, Link):
1071 return NotImplemented
1072 return self.url <= other.url
1073
1074 def __gt__(self, other):
1075 if not isinstance(other, Link):
1076 return NotImplemented
1077 return self.url > other.url
1078
1079 def __ge__(self, other):
1080 if not isinstance(other, Link):
1081 return NotImplemented
1082 return self.url >= other.url
1083
1084 def __hash__(self):
1085 return hash(self.url)
1086
1087 @property
1088 def filename(self):
1089 _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)
1090 name = posixpath.basename(path.rstrip('/')) or netloc
1091 name = urllib_parse.unquote(name)
1092 assert name, ('URL %r produced no filename' % self.url)
1093 return name
1094
1095 @property
1096 def scheme(self):
1097 return urllib_parse.urlsplit(self.url)[0]
1098
1099 @property
1100 def netloc(self):
1101 return urllib_parse.urlsplit(self.url)[1]
1102
1103 @property
1104 def path(self):
1105 return urllib_parse.unquote(urllib_parse.urlsplit(self.url)[2])
1106
1107 def splitext(self):
1108 return splitext(posixpath.basename(self.path.rstrip('/')))
1109
1110 @property
1111 def ext(self):
1112 return self.splitext()[1]
1113
1114 @property
1115 def url_without_fragment(self):
1116 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(self.url)
1117 return urllib_parse.urlunsplit((scheme, netloc, path, query, None))
1118
1119 _egg_fragment_re = re.compile(r'#egg=([^&]*)')
1120
1121 @property
1122 def egg_fragment(self):
1123 match = self._egg_fragment_re.search(self.url)
1124 if not match:
1125 return None
1126 return match.group(1)
1127
1128 _hash_re = re.compile(
1129 r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)'
1130 )
1131
1132 @property
1133 def hash(self):
1134 match = self._hash_re.search(self.url)
1135 if match:
1136 return match.group(2)
1137 return None
1138
1139 @property
1140 def hash_name(self):
1141 match = self._hash_re.search(self.url)
1142 if match:
1143 return match.group(1)
1144 return None
1145
1146 @property
1147 def show_url(self):
1148 return posixpath.basename(self.url.split('#', 1)[0].split('?', 1)[0])
1149
1150 @property
1151 def verifiable(self):
1152 """
1153 Returns True if this link can be verified after download, False if it
1154 cannot, and None if we cannot determine.
1155 """
1156 trusted = self.trusted or getattr(self.comes_from, "trusted", None)
1157 if trusted is not None and trusted:
1158 # This link came from a trusted source. It *may* be verifiable but
1159 # first we need to see if this page is operating under the new
1160 # API version.
1161 try:
1162 api_version = getattr(self.comes_from, "api_version", None)
1163 api_version = int(api_version)
1164 except (ValueError, TypeError):
1165 api_version = None
1166
1167 if api_version is None or api_version <= 1:
1168 # This link is either trusted, or it came from a trusted,
1169 # however it is not operating under the API version 2 so
1170 # we can't make any claims about if it's safe or not
1171 return
1172
1173 if self.hash:
1174 # This link came from a trusted source and it has a hash, so we
1175 # can consider it safe.
1176 return True
1177 else:
1178 # This link came from a trusted source, using the new API
1179 # version, and it does not have a hash. It is NOT verifiable
1180 return False
1181 elif trusted is not None:
1182 # This link came from an untrusted source and we cannot trust it
1183 return False
1184
1185 @property
1186 def is_wheel(self):
1187 return self.ext == wheel_ext
1188
1189
1190 # An object to represent the "link" for the installed version of a requirement.
1191 # Using Inf as the url makes it sort higher.
1192 INSTALLED_VERSION = Link(Inf)
1193
1194
1195 Search = namedtuple('Search', 'supplied canonical formats')
1196 """Capture key aspects of a search.
1197
1198 :attribute supplied: The user-supplied package name.
1199 :attribute canonical: The canonical package name.
1200 :attribute formats: The formats allowed for this package. Should be a set
1201 with 'binary' or 'source' or both in it.
1202 """
1203
[end of pip/index.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
31eb67d0dc6a3d65a748f0e2bd04cf5806e3e050
|
Proposal: Add an option to ignore sdists when installing certain packages
Some packages are difficult or even impossible to install from source for some users. For example, windows users without a compiler cannot install C extensions from source. However, such users may have a local wheelhouse.
If such a user wants to install, say, PyYAML and has a wheel for version 3.10, that wheel will not be used for "pip install PyYAML" because the current version is 3.11, which takes precedence, but has no wheel. The older version may, however, be fine for the user.
It should be possible for a user to say that for a specified list of packages, source distributions should be ignored as they are known to be unusable. (A warning should probably be given if a newer sdist exists when an older wheel is installed, but the wheel should still be used).
Suggested implementation:
```
pip install --binary-only numpy numpy
```
Note: this has the same problem of too much repetition that plagued the `--allow-external` option. Suggestions for an alternative UI would be much appreciated!
|
So I have a few thoughts:
- I've recently been thinking about multi repository support (thanks PEP 470!) and also just generally how to handle cases where we need/want more descriptive information. Mostly I've been thinking about it on a per-repository basis, but it'd apply here as well, I think. I'm becoming more convinced that some settings really don't translate to cli arguments / env vars very well and just need to be inside of a config file.
- Regardless of the above, I think it makes sense to make this a more generic option. We already have `--[no-]use-wheel`; rather than adding a `--binary-only` or `--[no-]sdist`, maybe it'd be better as something like `pip install --formats wheel numpy` (defaulting to `--formats wheel,sdist`) and deprecate the `--[no-]use-wheel` arguments.
- I'm really hoping we can get rid of the repetition problems that have plagued the `--allow-external` option with PEP 470 and I'm really hesitant to re-add that in another form. Perhaps it'd make sense to make it a global on/off switch on the CLI and then make it so that the config files can specify that option for specific packages?
Thanks for the comments. I agree regarding this not translating well to a CLI form, and if we are OK with having things like this only available via the config file, I think that's a better fit.
Also agreed the option name sucks. `--format` sounds like a better idea.
The problem with having a global on/off is that it'll almost never be appropriate. Even if you're installing a package with C extensions, it's quite possible that one of its dependencies (which you may not even be aware of) is pure Python and has no wheel. I don't object to a global switch for completeness, but I can't see it ever being the right thing.
I expect that what will happen in practice is that users will, over time, accumulate a list of packages they know they need binary distributions for in their config file.
There's one other complication I omitted (because I think it's probably not worth it) which is that a package may have optional C speedups. In that case, the user may sometimes prefer a binary over getting the latest version, and other times prefer the latest version. But I can't think of a good UI for that so I think it should be ignored for now (the user can always pin the version to one known to have wheels).
That goes back to another thought I had dealing with multi repositories and "priorities", e.g. stuff from repo A has a priority of 1000, stuff from repo B has a priority of 2000, and maybe wheels have a certain modifier while sdists don't. I don't know, it's not a super well-formed thought, but if done right it could expose some API that lets people make their own policy for selecting which actual files are preferred.
This also ends up including the dep solver stuff too...
Yeah, this gets hard fast.
Really, this is about metadata. For building from sdist, PyYAML has an optional dependency on a C compiler, pyzmq has a non-optional dependency on a C compiler, and numpy has a dependency on having a PhD in compiling it :-) But we can't capture that metadata, much less check it, so we aren't able to confirm that the requirements are satisfied before deciding whether the sdist is an eligible candidate for installation. So we need the user to assist.
I do think we need a short-term solution. I'm starting to maintain a personal archive of wheels. But I don't always update it immediately a new version of (say) numpy is released. So `pip install numpy` may fail, because my wheel is for a non-current version. Now a failure isn't that bad - I can just rerun with `pip install numpy==1.7.1` or whatever - but it does mean checking what was the last wheel I built, updating the requirements file, remembering to change it again when I build a new wheel, etc. I don't know how many people are doing something similar, but I'd like us to be able to encourage it until more projects switch to publishing wheels.
So as a short-term solution, I see a `--no-use-sdist` flag that takes a list of distribution names and is intended for use via the user's config file (but which can be used from the command line, because it's easier to allow that than to reject it) as a reasonable practical option. (I've changed my mind on `--format`; that leads us straight into trying to solve a more general issue that there's no obvious need for yet.)
Regarding multi-repo support, I've just started playing with devpi, and the facilities it has are awesome. Maybe things like priorities would be better suited for adding to the devpi index inheritance model?
See #2087 for a scenario where this would be useful.
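A rough sketch of the `--formats` idea floated in the discussion above; the option name, its `wheel,sdist` default, and the `parse_formats` helper are assumptions drawn from that comment, not anything pip implements:

```
# Hypothetical parsing of a "--formats" value into an allowed-format set.
def parse_formats(value="wheel,sdist"):
    allowed = {part.strip() for part in value.split(",") if part.strip()}
    unknown = allowed - {"wheel", "sdist"}
    if unknown:
        raise ValueError("unknown format(s): %s" % ", ".join(sorted(unknown)))
    return frozenset(allowed)

print(parse_formats("wheel"))   # frozenset({'wheel'}) -- binaries only
print(parse_formats())          # both formats allowed by default
```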
|
2015-04-20T00:18:24Z
|
<patch>
diff --git a/pip/cmdoptions.py b/pip/cmdoptions.py
--- a/pip/cmdoptions.py
+++ b/pip/cmdoptions.py
@@ -11,7 +11,9 @@
from functools import partial
from optparse import OptionGroup, SUPPRESS_HELP, Option
-from pip.index import PyPI
+
+from pip.index import (
+ PyPI, FormatControl, fmt_ctl_handle_mutual_exclude, fmt_ctl_no_use_wheel)
from pip.locations import CA_BUNDLE_PATH, USER_CACHE_DIR, src_prefix
@@ -27,6 +29,12 @@ def make_option_group(group, parser):
return option_group
+def resolve_wheel_no_use_binary(options):
+ if not options.use_wheel:
+ control = options.format_control
+ fmt_ctl_no_use_wheel(control)
+
+
###########
# options #
###########
@@ -339,6 +347,7 @@ def editable():
'The default for global installs is "<current dir>/src".'
)
+# XXX: deprecated, remove in 9.0
use_wheel = partial(
Option,
'--use-wheel',
@@ -354,9 +363,53 @@ def editable():
action='store_false',
default=True,
help=('Do not Find and prefer wheel archives when searching indexes and '
- 'find-links locations.'),
+ 'find-links locations. DEPRECATED in favour of --no-binary.'),
)
+
+def _get_format_control(values, option):
+ """Get a format_control object."""
+ return getattr(values, option.dest)
+
+
+def _handle_no_binary(option, opt_str, value, parser):
+ existing = getattr(parser.values, option.dest)
+ fmt_ctl_handle_mutual_exclude(
+ value, existing.no_binary, existing.only_binary)
+
+
+def _handle_only_binary(option, opt_str, value, parser):
+ existing = getattr(parser.values, option.dest)
+ fmt_ctl_handle_mutual_exclude(
+ value, existing.only_binary, existing.no_binary)
+
+
+def no_binary():
+ return Option(
+ "--no-binary", dest="format_control", action="callback",
+ callback=_handle_no_binary, type="str",
+ default=FormatControl(set(), set()),
+ help="Do not use binary packages. Can be supplied multiple times, and "
+ "each time adds to the existing value. Accepts either :all: to "
+ "disable all binary packages, :none: to empty the set, or one or "
+ "more package names with commas between them. Note that some "
+ "packages are tricky to compile and may fail to install when "
+ "this option is used on them.")
+
+
+def only_binary():
+ return Option(
+ "--only-binary", dest="format_control", action="callback",
+ callback=_handle_only_binary, type="str",
+ default=FormatControl(set(), set()),
+ help="Do not use source packages. Can be supplied multiple times, and "
+ "each time adds to the existing value. Accepts either :all: to "
+ "disable all source packages, :none: to empty the set, or one or "
+ "more package names with commas between them. Packages without "
+ "binary distributions will fail to install when this option is "
+ "used on them.")
+
+
cache_dir = partial(
Option,
"--cache-dir",
diff --git a/pip/commands/freeze.py b/pip/commands/freeze.py
--- a/pip/commands/freeze.py
+++ b/pip/commands/freeze.py
@@ -2,6 +2,7 @@
import sys
+import pip
from pip.basecommand import Command
from pip.operations.freeze import freeze
from pip.wheel import WheelCache
@@ -55,7 +56,8 @@ def __init__(self, *args, **kw):
self.parser.insert_option_group(0, self.cmd_opts)
def run(self, options, args):
- wheel_cache = WheelCache(options.cache_dir)
+ format_control = pip.index.FormatControl(set(), set())
+ wheel_cache = WheelCache(options.cache_dir, format_control)
freeze_kwargs = dict(
requirement=options.requirement,
find_links=options.find_links,
diff --git a/pip/commands/install.py b/pip/commands/install.py
--- a/pip/commands/install.py
+++ b/pip/commands/install.py
@@ -153,6 +153,8 @@ def __init__(self, *args, **kw):
cmd_opts.add_option(cmdoptions.use_wheel())
cmd_opts.add_option(cmdoptions.no_use_wheel())
+ cmd_opts.add_option(cmdoptions.no_binary())
+ cmd_opts.add_option(cmdoptions.only_binary())
cmd_opts.add_option(
'--pre',
@@ -179,8 +181,8 @@ def _build_package_finder(self, options, index_urls, session):
"""
return PackageFinder(
find_links=options.find_links,
+ format_control=options.format_control,
index_urls=index_urls,
- use_wheel=options.use_wheel,
allow_external=options.allow_external,
allow_unverified=options.allow_unverified,
allow_all_external=options.allow_all_external,
@@ -191,6 +193,7 @@ def _build_package_finder(self, options, index_urls, session):
)
def run(self, options, args):
+ cmdoptions.resolve_wheel_no_use_binary(options)
if options.download_dir:
options.ignore_installed = True
@@ -239,7 +242,7 @@ def run(self, options, args):
finder = self._build_package_finder(options, index_urls, session)
build_delete = (not (options.no_clean or options.build_dir))
- wheel_cache = WheelCache(options.cache_dir)
+ wheel_cache = WheelCache(options.cache_dir, options.format_control)
with BuildDirectory(options.build_dir,
delete=build_delete) as build_dir:
requirement_set = RequirementSet(
diff --git a/pip/commands/list.py b/pip/commands/list.py
--- a/pip/commands/list.py
+++ b/pip/commands/list.py
@@ -6,7 +6,7 @@
from pip.basecommand import Command
from pip.exceptions import DistributionNotFound
-from pip.index import PackageFinder, Search
+from pip.index import FormatControl, fmt_ctl_formats, PackageFinder, Search
from pip.req import InstallRequirement
from pip.utils import get_installed_distributions, dist_is_editable
from pip.wheel import WheelCache
@@ -131,7 +131,8 @@ def find_packages_latest_versions(self, options):
user_only=options.user,
include_editables=False,
)
- wheel_cache = WheelCache(options.cache_dir)
+ format_control = FormatControl(set(), set())
+ wheel_cache = WheelCache(options.cache_dir, format_control)
for dist in installed_packages:
req = InstallRequirement.from_line(
dist.key, None, isolated=options.isolated_mode,
@@ -148,10 +149,12 @@ def find_packages_latest_versions(self, options):
except DistributionNotFound:
continue
else:
+ canonical_name = pkg_resources.safe_name(req.name).lower()
+ formats = fmt_ctl_formats(format_control, canonical_name)
search = Search(
req.name,
- pkg_resources.safe_name(req.name).lower(),
- ["source", "binary"])
+ canonical_name,
+ formats)
remote_version = finder._link_package_versions(
link, search).version
if link.is_wheel:
diff --git a/pip/commands/uninstall.py b/pip/commands/uninstall.py
--- a/pip/commands/uninstall.py
+++ b/pip/commands/uninstall.py
@@ -1,5 +1,6 @@
from __future__ import absolute_import
+import pip
from pip.wheel import WheelCache
from pip.req import InstallRequirement, RequirementSet, parse_requirements
from pip.basecommand import Command
@@ -43,7 +44,8 @@ def __init__(self, *args, **kw):
def run(self, options, args):
with self._build_session(options) as session:
- wheel_cache = WheelCache(options.cache_dir)
+ format_control = pip.index.FormatControl(set(), set())
+ wheel_cache = WheelCache(options.cache_dir, format_control)
requirement_set = RequirementSet(
build_dir=None,
src_dir=None,
diff --git a/pip/commands/wheel.py b/pip/commands/wheel.py
--- a/pip/commands/wheel.py
+++ b/pip/commands/wheel.py
@@ -61,6 +61,8 @@ def __init__(self, *args, **kw):
)
cmd_opts.add_option(cmdoptions.use_wheel())
cmd_opts.add_option(cmdoptions.no_use_wheel())
+ cmd_opts.add_option(cmdoptions.no_binary())
+ cmd_opts.add_option(cmdoptions.only_binary())
cmd_opts.add_option(
'--build-option',
dest='build_options',
@@ -122,6 +124,7 @@ def check_required_packages(self):
def run(self, options, args):
self.check_required_packages()
+ cmdoptions.resolve_wheel_no_use_binary(options)
index_urls = [options.index_url] + options.extra_index_urls
if options.no_index:
@@ -143,8 +146,8 @@ def run(self, options, args):
finder = PackageFinder(
find_links=options.find_links,
+ format_control=options.format_control,
index_urls=index_urls,
- use_wheel=options.use_wheel,
allow_external=options.allow_external,
allow_unverified=options.allow_unverified,
allow_all_external=options.allow_all_external,
@@ -155,7 +158,7 @@ def run(self, options, args):
)
build_delete = (not (options.no_clean or options.build_dir))
- wheel_cache = WheelCache(options.cache_dir)
+ wheel_cache = WheelCache(options.cache_dir, options.format_control)
with BuildDirectory(options.build_dir,
delete=build_delete) as build_dir:
requirement_set = RequirementSet(
diff --git a/pip/index.py b/pip/index.py
--- a/pip/index.py
+++ b/pip/index.py
@@ -34,7 +34,7 @@
from pip._vendor.requests.exceptions import SSLError
-__all__ = ['PackageFinder']
+__all__ = ['FormatControl', 'fmt_ctl_handle_mutual_exclude', 'PackageFinder']
# Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
@@ -97,14 +97,20 @@ class PackageFinder(object):
"""This finds packages.
This is meant to match easy_install's technique for looking for
- packages, by reading pages and looking for appropriate links
+ packages, by reading pages and looking for appropriate links.
"""
def __init__(self, find_links, index_urls,
- use_wheel=True, allow_external=(), allow_unverified=(),
+ allow_external=(), allow_unverified=(),
allow_all_external=False, allow_all_prereleases=False,
trusted_hosts=None, process_dependency_links=False,
- session=None):
+ session=None, format_control=None):
+ """Create a PackageFinder.
+
+ :param format_control: A FormatControl object or None. Used to control
+ the selection of source packages / binary packages when consulting
+ the index and links.
+ """
if session is None:
raise TypeError(
"PackageFinder() missing 1 required keyword argument: "
@@ -130,7 +136,7 @@ def __init__(self, find_links, index_urls,
# These are boring links that have already been logged somehow:
self.logged_links = set()
- self.use_wheel = use_wheel
+ self.format_control = format_control or FormatControl(set(), set())
# Do we allow (safe and verifiable) externally hosted files?
self.allow_external = set(normalize_name(n) for n in allow_external)
@@ -413,13 +419,9 @@ def _find_all_versions(self, project_name):
for location in url_locations:
logger.debug('* %s', location)
- formats = set(["source"])
- if self.use_wheel:
- formats.add("binary")
- search = Search(
- project_name.lower(),
- pkg_resources.safe_name(project_name).lower(),
- frozenset(formats))
+ canonical_name = pkg_resources.safe_name(project_name).lower()
+ formats = fmt_ctl_formats(self.format_control, canonical_name)
+ search = Search(project_name.lower(), canonical_name, formats)
find_links_versions = self._package_versions(
# We trust every directly linked archive in find_links
(Link(url, '-f', trusted=True) for url in self.find_links),
@@ -686,6 +688,7 @@ def _link_package_versions(self, link, search):
version = None
if link.egg_fragment:
egg_info = link.egg_fragment
+ ext = link.ext
else:
egg_info, ext = link.splitext()
if not ext:
@@ -743,6 +746,12 @@ def _link_package_versions(self, link, search):
return
version = wheel.version
+ # This should be up by the search.ok_binary check, but see issue 2700.
+ if "source" not in search.formats and ext != wheel_ext:
+ self._log_skipped_link(
+ link, 'No sources permitted for %s' % search.supplied)
+ return
+
if not version:
version = egg_info_matches(egg_info, search.supplied, link)
if version is None:
@@ -1192,6 +1201,56 @@ def is_wheel(self):
INSTALLED_VERSION = Link(Inf)
+FormatControl = namedtuple('FormatControl', 'no_binary only_binary')
+"""This object has two fields, no_binary and only_binary.
+
+If a field is falsy, it isn't set. If it is {':all:'}, it should match all
+packages except those listed in the other field. Only one field can be set
+to {':all:'} at a time. The rest of the time exact package name matches
+are listed, with any given package only showing up in one field at a time.
+"""
+
+
+def fmt_ctl_handle_mutual_exclude(value, target, other):
+ new = value.split(',')
+ while ':all:' in new:
+ other.clear()
+ target.clear()
+ target.add(':all:')
+ del new[:new.index(':all:') + 1]
+ if ':none:' not in new:
+ # Without a none, we want to discard everything as :all: covers it
+ return
+ for name in new:
+ if name == ':none:':
+ target.clear()
+ continue
+ name = pkg_resources.safe_name(name).lower()
+ other.discard(name)
+ target.add(name)
+
+
+def fmt_ctl_formats(fmt_ctl, canonical_name):
+ result = set(["binary", "source"])
+ if canonical_name in fmt_ctl.only_binary:
+ result.discard('source')
+ elif canonical_name in fmt_ctl.no_binary:
+ result.discard('binary')
+ elif ':all:' in fmt_ctl.only_binary:
+ result.discard('source')
+ elif ':all:' in fmt_ctl.no_binary:
+ result.discard('binary')
+ return frozenset(result)
+
+
+def fmt_ctl_no_use_wheel(fmt_ctl):
+ fmt_ctl_handle_mutual_exclude(
+ ':all:', fmt_ctl.no_binary, fmt_ctl.only_binary)
+ warnings.warn(
+ '--no-use-wheel is deprecated and will be removed in the future. '
+ ' Please use --no-binary :all: instead.')
+
+
Search = namedtuple('Search', 'supplied canonical formats')
"""Capture key aspects of a search.
diff --git a/pip/req/req_file.py b/pip/req/req_file.py
--- a/pip/req/req_file.py
+++ b/pip/req/req_file.py
@@ -12,6 +12,7 @@
from pip._vendor.six.moves.urllib import parse as urllib_parse
from pip._vendor.six.moves import filterfalse
+import pip
from pip.download import get_file_content
from pip.req.req_install import InstallRequirement
from pip.exceptions import (RequirementsFileParseError,
@@ -45,6 +46,8 @@
SUPPORTED_OPTIONS_REQ = [
cmdoptions.install_options,
cmdoptions.global_options,
+ cmdoptions.no_binary,
+ cmdoptions.only_binary,
]
# the 'dest' string values
@@ -91,12 +94,27 @@ def process_line(line, filename, line_number, finder=None, comes_from=None,
Process a single requirements line; This can result in creating/yielding
requirements, or updating the finder.
"""
-
parser = build_parser()
+ values = parser.get_default_values()
+ if finder:
+ values.format_control = finder.format_control
+ else:
+ # Undo the hack that removes defaults so that
+ # this can be parsed correctly.
+ values.format_control = pip.index.FormatControl(set(), set())
+ orig_no_binary = frozenset(values.format_control.no_binary)
+ orig_only_binary = frozenset(values.format_control.only_binary)
args = shlex.split(line)
- opts, args = parser.parse_args(args)
- req = None
+ opts, args = parser.parse_args(args, values)
+ if opts.use_wheel is False and finder:
+ pip.index.fmt_ctl_no_use_wheel(finder.format_control)
+ setattr(values, 'use_wheel', None)
+ if (orig_no_binary == opts.format_control.no_binary and
+ orig_only_binary == opts.format_control.only_binary):
+ # Make the per-requirement-line check work.
+ setattr(values, 'format_control', None)
+ req = None
if args:
for key, value in opts.__dict__.items():
# only certain options can be on req lines
diff --git a/pip/req/req_install.py b/pip/req/req_install.py
--- a/pip/req/req_install.py
+++ b/pip/req/req_install.py
@@ -257,7 +257,7 @@ def link(self, link):
if self._wheel_cache is None:
self._link = link
else:
- self._link = self._wheel_cache.cached_wheel(link)
+ self._link = self._wheel_cache.cached_wheel(link, self.name)
@property
def specifier(self):
diff --git a/pip/wheel.py b/pip/wheel.py
--- a/pip/wheel.py
+++ b/pip/wheel.py
@@ -47,15 +47,19 @@
class WheelCache(object):
"""A cache of wheels for future installs."""
- def __init__(self, cache_dir):
+ def __init__(self, cache_dir, format_control):
"""Create a wheel cache.
:param cache_dir: The root of the cache.
+ :param format_control: A pip.index.FormatControl object to limit
+ binaries being read from the cache.
"""
self._cache_dir = cache_dir
+ self._format_control = format_control
- def cached_wheel(self, link):
- return cached_wheel(self._cache_dir, link)
+ def cached_wheel(self, link, package_name):
+ return cached_wheel(
+ self._cache_dir, link, self._format_control, package_name)
def _cache_for_filename(cache_dir, sdistfilename):
@@ -78,13 +82,19 @@ def _cache_for_filename(cache_dir, sdistfilename):
return os.path.join(cache_dir, 'wheels', sdistfilename)
-def cached_wheel(cache_dir, link):
+def cached_wheel(cache_dir, link, format_control, package_name):
if not cache_dir:
return link
if not link:
return link
if link.is_wheel:
return link
+ if not package_name:
+ return link
+ canonical_name = pkg_resources.safe_name(package_name).lower()
+ formats = pip.index.fmt_ctl_formats(format_control, canonical_name)
+ if "binary" not in formats:
+ return link
root = _cache_for_filename(cache_dir, link.filename)
try:
wheel_names = os.listdir(root)
@@ -693,6 +703,13 @@ def build(self, autobuilding=False):
# Doesn't look like a package - don't autobuild a wheel
# because we'll have no way to lookup the result sanely
continue
+ if "binary" not in pip.index.fmt_ctl_formats(
+ self.finder.format_control,
+ pkg_resources.safe_name(req.name).lower()):
+ logger.info(
+ "Skipping bdist_wheel for %s, due to binaries "
+ "being disabled for it.", req.name)
+ continue
buildset.append(req)
if not buildset:
</patch>
|
[]
|
[]
| |||
pyca__cryptography-3130
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scrypt bounds checking
```
[11:23:58] <Alex_Gaynor> reaperhulk: what happens if you pass a non-even n?
[11:24:10] <Alex_Gaynor> Or a negative value for any of the params?
```
Presumably it will fail with an assertion error on return from the call to `EVP_PBE_scrypt`, but we shouldn't allow those types of errors.
cc @Ayrx.
</issue>
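A minimal sketch of the bounds checking the issue is asking for (a hypothetical standalone helper, shown only to illustrate the checks that the accepted patch later in this record adds to `Scrypt.__init__`):

```python
def _check_scrypt_parameters(n, r, p):
    # Reject values EVP_PBE_scrypt cannot handle, instead of relying on an
    # assertion error after the OpenSSL call returns.
    # n must be a power of two greater than 1, which also rules out odd
    # and negative values.
    if n < 2 or (n & (n - 1)) != 0:
        raise ValueError("n must be greater than 1 and be a power of 2.")
    if r < 1:
        raise ValueError("r must be greater than or equal to 1.")
    if p < 1:
        raise ValueError("p must be greater than or equal to 1.")
```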
<code>
[start of README.rst]
1 Cryptography
2 ============
3
4 .. image:: https://img.shields.io/pypi/v/cryptography.svg
5 :target: https://pypi.python.org/pypi/cryptography/
6 :alt: Latest Version
7
8 .. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest
9 :target: https://cryptography.io
10 :alt: Latest Docs
11
12 .. image:: https://travis-ci.org/pyca/cryptography.svg?branch=master
13 :target: https://travis-ci.org/pyca/cryptography
14
15 .. image:: https://codecov.io/github/pyca/cryptography/coverage.svg?branch=master
16 :target: https://codecov.io/github/pyca/cryptography?branch=master
17
18
19 ``cryptography`` is a package which provides cryptographic recipes and
20 primitives to Python developers. Our goal is for it to be your "cryptographic
21 standard library". It supports Python 2.6-2.7, Python 3.3+, and PyPy 2.6+.
22
23 ``cryptography`` includes both high level recipes, and low level interfaces to
24 common cryptographic algorithms such as symmetric ciphers, message digests and
25 key derivation functions. For example, to encrypt something with
26 ``cryptography``'s high level symmetric encryption recipe:
27
28 .. code-block:: pycon
29
30 >>> from cryptography.fernet import Fernet
31 >>> # Put this somewhere safe!
32 >>> key = Fernet.generate_key()
33 >>> f = Fernet(key)
34 >>> token = f.encrypt(b"A really secret message. Not for prying eyes.")
35 >>> token
36 '...'
37 >>> f.decrypt(token)
38 'A really secret message. Not for prying eyes.'
39
40 You can find more information in the `documentation`_.
41
42 Discussion
43 ~~~~~~~~~~
44
45 If you run into bugs, you can file them in our `issue tracker`_.
46
47 We maintain a `cryptography-dev`_ mailing list for development discussion.
48
49 You can also join ``#cryptography-dev`` on Freenode to ask questions or get
50 involved.
51
52
53 .. _`documentation`: https://cryptography.io/
54 .. _`issue tracker`: https://github.com/pyca/cryptography/issues
55 .. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev
56
[end of README.rst]
[start of src/_cffi_src/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import sys
8 from distutils.ccompiler import new_compiler
9 from distutils.dist import Distribution
10
11 from cffi import FFI
12
13
14 def build_ffi_for_binding(module_name, module_prefix, modules, libraries=[],
15 extra_compile_args=[], extra_link_args=[]):
16 """
17 Modules listed in ``modules`` should have the following attributes:
18
19 * ``INCLUDES``: A string containing C includes.
20 * ``TYPES``: A string containing C declarations for types.
21 * ``FUNCTIONS``: A string containing C declarations for functions.
22 * ``MACROS``: A string containing C declarations for any macros.
23 * ``CUSTOMIZATIONS``: A string containing arbitrary top-level C code, this
24 can be used to do things like test for a define and provide an
25 alternate implementation based on that.
26 """
27 types = []
28 includes = []
29 functions = []
30 macros = []
31 customizations = []
32 for name in modules:
33 __import__(module_prefix + name)
34 module = sys.modules[module_prefix + name]
35
36 types.append(module.TYPES)
37 macros.append(module.MACROS)
38 functions.append(module.FUNCTIONS)
39 includes.append(module.INCLUDES)
40 customizations.append(module.CUSTOMIZATIONS)
41
42 # We include functions here so that if we got any of their definitions
43 # wrong, the underlying C compiler will explode. In C you are allowed
44 # to re-declare a function if it has the same signature. That is:
45 # int foo(int);
46 # int foo(int);
47 # is legal, but the following will fail to compile:
48 # int foo(int);
49 # int foo(short);
50 verify_source = "\n".join(
51 includes +
52 functions +
53 customizations
54 )
55 ffi = build_ffi(
56 module_name,
57 cdef_source="\n".join(types + functions + macros),
58 verify_source=verify_source,
59 libraries=libraries,
60 extra_compile_args=extra_compile_args,
61 extra_link_args=extra_link_args,
62 )
63
64 return ffi
65
66
67 def build_ffi(module_name, cdef_source, verify_source, libraries=[],
68 extra_compile_args=[], extra_link_args=[]):
69 ffi = FFI()
70 ffi.cdef(cdef_source)
71 ffi.set_source(
72 module_name,
73 verify_source,
74 libraries=libraries,
75 extra_compile_args=extra_compile_args,
76 extra_link_args=extra_link_args,
77 )
78 return ffi
79
80
81 def extra_link_args(compiler_type):
82 if compiler_type == 'msvc':
83 # Enable NX and ASLR for Windows builds on MSVC. These are enabled by
84 # default on Python 3.3+ but not on 2.x.
85 return ['/NXCOMPAT', '/DYNAMICBASE']
86 else:
87 return []
88
89
90 def compiler_type():
91 """
92 Gets the compiler type from distutils. On Windows with MSVC it will be
93 "msvc". On OS X and linux it is "unix".
94 """
95 dist = Distribution()
96 dist.parse_config_files()
97 cmd = dist.get_command_obj('build')
98 cmd.ensure_finalized()
99 compiler = new_compiler(compiler=cmd.compiler)
100 return compiler.compiler_type
101
[end of src/_cffi_src/utils.py]
[start of src/cryptography/hazmat/backends/openssl/ciphers.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 from cryptography import utils
8 from cryptography.exceptions import InvalidTag, UnsupportedAlgorithm, _Reasons
9 from cryptography.hazmat.primitives import ciphers
10 from cryptography.hazmat.primitives.ciphers import modes
11
12
13 @utils.register_interface(ciphers.CipherContext)
14 @utils.register_interface(ciphers.AEADCipherContext)
15 @utils.register_interface(ciphers.AEADEncryptionContext)
16 class _CipherContext(object):
17 _ENCRYPT = 1
18 _DECRYPT = 0
19
20 def __init__(self, backend, cipher, mode, operation):
21 self._backend = backend
22 self._cipher = cipher
23 self._mode = mode
24 self._operation = operation
25 self._tag = None
26
27 if isinstance(self._cipher, ciphers.BlockCipherAlgorithm):
28 self._block_size = self._cipher.block_size
29 else:
30 self._block_size = 1
31
32 ctx = self._backend._lib.EVP_CIPHER_CTX_new()
33 ctx = self._backend._ffi.gc(
34 ctx, self._backend._lib.EVP_CIPHER_CTX_free
35 )
36
37 registry = self._backend._cipher_registry
38 try:
39 adapter = registry[type(cipher), type(mode)]
40 except KeyError:
41 raise UnsupportedAlgorithm(
42 "cipher {0} in {1} mode is not supported "
43 "by this backend.".format(
44 cipher.name, mode.name if mode else mode),
45 _Reasons.UNSUPPORTED_CIPHER
46 )
47
48 evp_cipher = adapter(self._backend, cipher, mode)
49 if evp_cipher == self._backend._ffi.NULL:
50 raise UnsupportedAlgorithm(
51 "cipher {0} in {1} mode is not supported "
52 "by this backend.".format(
53 cipher.name, mode.name if mode else mode),
54 _Reasons.UNSUPPORTED_CIPHER
55 )
56
57 if isinstance(mode, modes.ModeWithInitializationVector):
58 iv_nonce = mode.initialization_vector
59 elif isinstance(mode, modes.ModeWithNonce):
60 iv_nonce = mode.nonce
61 else:
62 iv_nonce = self._backend._ffi.NULL
63 # begin init with cipher and operation type
64 res = self._backend._lib.EVP_CipherInit_ex(ctx, evp_cipher,
65 self._backend._ffi.NULL,
66 self._backend._ffi.NULL,
67 self._backend._ffi.NULL,
68 operation)
69 self._backend.openssl_assert(res != 0)
70 # set the key length to handle variable key ciphers
71 res = self._backend._lib.EVP_CIPHER_CTX_set_key_length(
72 ctx, len(cipher.key)
73 )
74 self._backend.openssl_assert(res != 0)
75 if isinstance(mode, modes.GCM):
76 res = self._backend._lib.EVP_CIPHER_CTX_ctrl(
77 ctx, self._backend._lib.EVP_CTRL_GCM_SET_IVLEN,
78 len(iv_nonce), self._backend._ffi.NULL
79 )
80 self._backend.openssl_assert(res != 0)
81 if operation == self._DECRYPT:
82 res = self._backend._lib.EVP_CIPHER_CTX_ctrl(
83 ctx, self._backend._lib.EVP_CTRL_GCM_SET_TAG,
84 len(mode.tag), mode.tag
85 )
86 self._backend.openssl_assert(res != 0)
87
88 # pass key/iv
89 res = self._backend._lib.EVP_CipherInit_ex(
90 ctx,
91 self._backend._ffi.NULL,
92 self._backend._ffi.NULL,
93 cipher.key,
94 iv_nonce,
95 operation
96 )
97 self._backend.openssl_assert(res != 0)
98 # We purposely disable padding here as it's handled higher up in the
99 # API.
100 self._backend._lib.EVP_CIPHER_CTX_set_padding(ctx, 0)
101 self._ctx = ctx
102
103 def update(self, data):
104 buf = self._backend._ffi.new("unsigned char[]",
105 len(data) + self._block_size - 1)
106 outlen = self._backend._ffi.new("int *")
107 res = self._backend._lib.EVP_CipherUpdate(self._ctx, buf, outlen, data,
108 len(data))
109 self._backend.openssl_assert(res != 0)
110 return self._backend._ffi.buffer(buf)[:outlen[0]]
111
112 def finalize(self):
113 # OpenSSL 1.0.1 on Ubuntu 12.04 (and possibly other distributions)
114 # appears to have a bug where you must make at least one call to update
115 # even if you are only using authenticate_additional_data or the
116 # GCM tag will be wrong. An (empty) call to update resolves this
117 # and is harmless for all other versions of OpenSSL.
118 if isinstance(self._mode, modes.GCM):
119 self.update(b"")
120
121 buf = self._backend._ffi.new("unsigned char[]", self._block_size)
122 outlen = self._backend._ffi.new("int *")
123 res = self._backend._lib.EVP_CipherFinal_ex(self._ctx, buf, outlen)
124 if res == 0:
125 errors = self._backend._consume_errors()
126
127 if not errors and isinstance(self._mode, modes.GCM):
128 raise InvalidTag
129
130 self._backend.openssl_assert(
131 errors[0][1:] == (
132 self._backend._lib.ERR_LIB_EVP,
133 self._backend._lib.EVP_F_EVP_ENCRYPTFINAL_EX,
134 self._backend._lib.EVP_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH
135 ) or errors[0][1:] == (
136 self._backend._lib.ERR_LIB_EVP,
137 self._backend._lib.EVP_F_EVP_DECRYPTFINAL_EX,
138 self._backend._lib.EVP_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH
139 )
140 )
141 raise ValueError(
142 "The length of the provided data is not a multiple of "
143 "the block length."
144 )
145
146 if (isinstance(self._mode, modes.GCM) and
147 self._operation == self._ENCRYPT):
148 block_byte_size = self._block_size // 8
149 tag_buf = self._backend._ffi.new(
150 "unsigned char[]", block_byte_size
151 )
152 res = self._backend._lib.EVP_CIPHER_CTX_ctrl(
153 self._ctx, self._backend._lib.EVP_CTRL_GCM_GET_TAG,
154 block_byte_size, tag_buf
155 )
156 self._backend.openssl_assert(res != 0)
157 self._tag = self._backend._ffi.buffer(tag_buf)[:]
158
159 res = self._backend._lib.EVP_CIPHER_CTX_cleanup(self._ctx)
160 self._backend.openssl_assert(res == 1)
161 return self._backend._ffi.buffer(buf)[:outlen[0]]
162
163 def authenticate_additional_data(self, data):
164 outlen = self._backend._ffi.new("int *")
165 res = self._backend._lib.EVP_CipherUpdate(
166 self._ctx, self._backend._ffi.NULL, outlen, data, len(data)
167 )
168 self._backend.openssl_assert(res != 0)
169
170 tag = utils.read_only_property("_tag")
171
172
173 @utils.register_interface(ciphers.CipherContext)
174 class _AESCTRCipherContext(object):
175 """
176 This is needed to provide support for AES CTR mode in OpenSSL 1.0.0. It can
177 be removed when we drop 1.0.0 support (RHEL 6.4 is the only thing that
178 ships it).
179 """
180 def __init__(self, backend, cipher, mode):
181 self._backend = backend
182
183 self._key = self._backend._ffi.new("AES_KEY *")
184 res = self._backend._lib.AES_set_encrypt_key(
185 cipher.key, len(cipher.key) * 8, self._key
186 )
187 self._backend.openssl_assert(res == 0)
188 self._ecount = self._backend._ffi.new("char[]", 16)
189 self._nonce = self._backend._ffi.new("char[16]", mode.nonce)
190 self._num = self._backend._ffi.new("unsigned int *", 0)
191
192 def update(self, data):
193 buf = self._backend._ffi.new("unsigned char[]", len(data))
194 self._backend._lib.AES_ctr128_encrypt(
195 data, buf, len(data), self._key, self._nonce,
196 self._ecount, self._num
197 )
198 return self._backend._ffi.buffer(buf)[:]
199
200 def finalize(self):
201 self._key = None
202 self._ecount = None
203 self._nonce = None
204 self._num = None
205 return b""
206
[end of src/cryptography/hazmat/backends/openssl/ciphers.py]
[start of src/cryptography/hazmat/backends/openssl/rsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import math
8
9 from cryptography import utils
10 from cryptography.exceptions import (
11 InvalidSignature, UnsupportedAlgorithm, _Reasons
12 )
13 from cryptography.hazmat.primitives import hashes
14 from cryptography.hazmat.primitives.asymmetric import (
15 AsymmetricSignatureContext, AsymmetricVerificationContext, rsa
16 )
17 from cryptography.hazmat.primitives.asymmetric.padding import (
18 AsymmetricPadding, MGF1, OAEP, PKCS1v15, PSS, calculate_max_pss_salt_length
19 )
20 from cryptography.hazmat.primitives.asymmetric.rsa import (
21 RSAPrivateKeyWithSerialization, RSAPublicKeyWithSerialization
22 )
23
24
25 def _get_rsa_pss_salt_length(pss, key, hash_algorithm):
26 salt = pss._salt_length
27
28 if salt is MGF1.MAX_LENGTH or salt is PSS.MAX_LENGTH:
29 return calculate_max_pss_salt_length(key, hash_algorithm)
30 else:
31 return salt
32
33
34 def _enc_dec_rsa(backend, key, data, padding):
35 if not isinstance(padding, AsymmetricPadding):
36 raise TypeError("Padding must be an instance of AsymmetricPadding.")
37
38 if isinstance(padding, PKCS1v15):
39 padding_enum = backend._lib.RSA_PKCS1_PADDING
40 elif isinstance(padding, OAEP):
41 padding_enum = backend._lib.RSA_PKCS1_OAEP_PADDING
42
43 if not isinstance(padding._mgf, MGF1):
44 raise UnsupportedAlgorithm(
45 "Only MGF1 is supported by this backend.",
46 _Reasons.UNSUPPORTED_MGF
47 )
48
49 if not backend.rsa_padding_supported(padding):
50 raise UnsupportedAlgorithm(
51 "This combination of padding and hash algorithm is not "
52 "supported by this backend.",
53 _Reasons.UNSUPPORTED_PADDING
54 )
55
56 if padding._label is not None and padding._label != b"":
57 raise ValueError("This backend does not support OAEP labels.")
58
59 else:
60 raise UnsupportedAlgorithm(
61 "{0} is not supported by this backend.".format(
62 padding.name
63 ),
64 _Reasons.UNSUPPORTED_PADDING
65 )
66
67 return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding)
68
69
70 def _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding):
71 if isinstance(key, _RSAPublicKey):
72 init = backend._lib.EVP_PKEY_encrypt_init
73 crypt = backend._lib.EVP_PKEY_encrypt
74 else:
75 init = backend._lib.EVP_PKEY_decrypt_init
76 crypt = backend._lib.EVP_PKEY_decrypt
77
78 pkey_ctx = backend._lib.EVP_PKEY_CTX_new(
79 key._evp_pkey, backend._ffi.NULL
80 )
81 backend.openssl_assert(pkey_ctx != backend._ffi.NULL)
82 pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free)
83 res = init(pkey_ctx)
84 backend.openssl_assert(res == 1)
85 res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(
86 pkey_ctx, padding_enum)
87 backend.openssl_assert(res > 0)
88 buf_size = backend._lib.EVP_PKEY_size(key._evp_pkey)
89 backend.openssl_assert(buf_size > 0)
90 if (
91 isinstance(padding, OAEP) and
92 backend._lib.Cryptography_HAS_RSA_OAEP_MD
93 ):
94 mgf1_md = backend._lib.EVP_get_digestbyname(
95 padding._mgf._algorithm.name.encode("ascii"))
96 backend.openssl_assert(mgf1_md != backend._ffi.NULL)
97 res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md)
98 backend.openssl_assert(res > 0)
99 oaep_md = backend._lib.EVP_get_digestbyname(
100 padding._algorithm.name.encode("ascii"))
101 backend.openssl_assert(oaep_md != backend._ffi.NULL)
102 res = backend._lib.EVP_PKEY_CTX_set_rsa_oaep_md(pkey_ctx, oaep_md)
103 backend.openssl_assert(res > 0)
104
105 outlen = backend._ffi.new("size_t *", buf_size)
106 buf = backend._ffi.new("char[]", buf_size)
107 res = crypt(pkey_ctx, buf, outlen, data, len(data))
108 if res <= 0:
109 _handle_rsa_enc_dec_error(backend, key)
110
111 return backend._ffi.buffer(buf)[:outlen[0]]
112
113
114 def _handle_rsa_enc_dec_error(backend, key):
115 errors = backend._consume_errors()
116 assert errors
117 assert errors[0].lib == backend._lib.ERR_LIB_RSA
118 if isinstance(key, _RSAPublicKey):
119 assert (errors[0].reason ==
120 backend._lib.RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE)
121 raise ValueError(
122 "Data too long for key size. Encrypt less data or use a "
123 "larger key size."
124 )
125 else:
126 decoding_errors = [
127 backend._lib.RSA_R_BLOCK_TYPE_IS_NOT_01,
128 backend._lib.RSA_R_BLOCK_TYPE_IS_NOT_02,
129 backend._lib.RSA_R_OAEP_DECODING_ERROR,
130 # Though this error looks similar to the
131 # RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE, this occurs on decrypts,
132 # rather than on encrypts
133 backend._lib.RSA_R_DATA_TOO_LARGE_FOR_MODULUS,
134 ]
135 if backend._lib.Cryptography_HAS_RSA_R_PKCS_DECODING_ERROR:
136 decoding_errors.append(backend._lib.RSA_R_PKCS_DECODING_ERROR)
137
138 assert errors[0].reason in decoding_errors
139 raise ValueError("Decryption failed.")
140
141
142 @utils.register_interface(AsymmetricSignatureContext)
143 class _RSASignatureContext(object):
144 def __init__(self, backend, private_key, padding, algorithm):
145 self._backend = backend
146 self._private_key = private_key
147
148 if not isinstance(padding, AsymmetricPadding):
149 raise TypeError("Expected provider of AsymmetricPadding.")
150
151 self._pkey_size = self._backend._lib.EVP_PKEY_size(
152 self._private_key._evp_pkey
153 )
154 self._backend.openssl_assert(self._pkey_size > 0)
155
156 if isinstance(padding, PKCS1v15):
157 self._padding_enum = self._backend._lib.RSA_PKCS1_PADDING
158 elif isinstance(padding, PSS):
159 if not isinstance(padding._mgf, MGF1):
160 raise UnsupportedAlgorithm(
161 "Only MGF1 is supported by this backend.",
162 _Reasons.UNSUPPORTED_MGF
163 )
164
165 # Size of key in bytes - 2 is the maximum
166 # PSS signature length (salt length is checked later)
167 if self._pkey_size - algorithm.digest_size - 2 < 0:
168 raise ValueError("Digest too large for key size. Use a larger "
169 "key.")
170
171 if not self._backend._pss_mgf1_hash_supported(
172 padding._mgf._algorithm
173 ):
174 raise UnsupportedAlgorithm(
175 "When OpenSSL is older than 1.0.1 then only SHA1 is "
176 "supported with MGF1.",
177 _Reasons.UNSUPPORTED_HASH
178 )
179
180 self._padding_enum = self._backend._lib.RSA_PKCS1_PSS_PADDING
181 else:
182 raise UnsupportedAlgorithm(
183 "{0} is not supported by this backend.".format(padding.name),
184 _Reasons.UNSUPPORTED_PADDING
185 )
186
187 self._padding = padding
188 self._algorithm = algorithm
189 self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
190
191 def update(self, data):
192 self._hash_ctx.update(data)
193
194 def finalize(self):
195 evp_md = self._backend._lib.EVP_get_digestbyname(
196 self._algorithm.name.encode("ascii"))
197 self._backend.openssl_assert(evp_md != self._backend._ffi.NULL)
198
199 pkey_ctx = self._backend._lib.EVP_PKEY_CTX_new(
200 self._private_key._evp_pkey, self._backend._ffi.NULL
201 )
202 self._backend.openssl_assert(pkey_ctx != self._backend._ffi.NULL)
203 pkey_ctx = self._backend._ffi.gc(pkey_ctx,
204 self._backend._lib.EVP_PKEY_CTX_free)
205 res = self._backend._lib.EVP_PKEY_sign_init(pkey_ctx)
206 self._backend.openssl_assert(res == 1)
207 res = self._backend._lib.EVP_PKEY_CTX_set_signature_md(
208 pkey_ctx, evp_md)
209 self._backend.openssl_assert(res > 0)
210
211 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_padding(
212 pkey_ctx, self._padding_enum)
213 self._backend.openssl_assert(res > 0)
214 if isinstance(self._padding, PSS):
215 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_pss_saltlen(
216 pkey_ctx,
217 _get_rsa_pss_salt_length(
218 self._padding,
219 self._private_key,
220 self._hash_ctx.algorithm,
221 )
222 )
223 self._backend.openssl_assert(res > 0)
224
225 if self._backend._lib.Cryptography_HAS_MGF1_MD:
226 # MGF1 MD is configurable in OpenSSL 1.0.1+
227 mgf1_md = self._backend._lib.EVP_get_digestbyname(
228 self._padding._mgf._algorithm.name.encode("ascii"))
229 self._backend.openssl_assert(
230 mgf1_md != self._backend._ffi.NULL
231 )
232 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(
233 pkey_ctx, mgf1_md
234 )
235 self._backend.openssl_assert(res > 0)
236 data_to_sign = self._hash_ctx.finalize()
237 buflen = self._backend._ffi.new("size_t *")
238 res = self._backend._lib.EVP_PKEY_sign(
239 pkey_ctx,
240 self._backend._ffi.NULL,
241 buflen,
242 data_to_sign,
243 len(data_to_sign)
244 )
245 self._backend.openssl_assert(res == 1)
246 buf = self._backend._ffi.new("unsigned char[]", buflen[0])
247 res = self._backend._lib.EVP_PKEY_sign(
248 pkey_ctx, buf, buflen, data_to_sign, len(data_to_sign))
249 if res != 1:
250 errors = self._backend._consume_errors()
251 assert errors[0].lib == self._backend._lib.ERR_LIB_RSA
252 reason = None
253 if (errors[0].reason ==
254 self._backend._lib.RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE):
255 reason = ("Salt length too long for key size. Try using "
256 "MAX_LENGTH instead.")
257 else:
258 assert (errors[0].reason ==
259 self._backend._lib.RSA_R_DIGEST_TOO_BIG_FOR_RSA_KEY)
260 reason = "Digest too large for key size. Use a larger key."
261 assert reason is not None
262 raise ValueError(reason)
263
264 return self._backend._ffi.buffer(buf)[:]
265
266
267 @utils.register_interface(AsymmetricVerificationContext)
268 class _RSAVerificationContext(object):
269 def __init__(self, backend, public_key, signature, padding, algorithm):
270 self._backend = backend
271 self._public_key = public_key
272 self._signature = signature
273
274 if not isinstance(padding, AsymmetricPadding):
275 raise TypeError("Expected provider of AsymmetricPadding.")
276
277 self._pkey_size = self._backend._lib.EVP_PKEY_size(
278 self._public_key._evp_pkey
279 )
280 self._backend.openssl_assert(self._pkey_size > 0)
281
282 if isinstance(padding, PKCS1v15):
283 self._padding_enum = self._backend._lib.RSA_PKCS1_PADDING
284 elif isinstance(padding, PSS):
285 if not isinstance(padding._mgf, MGF1):
286 raise UnsupportedAlgorithm(
287 "Only MGF1 is supported by this backend.",
288 _Reasons.UNSUPPORTED_MGF
289 )
290
291 # Size of key in bytes - 2 is the maximum
292 # PSS signature length (salt length is checked later)
293 if self._pkey_size - algorithm.digest_size - 2 < 0:
294 raise ValueError(
295 "Digest too large for key size. Check that you have the "
296 "correct key and digest algorithm."
297 )
298
299 if not self._backend._pss_mgf1_hash_supported(
300 padding._mgf._algorithm
301 ):
302 raise UnsupportedAlgorithm(
303 "When OpenSSL is older than 1.0.1 then only SHA1 is "
304 "supported with MGF1.",
305 _Reasons.UNSUPPORTED_HASH
306 )
307
308 self._padding_enum = self._backend._lib.RSA_PKCS1_PSS_PADDING
309 else:
310 raise UnsupportedAlgorithm(
311 "{0} is not supported by this backend.".format(padding.name),
312 _Reasons.UNSUPPORTED_PADDING
313 )
314
315 self._padding = padding
316 self._algorithm = algorithm
317 self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
318
319 def update(self, data):
320 self._hash_ctx.update(data)
321
322 def verify(self):
323 evp_md = self._backend._lib.EVP_get_digestbyname(
324 self._algorithm.name.encode("ascii"))
325 self._backend.openssl_assert(evp_md != self._backend._ffi.NULL)
326
327 pkey_ctx = self._backend._lib.EVP_PKEY_CTX_new(
328 self._public_key._evp_pkey, self._backend._ffi.NULL
329 )
330 self._backend.openssl_assert(pkey_ctx != self._backend._ffi.NULL)
331 pkey_ctx = self._backend._ffi.gc(pkey_ctx,
332 self._backend._lib.EVP_PKEY_CTX_free)
333 res = self._backend._lib.EVP_PKEY_verify_init(pkey_ctx)
334 self._backend.openssl_assert(res == 1)
335 res = self._backend._lib.EVP_PKEY_CTX_set_signature_md(
336 pkey_ctx, evp_md)
337 self._backend.openssl_assert(res > 0)
338
339 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_padding(
340 pkey_ctx, self._padding_enum)
341 self._backend.openssl_assert(res > 0)
342 if isinstance(self._padding, PSS):
343 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_pss_saltlen(
344 pkey_ctx,
345 _get_rsa_pss_salt_length(
346 self._padding,
347 self._public_key,
348 self._hash_ctx.algorithm,
349 )
350 )
351 self._backend.openssl_assert(res > 0)
352 if self._backend._lib.Cryptography_HAS_MGF1_MD:
353 # MGF1 MD is configurable in OpenSSL 1.0.1+
354 mgf1_md = self._backend._lib.EVP_get_digestbyname(
355 self._padding._mgf._algorithm.name.encode("ascii"))
356 self._backend.openssl_assert(
357 mgf1_md != self._backend._ffi.NULL
358 )
359 res = self._backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(
360 pkey_ctx, mgf1_md
361 )
362 self._backend.openssl_assert(res > 0)
363
364 data_to_verify = self._hash_ctx.finalize()
365 res = self._backend._lib.EVP_PKEY_verify(
366 pkey_ctx,
367 self._signature,
368 len(self._signature),
369 data_to_verify,
370 len(data_to_verify)
371 )
372 # The previous call can return negative numbers in the event of an
373 # error. This is not a signature failure but we need to fail if it
374 # occurs.
375 self._backend.openssl_assert(res >= 0)
376 if res == 0:
377 errors = self._backend._consume_errors()
378 assert errors
379 raise InvalidSignature
380
381
382 @utils.register_interface(RSAPrivateKeyWithSerialization)
383 class _RSAPrivateKey(object):
384 def __init__(self, backend, rsa_cdata, evp_pkey):
385 self._backend = backend
386 self._rsa_cdata = rsa_cdata
387 self._evp_pkey = evp_pkey
388
389 n = self._backend._ffi.new("BIGNUM **")
390 self._backend._lib.RSA_get0_key(
391 self._rsa_cdata, n, self._backend._ffi.NULL,
392 self._backend._ffi.NULL
393 )
394 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
395 self._key_size = self._backend._lib.BN_num_bits(n[0])
396
397 key_size = utils.read_only_property("_key_size")
398
399 def signer(self, padding, algorithm):
400 return _RSASignatureContext(self._backend, self, padding, algorithm)
401
402 def decrypt(self, ciphertext, padding):
403 key_size_bytes = int(math.ceil(self.key_size / 8.0))
404 if key_size_bytes != len(ciphertext):
405 raise ValueError("Ciphertext length must be equal to key size.")
406
407 return _enc_dec_rsa(self._backend, self, ciphertext, padding)
408
409 def public_key(self):
410 ctx = self._backend._lib.RSAPublicKey_dup(self._rsa_cdata)
411 self._backend.openssl_assert(ctx != self._backend._ffi.NULL)
412 ctx = self._backend._ffi.gc(ctx, self._backend._lib.RSA_free)
413 res = self._backend._lib.RSA_blinding_on(ctx, self._backend._ffi.NULL)
414 self._backend.openssl_assert(res == 1)
415 evp_pkey = self._backend._rsa_cdata_to_evp_pkey(ctx)
416 return _RSAPublicKey(self._backend, ctx, evp_pkey)
417
418 def private_numbers(self):
419 n = self._backend._ffi.new("BIGNUM **")
420 e = self._backend._ffi.new("BIGNUM **")
421 d = self._backend._ffi.new("BIGNUM **")
422 p = self._backend._ffi.new("BIGNUM **")
423 q = self._backend._ffi.new("BIGNUM **")
424 dmp1 = self._backend._ffi.new("BIGNUM **")
425 dmq1 = self._backend._ffi.new("BIGNUM **")
426 iqmp = self._backend._ffi.new("BIGNUM **")
427 self._backend._lib.RSA_get0_key(self._rsa_cdata, n, e, d)
428 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
429 self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
430 self._backend.openssl_assert(d[0] != self._backend._ffi.NULL)
431 self._backend._lib.RSA_get0_factors(self._rsa_cdata, p, q)
432 self._backend.openssl_assert(p[0] != self._backend._ffi.NULL)
433 self._backend.openssl_assert(q[0] != self._backend._ffi.NULL)
434 self._backend._lib.RSA_get0_crt_params(
435 self._rsa_cdata, dmp1, dmq1, iqmp
436 )
437 self._backend.openssl_assert(dmp1[0] != self._backend._ffi.NULL)
438 self._backend.openssl_assert(dmq1[0] != self._backend._ffi.NULL)
439 self._backend.openssl_assert(iqmp[0] != self._backend._ffi.NULL)
440 return rsa.RSAPrivateNumbers(
441 p=self._backend._bn_to_int(p[0]),
442 q=self._backend._bn_to_int(q[0]),
443 d=self._backend._bn_to_int(d[0]),
444 dmp1=self._backend._bn_to_int(dmp1[0]),
445 dmq1=self._backend._bn_to_int(dmq1[0]),
446 iqmp=self._backend._bn_to_int(iqmp[0]),
447 public_numbers=rsa.RSAPublicNumbers(
448 e=self._backend._bn_to_int(e[0]),
449 n=self._backend._bn_to_int(n[0]),
450 )
451 )
452
453 def private_bytes(self, encoding, format, encryption_algorithm):
454 return self._backend._private_key_bytes(
455 encoding,
456 format,
457 encryption_algorithm,
458 self._evp_pkey,
459 self._rsa_cdata
460 )
461
462 def sign(self, data, padding, algorithm):
463 signer = self.signer(padding, algorithm)
464 signer.update(data)
465 signature = signer.finalize()
466 return signature
467
468
469 @utils.register_interface(RSAPublicKeyWithSerialization)
470 class _RSAPublicKey(object):
471 def __init__(self, backend, rsa_cdata, evp_pkey):
472 self._backend = backend
473 self._rsa_cdata = rsa_cdata
474 self._evp_pkey = evp_pkey
475
476 n = self._backend._ffi.new("BIGNUM **")
477 self._backend._lib.RSA_get0_key(
478 self._rsa_cdata, n, self._backend._ffi.NULL,
479 self._backend._ffi.NULL
480 )
481 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
482 self._key_size = self._backend._lib.BN_num_bits(n[0])
483
484 key_size = utils.read_only_property("_key_size")
485
486 def verifier(self, signature, padding, algorithm):
487 if not isinstance(signature, bytes):
488 raise TypeError("signature must be bytes.")
489
490 return _RSAVerificationContext(
491 self._backend, self, signature, padding, algorithm
492 )
493
494 def encrypt(self, plaintext, padding):
495 return _enc_dec_rsa(self._backend, self, plaintext, padding)
496
497 def public_numbers(self):
498 n = self._backend._ffi.new("BIGNUM **")
499 e = self._backend._ffi.new("BIGNUM **")
500 self._backend._lib.RSA_get0_key(
501 self._rsa_cdata, n, e, self._backend._ffi.NULL
502 )
503 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
504 self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
505 return rsa.RSAPublicNumbers(
506 e=self._backend._bn_to_int(e[0]),
507 n=self._backend._bn_to_int(n[0]),
508 )
509
510 def public_bytes(self, encoding, format):
511 return self._backend._public_key_bytes(
512 encoding,
513 format,
514 self,
515 self._evp_pkey,
516 self._rsa_cdata
517 )
518
519 def verify(self, signature, data, padding, algorithm):
520 verifier = self.verifier(signature, padding, algorithm)
521 verifier.update(data)
522 verifier.verify()
523
[end of src/cryptography/hazmat/backends/openssl/rsa.py]
[start of src/cryptography/hazmat/primitives/asymmetric/rsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 from fractions import gcd
9
10 import six
11
12 from cryptography import utils
13 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
14 from cryptography.hazmat.backends.interfaces import RSABackend
15
16
17 @six.add_metaclass(abc.ABCMeta)
18 class RSAPrivateKey(object):
19 @abc.abstractmethod
20 def signer(self, padding, algorithm):
21 """
22 Returns an AsymmetricSignatureContext used for signing data.
23 """
24
25 @abc.abstractmethod
26 def decrypt(self, ciphertext, padding):
27 """
28 Decrypts the provided ciphertext.
29 """
30
31 @abc.abstractproperty
32 def key_size(self):
33 """
34 The bit length of the public modulus.
35 """
36
37 @abc.abstractmethod
38 def public_key(self):
39 """
40 The RSAPublicKey associated with this private key.
41 """
42
43 @abc.abstractmethod
44 def sign(self, data, padding, algorithm):
45 """
46 Signs the data.
47 """
48
49
50 @six.add_metaclass(abc.ABCMeta)
51 class RSAPrivateKeyWithSerialization(RSAPrivateKey):
52 @abc.abstractmethod
53 def private_numbers(self):
54 """
55 Returns an RSAPrivateNumbers.
56 """
57
58 @abc.abstractmethod
59 def private_bytes(self, encoding, format, encryption_algorithm):
60 """
61 Returns the key serialized as bytes.
62 """
63
64
65 @six.add_metaclass(abc.ABCMeta)
66 class RSAPublicKey(object):
67 @abc.abstractmethod
68 def verifier(self, signature, padding, algorithm):
69 """
70 Returns an AsymmetricVerificationContext used for verifying signatures.
71 """
72
73 @abc.abstractmethod
74 def encrypt(self, plaintext, padding):
75 """
76 Encrypts the given plaintext.
77 """
78
79 @abc.abstractproperty
80 def key_size(self):
81 """
82 The bit length of the public modulus.
83 """
84
85 @abc.abstractmethod
86 def public_numbers(self):
87 """
88 Returns an RSAPublicNumbers
89 """
90
91 @abc.abstractmethod
92 def public_bytes(self, encoding, format):
93 """
94 Returns the key serialized as bytes.
95 """
96
97 @abc.abstractmethod
98 def verify(self, signature, data, padding, algorithm):
99 """
100 Verifies the signature of the data.
101 """
102
103
104 RSAPublicKeyWithSerialization = RSAPublicKey
105
106
107 def generate_private_key(public_exponent, key_size, backend):
108 if not isinstance(backend, RSABackend):
109 raise UnsupportedAlgorithm(
110 "Backend object does not implement RSABackend.",
111 _Reasons.BACKEND_MISSING_INTERFACE
112 )
113
114 _verify_rsa_parameters(public_exponent, key_size)
115 return backend.generate_rsa_private_key(public_exponent, key_size)
116
117
118 def _verify_rsa_parameters(public_exponent, key_size):
119 if public_exponent < 3:
120 raise ValueError("public_exponent must be >= 3.")
121
122 if public_exponent & 1 == 0:
123 raise ValueError("public_exponent must be odd.")
124
125 if key_size < 512:
126 raise ValueError("key_size must be at least 512-bits.")
127
128
129 def _check_private_key_components(p, q, private_exponent, dmp1, dmq1, iqmp,
130 public_exponent, modulus):
131 if modulus < 3:
132 raise ValueError("modulus must be >= 3.")
133
134 if p >= modulus:
135 raise ValueError("p must be < modulus.")
136
137 if q >= modulus:
138 raise ValueError("q must be < modulus.")
139
140 if dmp1 >= modulus:
141 raise ValueError("dmp1 must be < modulus.")
142
143 if dmq1 >= modulus:
144 raise ValueError("dmq1 must be < modulus.")
145
146 if iqmp >= modulus:
147 raise ValueError("iqmp must be < modulus.")
148
149 if private_exponent >= modulus:
150 raise ValueError("private_exponent must be < modulus.")
151
152 if public_exponent < 3 or public_exponent >= modulus:
153 raise ValueError("public_exponent must be >= 3 and < modulus.")
154
155 if public_exponent & 1 == 0:
156 raise ValueError("public_exponent must be odd.")
157
158 if dmp1 & 1 == 0:
159 raise ValueError("dmp1 must be odd.")
160
161 if dmq1 & 1 == 0:
162 raise ValueError("dmq1 must be odd.")
163
164 if p * q != modulus:
165 raise ValueError("p*q must equal modulus.")
166
167
168 def _check_public_key_components(e, n):
169 if n < 3:
170 raise ValueError("n must be >= 3.")
171
172 if e < 3 or e >= n:
173 raise ValueError("e must be >= 3 and < n.")
174
175 if e & 1 == 0:
176 raise ValueError("e must be odd.")
177
178
179 def _modinv(e, m):
180 """
181 Modular Multiplicative Inverse. Returns x such that: (x*e) mod m == 1
182 """
183 x1, y1, x2, y2 = 1, 0, 0, 1
184 a, b = e, m
185 while b > 0:
186 q, r = divmod(a, b)
187 xn, yn = x1 - q * x2, y1 - q * y2
188 a, b, x1, y1, x2, y2 = b, r, x2, y2, xn, yn
189 return x1 % m
190
191
192 def rsa_crt_iqmp(p, q):
193 """
194 Compute the CRT (q ** -1) % p value from RSA primes p and q.
195 """
196 return _modinv(q, p)
197
198
199 def rsa_crt_dmp1(private_exponent, p):
200 """
201 Compute the CRT private_exponent % (p - 1) value from the RSA
202 private_exponent (d) and p.
203 """
204 return private_exponent % (p - 1)
205
206
207 def rsa_crt_dmq1(private_exponent, q):
208 """
209 Compute the CRT private_exponent % (q - 1) value from the RSA
210 private_exponent (d) and q.
211 """
212 return private_exponent % (q - 1)
213
214
215 # Controls the number of iterations rsa_recover_prime_factors will perform
216 # to obtain the prime factors. Each iteration increments by 2 so the actual
217 # maximum attempts is half this number.
218 _MAX_RECOVERY_ATTEMPTS = 1000
219
220
221 def rsa_recover_prime_factors(n, e, d):
222 """
223 Compute factors p and q from the private exponent d. We assume that n has
224 no more than two factors. This function is adapted from code in PyCrypto.
225 """
226 # See 8.2.2(i) in Handbook of Applied Cryptography.
227 ktot = d * e - 1
228 # The quantity d*e-1 is a multiple of phi(n), even,
229 # and can be represented as t*2^s.
230 t = ktot
231 while t % 2 == 0:
232 t = t // 2
233 # Cycle through all multiplicative inverses in Zn.
234 # The algorithm is non-deterministic, but there is a 50% chance
235 # any candidate a leads to successful factoring.
236 # See "Digitalized Signatures and Public Key Functions as Intractable
237 # as Factorization", M. Rabin, 1979
238 spotted = False
239 a = 2
240 while not spotted and a < _MAX_RECOVERY_ATTEMPTS:
241 k = t
242 # Cycle through all values a^{t*2^i}=a^k
243 while k < ktot:
244 cand = pow(a, k, n)
245 # Check if a^k is a non-trivial root of unity (mod n)
246 if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1:
247 # We have found a number such that (cand-1)(cand+1)=0 (mod n).
248 # Either of the terms divides n.
249 p = gcd(cand + 1, n)
250 spotted = True
251 break
252 k *= 2
253 # This value was not any good... let's try another!
254 a += 2
255 if not spotted:
256 raise ValueError("Unable to compute factors p and q from exponent d.")
257 # Found !
258 q, r = divmod(n, p)
259 assert r == 0
260 p, q = sorted((p, q), reverse=True)
261 return (p, q)
262
263
264 class RSAPrivateNumbers(object):
265 def __init__(self, p, q, d, dmp1, dmq1, iqmp,
266 public_numbers):
267 if (
268 not isinstance(p, six.integer_types) or
269 not isinstance(q, six.integer_types) or
270 not isinstance(d, six.integer_types) or
271 not isinstance(dmp1, six.integer_types) or
272 not isinstance(dmq1, six.integer_types) or
273 not isinstance(iqmp, six.integer_types)
274 ):
275 raise TypeError(
276 "RSAPrivateNumbers p, q, d, dmp1, dmq1, iqmp arguments must"
277 " all be an integers."
278 )
279
280 if not isinstance(public_numbers, RSAPublicNumbers):
281 raise TypeError(
282 "RSAPrivateNumbers public_numbers must be an RSAPublicNumbers"
283 " instance."
284 )
285
286 self._p = p
287 self._q = q
288 self._d = d
289 self._dmp1 = dmp1
290 self._dmq1 = dmq1
291 self._iqmp = iqmp
292 self._public_numbers = public_numbers
293
294 p = utils.read_only_property("_p")
295 q = utils.read_only_property("_q")
296 d = utils.read_only_property("_d")
297 dmp1 = utils.read_only_property("_dmp1")
298 dmq1 = utils.read_only_property("_dmq1")
299 iqmp = utils.read_only_property("_iqmp")
300 public_numbers = utils.read_only_property("_public_numbers")
301
302 def private_key(self, backend):
303 return backend.load_rsa_private_numbers(self)
304
305 def __eq__(self, other):
306 if not isinstance(other, RSAPrivateNumbers):
307 return NotImplemented
308
309 return (
310 self.p == other.p and
311 self.q == other.q and
312 self.d == other.d and
313 self.dmp1 == other.dmp1 and
314 self.dmq1 == other.dmq1 and
315 self.iqmp == other.iqmp and
316 self.public_numbers == other.public_numbers
317 )
318
319 def __ne__(self, other):
320 return not self == other
321
322 def __hash__(self):
323 return hash((
324 self.p,
325 self.q,
326 self.d,
327 self.dmp1,
328 self.dmq1,
329 self.iqmp,
330 self.public_numbers,
331 ))
332
333
334 class RSAPublicNumbers(object):
335 def __init__(self, e, n):
336 if (
337 not isinstance(e, six.integer_types) or
338 not isinstance(n, six.integer_types)
339 ):
340 raise TypeError("RSAPublicNumbers arguments must be integers.")
341
342 self._e = e
343 self._n = n
344
345 e = utils.read_only_property("_e")
346 n = utils.read_only_property("_n")
347
348 def public_key(self, backend):
349 return backend.load_rsa_public_numbers(self)
350
351 def __repr__(self):
352 return "<RSAPublicNumbers(e={0.e}, n={0.n})>".format(self)
353
354 def __eq__(self, other):
355 if not isinstance(other, RSAPublicNumbers):
356 return NotImplemented
357
358 return self.e == other.e and self.n == other.n
359
360 def __ne__(self, other):
361 return not self == other
362
363 def __hash__(self):
364 return hash((self.e, self.n))
365
[end of src/cryptography/hazmat/primitives/asymmetric/rsa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pyca/cryptography
|
d8a27df32b1ae35f165b00a644bd2432f6e44280
|
scrypt bounds checking
```
[11:23:58] <Alex_Gaynor> reaperhulk: what happens if you pass a non-even n?
[11:24:10] <Alex_Gaynor> Or a negative value for any of the params?
```
Presumably it will fail with an assertion error on return from the call to `EVP_PBE_scrypt`, but we shouldn't allow those types of errors.
cc @Ayrx.
|
2016-09-02T07:26:50Z
|
<patch>
diff --git a/src/cryptography/hazmat/primitives/kdf/scrypt.py b/src/cryptography/hazmat/primitives/kdf/scrypt.py
--- a/src/cryptography/hazmat/primitives/kdf/scrypt.py
+++ b/src/cryptography/hazmat/primitives/kdf/scrypt.py
@@ -25,6 +25,16 @@ def __init__(self, salt, length, n, r, p, backend):
self._length = length
if not isinstance(salt, bytes):
raise TypeError("salt must be bytes.")
+
+ if n < 2 or (n & (n - 1)) != 0:
+ raise ValueError("n must be greater than 1 and be a power of 2.")
+
+ if r < 1:
+ raise ValueError("r must be greater than or equal to 1.")
+
+ if p < 1:
+ raise ValueError("p must be greater than or equal to 1.")
+
self._used = False
self._salt = salt
self._n = n
</patch>
|
[]
|
[]
| ||||
wagtail__wagtail-10039
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🎛️ Migrate site switcher to use Stimulus approach `ActionController`
> ℹ️ **Part of the [Stimulus 🎛️ RFC 78](https://github.com/wagtail/rfcs/pull/78)**
### Is your proposal related to a problem?
There is a custom JavaScript implementation that adds behaviour to a select drop-down so that it updates the location (URL) when changed.
This approach should be very close to what we are already doing with the `SubmitController`, so let's do a bit of clean up to avoid too much ad-hoc JS.
### Describe the solution you'd like
* Update the implementation of `client/src/controllers/SubmitController.ts` to allow for a new [Stimulus Value](https://stimulus.hotwired.dev/reference/values) called `updateAction`.
* When in use, the existing method `submit` will update the form's action value from the source element's value before submitting. `form.setAttribute('action', this.element.value); // example`
* Essentially we want to use the form `get` submit to do the location change, instead of updating the `window.location.url`.
* However, we need to ensure the right page is loaded, hence we need to revise `action` dynamically when the user selects the option.
* Remove the jQuery implementation completely [`wagtail/contrib/settings/static_src/wagtailsettings/js/site-switcher.js`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/static_src/wagtailsettings/js/site-switcher.js)
* Update the select field to have suitable data attributes in [`wagtail/contrib/settings/forms.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/forms.py#L23).
* Unit tests in JavaScript **must** be included with a PR.
* Validate that the 'current' option in the select drop-down for the site switcher still functions, so that selecting it will not do anything. See wagtail/contrib/settings/forms.py (Update: This is not a huge problem; the browser will not trigger a `change` event if the value has not changed).
#### Example HTML
```html
<form method="get" id="settings-site-switch" novalidate>
<select
name="site-switcher"
data-controller="w-submit"
data-action="change->w-submit#submit"
data-w-submit-update-action-value="true"
>
<option value="/path/to/current-site" selected>current.com</option>
<option value="/path/to/other-site">other.com</option>
</select>
</form>
```
### Additional notes
* Remember that Site Settings is not available in the bakery demo by default, you will need to add this locally to validate the behaviour https://docs.wagtail.org/en/stable/reference/contrib/settings.html
* `AutoFieldController` was added in this PR https://github.com/wagtail/wagtail/pull/9337 and then renamed to `SubmitController` in https://github.com/wagtail/wagtail/pull/10098
* The actual `form` HTML is located in [`wagtail/contrib/settings/templates/wagtailsettings/edit.html`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/templates/wagtailsettings/edit.html) - this HTML should not need changes but good to note
</issue>
<code>
[start of README.md]
1 <h1 align="center">
2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail">
3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail">
4 </h1>
5 <p align="center">
6 <br>
7 <a href="https://github.com/wagtail/wagtail/actions">
8 <img src="https://github.com/wagtail/wagtail/workflows/Wagtail%20CI/badge.svg" alt="Build Status" />
9 </a>
10 <a href="https://opensource.org/licenses/BSD-3-Clause">
11 <img src="https://img.shields.io/badge/license-BSD-blue.svg" alt="License" />
12 </a>
13 <a href="https://pypi.python.org/pypi/wagtail/">
14 <img src="https://img.shields.io/pypi/v/wagtail.svg" alt="Version" />
15 </a>
16 <a href="https://pypi.python.org/pypi/wagtail/">
17 <img src="https://img.shields.io/pypi/dm/wagtail?logo=Downloads" alt="Monthly downloads" />
18 </a>
19 <a href="https://twitter.com/WagtailCMS">
20 <img src="https://img.shields.io/twitter/follow/WagtailCMS?style=social&logo=twitter" alt="follow on Twitter">
21 </a>
22 </p>
23
24 Wagtail is an open source content management system built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers.
25
26 
27
28 ### 🔥 Features
29
30 - A fast, attractive interface for authors
31 - Complete control over front-end design and structure
32 - Scales to millions of pages and thousands of editors
33 - Fast out of the box, cache-friendly when you need it
34 - Content API for 'headless' sites with de-coupled front-end
35 - Runs on a Raspberry Pi or a multi-datacenter cloud platform
36 - StreamField encourages flexible content without compromising structure
37 - Powerful, integrated search, using Elasticsearch or PostgreSQL
38 - Excellent support for images and embedded content
39 - Multi-site and multi-language ready
40 - Embraces and extends Django
41
42 Find out more at [wagtail.org](https://wagtail.org/).
43
44 ### 👉 Getting started
45
46 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform.
47
48 To get started with using Wagtail, run the following in a virtual environment:
49
50 
51
52 ```sh
53 pip install wagtail
54 wagtail start mysite
55 cd mysite
56 pip install -r requirements.txt
57 python manage.py migrate
58 python manage.py createsuperuser
59 python manage.py runserver
60 ```
61
62 For detailed installation and setup docs, see [the getting started tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html).
63
64 ### 👨👩👧👦 Who’s using it?
65
66 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org).
67
68 ### 📖 Documentation
69
70 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap.
71
72 For those who are **new to Wagtail**, the [Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html) will help you understand what Wagtail is, and what Wagtail is _not_.
73
74 **For developers** who are ready to jump in to their first Wagtail website the [Getting Started Tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) will guide you through creating and editing your first page.
75
76 **Do you have an existing Django project?** The [Wagtail Integration documentation](https://docs.wagtail.org/en/stable/getting_started/integrating_into_django.html) is the best place to start.
77
78 ### 📌 Compatibility
79
80 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_
81
82 Wagtail supports:
83
84 - Django 3.2.x, 4.1.x and 4.2.x
85 - Python 3.7, 3.8, 3.9, 3.10 and 3.11
86 - PostgreSQL, MySQL and SQLite (with JSON1) as database backends
87
88 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions.
89
90 ---
91
92 ### 📢 Community Support
93
94 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail".
95
96 For topics and discussions that do not fit Stack Overflow's question and answer format we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). Please respect the time and effort of volunteers by not asking the same question in multiple places.
97
98 [](https://github.com/wagtail/wagtail/wiki/Slack)
99
100 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project.
101
102 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail).
103
104 ### 🧑💼 Commercial Support
105
106 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world.
107
108 ### 🔐 Security
109
110 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue.
111
112 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers.
113
114 ### 🕒 Release schedule
115
116 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule).
117
118 #### 🕛 Nightly releases
119
120 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html
121
122 ### 🙋🏽 Contributing
123
124 If you're a Python or Django developer, fork the repo and get stuck in! We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack).
125
126 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label.
127
128 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://explore.transifex.com/torchbox/wagtail/).
129
130 ### 🔓 License
131
132 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE) - Free to use and modify for any purpose, including both open and closed-source code.
133
134 ### 👏 Thanks
135
136 We thank the following organisations for their services used in Wagtail's development:
137
138 [](https://www.browserstack.com/)<br>
139 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing.
140
141 [](https://www.squash.io/)<br>
142 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests.
143
144 [](https://assistivlabs.com/)<br>
145 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies.
146
[end of README.md]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Wagtail documentation build configuration file, created by
4 # sphinx-quickstart on Tue Jan 14 17:38:55 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import os
16 import sys
17 from datetime import datetime
18
19 import django
20 import sphinx_wagtail_theme
21
22 from wagtail import VERSION, __version__
23
24 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
25 on_rtd = os.environ.get("READTHEDOCS", None) == "True"
26
27 html_theme = "sphinx_wagtail_theme"
28 html_theme_path = [sphinx_wagtail_theme.get_html_theme_path()]
29
30 html_theme_options = {
31 "project_name": "Wagtail Documentation",
32 "github_url": "https://github.com/wagtail/wagtail/blob/main/docs/",
33 }
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath(".."))
39
40 # Autodoc may need to import some models modules which require django settings
41 # be configured
42 os.environ["DJANGO_SETTINGS_MODULE"] = "wagtail.test.settings"
43 django.setup()
44
45 # Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD
46 os.environ["DATABASE_ENGINE"] = "django.db.backends.sqlite3"
47
48 # -- General configuration ------------------------------------------------
49
50 # If your documentation needs a minimal Sphinx version, state it here.
51 # needs_sphinx = '1.0'
52
53 # Add any Sphinx extension module names here, as strings. They can be
54 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
55 # ones.
56 extensions = [
57 "sphinx.ext.autodoc",
58 "sphinx.ext.intersphinx",
59 "sphinx_copybutton",
60 "myst_parser",
61 "sphinx_wagtail_theme",
62 ]
63
64 if not on_rtd:
65 extensions.append("sphinxcontrib.spelling")
66
67 # Add any paths that contain templates here, relative to this directory.
68 templates_path = ["_templates"]
69
70 # The suffix of source filenames.
71 source_suffix = ".rst"
72
73 # The encoding of source files.
74 # source_encoding = 'utf-8-sig'
75
76 # The master toctree document.
77 master_doc = "index"
78
79 # General information about the project.
80 project = "Wagtail Documentation"
81 copyright = f"{datetime.now().year}, Torchbox and contributors"
82
83 # The version info for the project you're documenting, acts as replacement for
84 # |version| and |release|, also used in various other places throughout the
85 # built documents.
86
87 # The short X.Y version.
88 version = "{}.{}".format(VERSION[0], VERSION[1])
89 # The full version, including alpha/beta/rc tags.
90 release = __version__
91
92 # The language for content autogenerated by Sphinx. Refer to documentation
93 # for a list of supported languages.
94 # language = None
95
96 # There are two options for replacing |today|: either, you set today to some
97 # non-false value, then it is used:
98 # today = ''
99 # Else, today_fmt is used as the format for a strftime call.
100 # today_fmt = '%B %d, %Y'
101
102 # List of patterns, relative to source directory, that match files and
103 # directories to ignore when looking for source files.
104 exclude_patterns = ["_build", "README.md"]
105
106 # The reST default role (used for this markup: `text`) to use for all
107 # documents.
108 # default_role = None
109
110 # If true, '()' will be appended to :func: etc. cross-reference text.
111 # add_function_parentheses = True
112
113 # If true, the current module name will be prepended to all description
114 # unit titles (such as .. function::).
115 # add_module_names = True
116
117 # If true, sectionauthor and moduleauthor directives will be shown in the
118 # output. They are ignored by default.
119 # show_authors = False
120
121 # The name of the Pygments (syntax highlighting) style to use.
122 pygments_style = None
123
124 # A list of ignored prefixes for module index sorting.
125 # modindex_common_prefix = []
126
127 # If true, keep warnings as "system message" paragraphs in the built documents.
128 # keep_warnings = False
129
130 # sphinxcontrib.spelling settings
131
132 spelling_lang = "en_GB"
133 spelling_word_list_filename = "spelling_wordlist.txt"
134
135 # sphinx.ext.intersphinx settings
136 intersphinx_mapping = {
137 "django": (
138 "https://docs.djangoproject.com/en/stable/",
139 "https://docs.djangoproject.com/en/stable/_objects/",
140 )
141 }
142
143 # -- Options for HTML output ----------------------------------------------
144
145 # Theme options are theme-specific and customise the look and feel of a theme
146 # further. For a list of options available for each theme, see the
147 # documentation.
148 # html_theme_options = {}
149
150 # The name for this set of Sphinx documents. If None, it defaults to
151 # "<project> v<release> documentation".
152 # html_title = None
153
154 # A shorter title for the navigation bar. Default is the same as html_title.
155 # html_short_title = None
156
157 # The name of an image file (relative to this directory) to place at the top
158 # of the sidebar.
159 # html_logo = 'logo.png'
160
161 # The name of an image file (within the static path) to use as favicon of the
162 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
163 # pixels large.
164 html_favicon = "favicon.ico"
165
166 # Add any paths that contain custom static files (such as style sheets) here,
167 # relative to this directory. They are copied after the builtin static files,
168 # so a file named "default.css" will overwrite the builtin "default.css".
169 html_static_path = ["_static"]
170
171 # Add any extra paths that contain custom files (such as robots.txt or
172 # .htaccess) here, relative to this directory. These files are copied
173 # directly to the root of the documentation.
174 html_extra_path = ["public"]
175
176 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
177 # using the given strftime format.
178 # html_last_updated_fmt = '%b %d, %Y'
179
180 # If true, SmartyPants will be used to convert quotes and dashes to
181 # typographically correct entities.
182 # html_use_smartypants = True
183
184 # Custom sidebar templates, maps document names to template names.
185 # html_sidebars = {}
186
187 # Additional templates that should be rendered to pages, maps page names to
188 # template names.
189 # html_additional_pages = {}
190
191 # If false, no module index is generated.
192 # html_domain_indices = True
193
194 # If false, no index is generated.
195 # Since we are implementing search with Algolia DocSearch, we do not need Sphinx to
196 # generate its own index. It might not hurt to keep the Sphinx index, but disabling it
197 # could potentially speed up the build process.
198 html_use_index = False
199
200 # If true, the index is split into individual pages for each letter.
201 # html_split_index = False
202
203 # If true, links to the reST sources are added to the pages.
204 # html_show_sourcelink = True
205
206 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
207 # html_show_sphinx = True
208
209 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
210 # html_show_copyright = True
211
212 # If true, an OpenSearch description file will be output, and all pages will
213 # contain a <link> tag referring to it. The value of this option must be the
214 # base URL from which the finished HTML is served.
215 # html_use_opensearch = ''
216
217 # This is the file name suffix for HTML files (for example ".xhtml").
218 # html_file_suffix = None
219
220 # Output file base name for HTML help builder.
221 htmlhelp_basename = "Wagtaildoc"
222
223 # -- Options for LaTeX output ---------------------------------------------
224
225 latex_elements = {
226 # The paper size ('letterpaper' or 'a4paper').
227 # 'papersize': 'letterpaper',
228 # The font size ('10pt', '11pt' or '12pt').
229 # 'pointsize': '10pt',
230 # Additional stuff for the LaTeX preamble.
231 # 'preamble': '',
232 }
233
234 # Grouping the document tree into LaTeX files. List of tuples
235 # (source start file, target name, title,
236 # author, documentclass [howto, manual, or own class]).
237 latex_documents = [
238 ("index", "Wagtail.tex", "Wagtail Documentation", "Torchbox", "manual"),
239 ]
240
241 # The name of an image file (relative to this directory) to place at the top of
242 # the title page.
243 # latex_logo = None
244
245 # For "manual" documents, if this is true, then toplevel headings are parts,
246 # not chapters.
247 # latex_use_parts = False
248
249 # If true, show page references after internal links.
250 # latex_show_pagerefs = False
251
252 # If true, show URL addresses after external links.
253 # latex_show_urls = False
254
255 # Documents to append as an appendix to all manuals.
256 # latex_appendices = []
257
258 # If false, no module index is generated.
259 # latex_domain_indices = True
260
261 # -- Options for manual page output ---------------------------------------
262
263 # One entry per manual page. List of tuples
264 # (source start file, name, description, authors, manual section).
265 man_pages = [("index", "wagtail", "Wagtail Documentation", ["Torchbox"], 1)]
266
267 # If true, show URL addresses after external links.
268 # man_show_urls = False
269
270 # -- Options for Texinfo output -------------------------------------------
271
272 # Grouping the document tree into Texinfo files. List of tuples
273 # (source start file, target name, title, author,
274 # dir menu entry, description, category)
275 texinfo_documents = [
276 (
277 "index",
278 "Wagtail",
279 "Wagtail Documentation",
280 "Torchbox",
281 "Wagtail",
282 "One line description of project.",
283 "Miscellaneous",
284 ),
285 ]
286
287 # Documents to append as an appendix to all manuals.
288 # texinfo_appendices = []
289
290 # If false, no module index is generated.
291 # texinfo_domain_indices = True
292
293 # How to display URL addresses: 'footnote', 'no', or 'inline'.
294 # texinfo_show_urls = 'footnote'
295
296 # If true, do not generate a @detailmenu in the "Top" node's menu.
297 # texinfo_no_detailmenu = False
298
299
300 def setup(app):
301 app.add_js_file("js/banner.js")
302
[end of docs/conf.py]
[start of wagtail/contrib/simple_translation/forms.py]
1 from django import forms
2 from django.urls import reverse
3 from django.utils.safestring import mark_safe
4 from django.utils.translation import gettext_lazy, ngettext
5
6 from wagtail.models import Locale, Page
7
8
9 class CheckboxSelectMultipleWithDisabledOptions(forms.CheckboxSelectMultiple):
10 option_template_name = "simple_translation/admin/input_option.html"
11 disabled_values = []
12
13 def create_option(self, *args, **kwargs):
14 option = super().create_option(*args, **kwargs)
15 if option["value"] in self.disabled_values:
16 option["attrs"]["disabled"] = True
17 else:
18 # Only set target/action if not disabled to ignore change on disabled items
19 option["attrs"]["data-action"] = "w-bulk#toggle"
20 option["attrs"]["data-w-bulk-target"] = "item"
21 return option
22
23
24 class SubmitTranslationForm(forms.Form):
25 # Note: We don't actually use select_all in Python, it is just the
26 # easiest way to add the widget to the form. It's controlled in JS.
27 select_all = forms.BooleanField(
28 label=gettext_lazy("Select all"),
29 required=False,
30 widget=forms.CheckboxInput(
31 attrs={
32 "data-action": "w-bulk#toggleAll",
33 "data-w-bulk-target": "all",
34 },
35 ),
36 )
37
38 locales = forms.ModelMultipleChoiceField(
39 label=gettext_lazy("Locales"),
40 queryset=Locale.objects.none(),
41 widget=CheckboxSelectMultipleWithDisabledOptions,
42 )
43 include_subtree = forms.BooleanField(
44 required=False, help_text=gettext_lazy("All child pages will be created.")
45 )
46
47 def __init__(self, instance, *args, **kwargs):
48 super().__init__(*args, **kwargs)
49
50 hide_include_subtree = True
51 self.show_submit = True
52
53 if isinstance(instance, Page):
54 descendant_count = instance.get_descendants().count()
55
56 if descendant_count > 0:
57 hide_include_subtree = False
58 self.fields["include_subtree"].label = ngettext(
59 "Include subtree (%(descendant_count)s page)",
60 "Include subtree (%(descendant_count)s pages)",
61 descendant_count,
62 ) % {"descendant_count": descendant_count}
63
64 if hide_include_subtree:
65 self.fields["include_subtree"].widget = forms.HiddenInput()
66
67 untranslated_locales = Locale.objects.exclude(
68 id__in=instance.get_translations(inclusive=True).values_list(
69 "locale_id", flat=True
70 )
71 )
72 self.fields["locales"].queryset = untranslated_locales
73
74 # For snippets, hide select all if there is one option.
75 # Using len() instead of count() here as we're going to evaluate this queryset
76 # anyway and it gets cached so it'll only have one query in the end.
77 hide_select_all = len(untranslated_locales) < 2
78
79 if isinstance(instance, Page):
80 parent = instance.get_parent()
81
82 # Find allowed locale options.
83 if parent.is_root():
84 # All locale options are allowed.
85 allowed_locale_ids = Locale.objects.all().values_list("id", flat=True)
86 else:
87 # Only the locale options that have a translated parent are allowed.
88 allowed_locale_ids = (
89 instance.get_parent()
90 .get_translations(inclusive=True)
91 .values_list("locale_id", flat=True)
92 )
93
94 # Get and set the locale options that are disabled.
95 disabled_locales = Locale.objects.exclude(
96 id__in=allowed_locale_ids
97 ).values_list("id", flat=True)
98 self.fields["locales"].widget.disabled_values = disabled_locales
99
100 if disabled_locales:
101 # Display a help text.
102 url = reverse(
103 "simple_translation:submit_page_translation", args=[parent.id]
104 )
105 help_text = ngettext(
106 "A locale is disabled because a parent page is not translated.",
107 "Some locales are disabled because some parent pages are not translated.",
108 len(disabled_locales),
109 )
110 help_text += "<br>"
111 help_text += '<a href="{}">'.format(url)
112 help_text += ngettext(
113 "Translate the parent page.",
114 "Translate the parent pages.",
115 len(disabled_locales),
116 )
117 help_text += "</a>"
118 self.fields["locales"].help_text = mark_safe(help_text)
119
120 # For pages, if there is one locale or all locales are disabled.
121 hide_select_all = (
122 len(untranslated_locales) == 1
123 or len(untranslated_locales) - len(disabled_locales) == 0
124 )
125
126 # Hide the submit if all untranslated locales are disabled.
127 # This property is used in the template.
128 if len(untranslated_locales) == len(disabled_locales):
129 self.show_submit = False
130
131 if hide_select_all:
132 self.fields["select_all"].widget = forms.HiddenInput()
133
[end of wagtail/contrib/simple_translation/forms.py]
[start of wagtail/coreutils.py]
1 import functools
2 import inspect
3 import logging
4 import re
5 import unicodedata
6 from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
7
8 from anyascii import anyascii
9 from django.apps import apps
10 from django.conf import settings
11 from django.conf.locale import LANG_INFO
12 from django.core.exceptions import ImproperlyConfigured, SuspiciousOperation
13 from django.core.signals import setting_changed
14 from django.db.models import Model
15 from django.db.models.base import ModelBase
16 from django.dispatch import receiver
17 from django.http import HttpRequest
18 from django.test import RequestFactory
19 from django.utils.encoding import force_str
20 from django.utils.text import capfirst, slugify
21 from django.utils.translation import check_for_language, get_supported_language_variant
22 from django.utils.translation import gettext_lazy as _
23
24 if TYPE_CHECKING:
25 from wagtail.models import Site
26
27 logger = logging.getLogger(__name__)
28
29 WAGTAIL_APPEND_SLASH = getattr(settings, "WAGTAIL_APPEND_SLASH", True)
30
31
32 def camelcase_to_underscore(str):
33 # https://djangosnippets.org/snippets/585/
34 return (
35 re.sub("(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))", "_\\1", str).lower().strip("_")
36 )
37
38
39 def string_to_ascii(value):
40 """
41 Convert a string to ascii.
42 """
43
44 return str(anyascii(value))
45
46
47 def get_model_string(model):
48 """
49 Returns a string that can be used to identify the specified model.
50
51 The format is: `app_label.ModelName`
52
53     This can be reversed with the `resolve_model_string` function
54 """
55 return model._meta.app_label + "." + model.__name__
56
57
58 def resolve_model_string(model_string, default_app=None):
59 """
60 Resolve an 'app_label.model_name' string into an actual model class.
61 If a model class is passed in, just return that.
62
63 Raises a LookupError if a model can not be found, or ValueError if passed
64     something that is neither a model nor a string.
65 """
66 if isinstance(model_string, str):
67 try:
68 app_label, model_name = model_string.split(".")
69 except ValueError:
70 if default_app is not None:
71 # If we can't split, assume a model in current app
72 app_label = default_app
73 model_name = model_string
74 else:
75 raise ValueError(
76 "Can not resolve {0!r} into a model. Model names "
77 "should be in the form app_label.model_name".format(model_string),
78 model_string,
79 )
80
81 return apps.get_model(app_label, model_name)
82
83 elif isinstance(model_string, type) and issubclass(model_string, Model):
84 return model_string
85
86 else:
87 raise ValueError(
88 "Can not resolve {0!r} into a model".format(model_string), model_string
89 )
90
91
92 SCRIPT_RE = re.compile(r"<(-*)/script>")
93
94
95 def escape_script(text):
96 """
97 Escape `</script>` tags in 'text' so that it can be placed within a `<script>` block without
98 accidentally closing it. A '-' character will be inserted for each time it is escaped:
99 `<-/script>`, `<--/script>` etc.
100 """
101 return SCRIPT_RE.sub(r"<-\1/script>", text)
102
103
104 SLUGIFY_RE = re.compile(r"[^\w\s-]", re.UNICODE)
105
106
107 def cautious_slugify(value):
108 """
109 Convert a string to ASCII exactly as Django's slugify does, with the exception
110 that any non-ASCII alphanumeric characters (that cannot be ASCIIfied under Unicode
111 normalisation) are escaped into codes like 'u0421' instead of being deleted entirely.
112
113 This ensures that the result of slugifying (for example - Cyrillic) text will not be an empty
114 string, and can thus be safely used as an identifier (albeit not a human-readable one).
115 """
116 value = force_str(value)
117
118 # Normalize the string to decomposed unicode form. This causes accented Latin
119 # characters to be split into 'base character' + 'accent modifier'; the latter will
120 # be stripped out by the regexp, resulting in an ASCII-clean character that doesn't
121 # need to be escaped
122 value = unicodedata.normalize("NFKD", value)
123
124 # Strip out characters that aren't letterlike, underscores or hyphens,
125 # using the same regexp that slugify uses. This ensures that non-ASCII non-letters
126 # (accent modifiers, fancy punctuation) get stripped rather than escaped
127 value = SLUGIFY_RE.sub("", value)
128
129 # Encode as ASCII, escaping non-ASCII characters with backslashreplace, then convert
130 # back to a unicode string (which is what slugify expects)
131 value = value.encode("ascii", "backslashreplace").decode("ascii")
132
133 # Pass to slugify to perform final conversion (whitespace stripping, applying
134 # mark_safe); this will also strip out the backslashes from the 'backslashreplace'
135 # conversion
136 return slugify(value)
137
138
139 def safe_snake_case(value):
140 """
141     Convert a string to ASCII similar to Django's slugify, with cautious handling of
142 non-ASCII alphanumeric characters. See `cautious_slugify`.
143
144 Any inner whitespace, hyphens or dashes will be converted to underscores and
145 will be safe for Django template or filename usage.
146 """
147
148 slugified_ascii_string = cautious_slugify(value)
149
150 snake_case_string = slugified_ascii_string.replace("-", "_")
151
152 return snake_case_string
153
154
155 def get_content_type_label(content_type):
156 """
157 Return a human-readable label for a content type object, suitable for display in the admin
158 in place of the default 'wagtailcore | page' representation
159 """
160 if content_type is None:
161 return _("Unknown content type")
162
163 model = content_type.model_class()
164 if model:
165 return str(capfirst(model._meta.verbose_name))
166 else:
167 # no corresponding model class found; fall back on the name field of the ContentType
168 return capfirst(content_type.model)
169
170
171 def accepts_kwarg(func, kwarg):
172 """
173 Determine whether the callable `func` has a signature that accepts the keyword argument `kwarg`
174 """
175 signature = inspect.signature(func)
176 try:
177 signature.bind_partial(**{kwarg: None})
178 return True
179 except TypeError:
180 return False
181
182
183 class InvokeViaAttributeShortcut:
184 """
185 Used to create a shortcut that allows an object's named
186 single-argument method to be invoked using a simple
187 attribute reference syntax. For example, adding the
188 following to an object:
189
190 obj.page_url = InvokeViaAttributeShortcut(obj, 'get_page_url')
191
192 Would allow you to invoke get_page_url() like so:
193
194 obj.page_url.terms_and_conditions
195
196 As well as the usual:
197
198 obj.get_page_url('terms_and_conditions')
199 """
200
201 __slots__ = "obj", "method_name"
202
203 def __init__(self, obj, method_name):
204 self.obj = obj
205 self.method_name = method_name
206
207 def __getattr__(self, name):
208 method = getattr(self.obj, self.method_name)
209 return method(name)
210
211 def __getstate__(self):
212 return {"obj": self.obj, "method_name": self.method_name}
213
214 def __setstate__(self, state):
215 self.obj = state["obj"]
216 self.method_name = state["method_name"]
217
218
219 def find_available_slug(parent, requested_slug, ignore_page_id=None):
220 """
221 Finds an available slug within the specified parent.
222
223 If the requested slug is not available, this adds a number on the end, for example:
224
225 - 'requested-slug'
226 - 'requested-slug-1'
227 - 'requested-slug-2'
228
229 And so on, until an available slug is found.
230
231     The `ignore_page_id` keyword argument is useful when you are updating a page:
232     you can pass the page being updated here so that its current slug is not
233     treated as in use by another page.
234 """
235 pages = parent.get_children().filter(slug__startswith=requested_slug)
236
237 if ignore_page_id:
238 pages = pages.exclude(id=ignore_page_id)
239
240 existing_slugs = set(pages.values_list("slug", flat=True))
241 slug = requested_slug
242 number = 1
243
244 while slug in existing_slugs:
245 slug = requested_slug + "-" + str(number)
246 number += 1
247
248 return slug
249
250
251 @functools.lru_cache()
252 def get_content_languages():
253 """
254 Cache of settings.WAGTAIL_CONTENT_LANGUAGES in a dictionary for easy lookups by key.
255 """
256 content_languages = getattr(settings, "WAGTAIL_CONTENT_LANGUAGES", None)
257 languages = dict(settings.LANGUAGES)
258
259 if content_languages is None:
260 # Default to a single language based on LANGUAGE_CODE
261 default_language_code = get_supported_language_variant(settings.LANGUAGE_CODE)
262 try:
263 language_name = languages[default_language_code]
264 except KeyError:
265 # get_supported_language_variant on the 'null' translation backend (used for
266 # USE_I18N=False) returns settings.LANGUAGE_CODE unchanged without accounting for
267 # language variants (en-us versus en), so retry with the generic version.
268 default_language_code = default_language_code.split("-")[0]
269 try:
270 language_name = languages[default_language_code]
271 except KeyError:
272 # Can't extract a display name, so fall back on displaying LANGUAGE_CODE instead
273 language_name = settings.LANGUAGE_CODE
274 # Also need to tweak the languages dict to get around the check below
275 languages[default_language_code] = settings.LANGUAGE_CODE
276
277 content_languages = [
278 (default_language_code, language_name),
279 ]
280
281 # Check that each content language is in LANGUAGES
282 for language_code, name in content_languages:
283 if language_code not in languages:
284 raise ImproperlyConfigured(
285 "The language {} is specified in WAGTAIL_CONTENT_LANGUAGES but not LANGUAGES. "
286 "WAGTAIL_CONTENT_LANGUAGES must be a subset of LANGUAGES.".format(
287 language_code
288 )
289 )
290
291 return dict(content_languages)
292
293
294 @functools.lru_cache(maxsize=1000)
295 def get_supported_content_language_variant(lang_code, strict=False):
296 """
297 Return the language code that's listed in supported languages, possibly
298 selecting a more generic variant. Raise LookupError if nothing is found.
299 If `strict` is False (the default), look for a country-specific variant
300 when neither the language code nor its generic variant is found.
301     lru_cache should have a maxsize to prevent memory exhaustion attacks,
302 as the provided language codes are taken from the HTTP request. See also
303 <https://www.djangoproject.com/weblog/2007/oct/26/security-fix/>.
304
305     This is equivalent to Django's `django.utils.translation.get_supported_language_variant`
306 but reads the `WAGTAIL_CONTENT_LANGUAGES` setting instead.
307 """
308 if lang_code:
309 # If 'fr-ca' is not supported, try special fallback or language-only 'fr'.
310 possible_lang_codes = [lang_code]
311 try:
312 possible_lang_codes.extend(LANG_INFO[lang_code]["fallback"])
313 except KeyError:
314 pass
315 generic_lang_code = lang_code.split("-")[0]
316 possible_lang_codes.append(generic_lang_code)
317 supported_lang_codes = get_content_languages()
318
319 for code in possible_lang_codes:
320 if code in supported_lang_codes and check_for_language(code):
321 return code
322 if not strict:
323 # if fr-fr is not supported, try fr-ca.
324 for supported_code in supported_lang_codes:
325 if supported_code.startswith(generic_lang_code + "-"):
326 return supported_code
327 raise LookupError(lang_code)
328
329
330 @functools.lru_cache()
331 def get_locales_display_names() -> dict:
332 """
333 Cache of the locale id -> locale display name mapping
334 """
335 from wagtail.models import Locale # inlined to avoid circular imports
336
337 locales_map = {
338 locale.pk: locale.get_display_name() for locale in Locale.objects.all()
339 }
340 return locales_map
341
342
343 @receiver(setting_changed)
344 def reset_cache(**kwargs):
345 """
346 Clear cache when global WAGTAIL_CONTENT_LANGUAGES/LANGUAGES/LANGUAGE_CODE settings are changed
347 """
348 if kwargs["setting"] in ("WAGTAIL_CONTENT_LANGUAGES", "LANGUAGES", "LANGUAGE_CODE"):
349 get_content_languages.cache_clear()
350 get_supported_content_language_variant.cache_clear()
351
352
353 def multigetattr(item, accessor):
354 """
355 Like getattr, but accepts a dotted path as the accessor to be followed to any depth.
356 At each step, the lookup on the object can be a dictionary lookup (foo['bar']) or an attribute
357     lookup (foo.bar), and if it results in a callable, it will be called (provided we can do so with
358 no arguments, and it does not have an 'alters_data' property).
359
360 Modelled on the variable resolution logic in Django templates:
361 https://github.com/django/django/blob/f331eba6d576752dd79c4b37c41d981daa537fe6/django/template/base.py#L838
362 """
363
364 current = item
365
366 for bit in accessor.split("."):
367 try: # dictionary lookup
368 current = current[bit]
369 # ValueError/IndexError are for numpy.array lookup on
370 # numpy < 1.9 and 1.9+ respectively
371 except (TypeError, AttributeError, KeyError, ValueError, IndexError):
372 try: # attribute lookup
373 current = getattr(current, bit)
374 except (TypeError, AttributeError):
375 # Reraise if the exception was raised by a @property
376 if bit in dir(current):
377 raise
378 try: # list-index lookup
379 current = current[int(bit)]
380 except (
381 IndexError, # list index out of range
382 ValueError, # invalid literal for int()
383 KeyError, # current is a dict without `int(bit)` key
384 TypeError, # unsubscriptable object
385 ):
386 raise AttributeError(
387 "Failed lookup for key [%s] in %r" % (bit, current)
388 )
389
390 if callable(current):
391 if getattr(current, "alters_data", False):
392 raise SuspiciousOperation(
393 "Cannot call %r from multigetattr" % (current,)
394 )
395
396 # if calling without arguments is invalid, let the exception bubble up
397 current = current()
398
399 return current
400
401
402 def get_dummy_request(*, path: str = "/", site: "Site" = None) -> HttpRequest:
403 """
404 Return a simple ``HttpRequest`` instance that can be passed to
405 ``Page.get_url()`` and other methods to benefit from improved performance
406 when no real ``HttpRequest`` instance is available.
407
408 If ``site`` is provided, the ``HttpRequest`` is made to look like it came
409 from that Wagtail ``Site``.
410 """
411 server_port = 80
412 if site:
413 server_name = site.hostname
414 server_port = site.port
415 elif settings.ALLOWED_HOSTS == ["*"]:
416 server_name = "example.com"
417 else:
418 server_name = settings.ALLOWED_HOSTS[0]
419
420 # `SERVER_PORT` doesn't work when passed to the constructor
421 return RequestFactory(SERVER_NAME=server_name).get(path, SERVER_PORT=server_port)
422
423
424 class BatchProcessor:
425 """
426 A class to help with processing of an unknown (and potentially very
427 high) number of objects.
428
429 Just set ``max_size`` to the maximum number of instances you want
430 to be held in memory at any one time, and batches will be sent to the
431 ``process()`` method as that number is reached, without you having to
432 invoke ``process()`` regularly yourself. Just remember to invoke
433 ``process()`` when you're done adding items, otherwise the final batch
434 of objects will not be processed.
435 """
436
437 def __init__(self, max_size: int):
438 self.max_size = max_size
439 self.items = []
440 self.added_count = 0
441
442 def __len__(self):
443 return self.added_count
444
445 def add(self, item: Any) -> None:
446 self.items.append(item)
447 self.added_count += 1
448 if self.max_size and len(self.items) == self.max_size:
449 self.process()
450
451 def extend(self, iterable: Iterable[Any]) -> None:
452 for item in iterable:
453 self.add(item)
454
455 def process(self):
456 self.pre_process()
457 self._do_processing()
458 self.post_process()
459 self.items.clear()
460
461 def pre_process(self):
462 """
463 A hook to allow subclasses to do any pre-processing of the data
464         before the current batch of items is processed.
465 """
466 pass
467
468 def _do_processing(self):
469 """
470 To be overridden by subclasses to do whatever it is
471 that needs to be done to the items in ``self.items``.
472 """
473 raise NotImplementedError
474
475 def post_process(self):
476 """
477 A hook to allow subclasses to do any post-processing
478         after the current batch has been processed, and before
479         ``self.items`` is cleared.
480 """
481 pass
482
483
484 class BatchCreator(BatchProcessor):
485 """
486 A class to help with bulk creation of an unknown (and potentially very
487 high) number of model instances.
488
489 Just set ``max_size`` to the maximum number of instances you want
490 to be held in memory at any one time, and batches of objects will
491 be created as that number is reached, without you having to invoke
492 the ``process()`` method regularly yourself. Just remember to
493 invoke ``process()`` when you're done adding items, to ensure
494 that the final batch items is saved.
495
496 ``BatchSaver`` is migration-friendly! Just use the ``model``
497 keyword argument when initializing to override the hardcoded model
498 class with the version from your migration.
499 """
500
501 model: ModelBase = None
502
503 def __init__(
504 self, max_size: int, *, model: ModelBase = None, ignore_conflicts=False
505 ):
506 super().__init__(max_size)
507 self.ignore_conflicts = ignore_conflicts
508 self.created_count = 0
509 if model is not None:
510 self.model = model
511
512 def initialize_instance(self, kwargs):
513 return self.model(**kwargs)
514
515 def add(self, *, instance: Model = None, **kwargs) -> None:
516 if instance is None:
517 instance = self.initialize_instance(kwargs)
518 self.items.append(instance)
519 self.added_count += 1
520 if self.max_size and len(self.items) == self.max_size:
521 self.process()
522
523 def extend(self, iterable: Iterable[Union[Model, Dict[str, Any]]]) -> None:
524 for value in iterable:
525 if isinstance(value, self.model):
526 self.add(instance=value)
527 else:
528 self.add(**value)
529
530 def _do_processing(self):
531 """
532 Use bulk_create() to save ``self.items``.
533 """
534 if not self.items:
535 return None
536 self.created_count += len(
537 self.model.objects.bulk_create(
538 self.items, ignore_conflicts=self.ignore_conflicts
539 )
540 )
541
542 def get_summary(self):
543 opts = self.model._meta
544 return f"{self.created_count}/{self.added_count} {opts.verbose_name_plural} were created successfully."
545
[end of wagtail/coreutils.py]
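To make the intended workflow of the batch helpers in `wagtail/coreutils.py` above concrete, here is a minimal, hedged usage sketch; the `Band` model and its `name` field are hypothetical, and only calls defined in the file above are used:
```python
# Usage sketch for BatchCreator (defined in wagtail/coreutils.py above).
# `Band` is a hypothetical model used purely for illustration.
from wagtail.coreutils import BatchCreator

from myapp.models import Band  # hypothetical

creator = BatchCreator(max_size=500, model=Band, ignore_conflicts=True)
for i in range(10_000):
    # Instances are buffered and flushed via bulk_create() every 500 additions.
    creator.add(name=f"Band {i}")
# Flush whatever remains in the final, partial batch, then report.
creator.process()
print(creator.get_summary())
```
Note that the final `process()` call is required; without it the last partial batch would never be written, as the class docstring above points out.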
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
37192f847bde294237f2c920d8f8f3b32e5e10d9
|
🎛️ Migrate site switcher to use Stimulus approach `ActionController`
> ℹ️ **Part of the [Stimulus 🎛️ RFC 78](https://github.com/wagtail/rfcs/pull/78)**
### Is your proposal related to a problem?
There is a custom JavaScript implementation that adds behaviour to the select drop-down so that the location (URL) is updated when it changes.
This approach should be very close to what we are already doing with the `SubmitController`, so let's do a bit of clean-up to avoid too much ad-hoc JS.
### Describe the solution you'd like
* Update the implementation of `client/src/controllers/SubmitController.ts` to allow for a new [Stimulus Value](https://stimulus.hotwired.dev/reference/values) called `updateAction`.
* When in use, the existing method `submit` will update the form's action from the source element's value before submitting. `form.setAttribute('action', this.element.value); // example`
* Essentially we want to use the form `get` submit to do the location change, instead of updating the `window.location.url`.
* However, we need to ensure the right page is loaded, hence we need to revise `action` dynamically when the user selects the option.
* Remove the jQuery implementation completely [`wagtail/contrib/settings/static_src/wagtailsettings/js/site-switcher.js`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/static_src/wagtailsettings/js/site-switcher.js)
* Update the select field to have the suitable data attributes [`wagtail/contrib/settings/forms.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/forms.py#L23) (see the sketch after the example HTML below).
* Unit tests in JavaScript **must** be included with a PR.
* Validate that the 'current' option in the select drop-down for the site switcher still functions, so that selecting it will not do anything. See wagtail/contrib/settings/forms.py (Update: this is not a huge problem, as the browser will not trigger a `change` event if the value has not changed).
#### Example HTML
```html
<form method="get" id="settings-site-switch" novalidate>
<select
name="site-switcher"
data-controller="w-submit"
data-action="change->w-submit#submit"
data-w-submit-update-action-value="true"
>
<option value="/path/to/current-site" selected>current.com</option>
<option value="/path/to/other-site">other.com</option>
</select>
</form>
```
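As a rough illustration of the `forms.py` change described above (not the final implementation), the select widget could carry the same data attributes as the example HTML; the controller identifier and value name are copied from that example and may change:
```python
# Hypothetical sketch only: the widget attrs mirror the example HTML above,
# and the controller/value names are assumptions rather than the final API.
from django import forms


class SiteSwitchForm(forms.Form):
    site = forms.ChoiceField(
        choices=[],
        widget=forms.Select(
            attrs={
                "data-controller": "w-submit",
                "data-action": "change->w-submit#submit",
                "data-w-submit-update-action-value": "true",
            }
        ),
    )
```
Rendering this field inside the existing `edit.html` form should produce markup equivalent to the example above, so the template itself should not need changes.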
### Additional notes
* Remember that Site Settings is not available in the bakery demo by default; you will need to add this locally to validate the behaviour: https://docs.wagtail.org/en/stable/reference/contrib/settings.html
* `AutoFieldController` was added in this PR https://github.com/wagtail/wagtail/pull/9337 and then renamed to `SubmitController` in https://github.com/wagtail/wagtail/pull/10098
* The actual `form` HTML is located in [`wagtail/contrib/settings/templates/wagtailsettings/edit.html`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/settings/templates/wagtailsettings/edit.html) - this HTML should not need changes, but it is good to note
|
Flagging as a good first issue; however, whoever does this will need to read up on Stimulus to get an understanding of the code.
@Lovelyfin00 - do you mind reviewing this issue and seeing if we should add any links/notes to make it easier to pick up? This may be a great chance for you to assist someone contributing if anyone picks this up. This will also be a good chance for you to have a think about how we document Stimulus for developers who make contributions to Wagtail.
|
2023-02-07T03:44:08Z
|
<patch>
diff --git a/wagtail/contrib/settings/forms.py b/wagtail/contrib/settings/forms.py
--- a/wagtail/contrib/settings/forms.py
+++ b/wagtail/contrib/settings/forms.py
@@ -2,20 +2,19 @@
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
-from wagtail.admin.staticfiles import versioned_static
from wagtail.models import Site
class SiteSwitchForm(forms.Form):
- site = forms.ChoiceField(choices=[])
-
- @property
- def media(self):
- return forms.Media(
- js=[
- versioned_static("wagtailsettings/js/site-switcher.js"),
- ]
- )
+ site = forms.ChoiceField(
+ choices=[],
+ widget=forms.Select(
+ attrs={
+ "data-controller": "w-action",
+ "data-action": "change->w-action#redirect",
+ }
+ ),
+ )
def __init__(self, current_site, model, **kwargs):
initial_data = {"site": self.get_change_url(current_site, model)}
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-6170
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider porting graph-inspection tests to rust
There are a series of graph-introspection tests in:
- `tests/python/pants_test/engine/test_scheduler.py`
- `tests/python/pants_test/engine/test_fs.py`
- `tests/python/pants_test/engine/test_graph.py`
that are half integration test, and half unit test (in that they inspect the internals of the execution graph to validate how a result was achieved). All tests that inspect graph internals have been skipped.
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a build system for software projects in a variety of languages.
4 It works particularly well for a source code repository that contains
5 many distinct projects.
6
7 Friendly documentation: http://www.pantsbuild.org/
8
9 We release to [PyPI](https://pypi.python.org/pypi)
10 [](https://pypi.python.org/pypi/pantsbuild.pants)
11 [](https://pypi.python.org/pypi/pantsbuild.pants)
12
13 We use [Travis CI](https://travis-ci.org) to verify the build
14 [](https://travis-ci.org/pantsbuild/pants/branches).
15
16 We use [Coveralls](https://coveralls.io) to monitor test coverage
17 [](https://coveralls.io/r/pantsbuild/pants).
18
19 # Requirements
20
21 At a minimum, pants requires the following to run properly:
22
23 * Linux or Mac OS X
24 * Python 2.7.x (the latest stable version of 2.7 is recommended)
25 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi
26 library and headers (to compile and link modules that use CFFI to access native code).
27 * Internet access (so that pants can fully bootstrap itself)
28
29 Additionally, if you use the jvm backend to work with java or scala code (installed by default):
30
31 * OpenJDK or Oracle JDK version 7 or greater
32
33
[end of README.md]
[start of contrib/go/src/python/pants/contrib/go/tasks/go_task.py]
1 # coding=utf-8
2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 import json
8 import re
9 from builtins import object, str
10 from collections import namedtuple
11
12 from pants.base.workunit import WorkUnit, WorkUnitLabel
13 from pants.task.task import Task
14 from pants.util.memo import memoized_method, memoized_property
15 from pants.util.process_handler import subprocess
16 from twitter.common.collections.orderedset import OrderedSet
17
18 from pants.contrib.go.subsystems.go_distribution import GoDistribution
19 from pants.contrib.go.targets.go_binary import GoBinary
20 from pants.contrib.go.targets.go_library import GoLibrary
21 from pants.contrib.go.targets.go_local_source import GoLocalSource
22 from pants.contrib.go.targets.go_remote_library import GoRemoteLibrary
23 from pants.contrib.go.targets.go_target import GoTarget
24
25
26 class GoTask(Task):
27
28 @classmethod
29 def subsystem_dependencies(cls):
30 return super(GoTask, cls).subsystem_dependencies() + (GoDistribution.scoped(cls),)
31
32 @staticmethod
33 def is_binary(target):
34 return isinstance(target, GoBinary)
35
36 @staticmethod
37 def is_local_lib(target):
38 return isinstance(target, GoLibrary)
39
40 @staticmethod
41 def is_remote_lib(target):
42 return isinstance(target, GoRemoteLibrary)
43
44 @staticmethod
45 def is_local_src(target):
46 return isinstance(target, GoLocalSource)
47
48 @staticmethod
49 def is_go(target):
50 return isinstance(target, GoTarget)
51
52 @memoized_property
53 def go_dist(self):
54 return GoDistribution.scoped_instance(self)
55
56 @memoized_property
57 def import_oracle(self):
58 """Return an import oracle that can help look up and categorize imports.
59
60 :rtype: :class:`ImportOracle`
61 """
62 return ImportOracle(go_dist=self.go_dist, workunit_factory=self.context.new_workunit)
63
64 @memoized_property
65 def goos_goarch(self):
66 """Return concatenated $GOOS and $GOARCH environment variables, separated by an underscore.
67
68 Useful for locating where the Go compiler is placing binaries ("$GOPATH/pkg/$GOOS_$GOARCH").
69
70 :rtype: string
71 """
72 return '{goos}_{goarch}'.format(goos=self._lookup_go_env_var('GOOS'),
73 goarch=self._lookup_go_env_var('GOARCH'))
74
75 def _lookup_go_env_var(self, var):
76 return self.go_dist.create_go_cmd('env', args=[var]).check_output().strip()
77
78
79 class ImportOracle(object):
80 """Answers questions about Go imports."""
81
82 class ListDepsError(Exception):
83 """Indicates a problem listing import paths for one or more packages."""
84
85 def __init__(self, go_dist, workunit_factory):
86 self._go_dist = go_dist
87 self._workunit_factory = workunit_factory
88
89 @memoized_property
90 def go_stdlib(self):
91 """Return the set of all Go standard library import paths.
92
93 :rtype: frozenset of string
94 """
95 out = self._go_dist.create_go_cmd('list', args=['std']).check_output()
96 return frozenset(out.decode('utf-8').strip().split())
97
98 # This simple regex mirrors the behavior of the relevant go code in practice (see
99 # repoRootForImportDynamic and surrounding code in
100 # https://github.com/golang/go/blob/7bc40ffb05d8813bf9b41a331b45d37216f9e747/src/cmd/go/vcs.go).
101 _remote_import_re = re.compile('[^.]+(?:\.[^.]+)+\/')
102
103 def is_remote_import(self, import_path):
104 """Whether the specified import_path denotes a remote import."""
105 return self._remote_import_re.match(import_path) is not None
106
107 def is_go_internal_import(self, import_path):
108 """Return `True` if the given import path will be satisfied directly by the Go distribution.
109
110 For example, both the go standard library ("archive/tar", "bufio", "fmt", etc.) and "C" imports
111 are satisfiable by a Go distribution via linking of internal Go code and external c standard
112 library code respectively.
113
114 :rtype: bool
115 """
116     # The "C" package is a pseudo-package that links through to the C stdlib, see:
117 # http://blog.golang.org/c-go-cgo
118 return import_path == 'C' or import_path in self.go_stdlib
119
120 class ImportListing(namedtuple('ImportListing', ['pkg_name',
121 'imports',
122 'test_imports',
123 'x_test_imports'])):
124 """Represents all the imports of a given package."""
125
126 @property
127 def all_imports(self):
128 """Return all imports for this package, including any test imports.
129
130 :rtype: list of string
131 """
132 return list(OrderedSet(self.imports + self.test_imports + self.x_test_imports))
133
134 @memoized_method
135 def list_imports(self, pkg, gopath=None):
136 """Return a listing of the dependencies of the given package.
137
138 :param string pkg: The package whose files to list all dependencies of.
139 :param string gopath: An optional $GOPATH which points to a Go workspace containing `pkg`.
140 :returns: The import listing for `pkg` that represents all its dependencies.
141 :rtype: :class:`ImportOracle.ImportListing`
142 :raises: :class:`ImportOracle.ListDepsError` if there was a problem listing the dependencies
143 of `pkg`.
144 """
145 go_cmd = self._go_dist.create_go_cmd('list', args=['-json', pkg], gopath=gopath)
146 with self._workunit_factory('list {}'.format(pkg), cmd=str(go_cmd),
147 labels=[WorkUnitLabel.TOOL]) as workunit:
148       # TODO(John Sirois): It would be nice to be able to tee the stdout to the workunit so we have
149 # a capture of the json available for inspection in the server console.
150 process = go_cmd.spawn(stdout=subprocess.PIPE, stderr=workunit.output('stderr'))
151 out, _ = process.communicate()
152 returncode = process.returncode
153 workunit.set_outcome(WorkUnit.SUCCESS if returncode == 0 else WorkUnit.FAILURE)
154 if returncode != 0:
155 raise self.ListDepsError('Problem listing imports for {}: {} failed with exit code {}'
156 .format(pkg, go_cmd, returncode))
157 data = json.loads(out.decode('utf-8'))
158
159 # XTestImports are for black box tests. These test files live inside the package dir but
160 # declare a different package and thus can only access the public members of the package's
161 # production code. This style of test necessarily means the test file will import the main
162 # package. For pants, this would lead to a cyclic self-dependency, so we omit the main
163 # package as implicitly included as its own dependency.
164 x_test_imports = [i for i in data.get('XTestImports', []) if i != pkg]
165
166 return self.ImportListing(pkg_name=data.get('Name'),
167 imports=data.get('Imports', []),
168 test_imports=data.get('TestImports', []),
169 x_test_imports=x_test_imports)
170
[end of contrib/go/src/python/pants/contrib/go/tasks/go_task.py]
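As a small, illustrative aside on the remote-import heuristic in `ImportOracle` above, the regular expression can be exercised on its own; the sample import paths below are arbitrary examples rather than values taken from any build:
```python
# Standalone check of the remote-import heuristic used by ImportOracle above.
# The pattern is the same as _remote_import_re, written here as a raw string;
# the sample import paths are arbitrary examples.
import re

remote_import_re = re.compile(r'[^.]+(?:\.[^.]+)+\/')

# A dotted host followed by a path segment looks like a remote import...
assert remote_import_re.match('github.com/bitly/go-simplejson') is not None
# ...whereas standard-library-style paths do not.
assert remote_import_re.match('fmt') is None
assert remote_import_re.match('archive/tar') is None
```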
[start of src/python/pants/backend/project_info/tasks/depmap.py]
1 # coding=utf-8
2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 from builtins import object
8
9 from pants.base.exceptions import TaskError
10 from pants.java.jar.jar_dependency import JarDependency
11 from pants.task.console_task import ConsoleTask
12
13
14 class Depmap(ConsoleTask):
15 """Depict the target's dependencies.
16
17 Generates either a textual dependency tree or a graphviz digraph dot file for the dependency
18 set of a target.
19 """
20 class SourceRootTypes(object):
21 """Defines SourceRoot Types Constants"""
22 SOURCE = 'SOURCE' # Source Target
23 TEST = 'TEST' # Test Target
24 SOURCE_GENERATED = 'SOURCE_GENERATED' # Code Gen Source Targets
25 EXCLUDED = 'EXCLUDED' # Excluded Target
26 RESOURCE = 'RESOURCE' # Resource belonging to Source Target
27 TEST_RESOURCE = 'TEST_RESOURCE' # Resource belonging to Test Target
28
29 @classmethod
30 def register_options(cls, register):
31 super(Depmap, cls).register_options(register)
32 register('--internal-only', type=bool,
33 help='Specifies that only internal dependencies should be included in the graph '
34 'output (no external jars).')
35 register('--external-only', type=bool,
36 help='Specifies that only external dependencies should be included in the graph '
37 'output (only external jars).')
38 register('--minimal', type=bool,
39 help='For a textual dependency tree, only prints a dependency the 1st '
40 'time it is encountered. This is a no-op for --graph.')
41 register('--graph', type=bool,
42 help='Specifies the internal dependency graph should be output in the dot digraph '
43 'format.')
44 register('--tree', type=bool,
45 help='For text output, show an ascii tree to help visually line up indentions.')
46 register('--show-types', type=bool,
47 help='Show types of objects in depmap --graph.')
48 register('--separator', default='-',
49 help='Specifies the separator to use between the org/name/rev components of a '
50 'dependency\'s fully qualified name.')
51
52 def __init__(self, *args, **kwargs):
53 super(Depmap, self).__init__(*args, **kwargs)
54
55 self.is_internal_only = self.get_options().internal_only
56 self.is_external_only = self.get_options().external_only
57
58 if self.is_internal_only and self.is_external_only:
59 raise TaskError('At most one of --internal-only or --external-only can be selected.')
60
61 self.is_minimal = self.get_options().minimal
62 self.is_graph = self.get_options().graph
63 self.should_tree = self.get_options().tree
64 self.show_types = self.get_options().show_types
65 self.separator = self.get_options().separator
66 self.target_aliases_map = None
67
68 def console_output(self, targets):
69 if len(self.context.target_roots) == 0:
70 raise TaskError("One or more target addresses are required.")
71
72 for target in self.context.target_roots:
73 out = self._output_digraph(target) if self.is_graph else self._output_dependency_tree(target)
74 for line in out:
75 yield line
76
77 def _dep_id(self, dependency):
78 """Returns a tuple of dependency_id, is_internal_dep."""
79 params = dict(sep=self.separator)
80
81 if isinstance(dependency, JarDependency):
82 # TODO(kwilson): handle 'classifier' and 'type'.
83 params.update(org=dependency.org, name=dependency.name, rev=dependency.rev)
84 is_internal_dep = False
85 else:
86 params.update(org='internal', name=dependency.id)
87 is_internal_dep = True
88
89 return ('{org}{sep}{name}{sep}{rev}' if params.get('rev') else
90 '{org}{sep}{name}').format(**params), is_internal_dep
91
92 def _enumerate_visible_deps(self, dep, predicate):
93 # We present the dependencies out of classpath order and instead in alphabetized internal deps,
94 # then alphabetized external deps order for ease in scanning output.
95 dependencies = sorted(x for x in getattr(dep, 'dependencies', []))
96 if not self.is_internal_only:
97 dependencies.extend(sorted((x for x in getattr(dep, 'jar_dependencies', [])),
98 key=lambda x: (x.org, x.name, x.rev, x.classifier)))
99 for inner_dep in dependencies:
100 dep_id, internal = self._dep_id(inner_dep)
101 if predicate(internal):
102 yield inner_dep
103
104 def output_candidate(self, internal):
105 return ((not self.is_internal_only and not self.is_external_only)
106 or (self.is_internal_only and internal)
107 or (self.is_external_only and not internal))
108
109 def _output_dependency_tree(self, target):
110 """Plain-text depmap output handler."""
111
112 def make_line(dep, indent, is_dupe=False):
113 indent_join, indent_chars = ('--', ' |') if self.should_tree else ('', ' ')
114 dupe_char = '*' if is_dupe else ''
115 return ''.join((indent * indent_chars, indent_join, dupe_char, dep))
116
117 def output_deps(dep, indent, outputted, stack):
118 dep_id, internal = self._dep_id(dep)
119
120 if self.is_minimal and dep_id in outputted:
121 return
122
123 if self.output_candidate(internal):
124 yield make_line(dep_id,
125 0 if self.is_external_only else indent,
126 is_dupe=dep_id in outputted)
127 outputted.add(dep_id)
128
129 for sub_dep in self._enumerate_visible_deps(dep, self.output_candidate):
130 for item in output_deps(sub_dep, indent + 1, outputted, stack + [(dep_id, indent)]):
131 yield item
132
133 for item in output_deps(target, 0, set(), []):
134 yield item
135
136 def _output_digraph(self, target):
137 """Graphviz format depmap output handler."""
138 color_by_type = {}
139
140 def maybe_add_type(dep, dep_id):
141 """Add a class type to a dependency id if --show-types is passed."""
142 return dep_id if not self.show_types else '\\n'.join((dep_id, dep.__class__.__name__))
143
144 def make_node(dep, dep_id, internal):
145 line_fmt = ' "{id}" [style=filled, fillcolor={color}{internal}];'
146 int_shape = ', shape=ellipse' if not internal else ''
147
148 dep_class = dep.__class__.__name__
149 if dep_class not in color_by_type:
150 color_by_type[dep_class] = len(color_by_type.keys()) + 1
151
152 return line_fmt.format(id=dep_id, internal=int_shape, color=color_by_type[dep_class])
153
154 def make_edge(from_dep_id, to_dep_id, internal):
155 style = ' [style=dashed]' if not internal else ''
156 return ' "{}" -> "{}"{};'.format(from_dep_id, to_dep_id, style)
157
158 def output_deps(dep, parent, parent_id, outputted):
159 dep_id, internal = self._dep_id(dep)
160
161 if dep_id not in outputted:
162 yield make_node(dep, maybe_add_type(dep, dep_id), internal)
163 outputted.add(dep_id)
164
165 for sub_dep in self._enumerate_visible_deps(dep, self.output_candidate):
166 for item in output_deps(sub_dep, dep, dep_id, outputted):
167 yield item
168
169 if parent:
170 edge_id = (parent_id, dep_id)
171 if edge_id not in outputted:
172 yield make_edge(maybe_add_type(parent, parent_id), maybe_add_type(dep, dep_id), internal)
173 outputted.add(edge_id)
174
175 yield 'digraph "{}" {{'.format(target.id)
176 yield ' node [shape=rectangle, colorscheme=set312;];'
177 yield ' rankdir=LR;'
178 for line in output_deps(target, parent=None, parent_id=None, outputted=set()):
179 yield line
180 yield '}'
181
[end of src/python/pants/backend/project_info/tasks/depmap.py]
[start of src/python/pants/backend/python/tasks/pytest_run.py]
1 # coding=utf-8
2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 import configparser
8 import itertools
9 import json
10 import os
11 import shutil
12 import time
13 import traceback
14 import uuid
15 from builtins import open, str
16 from collections import OrderedDict
17 from contextlib import contextmanager
18 from io import StringIO
19 from textwrap import dedent
20
21 from pants.backend.python.targets.python_tests import PythonTests
22 from pants.backend.python.tasks.gather_sources import GatherSources
23 from pants.backend.python.tasks.pytest_prep import PytestPrep
24 from pants.base.build_environment import get_buildroot
25 from pants.base.exceptions import ErrorWhileTesting, TaskError
26 from pants.base.fingerprint_strategy import DefaultFingerprintStrategy
27 from pants.base.hash_utils import Sharder
28 from pants.base.workunit import WorkUnitLabel
29 from pants.build_graph.target import Target
30 from pants.task.task import Task
31 from pants.task.testrunner_task_mixin import PartitionedTestRunnerTaskMixin, TestResult
32 from pants.util.contextutil import environment_as, pushd, temporary_dir, temporary_file
33 from pants.util.dirutil import mergetree, safe_mkdir, safe_mkdir_for
34 from pants.util.memo import memoized_method, memoized_property
35 from pants.util.objects import datatype
36 from pants.util.process_handler import SubprocessProcessHandler
37 from pants.util.strutil import safe_shlex_split
38 from pants.util.xml_parser import XmlParser
39
40
41 class _Workdirs(datatype(['root_dir', 'partition'])):
42 @classmethod
43 def for_partition(cls, work_dir, partition):
44 root_dir = os.path.join(work_dir, Target.maybe_readable_identify(partition))
45 safe_mkdir(root_dir, clean=False)
46 return cls(root_dir=root_dir, partition=partition)
47
48 @memoized_method
49 def target_set_id(self, *targets):
50 return Target.maybe_readable_identify(targets or self.partition)
51
52 @memoized_method
53 def junitxml_path(self, *targets):
54 xml_path = os.path.join(self.root_dir, 'junitxml',
55 'TEST-{}.xml'.format(self.target_set_id(*targets)))
56 safe_mkdir_for(xml_path)
57 return xml_path
58
59 @memoized_property
60 def coverage_path(self):
61 coverage_workdir = os.path.join(self.root_dir, 'coverage')
62 safe_mkdir(coverage_workdir)
63 return coverage_workdir
64
65 def files(self):
66 def files_iter():
67 for dir_path, _, file_names in os.walk(self.root_dir):
68 for filename in file_names:
69 yield os.path.join(dir_path, filename)
70 return list(files_iter())
71
72
73 class PytestResult(TestResult):
74 _SUCCESS_EXIT_CODES = (
75 0,
76
77 # This is returned by pytest when no tests are collected (EXIT_NOTESTSCOLLECTED).
78 # We already short-circuit test runs with no test _targets_ to return 0 emulated exit codes and
79 # we should do the same for cases when there are test targets but tests themselves have been
80 # de-selected out of band via `py.test -k`.
81 5
82 )
83
84 @classmethod
85 def _map_exit_code(cls, value):
86 return 0 if value in cls._SUCCESS_EXIT_CODES else value
87
88
89 class PytestRun(PartitionedTestRunnerTaskMixin, Task):
90
91 @classmethod
92 def implementation_version(cls):
93 return super(PytestRun, cls).implementation_version() + [('PytestRun', 3)]
94
95 @classmethod
96 def register_options(cls, register):
97 super(PytestRun, cls).register_options(register)
98
99 # NB: We always produce junit xml privately, and if this option is specified, we then copy
100 # it to the user-specified directory, post any interaction with the cache to retrieve the
101 # privately generated and cached xml files. As such, this option is not part of the
102 # fingerprint.
103 register('--junit-xml-dir', metavar='<DIR>',
104 help='Specifying a directory causes junit xml results files to be emitted under '
105 'that dir for each test run.')
106
107 register('--profile', metavar='<FILE>', fingerprint=True,
108 help="Specifying a file path causes tests to be profiled with the profiling data "
109 "emitted to that file (prefix). Note that tests may run in a different cwd, so "
110 "it's best to use an absolute path to make it easy to find the subprocess "
111 "profiles later.")
112 register('--options', type=list, fingerprint=True,
113 help='Pass these options to pytest. You can also use pass-through args.')
114 register('--coverage', fingerprint=True,
115 help='Emit coverage information for specified packages or directories (absolute or '
116 'relative to the build root). The special value "auto" indicates that Pants '
117 'should attempt to deduce which packages to emit coverage for.')
118 # For a given --coverage specification (which is fingerprinted), we will always copy the
119 # associated generated and cached --coverage files to this directory post any interaction with
120 # the cache to retrieve the coverage files. As such, this option is not part of the fingerprint.
121 register('--coverage-output-dir', metavar='<DIR>', default=None,
122 help='Directory to emit coverage reports to. '
123 'If not specified, a default within dist is used.')
124
125 register('--test-shard', fingerprint=True,
126 help='Subset of tests to run, in the form M/N, 0 <= M < N. For example, 1/3 means '
127 'run tests number 2, 5, 8, 11, ...')
128
129 register('--extra-pythonpath', type=list, fingerprint=True, advanced=True,
130 help='Add these entries to the PYTHONPATH when running the tests. '
131 'Useful for attaching to debuggers in test code.')
132
133 @classmethod
134 def supports_passthru_args(cls):
135 return True
136
137 @classmethod
138 def prepare(cls, options, round_manager):
139 super(PytestRun, cls).prepare(options, round_manager)
140 round_manager.require_data(PytestPrep.PytestBinary)
141
142 def _test_target_filter(self):
143 def target_filter(target):
144 return isinstance(target, PythonTests)
145
146 return target_filter
147
148 def _validate_target(self, target):
149 pass
150
151 class InvalidShardSpecification(TaskError):
152 """Indicates an invalid `--test-shard` option."""
153
154 DEFAULT_COVERAGE_CONFIG = dedent("""
155 [run]
156 branch = True
157 timid = False
158
159 [report]
160 exclude_lines =
161 def __repr__
162 raise NotImplementedError
163 """)
164
165 @staticmethod
166 def _format_string_list(values):
167 # The coverage rc ini files accept "Multi-valued strings" - ie: lists of strings - denoted by
168 # indenting values on multiple lines like so:
169 # [section]
170 # name =
171 # value1
172 # value2
173 #
174 # See http://nedbatchelder.com/code/coverage/config.html for details.
175 return '\n\t{values}'.format(values='\n\t'.join(values))
176
177 @property
178 def _debug(self):
179 return self.get_options().level == 'debug'
180
181 @staticmethod
182 def _ensure_section(cp, section):
183 if not cp.has_section(section):
184 cp.add_section(section)
185
186 # N.B.: Extracted for tests.
187 @classmethod
188 def _add_plugin_config(cls, cp, src_chroot_path, src_to_target_base):
189 # We use a coverage plugin to map PEX chroot source paths back to their original repo paths for
190 # report output.
191 plugin_module = PytestPrep.PytestBinary.coverage_plugin_module()
192 cls._ensure_section(cp, 'run')
193 cp.set('run', 'plugins', plugin_module)
194
195 cp.add_section(plugin_module)
196 cp.set(plugin_module, 'buildroot', get_buildroot())
197 cp.set(plugin_module, 'src_chroot_path', src_chroot_path)
198 cp.set(plugin_module, 'src_to_target_base', json.dumps(src_to_target_base))
199
200 def _generate_coverage_config(self, src_to_target_base):
201 cp = configparser.ConfigParser()
202 cp.read_file(StringIO(self.DEFAULT_COVERAGE_CONFIG))
203
204 self._add_plugin_config(cp, self._source_chroot_path, src_to_target_base)
205
206 # See the debug options here: http://nedbatchelder.com/code/coverage/cmd.html#cmd-run-debug
207 if self._debug:
208 debug_options = self._format_string_list([
209 # Dumps the coverage config realized values.
210 'config',
211 # Logs which files are skipped or traced and why.
212 'trace'])
213 self._ensure_section(cp, 'run')
214 cp.set('run', 'debug', debug_options)
215
216 return cp
217
218 @staticmethod
219 def _is_coverage_env_var(name):
220 return (
221 name.startswith('COV_CORE_') # These are from `pytest-cov`.
222 or name.startswith('COVERAGE_') # These are from `coverage`.
223 )
224
225 @contextmanager
226 def _scrub_cov_env_vars(self):
227 cov_env_vars = {k: v for k, v in os.environ.items() if self._is_coverage_env_var(k)}
228 if cov_env_vars:
229 self.context.log.warn('Scrubbing coverage environment variables\n\t{}'
230 .format('\n\t'.join(sorted('{}={}'.format(k, v)
231 for k, v in cov_env_vars.items()))))
232 with environment_as(**{k: None for k in cov_env_vars}):
233 yield
234 else:
235 yield
236
237 @contextmanager
238 def _cov_setup(self, workdirs, coverage_morfs, src_to_target_base):
239 cp = self._generate_coverage_config(src_to_target_base=src_to_target_base)
240 # Note that it's important to put the tmpfile under the workdir, because pytest
241 # uses all arguments that look like paths to compute its rootdir, and we want
242 # it to pick the buildroot.
243 with temporary_file(root_dir=workdirs.root_dir, binary_mode=False) as fp:
244 cp.write(fp)
245 fp.close()
246 coverage_rc = fp.name
247 # Note that --cov-report= with no value turns off terminal reporting, which
248 # we handle separately.
249 args = ['--cov-report=', '--cov-config', coverage_rc]
250 for morf in coverage_morfs:
251 args.extend(['--cov', morf])
252
253 with self._scrub_cov_env_vars():
254 yield args, coverage_rc
255
256 @contextmanager
257 def _maybe_emit_coverage_data(self, workdirs, test_targets, pex):
258 coverage = self.get_options().coverage
259 if coverage is None:
260 yield []
261 return
262
263 pex_src_root = os.path.relpath(self._source_chroot_path, get_buildroot())
264
265 src_to_target_base = {}
266 for target in test_targets:
267 libs = (tgt for tgt in target.closure()
268 if tgt.has_sources('.py') and not isinstance(tgt, PythonTests))
269 for lib in libs:
270 for src in lib.sources_relative_to_source_root():
271 src_to_target_base[src] = lib.target_base
272
273 def ensure_trailing_sep(path):
274 return path if path.endswith(os.path.sep) else path + os.path.sep
275
276 if coverage == 'auto':
277 def compute_coverage_pkgs(tgt):
278 if tgt.coverage:
279 return tgt.coverage
280 else:
281 # This makes the assumption that tests/python/<tgt> will be testing src/python/<tgt>.
282 # Note in particular that this doesn't work for pants' own tests, as those are under
283 # the top level package 'pants_tests', rather than just 'pants'.
284 # TODO(John Sirois): consider failing fast if there is no explicit coverage scheme;
285 # but also consider supporting configuration of a global scheme whether that be parallel
286 # dirs/packages or some arbitrary function that can be registered that takes a test target
287 # and hands back the source packages or paths under test.
288 def package(test_source_path):
289 return os.path.dirname(test_source_path).replace(os.sep, '.')
290
291 def packages():
292 for test_source_path in tgt.sources_relative_to_source_root():
293 pkg = package(test_source_path)
294 if pkg:
295 yield pkg
296
297 return packages()
298
299 coverage_morfs = set(itertools.chain(*[compute_coverage_pkgs(t) for t in test_targets]))
300 else:
301 coverage_morfs = []
302 for morf in coverage.split(','):
303 if os.path.isdir(morf):
304 # The source is a dir, so correct its prefix for the chroot.
305 # E.g. if source is /path/to/src/python/foo/bar or src/python/foo/bar then
306 # rel_source is src/python/foo/bar, and ...
307 rel_source = os.path.relpath(morf, get_buildroot())
308 rel_source = ensure_trailing_sep(rel_source)
309
310 found_target_base = False
311 for target_base in set(src_to_target_base.values()):
312 prefix = ensure_trailing_sep(target_base)
313 if rel_source.startswith(prefix):
314 # ... rel_source will match on prefix=src/python/ ...
315 suffix = rel_source[len(prefix):]
316 # ... suffix will equal foo/bar ...
317 coverage_morfs.append(os.path.join(get_buildroot(), pex_src_root, suffix))
318 found_target_base = True
319 # ... and we end up appending <pex_src_root>/foo/bar to the coverage_sources.
320 break
321 if not found_target_base:
322 self.context.log.warn('Coverage path {} is not in any target. Skipping.'.format(morf))
323 else:
324 # The source is to be interpreted as a package name.
325 coverage_morfs.append(morf)
326
327 with self._cov_setup(workdirs,
328 coverage_morfs=coverage_morfs,
329 src_to_target_base=src_to_target_base) as (args, coverage_rc):
330 try:
331 yield args
332 finally:
333 env = {
334 'PEX_MODULE': 'coverage.cmdline:main'
335 }
336 def coverage_run(subcommand, arguments):
337 return self._pex_run(pex,
338 workunit_name='coverage-{}'.format(subcommand),
339 args=[subcommand] + arguments,
340 env=env)
341
342 # The '.coverage' data file is output in the CWD of the test run above; so we make sure to
343 # look for it there.
344 with self._maybe_run_in_chroot():
345 # On failures or timeouts, the .coverage file won't be written.
346 if not os.path.exists('.coverage'):
347 self.context.log.warn('No .coverage file was found! Skipping coverage reporting.')
348 else:
349 coverage_run('report', ['-i', '--rcfile', coverage_rc])
350
351 coverage_workdir = workdirs.coverage_path
352 coverage_run('html', ['-i', '--rcfile', coverage_rc, '-d', coverage_workdir])
353
354 coverage_xml = os.path.join(coverage_workdir, 'coverage.xml')
355 coverage_run('xml', ['-i', '--rcfile', coverage_rc, '-o', coverage_xml])
356
357 def _get_shard_conftest_content(self):
358 shard_spec = self.get_options().test_shard
359 if shard_spec is None:
360 return ''
361
362 try:
363 sharder = Sharder(shard_spec)
364 if sharder.nshards < 2:
365 return ''
366 return dedent("""
367
368 ### GENERATED BY PANTS ###
369
370 def pytest_report_header(config):
371 return 'shard: {shard} of {nshards} (0-based shard numbering)'
372
373 def pytest_collection_modifyitems(session, config, items):
374 total_count = len(items)
375 removed = 0
376 def is_conftest(itm):
377 return itm.fspath and itm.fspath.basename == 'conftest.py'
378 for i, item in enumerate(list(x for x in items if not is_conftest(x))):
379 if i % {nshards} != {shard}:
380 del items[i - removed]
381 removed += 1
382 reporter = config.pluginmanager.getplugin('terminalreporter')
383 reporter.write_line('Only executing {{}} of {{}} total tests in shard {shard} of '
384 '{nshards}'.format(total_count - removed, total_count),
385 bold=True, invert=True, yellow=True)
386 """.format(shard=sharder.shard, nshards=sharder.nshards))
387 except Sharder.InvalidShardSpec as e:
388 raise self.InvalidShardSpecification(e)
389
390 def _get_conftest_content(self, sources_map, rootdir_comm_path):
391 # A conftest hook to modify the console output, replacing the chroot-based
392 # source paths with the source-tree based ones, which are more readable to the end user.
393 # Note that python stringifies a dict to its source representation, so we can use sources_map
394 # as a format argument directly.
395 #
396 # We'd prefer to hook into pytest_runtest_logstart(), which actually prints the line we
397 # want to fix, but we can't because we won't have access to any of its state, so
398 # we can't actually change what it prints.
399 #
400 # Alternatively, we could hook into pytest_collect_file() and just set a custom nodeid
401 # for the entire pytest run. However this interferes with pytest internals, including
402 # fixture registration, leading to fixtures not running when they should.
403 # It also requires the generated conftest to be in the root of the source tree, which
404 # complicates matters when there's already a user conftest.py there.
405 console_output_conftest_content = dedent("""
406
407 ### GENERATED BY PANTS ###
408
409 import os
410
411 import pytest
412
413
414 class NodeRenamerPlugin(object):
415 # Map from absolute source chroot path -> path of original source relative to the buildroot.
416 _SOURCES_MAP = {sources_map!r}
417
418 def __init__(self, rootdir):
419 def rootdir_relative(path):
420 return os.path.relpath(path, rootdir)
421
422 self._sources_map = {{rootdir_relative(k): rootdir_relative(v)
423 for k, v in self._SOURCES_MAP.items()}}
424
425 @pytest.hookimpl(hookwrapper=True)
426 def pytest_runtest_protocol(self, item, nextitem):
427 # Temporarily change the nodeid, which pytest uses for display.
428 real_nodeid = item.nodeid
429 real_path = real_nodeid.split('::', 1)[0]
430 fixed_path = self._sources_map.get(real_path, real_path)
431 fixed_nodeid = fixed_path + real_nodeid[len(real_path):]
432 try:
433 item._nodeid = fixed_nodeid
434 yield
435 finally:
436 item._nodeid = real_nodeid
437
438
439 # The path to write out the py.test rootdir to.
440 _ROOTDIR_COMM_PATH = {rootdir_comm_path!r}
441
442
443 def pytest_configure(config):
444 rootdir = str(config.rootdir)
445 with open(_ROOTDIR_COMM_PATH, 'w') as fp:
446 fp.write(rootdir)
447
448 config.pluginmanager.register(NodeRenamerPlugin(rootdir), 'pants_test_renamer')
449
450 """.format(sources_map=dict(sources_map), rootdir_comm_path=rootdir_comm_path))
451 # Add in the sharding conftest, if any.
452 shard_conftest_content = self._get_shard_conftest_content()
453 return console_output_conftest_content + shard_conftest_content
454
455 @contextmanager
456 def _conftest(self, sources_map):
457 """Creates a conftest.py to customize our pytest run."""
458 # Note that it's important to put the tmpdir under the workdir, because pytest
459 # uses all arguments that look like paths to compute its rootdir, and we want
460 # it to pick the buildroot.
461 with temporary_dir(root_dir=self.workdir) as conftest_dir:
462 rootdir_comm_path = os.path.join(conftest_dir, 'pytest_rootdir.path')
463
464 def get_pytest_rootdir():
465 with open(rootdir_comm_path, 'r') as fp:
466 return fp.read()
467
468 conftest_content = self._get_conftest_content(sources_map,
469 rootdir_comm_path=rootdir_comm_path)
470
471 conftest = os.path.join(conftest_dir, 'conftest.py')
472 with open(conftest, 'w') as fp:
473 fp.write(conftest_content)
474 yield conftest, get_pytest_rootdir
475
476 @contextmanager
477 def _test_runner(self, workdirs, test_targets, sources_map):
478 pytest_binary = self.context.products.get_data(PytestPrep.PytestBinary)
479 with self._conftest(sources_map) as (conftest, get_pytest_rootdir):
480 with self._maybe_emit_coverage_data(workdirs,
481 test_targets,
482 pytest_binary.pex) as coverage_args:
483 yield pytest_binary, [conftest] + coverage_args, get_pytest_rootdir
484
485 def _do_run_tests_with_args(self, pex, args):
486 try:
487 env = dict(os.environ)
488
489 # Ensure we don't leak source files or undeclared 3rdparty requirements into the py.test PEX
490 # environment.
491 pythonpath = env.pop('PYTHONPATH', None)
492 if pythonpath:
493 self.context.log.warn('scrubbed PYTHONPATH={} from py.test environment'.format(pythonpath))
494 # But allow this back door for users who do want to force something onto the test pythonpath,
495 # e.g., modules required during a debugging session.
496 extra_pythonpath = self.get_options().extra_pythonpath
497 if extra_pythonpath:
498 env['PYTHONPATH'] = os.pathsep.join(extra_pythonpath)
499
500 # The pytest runner we use accepts a --pdb argument that will launch an interactive pdb
501 # session on any test failure. In order to support use of this pass-through flag we must
502 # turn off stdin buffering that otherwise occurs. Setting the PYTHONUNBUFFERED env var to
503 # any value achieves this in python2.7. We'll need a different solution when we support
504 # running pants under CPython 3 which does not unbuffer stdin using this trick.
505 env['PYTHONUNBUFFERED'] = '1'
506
507 # pytest uses py.io.terminalwriter for output. That class detects the terminal
508 # width and attempts to use all of it. However we capture and indent the console
509 # output, leading to weird-looking line wraps. So we trick the detection code
510 # into thinking the terminal window is narrower than it is.
511 env['COLUMNS'] = str(int(os.environ.get('COLUMNS', 80)) - 30)
512
513 profile = self.get_options().profile
514 if profile:
515 env['PEX_PROFILE_FILENAME'] = '{0}.subprocess.{1:.6f}'.format(profile, time.time())
516
517 with self.context.new_workunit(name='run',
518 cmd=' '.join(pex.cmdline(args)),
519 labels=[WorkUnitLabel.TOOL, WorkUnitLabel.TEST]) as workunit:
520 rc = self._spawn_and_wait(pex, workunit=workunit, args=args, setsid=True, env=env)
521 return PytestResult.rc(rc)
522 except ErrorWhileTesting:
523 # _spawn_and_wait wraps the test runner in a timeout, so it could
524 # fail with a ErrorWhileTesting. We can't just set PythonTestResult
525 # to a failure because the resultslog doesn't have all the failures
526 # when tests are killed with a timeout. Therefore we need to re-raise
527 # here.
528 raise
529 except Exception:
530 self.context.log.error('Failed to run test!')
531 self.context.log.info(traceback.format_exc())
532 return PytestResult.exception()
533
534 def _map_relsrc_to_targets(self, targets):
535 pex_src_root = os.path.relpath(self._source_chroot_path, get_buildroot())
536 # First map chrooted sources back to their targets.
537 relsrc_to_target = {os.path.join(pex_src_root, src): target for target in targets
538 for src in target.sources_relative_to_source_root()}
539 # Also map the source tree-rooted sources, because in some cases (e.g., a failure to even
540 # eval the test file during test collection), that's the path pytest will use in the junit xml.
541 relsrc_to_target.update({src: target for target in targets
542 for src in target.sources_relative_to_buildroot()})
543
544 return relsrc_to_target
545
546 def _get_failed_targets_from_junitxml(self, junitxml, targets, pytest_rootdir):
547 relsrc_to_target = self._map_relsrc_to_targets(targets)
548 buildroot_relpath = os.path.relpath(pytest_rootdir, get_buildroot())
549
550 # Now find the sources that contained failing tests.
551 failed_targets = set()
552
553 try:
554 xml = XmlParser.from_file(junitxml)
555 failures = int(xml.get_attribute('testsuite', 'failures'))
556 errors = int(xml.get_attribute('testsuite', 'errors'))
557 if failures or errors:
558 for testcase in xml.parsed.getElementsByTagName('testcase'):
559 test_failed = testcase.getElementsByTagName('failure')
560 test_errored = testcase.getElementsByTagName('error')
561 if test_failed or test_errored:
562 # The file attribute is always relative to the py.test rootdir.
563 pytest_relpath = testcase.getAttribute('file')
564 relsrc = os.path.join(buildroot_relpath, pytest_relpath)
565 failed_target = relsrc_to_target.get(relsrc)
566 failed_targets.add(failed_target)
567 except (XmlParser.XmlError, ValueError) as e:
568 raise TaskError('Error parsing xml file at {}: {}'.format(junitxml, e))
569
570 return failed_targets
571
572 def _get_target_from_test(self, test_info, targets, pytest_rootdir):
573 relsrc_to_target = self._map_relsrc_to_targets(targets)
574 buildroot_relpath = os.path.relpath(pytest_rootdir, get_buildroot())
575 pytest_relpath = test_info['file']
576 relsrc = os.path.join(buildroot_relpath, pytest_relpath)
577 return relsrc_to_target.get(relsrc)
578
579 @contextmanager
580 def partitions(self, per_target, all_targets, test_targets):
581 if per_target:
582 def iter_partitions():
583 for test_target in test_targets:
584 yield (test_target,)
585 else:
586 def iter_partitions():
587 yield tuple(test_targets)
588
589 workdir = self.workdir
590
591 def iter_partitions_with_args():
592 for partition in iter_partitions():
593 workdirs = _Workdirs.for_partition(workdir, partition)
594 args = (workdirs,)
595 yield partition, args
596
597 yield iter_partitions_with_args
598
599   # TODO(John Sirois): It's probably worth generalizing a means to mark certain options or target
600 # attributes as making results un-cacheable. See: https://github.com/pantsbuild/pants/issues/4748
601 class NeverCacheFingerprintStrategy(DefaultFingerprintStrategy):
602 def compute_fingerprint(self, target):
603 return uuid.uuid4()
604
605 def fingerprint_strategy(self):
606 if self.get_options().profile:
607 # A profile is machine-specific and we assume anyone wanting a profile wants to run it here
608 # and now and not accept some old result, even if on the same inputs.
609 return self.NeverCacheFingerprintStrategy()
610 else:
611 return None # Accept the default fingerprint strategy.
612
613 def run_tests(self, fail_fast, test_targets, workdirs):
614 try:
615 return self._run_pytest(fail_fast, tuple(test_targets), workdirs)
616 finally:
617 # Unconditionally pluck any results that an end user might need to interact with from the
618 # workdir to the locations they expect.
619 self._expose_results(test_targets, workdirs)
620
621 @memoized_property
622 def result_class(self):
623 return PytestResult
624
625 def collect_files(self, workdirs):
626 return workdirs.files()
627
628 def _expose_results(self, invalid_tgts, workdirs):
629 external_junit_xml_dir = self.get_options().junit_xml_dir
630 if external_junit_xml_dir:
631 # Either we just ran pytest for a set of invalid targets and generated a junit xml file
632 # specific to that (sub)set or else we hit the cache for the whole partition and skipped
633 # running pytest, simply retrieving the partition's full junit xml file.
634 junitxml_path = workdirs.junitxml_path(*invalid_tgts)
635
636 safe_mkdir(external_junit_xml_dir)
637 shutil.copy2(junitxml_path, external_junit_xml_dir)
638
639 if self.get_options().coverage:
640 coverage_output_dir = self.get_options().coverage_output_dir
641 if coverage_output_dir:
642 target_dir = coverage_output_dir
643 else:
644 pants_distdir = self.context.options.for_global_scope().pants_distdir
645 relpath = workdirs.target_set_id()
646 target_dir = os.path.join(pants_distdir, 'coverage', relpath)
647 mergetree(workdirs.coverage_path, target_dir)
648
649 def _run_pytest(self, fail_fast, test_targets, workdirs):
650 if not test_targets:
651 return PytestResult.rc(0)
652
653 # Absolute path to chrooted test file -> Path to original test file relative to the buildroot.
654 sources_map = OrderedDict()
655 for t in test_targets:
656 for p in t.sources_relative_to_source_root():
657 sources_map[os.path.join(self._source_chroot_path, p)] = os.path.join(t.target_base, p)
658
659 if not sources_map:
660 return PytestResult.rc(0)
661
662 with self._test_runner(workdirs, test_targets, sources_map) as (pytest_binary,
663 test_args,
664 get_pytest_rootdir):
665 # Validate that the user didn't provide any passthru args that conflict
666 # with those we must set ourselves.
667 for arg in self.get_passthru_args():
668 if arg.startswith('--junitxml') or arg.startswith('--confcutdir'):
669 raise TaskError('Cannot pass this arg through to pytest: {}'.format(arg))
670
671 junitxml_path = workdirs.junitxml_path(*test_targets)
672
673 # N.B. the `--confcutdir` here instructs pytest to stop scanning for conftest.py files at the
674 # top of the buildroot. This prevents conftest.py files from outside (e.g. in users home dirs)
675 # from leaking into pants test runs. See: https://github.com/pantsbuild/pants/issues/2726
676 args = ['-c', pytest_binary.config_path,
677 '--junitxml', junitxml_path,
678 '--confcutdir', get_buildroot(),
679 '--continue-on-collection-errors']
680 if fail_fast:
681 args.extend(['-x'])
682 if self._debug:
683 args.extend(['-s'])
684 if self.get_options().colors:
685 args.extend(['--color', 'yes'])
686
687 if self.get_options().options:
688 for opt in self.get_options().options:
689 args.extend(safe_shlex_split(opt))
690 args.extend(self.get_passthru_args())
691
692 args.extend(test_args)
693 args.extend(sources_map.keys())
694
695 # We want to ensure our reporting based off junit xml is from this run so kill results from
696 # prior runs.
697 if os.path.exists(junitxml_path):
698 os.unlink(junitxml_path)
699
700 with self._maybe_run_in_chroot():
701 result = self._do_run_tests_with_args(pytest_binary.pex, args)
702
703 # There was a problem prior to test execution preventing junit xml file creation so just let
704 # the failure result bubble.
705 if not os.path.exists(junitxml_path):
706 return result
707
708 pytest_rootdir = get_pytest_rootdir()
709 failed_targets = self._get_failed_targets_from_junitxml(junitxml_path,
710 test_targets,
711 pytest_rootdir)
712
713 def parse_error_handler(parse_error):
714 # Simple error handler to pass to xml parsing function.
715 raise TaskError('Error parsing xml file at {}: {}'
716 .format(parse_error.xml_path, parse_error.cause))
717
718 all_tests_info = self.parse_test_info(junitxml_path, parse_error_handler,
719 ['file', 'name', 'classname'])
720 for test_name, test_info in all_tests_info.items():
721 test_target = self._get_target_from_test(test_info, test_targets, pytest_rootdir)
722 self.report_all_info_for_single_test(self.options_scope, test_target, test_name, test_info)
723
724 return result.with_failed_targets(failed_targets)
725
726 @memoized_property
727 def _source_chroot_path(self):
728 return self.context.products.get_data(GatherSources.PYTHON_SOURCES).path()
729
730 def _pex_run(self, pex, workunit_name, args, env):
731 with self.context.new_workunit(name=workunit_name,
732 cmd=' '.join(pex.cmdline(args)),
733 labels=[WorkUnitLabel.TOOL, WorkUnitLabel.TEST]) as workunit:
734 process = self._spawn(pex, workunit, args, setsid=False, env=env)
735 return process.wait()
736
737 @contextmanager
738 def _maybe_run_in_chroot(self):
739 if self.run_tests_in_chroot:
740 with pushd(self._source_chroot_path):
741 yield
742 else:
743 yield
744
745 def _spawn(self, pex, workunit, args, setsid=False, env=None):
746 env = env or {}
747 process = pex.run(args,
748 with_chroot=False, # We handle chrooting ourselves.
749 blocking=False,
750 setsid=setsid,
751 env=env,
752 stdout=workunit.output('stdout'),
753 stderr=workunit.output('stderr'))
754 return SubprocessProcessHandler(process)
755
[end of src/python/pants/backend/python/tasks/pytest_run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
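
A minimal sketch, assuming a local git checkout of the repository, of how a patch in the format above could be applied with `git apply`; the helper name `apply_patch` and the temporary-file handling are illustrative choices only, not part of any required workflow.

```python
import subprocess
import tempfile

def apply_patch(repo_dir, patch_text):
    """Write the patch text to a temporary file and apply it inside repo_dir."""
    with tempfile.NamedTemporaryFile("w", suffix=".patch", delete=False) as fp:
        fp.write(patch_text)
        patch_path = fp.name
    # --check verifies the patch applies cleanly without touching the working tree.
    subprocess.run(["git", "apply", "--check", patch_path], cwd=repo_dir, check=True)
    subprocess.run(["git", "apply", patch_path], cwd=repo_dir, check=True)
```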
|
pantsbuild/pants
|
efe6d6d7c8938f0375a2e083fcaea013931718e7
|
Consider porting graph-inspection tests to rust
There is a series of graph-introspection tests in:
- `tests/python/pants_test/engine/test_scheduler.py`
- `tests/python/pants_test/engine/test_fs.py`
- `tests/python/pants_test/engine/test_graph.py`
that are half integration test and half unit test (in that they inspect the internals of the execution graph to validate how a result was achieved). All tests that inspect graph internals have been skipped.
|
2018-07-17T23:47:06Z
|
<patch>
diff --git a/src/python/pants/backend/native/subsystems/binaries/binutils.py b/src/python/pants/backend/native/subsystems/binaries/binutils.py
--- a/src/python/pants/backend/native/subsystems/binaries/binutils.py
+++ b/src/python/pants/backend/native/subsystems/binaries/binutils.py
@@ -8,7 +8,7 @@
from pants.backend.native.config.environment import Assembler, Linker
from pants.binaries.binary_tool import NativeTool
-from pants.engine.rules import RootRule, rule
+from pants.engine.rules import rule
from pants.engine.selectors import Select
@@ -49,5 +49,4 @@ def create_binutils_rules():
return [
get_as,
get_ld,
- RootRule(Binutils),
]
diff --git a/src/python/pants/backend/native/subsystems/binaries/gcc.py b/src/python/pants/backend/native/subsystems/binaries/gcc.py
--- a/src/python/pants/backend/native/subsystems/binaries/gcc.py
+++ b/src/python/pants/backend/native/subsystems/binaries/gcc.py
@@ -9,7 +9,7 @@
from pants.backend.native.config.environment import CCompiler, CppCompiler, Platform
from pants.backend.native.subsystems.utils.archive_file_mapper import ArchiveFileMapper
from pants.binaries.binary_tool import NativeTool
-from pants.engine.rules import RootRule, rule
+from pants.engine.rules import rule
from pants.engine.selectors import Select
from pants.util.memo import memoized_method, memoized_property
@@ -108,5 +108,4 @@ def create_gcc_rules():
return [
get_gcc,
get_gplusplus,
- RootRule(GCC),
]
diff --git a/src/python/pants/backend/native/subsystems/xcode_cli_tools.py b/src/python/pants/backend/native/subsystems/xcode_cli_tools.py
--- a/src/python/pants/backend/native/subsystems/xcode_cli_tools.py
+++ b/src/python/pants/backend/native/subsystems/xcode_cli_tools.py
@@ -7,7 +7,7 @@
import os
from pants.backend.native.config.environment import Assembler, CCompiler, CppCompiler, Linker
-from pants.engine.rules import RootRule, rule
+from pants.engine.rules import rule
from pants.engine.selectors import Select
from pants.subsystem.subsystem import Subsystem
from pants.util.dirutil import is_readable_dir
@@ -196,5 +196,4 @@ def create_xcode_cli_tools_rules():
get_ld,
get_clang,
get_clang_plusplus,
- RootRule(XCodeCLITools),
]
diff --git a/src/python/pants/engine/fs.py b/src/python/pants/engine/fs.py
--- a/src/python/pants/engine/fs.py
+++ b/src/python/pants/engine/fs.py
@@ -144,5 +144,4 @@ def create_fs_rules():
return [
RootRule(DirectoryDigest),
RootRule(PathGlobs),
- RootRule(Snapshot),
]
diff --git a/src/python/pants/engine/native.py b/src/python/pants/engine/native.py
--- a/src/python/pants/engine/native.py
+++ b/src/python/pants/engine/native.py
@@ -171,7 +171,6 @@
void tasks_task_begin(Tasks*, Function, TypeConstraint);
void tasks_add_get(Tasks*, TypeConstraint, TypeId);
void tasks_add_select(Tasks*, TypeConstraint);
-void tasks_add_select_variant(Tasks*, TypeConstraint, Buffer);
void tasks_task_end(Tasks*);
void tasks_singleton_add(Tasks*, Handle, TypeConstraint);
void tasks_destroy(Tasks*);
@@ -197,8 +196,6 @@
TypeConstraint,
TypeConstraint,
TypeConstraint,
- TypeConstraint,
- TypeConstraint,
TypeId,
TypeId,
Buffer,
@@ -802,9 +799,7 @@ def new_scheduler(self,
construct_file,
construct_link,
construct_process_result,
- constraint_has_products,
constraint_address,
- constraint_variants,
constraint_path_globs,
constraint_directory_digest,
constraint_snapshot,
@@ -836,8 +831,6 @@ def tc(constraint):
func(construct_process_result),
# TypeConstraints.
tc(constraint_address),
- tc(constraint_has_products),
- tc(constraint_variants),
tc(constraint_path_globs),
tc(constraint_directory_digest),
tc(constraint_snapshot),
diff --git a/src/python/pants/engine/rules.py b/src/python/pants/engine/rules.py
--- a/src/python/pants/engine/rules.py
+++ b/src/python/pants/engine/rules.py
@@ -131,10 +131,6 @@ class Rule(AbstractClass):
def output_constraint(self):
"""An output Constraint type for the rule."""
- @abstractproperty
- def input_selectors(self):
- """Collection of input selectors."""
-
class TaskRule(datatype(['output_constraint', 'input_selectors', 'input_gets', 'func']), Rule):
"""A Rule that runs a task function when all of its input selectors are satisfied.
@@ -191,10 +187,6 @@ def __new__(cls, output_type, value):
# Create.
return super(SingletonRule, cls).__new__(cls, constraint, value)
- @property
- def input_selectors(self):
- return tuple()
-
def __repr__(self):
return '{}({}, {})'.format(type(self).__name__, type_or_constraint_repr(self.output_constraint), self.value)
@@ -207,9 +199,6 @@ class RootRule(datatype(['output_constraint']), Rule):
of an execution.
"""
- def input_selectors(self):
- return []
-
class RuleIndex(datatype(['rules', 'roots'])):
"""Holds an index of Tasks and Singletons used to instantiate Nodes."""
diff --git a/src/python/pants/engine/scheduler.py b/src/python/pants/engine/scheduler.py
--- a/src/python/pants/engine/scheduler.py
+++ b/src/python/pants/engine/scheduler.py
@@ -20,11 +20,10 @@
from pants.engine.native import Function, TypeConstraint, TypeId
from pants.engine.nodes import Return, State, Throw
from pants.engine.rules import RuleIndex, SingletonRule, TaskRule
-from pants.engine.selectors import Select, SelectVariant, constraint_for
-from pants.engine.struct import HasProducts, Variants
+from pants.engine.selectors import Select, constraint_for
from pants.util.contextutil import temporary_file_path
from pants.util.dirutil import check_no_overlapping_paths
-from pants.util.objects import Collection, SubclassesOf, datatype
+from pants.util.objects import Collection, datatype
from pants.util.strutil import pluralize
@@ -103,9 +102,6 @@ def __init__(
self._native = native
self.include_trace_on_error = include_trace_on_error
- # TODO: The only (?) case where we use inheritance rather than exact type unions.
- has_products_constraint = SubclassesOf(HasProducts)
-
# Validate and register all provided and intrinsic tasks.
rule_index = RuleIndex.create(list(rules))
self._root_subject_types = sorted(rule_index.roots, key=repr)
@@ -132,9 +128,7 @@ def __init__(
construct_file=File,
construct_link=Link,
construct_process_result=FallibleExecuteProcessResult,
- constraint_has_products=has_products_constraint,
constraint_address=constraint_for(Address),
- constraint_variants=constraint_for(Variants),
constraint_path_globs=constraint_for(PathGlobs),
constraint_directory_digest=constraint_for(DirectoryDigest),
constraint_snapshot=constraint_for(Snapshot),
@@ -230,18 +224,13 @@ def _register_singleton(self, output_constraint, rule):
def _register_task(self, output_constraint, rule):
"""Register the given TaskRule with the native scheduler."""
- func = rule.func
- self._native.lib.tasks_task_begin(self._tasks, Function(self._to_key(func)), output_constraint)
+ func = Function(self._to_key(rule.func))
+ self._native.lib.tasks_task_begin(self._tasks, func, output_constraint)
for selector in rule.input_selectors:
selector_type = type(selector)
product_constraint = self._to_constraint(selector.product)
if selector_type is Select:
self._native.lib.tasks_add_select(self._tasks, product_constraint)
- elif selector_type is SelectVariant:
- key_buf = self._to_utf8_buf(selector.variant_key)
- self._native.lib.tasks_add_select_variant(self._tasks,
- product_constraint,
- key_buf)
else:
raise ValueError('Unrecognized Selector type: {}'.format(selector))
for get in rule.input_gets:
@@ -325,7 +314,7 @@ def _run_and_return_roots(self, session, execution_request):
for raw_root in self._native.unpack(raw_roots.nodes_ptr, raw_roots.nodes_len):
if raw_root.state_tag is 1:
state = Return(self._from_value(raw_root.state_value))
- elif raw_root.state_tag in (2, 3, 4):
+ elif raw_root.state_tag in (2, 3):
state = Throw(self._from_value(raw_root.state_value))
else:
raise ValueError(
diff --git a/src/python/pants/engine/selectors.py b/src/python/pants/engine/selectors.py
--- a/src/python/pants/engine/selectors.py
+++ b/src/python/pants/engine/selectors.py
@@ -8,8 +8,6 @@
from abc import abstractproperty
from builtins import str
-import six
-
from pants.util.meta import AbstractClass
from pants.util.objects import Exactly, datatype
@@ -110,23 +108,3 @@ def __repr__(self):
return '{}({}{})'.format(type(self).__name__,
type_or_constraint_repr(self.product),
', optional=True' if self.optional else '')
-
-
-class SelectVariant(datatype(['product', 'variant_key']), Selector):
- """Selects the matching Product and variant name for the Subject provided to the constructor.
-
- For example: a SelectVariant with a variant_key of "thrift" and a product of type ApacheThrift
- will only match when a consumer passes a variant value for "thrift" that matches the name of an
- ApacheThrift value.
- """
- optional = False
-
- def __new__(cls, product, variant_key):
- if not isinstance(variant_key, six.string_types):
- raise ValueError('Expected variant_key to be a string, but was {!r}'.format(variant_key))
- return super(SelectVariant, cls).__new__(cls, product, variant_key)
-
- def __repr__(self):
- return '{}({}, {})'.format(type(self).__name__,
- type_or_constraint_repr(self.product),
- repr(self.variant_key))
diff --git a/src/python/pants/engine/struct.py b/src/python/pants/engine/struct.py
--- a/src/python/pants/engine/struct.py
+++ b/src/python/pants/engine/struct.py
@@ -4,14 +4,12 @@
from __future__ import absolute_import, division, print_function, unicode_literals
-from abc import abstractproperty
from collections import MutableMapping, MutableSequence
from future.utils import binary_type, text_type
from pants.engine.addressable import addressable, addressable_list
from pants.engine.objects import Serializable, SerializableFactory, Validatable, ValidationError
-from pants.util.meta import AbstractClass
from pants.util.objects import SubclassesOf, SuperclassesOf
@@ -307,28 +305,3 @@ def dependencies(self):
:rtype: list
"""
-
-
-class HasProducts(AbstractClass):
- """A mixin for a class that has a collection of products which it would like to expose."""
-
- @abstractproperty
- def products(self):
- """Returns a collection of products held by this class."""
-
-
-class Variants(Struct):
- """A struct that holds default variant values.
-
- Variants are key-value pairs representing uniquely identifying parameters for a Node.
-
- Default variants are usually configured on a Target to be used whenever they are
- not specified by a caller.
- """
-
- def __init__(self, default=None, **kwargs):
- """
- :param dict default: A dict of default variant values.
- """
- # TODO: enforce the type of variants using the Addressable framework.
- super(Variants, self).__init__(default=default, **kwargs)
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-6569
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nth groupby method on DataFrame
The nth groupby method on a Series takes the nth **non-NaN** value in the Series.
This means on a DataFrame it's not going to be well defined...
Should we make this a Series only method?
```
In [1]: g = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],columns=['A']).groupby('A')
In [2]: g.nth(0)
Out[2]:
Empty DataFrame
Columns: []
Index: []
In [3]: g.A.nth(0)
Out[3]:
A
a a
b b
Name: A, dtype: object
```
</issue>
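
A minimal sketch, assuming a reasonably recent pandas install, that spells out the two readings of `nth` discussed above; the helper names `nth_row` and `nth_non_nan` are hypothetical, introduced only for illustration.

```python
import pandas as pd

df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], columns=['A'])
g = df.groupby('A')

def nth_row(grouped, n):
    # Reading 1: the nth *row* of each group, NaNs included.
    # GroupBy.apply hands each group to the callable as a DataFrame.
    return grouped.apply(lambda group: group.iloc[n])

def nth_non_nan(series, n):
    # Reading 2 (the Series behaviour described above): the nth *non-NaN*
    # value, which is only well defined one column at a time.
    return series.dropna().iloc[n]

print(nth_row(g, 0))
print(g['A'].apply(nth_non_nan, n=0))
```

With a single object column and no missing values, as in the session above, the two readings coincide; they diverge as soon as a column contains NaN, which is the ambiguity raised here.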
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, for writing .xlsx files
118 - xlrd >= 0.9.0
119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
120 - Alternative Excel writer.
121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
122 - Needed for `pandas.io.gbq`
123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
124 - One of the following combinations of libraries is needed to use the
125 top-level [`pandas.read_html`][read-html-docs] function:
126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
127 recent version of [html5lib][html5lib] is okay.)
128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
131 for reasons as to why you should probably **not** take this approach.
132
133 #### Notes about HTML parsing libraries
134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
135 either [lxml][lxml] or [html5lib][html5lib] or both.
136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
137 installed.
138 - You are strongly encouraged to read [HTML reading
139 gotchas][html-gotchas]. It explains issues surrounding the
140 installation and usage of the above three libraries.
141 - You may need to install an older version of
142 [BeautifulSoup4][BeautifulSoup4]:
143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
144 32-bit Ubuntu/Debian
145 - Additionally, if you're using [Anaconda][Anaconda] you should
146 definitely read [the gotchas about HTML parsing][html-gotchas]
147 libraries
148 - If you're on a system with `apt-get` you can do
149
150 ```sh
151 sudo apt-get build-dep python-lxml
152 ```
153
154 to get the necessary dependencies for installation of [lxml][lxml].
155 This will prevent further headaches down the line.
156
157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
159 [lxml]: http://lxml.de
160 [Anaconda]: https://store.continuum.io/cshop/anaconda
161 [NumPy]: http://numpy.scipy.org/
162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
164
165 ## Installation from sources
166 To install pandas from source you need Cython in addition to the normal
167 dependencies above. Cython can be installed from pypi:
168
169 ```sh
170 pip install cython
171 ```
172
173 In the `pandas` directory (same one where you found this file after
174 cloning the git repo), execute:
175
176 ```sh
177 python setup.py install
178 ```
179
180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
181
182 ```sh
183 python setup.py develop
184 ```
185
186 Alternatively, you can use `pip` if you want all the dependencies pulled
187 in automatically (the `-e` option is for installing it in [development
188 mode](http://www.pip-installer.org/en/latest/usage.html)):
189
190 ```sh
191 pip install -e .
192 ```
193
194 On Windows, you will need to install MinGW and execute:
195
196 ```sh
197 python setup.py build --compiler=mingw32
198 python setup.py install
199 ```
200
201 See http://pandas.pydata.org/ for more information.
202
203 ## License
204 BSD
205
206 ## Documentation
207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
208
209 The Sphinx documentation should provide a good starting point for learning how
210 to use the library. Expect the docs to continue to expand as time goes on.
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Discussion and Development
217 Since pandas development is related to a number of other scientific
218 Python projects, questions are welcome on the scipy-user mailing
219 list. Specialized discussions or design issues should take place on
220 the pystatsmodels mailing list / Google group, where
221 ``scikits.statsmodels`` and other libraries will also be discussed:
222
223 http://groups.google.com/group/pystatsmodels
224
[end of README.md]
[start of pandas/io/html.py]
1 """:mod:`pandas.io.html` is a module containing functionality for dealing with
2 HTML IO.
3
4 """
5
6 import os
7 import re
8 import numbers
9 import collections
10 import warnings
11
12 from distutils.version import LooseVersion
13
14 import numpy as np
15
16 from pandas.io.common import _is_url, urlopen, parse_url
17 from pandas.io.parsers import TextParser
18 from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
19 raise_with_traceback)
20 from pandas.core import common as com
21 from pandas import Series
22
23
24 try:
25 import bs4
26 except ImportError:
27 _HAS_BS4 = False
28 else:
29 _HAS_BS4 = True
30
31
32 try:
33 import lxml
34 except ImportError:
35 _HAS_LXML = False
36 else:
37 _HAS_LXML = True
38
39
40 try:
41 import html5lib
42 except ImportError:
43 _HAS_HTML5LIB = False
44 else:
45 _HAS_HTML5LIB = True
46
47
48 #############
49 # READ HTML #
50 #############
51 _RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
52
53
54 def _remove_whitespace(s, regex=_RE_WHITESPACE):
55 """Replace extra whitespace inside of a string with a single space.
56
57 Parameters
58 ----------
59 s : str or unicode
60 The string from which to remove extra whitespace.
61
62 regex : regex
63 The regular expression to use to remove extra whitespace.
64
65 Returns
66 -------
67 subd : str or unicode
68 `s` with all extra whitespace replaced with a single space.
69 """
70 return regex.sub(' ', s.strip())
71
72
73 def _get_skiprows(skiprows):
74 """Get an iterator given an integer, slice or container.
75
76 Parameters
77 ----------
78 skiprows : int, slice, container
79 The iterator to use to skip rows; can also be a slice.
80
81 Raises
82 ------
83 TypeError
84 * If `skiprows` is not a slice, integer, or Container
85
86 Returns
87 -------
88 it : iterable
89 A proper iterator to use to skip rows of a DataFrame.
90 """
91 if isinstance(skiprows, slice):
92 return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
93 elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
94 return skiprows
95 elif skiprows is None:
96 return 0
97 raise TypeError('%r is not a valid type for skipping rows' %
98 type(skiprows).__name__)
99
100
101 def _read(io):
102 """Try to read from a url, file or string.
103
104 Parameters
105 ----------
106 io : str, unicode, or file-like
107
108 Returns
109 -------
110 raw_text : str
111 """
112 if _is_url(io):
113 with urlopen(io) as url:
114 raw_text = url.read()
115 elif hasattr(io, 'read'):
116 raw_text = io.read()
117 elif os.path.isfile(io):
118 with open(io) as f:
119 raw_text = f.read()
120 elif isinstance(io, string_types):
121 raw_text = io
122 else:
123 raise TypeError("Cannot read object of type %r" % type(io).__name__)
124 return raw_text
125
126
127 class _HtmlFrameParser(object):
128 """Base class for parsers that parse HTML into DataFrames.
129
130 Parameters
131 ----------
132 io : str or file-like
133 This can be either a string of raw HTML, a valid URL using the HTTP,
134 FTP, or FILE protocols or a file-like object.
135
136 match : str or regex
137 The text to match in the document.
138
139 attrs : dict
140 List of HTML <table> element attributes to match.
141
142 Attributes
143 ----------
144 io : str or file-like
145 raw HTML, URL, or file-like object
146
147 match : regex
148 The text to match in the raw HTML
149
150 attrs : dict-like
151 A dictionary of valid table attributes to use to search for table
152 elements.
153
154 Notes
155 -----
156 To subclass this class effectively you must override the following methods:
157 * :func:`_build_doc`
158 * :func:`_text_getter`
159 * :func:`_parse_td`
160 * :func:`_parse_tables`
161 * :func:`_parse_tr`
162 * :func:`_parse_thead`
163 * :func:`_parse_tbody`
164 * :func:`_parse_tfoot`
165 See each method's respective documentation for details on their
166 functionality.
167 """
168 def __init__(self, io, match, attrs):
169 self.io = io
170 self.match = match
171 self.attrs = attrs
172
173 def parse_tables(self):
174 tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
175 return (self._build_table(table) for table in tables)
176
177 def _parse_raw_data(self, rows):
178 """Parse the raw data into a list of lists.
179
180 Parameters
181 ----------
182 rows : iterable of node-like
183 A list of row elements.
184
185 text_getter : callable
186 A callable that gets the text from an individual node. This must be
187 defined by subclasses.
188
189 column_finder : callable
190 A callable that takes a row node as input and returns a list of the
191             column nodes in that row. This must be defined by subclasses.
192
193 Returns
194 -------
195 data : list of list of strings
196 """
197 data = [[_remove_whitespace(self._text_getter(col)) for col in
198 self._parse_td(row)] for row in rows]
199 return data
200
201 def _text_getter(self, obj):
202 """Return the text of an individual DOM node.
203
204 Parameters
205 ----------
206 obj : node-like
207 A DOM node.
208
209 Returns
210 -------
211 text : str or unicode
212 The text from an individual DOM node.
213 """
214 raise NotImplementedError
215
216 def _parse_td(self, obj):
217 """Return the td elements from a row element.
218
219 Parameters
220 ----------
221 obj : node-like
222
223 Returns
224 -------
225 columns : list of node-like
226 These are the elements of each row, i.e., the columns.
227 """
228 raise NotImplementedError
229
230 def _parse_tables(self, doc, match, attrs):
231 """Return all tables from the parsed DOM.
232
233 Parameters
234 ----------
235 doc : tree-like
236 The DOM from which to parse the table element.
237
238 match : str or regular expression
239 The text to search for in the DOM tree.
240
241 attrs : dict
242 A dictionary of table attributes that can be used to disambiguate
243             multiple tables on a page.
244
245 Raises
246 ------
247 ValueError
248 * If `match` does not match any text in the document.
249
250 Returns
251 -------
252 tables : list of node-like
253 A list of <table> elements to be parsed into raw data.
254 """
255 raise NotImplementedError
256
257 def _parse_tr(self, table):
258 """Return the list of row elements from the parsed table element.
259
260 Parameters
261 ----------
262 table : node-like
263 A table element that contains row elements.
264
265 Returns
266 -------
267 rows : list of node-like
268             A list of row elements of a table, usually <tr> or <th> elements.
269 """
270 raise NotImplementedError
271
272 def _parse_thead(self, table):
273 """Return the header of a table.
274
275 Parameters
276 ----------
277 table : node-like
278 A table element that contains row elements.
279
280 Returns
281 -------
282 thead : node-like
283 A <thead>...</thead> element.
284 """
285 raise NotImplementedError
286
287 def _parse_tbody(self, table):
288 """Return the body of the table.
289
290 Parameters
291 ----------
292 table : node-like
293 A table element that contains row elements.
294
295 Returns
296 -------
297 tbody : node-like
298 A <tbody>...</tbody> element.
299 """
300 raise NotImplementedError
301
302 def _parse_tfoot(self, table):
303 """Return the footer of the table if any.
304
305 Parameters
306 ----------
307 table : node-like
308 A table element that contains row elements.
309
310 Returns
311 -------
312 tfoot : node-like
313 A <tfoot>...</tfoot> element.
314 """
315 raise NotImplementedError
316
317 def _build_doc(self):
318 """Return a tree-like object that can be used to iterate over the DOM.
319
320 Returns
321 -------
322 obj : tree-like
323 """
324 raise NotImplementedError
325
326 def _build_table(self, table):
327 header = self._parse_raw_thead(table)
328 body = self._parse_raw_tbody(table)
329 footer = self._parse_raw_tfoot(table)
330 return header, body, footer
331
332 def _parse_raw_thead(self, table):
333 thead = self._parse_thead(table)
334 res = []
335 if thead:
336 res = lmap(self._text_getter, self._parse_th(thead[0]))
337 return np.array(res).squeeze() if res and len(res) == 1 else res
338
339 def _parse_raw_tfoot(self, table):
340 tfoot = self._parse_tfoot(table)
341 res = []
342 if tfoot:
343 res = lmap(self._text_getter, self._parse_td(tfoot[0]))
344 return np.array(res).squeeze() if res and len(res) == 1 else res
345
346 def _parse_raw_tbody(self, table):
347 tbody = self._parse_tbody(table)
348
349 try:
350 res = self._parse_tr(tbody[0])
351 except IndexError:
352 res = self._parse_tr(table)
353 return self._parse_raw_data(res)
354
355
356 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
357 """HTML to DataFrame parser that uses BeautifulSoup under the hood.
358
359 See Also
360 --------
361 pandas.io.html._HtmlFrameParser
362 pandas.io.html._LxmlFrameParser
363
364 Notes
365 -----
366 Documentation strings for this class are in the base class
367 :class:`pandas.io.html._HtmlFrameParser`.
368 """
369 def __init__(self, *args, **kwargs):
370 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args,
371 **kwargs)
372 from bs4 import SoupStrainer
373 self._strainer = SoupStrainer('table')
374
375 def _text_getter(self, obj):
376 return obj.text
377
378 def _parse_td(self, row):
379 return row.find_all(('td', 'th'))
380
381 def _parse_tr(self, element):
382 return element.find_all('tr')
383
384 def _parse_th(self, element):
385 return element.find_all('th')
386
387 def _parse_thead(self, table):
388 return table.find_all('thead')
389
390 def _parse_tbody(self, table):
391 return table.find_all('tbody')
392
393 def _parse_tfoot(self, table):
394 return table.find_all('tfoot')
395
396 def _parse_tables(self, doc, match, attrs):
397 element_name = self._strainer.name
398 tables = doc.find_all(element_name, attrs=attrs)
399
400 if not tables:
401 raise ValueError('No tables found')
402
403 result = []
404 unique_tables = set()
405
406 for table in tables:
407 if (table not in unique_tables and
408 table.find(text=match) is not None):
409 result.append(table)
410 unique_tables.add(table)
411
412 if not result:
413 raise ValueError("No tables found matching pattern %r" %
414 match.pattern)
415 return result
416
417 def _setup_build_doc(self):
418 raw_text = _read(self.io)
419 if not raw_text:
420 raise ValueError('No text parsed from document: %s' % self.io)
421 return raw_text
422
423 def _build_doc(self):
424 from bs4 import BeautifulSoup
425 return BeautifulSoup(self._setup_build_doc(), features='html5lib')
426
427
428 def _build_xpath_expr(attrs):
429 """Build an xpath expression to simulate bs4's ability to pass in kwargs to
430 search for attributes when using the lxml parser.
431
432 Parameters
433 ----------
434 attrs : dict
435 A dict of HTML attributes. These are NOT checked for validity.
436
437 Returns
438 -------
439 expr : unicode
440 An XPath expression that checks for the given HTML attributes.
441 """
442 # give class attribute as class_ because class is a python keyword
443 if 'class_' in attrs:
444 attrs['class'] = attrs.pop('class_')
445
446 s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
447 return u('[%s]') % ' and '.join(s)
448
449
450 _re_namespace = {'re': 'http://exslt.org/regular-expressions'}
451 _valid_schemes = 'http', 'file', 'ftp'
452
453
454 class _LxmlFrameParser(_HtmlFrameParser):
455 """HTML to DataFrame parser that uses lxml under the hood.
456
457 Warning
458 -------
459 This parser can only handle HTTP, FTP, and FILE urls.
460
461 See Also
462 --------
463 _HtmlFrameParser
464 _BeautifulSoupLxmlFrameParser
465
466 Notes
467 -----
468 Documentation strings for this class are in the base class
469 :class:`_HtmlFrameParser`.
470 """
471 def __init__(self, *args, **kwargs):
472 super(_LxmlFrameParser, self).__init__(*args, **kwargs)
473
474 def _text_getter(self, obj):
475 return obj.text_content()
476
477 def _parse_td(self, row):
478 return row.xpath('.//td|.//th')
479
480 def _parse_tr(self, table):
481 expr = './/tr[normalize-space()]'
482 return table.xpath(expr)
483
484 def _parse_tables(self, doc, match, kwargs):
485 pattern = match.pattern
486
487 # 1. check all descendants for the given pattern and only search tables
488 # 2. go up the tree until we find a table
489 query = '//table//*[re:test(text(), %r)]/ancestor::table'
490 xpath_expr = u(query) % pattern
491
492 # if any table attributes were given build an xpath expression to
493 # search for them
494 if kwargs:
495 xpath_expr += _build_xpath_expr(kwargs)
496
497 tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
498
499 if not tables:
500 raise ValueError("No tables found matching regex %r" % pattern)
501 return tables
502
503 def _build_doc(self):
504 """
505 Raises
506 ------
507 ValueError
508 * If a URL that lxml cannot parse is passed.
509
510 Exception
511 * Any other ``Exception`` thrown. For example, trying to parse a
512 URL that is syntactically correct on a machine with no internet
513 connection will fail.
514
515 See Also
516 --------
517 pandas.io.html._HtmlFrameParser._build_doc
518 """
519 from lxml.html import parse, fromstring, HTMLParser
520 from lxml.etree import XMLSyntaxError
521
522 parser = HTMLParser(recover=False)
523
524 try:
525 # try to parse the input in the simplest way
526 r = parse(self.io, parser=parser)
527
528 try:
529 r = r.getroot()
530 except AttributeError:
531 pass
532 except (UnicodeDecodeError, IOError):
533 # if the input is a blob of html goop
534 if not _is_url(self.io):
535 r = fromstring(self.io, parser=parser)
536
537 try:
538 r = r.getroot()
539 except AttributeError:
540 pass
541 else:
542 # not a url
543 scheme = parse_url(self.io).scheme
544 if scheme not in _valid_schemes:
545 # lxml can't parse it
546 msg = ('%r is not a valid url scheme, valid schemes are '
547 '%s') % (scheme, _valid_schemes)
548 raise ValueError(msg)
549 else:
550 # something else happened: maybe a faulty connection
551 raise
552 else:
553 if not hasattr(r, 'text_content'):
554 raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
555 return r
556
557 def _parse_tbody(self, table):
558 return table.xpath('.//tbody')
559
560 def _parse_thead(self, table):
561 return table.xpath('.//thead')
562
563 def _parse_tfoot(self, table):
564 return table.xpath('.//tfoot')
565
566 def _parse_raw_thead(self, table):
567 expr = './/thead//th'
568 return [_remove_whitespace(x.text_content()) for x in
569 table.xpath(expr)]
570
571 def _parse_raw_tfoot(self, table):
572 expr = './/tfoot//th'
573 return [_remove_whitespace(x.text_content()) for x in
574 table.xpath(expr)]
575
576
577 def _expand_elements(body):
578 lens = Series(lmap(len, body))
579 lens_max = lens.max()
580 not_max = lens[lens != lens_max]
581
582 empty = ['']
583 for ind, length in iteritems(not_max):
584 body[ind] += empty * (lens_max - length)
585
586
587 def _data_to_frame(data, header, index_col, skiprows, infer_types,
588 parse_dates, tupleize_cols, thousands):
589 head, body, _ = data # _ is footer which is rarely used: ignore for now
590
591 if head:
592 body = [head] + body
593
594 if header is None: # special case when a table has <th> elements
595 header = 0
596
597 # fill out elements of body that are "ragged"
598 _expand_elements(body)
599
600 tp = TextParser(body, header=header, index_col=index_col,
601 skiprows=_get_skiprows(skiprows),
602 parse_dates=parse_dates, tupleize_cols=tupleize_cols,
603 thousands=thousands)
604 df = tp.read()
605
606 if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
607 df = df.convert_objects(convert_dates='coerce')
608 else:
609 df = df.applymap(text_type)
610 return df
611
612
613 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
614 'html5lib': _BeautifulSoupHtml5LibFrameParser,
615 'bs4': _BeautifulSoupHtml5LibFrameParser}
616
617
618 def _parser_dispatch(flavor):
619 """Choose the parser based on the input flavor.
620
621 Parameters
622 ----------
623 flavor : str
624 The type of parser to use. This must be a valid backend.
625
626 Returns
627 -------
628 cls : _HtmlFrameParser subclass
629 The parser class based on the requested input flavor.
630
631 Raises
632 ------
633 ValueError
634 * If `flavor` is not a valid backend.
635 ImportError
636 * If you do not have the requested `flavor`
637 """
638 valid_parsers = list(_valid_parsers.keys())
639 if flavor not in valid_parsers:
640 raise ValueError('%r is not a valid flavor, valid flavors are %s' %
641 (flavor, valid_parsers))
642
643 if flavor in ('bs4', 'html5lib'):
644 if not _HAS_HTML5LIB:
645 raise ImportError("html5lib not found please install it")
646 if not _HAS_BS4:
647 raise ImportError("bs4 not found please install it")
648 if bs4.__version__ == LooseVersion('4.2.0'):
649 raise ValueError("You're using a version"
650 " of BeautifulSoup4 (4.2.0) that has been"
651 " known to cause problems on certain"
652 " operating systems such as Debian. "
653 "Please install a version of"
654 " BeautifulSoup4 != 4.2.0, both earlier"
655 " and later releases will work.")
656 else:
657 if not _HAS_LXML:
658 raise ImportError("lxml not found please install it")
659 return _valid_parsers[flavor]
660
661
662 def _print_as_set(s):
663 return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
664
665
666 def _validate_flavor(flavor):
667 if flavor is None:
668 flavor = 'lxml', 'bs4'
669 elif isinstance(flavor, string_types):
670 flavor = flavor,
671 elif isinstance(flavor, collections.Iterable):
672 if not all(isinstance(flav, string_types) for flav in flavor):
673 raise TypeError('Object of type %r is not an iterable of strings' %
674 type(flavor).__name__)
675 else:
676 fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
677 fmt += ' is not a valid flavor'
678 raise ValueError(fmt.format(flavor))
679
680 flavor = tuple(flavor)
681 valid_flavors = set(_valid_parsers)
682 flavor_set = set(flavor)
683
684 if not flavor_set & valid_flavors:
685 raise ValueError('%s is not a valid set of flavors, valid flavors are '
686 '%s' % (_print_as_set(flavor_set),
687 _print_as_set(valid_flavors)))
688 return flavor
689
690
691 def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
692 parse_dates, tupleize_cols, thousands, attrs):
693 flavor = _validate_flavor(flavor)
694 compiled_match = re.compile(match) # you can pass a compiled regex here
695
696 # hack around python 3 deleting the exception variable
697 retained = None
698 for flav in flavor:
699 parser = _parser_dispatch(flav)
700 p = parser(io, compiled_match, attrs)
701
702 try:
703 tables = p.parse_tables()
704 except Exception as caught:
705 retained = caught
706 else:
707 break
708 else:
709 raise_with_traceback(retained)
710
711 return [_data_to_frame(table, header, index_col, skiprows, infer_types,
712 parse_dates, tupleize_cols, thousands)
713 for table in tables]
714
715
716 def read_html(io, match='.+', flavor=None, header=None, index_col=None,
717 skiprows=None, infer_types=None, attrs=None, parse_dates=False,
718 tupleize_cols=False, thousands=','):
719 r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
720
721 Parameters
722 ----------
723 io : str or file-like
724 A URL, a file-like object, or a raw string containing HTML. Note that
725 lxml only accepts the http, ftp and file url protocols. If you have a
726 URL that starts with ``'https'`` you might try removing the ``'s'``.
727
728 match : str or compiled regular expression, optional
729 The set of tables containing text matching this regex or string will be
730 returned. Unless the HTML is extremely simple you will probably need to
731 pass a non-empty string here. Defaults to '.+' (match any non-empty
732 string). The default value will return all tables contained on a page.
733 This value is converted to a regular expression so that there is
734 consistent behavior between Beautiful Soup and lxml.
735
736 flavor : str or None, container of strings
737 The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
738 each other, they are both there for backwards compatibility. The
739 default of ``None`` tries to use ``lxml`` to parse and if that fails it
740 falls back on ``bs4`` + ``html5lib``.
741
742 header : int or list-like or None, optional
743 The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
744 make the columns headers.
745
746 index_col : int or list-like or None, optional
747 The column (or list of columns) to use to create the index.
748
749 skiprows : int or list-like or slice or None, optional
750 0-based. Number of rows to skip after parsing the column integer. If a
751 sequence of integers or a slice is given, will skip the rows indexed by
752 that sequence. Note that a single element sequence means 'skip the nth
753 row' whereas an integer means 'skip n rows'.
754
755 infer_types : bool, optional
756         This option is deprecated in 0.13, and will have no effect in 0.14. It
757 defaults to ``True``.
758
759 attrs : dict or None, optional
760 This is a dictionary of attributes that you can pass to use to identify
761 the table in the HTML. These are not checked for validity before being
762 passed to lxml or Beautiful Soup. However, these attributes must be
763 valid HTML table attributes to work correctly. For example, ::
764
765 attrs = {'id': 'table'}
766
767 is a valid attribute dictionary because the 'id' HTML tag attribute is
768 a valid HTML attribute for *any* HTML tag as per `this document
769 <http://www.w3.org/TR/html-markup/global-attributes.html>`__. ::
770
771 attrs = {'asdf': 'table'}
772
773 is *not* a valid attribute dictionary because 'asdf' is not a valid
774 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01
775 table attributes can be found `here
776 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A
777 working draft of the HTML 5 spec can be found `here
778 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the
779 latest information on table attributes for the modern web.
780
781 parse_dates : bool, optional
782 See :func:`~pandas.io.parsers.read_csv` for more details. In 0.13, this
783 parameter can sometimes interact strangely with ``infer_types``. If you
784 get a large number of ``NaT`` values in your results, consider passing
785 ``infer_types=False`` and manually converting types afterwards.
786
787 tupleize_cols : bool, optional
788 If ``False`` try to parse multiple header rows into a
789 :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
790 ``False``.
791
792 thousands : str, optional
793 Separator to use to parse thousands. Defaults to ``','``.
794
795 Returns
796 -------
797 dfs : list of DataFrames
798
799 Notes
800 -----
801 Before using this function you should read the :ref:`gotchas about the
802 HTML parsing libraries <html-gotchas>`.
803
804 Expect to do some cleanup after you call this function. For example, you
805 might need to manually assign column names if the column names are
806 converted to NaN when you pass the `header=0` argument. We try to assume as
807 little as possible about the structure of the table and push the
808 idiosyncrasies of the HTML contained in the table to the user.
809
810 This function searches for ``<table>`` elements and only for ``<tr>``
811 and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
812 element in the table. ``<td>`` stands for "table data".
813
814 Similar to :func:`~pandas.read_csv` the `header` argument is applied
815 **after** `skiprows` is applied.
816
817 This function will *always* return a list of :class:`DataFrame` *or*
818 it will fail, e.g., it will *not* return an empty list.
819
820 Examples
821 --------
822 See the :ref:`read_html documentation in the IO section of the docs
823 <io.read_html>` for some examples of reading in HTML tables.
824
825 See Also
826 --------
827 pandas.io.parsers.read_csv
828 """
829 if infer_types is not None:
830 warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
831 else:
832 infer_types = True # TODO: remove effect of this in 0.14
833
834 # Type check here. We don't want to parse only to fail because of an
835 # invalid value of an integer skiprows.
836 if isinstance(skiprows, numbers.Integral) and skiprows < 0:
837 raise ValueError('cannot skip rows starting from the end of the '
838 'data (you passed a negative value)')
839 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
840 parse_dates, tupleize_cols, thousands, attrs)
841
[end of pandas/io/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
fb1b4a9150ba03ee3e43393a93ede2ddf7904c2e
|
nth groupby method on DataFrame
The nth groupby method on a Series takes the nth **non-NaN** value in the Series.
This means on a DataFrame it's not going to be well defined...
Should we make this a Series only method?
```
In [1]: g = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],columns=['A']).groupby('A')
In [2]: g.nth(0)
Out[2]:
Empty DataFrame
Columns: []
Index: []
In [3]: g.A.nth(0)
Out[3]:
A
a a
b b
Name: A, dtype: object
```
|
Maybe I just don't understand what nth does; it's not always empty.
@hayd Maybe `nth` on a `DataFrame` should take the column as an argument and then it would then solve #5503
It's also really slow: http://stackoverflow.com/a/20087789/1240268
I'm of the opinion that nth should take the nth row, regardless of NaN, or take a kwarg for NaN-ness. Need to think through the logic of what it is doing atm (it's only small!)
For convenience the current impl is:
```
def nth(self, n):
def picker(arr):
arr = arr[notnull(arr)]
if len(arr) >= n + 1:
return arr.iget(n)
else:
return np.nan
return self.agg(picker)
```
With a frame the `arr = arr[notnull(arr)]` does nothing; I really don't understand how the next bit even works, since iget is not a DataFrame method!
I like the idea of a kwarg to describe this, grr to current behaviour: Series/DataFrame being inconsistent. Not sure how we should play this.
...in any case we should use cumcount, much faster.
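(Editor's note: a minimal sketch of the cumcount-based filtering idea mentioned above; this is illustration only, not the merged implementation, and it reuses the frame from the issue.)
```
import pandas as pd

df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], columns=['A'])
g = df.groupby('A')

n = 0
# keep the rows whose position within their group equals n; NaNs are not dropped
nth_rows = df[g.cumcount() == n]
```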
Another possibility is to plonk/override this into Series Groupby "as is"*, but have the one in Groupby just take the nth row regardless of NaN, at least for now.
Also atm this is not that great, as it NaNs for groups smaller than n.
\* but using cumcount.
Any thoughts on this @jreback @TomAugspurger?
Should we somehow deprecate the old Series behaviour (of getting the nth non-null result; I think this may be something that R does, which is why we do it...)? Not sure on the way forward.
I still don't understand how this even works on a DataFrame... cumcount impl much easier (confusing to be different to Series though it already is).
I think you should break the API and have a na='drop' (but default to na=None), meaning don't remove NaNs.
Should we have something similar for DataFrames? Could be drop_na (I think this is in line with other args). Do you think this arg should only be for Series, or should we hack something which will work for DataFrames too?
I think for frames you could do any/all
equivalent of doing a dropna beforehand, and just default to None?
sure, I think that's not so bad. I will remember to use `_selected_obj` :)
I'm not sure on the API for this versus the old behaviour (i.e. not just filtering); for one thing, there are two kinds of NaNs:
- NaNs because there are no more results in the group.
- NaN values from the actual DataFrame/Series.
For another, these are two very different results: you only get the index from the groupby's `by` if you use the old method.
I still think should change... just not sure on API for it.
```
In [11]: s = pd.Series([1, 2, 3, 1, np.nan])
In [12]: s.groupby([1, 2, 3, 1, 2]).nth(1) # the NaN has meaning here (it was missing in s)
Out[12]:
3 1
4 NaN
dtype: float64
s.groupby([1, 2, 3, 1, 2]).nth(1) # old, has index from by, the NaN means group not that big (with dropped nas), observe that index is from by
1 1
2 NaN
3 NaN
dtype: float64
```
travis is now happy...so go ahead at your leisure
@jreback still unsure about the API, any thoughts. I guess the old behaviour is an R thing?
I guess it's annoying that these are two very different things:
a. filter the frame or series to take just the nth rows. The result is a sub frame/series.
b. what it was doing before (at least in the Series case; for DataFrame it currently doesn't work*), which is to grab the nth non-null value for each group, or NaN if there isn't one, indexed by the groupby keys. The result is not a sub frame/series.
\* amusingly, I think the iget call I was confused about raises, but it is caught, so we get NaN
I think you need to support both cases, but make default a)
`df.groupby(...).nth(5,drop_na=False)`
so `drop_na=False` is a), `drop_na=True` is b)
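(Editor's note: a hedged sketch of the two call shapes discussed above, spelled with the `dropna` kwarg as it appears in the patch further down rather than `drop_na`; the frame is the one from that patch's docstring.)
```
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
g = df.groupby('A')

g.nth(0)                # a) plain positional filter: first row of each group, NaNs kept
g.nth(0, dropna='any')  # b) old-style behaviour: first not-null row of each group
```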
|
2014-03-07T17:50:43Z
|
<patch>
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -738,6 +738,34 @@ This shows the first or last n rows from each group.
1 0 1 2
5 2 5 6
+Taking the nth row of each group
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To select the nth item from a DataFrame or Series, use the nth method:
+
+.. ipython:: python
+
+   df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
+ g = df.groupby('A')
+ g.nth(0)
+
+ g.nth(1)
+
+ g.nth(-1)
+
+If you want to select the nth not-null row, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all', just like you would pass to dropna; for a Series this just needs to be truthy.
+
+.. ipython:: python
+
+ g.nth(0, dropna='any')
+
+ g.nth(1, dropna='any') # NaNs denote group exhausted when using dropna
+
+ g.B.nth(0, dropna=True)
+
+.. warning::
+
+ Before 0.14.0 this method existed but did not work correctly on DataFrames. The API has changed so that it filters by default, but the old behaviour (for Series) can be achieved by passing dropna. An alternative is to dropna before doing the groupby.
Enumerate group items
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -62,7 +62,7 @@ These are out-of-bounds selections
s.index.year
- More consistent behaviour for some groupby methods:
- - groupby head and tail now act more like filter rather than an aggregation:
+ - groupby ``head`` and ``tail`` now act more like ``filter`` rather than an aggregation:
.. ipython:: python
@@ -78,6 +78,16 @@ These are out-of-bounds selections
g[['B']].head(1)
+ - groupby ``nth`` now filters by default, with optional dropna argument to ignore
+ NaN (to replicate the previous behaviour.)
+
+ .. ipython:: python
+
+       df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
+ g = df.groupby('A')
+ g.nth(0) # can also use negative ints
+
+ g.nth(0, dropna='any') # similar to old behaviour
- Local variable usage has changed in
:func:`pandas.eval`/:meth:`DataFrame.eval`/:meth:`DataFrame.query`
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -523,15 +523,75 @@ def ohlc(self):
"""
return self._cython_agg_general('ohlc')
- def nth(self, n):
- def picker(arr):
- arr = arr[notnull(arr)]
- if len(arr) >= n + 1:
- return arr.iget(n)
+ def nth(self, n, dropna=None):
+ """
+ Take the nth row from each group.
+
+        If dropna, will take the nth non-null row; dropna is either
+ Truthy (if a Series) or 'all', 'any' (if a DataFrame); this is equivalent
+ to calling dropna(how=dropna) before the groupby.
+
+ Examples
+ --------
+        >>> df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
+ >>> g = df.groupby('A')
+ >>> g.nth(0)
+ A B
+ 0 1 NaN
+ 2 5 6
+ >>> g.nth(1)
+ A B
+ 1 1 4
+ >>> g.nth(-1)
+ A B
+ 1 1 4
+ 2 5 6
+ >>> g.nth(0, dropna='any')
+ B
+ A
+ 1 4
+ 5 6
+ >>> g.nth(1, dropna='any') # NaNs denote group exhausted when using dropna
+ B
+ A
+ 1 NaN
+ 5 NaN
+
+ """
+
+ if not dropna: # good choice
+ m = self.grouper._max_groupsize
+ if n >= m or n < -m:
+ return self._selected_obj.loc[[]]
+ rng = np.zeros(m, dtype=bool)
+ if n >= 0:
+ rng[n] = True
+ is_nth = self._cumcount_array(rng)
else:
+ rng[- n - 1] = True
+ is_nth = self._cumcount_array(rng, ascending=False)
+ return self._selected_obj[is_nth]
+
+ if (isinstance(self._selected_obj, DataFrame)
+ and dropna not in ['any', 'all']):
+ # Note: when agg-ing picker doesn't raise this, just returns NaN
+ raise ValueError("For a DataFrame groupby, dropna must be "
+ "either None, 'any' or 'all', "
+ "(was passed %s)." % (dropna),)
+
+ # old behaviour, but with all and any support for DataFrames.
+
+ max_len = n if n >= 0 else - 1 - n
+ def picker(x):
+ x = x.dropna(how=dropna) # Note: how is ignored if Series
+ if len(x) <= max_len:
return np.nan
+ else:
+ return x.iloc[n]
+
return self.agg(picker)
+
def cumcount(self, **kwargs):
"""
Number each item in each group from 0 to the length of that group - 1.
@@ -579,8 +639,7 @@ def cumcount(self, **kwargs):
ascending = kwargs.pop('ascending', True)
index = self.obj.index
- rng = np.arange(self.grouper._max_groupsize, dtype='int64')
- cumcounts = self._cumcount_array(rng, ascending=ascending)
+ cumcounts = self._cumcount_array(ascending=ascending)
return Series(cumcounts, index)
def head(self, n=5):
@@ -606,8 +665,7 @@ def head(self, n=5):
"""
obj = self._selected_obj
- rng = np.arange(self.grouper._max_groupsize, dtype='int64')
- in_head = self._cumcount_array(rng) < n
+ in_head = self._cumcount_array() < n
head = obj[in_head]
return head
@@ -639,11 +697,17 @@ def tail(self, n=5):
tail = obj[in_tail]
return tail
- def _cumcount_array(self, arr, **kwargs):
+ def _cumcount_array(self, arr=None, **kwargs):
+ """
+ arr is where cumcount gets it's values from
+ """
ascending = kwargs.pop('ascending', True)
+ if arr is None:
+ arr = np.arange(self.grouper._max_groupsize, dtype='int64')
+
len_index = len(self.obj.index)
- cumcounts = np.zeros(len_index, dtype='int64')
+ cumcounts = np.empty(len_index, dtype=arr.dtype)
if ascending:
for v in self.indices.values():
cumcounts[v] = arr[:len(v)]
diff --git a/vb_suite/groupby.py b/vb_suite/groupby.py
--- a/vb_suite/groupby.py
+++ b/vb_suite/groupby.py
@@ -269,6 +269,22 @@ def f(g):
groupby_frame_apply = Benchmark("df.groupby(['key', 'key2']).apply(f)", setup,
start_date=datetime(2011, 10, 1))
+
+#----------------------------------------------------------------------
+# DataFrame nth
+
+setup = common_setup + """
+df = pd.DataFrame(np.random.randint(1, 100, (10000, 2)))
+"""
+
+# Not really a fair test as behaviour has changed!
+groupby_frame_nth = Benchmark("df.groupby(0).nth(0)", setup,
+ start_date=datetime(2014, 3, 1))
+
+groupby_series_nth = Benchmark("df[1].groupby(df[0]).nth(0)", setup,
+ start_date=datetime(2014, 3, 1))
+
+
#----------------------------------------------------------------------
# Sum booleans #2692
</patch>
|
[]
|
[]
| |||
conan-io__conan-2611
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GPL question
To help us debug your issue please explain:
- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
I'm looking to recommend use of Conan in a large organization, and I'm doing due diligence on licensing.
Some concern has been raised over the way pylint is used. Could a version be cut which doesn't import pylint directly (i.e. invokes it out of process), or which doesn't depend on pylint at all?
</issue>
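(Editor's note: a hedged sketch of what "invoking pylint out of process" could look like; this is not Conan's implementation, and the helper name is hypothetical. It relies only on the standard `subprocess` module and pylint's `python -m pylint` entry point.)
```
import subprocess
import sys


def lint_out_of_process(conanfile_path):
    # Run pylint as a separate process so the GPL-licensed package is never
    # imported into the calling interpreter; only its textual report is read.
    proc = subprocess.Popen(
        [sys.executable, "-m", "pylint", conanfile_path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True,
    )
    out, _err = proc.communicate()
    return proc.returncode, out
```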
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open-source, C/C++ package manager.
5
6 +------------------------+-------------------------+
7 | **master** | **develop** |
8 +========================+=========================+
9 | |Build Status Master| | |Build Status Develop| |
10 +------------------------+-------------------------+
11
12
13 +------------------------+---------------------------+---------------------------------------------+
14 | **Coverage master** | **Coverage develop** | **Coverage graph** |
15 +========================+===========================+=============================================+
16 | |Master coverage| | |Develop coverage| | |Coverage graph| |
17 +------------------------+---------------------------+---------------------------------------------+
18
19
20 Setup
21 ======
22
23 From binaries
24 -------------
25
26 We have installers for `most platforms here <http://conan.io>`__ but you
27 can run **conan** from sources if you want.
28
29 From pip
30 --------
31
32 Conan is compatible with Python 2 and Python 3.
33
34 - Install pip following `pip docs`_.
35 - Install conan:
36
37 .. code-block:: bash
38
39 $ pip install conan
40
41 From Homebrew (OSx)
42 -------------------
43
44 - Install Homebrew following `brew homepage`_.
45
46 .. code-block:: bash
47
48 $ brew update
49 $ brew install conan
50
51 From source
52 -----------
53
54 You can run **conan** client and server in Windows, MacOS, and Linux.
55
56 - **Install pip following** `pip docs`_.
57
58 - **Clone conan repository:**
59
60 .. code-block:: bash
61
62 $ git clone https://github.com/conan-io/conan.git
63
64 - **Install python requirements**
65
66 - For running the client:
67
68 .. code-block:: bash
69
70 $ sudo pip install -r conans/requirements.txt
71
72
73 In OSX you should also install:
74
75 .. code-block:: bash
76
77 $ sudo pip install -r conans/requirements_osx.txt
78
79 - For running the server:
80
81 .. code-block:: bash
82
83 $ sudo apt-get install python-dev
84 $ sudo pip install -r conans/requirements_server.txt
85
86 - Development (for running the tests):
87
88 .. code-block:: bash
89
90 $ sudo pip install -r conans/requirements_dev.txt
91
92 If you are in Windows, using ``sudo`` is not required.
93
94
95 - **Create a launcher**
96
97 Conan entry point is "conans.conan.main" module. Fill the absolute path
98 of the cloned repository folder:
99
100 .. code-block:: bash
101
102 #!/usr/bin/env python
103 import sys
104 conan_sources_dir = '/home/user/conan' # EDIT!!
105
106 sys.path.insert(1, conan_sources_dir)
107 # Or append to sys.path to prioritize a binary installation before the source code one
108 # sys.path.append(conan_sources_dir)
109
110 from conans.conan import main
111 main(sys.argv[1:])
112
113 If you are a Windows user, you can name this file *conan.py* and create
114 a file *conan.bat* that calls the python module:
115
116 .. code-block:: bash
117
118 CALL python C:/Users/user/conan.py %*
119
120 - **Then add that 'conan' file to your PATH and you are ready:**
121
122 .. code-block::
123
124 $ conan --help
125
126 Consumer commands
127 install Installs the requirements specified in a conanfile (.py or .txt).
128 config Manages configuration. Edits the conan.conf or installs config files.
129 get Gets a file or list a directory of a given reference or package.
130 info Gets information about the dependency graph of a recipe.
131 search Searches package recipes and binaries in the local cache or in a remote.
132 Creator commands
133 new Creates a new package recipe template with a 'conanfile.py'.
134 create Builds a binary package for recipe (conanfile.py) located in current dir.
135 upload Uploads a recipe and binary packages to a remote.
136 export Copies the recipe (conanfile.py & associated files) to your local cache.
137 export-pkg Exports a recipe & creates a package with given files calling 'package'.
138 test Test a package, consuming it with a conanfile recipe with a test() method.
139 Package development commands
140 source Calls your local conanfile.py 'source()' method.
141 build Calls your local conanfile.py 'build()' method.
142 package Calls your local conanfile.py 'package()' method.
143 Misc commands
144 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
145 remote Manages the remote list and the package recipes associated to a remote.
146 user Authenticates against a remote with user/pass, caching the auth token.
147 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
148 copy Copies conan recipes and packages to another user/channel.
149 remove Removes packages or binaries matching pattern from local cache or remote.
150 alias Creates and exports an 'alias recipe'.
151 download Downloads recipe and binaries to the local cache, without using settings.
152
153 Conan commands. Type "conan <command> -h" for help
154
155 Running the tests
156 =================
157
158 Make sure that the Python requirements for testing have been installed, as explained above.
159
160 Before you can run the tests, you need to set a few environment
161 variables first.
162
163 .. code-block:: bash
164
165 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
166
167 On Windows it would be (while being in the conan root directory):
168
169 .. code-block:: bash
170
171 $ set PYTHONPATH=.
172
173 Ensure that your ``cmake`` has version 2.8 or later. You can see the
174 version with the following command:
175
176 .. code-block:: bash
177
178 $ cmake --version
179
180 The appropriate values of ``CONAN_COMPILER`` and
181 ``CONAN_COMPILER_VERSION`` depend on your operating system and your
182 requirements.
183
184 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
185
186 .. code-block:: bash
187
188 $ export CONAN_COMPILER=gcc
189 $ export CONAN_COMPILER_VERSION=4.8
190
191 These should work for OS X:
192
193 .. code-block:: bash
194
195 $ export CONAN_COMPILER=clang
196 $ export CONAN_COMPILER_VERSION=3.5
197
198 Finally, there are some tests that use conan to package Go-lang
199 libraries, so you might **need to install go-lang** in your computer and
200 add it to the path.
201
202 You can run the actual tests like this:
203
204 .. code-block:: bash
205
206 $ nosetests .
207
208
209 There are a couple of test attributes defined, as ``slow``, or ``golang`` that you can use
210 to filter the tests, and do not execute them:
211
212 .. code-block:: bash
213
214 $ nosetests . -a !golang
215
216 A few minutes later it should print ``OK``:
217
218 .. code-block:: bash
219
220 ............................................................................................
221 ----------------------------------------------------------------------
222 Ran 146 tests in 50.993s
223
224 OK
225
226 To run specific tests, you can specify the test name too, something like:
227
228 .. code-block:: bash
229
230 $ nosetests conans.test.integration.flat_requirements_test --nocapture
231
232 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
233
234 License
235 -------
236
237 `MIT LICENSE <./LICENSE.md>`__
238
239 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
240 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
241
242 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
243 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
244
245 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
246 :target: https://codecov.io/gh/conan-io/conan/branch/master
247
248 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
249 :target: https://codecov.io/gh/conan-io/conan/branch/develop
250
251 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
252 :height: 50px
253 :width: 50 px
254 :alt: Conan develop coverage
255
256 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
257
258 .. _`brew homepage`: http://brew.sh/
259
[end of README.rst]
[start of conans/client/conf/__init__.py]
1 import os
2
3 from six.moves.configparser import ConfigParser, NoSectionError
4 from six.moves import urllib
5
6 from conans.errors import ConanException
7 from conans.model.env_info import unquote
8 from conans.paths import conan_expand_user, DEFAULT_PROFILE_NAME
9 from conans.util.env_reader import get_env
10 from conans.util.files import load
11
12 MIN_SERVER_COMPATIBLE_VERSION = '0.12.0'
13
14 default_settings_yml = """
15 # Only for cross building, 'os_build/arch_build' is the system that runs Conan
16 os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
17 arch_build: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
18
19 # Only for building cross compilation tools, 'os_target/arch_target' is the system for
20 # which the tools generate code
21 os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
22 arch_target: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
23
24 # Rest of the settings are "host" settings:
25 # - For native building/cross building: Where the library/program will run.
26 # - For building cross compilation tools: Where the cross compiler will run.
27 os:
28 Windows:
29 subsystem: [None, cygwin, msys, msys2, wsl]
30 WindowsStore:
31 version: ["8.1", "10.0"]
32 Linux:
33 Macos:
34 Android:
35 api_level: ANY
36 iOS:
37 version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0"]
38 watchOS:
39 version: ["4.0"]
40 tvOS:
41 version: ["11.0"]
42 FreeBSD:
43 SunOS:
44 Arduino:
45 board: ANY
46 arch: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
47 compiler:
48 sun-cc:
49 version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
50 threads: [None, posix]
51 libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
52 gcc:
53 version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
54 "5", "5.1", "5.2", "5.3", "5.4", "5.5",
55 "6", "6.1", "6.2", "6.3", "6.4",
56 "7", "7.1", "7.2", "7.3"]
57 libcxx: [libstdc++, libstdc++11]
58 threads: [None, posix, win32] # Windows MinGW
59 exception: [None, dwarf2, sjlj, seh] # Windows MinGW
60 Visual Studio:
61 runtime: [MD, MT, MTd, MDd]
62 version: ["8", "9", "10", "11", "12", "14", "15"]
63 toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp, v140, v140_xp, v140_clang_c2, LLVM-vs2014, LLVM-vs2014_xp, v141, v141_xp, v141_clang_c2]
64 clang:
65 version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "5.0", "6.0"]
66 libcxx: [libstdc++, libstdc++11, libc++]
67 apple-clang:
68 version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0"]
69 libcxx: [libstdc++, libc++]
70
71 build_type: [None, Debug, Release]
72 cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17]
73 """
74
75 default_client_conf = """
76 [log]
77 run_to_output = True # environment CONAN_LOG_RUN_TO_OUTPUT
78 run_to_file = False # environment CONAN_LOG_RUN_TO_FILE
79 level = 50 # environment CONAN_LOGGING_LEVEL
80 # trace_file = # environment CONAN_TRACE_FILE
81 print_run_commands = False # environment CONAN_PRINT_RUN_COMMANDS
82
83 [general]
84 default_profile = %s
85 compression_level = 9 # environment CONAN_COMPRESSION_LEVEL
86 sysrequires_sudo = True # environment CONAN_SYSREQUIRES_SUDO
87 # sysrequires_mode = enabled # environment CONAN_SYSREQUIRES_MODE (allowed modes enabled/verify/disabled)
88 # vs_installation_preference = Enterprise, Professional, Community, BuildTools # environment CONAN_VS_INSTALLATION_PREFERENCE
89 # verbose_traceback = False # environment CONAN_VERBOSE_TRACEBACK
90 # bash_path = "" # environment CONAN_BASH_PATH (only windows)
91 # recipe_linter = False # environment CONAN_RECIPE_LINTER
92 # read_only_cache = True # environment CONAN_READ_ONLY_CACHE
93 # pylintrc = path/to/pylintrc_file # environment CONAN_PYLINTRC
94 # cache_no_locks = True
95 # user_home_short = your_path # environment CONAN_USER_HOME_SHORT
96 # skip_vs_projects_upgrade = False # environment CONAN_SKIP_VS_PROJECTS_UPGRADE
97
98 # conan_make_program = make # environment CONAN_MAKE_PROGRAM (overrides the make program used in AutoToolsBuildEnvironment.make)
99
100 # cmake_generator # environment CONAN_CMAKE_GENERATOR
101 # http://www.vtk.org/Wiki/CMake_Cross_Compiling
102 # cmake_toolchain_file # environment CONAN_CMAKE_TOOLCHAIN_FILE
103 # cmake_system_name # environment CONAN_CMAKE_SYSTEM_NAME
104 # cmake_system_version # environment CONAN_CMAKE_SYSTEM_VERSION
105 # cmake_system_processor # environment CONAN_CMAKE_SYSTEM_PROCESSOR
106 # cmake_find_root_path # environment CONAN_CMAKE_FIND_ROOT_PATH
107 # cmake_find_root_path_mode_program # environment CONAN_CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
108 # cmake_find_root_path_mode_library # environment CONAN_CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
109 # cmake_find_root_path_mode_include # environment CONAN_CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
110
111 # cpu_count = 1 # environment CONAN_CPU_COUNT
112
113 # Change the default location for building test packages to a temporary folder
114 # which is deleted after the test.
115 # temp_test_folder = True # environment CONAN_TEMP_TEST_FOLDER
116
117 [storage]
118 # This is the default path, but you can write your own. It must be an absolute path or a
119 # path beginning with "~" (if the environment var CONAN_USER_HOME is specified, this directory, even
120 # with "~/", will be relative to the conan user home, not to the system user home)
121 path = ~/.conan/data
122
123 [proxies]
124 # Empty section will try to use system proxies.
125 # If don't want proxy at all, remove section [proxies]
126 # As documented in http://docs.python-requests.org/en/latest/user/advanced/#proxies
127 # http = http://user:[email protected]:3128/
128 # http = http://10.10.1.10:3128
129 # https = http://10.10.1.10:1080
130
131
132 # Default settings now declared in the default profile
133
134
135 """ % DEFAULT_PROFILE_NAME
136
137
138 class ConanClientConfigParser(ConfigParser, object):
139
140 def __init__(self, filename):
141 ConfigParser.__init__(self)
142 self.read(filename)
143 self.filename = filename
144
145 # So keys are not converted to lowercase, we override the default optionxform
146 optionxform = str
147
148 @property
149 def env_vars(self):
150 ret = {"CONAN_LOG_RUN_TO_OUTPUT": self._env_c("log.run_to_output", "CONAN_LOG_RUN_TO_OUTPUT", "True"),
151 "CONAN_LOG_RUN_TO_FILE": self._env_c("log.run_to_file", "CONAN_LOG_RUN_TO_FILE", "False"),
152 "CONAN_LOGGING_LEVEL": self._env_c("log.level", "CONAN_LOGGING_LEVEL", "50"),
153 "CONAN_TRACE_FILE": self._env_c("log.trace_file", "CONAN_TRACE_FILE", None),
154 "CONAN_PRINT_RUN_COMMANDS": self._env_c("log.print_run_commands", "CONAN_PRINT_RUN_COMMANDS", "False"),
155 "CONAN_COMPRESSION_LEVEL": self._env_c("general.compression_level", "CONAN_COMPRESSION_LEVEL", "9"),
156 "CONAN_PYLINTRC": self._env_c("general.pylintrc", "CONAN_PYLINTRC", None),
157 "CONAN_PYLINT_WERR": self._env_c("general.pylint_werr", "CONAN_PYLINT_WERR", None),
158 "CONAN_SYSREQUIRES_SUDO": self._env_c("general.sysrequires_sudo", "CONAN_SYSREQUIRES_SUDO", "False"),
159 "CONAN_SYSREQUIRES_MODE": self._env_c("general.sysrequires_mode", "CONAN_SYSREQUIRES_MODE", "enabled"),
160 "CONAN_VS_INSTALLATION_PREFERENCE": self._env_c("general.vs_installation_preference", "CONAN_VS_INSTALLATION_PREFERENCE", None),
161 "CONAN_RECIPE_LINTER": self._env_c("general.recipe_linter", "CONAN_RECIPE_LINTER", "True"),
162 "CONAN_CPU_COUNT": self._env_c("general.cpu_count", "CONAN_CPU_COUNT", None),
163 "CONAN_READ_ONLY_CACHE": self._env_c("general.read_only_cache", "CONAN_READ_ONLY_CACHE", None),
164 "CONAN_USER_HOME_SHORT": self._env_c("general.user_home_short", "CONAN_USER_HOME_SHORT", None),
165 "CONAN_VERBOSE_TRACEBACK": self._env_c("general.verbose_traceback", "CONAN_VERBOSE_TRACEBACK", None),
166 # http://www.vtk.org/Wiki/CMake_Cross_Compiling
167 "CONAN_CMAKE_GENERATOR": self._env_c("general.cmake_generator", "CONAN_CMAKE_GENERATOR", None),
168 "CONAN_CMAKE_TOOLCHAIN_FILE": self._env_c("general.cmake_toolchain_file", "CONAN_CMAKE_TOOLCHAIN_FILE", None),
169 "CONAN_CMAKE_SYSTEM_NAME": self._env_c("general.cmake_system_name", "CONAN_CMAKE_SYSTEM_NAME", None),
170 "CONAN_CMAKE_SYSTEM_VERSION": self._env_c("general.cmake_system_version", "CONAN_CMAKE_SYSTEM_VERSION", None),
171 "CONAN_CMAKE_SYSTEM_PROCESSOR": self._env_c("general.cmake_system_processor",
172 "CONAN_CMAKE_SYSTEM_PROCESSOR",
173 None),
174 "CONAN_CMAKE_FIND_ROOT_PATH": self._env_c("general.cmake_find_root_path",
175 "CONAN_CMAKE_FIND_ROOT_PATH",
176 None),
177 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_PROGRAM": self._env_c("general.cmake_find_root_path_mode_program",
178 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_PROGRAM",
179 None),
180 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_LIBRARY": self._env_c("general.cmake_find_root_path_mode_library",
181 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_LIBRARY",
182 None),
183 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_INCLUDE": self._env_c("general.cmake_find_root_path_mode_include",
184 "CONAN_CMAKE_FIND_ROOT_PATH_MODE_INCLUDE",
185 None),
186
187 "CONAN_BASH_PATH": self._env_c("general.bash_path", "CONAN_BASH_PATH", None),
188 "CONAN_MAKE_PROGRAM": self._env_c("general.conan_make_program", "CONAN_MAKE_PROGRAM", None),
189 "CONAN_TEMP_TEST_FOLDER": self._env_c("general.temp_test_folder", "CONAN_TEMP_TEST_FOLDER", "False"),
190 "CONAN_SKIP_VS_PROJECTS_UPGRADE": self._env_c("general.skip_vs_projects_upgrade", "CONAN_SKIP_VS_PROJECTS_UPGRADE", "False")
191 }
192
193 # Filter None values
194 return {name: value for name, value in ret.items() if value is not None}
195
196 def _env_c(self, var_name, env_var_name, default_value):
197 env = os.environ.get(env_var_name, None)
198 if env is not None:
199 return env
200 try:
201 return unquote(self.get_item(var_name))
202 except ConanException:
203 return default_value
204
205 def get_item(self, item):
206 if not item:
207 return load(self.filename)
208
209 tokens = item.split(".", 1)
210 section_name = tokens[0]
211 try:
212 section = self.items(section_name)
213 except NoSectionError:
214 raise ConanException("'%s' is not a section of conan.conf" % section_name)
215 if len(tokens) == 1:
216 result = []
217 for item in section:
218 result.append(" = ".join(item))
219 return "\n".join(result)
220 else:
221 key = tokens[1]
222 try:
223 value = dict(section)[key]
224 if " #" in value: # Comments
225 value = value[:value.find(" #")].strip()
226 except KeyError:
227 raise ConanException("'%s' doesn't exist in [%s]" % (key, section_name))
228 return value
229
230 def set_item(self, key, value):
231 tokens = key.split(".", 1)
232 section_name = tokens[0]
233 if not self.has_section(section_name):
234 self.add_section(section_name)
235
236 if len(tokens) == 1: # defining full section
237 raise ConanException("You can't set a full section, please specify a key=value")
238
239 key = tokens[1]
240 super(ConanClientConfigParser, self).set(section_name, key, value)
241
242 with open(self.filename, "w") as f:
243 self.write(f)
244
245 def rm_item(self, item):
246 tokens = item.split(".", 1)
247 section_name = tokens[0]
248 if not self.has_section(section_name):
249 raise ConanException("'%s' is not a section of conan.conf" % section_name)
250
251 if len(tokens) == 1:
252 self.remove_section(tokens[0])
253 else:
254 key = tokens[1]
255 if not self.has_option(section_name, key):
256 raise ConanException("'%s' doesn't exist in [%s]" % (key, section_name))
257 self.remove_option(section_name, key)
258
259 with open(self.filename, "w") as f:
260 self.write(f)
261
262 def get_conf(self, varname):
263 """Gets the section from config file or raises an exception"""
264 try:
265 return self.items(varname)
266 except NoSectionError:
267 raise ConanException("Invalid configuration, missing %s" % varname)
268
269 @property
270 def default_profile(self):
271 try:
272 return self.get_item("general.default_profile")
273 except ConanException:
274 return DEFAULT_PROFILE_NAME
275
276 @property
277 def cache_no_locks(self):
278 try:
279 return self.get_item("general.cache_no_locks")
280 except ConanException:
281 return False
282
283 @property
284 def storage(self):
285 return dict(self.get_conf("storage"))
286
287 @property
288 def storage_path(self):
289 # Try with CONAN_STORAGE_PATH
290 result = get_env('CONAN_STORAGE_PATH', None)
291
292 # Try with conan.conf "path"
293 if not result:
294 try:
295 env_conan_user_home = os.getenv("CONAN_USER_HOME")
296 # if env var is declared, any specified path will be relative to CONAN_USER_HOME
297 # even with the ~/
298 if env_conan_user_home:
299 storage = self.storage["path"]
300 if storage[:2] == "~/":
301 storage = storage[2:]
302 result = os.path.join(env_conan_user_home, storage)
303 else:
304 result = self.storage["path"]
305 except KeyError:
306 pass
307
308 # expand the result and check if absolute
309 if result:
310 result = conan_expand_user(result)
311 if not os.path.isabs(result):
312 raise ConanException("Conan storage path has to be an absolute path")
313 return result
314
315 @property
316 def proxies(self):
317 """ optional field, might not exist
318 """
319 try:
320 proxies = self.get_conf("proxies")
321 # If there is proxies section, but empty, it will try to use system proxy
322 if not proxies:
323                 # We have no evidence that the following line is necessary.
324                 # If the proxies have been
325                 # configured at system level, conan will use them, and it shouldn't be necessary
326                 # to return here the proxies read from the system.
327                 # Furthermore, the urls excluded from system-level proxying do not work in
328                 # this case; the only way is to remove the [proxies] section from the
329                 # conan.conf, then this method will return None and the proxies
330                 # dict passed to requests will be empty.
331                 # We don't remove this line because we are afraid to break something, but maybe
332                 # it has worked until now because no one is using system-wide proxies, or those proxy
333                 # rules don't contain excluded urls. #1777
334 return urllib.request.getproxies()
335 result = {k: (None if v == "None" else v) for k, v in proxies}
336 return result
337 except:
338 return None
339
[end of conans/client/conf/__init__.py]
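# Editor's note: illustrative usage sketch, not part of the repository listing above.
# Hypothetical use of the ConanClientConfigParser defined in conans/client/conf/__init__.py;
# the conan.conf path below is made up for illustration.
from conans.client.conf import ConanClientConfigParser

config = ConanClientConfigParser("/home/user/.conan/conan.conf")
print(config.default_profile)                        # "default" unless overridden in the file
print(config.get_item("general.compression_level"))  # value with any inline " #" comment stripped
print(config.env_vars["CONAN_LOGGING_LEVEL"])        # "50" unless set in conan.conf or the environment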
[start of conans/client/generators/ycm.py]
1 from conans.model import Generator
2 import json
3
4
5 class YouCompleteMeGenerator(Generator):
6 template = '''
7 # This file is NOT licensed under the GPLv3, which is the license for the rest
8 # of YouCompleteMe.
9 #
10 # Here's the license text for this file:
11 #
12 # This is free and unencumbered software released into the public domain.
13 #
14 # Anyone is free to copy, modify, publish, use, compile, sell, or
15 # distribute this software, either in source code form or as a compiled
16 # binary, for any purpose, commercial or non-commercial, and by any
17 # means.
18 #
19 # In jurisdictions that recognize copyright laws, the author or authors
20 # of this software dedicate any and all copyright interest in the
21 # software to the public domain. We make this dedication for the benefit
22 # of the public at large and to the detriment of our heirs and
23 # successors. We intend this dedication to be an overt act of
24 # relinquishment in perpetuity of all present and future rights to this
25 # software under copyright law.
26 #
27 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
28 # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
29 # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
30 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
31 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
32 # ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
33 # OTHER DEALINGS IN THE SOFTWARE.
34 #
35 # For more information, please refer to <http://unlicense.org/>
36
37 import os
38 import json
39 import ycm_core
40 import logging
41
42
43 _logger = logging.getLogger(__name__)
44
45
46 def DirectoryOfThisScript():
47 return os.path.dirname( os.path.abspath( __file__ ) )
48
49
50 # These are the compilation flags that will be used in case there's no
51 # compilation database set (by default, one is not set).
52 # CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
53 flags = [
54 '-x', 'c++'
55 ]
56
57 conan_flags = json.loads(open("conan_ycm_flags.json", "r").read())
58
59 flags.extend(conan_flags["flags"])
60 flags.extend(conan_flags["defines"])
61 flags.extend(conan_flags["includes"])
62
63
64 # Set this to the absolute path to the folder (NOT the file!) containing the
65 # compile_commands.json file to use that instead of 'flags'. See here for
66 # more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
67 #
68 # You can get CMake to generate this file for you by adding:
69 # set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )
70 # to your CMakeLists.txt file.
71 #
72 # Most projects will NOT need to set this to anything; you can just change the
73 # 'flags' list of compilation flags. Notice that YCM itself uses that approach.
74 compilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug')
75
76 if os.path.exists( compilation_database_folder ):
77 database = ycm_core.CompilationDatabase( compilation_database_folder )
78 if not database.DatabaseSuccessfullyLoaded():
79 _logger.warn("Failed to load database")
80 database = None
81 else:
82 database = None
83
84 SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]
85
86 def GetAbsolutePath(include_path, working_directory):
87 if os.path.isabs(include_path):
88 return include_path
89 return os.path.join(working_directory, include_path)
90
91
92 def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
93 if not working_directory:
94 return list( flags )
95 new_flags = []
96 make_next_absolute = False
97 path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
98 for flag in flags:
99 new_flag = flag
100
101 if make_next_absolute:
102 make_next_absolute = False
103 new_flag = GetAbsolutePath(flag, working_directory)
104
105 for path_flag in path_flags:
106 if flag == path_flag:
107 make_next_absolute = True
108 break
109
110 if flag.startswith( path_flag ):
111 path = flag[ len( path_flag ): ]
112 new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory)
113 break
114
115 if new_flag:
116 new_flags.append( new_flag )
117 return new_flags
118
119
120 def IsHeaderFile( filename ):
121 extension = os.path.splitext( filename )[ 1 ]
122 return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ]
123
124
125 def GetCompilationInfoForFile( filename ):
126 # The compilation_commands.json file generated by CMake does not have entries
127 # for header files. So we do our best by asking the db for flags for a
128 # corresponding source file, if any. If one exists, the flags for that file
129 # should be good enough.
130 if IsHeaderFile( filename ):
131 basename = os.path.splitext( filename )[ 0 ]
132 for extension in SOURCE_EXTENSIONS:
133 replacement_file = basename + extension
134 if os.path.exists( replacement_file ):
135 compilation_info = database.GetCompilationInfoForFile( replacement_file )
136 if compilation_info.compiler_flags_:
137 return compilation_info
138 return None
139 return database.GetCompilationInfoForFile( filename )
140
141
142 def FlagsForFile( filename, **kwargs ):
143 relative_to = None
144 compiler_flags = None
145
146 if database:
147 # Bear in mind that compilation_info.compiler_flags_ does NOT return a
148 # python list, but a "list-like" StringVec object
149 compilation_info = GetCompilationInfoForFile( filename )
150 if compilation_info is None:
151 relative_to = DirectoryOfThisScript()
152 compiler_flags = flags
153 else:
154 relative_to = compilation_info.compiler_working_dir_
155 compiler_flags = compilation_info.compiler_flags_
156
157 else:
158 relative_to = DirectoryOfThisScript()
159 compiler_flags = flags
160
161 final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to )
162 for flag in final_flags:
163 if flag.startswith("-W"):
164 final_flags.remove(flag)
165 _logger.info("Final flags for %s are %s" % (filename, ' '.join(final_flags)))
166
167 return {{
168 'flags': final_flags + ["-I/usr/include", "-I/usr/include/c++/{cxx_version}"],
169 'do_cache': True
170 }}
171 '''
172
173 @property
174 def filename(self):
175 pass
176
177 @property
178 def content(self):
179 def prefixed(prefix, values):
180 return [prefix + x for x in values]
181
182 conan_flags = {
183 "includes" : prefixed("-isystem", self.deps_build_info.include_paths),
184 "defines" : prefixed("-D", self.deps_build_info.defines),
185 "flags" : self.deps_build_info.cppflags
186 }
187
188 cxx_version = ''
189 try:
190 cxx_version = str(self.settings.compiler.version).split('.')[0]
191 except:
192 pass
193
194 ycm_data = self.template.format(cxx_version=cxx_version)
195 return {"conan_ycm_extra_conf.py" : ycm_data,
196 "conan_ycm_flags.json" : json.dumps(conan_flags, indent=2)}
197
[end of conans/client/generators/ycm.py]
[start of conans/model/conan_file.py]
1 import os
2 from contextlib import contextmanager
3
4 from conans import tools # @UnusedImport KEEP THIS! Needed for pyinstaller to copy to exe.
5 from conans.client.tools.env import pythonpath
6 from conans.errors import ConanException
7 from conans.model.build_info import DepsCppInfo
8 from conans.model.env_info import DepsEnvInfo, EnvValues
9 from conans.model.options import Options, PackageOptions, OptionsValues
10 from conans.model.requires import Requirements
11 from conans.model.user_info import DepsUserInfo
12 from conans.paths import RUN_LOG_NAME
13 from conans.tools import environment_append, no_op
14 from conans.client.output import Color
15
16
17 def create_options(conanfile):
18 try:
19 package_options = PackageOptions(getattr(conanfile, "options", None))
20 options = Options(package_options)
21
22 default_options = getattr(conanfile, "default_options", None)
23 if default_options:
24 if isinstance(default_options, (list, tuple)):
25 default_values = OptionsValues(default_options)
26 elif isinstance(default_options, str):
27 default_values = OptionsValues.loads(default_options)
28 else:
29 raise ConanException("Please define your default_options as list or "
30 "multiline string")
31 options.values = default_values
32 return options
33 except Exception as e:
34 raise ConanException("Error while initializing options. %s" % str(e))
35
36
37 def create_requirements(conanfile):
38 try:
39 # Actual requirements of this package
40 if not hasattr(conanfile, "requires"):
41 return Requirements()
42 else:
43 if not conanfile.requires:
44 return Requirements()
45 if isinstance(conanfile.requires, tuple):
46 return Requirements(*conanfile.requires)
47 else:
48 return Requirements(conanfile.requires, )
49 except Exception as e:
50 raise ConanException("Error while initializing requirements. %s" % str(e))
51
52
53 def create_settings(conanfile, settings, local):
54 try:
55 defined_settings = getattr(conanfile, "settings", None)
56 if isinstance(defined_settings, str):
57 defined_settings = [defined_settings]
58 current = defined_settings or {}
59 settings.constraint(current, raise_undefined_field=not local)
60 return settings
61 except Exception as e:
62 raise ConanException("Error while initializing settings. %s" % str(e))
63
64
65 def create_exports(conanfile):
66 if not hasattr(conanfile, "exports"):
67 return None
68 else:
69 if isinstance(conanfile.exports, str):
70 return (conanfile.exports, )
71 return conanfile.exports
72
73
74 def create_exports_sources(conanfile):
75 if not hasattr(conanfile, "exports_sources"):
76 return None
77 else:
78 if isinstance(conanfile.exports_sources, str):
79 return (conanfile.exports_sources, )
80 return conanfile.exports_sources
81
82
83 @contextmanager
84 def _env_and_python(conanfile):
85 with environment_append(conanfile.env):
86 with pythonpath(conanfile):
87 yield
88
89
90 def get_env_context_manager(conanfile, without_python=False):
91 if not conanfile.apply_env:
92 return no_op()
93 if without_python:
94 return environment_append(conanfile.env)
95 return _env_and_python(conanfile)
96
97
98 class ConanFile(object):
99 """ The base class for all package recipes
100 """
101
102 name = None
103 version = None # Any str, can be "1.1" or whatever
104 url = None # The URL where this File is located, as github, to collaborate in package
105 # The license of the PACKAGE, just a shortcut, does not replace or
106 # change the actual license of the source code
107 license = None
108 author = None # Main maintainer/responsible for the package, any format
109 build_policy = None
110 short_paths = False
111 apply_env = True # Apply environment variables from requires deps_env_info and profiles
112
113 def __init__(self, output, runner, settings, user=None, channel=None, local=None):
114 # User defined generators
115 self.generators = self.generators if hasattr(self, "generators") else ["txt"]
116 if isinstance(self.generators, str):
117 self.generators = [self.generators]
118
119 # User defined options
120 self.options = create_options(self)
121 self.requires = create_requirements(self)
122 self.settings = create_settings(self, settings, local)
123 try:
124 if self.settings.os_build and self.settings.os:
125 output.writeln("*"*60, front=Color.BRIGHT_RED)
126 output.writeln(" This package defines both 'os' and 'os_build' ",
127 front=Color.BRIGHT_RED)
128 output.writeln(" Please use 'os' for libraries and 'os_build'",
129 front=Color.BRIGHT_RED)
130 output.writeln(" only for build-requires used for cross-building",
131 front=Color.BRIGHT_RED)
132 output.writeln("*"*60, front=Color.BRIGHT_RED)
133 except ConanException:
134 pass
135 self.exports = create_exports(self)
136 self.exports_sources = create_exports_sources(self)
137 # needed variables to pack the project
138 self.cpp_info = None # Will be initialized at processing time
139 self.deps_cpp_info = DepsCppInfo()
140
141 # environment variables declared in the package_info
142 self.env_info = None # Will be initialized at processing time
143 self.deps_env_info = DepsEnvInfo()
144
145 # user declared variables
146 self.user_info = None
147 # Keys are the package names, and the values a dict with the vars
148 self.deps_user_info = DepsUserInfo()
149
150 self.copy = None # initialized at runtime
151 self.copy_deps = None # initialized at runtime
152
153 # an output stream (writeln, info, warn error)
154 self.output = output
155 # something that can run commands, as os.sytem
156 self._runner = runner
157
158 self.develop = False
159
160 # user specified env variables
161 self._env_values = EnvValues() # Updated at runtime, user specified -e
162 self._user = user
163 self._channel = channel
164
165 # Are we in local cache? Suggest a better name
166 self.in_local_cache = False
167
168 # Init a description
169 self.description = None
170
171 @property
172 def env(self):
173 """Apply the self.deps_env_info into a copy of self._env_values (will prioritize the
174 self._env_values, user specified from profiles or -e first, then inherited)"""
175 # Cannot be lazy cached, because it's called in configure node, and we still don't have
176 # the deps_env_info objects available
177 tmp_env_values = self._env_values.copy()
178 tmp_env_values.update(self.deps_env_info)
179
180 ret, multiple = tmp_env_values.env_dicts(self.name)
181 ret.update(multiple)
182 return ret
183
184 @property
185 def channel(self):
186 if not self._channel:
187 self._channel = os.getenv("CONAN_CHANNEL")
188 if not self._channel:
189 raise ConanException("CONAN_CHANNEL environment variable not defined, "
190 "but self.channel is used in conanfile")
191 return self._channel
192
193 @property
194 def user(self):
195 if not self._user:
196 self._user = os.getenv("CONAN_USERNAME")
197 if not self._user:
198 raise ConanException("CONAN_USERNAME environment variable not defined, "
199 "but self.user is used in conanfile")
200 return self._user
201
202 def collect_libs(self, folder="lib"):
203 self.output.warn("Use 'self.collect_libs' is deprecated, "
204 "use tools.collect_libs(self) instead")
205 return tools.collect_libs(self, folder=folder)
206
207 @property
208 def build_policy_missing(self):
209 return self.build_policy == "missing"
210
211 @property
212 def build_policy_always(self):
213 return self.build_policy == "always"
214
215 def source(self):
216 pass
217
218 def system_requirements(self):
219 """ this method can be overwritten to implement logic for system package
220 managers, as apt-get
221
222 You can define self.global_system_requirements = True, if you want the installation
223 to be for all packages (not depending on settings/options/requirements)
224 """
225
226 def config_options(self):
227 """ modify options, probably conditioned to some settings. This call is executed
228 before config_settings. E.g.
229 if self.settings.os == "Windows":
230 del self.options.shared # shared/static not supported in win
231 """
232
233 def configure(self):
234 """ modify settings, probably conditioned to some options. This call is executed
235 after config_options. E.g.
236 if self.options.header_only:
237 self.settings.clear()
238 This is also the place for conditional requirements
239 """
240
241 def build(self):
242 self.output.warn("This conanfile has no build step")
243
244 def package(self):
245 self.output.warn("This conanfile has no package step")
246
247 def package_info(self):
248 """ define cpp_build_info, flags, etc
249 """
250
251 def run(self, command, output=True, cwd=None, win_bash=False, subsystem=None, msys_mingw=True):
252 if not win_bash:
253 retcode = self._runner(command, output, os.path.abspath(RUN_LOG_NAME), cwd)
254 else:
255 retcode = tools.run_in_windows_bash(self, bashcmd=command, cwd=cwd, subsystem=subsystem,
256 msys_mingw=msys_mingw)
257
258 if retcode != 0:
259 raise ConanException("Error %d while executing %s" % (retcode, command))
260
261 return retcode
262
263 def package_id(self):
264 """ modify the conans info, typically to narrow values
265 eg.: conaninfo.package_references = []
266 """
267
268 def test(self):
269 raise ConanException("You need to create a method 'test' in your test/conanfile.py")
270
271 def __repr__(self):
272 if self.name and self.version and self._channel and self._user:
273 return "%s/%s@%s/%s" % (self.name, self.version, self.user, self.channel)
274 elif self.name and self.version:
275 return "%s/%s@PROJECT" % (self.name, self.version)
276 else:
277 return "PROJECT"
278
[end of conans/model/conan_file.py]
[start of setup.py]
1 """A setuptools based setup module.
2 See:
3 https://packaging.python.org/en/latest/distributing.html
4 https://github.com/pypa/sampleproject
5 """
6
7 # Always prefer setuptools over distutils
8 from setuptools import setup, find_packages
9 # To use a consistent encoding
10 from codecs import open
11 from os import path
12 import os
13 import re
14 import platform
15
16
17 here = path.abspath(path.dirname(__file__))
18
19
20 def get_requires(filename):
21 requirements = []
22 with open(filename, "rt") as req_file:
23 for line in req_file.read().splitlines():
24 if not line.strip().startswith("#"):
25 requirements.append(line)
26 return requirements
27
28
29 project_requirements = get_requires("conans/requirements.txt")
30 if platform.system() == "Darwin":
31 project_requirements.extend(get_requires("conans/requirements_osx.txt"))
32 project_requirements.extend(get_requires("conans/requirements_server.txt"))
33 dev_requirements = get_requires("conans/requirements_dev.txt")
34
35
36 def load_version():
37 '''Loads a file content'''
38 filename = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)),
39 "conans", "__init__.py"))
40 with open(filename, "rt") as version_file:
41 conan_init = version_file.read()
42 version = re.search("__version__ = '([0-9a-z.-]+)'", conan_init).group(1)
43 return version
44
45
46 # def generate_long_description_file():
47 # import pypandoc
48 #
49 # output = pypandoc.convert('README.md', 'rst')
50 # return output
51
52 setup(
53 name='conan',
54 # Versions should comply with PEP440. For a discussion on single-sourcing
55 # the version across setup.py and the project code, see
56 # https://packaging.python.org/en/latest/single_source_version.html
57 version=load_version(), # + ".rc1",
58
59 description='Conan C/C++ package manager',
60 # long_description="An open source, decentralized package manager, to automate building and sharing of packages",
61 # long_description=generate_long_description_file(),
62
63 # The project's main homepage.
64 url='https://conan.io',
65
66 # Author details
67 author='JFrog LTD',
68 author_email='[email protected]',
69
70 # Choose your license
71 license='MIT',
72
73 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
74 classifiers=[
75 'Development Status :: 5 - Production/Stable',
76 'Intended Audience :: Developers',
77 'Topic :: Software Development :: Build Tools',
78 'License :: OSI Approved :: MIT License',
79 'Programming Language :: Python :: 2',
80 'Programming Language :: Python :: 2.7',
81 'Programming Language :: Python :: 3',
82 'Programming Language :: Python :: 3.6'
83 ],
84
85 # What does your project relate to?
86 keywords=['C/C++', 'package', 'libraries', 'developer', 'manager',
87 'dependency', 'tool', 'c', 'c++', 'cpp'],
88
89 # You can just specify the packages manually here if your project is
90 # simple. Or you can use find_packages().
91 packages=find_packages(),
92
93 # Alternatively, if you want to distribute just a my_module.py, uncomment
94 # this:
95 # py_modules=["my_module"],
96
97 # List run-time dependencies here. These will be installed by pip when
98 # your project is installed. For an analysis of "install_requires" vs pip's
99 # requirements files see:
100 # https://packaging.python.org/en/latest/requirements.html
101 install_requires=project_requirements,
102
103 # List additional groups of dependencies here (e.g. development
104 # dependencies). You can install these using the following syntax,
105 # for example:
106 # $ pip install -e .[dev,test]
107 extras_require={
108 'dev': dev_requirements,
109 'test': dev_requirements,
110 },
111
112 # If there are data files included in your packages that need to be
113 # installed, specify them here. If using Python 2.6 or less, then these
114 # have to be included in MANIFEST.in as well.
115 package_data={
116 'conans': ['*.txt'],
117 },
118
119 # Although 'package_data' is the preferred approach, in some case you may
120 # need to place data files outside of your packages. See:
121 # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
122 # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
123 # data_files=[('my_data', ['data/data_file'])],
124
125 # To provide executable scripts, use entry points in preference to the
126 # "scripts" keyword. Entry points provide cross-platform support and allow
127 # pip to create the appropriate form of executable for the target platform.
128 entry_points={
129 'console_scripts': [
130 'conan=conans.conan:run',
131 'conan_server=conans.conan_server:run',
132 'conan_build_info=conans.build_info.command:run'
133 ],
134 },
135 )
136
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conan-io/conan
|
9fc0cf0f0389f637b7fa60ce78c2b677ea619291
|
GPL question
To help us debug your issue please explain:
- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
I'm looking to recommend the use of Conan in a large organization, and I'm doing due diligence on licensing.
Some concern has been raised over the way pylint is used. Could a version be cut which doesn't import pylint directly (i.e. invokes it out of process), or which doesn't depend on pylint at all?
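
For illustration, a minimal sketch of the out-of-process approach, using only pylint's standard `--output-format=json` and `--reports=no` command-line flags (the surrounding Conan integration is omitted):
```python
import json
import subprocess

def run_pylint_out_of_process(conanfile_path):
    # Invoke pylint as an external tool instead of importing it into the process
    proc = subprocess.Popen(
        ["pylint", "--output-format=json", "--reports=no", conanfile_path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, _ = proc.communicate()
    return json.loads(stdout) if stdout else []
```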
|
Hi!
This is a very interesting question. What would be the problem with installing pylint as a dependency of conan? Please note:
- Nothing in the conan code or its dependencies "contaminates" your source code or your binary packages.
- It is just a tool, used locally to manage things. The use case would be exactly the same as using git, which is also GPL, wouldn't it? Are you using git?
We can't remove the linter and release different versions with and without pylint; that would be too much overhead. But it is something relatively easy to do and maintain yourself, and if this is critical for you, I could help. First, though, please let me know about the above; I'm willing to learn about your use case.
I found a recent relevant discussion here, which lays out some of the uncertainties.
https://github.com/PyCQA/meta/issues/24
|
2018-03-14T23:16:28Z
|
<patch>
diff --git a/.ci/jenkins/runner.py b/.ci/jenkins/runner.py
--- a/.ci/jenkins/runner.py
+++ b/.ci/jenkins/runner.py
@@ -65,6 +65,7 @@ def run_tests(module_path, pyver, source_folder, tmp_folder,
env = get_environ(tmp_folder)
env["PYTHONPATH"] = source_folder
+ env["CONAN_RECIPE_LINTER"] = "False"
env["CONAN_LOGGING_LEVEL"] = "50" if platform.system() == "Darwin" else "50"
env["CHANGE_AUTHOR_DISPLAY_NAME"] = ""
with chdir(source_folder):
diff --git a/conans/client/build/compiler_flags.py b/conans/client/build/compiler_flags.py
--- a/conans/client/build/compiler_flags.py
+++ b/conans/client/build/compiler_flags.py
@@ -64,9 +64,9 @@ def libcxx_flag(compiler, libcxx):
return '-stdlib=libc++'
elif str(compiler) == 'sun-cc':
return ({"libCstd": "-library=Cstd",
- "libstdcxx": "-library=stdcxx4",
- "libstlport": "-library=stlport4",
- "libstdc++": "-library=stdcpp"}.get(libcxx, ""))
+ "libstdcxx": "-library=stdcxx4",
+ "libstlport": "-library=stlport4",
+ "libstdc++": "-library=stdcpp"}.get(libcxx, ""))
return ""
diff --git a/conans/client/cmd/export_linter.py b/conans/client/cmd/export_linter.py
--- a/conans/client/cmd/export_linter.py
+++ b/conans/client/cmd/export_linter.py
@@ -1,14 +1,12 @@
import os
import json
import sys
-import six
-from six import StringIO
-
-from pylint.reporters.json import JSONReporter
-from pylint.lint import Run
+import platform
from conans.client.output import Color
from conans.errors import ConanException
+from subprocess import PIPE, Popen
+from conans import __path__ as root_path
def conan_linter(conanfile_path, out):
@@ -18,76 +16,48 @@ def conan_linter(conanfile_path, out):
apply_lint = os.environ.get("CONAN_RECIPE_LINTER", True)
if not apply_lint or apply_lint == "False":
return
+
+ dir_path = os.path.dirname(root_path[0])
+ dirname = os.path.dirname(conanfile_path)
+ hook = '--init-hook="import sys;sys.path.extend([\'%s\', \'%s\'])"' % (dirname, dir_path)
+
try:
- dirname = os.path.dirname(conanfile_path)
- sys.path.append(dirname)
- py3_msgs = _lint_py3(conanfile_path)
+ py3_msgs = None
+ msgs, py3_msgs = _normal_linter(conanfile_path, hook)
+ except Exception as e:
+ out.warn("Failed pylint: %s" % e)
+ else:
if py3_msgs:
out.writeln("Python 3 incompatibilities\n ERROR: %s"
% "\n ERROR: ".join(py3_msgs),
front=Color.BRIGHT_MAGENTA)
- msgs = _normal_linter(conanfile_path)
if msgs:
out.writeln("Linter warnings\n WARN: %s" % "\n WARN: ".join(msgs),
front=Color.MAGENTA)
pylint_werr = os.environ.get("CONAN_PYLINT_WERR", None)
if pylint_werr and (py3_msgs or msgs):
raise ConanException("Package recipe has linter errors. Please fix them.")
- finally:
- sys.path.pop()
-
-
-class _WritableObject(object):
- def __init__(self):
- self.content = []
-
- def write(self, st):
- self.content.append(st)
def _runner(args):
- try:
- output = _WritableObject()
- stdout_ = sys.stderr
- stream = StringIO()
- sys.stderr = stream
- Run(args, reporter=JSONReporter(output), exit=False)
- finally:
- sys.stderr = stdout_
- try:
- output = "".join(output.content)
- return json.loads(output)
- except ValueError:
- return []
+ command = ["pylint", "--output-format=json"] + args
+ command = " ".join(command)
+ shell = True if platform.system() != "Windows" else False
+ proc = Popen(command, shell=shell, bufsize=10, stdout=PIPE, stderr=PIPE)
+ stdout, _ = proc.communicate()
+ return json.loads(stdout) if stdout else {}
-def _lint_py3(conanfile_path):
- if six.PY3:
- return
-
- args = ['--py3k', "--reports=no", "--disable=no-absolute-import", "--persistent=no",
- conanfile_path]
-
- output_json = _runner(args)
-
- result = []
- for msg in output_json:
- if msg.get("type") in ("warning", "error"):
- result.append("Py3 incompatibility. Line %s: %s"
- % (msg.get("line"), msg.get("message")))
- return result
-
-
-def _normal_linter(conanfile_path):
- args = ["--reports=no", "--disable=no-absolute-import", "--persistent=no", conanfile_path]
+def _normal_linter(conanfile_path, hook):
+ args = ['--py3k', "--enable=all", "--reports=no", "--disable=no-absolute-import", "--persistent=no",
+ hook, '"%s"' % conanfile_path]
pylintrc = os.environ.get("CONAN_PYLINTRC", None)
if pylintrc:
if not os.path.exists(pylintrc):
raise ConanException("File %s defined by PYLINTRC doesn't exist" % pylintrc)
- args.append('--rcfile=%s' % pylintrc)
+ args.append('--rcfile="%s"' % pylintrc)
output_json = _runner(args)
-
dynamic_fields = ("source_folder", "build_folder", "package_folder", "info_build",
"build_requires", "info")
@@ -107,9 +77,14 @@ def _accept_message(msg):
return True
result = []
+ py3msgs = []
for msg in output_json:
if msg.get("type") in ("warning", "error"):
- if _accept_message(msg):
+ message_id = msg.get("symbol")
+ if message_id in ("print-statement", "dict-iter-method"):
+ py3msgs.append("Py3 incompatibility. Line %s: %s"
+ % (msg.get("line"), msg.get("message")))
+ elif _accept_message(msg):
result.append("Linter. Line %s: %s" % (msg.get("line"), msg.get("message")))
- return result
+ return result, py3msgs
diff --git a/conans/client/tools/oss.py b/conans/client/tools/oss.py
--- a/conans/client/tools/oss.py
+++ b/conans/client/tools/oss.py
@@ -117,7 +117,7 @@ def with_pacman(self):
def with_zypper(self):
return self.is_linux and self.linux_distro in \
("opensuse", "sles")
-
+
@staticmethod
def get_win_os_version():
"""
diff --git a/pyinstaller.py b/pyinstaller.py
--- a/pyinstaller.py
+++ b/pyinstaller.py
@@ -87,7 +87,7 @@ def pyinstall(source_folder):
conan_path = os.path.join(source_folder, 'conans', 'conan.py')
conan_server_path = os.path.join(source_folder, 'conans', 'conan_server.py')
conan_build_info_path = os.path.join(source_folder, "conans/build_info/command.py")
- hidden = "--hidden-import=glob --hidden-import=pylint.reporters.text"
+ hidden = "--hidden-import=glob"
if platform.system() != "Windows":
hidden += " --hidden-import=setuptools.msvc"
win_ver = ""
</patch>
|
[]
|
[]
| |||
google__jax-151
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Get value and gradient simultaneously
In Autograd I was used to using `value_and_grad` to get both the value and the gradient of a function simultaneously. Is there any similar functionality in JAX? That would be incredibly useful for, e.g., plotting the loss while training a neural network.
This is an example in Autograd:
```
value_and_grad_tanh = value_and_grad(np.tanh)
v, g = value_and_grad_tanh(0.0)  # v=0.0, g=1.0
```
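
One minimal sketch of how such a helper could be composed from JAX's existing `jax.vjp`, assuming a scalar-valued function of a single argument (names are illustrative):
```python
import jax.numpy as np
from jax import vjp

def value_and_grad(fun):
    def wrapped(x):
        value, vjp_fun = vjp(fun, x)
        g, = vjp_fun(np.ones_like(value))  # unit cotangent for a scalar output
        return value, g
    return wrapped

v, g = value_and_grad(np.tanh)(0.0)  # v=0.0, g=1.0
```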
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png" alt="logo"></img>
3 </div>
4
5 # JAX: Autograd and XLA [](https://travis-ci.org/google/jax)
6
7 JAX is [Autograd](https://github.com/hips/autograd) and
8 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md),
9 brought together for high-performance machine learning research.
10
11 With its updated version of [Autograd](https://github.com/hips/autograd),
12 JAX can automatically differentiate native
13 Python and NumPy functions. It can differentiate through loops, branches,
14 recursion, and closures, and it can take derivatives of derivatives of
15 derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
16 via [`grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,
17 and the two can be composed arbitrarily to any order.
18
19 What’s new is that JAX uses
20 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md)
21 to compile and run your NumPy programs on GPUs and TPUs. Compilation happens
22 under the hood by default, with library calls getting just-in-time compiled and
23 executed. But JAX also lets you just-in-time compile your own Python functions
24 into XLA-optimized kernels using a one-function API,
25 [`jit`](#compilation-with-jit). Compilation and automatic differentiation can be
26 composed arbitrarily, so you can express sophisticated algorithms and get
27 maximal performance without leaving Python.
28
29 Dig a little deeper, and you'll see that JAX is really an extensible system for
30 [composable function transformations](#transformations). Both
31 [`grad`](#automatic-differentiation-with-grad) and [`jit`](#compilation-with-jit)
32 are instances of such transformations. Another is [`vmap`](#auto-vectorization-with-vmap)
33 for automatic vectorization, with more to come.
34
35 This is a research project, not an official Google product. Expect bugs and
36 sharp edges. Please help by trying it out, [reporting
37 bugs](https://github.com/google/jax/issues), and letting us know what you
38 think!
39
40 ```python
41 import jax.numpy as np
42 from jax import grad, jit, vmap
43 from functools import partial
44
45 def predict(params, inputs):
46 for W, b in params:
47 outputs = np.dot(inputs, W) + b
48 inputs = np.tanh(outputs)
49 return outputs
50
51 def logprob_fun(params, inputs, targets):
52 preds = predict(params, inputs)
53 return np.sum((preds - targets)**2)
54
55 grad_fun = jit(grad(logprob_fun)) # compiled gradient evaluation function
56 perex_grads = jit(vmap(grad_fun, in_axes=(None, 0, 0))) # fast per-example grads
57 ```
58
59 JAX started as a research project by [Matt Johnson](https://github.com/mattjj),
60 [Roy Frostig](https://github.com/froystig), [Dougal
61 Maclaurin](https://github.com/dougalm), and [Chris
62 Leary](https://github.com/learyg), and is now developed [in the
63 open](https://github.com/google/jax) by a growing number of
64 [contributors](#contributors).
65
66 ### Contents
67 * [Quickstart: Colab in the Cloud](#quickstart-colab-in-the-cloud)
68 * [Installation](#installation)
69 * [Running the tests](#running-the-tests)
70 * [A brief tour](#a-brief-tour)
71 * [What's supported](#whats-supported)
72 * [Transformations](#transformations)
73 * [Random numbers are different](#random-numbers-are-different)
74 * [Mini-libraries](#mini-libraries)
75 * [How it works](#how-it-works)
76 * [What we're working on](#what-were-working-on)
77 * [Current gotchas](#current-gotchas)
78
79 ## Quickstart: Colab in the Cloud
80 Jump right in using a notebook in your browser, connected to a Google Cloud GPU:
81 - [The basics: NumPy on accelerators, `grad` for differentiation, `jit` for compilation, and `vmap` for vectorization](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb)
82 - [Training a Simple Neural Network, with PyTorch Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_and_data_loading.ipynb)
83
84
85 ## Installation
86 JAX is written in pure Python, but it depends on XLA, which needs to be
87 compiled and installed as the `jaxlib` package. Use the following instructions
88 to build JAX from source or install a binary package with pip.
89
90 ### Building JAX from source
91 First, obtain the JAX source code:
92
93 ```bash
94 git clone https://github.com/google/jax
95 cd jax
96 ```
97
98 To build XLA with CUDA support, you can run
99
100 ```bash
101 python build/build.py --enable_cuda
102 pip install -e build # install jaxlib (includes XLA)
103 pip install -e . # install jax (pure Python)
104 ```
105
106 See `python build/build.py --help` for configuration options, including ways to
107 specify the paths to CUDA and CUDNN, which you must have installed. The build
108 also depends on NumPy, and a compiler toolchain corresponding to that of
109 Ubuntu 16.04 or newer.
110
111 To build XLA without CUDA GPU support (CPU only), drop the `--enable_cuda`:
112
113 ```bash
114 python build/build.py
115 pip install -e build # install jaxlib (includes XLA)
116 pip install -e . # install jax
117 ```
118
119 To upgrade to the latest version from GitHub, just run `git pull` from the JAX
120 repository root, and rebuild by running `build.py` if necessary. You shouldn't have
121 to reinstall because `pip install -e` sets up symbolic links from site-packages
122 into the repository.
123
124 ### pip installation
125
126 Installing XLA with prebuilt binaries via `pip` is still experimental,
127 especially with GPU support. Let us know on [the issue
128 tracker](https://github.com/google/jax/issues) if you run into any errors.
129
130 To install a CPU-only version, which might be useful for doing local
131 development on a laptop, you can run
132
133 ```bash
134 pip install --upgrade jax jaxlib # CPU-only version
135 ```
136
137 If you want to install JAX with both CPU and GPU support, using existing CUDA
138 and CUDNN7 installations on your machine (for example, preinstalled on your
139 cloud VM), you can run
140
141 ```bash
142 # install jaxlib
143 PYTHON_VERSION=py2 # alternatives: py2, py3
144 CUDA_VERSION=cuda92 # alternatives: cuda90, cuda92, cuda100
145 PLATFORM=linux_x86_64 # alternatives: linux_x86_64
146 BASE_URL='https://storage.googleapis.com/jax-wheels'
147 pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.3-$PYTHON_VERSION-none-$PLATFORM.whl
148
149 pip install --upgrade jax # install jax
150 ```
151
152 The library package name must correspond to the version of the existing CUDA
153 installation you want to use, with `cuda100` for CUDA 10.0, `cuda92` for CUDA
154 9.2, and `cuda90` for CUDA 9.0. To find your CUDA and CUDNN versions, you can
155 run commands like these, depending on your CUDNN install path:
156
157 ```bash
158 nvcc --version
159 grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h # might need different path
160 ```
161
162 ## Running the tests
163
164 To run all the JAX tests, from the repository root directory run
165
166 ```bash
167 nosetests tests
168 ```
169
170 JAX generates test cases combinatorially, and you can control the number of
171 cases that are generated and checked for each test (default 10):
172
173 ```bash
174 JAX_NUM_GENERATED_CASES=100 nosetests tests
175 ```
176
177 You can run a more specific set of tests using
178 [`nose`](https://nose.readthedocs.io/en/latest/usage.html)'s built-in selection
179 mechanisms, or alternatively you can run a specific test file directly to see
180 more detailed information about the cases being run:
181
182 ```bash
183 python tests/lax_numpy_test.py --num_generated_cases=5
184 ```
185
186 ## A brief tour
187
188 ```python
189 In [1]: import jax.numpy as np
190
191 In [2]: from jax import random
192
193 In [3]: key = random.PRNGKey(0)
194
195 In [4]: x = random.normal(key, (5000, 5000))
196
197 In [5]: print(np.dot(x, x.T) / 2) # fast!
198 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
199
200 In [6]: print(np.dot(x, x.T) / 2) # even faster!
201 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
202 ```
203
204 What’s happening behind-the-scenes is that JAX is using XLA to just-in-time
205 (JIT) compile and execute these individual operations on the GPU. First the
206 `random.normal` call is compiled and the array referred to by `x` is generated
207 on the GPU. Next, each function called on `x` (namely `transpose`, `dot`, and
208 `divide`) is individually JIT-compiled and executed, each keeping its results on
209 the device.
210 It’s only when a value needs to be printed, plotted, saved, or passed into a raw
211 NumPy function that a read-only copy of the value is brought back to the host as
212 an ndarray and cached. The second call to `dot` is faster because the
213 JIT-compiled code is cached and reused, saving the compilation time.
214
215 The fun really starts when you use `grad` for automatic differentiation and
216 `jit` to compile your own functions end-to-end. Here’s a more complete toy
217 example:
218
219 ```python
220 from jax import grad, jit
221 import jax.numpy as np
222
223 def sigmoid(x):
224 return 0.5 * (np.tanh(x / 2.) + 1)
225
226 # Outputs probability of a label being true according to logistic model.
227 def logistic_predictions(weights, inputs):
228 return sigmoid(np.dot(inputs, weights))
229
230 # Training loss is the negative log-likelihood of the training labels.
231 def loss(weights, inputs, targets):
232 preds = logistic_predictions(weights, inputs)
233 label_probs = preds * targets + (1 - preds) * (1 - targets)
234 return -np.sum(np.log(label_probs))
235
236 # Build a toy dataset.
237 inputs = np.array([[0.52, 1.12, 0.77],
238 [0.88, -1.08, 0.15],
239 [0.52, 0.06, -1.30],
240 [0.74, -2.49, 1.39]])
241 targets = np.array([True, True, False, True])
242
243 # Define a compiled function that returns gradients of the training loss
244 training_gradient_fun = jit(grad(loss))
245
246 # Optimize weights using gradient descent.
247 weights = np.array([0.0, 0.0, 0.0])
248 print("Initial loss: {:0.2f}".format(loss(weights, inputs, targets)))
249 for i in range(100):
250 weights -= 0.1 * training_gradient_fun(weights, inputs, targets)
251
252 print("Trained loss: {:0.2f}".format(loss(weights, inputs, targets)))
253 ```
254
255 To see more, check out the [quickstart
256 notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb),
257 a [simple MNIST classifier
258 example](https://github.com/google/jax/blob/master/examples/mnist_classifier.py)
259 and the rest of the [JAX
260 examples](https://github.com/google/jax/blob/master/examples/).
261
262 ## What's supported
263
264 If you’re using JAX just as an accelerator-backed NumPy, without using `grad` or
265 `jit` in your code, then in principle there are no constraints, though some
266 NumPy functions haven’t been implemented yet. Generally using `np.dot(A, B)` is
267 better than `A.dot(B)` because the former gives us more opportunities to run the
268 computation on the device. NumPy also does a lot of work to cast any array-like
269 function arguments to arrays, as in `np.sum([x, y])`, while `jax.numpy`
270 typically requires explicit casting of array arguments, like
271 `np.sum(np.array([x, y]))`.
272
273 For automatic differentiation with `grad`, JAX has the same restrictions
274 as [Autograd](https://github.com/hips/autograd). Specifically, differentiation
275 works with indexing (`x = A[i, j, :]`) but not indexed assignment (`A[i, j] =
276 x`) or indexed in-place updating (`A[i] += b`). You can use lists, tuples, and
277 dicts freely: jax doesn't even see them. Using `np.dot(A, B)` rather than
278 `A.dot(B)` is required for automatic differentiation when `A` is a raw ndarray.
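
As a quick illustration of these rules (the array and function here are made up):

```python
from jax import grad
import jax.numpy as np

A = np.array([[1., 2.], [3., 4.]])
f = lambda A: np.sum(A[0, :] ** 2)  # indexing inside a differentiated function: fine
print(grad(f)(A))                   # [[2. 4.] [0. 0.]]
# A[0, 0] = 5.0                     # indexed assignment: not supported
```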
279
280 For compiling your own functions with `jit` there are a few more requirements.
281 Because `jit` aims to specialize Python functions only on shapes and dtypes
282 during tracing, rather than on concrete values, Python control flow that depends
283 on concrete values won’t be able to execute and will instead raise an error. If
284 you want compiled control flow, use structured control flow primitives like
285 lax.cond and lax.while. Some indexing features, like slice-based indexing
286 `A[i:i+5]` for argument-dependent `i`, or boolean-based indexing `A[bool_ind]`
287 for argument-dependent `bool_ind`, produce abstract values of unknown shape and
288 are thus unsupported in `jit` functions.
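
A rough sketch of the value-vs-shape distinction (the function names are made up for illustration):

```python
from jax import jit
import jax.numpy as np

@jit
def sign_bad(x):
    if x > 0:            # branches on a traced value: calling this raises an error
        return 1.0
    return -1.0

@jit
def head_or_sum(x):
    if x.shape[0] > 2:   # branches on a concrete shape: fine under jit
        return x[0]
    return np.sum(x)
```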
289
290 In general, JAX is intended to be used with a functional style of Python
291 programming. Functions passed to transformations like `grad` and `jit` are
292 expected to be free of side-effects. You can write print statements for
293 debugging but they may only be executed once if they're under a `jit` decorator.
294
295 > TLDR **Do use**
296 >
297 > * Functional programming
298 > * [Many](https://github.com/google/jax/blob/master/jax/numpy/lax_numpy.py) of NumPy’s
299 > functions (help us add more!)
300 > * [Some](https://github.com/google/jax/tree/master/jax/scipy) SciPy functions
301 > * Indexing and slicing of arrays like `x = A[[5, 1, 7], :, 2:4]`
302 > * Explicit array creation from lists like `A = np.array([x, y])`
303 >
304 > **Don’t use**
305 >
306 > * Assignment into arrays like `A[0, 0] = x`
307 > * Implicit casting to arrays like `np.sum([x, y])` (use `np.sum(np.array([x,
308 > y]))` instead)
309 > * `A.dot(B)` method syntax for functions of more than one argument (use
310 > `np.dot(A, B)` instead)
311 > * Side-effects like mutation of arguments or mutation of global variables
312 > * The `out` argument of NumPy functions
313 >
314 > **For jit functions, also don’t use**
315 >
316 > * Control flow based on dynamic values `if x > 0: ...`. Control flow based
317 > on shapes is fine: `if x.shape[0] > 2: ...` and `for subarr in array`.
318 > * Slicing `A[i:i+5]` for dynamic index `i` (use `lax.dynamic_slice` instead)
319 > or boolean indexing `A[bool_ind]` for traced values `bool_ind`.
320
321 You should get loud errors if your code violates any of these.
322
323 ## Transformations
324
325 At its core, JAX is an extensible system for transforming numerical functions.
326 We currently expose three important transformations: `grad`, `jit`, and `vmap`.
327
328 ### Automatic differentiation with grad
329
330 JAX has roughly the same API as [Autograd](https://github.com/hips/autograd).
331 The most popular function is `grad` for reverse-mode gradients:
332
333 ```python
334 from jax import grad
335 import jax.numpy as np
336
337 def tanh(x): # Define a function
338 y = np.exp(-2.0 * x)
339 return (1.0 - y) / (1.0 + y)
340
341 grad_tanh = grad(tanh) # Obtain its gradient function
342 print(grad_tanh(1.0)) # Evaluate it at x = 1.0
343 # prints 0.41997434161402603
344 ```
345
346 You can differentiate to any order with `grad`.
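
For example, higher-order derivatives are just nested calls of `grad` (reusing the `tanh` defined above):

```python
print(grad(grad(tanh))(1.0))        # second derivative
print(grad(grad(grad(tanh)))(1.0))  # third derivative
```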
347
348 For more advanced autodiff, you can use `jax.vjp` for reverse-mode
349 vector-Jacobian products and `jax.jvp` for forward-mode Jacobian-vector
350 products. The two can be composed arbitrarily with one another, and with other
351 JAX transformations. Here's one way to compose
352 those to make a function that efficiently computes full Hessian matrices:
353
354 ```python
355 from jax import jit, jacfwd, jacrev
356 def hessian(fun):
357 return jit(jacfwd(jacrev(fun)))
358 ```
359
360 As with Autograd, you're free to use differentiation with Python control
361 structures:
362
363 ```python
364 def abs_val(x):
365 if x > 0:
366 return x
367 else:
368 return -x
369
370 abs_val_grad = grad(abs_val)
371 print(abs_val_grad(1.0)) # prints 1.0
372 print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)
373 ```
374
375 ### Compilation with jit
376
377 You can use XLA to compile your functions end-to-end with `jit`, used either as
378 an `@jit` decorator or as a higher-order function.
379
380 ```python
381 import jax.numpy as np
382 from jax import jit
383
384 def slow_f(x):
385 # Element-wise ops see a large benefit from fusion
386 return x * x + x * 2.0
387
388 x = np.ones((5000, 5000))
389 fast_f = jit(slow_f)
390 %timeit -n10 -r3 fast_f(x) # ~ 4.5 ms / loop on Titan X
391 %timeit -n10 -r3 slow_f(x) # ~ 14.5 ms / loop (also on GPU via JAX)
392 ```
393
394 You can mix `jit` and `grad` and any other JAX transformation however you like.
395
396 ### Auto-vectorization with vmap
397
398 `vmap` is the vectorizing map.
399 It has the familiar semantics of mapping a function along array axes, but
400 instead of keeping the loop on the outside, it pushes the loop down into a
401 function’s primitive operations for better performance.
402
403 Using `vmap` can save you from having to carry around batch dimensions in your
404 code. For example, consider this simple *unbatched* neural network prediction
405 function:
406
407 ```python
408 def predict(params, input_vec):
409 assert input_vec.ndim == 1
410 for W, b in params:
411 output_vec = np.dot(W, input_vec) + b # `input_vec` on the right-hand side!
412 input_vec = np.tanh(output_vec)
413 return output_vec
414 ```
415
416 We often instead write `np.dot(inputs, W)` to allow for a batch dimension on the
417 left side of `inputs`, but we’ve written this particular prediction function to
418 apply only to single input vectors. If we wanted to apply this function to a
419 batch of inputs at once, semantically we could just write
420
421 ```python
422 from functools import partial
423 predictions = np.stack(list(map(partial(predict, params), input_batch)))
424 ```
425
426 But pushing one example through the network at a time would be slow! It’s better
427 to vectorize the computation, so that at every layer we’re doing matrix-matrix
428 multiplies rather than matrix-vector multiplies.
429
430 The `vmap` function does that transformation for us. That is, if we write
431
432 ```python
433 from jax import vmap
434 predictions = vmap(partial(predict, params))(input_batch)
435 # or, alternatively
436 predictions = vmap(predict, in_axes=(None, 0))(params, input_batch)
437 ```
438
439 then the `vmap` function will push the outer loop inside the function, and our
440 machine will end up executing matrix-matrix multiplications exactly as if we’d
441 done the batching by hand.
442
443 It’s easy enough to manually batch a simple neural network without `vmap`, but
444 in other cases manual vectorization can be impractical or impossible. Take the
445 problem of efficiently computing per-example gradients: that is, for a fixed set
446 of parameters, we want to compute the gradient of our loss function evaluated
447 separately at each example in a batch. With `vmap`, it’s easy:
448
449 ```python
450 per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)
451 ```
452
453 Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other
454 JAX transformation! We use `vmap` with both forward- and reverse-mode automatic
455 differentiation for fast Jacobian and Hessian matrix calculations in
456 `jax.jacfwd`, `jax.jacrev`, and `jax.hessian`.
457
458
459 ## Random numbers are different
460
461 JAX needs a functional pseudo-random number generator (PRNG) system to provide
462 reproducible results invariant to compilation boundaries and backends, while
463 also maximizing performance by enabling vectorized generation and
464 parallelization across random calls. The `numpy.random` library doesn’t have
465 those properties. The `jax.random` library meets those needs: it’s functionally
466 pure, but it doesn’t require you to pass stateful random objects back out of
467 every function.
468
469 The `jax.random` library uses
470 [count-based PRNGs](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
471 and a functional array-oriented
472 [splitting model](http://publications.lib.chalmers.se/records/fulltext/183348/local_183348.pdf).
473 To generate random values, you call a function like `jax.random.normal` and give
474 it a PRNG key:
475
476 ```python
477 import jax.random as random
478
479 key = random.PRNGKey(0)
480 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
481 ```
482
483 If we make the same call again with the same key, we get the same values:
484
485 ```python
486 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
487 ```
488
489 The key never gets updated. So how do we get fresh random values? We use
490 `jax.random.split` to create new keys from existing ones. A common pattern is to
491 split off a new key for every function call that needs random values:
492
493 ```python
494 key = random.PRNGKey(0)
495
496 key, subkey = random.split(key)
497 print(random.normal(subkey, shape=(3,))) # [ 1.1378783 -1.22095478 -0.59153646]
498
499 key, subkey = random.split(key)
500 print(random.normal(subkey, shape=(3,))) # [-0.06607265 0.16676566 1.17800343]
501 ```
502
503 By splitting the PRNG key, not only do we avoid having to thread random states
504 back out of every function call, but also we can generate multiple random arrays
505 in parallel because we can avoid unnecessary sequential dependencies.
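
For example, several subkeys can be drawn in one call; this sketch assumes the optional count argument of `random.split`:

```python
key = random.PRNGKey(0)
subkeys = random.split(key, 4)  # four independent subkeys at once
samples = [random.normal(k, shape=(3,)) for k in subkeys]
```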
506
507 There's a gotcha here, which is that it's easy to unintentionally reuse a key
508 without splitting. We intend to add a check for this (a sort of dynamic linear
509 typing) but for now it's something to be careful about.
510
511
512 ## Mini-libraries
513
514 JAX provides some small, experimental libraries for machine learning. These
515 libraries are in part about providing tools and in part about serving as
516 examples for how to build such libraries using JAX. Each one is only a few
517 hundred lines of code, so take a look inside and adapt them as you need!
518
519 ### Neural-net building with Stax
520
521 **Stax** is a functional neural network building library. The basic idea is that
522 a single layer or an entire network can be modeled as an `(init_fun, apply_fun)`
523 pair. The `init_fun` is used to initialize network parameters and the
524 `apply_fun` takes parameters and inputs to produce outputs. There are
525 constructor functions for common basic pairs, like `Conv` and `Relu`, and these
526 pairs can be composed in series using `stax.serial` or in parallel using
527 `stax.parallel`.
528
529 Here’s an example:
530
531 ```python
532 import jax.numpy as np
533 from jax.experimental import stax
534 from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax
535
536 # Use stax to set up network initialization and evaluation functions
537 net_init, net_apply = stax.serial(
538 Conv(32, (3, 3), padding='SAME'), Relu,
539 Conv(64, (3, 3), padding='SAME'), Relu,
540 MaxPool((2, 2)), Flatten,
541 Dense(128), Relu,
542 Dense(10), LogSoftmax,
543 )
544
545 # Initialize parameters, not committing to a batch shape
546 in_shape = (-1, 28, 28, 1)
547 out_shape, net_params = net_init(in_shape)
548
549 # Apply network to dummy inputs
550 inputs = np.zeros((128, 28, 28, 1))
551 predictions = net_apply(net_params, inputs)
552 ```
553
554 ### First-order optimization with Minmax
555
556 **Minmax** is an optimization library focused on stochastic first-order
557 optimizers. Every optimizer is modeled as an `(init_fun, update_fun)` pair. The
558 `init_fun` is used to initialize the optimizer state, which could include things
559 like momentum variables, and the `update_fun` accepts a gradient and an
560 optimizer state to produce a new optimizer state. The parameters being optimized
561 can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can
562 store your parameters however you’d like.
563
564 Here’s an example, using `jit` to compile the whole update end-to-end:
565
566 ```python
567 from jax.experimental import minmax
568 from jax import jit, grad
569
570 # Define a simple squared-error loss
571 def loss(params, batch):
572 inputs, targets = batch
573 predictions = net_apply(params, inputs)
574 return np.sum((predictions - targets)**2)
575
576 # Use minmax to set optimizer initialization and update functions
577 opt_init, opt_update = minmax.momentum(step_size=1e-3, mass=0.9)
578
579 # Define a compiled update step
580 @jit
581 def step(i, opt_state, batch):
582 params = minmax.get_params(opt_state)
583 g = grad(loss)(params, batch)
584 return opt_update(i, g, opt_state)
585
586 # Dummy input data stream
587 data_generator = ((np.zeros((128, 28, 28, 1)), np.zeros((128, 10)))
588 for _ in range(10))
589
590 # Optimize parameters in a loop
591 opt_state = opt_init(net_params)
592 for i in range(10):
593 opt_state = step(i, opt_state, next(data_generator))
594 net_params = minmax.get_params(opt_state)
595 ```
596
597 ## How it works
598
599 Programming in machine learning is about expressing and transforming functions.
600 Transformations include automatic differentiation, compilation for accelerators,
601 and automatic batching. High-level languages like Python are great for
602 expressing functions, but usually all we can do with them is apply them. We lose
603 access to their internal structure which would let us perform transformations.
604
605 JAX is a tool for specializing and translating high-level Python+NumPy functions
606 into a representation that can be transformed and then lifted back into a Python
607 function.
608
609 
610
611 JAX specializes Python functions by tracing. Tracing a function means monitoring
612 all the basic operations that are applied to its input to produce its output,
613 and recording these operations and the data-flow between them in a directed
614 acyclic graph (DAG). To perform tracing, JAX wraps primitive operations, like
615 basic numerical kernels, so that when they’re called they add themselves to a
616 list of operations performed along with their inputs and outputs. To keep track
617 of how data flows between these primitives, values being tracked are wrapped in
618 instances of the `Tracer` class.
619
620 When a Python function is provided to `grad` or `jit`, it’s wrapped for tracing
621 and returned. When the wrapped function is called, we abstract the concrete
622 arguments provided into instances of the `AbstractValue` class, box them for
623 tracing in instances of the `Tracer` class, and call the function on them.
624 Abstract arguments represent sets of possible values rather than specific
625 values: for example, `jit` abstracts ndarray arguments to abstract values that
626 represent all ndarrays with the same shape and dtype. In contrast, `grad`
627 abstracts ndarray arguments to represent an infinitesimal neighborhood of the
628 underlying
629 value. By tracing the Python function on these abstract values, we ensure that
630 it’s specialized enough so that it’s tractable to transform, and that it’s still
631 general enough so that the transformed result is useful, and possibly reusable.
632 These transformed functions are then lifted back into Python callables in a way
633 that allows them to be traced and transformed again as needed.
634
635 The primitive functions that JAX traces are mostly in 1:1 correspondence with
636 [XLA HLO](https://www.tensorflow.org/xla/operation_semantics) and are defined
637 in [lax.py](https://github.com/google/jax/blob/master/jax/lax.py). This 1:1
638 correspondence makes most of the translations to XLA essentially trivial, and
639 ensures we only have a small set of primitives to cover for other
640 transformations like automatic differentiation. The [`jax.numpy`
641 layer](https://github.com/google/jax/blob/master/jax/numpy/) is written in pure
642 Python simply by expressing NumPy functions in terms of the LAX functions (and
643 other NumPy functions we’ve already written). That makes `jax.numpy` easy to
644 extend.
645
646 When you use `jax.numpy`, the underlying LAX primitives are `jit`-compiled
647 behind the scenes, allowing you to write unrestricted Python+Numpy code while
648 still executing each primitive operation on an accelerator.
649
650 But JAX can do more: instead of just compiling and dispatching to a fixed set of
651 individual primitives, you can use `jit` on larger and larger functions to be
652 end-to-end compiled and optimized. For example, instead of just compiling and
653 dispatching a convolution op, you can compile a whole network, or a whole
654 gradient evaluation and optimizer update step.
655
656 The tradeoff is that `jit` functions have to satisfy some additional
657 specialization requirements: since we want to compile traces that are
658 specialized on shapes and dtypes, but not specialized all the way to concrete
659 values, the Python code under a `jit` decorator must be applicable to abstract
660 values. If we try to evaluate `x > 0` on an abstract `x`, the result is an
661 abstract value representing the set `{True, False}`, and so a Python branch like
662 `if x > 0` will raise an error: it doesn’t know which way to go!
663 See [What’s supported](#whats-supported) for more
664 information about `jit` requirements.
665
666 The good news about this tradeoff is that `jit` is opt-in: JAX libraries use
667 `jit` on individual operations and functions behind the scenes, allowing you to
668 write unrestricted Python+Numpy and still make use of a hardware accelerator.
669 But when you want to maximize performance, you can often use `jit` in your own
670 code to compile and end-to-end optimize much bigger functions.
671
672 ## What we're working on
673 1. Documentation!
674 2. Cloud TPU support
675 3. Multi-GPU and multi-TPU support
676 4. Full NumPy coverage and some SciPy coverage
677 5. Full coverage for vmap
678 6. Make everything faster
679 * Lowering the XLA function dispatch overhead
680 * Linear algebra routines (MKL on CPU, MAGMA on GPU)
681 7. `cond` and `while` primitives with efficient automatic differentiation
682
683 ## Current gotchas
684
685 Some things we don't handle that might surprise NumPy users:
686 1. No in-place mutation syntax; code must be written functionally. You can use `lax.dynamic_update_slice` for functional updates.
687 2. The PRNG can be awkward to use, and the single-use (linear) discipline for keys is not yet checked or warned about.
688
689 ## Contributors
690
691 So far, JAX includes lots of help and contributions from
692 [Jamie Townsend](https://github.com/j-towns),
693 [Peter Hawkins](https://github.com/hawkinsp),
694 [Jonathan Ragan-Kelley](https://people.eecs.berkeley.edu/~jrk/),
695 [Alex Wiltschko](http://github.com/alexbw),
696 George Dahl,
697 [Stephan Hoyer](http://stephanhoyer.com/),
698 Sam Schoenholz,
699 [Eli Bendersky](https://github.com/eliben),
700 Zak Stone,
701 [Alexey Radul](https://github.com/axch),
702 Michael Isard,
703 Skye Wanderman-Milne,
704 and many others.
705
[end of README.md]
[start of examples/mnist_classifier.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A basic MNIST example using JAX together with the mini-libraries stax, for
16 neural network building, and minmax, for first-order stochastic optimization.
17 """
18
19 from __future__ import absolute_import
20 from __future__ import division
21 from __future__ import print_function
22
23 import time
24 import itertools
25
26 import numpy.random as npr
27
28 import jax.numpy as np
29 from jax.config import config
30 from jax import jit, grad
31 from jax.experimental import minmax
32 from jax.experimental import stax
33 from jax.experimental.stax import Dense, Relu, LogSoftmax
34 from examples import datasets
35
36
37 def loss(params, batch):
38 inputs, targets = batch
39 preds = predict(params, inputs)
40 return -np.mean(preds * targets)
41
42 def accuracy(params, batch):
43 inputs, targets = batch
44 target_class = np.argmax(targets, axis=1)
45 predicted_class = np.argmax(predict(params, inputs), axis=1)
46 return np.mean(predicted_class == target_class)
47
48 init_random_params, predict = stax.serial(
49 Dense(1024), Relu,
50 Dense(1024), Relu,
51 Dense(10), LogSoftmax)
52
53 if __name__ == "__main__":
54 step_size = 0.001
55 num_epochs = 10
56 batch_size = 128
57 momentum_mass = 0.9
58
59 train_images, train_labels, test_images, test_labels = datasets.mnist()
60 num_train = train_images.shape[0]
61 num_complete_batches, leftover = divmod(num_train, batch_size)
62 num_batches = num_complete_batches + bool(leftover)
63
64 def data_stream():
65 rng = npr.RandomState(0)
66 while True:
67 perm = rng.permutation(num_train)
68 for i in range(num_batches):
69 batch_idx = perm[i * batch_size:(i + 1) * batch_size]
70 yield train_images[batch_idx], train_labels[batch_idx]
71 batches = data_stream()
72
73 opt_init, opt_update = minmax.momentum(step_size, mass=momentum_mass)
74
75 @jit
76 def update(i, opt_state, batch):
77 params = minmax.get_params(opt_state)
78 return opt_update(i, grad(loss)(params, batch), opt_state)
79
80 _, init_params = init_random_params((-1, 28 * 28))
81 opt_state = opt_init(init_params)
82 itercount = itertools.count()
83
84 print("\nStarting training...")
85 for epoch in range(num_epochs):
86 start_time = time.time()
87 for _ in range(num_batches):
88 opt_state = update(next(itercount), opt_state, next(batches))
89 epoch_time = time.time() - start_time
90
91 params = minmax.get_params(opt_state)
92 train_acc = accuracy(params, (train_images, train_labels))
93 test_acc = accuracy(params, (test_images, test_labels))
94 print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
95 print("Training set accuracy {}".format(train_acc))
96 print("Test set accuracy {}".format(test_acc))
97
[end of examples/mnist_classifier.py]
[start of examples/mnist_vae.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A basic variational autoencoder (VAE) on binarized MNIST using Numpy and JAX.
16
17 This file uses the stax network definition library and the minmax optimization
18 library.
19 """
20
21 from __future__ import absolute_import
22 from __future__ import division
23 from __future__ import print_function
24
25 import os
26 import time
27
28 import matplotlib.pyplot as plt
29
30 import jax.numpy as np
31 from jax.config import config
32 from jax import jit, grad, lax, random
33 from jax.experimental import minmax
34 from jax.experimental import stax
35 from jax.experimental.stax import Dense, FanOut, Relu, Softplus
36 from examples import datasets
37
38
39 def gaussian_kl(mu, sigmasq):
40 """KL divergence from a diagonal Gaussian to the standard Gaussian."""
41 return -0.5 * np.sum(1. + np.log(sigmasq) - mu**2. - sigmasq)
42
43 def gaussian_sample(rng, mu, sigmasq):
44 """Sample a diagonal Gaussian."""
45 return mu + np.sqrt(sigmasq) * random.normal(rng, mu.shape)
46
47 def bernoulli_logpdf(logits, x):
48 """Bernoulli log pdf of data x given logits."""
49 return -np.sum(np.logaddexp(0., np.where(x, -1., 1.) * logits))
50
51 def elbo(rng, params, images):
52 """Monte Carlo estimate of the negative evidence lower bound."""
53 enc_params, dec_params = params
54 mu_z, sigmasq_z = encode(enc_params, images)
55 logits_x = decode(dec_params, gaussian_sample(rng, mu_z, sigmasq_z))
56 return bernoulli_logpdf(logits_x, images) - gaussian_kl(mu_z, sigmasq_z)
57
58 def image_sample(rng, params, nrow, ncol):
59 """Sample images from the generative model."""
60 _, dec_params = params
61 code_rng, img_rng = random.split(rng)
62 logits = decode(dec_params, random.normal(code_rng, (nrow * ncol, 10)))
63 sampled_images = random.bernoulli(img_rng, np.logaddexp(0., logits))
64 return image_grid(nrow, ncol, sampled_images, (28, 28))
65
66 def image_grid(nrow, ncol, imagevecs, imshape):
67 """Reshape a stack of image vectors into an image grid for plotting."""
68 images = iter(imagevecs.reshape((-1,) + imshape))
69 return np.vstack([np.hstack([next(images).T for _ in range(ncol)][::-1])
70 for _ in range(nrow)]).T
71
72
73 encoder_init, encode = stax.serial(
74 Dense(512), Relu,
75 Dense(512), Relu,
76 FanOut(2),
77 stax.parallel(Dense(10), stax.serial(Dense(10), Softplus)),
78 )
79
80 decoder_init, decode = stax.serial(
81 Dense(512), Relu,
82 Dense(512), Relu,
83 Dense(28 * 28),
84 )
85
86
87 if __name__ == "__main__":
88 step_size = 0.001
89 num_epochs = 100
90 batch_size = 32
91 nrow, ncol = 10, 10 # sampled image grid size
92 rng = random.PRNGKey(0)
93
94 test_rng = random.PRNGKey(1) # fixed prng key for evaluation
95 imfile = os.path.join(os.getenv("TMPDIR", "/tmp/"), "mnist_vae_{:03d}.png")
96
97 train_images, _, test_images, _ = datasets.mnist(permute_train=True)
98 num_complete_batches, leftover = divmod(train_images.shape[0], batch_size)
99 num_batches = num_complete_batches + bool(leftover)
100
101 _, init_encoder_params = encoder_init((batch_size, 28 * 28))
102 _, init_decoder_params = decoder_init((batch_size, 10))
103 init_params = init_encoder_params, init_decoder_params
104
105 opt_init, opt_update = minmax.momentum(step_size, mass=0.9)
106
107 def binarize_batch(rng, i, images):
108 i = i % num_batches
109 batch = lax.dynamic_slice_in_dim(images, i * batch_size, batch_size)
110 return random.bernoulli(rng, batch)
111
112 @jit
113 def run_epoch(rng, opt_state):
114 def body_fun(i, loop_carry):
115 (rng, opt_state, images) = loop_carry
116 rng, elbo_rng, data_rng = random.split(rng, 3)
117 batch = binarize_batch(data_rng, i, images)
118 loss = lambda params: -elbo(elbo_rng, params, batch) / batch_size
119 g = grad(loss)(minmax.get_params(opt_state))
120 loop_carry = rng, opt_update(i, g, opt_state), images
121 return loop_carry
122 init_val = rng, opt_state, train_images
123 _, opt_state, _ = lax.fori_loop(0, num_batches, body_fun, init_val)
124 return opt_state
125
126 @jit
127 def evaluate(opt_state, images):
128 params = minmax.get_params(opt_state)
129 elbo_rng, data_rng, image_rng = random.split(test_rng, 3)
130 binarized_test = random.bernoulli(data_rng, images)
131 test_elbo = elbo(elbo_rng, params, binarized_test) / images.shape[0]
132 sampled_images = image_sample(image_rng, params, nrow, ncol)
133 return test_elbo, sampled_images
134
135 opt_state = opt_init(init_params)
136 for epoch in range(num_epochs):
137 tic = time.time()
138 rng, epoch_rng = random.split(rng)
139 opt_state = run_epoch(epoch_rng, opt_state)
140 test_elbo, sampled_images = evaluate(opt_state, test_images)
141 print("{: 3d} {} ({:.3f} sec)".format(epoch, test_elbo, time.time() - tic))
142 plt.imsave(imfile.format(epoch), sampled_images, cmap=plt.cm.gray)
143
[end of examples/mnist_vae.py]
[start of jax/experimental/minmax.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Optimizers for use with JAX.
16
17 This short module contains some convenient optimizer definitions, specifically
18 initialization and update functions, which can be used with ndarrays or
19 arbitrarily-nested tuple/list/dicts of ndarrays.
20 """
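# A minimal usage sketch (illustrative only; `init_params` and `grads` stand in for
# pytrees of ndarrays -- see examples/mnist_classifier.py for a full training loop):
#
#   opt_init, opt_update = momentum(step_size=0.01, mass=0.9)
#   opt_state = opt_init(init_params)             # works on nested tuples/lists/dicts
#   opt_state = opt_update(0, grads, opt_state)   # one optimization step
#   params = get_params(opt_state)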
21 from __future__ import absolute_import
22 from __future__ import division
23 from __future__ import print_function
24
25 import operator
26 import functools
27
28 import jax.numpy as np
29 from jax.core import pack
30 from jax.tree_util import tree_map, tree_multimap
31
32
33 def optimizer(opt_maker):
34 """Decorator to make an optimizer map over tuple/list/dict containers."""
35 @functools.wraps(opt_maker)
36 def tree_opt_maker(*args, **kwargs):
37 init_fun, update_fun = opt_maker(*args, **kwargs)
38
39 @functools.wraps(init_fun)
40 def fmapped_init_fun(x0_tree):
41 return tree_map(lambda x0: pack(init_fun(x0)), x0_tree)
42
43 @functools.wraps(update_fun)
44 def fmapped_update_fun(i, grad_tree, state_tree):
45 update = lambda g, state: pack(update_fun(i, g, *state))
46 return tree_multimap(update, grad_tree, state_tree)
47
48 return fmapped_init_fun, fmapped_update_fun
49 return tree_opt_maker
50
51 def iterate(state_tree):
52 """Extract the current iterate from an optimizer state."""
53 return tree_map(lambda state: tuple(state)[0], state_tree)
54 get_params = iterate
55
56 # optimizers
57
58 @optimizer
59 def sgd(step_size):
60 """Construct init and update step functions for stochastic gradient descent.
61
62 Args:
63 step_size: positive scalar, or a callable representing a step size schedule
64 that maps the iteration index to positive scalar.
65
66 Returns:
67 An (init_fun, update_fun) pair.
68 """
69 step_size = make_schedule(step_size)
70 def init_fun(x0):
71 return (x0,)
72 def update_fun(i, g, x):
73 return (x - step_size(i) * g,)
74 return init_fun, update_fun
75
76 @optimizer
77 def momentum(step_size, mass):
78 """Construct init and update step functions for SGD with Nesterov momentum.
79
80 Args:
81 step_size: positive scalar, or a callable representing a step size schedule
82       that maps the iteration index to positive scalar.
    mass: positive scalar representing the momentum coefficient (velocity decay rate).
83
84 Returns:
85 An (init_fun, update_fun) pair.
86 """
87 step_size = make_schedule(step_size)
88 def init_fun(x0):
89 v0 = np.zeros_like(x0)
90 return x0, v0
91 def update_fun(i, g, x, velocity):
92 velocity = mass * velocity - (1. - mass) * g
93 x = x + step_size(i) * velocity
94 return x, velocity
95 return init_fun, update_fun
96
97 @optimizer
98 def rmsprop(step_size, gamma=0.9, eps=1e-8):
99 """Construct init and update step functions for RMSProp.
100
101 Args:
102 step_size: positive scalar, or a callable representing a step size schedule
103       that maps the iteration index to positive scalar.
    gamma: optional, a positive scalar for the decay rate used in the moving average
      of squared gradients (default 0.9).
    eps: optional, a positive scalar added to the denominator for numerical stability
      (default 1e-8).
104
105 Returns:
106 An (init_fun, update_fun) pair.
107 """
108 step_size = make_schedule(step_size)
109 def init_fun(x0):
110 avg_sq_grad = np.ones_like(x0)
111 return x0, avg_sq_grad
112 def update_fun(i, g, x, avg_sq_grad):
113 avg_sq_grad = avg_sq_grad * gamma + g**2 * (1. - gamma)
114 x = x - step_size(i) * g / (np.sqrt(avg_sq_grad) + eps)
115 return x, avg_sq_grad
116 return init_fun, update_fun
117
118 @optimizer
119 def adam(step_size, b1=0.9, b2=0.999, eps=1e-8):
120 """Construct init and update step functions for Adam.
121
122 Args:
123 step_size: positive scalar, or a callable representing a step size schedule
124 that maps the iteration index to positive scalar.
125 b1: optional, a positive scalar value for beta_1, the exponential decay rate
126 for the first moment estimates (default 0.9).
127 b2: optional, a positive scalar value for beta_2, the exponential decay rate
128 for the second moment estimates (default 0.999).
129 eps: optional, a positive scalar value for epsilon, a small constant for
130 numerical stability (default 1e-8).
131
132 Returns:
133 An (init_fun, update_fun) pair.
134 """
135 step_size = make_schedule(step_size)
136 def init_fun(x0):
137 m0 = np.zeros_like(x0)
138 v0 = np.zeros_like(x0)
139 return x0, m0, v0
140 def update_fun(i, g, x, m, v):
141 m = (1 - b1) * g + b1 * m # First moment estimate.
142 v = (1 - b2) * (g ** 2) + b2 * v # Second moment estimate.
143 mhat = m / (1 - b1 ** (i + 1)) # Bias correction.
144 vhat = v / (1 - b2 ** (i + 1))
145 x = x - step_size(i) * mhat / (np.sqrt(vhat) + eps)
146 return x, m, v
147 return init_fun, update_fun
148
149 # learning rate schedules
150
151 def constant(step_size):
152 def schedule(i):
153 return step_size
154 return schedule
155
156 def exponential_decay(step_size, decay_steps, decay_rate):
157 def schedule(i):
158 return step_size * decay_rate ** (i / decay_steps)
159 return schedule
160
161 def inverse_time_decay(step_size, decay_steps, decay_rate, staircase=False):
162 if staircase:
163 def schedule(i):
164 return step_size / (1 + decay_rate * np.floor(i / decay_steps))
165 else:
166 def schedule(i):
167 return step_size / (1 + decay_rate * i / decay_steps)
168 return schedule
169
170 def piecewise_constant(boundaries, values):
171 boundaries = np.array(boundaries)
172 values = np.array(values)
173 if not boundaries.ndim == values.ndim == 1:
174 raise ValueError("boundaries and values must be sequences")
175 if not boundaries.shape[0] == values.shape[0] - 1:
176     raise ValueError("boundaries length must be one shorter than values length")
177
178 def schedule(i):
179 return values[np.sum(i > boundaries)]
180 return schedule
181
182 def make_schedule(scalar_or_schedule_fun):
183 if callable(scalar_or_schedule_fun):
184 return scalar_or_schedule_fun
185 elif np.ndim(scalar_or_schedule_fun) == 0:
186 return constant(scalar_or_schedule_fun)
187 else:
188 raise TypeError(type(scalar_or_schedule_fun))
189
[end of jax/experimental/minmax.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
google/jax
|
21e5eb13dd879f92b6ff94e18bf33a24ed8cc2a7
|
Get value and gradient simultaneously
In Autograd I was used to having `value_and_grad` return both the value and the gradient of a function simultaneously. Is there any similar functionality in JAX? That would be incredibly useful for, e.g., plotting the loss while training a neural network.
This is an example in Autograd:
```
grad_and_value = value_and_grad(np.tanh)
v, g = grad_and_value(0.0) # v=0.0, g=1.0
```
|
Great idea. If you take a look at [the grad function's implementation](https://github.com/google/jax/blob/95135377d0ed3d3954a44a650ad2580ded61037f/jax/api.py#L105), you can see that `ans` is computed but not returned. We should factor the API so that this is easy to access.
@dougalm do you think it's better to set things up so that there's a separate `value_and_grad` function (and maybe `grad` is a wrapped version that discards the `ans` part of its return value), or is it worth considering just having a `return_ans` kwarg for `grad`? My usual preference is for the former, since other APIs always seem to succumb to kwarg creep, but in this case I'm not sure.
I would vote for a separate function. Having kwargs affect what is returned, to me, is a Matlab anti-pattern, but if folx disagree, let's discuss.
@dougalm agreed, and had a punchy way of saying the same thing @alexbw did: an argument to a function shouldn't change its type! That sounds like a knock-down argument to me.
I'll add a value_and_grad function now.
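Assuming the separate-function API proposed above (the one added in the patch below), usage would look roughly like:
```
v, g = value_and_grad(np.tanh)(0.0)  # v == 0.0, g == 1.0
```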
|
2018-12-20T18:18:43Z
|
<patch>
diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -58,16 +58,17 @@ def jit(fun, static_argnums=()):
Args:
fun: Function to be jitted. Should be a pure function, as side-effects may
- only be executed once. Its positional arguments and return value should
- be arrays, scalars, or standard Python containers (tuple/list/dict)
- thereof. Keyword arguments and positional arguments specified by
- `static_argnums` can be anything at all. These are treated as static
- (see below).
+ only be executed once. Its positional arguments and return value should be
+ arrays, scalars, or standard Python containers (tuple/list/dict) thereof.
+ Keyword arguments and positional arguments specified by `static_argnums`
+ can be anything at all. These are treated as static (see below).
static_argnums: A tuple of ints. Specifies which arguments to treat as
- static (compile-time constant). Operations that only depend on static
- arguments will be constant-folded. Calling the jitted function with
- different values for these constants will trigger recompilation.
- Returns: A wrapped version of `fun`, set up for just-in-time compilation.
+ static (compile-time constant). Operations that only depend on static
+ arguments will be constant-folded. Calling the jitted function with
+ different values for these constants will trigger recompilation.
+
+ Returns:
+ A wrapped version of `fun`, set up for just-in-time compilation.
"""
@_wraps(fun)
def f_jitted(*args, **kwargs):
@@ -83,31 +84,62 @@ def f_jitted(*args, **kwargs):
f_jitted.__name__ = "jit({})".format(f_jitted.__name__)
return f_jitted
+
def grad(fun, argnums=0):
"""Creates a function which evaluates the gradient of `fun`.
Args:
fun: Function to be differentiated. Its arguments at positions specified by
- `argnums` should be arrays, scalars, or standard Python containers. It
- should return a scalar (which includes arrays with shape `()` but not
- arrays with shape `(1,)` etc.)
+ `argnums` should be arrays, scalars, or standard Python containers. It
+ should return a scalar (which includes arrays with shape `()` but not
+ arrays with shape `(1,)` etc.)
argnums: Integer or tuple of integers. Specifies which positional
- argument(s) to differentiate with respect to.
- Returns: A function with the same arguments as `fun`, that evaluates the
- gradient of `fun`. If `argnums` is an integer then the gradient has the
- same shape and type as the positional argument indicated by that integer.
- If argnums is a tuple of integers, the gradient is a tuple of values with
- the same shapes and types as the corresponding arguments.
+ argument(s) to differentiate with respect to.
+
+ Returns:
+ A function with the same arguments as `fun`, that evaluates the gradient of
+ `fun`. If `argnums` is an integer then the gradient has the same shape and
+ type as the positional argument indicated by that integer. If argnums is a
+ tuple of integers, the gradient is a tuple of values with the same shapes
+ and types as the corresponding arguments.
"""
+ value_and_grad_f = value_and_grad(fun, argnums)
+
def grad_f(*args, **kwargs):
+ ans, g = value_and_grad_f(*args, **kwargs)
+ return g
+
+ return grad_f
+
+def value_and_grad(fun, argnums=0):
+ """Creates a function which evaluates both `fun` and the gradient of `fun`.
+
+ Args:
+ fun: Function to be differentiated. Its arguments at positions specified by
+ `argnums` should be arrays, scalars, or standard Python containers. It
+ should return a scalar (which includes arrays with shape `()` but not
+ arrays with shape `(1,)` etc.)
+ argnums: Integer or tuple of integers. Specifies which positional
+ argument(s) to differentiate with respect to.
+
+ Returns:
+ A function with the same arguments as `fun` that evaluates both `fun` and
+ the gradient of `fun` and returns them as a pair (a two-element tuple). If
+ `argnums` is an integer then the gradient has the same shape and type as the
+ positional argument indicated by that integer. If argnums is a tuple of
+ integers, the gradient is a tuple of values with the same shapes and types
+ as the corresponding arguments.
+ """
+ def value_and_grad_f(*args, **kwargs):
f = lu.wrap_init(fun, kwargs)
f_partial, dyn_args = argnums_partial(f, argnums, args)
ans, vjp_py = vjp(f_partial, *dyn_args)
check_scalar(ans)
g = vjp_py(onp.ones((), onp.result_type(ans)))
- return g[0] if isinstance(argnums, int) else g
+ g = g[0] if isinstance(argnums, int) else g
+ return (ans, g)
- return grad_f
+ return value_and_grad_f
@curry
def jacfwd(fun, x):
@@ -136,11 +168,13 @@ def vmap(fun, in_axes=0, out_axes=0):
Args:
fun: Function to be mapped over additional axes.
in_axes, out_axes: Specifies which axes to map over. These may be integers,
- None, or (possibly nested) tuples of integers or None.
- Returns: Batched/vectorized version of `fun` with arguments that correspond to
- those of `fun`, but with extra array axes at positions indicated by
- `in_axes`, and a return value that corresponds to that of `fun`, but with
- extra array axes at positions indicated by `out_axes`.
+ None, or (possibly nested) tuples of integers or None.
+
+ Returns:
+ Batched/vectorized version of `fun` with arguments that correspond to those
+ of `fun`, but with extra array axes at positions indicated by `in_axes`, and
+ a return value that corresponds to that of `fun`, but with extra array axes
+ at positions indicated by `out_axes`.
For example, we can implement a matrix-matrix product using a vector dot
product:
@@ -150,7 +184,6 @@ def vmap(fun, in_axes=0, out_axes=0):
mm = vmap(mv, (None, 1), 1) # ([a,b], [b,c]) -> [a,c]
(`[a,b]` indicates an array with shape (a,b))
-
"""
def batched_fun(*args, **kwargs):
if not isinstance(fun, lu.WrappedFun):
</patch>
|
[]
|
[]
| |||
numpy__numpy-3645
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyArray_Diagonal view transition for 1.9 (Trac #2136)
_Original ticket http://projects.scipy.org/numpy/ticket/2136 on 2012-05-19 by @njsmith, assigned to unknown._
Originally this was scheduled to happen in 1.8, but 1.7 was released 2013-02-10, and then we're accelerating the release schedule so that 1.8 will be only a few months later -- which seems too soon to actually take the next step in this deprecation plan.
Current plan: make the changes below in whichever release comes on or after: ?? (to be determined on mailing list)
##
Starting in 1.<?>, PyArray_Diagonal is supposed to start returning a non-writeable view.
To do:
- Make the (trivial) change at the bottom of PyArray_Diagonal
- Update the numpy.diagonal documentation
- Make a note in the release notes
- Optionally, remove NPY_ARRAY_WARN_ON_WRITE flag. If so, see the
below pull request to find the code to change. Even if we decide
to keep it around for a rainy day, the message in
array_might_be_written should have the references to diagonals
removed.
- File a new ticket to make the returned array writeable, at some future date.
Reference: https://github.com/numpy/numpy/pull/280
</issue>
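To make the planned change concrete, here is a minimal sketch of what user code sees once `diagonal` returns a read-only view (the behavior that shipped as of NumPy 1.9):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
d = a.diagonal()              # a view into `a`, flagged read-only
print(d.flags.writeable)      # False
# d[0] = 100                  # would raise: assignment destination is read-only
d = a.diagonal().copy()       # take an explicit copy if you need to write
d[0] = 100                    # modifies only the copy; `a` is untouched
```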
<code>
[start of README.txt]
1 NumPy is the fundamental package needed for scientific computing with Python.
2 This package contains:
3
4 * a powerful N-dimensional array object
5 * sophisticated (broadcasting) functions
6 * tools for integrating C/C++ and Fortran code
7 * useful linear algebra, Fourier transform, and random number capabilities.
8
9 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
10
11 More information can be found at the website:
12
13 http://www.numpy.org
14
15 After installation, tests can be run with:
16
17 python -c 'import numpy; numpy.test()'
18
19 The most current development version is always available from our
20 git repository:
21
22 http://github.com/numpy/numpy
23
[end of README.txt]
[start of numpy/doc/internals.py]
1 """
2 ===============
3 Array Internals
4 ===============
5
6 Internal organization of numpy arrays
7 =====================================
8
9 It helps to understand a bit about how numpy arrays are handled under the covers in order to understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy".
10
11 Numpy arrays consist of two major components, the raw array data (from now on,
12 referred to as the data buffer), and the information about the raw array data.
13 The data buffer is typically what people think of as arrays in C or Fortran,
14 a contiguous (and fixed) block of memory containing fixed sized data items.
15 Numpy also contains a significant set of data that describes how to interpret
16 the data in the data buffer. This extra information contains (among other things):
17
18 1) The basic data element's size in bytes
19 2) The start of the data within the data buffer (an offset relative to the
20 beginning of the data buffer).
21 3) The number of dimensions and the size of each dimension
22 4) The separation between elements for each dimension (the 'stride'). This
23 does not have to be a multiple of the element size
24 5) The byte order of the data (which may not be the native byte order)
25 6) Whether the buffer is read-only
26 7) Information (via the dtype object) about the interpretation of the basic
27 data element. The basic data element may be as simple as a int or a float,
28 or it may be a compound object (e.g., struct-like), a fixed character field,
29 or Python object pointers.
30 8) Whether the array is to be interpreted as C-order or Fortran-order.
31
32 This arrangement allows for very flexible use of arrays. One thing that it allows
33 is simple changes of the metadata to change the interpretation of the array buffer.
34 Changing the byteorder of the array is a simple change involving no rearrangement
35 of the data. The shape of the array can be changed very easily without changing
36 anything in the data buffer or any data copying at all.
37
38 Among other things that are made possible is one can create a new array metadata
39 object that uses the same data buffer
40 to create a new view of that data buffer that has a different interpretation
41 of the buffer (e.g., different shape, offset, byte order, strides, etc) but
42 shares the same data bytes. Many operations in numpy do just this such as
43 slices. Other operations, such as transpose, don't move data elements
44 around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the buffer doesn't move.
45
46 Typically these new versions of the array metadata, sharing the same data buffer, are
47 new 'views' into the data buffer. There is a different ndarray object, but it
48 uses the same data buffer. This is why it is necessary to force copies through
49 use of the .copy() method if one really wants to make a new and independent
50 copy of the data buffer.
51
52 New views into arrays mean that the object reference counts for the data buffer
53 increase. Simply doing away with the original array object will not remove the
54 data buffer if other views of it still exist.
55
56 Multidimensional Array Indexing Order Issues
57 ============================================
58
59 What is the right way to index
60 multi-dimensional arrays? Before you jump to conclusions about the one and
61 true way to index multi-dimensional arrays, it pays to understand why this is
62 a confusing issue. This section will try to explain in detail how numpy
63 indexing works and why we adopt the convention we do for images, and when it
64 may be appropriate to adopt other conventions.
65
66 The first thing to understand is
67 that there are two conflicting conventions for indexing 2-dimensional arrays.
68 Matrix notation uses the first index to indicate which row is being selected and
69 the second index to indicate which column is selected. This is opposite the
70 geometrically oriented-convention for images where people generally think the
71 first index represents x position (i.e., column) and the second represents y
72 position (i.e., row). This alone is the source of much confusion;
73 matrix-oriented users and image-oriented users expect two different things with
74 regard to indexing.
75
76 The second issue to understand is how indices correspond
77 to the order the array is stored in memory. In Fortran the first index is the
78 most rapidly varying index when moving through the elements of a two
79 dimensional array as it is stored in memory. If you adopt the matrix
80 convention for indexing, then this means the matrix is stored one column at a
81 time (since the first index moves to the next row as it changes). Thus Fortran
82 is considered a Column-major language. C has just the opposite convention. In
83 C, the last index changes most rapidly as one moves through the array as
84 stored in memory. Thus C is a Row-major language. The matrix is stored by
85 rows. Note that in both cases it presumes that the matrix convention for
86 indexing is being used, i.e., for both Fortran and C, the first index is the
87 row. Note this convention implies that the indexing convention is invariant
88 and that the data order changes to keep that so.
89
90 But that's not the only way
91 to look at it. Suppose one has large two-dimensional arrays (images or
92 matrices) stored in data files. Suppose the data are stored by rows rather than
93 by columns. If we are to preserve our index convention (whether matrix or
94 image) that means that depending on the language we use, we may be forced to
95 reorder the data if it is read into memory to preserve our indexing
96 convention. For example if we read row-ordered data into memory without
97 reordering, it will match the matrix indexing convention for C, but not for
98 Fortran. Conversely, it will match the image indexing convention for Fortran,
99 but not for C. For C, if one is using data stored in row order, and one wants
100 to preserve the image index convention, the data must be reordered when
101 reading into memory.
102
103 In the end, which you do for Fortran or C depends on
104 which is more important, not reordering data or preserving the indexing
105 convention. For large images, reordering data is potentially expensive, and
106 often the indexing convention is inverted to avoid that.
107
108 The situation with
109 numpy makes this issue yet more complicated. The internal machinery of numpy
110 arrays is flexible enough to accept any ordering of indices. One can simply
111 reorder indices by manipulating the internal stride information for arrays
112 without reordering the data at all. Numpy will know how to map the new index
113 order to the data without moving the data.
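
As a small illustration (float64 elements, so each item is 8 bytes)::

    >>> import numpy as np
    >>> a = np.arange(6.).reshape(2, 3)   # C (row-major) order
    >>> a.strides
    (24, 8)
    >>> a.T.strides                       # same data buffer, only the strides differ
    (8, 24)
    >>> a.T.flags['F_CONTIGUOUS']
    True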
114
115 So if this is true, why not choose
116 the index order that matches what you most expect? In particular, why not define
117 row-ordered images to use the image convention? (This is sometimes referred
118 to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
119 order options for array ordering in numpy.) The drawback of doing this is
120 potential performance penalties. It's common to access the data sequentially,
121 either implicitly in array operations or explicitly by looping over rows of an
122 image. When that is done, then the data will be accessed in non-optimal order.
123 As the first index is incremented, what is actually happening is that elements
124 spaced far apart in memory are being sequentially accessed, with usually poor
125 memory access speeds. For example, consider a two dimensional image 'im' defined so
126 that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
127 Python behavior, im[0] would then represent a column at x=0. Yet that data
128 would be spread over the whole array since the data are stored in row order.
129 Despite the flexibility of numpy's indexing, it can't really paper over the fact
130 that basic operations are rendered inefficient because of data order, or that
131 getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row, vs
132 im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im' does
133 work, but doesn't yield contiguous column data.
134
135 As it turns out, numpy is
136 smart enough when dealing with ufuncs to determine which index is the most
137 rapidly varying one in memory and uses that for the innermost loop. Thus for
138 ufuncs there is no large intrinsic advantage to either approach in most cases.
139 On the other hand, use of .flat with a FORTRAN-ordered array will lead to
140 non-optimal memory access as adjacent elements in the flattened array (iterator,
141 actually) are not contiguous in memory.
142
143 Indeed, the fact is that Python
144 indexing on lists and other sequences naturally leads to an outside-to-inside
145 ordering (the first index gets the largest grouping, the next the next largest,
146 and the last gets the smallest element). Since image data are normally stored
147 by rows, this corresponds to position within rows being the last item indexed.
148
149 If you do want to use Fortran ordering realize that
150 there are two approaches to consider: 1) accept that the first index is just not
151 the most rapidly changing in memory and have all your I/O routines reorder
152 your data when going from memory to disk or vice versa, or 2) use numpy's
153 mechanism for mapping the first index to the most rapidly varying data. We
154 recommend the former if possible. The disadvantage of the latter is that many
155 of numpy's functions will yield arrays without Fortran ordering unless you are
156 careful to use the 'order' keyword. Doing this would be highly inconvenient.
157
158 Otherwise we recommend simply learning to reverse the usual order of indices
159 when accessing elements of an array. Granted, it goes against the grain, but
160 it is more in line with Python semantics and the natural order of the data.
161
162 """
163 from __future__ import division, absolute_import, print_function
164
[end of numpy/doc/internals.py]
[start of numpy/lib/financial.py]
1 """Some simple financial calculations
2
3 patterned after spreadsheet computations.
4
5 There is some complexity in each function
6 so that the functions behave like ufuncs with
7 broadcasting and being able to be called with scalars
8 or arrays (or other sequences).
9
10 """
11 from __future__ import division, absolute_import, print_function
12
13 import numpy as np
14
15 __all__ = ['fv', 'pmt', 'nper', 'ipmt', 'ppmt', 'pv', 'rate',
16 'irr', 'npv', 'mirr']
17
18 _when_to_num = {'end':0, 'begin':1,
19 'e':0, 'b':1,
20 0:0, 1:1,
21 'beginning':1,
22 'start':1,
23 'finish':0}
24
25 def _convert_when(when):
26 #Test to see if when has already been converted to ndarray
27 #This will happen if one function calls another, for example ppmt
28 if isinstance(when, np.ndarray):
29 return when
30 try:
31 return _when_to_num[when]
32 except (KeyError, TypeError):
33 return [_when_to_num[x] for x in when]
34
35
36 def fv(rate, nper, pmt, pv, when='end'):
37 """
38 Compute the future value.
39
40 Given:
41 * a present value, `pv`
42 * an interest `rate` compounded once per period, of which
43 there are
44 * `nper` total
45 * a (fixed) payment, `pmt`, paid either
46 * at the beginning (`when` = {'begin', 1}) or the end
47 (`when` = {'end', 0}) of each period
48
49 Return:
50 the value at the end of the `nper` periods
51
52 Parameters
53 ----------
54 rate : scalar or array_like of shape(M, )
55 Rate of interest as decimal (not per cent) per period
56 nper : scalar or array_like of shape(M, )
57 Number of compounding periods
58 pmt : scalar or array_like of shape(M, )
59 Payment
60 pv : scalar or array_like of shape(M, )
61 Present value
62 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional
63 When payments are due ('begin' (1) or 'end' (0)).
64 Defaults to {'end', 0}.
65
66 Returns
67 -------
68 out : ndarray
69 Future values. If all input is scalar, returns a scalar float. If
70 any input is array_like, returns future values for each input element.
71 If multiple inputs are array_like, they all must have the same shape.
72
73 Notes
74 -----
75 The future value is computed by solving the equation::
76
77 fv +
78 pv*(1+rate)**nper +
79 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0
80
81 or, when ``rate == 0``::
82
83 fv + pv + pmt * nper == 0
84
85 References
86 ----------
87 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May).
88 Open Document Format for Office Applications (OpenDocument)v1.2,
89 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version,
90 Pre-Draft 12. Organization for the Advancement of Structured Information
91 Standards (OASIS). Billerica, MA, USA. [ODT Document].
92 Available:
93 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula
94 OpenDocument-formula-20090508.odt
95
96 Examples
97 --------
98 What is the future value after 10 years of saving $100 now, with
99 an additional monthly savings of $100. Assume the interest rate is
100 5% (annually) compounded monthly?
101
102 >>> np.fv(0.05/12, 10*12, -100, -100)
103 15692.928894335748
104
105 By convention, the negative sign represents cash flow out (i.e. money not
106 available today). Thus, saving $100 a month at 5% annual interest leads
107 to $15,692.93 available to spend in 10 years.
108
109 If any input is array_like, returns an array of equal shape. Let's
110 compare different interest rates from the example above.
111
112 >>> a = np.array((0.05, 0.06, 0.07))/12
113 >>> np.fv(a, 10*12, -100, -100)
114 array([ 15692.92889434, 16569.87435405, 17509.44688102])
115
116 """
117 when = _convert_when(when)
118 (rate, nper, pmt, pv, when) = map(np.asarray, [rate, nper, pmt, pv, when])
119 temp = (1+rate)**nper
120 miter = np.broadcast(rate, nper, pmt, pv, when)
121 zer = np.zeros(miter.shape)
122 fact = np.where(rate==zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer)
123 return -(pv*temp + pmt*fact)
124
125 def pmt(rate, nper, pv, fv=0, when='end'):
126 """
127 Compute the payment against loan principal plus interest.
128
129 Given:
130 * a present value, `pv` (e.g., an amount borrowed)
131 * a future value, `fv` (e.g., 0)
132 * an interest `rate` compounded once per period, of which
133 there are
134 * `nper` total
135 * and (optional) specification of whether payment is made
136 at the beginning (`when` = {'begin', 1}) or the end
137 (`when` = {'end', 0}) of each period
138
139 Return:
140 the (fixed) periodic payment.
141
142 Parameters
143 ----------
144 rate : array_like
145 Rate of interest (per period)
146 nper : array_like
147 Number of compounding periods
148 pv : array_like
149 Present value
150 fv : array_like (optional)
151 Future value (default = 0)
152 when : {{'begin', 1}, {'end', 0}}, {string, int}
153 When payments are due ('begin' (1) or 'end' (0))
154
155 Returns
156 -------
157 out : ndarray
158 Payment against loan plus interest. If all input is scalar, returns a
159 scalar float. If any input is array_like, returns payment for each
160 input element. If multiple inputs are array_like, they all must have
161 the same shape.
162
163 Notes
164 -----
165 The payment is computed by solving the equation::
166
167 fv +
168 pv*(1 + rate)**nper +
169 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0
170
171 or, when ``rate == 0``::
172
173 fv + pv + pmt * nper == 0
174
175 for ``pmt``.
176
177 Note that computing a monthly mortgage payment is only
178 one use for this function. For example, pmt returns the
179 periodic deposit one must make to achieve a specified
180 future balance given an initial deposit, a fixed,
181 periodically compounded interest rate, and the total
182 number of periods.
183
184 References
185 ----------
186 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May).
187 Open Document Format for Office Applications (OpenDocument)v1.2,
188 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version,
189 Pre-Draft 12. Organization for the Advancement of Structured Information
190 Standards (OASIS). Billerica, MA, USA. [ODT Document].
191 Available:
192 http://www.oasis-open.org/committees/documents.php
193 ?wg_abbrev=office-formulaOpenDocument-formula-20090508.odt
194
195 Examples
196 --------
197 What is the monthly payment needed to pay off a $200,000 loan in 15
198 years at an annual interest rate of 7.5%?
199
200 >>> np.pmt(0.075/12, 12*15, 200000)
201 -1854.0247200054619
202
203 In order to pay-off (i.e., have a future-value of 0) the $200,000 obtained
204 today, a monthly payment of $1,854.02 would be required. Note that this
205 example illustrates usage of `fv` having a default value of 0.
206
207 """
208 when = _convert_when(when)
209 (rate, nper, pv, fv, when) = map(np.asarray, [rate, nper, pv, fv, when])
210 temp = (1+rate)**nper
211 miter = np.broadcast(rate, nper, pv, fv, when)
212 zer = np.zeros(miter.shape)
213 fact = np.where(rate==zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer)
214 return -(fv + pv*temp) / fact
215
216 def nper(rate, pmt, pv, fv=0, when='end'):
217 """
218 Compute the number of periodic payments.
219
220 Parameters
221 ----------
222 rate : array_like
223 Rate of interest (per period)
224 pmt : array_like
225 Payment
226 pv : array_like
227 Present value
228 fv : array_like, optional
229 Future value
230 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional
231 When payments are due ('begin' (1) or 'end' (0))
232
233 Notes
234 -----
235 The number of periods ``nper`` is computed by solving the equation::
236
237 fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate*((1+rate)**nper-1) = 0
238
239 but if ``rate = 0`` then::
240
241 fv + pv + pmt*nper = 0
242
243 Examples
244 --------
245 If you only had $150/month to pay towards the loan, how long would it take
246 to pay-off a loan of $8,000 at 7% annual interest?
247
248 >>> print round(np.nper(0.07/12, -150, 8000), 5)
249 64.07335
250
251 So, over 64 months would be required to pay off the loan.
252
253 The same analysis could be done with several different interest rates
254 and/or payments and/or total amounts to produce an entire table.
255
256 >>> np.nper(*(np.ogrid[0.07/12: 0.08/12: 0.01/12,
257 ... -150 : -99 : 50 ,
258 ... 8000 : 9001 : 1000]))
259 array([[[ 64.07334877, 74.06368256],
260 [ 108.07548412, 127.99022654]],
261 [[ 66.12443902, 76.87897353],
262 [ 114.70165583, 137.90124779]]])
263
264 """
265 when = _convert_when(when)
266 (rate, pmt, pv, fv, when) = map(np.asarray, [rate, pmt, pv, fv, when])
267
268 use_zero_rate = False
269 with np.errstate(divide="raise"):
270 try:
271 z = pmt*(1.0+rate*when)/rate
272 except FloatingPointError:
273 use_zero_rate = True
274
275 if use_zero_rate:
276 return (-fv + pv) / (pmt + 0.0)
277 else:
278 A = -(fv + pv)/(pmt+0.0)
279 B = np.log((-fv+z) / (pv+z))/np.log(1.0+rate)
280 miter = np.broadcast(rate, pmt, pv, fv, when)
281 zer = np.zeros(miter.shape)
282 return np.where(rate==zer, A+zer, B+zer) + 0.0
283
284 def ipmt(rate, per, nper, pv, fv=0.0, when='end'):
285 """
286 Compute the interest portion of a payment.
287
288 Parameters
289 ----------
290 rate : scalar or array_like of shape(M, )
291 Rate of interest as decimal (not per cent) per period
292 per : scalar or array_like of shape(M, )
293 Interest paid against the loan changes during the life or the loan.
294 The `per` is the payment period to calculate the interest amount.
295 nper : scalar or array_like of shape(M, )
296 Number of compounding periods
297 pv : scalar or array_like of shape(M, )
298 Present value
299 fv : scalar or array_like of shape(M, ), optional
300 Future value
301 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional
302 When payments are due ('begin' (1) or 'end' (0)).
303 Defaults to {'end', 0}.
304
305 Returns
306 -------
307 out : ndarray
308 Interest portion of payment. If all input is scalar, returns a scalar
309 float. If any input is array_like, returns interest payment for each
310 input element. If multiple inputs are array_like, they all must have
311 the same shape.
312
313 See Also
314 --------
315 ppmt, pmt, pv
316
317 Notes
318 -----
319 The total payment is made up of payment against principal plus interest.
320
321 ``pmt = ppmt + ipmt``
322
323 Examples
324 --------
325 What is the amortization schedule for a 1 year loan of $2500 at
326 8.24% interest per year compounded monthly?
327
328 >>> principal = 2500.00
329
330 The 'per' variable represents the periods of the loan. Remember that
331 financial equations start the period count at 1!
332
333 >>> per = np.arange(1*12) + 1
334 >>> ipmt = np.ipmt(0.0824/12, per, 1*12, principal)
335 >>> ppmt = np.ppmt(0.0824/12, per, 1*12, principal)
336
337 Each element of the sum of the 'ipmt' and 'ppmt' arrays should equal
338 'pmt'.
339
340 >>> pmt = np.pmt(0.0824/12, 1*12, principal)
341 >>> np.allclose(ipmt + ppmt, pmt)
342 True
343
344 >>> fmt = '{0:2d} {1:8.2f} {2:8.2f} {3:8.2f}'
345 >>> for payment in per:
346 ... index = payment - 1
347 ... principal = principal + ppmt[index]
348 ... print fmt.format(payment, ppmt[index], ipmt[index], principal)
349 1 -200.58 -17.17 2299.42
350 2 -201.96 -15.79 2097.46
351 3 -203.35 -14.40 1894.11
352 4 -204.74 -13.01 1689.37
353 5 -206.15 -11.60 1483.22
354 6 -207.56 -10.18 1275.66
355 7 -208.99 -8.76 1066.67
356 8 -210.42 -7.32 856.25
357 9 -211.87 -5.88 644.38
358 10 -213.32 -4.42 431.05
359 11 -214.79 -2.96 216.26
360 12 -216.26 -1.49 -0.00
361
362 >>> interestpd = np.sum(ipmt)
363 >>> np.round(interestpd, 2)
364 -112.98
365
366 """
367 when = _convert_when(when)
368 rate, per, nper, pv, fv, when = np.broadcast_arrays(rate, per, nper, pv, fv, when)
369 total_pmt = pmt(rate, nper, pv, fv, when)
370 ipmt = _rbl(rate, per, total_pmt, pv, when)*rate
371 try:
372 ipmt = np.where(when == 1, ipmt/(1 + rate), ipmt)
373 ipmt = np.where(np.logical_and(when == 1, per == 1), 0.0, ipmt)
374 except IndexError:
375 pass
376 return ipmt
377
378 def _rbl(rate, per, pmt, pv, when):
379 """
380 This function is here to simply have a different name for the 'fv'
381 function to not interfere with the 'fv' keyword argument within the 'ipmt'
382 function. It is the 'remaining balance on loan' which might be useful as
383 it's own function, but is easily calculated with the 'fv' function.
384 """
385 return fv(rate, (per - 1), pmt, pv, when)
386
387 def ppmt(rate, per, nper, pv, fv=0.0, when='end'):
388 """
389 Compute the payment against loan principal.
390
391 Parameters
392 ----------
393 rate : array_like
394 Rate of interest (per period)
395 per : array_like, int
396 Amount paid against the loan changes. The `per` is the period of
397 interest.
398 nper : array_like
399 Number of compounding periods
400 pv : array_like
401 Present value
402 fv : array_like, optional
403 Future value
404 when : {{'begin', 1}, {'end', 0}}, {string, int}
405 When payments are due ('begin' (1) or 'end' (0))
406
407 See Also
408 --------
409 pmt, pv, ipmt
410
411 """
412 total = pmt(rate, nper, pv, fv, when)
413 return total - ipmt(rate, per, nper, pv, fv, when)
414
415 def pv(rate, nper, pmt, fv=0.0, when='end'):
416 """
417 Compute the present value.
418
419 Given:
420 * a future value, `fv`
421 * an interest `rate` compounded once per period, of which
422 there are
423 * `nper` total
424 * a (fixed) payment, `pmt`, paid either
425 * at the beginning (`when` = {'begin', 1}) or the end
426 (`when` = {'end', 0}) of each period
427
428 Return:
429 the value now
430
431 Parameters
432 ----------
433 rate : array_like
434 Rate of interest (per period)
435 nper : array_like
436 Number of compounding periods
437 pmt : array_like
438 Payment
439 fv : array_like, optional
440 Future value
441 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional
442 When payments are due ('begin' (1) or 'end' (0))
443
444 Returns
445 -------
446 out : ndarray, float
447 Present value of a series of payments or investments.
448
449 Notes
450 -----
451 The present value is computed by solving the equation::
452
453 fv +
454 pv*(1 + rate)**nper +
455 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) = 0
456
457 or, when ``rate = 0``::
458
459 fv + pv + pmt * nper = 0
460
461 for `pv`, which is then returned.
462
463 References
464 ----------
465 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May).
466 Open Document Format for Office Applications (OpenDocument)v1.2,
467 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version,
468 Pre-Draft 12. Organization for the Advancement of Structured Information
469 Standards (OASIS). Billerica, MA, USA. [ODT Document].
470 Available:
471 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula
472 OpenDocument-formula-20090508.odt
473
474 Examples
475 --------
476 What is the present value (e.g., the initial investment)
477 of an investment that needs to total $15692.93
478 after 10 years of saving $100 every month? Assume the
479 interest rate is 5% (annually) compounded monthly.
480
481 >>> np.pv(0.05/12, 10*12, -100, 15692.93)
482 -100.00067131625819
483
484 By convention, the negative sign represents cash flow out
485 (i.e., money not available today). Thus, to end up with
486 $15,692.93 in 10 years saving $100 a month at 5% annual
487 interest, one's initial deposit should also be $100.
488
489 If any input is array_like, ``pv`` returns an array of equal shape.
490 Let's compare different interest rates in the example above:
491
492 >>> a = np.array((0.05, 0.04, 0.03))/12
493 >>> np.pv(a, 10*12, -100, 15692.93)
494 array([ -100.00067132, -649.26771385, -1273.78633713])
495
496 So, to end up with the same $15692.93 under the same $100 per month
497 "savings plan," for annual interest rates of 4% and 3%, one would
498 need initial investments of $649.27 and $1273.79, respectively.
499
500 """
501 when = _convert_when(when)
502 (rate, nper, pmt, fv, when) = map(np.asarray, [rate, nper, pmt, fv, when])
503 temp = (1+rate)**nper
504 miter = np.broadcast(rate, nper, pmt, fv, when)
505 zer = np.zeros(miter.shape)
506 fact = np.where(rate == zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer)
507 return -(fv + pmt*fact)/temp
508
509 # Computed with Sage
510 # (y + (r + 1)^n*x + p*((r + 1)^n - 1)*(r*w + 1)/r)/(n*(r + 1)^(n - 1)*x - p*((r + 1)^n - 1)*(r*w + 1)/r^2 + n*p*(r + 1)^(n - 1)*(r*w + 1)/r + p*((r + 1)^n - 1)*w/r)
511
512 def _g_div_gp(r, n, p, x, y, w):
513 t1 = (r+1)**n
514 t2 = (r+1)**(n-1)
515 return (y + t1*x + p*(t1 - 1)*(r*w + 1)/r)/(n*t2*x - p*(t1 - 1)*(r*w + 1)/(r**2) + n*p*t2*(r*w + 1)/r + p*(t1 - 1)*w/r)
516
517 # Use Newton's iteration until the change is less than 1e-6
518 # for all values or a maximum of 100 iterations is reached.
519 # Newton's rule is
520 # r_{n+1} = r_{n} - g(r_n)/g'(r_n)
521 # where
522 # g(r) is the formula
523 # g'(r) is the derivative with respect to r.
524 def rate(nper, pmt, pv, fv, when='end', guess=0.10, tol=1e-6, maxiter=100):
525 """
526 Compute the rate of interest per period.
527
528 Parameters
529 ----------
530 nper : array_like
531 Number of compounding periods
532 pmt : array_like
533 Payment
534 pv : array_like
535 Present value
536 fv : array_like
537 Future value
538 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional
539 When payments are due ('begin' (1) or 'end' (0))
540 guess : float, optional
541 Starting guess for solving the rate of interest
542 tol : float, optional
543 Required tolerance for the solution
544 maxiter : int, optional
545 Maximum iterations in finding the solution
546
547 Notes
548 -----
549 The rate of interest is computed by iteratively solving the
550 (non-linear) equation::
551
552 fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate * ((1+rate)**nper - 1) = 0
553
554 for ``rate``.
555
556 References
557 ----------
558 Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). Open Document
559 Format for Office Applications (OpenDocument)v1.2, Part 2: Recalculated
560 Formula (OpenFormula) Format - Annotated Version, Pre-Draft 12.
561 Organization for the Advancement of Structured Information Standards
562 (OASIS). Billerica, MA, USA. [ODT Document]. Available:
563 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula
564 OpenDocument-formula-20090508.odt
565
566 """
567 when = _convert_when(when)
568 (nper, pmt, pv, fv, when) = map(np.asarray, [nper, pmt, pv, fv, when])
569 rn = guess
570 iter = 0
571 close = False
572 while (iter < maxiter) and not close:
573 rnp1 = rn - _g_div_gp(rn, nper, pmt, pv, fv, when)
574 diff = abs(rnp1-rn)
575 close = np.all(diff<tol)
576 iter += 1
577 rn = rnp1
578 if not close:
579 # Return nan's in array of the same shape as rn
580 return np.nan + rn
581 else:
582 return rn
583
584 def irr(values):
585 """
586 Return the Internal Rate of Return (IRR).
587
588 This is the "average" periodically compounded rate of return
589 that gives a net present value of 0.0; for a more complete explanation,
590 see Notes below.
591
592 Parameters
593 ----------
594 values : array_like, shape(N,)
595 Input cash flows per time period. By convention, net "deposits"
596 are negative and net "withdrawals" are positive. Thus, for example,
597 at least the first element of `values`, which represents the initial
598 investment, will typically be negative.
599
600 Returns
601 -------
602 out : float
603 Internal Rate of Return for periodic input values.
604
605 Notes
606 -----
607 The IRR is perhaps best understood through an example (illustrated
608 using np.irr in the Examples section below). Suppose one invests
609 100 units and then makes the following withdrawals at regular
610 (fixed) intervals: 39, 59, 55, 20. Assuming the ending value is 0,
611 one's 100 unit investment yields 173 units; however, due to the
612 combination of compounding and the periodic withdrawals, the
613 "average" rate of return is neither simply 0.73/4 nor (1.73)^0.25-1.
614 Rather, it is the solution (for :math:`r`) of the equation:
615
616 .. math:: -100 + \\frac{39}{1+r} + \\frac{59}{(1+r)^2}
617 + \\frac{55}{(1+r)^3} + \\frac{20}{(1+r)^4} = 0
618
619 In general, for `values` :math:`= [v_0, v_1, ... v_M]`,
620 irr is the solution of the equation: [G]_
621
622 .. math:: \\sum_{t=0}^M{\\frac{v_t}{(1+irr)^{t}}} = 0
623
624 References
625 ----------
626 .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed.,
627 Addison-Wesley, 2003, pg. 348.
628
629 Examples
630 --------
631 >>> print round(np.irr([-100, 39, 59, 55, 20]), 5)
632 0.28095
633
634 (Compare with the Example given for numpy.lib.financial.npv)
635
636 """
637 res = np.roots(values[::-1])
638 # Find the root(s) between 0 and 1
639 mask = (res.imag == 0) & (res.real > 0) & (res.real <= 1)
640 res = res[mask].real
641 if res.size == 0:
642 return np.nan
643 rate = 1.0/res - 1
644 if rate.size == 1:
645 rate = rate.item()
646 return rate
647
648 def npv(rate, values):
649 """
650 Returns the NPV (Net Present Value) of a cash flow series.
651
652 Parameters
653 ----------
654 rate : scalar
655 The discount rate.
656 values : array_like, shape(M, )
657 The values of the time series of cash flows. The (fixed) time
658 interval between cash flow "events" must be the same as that
659 for which `rate` is given (i.e., if `rate` is per year, then
660 precisely a year is understood to elapse between each cash flow
661 event). By convention, investments or "deposits" are negative,
662 income or "withdrawals" are positive; `values` must begin with
663 the initial investment, thus `values[0]` will typically be
664 negative.
665
666 Returns
667 -------
668 out : float
669 The NPV of the input cash flow series `values` at the discount `rate`.
670
671 Notes
672 -----
673 Returns the result of: [G]_
674
675 .. math :: \\sum_{t=0}^{M-1}{\\frac{values_t}{(1+rate)^{t}}}
676
677 References
678 ----------
679 .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed.,
680 Addison-Wesley, 2003, pg. 346.
681
682 Examples
683 --------
684 >>> np.npv(0.281,[-100, 39, 59, 55, 20])
685 -0.0084785916384548798
686
687 (Compare with the Example given for numpy.lib.financial.irr)
688
689 """
690 values = np.asarray(values)
691 return (values / (1+rate)**np.arange(0, len(values))).sum(axis=0)
692
693 def mirr(values, finance_rate, reinvest_rate):
694 """
695 Modified internal rate of return.
696
697 Parameters
698 ----------
699 values : array_like
700 Cash flows (must contain at least one positive and one negative value)
701 or nan is returned. The first value is considered a sunk cost at time zero.
702 finance_rate : scalar
703 Interest rate paid on the cash flows
704 reinvest_rate : scalar
705 Interest rate received on the cash flows upon reinvestment
706
707 Returns
708 -------
709 out : float
710 Modified internal rate of return
711
712 """
713
714 values = np.asarray(values, dtype=np.double)
715 n = values.size
716 pos = values > 0
717 neg = values < 0
718 if not (pos.any() and neg.any()):
719 return np.nan
720 numer = np.abs(npv(reinvest_rate, values*pos))
721 denom = np.abs(npv(finance_rate, values*neg))
722 return (numer/denom)**(1.0/(n - 1))*(1 + reinvest_rate) - 1
723
724 if __name__ == '__main__':
725 import doctest
726 import numpy as np
727 doctest.testmod(verbose=True)
728
[end of numpy/lib/financial.py]
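A quick usage sketch for `mirr`, the one routine in the listing above whose docstring carries no Examples section (the cash flows and rates below are invented purely for illustration, not taken from the module):

```python
import numpy as np

# One up-front outlay of 100 financed at 10%, with the positive flows
# reinvested at 12%; mirr requires at least one positive and one negative flow.
rate = np.mirr([-100, 39, 59, 55, 20], finance_rate=0.10, reinvest_rate=0.12)
print(rate)  # roughly 0.204 for these made-up numbers
```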
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
9464075c7260475bdd5d693b3046379a2bb62482
|
PyArray_Diagonal view transition for 1.9 (Trac #2136)
_Original ticket http://projects.scipy.org/numpy/ticket/2136 on 2012-05-19 by @njsmith, assigned to unknown._
Originally this was scheduled to happen in 1.8, but 1.7 was released 2013-02-10, and then we're accelerating the release schedule so that 1.8 will be only a few months later -- which seems too soon to actually take the next step in this deprecation plan.
Current plan: make the changes below in whichever release comes on or after: ?? (to be determined on mailing list)
##
Starting in 1.<?>, PyArray_Diagonal is supposed to start returning a non-writeable view.
To do:
- Make the (trivial) change at the bottom of PyArray_Diagonal
- Update the numpy.diagonal documentation
- Make a note in the release notes
- Optionally, remove NPY_ARRAY_WARN_ON_WRITE flag. If so, see the
below pull request to find the code to change. Even if we decide
to keep it around for a rainy day, the message in
array_might_be_written should have the references to diagonals
removed.
- File a new ticket to make the returned array writeable, at some future date.
Reference: https://github.com/numpy/numpy/pull/280
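For concreteness, a small sketch of the behaviour the plan above is driving towards (NumPy 1.9 semantics; on 1.7/1.8 the write would instead succeed with a FutureWarning):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
d = a.diagonal()           # 1.9: a read-only view on `a` instead of a copy
try:
    d[0] = 100             # raises ValueError once the view is marked read-only
except ValueError:
    pass
d = a.diagonal().copy()    # an explicit copy keeps the old writeable-copy behaviour
d[0] = 100                 # fine, and `a` is left untouched
```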
|
_@rgommers wrote on 2012-05-19_
<reformat description>
_@rgommers wrote on 2012-05-19_
Note, don't close this ticket once the change for 1.8 is made - there's a change to be made for 1.9 as well. See PR-280 for details.
_@njsmith wrote on 2012-05-19_
Shoot, I meant to put "File a new ticket for the 1.9 milestone" as the last item on the todo list and then forgot. Thanks for catching that :-)
|
2013-08-20T18:56:20Z
|
<patch>
diff --git a/numpy/add_newdocs.py b/numpy/add_newdocs.py
--- a/numpy/add_newdocs.py
+++ b/numpy/add_newdocs.py
@@ -3316,7 +3316,9 @@ def luf(lamdaexpr, *args, **kwargs):
"""
a.diagonal(offset=0, axis1=0, axis2=1)
- Return specified diagonals.
+ Return specified diagonals. In NumPy 1.9 the returned array is a
+ read-only view instead of a copy as in previous NumPy versions. In
+ NumPy 1.10 the read-only restriction will be removed.
Refer to :func:`numpy.diagonal` for full documentation.
diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -1125,16 +1125,15 @@ def diagonal(a, offset=0, axis1=0, axis2=1):
In versions of NumPy prior to 1.7, this function always returned a new,
independent array containing a copy of the values in the diagonal.
- In NumPy 1.7, it continues to return a copy of the diagonal, but depending
- on this fact is deprecated. Writing to the resulting array continues to
- work as it used to, but a FutureWarning will be issued.
+ In NumPy 1.7 and 1.8, it continues to return a copy of the diagonal,
+ but depending on this fact is deprecated. Writing to the resulting
+ array continues to work as it used to, but a FutureWarning is issued.
- In NumPy 1.9, it will switch to returning a read-only view on the original
- array. Attempting to write to the resulting array will produce an error.
+ In NumPy 1.9 it returns a read-only view on the original array.
+ Attempting to write to the resulting array will produce an error.
- In NumPy 1.10, it will still return a view, but this view will no longer be
- marked read-only. Writing to the returned array will alter your original
- array as well.
+ In NumPy 1.10, it will return a read/write view, Writing to the returned
+ array will alter your original array.
If you don't write to the array returned by this function, then you can
just ignore all of the above.
</patch>
|
[]
|
[]
| |||
numpy__numpy-16408
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC,BLD: Switch to `xelatex` engine for latex docs?
When building the latex/pdf version of the documentation, sphinx is currently configured to use `pdflatex` (the default) to build the pdfs of the user/reference manuals. There are several places in the documentation where the use of unicode characters causes `inputenc` warnings when building the pdf. In some cases these can be resolved by ensuring that extra latex packages are installed (e.g. `texlive-langgreek` adds the necessary mappings for some Greek Unicode characters), but in other cases (e.g. U+22EE vertical dots) it is more difficult to find an appropriate package (or the mapping can be done manually via `\DeclareUnicodeCharacter`).
In addition, there seem to be some URL ref problems that are avoided by `xelatex` (see sphinx-doc/sphinx#7723).
Please share any thoughts and possible pros/cons of switching to `xelatex` for the NumPy latex/pdf docs. Note that sphinx provides a `latex_engine` configuration option that makes this simple to try.
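For reference, trying it is a one-line change in `doc/source/conf.py`; the commented-out lines sketch the pdflatex alternative of mapping awkward code points by hand (illustrative values only):

```python
# Switch the PDF builder from the default pdflatex.
latex_engine = 'xelatex'

# The manual pdflatex route mentioned above (sketch, not a tested config):
# latex_elements = {
#     'preamble': r'\DeclareUnicodeCharacter{22EE}{\ensuremath{\vdots}}',
# }
```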
</issue>
<code>
[start of README.md]
1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60">
2
3 [](
4 https://travis-ci.org/numpy/numpy)
5 [](
6 https://dev.azure.com/numpy/numpy/_build/latest?definitionId=5)
7 [](
8 https://codecov.io/gh/numpy/numpy)
9
10 NumPy is the fundamental package needed for scientific computing with Python.
11
12 - **Website:** https://www.numpy.org
13 - **Documentation:** https://numpy.org/doc
14 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion
15 - **Source code:** https://github.com/numpy/numpy
16 - **Contributing:** https://www.numpy.org/devdocs/dev/index.html
17 - **Bug reports:** https://github.com/numpy/numpy/issues
18 - **Report a security vulnerability:** https://tidelift.com/docs/security
19
20 It provides:
21
22 - a powerful N-dimensional array object
23 - sophisticated (broadcasting) functions
24 - tools for integrating C/C++ and Fortran code
25 - useful linear algebra, Fourier transform, and random number capabilities
26
27 Testing:
28
29 - NumPy versions ≥ 1.15 require `pytest`
30 - NumPy versions < 1.15 require `nose`
31
32 Tests can then be run after installation with:
33
34 python -c 'import numpy; numpy.test()'
35
36
37 Call for Contributions
38 ----------------------
39
40 NumPy appreciates help from a wide range of different backgrounds.
41 Work such as high level documentation or website improvements are valuable
42 and we would like to grow our team with people filling these roles.
43 Small improvements or fixes are always appreciated and issues labeled as easy
44 may be a good starting point.
45 If you are considering larger contributions outside the traditional coding work,
46 please contact us through the mailing list.
47
48
49 [](https://numfocus.org)
50
[end of README.md]
[start of doc/neps/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # NumPy Enhancement Proposals documentation build configuration file, created by
4 # sphinx-quickstart on Mon Dec 11 12:45:09 2017.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 #
19 import os
20 # import sys
21 # sys.path.insert(0, os.path.abspath('.'))
22
23
24 # -- General configuration ------------------------------------------------
25
26 # If your documentation needs a minimal Sphinx version, state it here.
27 #
28 # needs_sphinx = '1.0'
29
30 # Add any Sphinx extension module names here, as strings. They can be
31 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
32 # ones.
33 extensions = [
34 'sphinx.ext.imgmath',
35 'sphinx.ext.intersphinx',
36 ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['../source/_templates/']
40
41 # The suffix(es) of source filenames.
42 # You can specify multiple suffix as a list of string:
43 #
44 # source_suffix = ['.rst', '.md']
45 source_suffix = '.rst'
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'NumPy Enhancement Proposals'
52 copyright = u'2017-2018, NumPy Developers'
53 author = u'NumPy Developers'
54
55 # The version info for the project you're documenting, acts as replacement for
56 # |version| and |release|, also used in various other places throughout the
57 # built documents.
58 #
59 # The short X.Y version.
60 version = u''
61 # The full version, including alpha/beta/rc tags.
62 release = u''
63
64 # The language for content autogenerated by Sphinx. Refer to documentation
65 # for a list of supported languages.
66 #
67 # This is also used if you do content translation via gettext catalogs.
68 # Usually you set "language" from the command line for these cases.
69 language = None
70
71 # List of patterns, relative to source directory, that match files and
72 # directories to ignore when looking for source files.
73 # This patterns also effect to html_static_path and html_extra_path
74 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
75
76 # The name of the Pygments (syntax highlighting) style to use.
77 pygments_style = 'sphinx'
78
79 # If true, `todo` and `todoList` produce output, else they produce nothing.
80 todo_include_todos = False
81
82
83 ## -- Options for HTML output ----------------------------------------------
84 #
85 ## The theme to use for HTML and HTML Help pages. See the documentation for
86 ## a list of builtin themes.
87 ##
88 #html_theme = 'alabaster'
89 #
90 ## Theme options are theme-specific and customize the look and feel of a theme
91 ## further. For a list of options available for each theme, see the
92 ## documentation.
93 ##
94 ## html_theme_options = {}
95 #
96 ## Add any paths that contain custom static files (such as style sheets) here,
97 ## relative to this directory. They are copied after the builtin static files,
98 ## so a file named "default.css" will overwrite the builtin "default.css".
99 #html_static_path = ['_static']
100 #
101 ## Custom sidebar templates, must be a dictionary that maps document names
102 ## to template names.
103 ##
104 ## This is required for the alabaster theme
105 ## refs: https://alabaster.readthedocs.io/en/latest/installation.html#sidebars
106 #html_sidebars = {
107 # '**': [
108 # 'relations.html', # needs 'show_related': True theme option to display
109 # 'searchbox.html',
110 # ]
111 #}
112
113 ## -----------------------------------------------------------------------------
114 # HTML output
115 # -----------------------------------------------------------------------------
116
117 themedir = os.path.join(os.pardir, 'scipy-sphinx-theme', '_theme')
118 if not os.path.isdir(themedir):
119 raise RuntimeError("Get the scipy-sphinx-theme first, "
120 "via git submodule init && git submodule update")
121
122 html_theme = 'scipy'
123 html_theme_path = [themedir]
124
125 #if 'scipyorg' in tags:
126 if True:
127 # Build for the scipy.org website
128 html_theme_options = {
129 "edit_link": True,
130 "sidebar": "right",
131 "scipy_org_logo": True,
132 "rootlinks": [("https://scipy.org/", "Scipy.org"),
133 ("https://docs.scipy.org/", "Docs")]
134 }
135 else:
136 # Default build
137 html_theme_options = {
138 "edit_link": False,
139 "sidebar": "left",
140 "scipy_org_logo": False,
141 "rootlinks": []
142 }
143 html_sidebars = {'index': 'indexsidebar.html'}
144
145 #html_additional_pages = {
146 # 'index': 'indexcontent.html',
147 #}
148
149 html_title = "%s" % (project)
150 html_static_path = ['../source/_static']
151 html_last_updated_fmt = '%b %d, %Y'
152
153 html_use_modindex = True
154 html_copy_source = False
155 html_domain_indices = False
156 html_file_suffix = '.html'
157
158 htmlhelp_basename = 'numpy'
159
160 if 'sphinx.ext.pngmath' in extensions:
161 pngmath_use_preview = True
162 pngmath_dvipng_args = ['-gamma', '1.5', '-D', '96', '-bg', 'Transparent']
163
164 plot_html_show_formats = False
165 plot_html_show_source_link = False
166
167
168
169 # -- Options for HTMLHelp output ------------------------------------------
170
171 # Output file base name for HTML help builder.
172 htmlhelp_basename = 'NumPyEnhancementProposalsdoc'
173
174
175 # -- Options for LaTeX output ---------------------------------------------
176
177 latex_elements = {
178 # The paper size ('letterpaper' or 'a4paper').
179 #
180 # 'papersize': 'letterpaper',
181
182 # The font size ('10pt', '11pt' or '12pt').
183 #
184 # 'pointsize': '10pt',
185
186 # Additional stuff for the LaTeX preamble.
187 #
188 # 'preamble': '',
189
190 # Latex figure (float) alignment
191 #
192 # 'figure_align': 'htbp',
193 }
194
195 # Grouping the document tree into LaTeX files. List of tuples
196 # (source start file, target name, title,
197 # author, documentclass [howto, manual, or own class]).
198 latex_documents = [
199 (master_doc, 'NumPyEnhancementProposals.tex', u'NumPy Enhancement Proposals Documentation',
200 u'NumPy Developers', 'manual'),
201 ]
202
203
204 # -- Options for manual page output ---------------------------------------
205
206 # One entry per manual page. List of tuples
207 # (source start file, name, description, authors, manual section).
208 man_pages = [
209 (master_doc, 'numpyenhancementproposals', u'NumPy Enhancement Proposals Documentation',
210 [author], 1)
211 ]
212
213
214 # -- Options for Texinfo output -------------------------------------------
215
216 # Grouping the document tree into Texinfo files. List of tuples
217 # (source start file, target name, title, author,
218 # dir menu entry, description, category)
219 texinfo_documents = [
220 (master_doc, 'NumPyEnhancementProposals', u'NumPy Enhancement Proposals Documentation',
221 author, 'NumPyEnhancementProposals', 'One line description of project.',
222 'Miscellaneous'),
223 ]
224
225 # -----------------------------------------------------------------------------
226 # Intersphinx configuration
227 # -----------------------------------------------------------------------------
228 intersphinx_mapping = {
229 'python': ('https://docs.python.org/dev', None),
230 'numpy': ('https://numpy.org/devdocs', None),
231 'scipy': ('https://docs.scipy.org/doc/scipy/reference', None),
232 'matplotlib': ('https://matplotlib.org', None)
233 }
234
235
[end of doc/neps/conf.py]
[start of doc/source/conf.py]
1 # -*- coding: utf-8 -*-
2 import os
3 import re
4 import sys
5
6 # Minimum version, enforced by sphinx
7 needs_sphinx = '2.2.0'
8
9 # -----------------------------------------------------------------------------
10 # General configuration
11 # -----------------------------------------------------------------------------
12
13 # Add any Sphinx extension module names here, as strings. They can be extensions
14 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
15
16 sys.path.insert(0, os.path.abspath('../sphinxext'))
17
18 extensions = [
19 'sphinx.ext.autodoc',
20 'numpydoc',
21 'sphinx.ext.intersphinx',
22 'sphinx.ext.coverage',
23 'sphinx.ext.doctest',
24 'sphinx.ext.autosummary',
25 'sphinx.ext.graphviz',
26 'sphinx.ext.ifconfig',
27 'matplotlib.sphinxext.plot_directive',
28 'IPython.sphinxext.ipython_console_highlighting',
29 'IPython.sphinxext.ipython_directive',
30 'sphinx.ext.imgmath',
31 ]
32
33 imgmath_image_format = 'svg'
34
35 # Add any paths that contain templates here, relative to this directory.
36 templates_path = ['_templates']
37
38 # The suffix of source filenames.
39 source_suffix = '.rst'
40
41 master_doc = 'contents'
42
43 # General substitutions.
44 project = 'NumPy'
45 copyright = '2008-2020, The SciPy community'
46
47 # The default replacements for |version| and |release|, also used in various
48 # other places throughout the built documents.
49 #
50 import numpy
51 # The short X.Y version (including .devXXXX, rcX, b1 suffixes if present)
52 version = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', numpy.__version__)
53 version = re.sub(r'(\.dev\d+).*?$', r'\1', version)
54 # The full version, including alpha/beta/rc tags.
55 release = numpy.__version__
56 print("%s %s" % (version, release))
57
58 # There are two options for replacing |today|: either, you set today to some
59 # non-false value, then it is used:
60 #today = ''
61 # Else, today_fmt is used as the format for a strftime call.
62 today_fmt = '%B %d, %Y'
63
64 # List of documents that shouldn't be included in the build.
65 #unused_docs = []
66
67 # The reST default role (used for this markup: `text`) to use for all documents.
68 default_role = "autolink"
69
70 # List of directories, relative to source directories, that shouldn't be searched
71 # for source files.
72 exclude_dirs = []
73
74 # If true, '()' will be appended to :func: etc. cross-reference text.
75 add_function_parentheses = False
76
77 # If true, the current module name will be prepended to all description
78 # unit titles (such as .. function::).
79 #add_module_names = True
80
81 # If true, sectionauthor and moduleauthor directives will be shown in the
82 # output. They are ignored by default.
83 #show_authors = False
84
85 # The name of the Pygments (syntax highlighting) style to use.
86 pygments_style = 'sphinx'
87
88 def setup(app):
89 # add a config value for `ifconfig` directives
90 app.add_config_value('python_version_major', str(sys.version_info.major), 'env')
91 app.add_lexer('NumPyC', NumPyLexer(stripnl=False))
92
93 # -----------------------------------------------------------------------------
94 # HTML output
95 # -----------------------------------------------------------------------------
96
97 themedir = os.path.join(os.pardir, 'scipy-sphinx-theme', '_theme')
98 if not os.path.isdir(themedir):
99 raise RuntimeError("Get the scipy-sphinx-theme first, "
100 "via git submodule init && git submodule update")
101
102 html_theme = 'scipy'
103 html_theme_path = [themedir]
104
105 if 'scipyorg' in tags:
106 # Build for the scipy.org website
107 html_theme_options = {
108 "edit_link": True,
109 "sidebar": "right",
110 "scipy_org_logo": True,
111 "rootlinks": [("https://scipy.org/", "Scipy.org"),
112 ("https://docs.scipy.org/", "Docs")]
113 }
114 else:
115 # Default build
116 html_theme_options = {
117 "edit_link": False,
118 "sidebar": "left",
119 "scipy_org_logo": False,
120 "rootlinks": [("https://numpy.org/", "NumPy.org"),
121 ("https://numpy.org/doc", "Docs"),
122 ]
123 }
124 html_sidebars = {'index': ['indexsidebar.html', 'searchbox.html']}
125
126 html_additional_pages = {
127 'index': 'indexcontent.html',
128 }
129
130 html_title = "%s v%s Manual" % (project, version)
131 html_static_path = ['_static']
132 html_last_updated_fmt = '%b %d, %Y'
133
134 html_use_modindex = True
135 html_copy_source = False
136 html_domain_indices = False
137 html_file_suffix = '.html'
138
139 htmlhelp_basename = 'numpy'
140
141 if 'sphinx.ext.pngmath' in extensions:
142 pngmath_use_preview = True
143 pngmath_dvipng_args = ['-gamma', '1.5', '-D', '96', '-bg', 'Transparent']
144
145 plot_html_show_formats = False
146 plot_html_show_source_link = False
147
148 # -----------------------------------------------------------------------------
149 # LaTeX output
150 # -----------------------------------------------------------------------------
151
152 # The paper size ('letter' or 'a4').
153 #latex_paper_size = 'letter'
154
155 # The font size ('10pt', '11pt' or '12pt').
156 #latex_font_size = '10pt'
157
158 # Grouping the document tree into LaTeX files. List of tuples
159 # (source start file, target name, title, author, document class [howto/manual]).
160 _stdauthor = 'Written by the NumPy community'
161 latex_documents = [
162 ('reference/index', 'numpy-ref.tex', 'NumPy Reference',
163 _stdauthor, 'manual'),
164 ('user/index', 'numpy-user.tex', 'NumPy User Guide',
165 _stdauthor, 'manual'),
166 ]
167
168 # The name of an image file (relative to this directory) to place at the top of
169 # the title page.
170 #latex_logo = None
171
172 # For "manual" documents, if this is true, then toplevel headings are parts,
173 # not chapters.
174 #latex_use_parts = False
175
176 latex_elements = {
177 'fontenc': r'\usepackage[LGR,T1]{fontenc}'
178 }
179
180 # Additional stuff for the LaTeX preamble.
181 latex_elements['preamble'] = r'''
182 % In the parameters section, place a newline after the Parameters
183 % header
184 \usepackage{xcolor}
185 \usepackage{expdlist}
186 \let\latexdescription=\description
187 \def\description{\latexdescription{}{} \breaklabel}
188 % but expdlist old LaTeX package requires fixes:
189 % 1) remove extra space
190 \usepackage{etoolbox}
191 \makeatletter
192 \patchcmd\@item{{\@breaklabel} }{{\@breaklabel}}{}{}
193 \makeatother
194 % 2) fix bug in expdlist's way of breaking the line after long item label
195 \makeatletter
196 \def\breaklabel{%
197 \def\@breaklabel{%
198 \leavevmode\par
199 % now a hack because Sphinx inserts \leavevmode after term node
200 \def\leavevmode{\def\leavevmode{\unhbox\voidb@x}}%
201 }%
202 }
203 \makeatother
204
205 % Make Examples/etc section headers smaller and more compact
206 \makeatletter
207 \titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
208 {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
209 \titlespacing*{\paragraph}{0pt}{1ex}{0pt}
210 \makeatother
211
212 % Fix footer/header
213 \renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
214 \renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
215 '''
216
217 # Documents to append as an appendix to all manuals.
218 #latex_appendices = []
219
220 # If false, no module index is generated.
221 latex_use_modindex = False
222
223
224 # -----------------------------------------------------------------------------
225 # Texinfo output
226 # -----------------------------------------------------------------------------
227
228 texinfo_documents = [
229 ("contents", 'numpy', 'NumPy Documentation', _stdauthor, 'NumPy',
230 "NumPy: array processing for numbers, strings, records, and objects.",
231 'Programming',
232 1),
233 ]
234
235
236 # -----------------------------------------------------------------------------
237 # Intersphinx configuration
238 # -----------------------------------------------------------------------------
239 intersphinx_mapping = {
240 'python': ('https://docs.python.org/dev', None),
241 'scipy': ('https://docs.scipy.org/doc/scipy/reference', None),
242 'matplotlib': ('https://matplotlib.org', None),
243 'imageio': ('https://imageio.readthedocs.io/en/stable', None),
244 'skimage': ('https://scikit-image.org/docs/stable', None)
245 }
246
247
248 # -----------------------------------------------------------------------------
249 # NumPy extensions
250 # -----------------------------------------------------------------------------
251
252 # If we want to do a phantom import from an XML file for all autodocs
253 phantom_import_file = 'dump.xml'
254
255 # Make numpydoc to generate plots for example sections
256 numpydoc_use_plots = True
257
258 # -----------------------------------------------------------------------------
259 # Autosummary
260 # -----------------------------------------------------------------------------
261
262 autosummary_generate = True
263
264 # -----------------------------------------------------------------------------
265 # Coverage checker
266 # -----------------------------------------------------------------------------
267 coverage_ignore_modules = r"""
268 """.split()
269 coverage_ignore_functions = r"""
270 test($|_) (some|all)true bitwise_not cumproduct pkgload
271 generic\.
272 """.split()
273 coverage_ignore_classes = r"""
274 """.split()
275
276 coverage_c_path = []
277 coverage_c_regexes = {}
278 coverage_ignore_c_items = {}
279
280
281 # -----------------------------------------------------------------------------
282 # Plots
283 # -----------------------------------------------------------------------------
284 plot_pre_code = """
285 import numpy as np
286 np.random.seed(0)
287 """
288 plot_include_source = True
289 plot_formats = [('png', 100), 'pdf']
290
291 import math
292 phi = (math.sqrt(5) + 1)/2
293
294 plot_rcparams = {
295 'font.size': 8,
296 'axes.titlesize': 8,
297 'axes.labelsize': 8,
298 'xtick.labelsize': 8,
299 'ytick.labelsize': 8,
300 'legend.fontsize': 8,
301 'figure.figsize': (3*phi, 3),
302 'figure.subplot.bottom': 0.2,
303 'figure.subplot.left': 0.2,
304 'figure.subplot.right': 0.9,
305 'figure.subplot.top': 0.85,
306 'figure.subplot.wspace': 0.4,
307 'text.usetex': False,
308 }
309
310
311 # -----------------------------------------------------------------------------
312 # Source code links
313 # -----------------------------------------------------------------------------
314
315 import inspect
316 from os.path import relpath, dirname
317
318 for name in ['sphinx.ext.linkcode', 'numpydoc.linkcode']:
319 try:
320 __import__(name)
321 extensions.append(name)
322 break
323 except ImportError:
324 pass
325 else:
326 print("NOTE: linkcode extension not found -- no links to source generated")
327
328 def linkcode_resolve(domain, info):
329 """
330 Determine the URL corresponding to Python object
331 """
332 if domain != 'py':
333 return None
334
335 modname = info['module']
336 fullname = info['fullname']
337
338 submod = sys.modules.get(modname)
339 if submod is None:
340 return None
341
342 obj = submod
343 for part in fullname.split('.'):
344 try:
345 obj = getattr(obj, part)
346 except Exception:
347 return None
348
349 # strip decorators, which would resolve to the source of the decorator
350 # possibly an upstream bug in getsourcefile, bpo-1764286
351 try:
352 unwrap = inspect.unwrap
353 except AttributeError:
354 pass
355 else:
356 obj = unwrap(obj)
357
358 try:
359 fn = inspect.getsourcefile(obj)
360 except Exception:
361 fn = None
362 if not fn:
363 return None
364
365 try:
366 source, lineno = inspect.getsourcelines(obj)
367 except Exception:
368 lineno = None
369
370 if lineno:
371 linespec = "#L%d-L%d" % (lineno, lineno + len(source) - 1)
372 else:
373 linespec = ""
374
375 fn = relpath(fn, start=dirname(numpy.__file__))
376
377 if 'dev' in numpy.__version__:
378 return "https://github.com/numpy/numpy/blob/master/numpy/%s%s" % (
379 fn, linespec)
380 else:
381 return "https://github.com/numpy/numpy/blob/v%s/numpy/%s%s" % (
382 numpy.__version__, fn, linespec)
383
384 from pygments.lexers import CLexer
385 import copy
386
387 class NumPyLexer(CLexer):
388 name = 'NUMPYLEXER'
389
390 tokens = copy.deepcopy(CLexer.tokens)
391 # Extend the regex for valid identifiers with @
392 for k, val in tokens.items():
393 for i, v in enumerate(val):
394 if isinstance(v, tuple):
395 if isinstance(v[0], str):
396 val[i] = (v[0].replace('a-zA-Z', 'a-zA-Z@'),) + v[1:]
397
[end of doc/source/conf.py]
[start of numpy/__init__.py]
1 """
2 NumPy
3 =====
4
5 Provides
6 1. An array object of arbitrary homogeneous items
7 2. Fast mathematical operations over arrays
8 3. Linear Algebra, Fourier Transforms, Random Number Generation
9
10 How to use the documentation
11 ----------------------------
12 Documentation is available in two forms: docstrings provided
13 with the code, and a loose standing reference guide, available from
14 `the NumPy homepage <https://www.scipy.org>`_.
15
16 We recommend exploring the docstrings using
17 `IPython <https://ipython.org>`_, an advanced Python shell with
18 TAB-completion and introspection capabilities. See below for further
19 instructions.
20
21 The docstring examples assume that `numpy` has been imported as `np`::
22
23 >>> import numpy as np
24
25 Code snippets are indicated by three greater-than signs::
26
27 >>> x = 42
28 >>> x = x + 1
29
30 Use the built-in ``help`` function to view a function's docstring::
31
32 >>> help(np.sort)
33 ... # doctest: +SKIP
34
35 For some objects, ``np.info(obj)`` may provide additional help. This is
36 particularly true if you see the line "Help on ufunc object:" at the top
37 of the help() page. Ufuncs are implemented in C, not Python, for speed.
38 The native Python help() does not know how to view their help, but our
39 np.info() function does.
40
41 To search for documents containing a keyword, do::
42
43 >>> np.lookfor('keyword')
44 ... # doctest: +SKIP
45
46 General-purpose documents like a glossary and help on the basic concepts
47 of numpy are available under the ``doc`` sub-module::
48
49 >>> from numpy import doc
50 >>> help(doc)
51 ... # doctest: +SKIP
52
53 Available subpackages
54 ---------------------
55 doc
56 Topical documentation on broadcasting, indexing, etc.
57 lib
58 Basic functions used by several sub-packages.
59 random
60 Core Random Tools
61 linalg
62 Core Linear Algebra Tools
63 fft
64 Core FFT routines
65 polynomial
66 Polynomial tools
67 testing
68 NumPy testing tools
69 f2py
70 Fortran to Python Interface Generator.
71 distutils
72 Enhancements to distutils with support for
73 Fortran compilers support and more.
74
75 Utilities
76 ---------
77 test
78 Run numpy unittests
79 show_config
80 Show numpy build configuration
81 dual
82 Overwrite certain functions with high-performance SciPy tools.
83 Note: `numpy.dual` is deprecated. Use the functions from NumPy or Scipy
84 directly instead of importing them from `numpy.dual`.
85 matlib
86 Make everything matrices.
87 __version__
88 NumPy version string
89
90 Viewing documentation using IPython
91 -----------------------------------
92 Start IPython with the NumPy profile (``ipython -p numpy``), which will
93 import `numpy` under the alias `np`. Then, use the ``cpaste`` command to
94 paste examples into the shell. To see which functions are available in
95 `numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
96 ``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
97 down the list. To view the docstring for a function, use
98 ``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view
99 the source code).
100
101 Copies vs. in-place operation
102 -----------------------------
103 Most of the functions in `numpy` return a copy of the array argument
104 (e.g., `np.sort`). In-place versions of these functions are often
105 available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
106 Exceptions to this rule are documented.
107
108 """
109 import sys
110 import warnings
111
112 from ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning
113 from ._globals import _NoValue
114
115 # We first need to detect if we're being called as part of the numpy setup
116 # procedure itself in a reliable manner.
117 try:
118 __NUMPY_SETUP__
119 except NameError:
120 __NUMPY_SETUP__ = False
121
122 if __NUMPY_SETUP__:
123 sys.stderr.write('Running from numpy source directory.\n')
124 else:
125 try:
126 from numpy.__config__ import show as show_config
127 except ImportError:
128 msg = """Error importing numpy: you should not try to import numpy from
129 its source directory; please exit the numpy source tree, and relaunch
130 your python interpreter from there."""
131 raise ImportError(msg)
132
133 from .version import git_revision as __git_revision__
134 from .version import version as __version__
135
136 __all__ = ['ModuleDeprecationWarning',
137 'VisibleDeprecationWarning']
138
139 # Allow distributors to run custom init code
140 from . import _distributor_init
141
142 from . import core
143 from .core import *
144 from . import compat
145 from . import lib
146 # NOTE: to be revisited following future namespace cleanup.
147 # See gh-14454 and gh-15672 for discussion.
148 from .lib import *
149
150 from . import linalg
151 from . import fft
152 from . import polynomial
153 from . import random
154 from . import ctypeslib
155 from . import ma
156 from . import matrixlib as _mat
157 from .matrixlib import *
158
159 # Make these accessible from numpy name-space
160 # but not imported in from numpy import *
161 # TODO[gh-6103]: Deprecate these
162 from builtins import bool, int, float, complex, object, str
163 from .compat import long, unicode
164
165 from .core import round, abs, max, min
166 # now that numpy modules are imported, can initialize limits
167 core.getlimits._register_known_types()
168
169 __all__.extend(['__version__', 'show_config'])
170 __all__.extend(core.__all__)
171 __all__.extend(_mat.__all__)
172 __all__.extend(lib.__all__)
173 __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
174
175 # These are added by `from .core import *` and `core.__all__`, but we
176 # overwrite them above with builtins we do _not_ want to export.
177 __all__.remove('long')
178 __all__.remove('unicode')
179
180 # Remove things that are in the numpy.lib but not in the numpy namespace
181 # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)
182 # that prevents adding more things to the main namespace by accident.
183 # The list below will grow until the `from .lib import *` fixme above is
184 # taken care of
185 __all__.remove('Arrayterator')
186 del Arrayterator
187
188 # Filter out Cython harmless warnings
189 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
190 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
191 warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
192
193 # oldnumeric and numarray were removed in 1.9. In case some packages import
194 # but do not use them, we define them here for backward compatibility.
195 oldnumeric = 'removed'
196 numarray = 'removed'
197
198 if sys.version_info[:2] >= (3, 7):
199 # Importing Tester requires importing all of UnitTest which is not a
200 # cheap import Since it is mainly used in test suits, we lazy import it
201 # here to save on the order of 10 ms of import time for most users
202 #
203 # The previous way Tester was imported also had a side effect of adding
204 # the full `numpy.testing` namespace
205 #
206 # module level getattr is only supported in 3.7 onwards
207 # https://www.python.org/dev/peps/pep-0562/
208 def __getattr__(attr):
209 if attr == 'testing':
210 import numpy.testing as testing
211 return testing
212 elif attr == 'Tester':
213 from .testing import Tester
214 return Tester
215 else:
216 raise AttributeError("module {!r} has no attribute "
217 "{!r}".format(__name__, attr))
218
219 def __dir__():
220 return list(globals().keys() | {'Tester', 'testing'})
221
222 else:
223 # We don't actually use this ourselves anymore, but I'm not 100% sure that
224 # no-one else in the world is using it (though I hope not)
225 from .testing import Tester
226
227 # Pytest testing
228 from numpy._pytesttester import PytestTester
229 test = PytestTester(__name__)
230 del PytestTester
231
232
233 def _sanity_check():
234 """
235 Quick sanity checks for common bugs caused by environment.
236 There are some cases e.g. with wrong BLAS ABI that cause wrong
237 results under specific runtime conditions that are not necessarily
238 achieved during test suite runs, and it is useful to catch those early.
239
240 See https://github.com/numpy/numpy/issues/8577 and other
241 similar bug reports.
242
243 """
244 try:
245 x = ones(2, dtype=float32)
246 if not abs(x.dot(x) - 2.0) < 1e-5:
247 raise AssertionError()
248 except AssertionError:
249 msg = ("The current Numpy installation ({!r}) fails to "
250 "pass simple sanity checks. This can be caused for example "
251 "by incorrect BLAS library being linked in, or by mixing "
252 "package managers (pip, conda, apt, ...). Search closed "
253 "numpy issues for similar problems.")
254 raise RuntimeError(msg.format(__file__))
255
256 _sanity_check()
257 del _sanity_check
258
259 def _mac_os_check():
260 """
261 Quick Sanity check for Mac OS look for accelerate build bugs.
262 Testing numpy polyfit calls init_dgelsd(LAPACK)
263 """
264 try:
265 c = array([3., 2., 1.])
266 x = linspace(0, 2, 5)
267 y = polyval(c, x)
268 _ = polyfit(x, y, 2, cov=True)
269 except ValueError:
270 pass
271
272 import sys
273 if sys.platform == "darwin":
274 with warnings.catch_warnings(record=True) as w:
275 _mac_os_check()
276 # Throw runtime error, if the test failed Check for warning and error_message
277 error_message = ""
278 if len(w) > 0:
279 error_message = "{}: {}".format(w[-1].category.__name__, str(w[-1].message))
280 msg = (
281 "Polyfit sanity test emitted a warning, most likely due "
282 "to using a buggy Accelerate backend. "
283 "If you compiled yourself, "
284 "see site.cfg.example for information. "
285 "Otherwise report this to the vendor "
286 "that provided NumPy.\n{}\n".format(
287 error_message))
288 raise RuntimeError(msg)
289 del _mac_os_check
290
291 # We usually use madvise hugepages support, but on some old kernels it
292 # is slow and thus better avoided.
293 # Specifically kernel version 4.6 had a bug fix which probably fixed this:
294 # https://github.com/torvalds/linux/commit/7cf91a98e607c2f935dbcc177d70011e95b8faff
295 import os
296 use_hugepage = os.environ.get("NUMPY_MADVISE_HUGEPAGE", None)
297 if sys.platform == "linux" and use_hugepage is None:
298 use_hugepage = 1
299 kernel_version = os.uname().release.split(".")[:2]
300 kernel_version = tuple(int(v) for v in kernel_version)
301 if kernel_version < (4, 6):
302 use_hugepage = 0
303 elif use_hugepage is None:
304 # This is not Linux, so it should not matter, just enable anyway
305 use_hugepage = 1
306 else:
307 use_hugepage = int(use_hugepage)
308
309 # Note that this will currently only make a difference on Linux
310 core.multiarray._set_madvise_hugepage(use_hugepage)
311
[end of numpy/__init__.py]
[start of numpy/doc/basics.py]
1 """
2 ============
3 Array basics
4 ============
5
6 Array types and conversions between types
7 =========================================
8
9 NumPy supports a much greater variety of numerical types than Python does.
10 This section shows which are available, and how to modify an array's data-type.
11
12 The primitive types supported are tied closely to those in C:
13
14 .. list-table::
15 :header-rows: 1
16
17 * - Numpy type
18 - C type
19 - Description
20
21 * - `np.bool_`
22 - ``bool``
23 - Boolean (True or False) stored as a byte
24
25 * - `np.byte`
26 - ``signed char``
27 - Platform-defined
28
29 * - `np.ubyte`
30 - ``unsigned char``
31 - Platform-defined
32
33 * - `np.short`
34 - ``short``
35 - Platform-defined
36
37 * - `np.ushort`
38 - ``unsigned short``
39 - Platform-defined
40
41 * - `np.intc`
42 - ``int``
43 - Platform-defined
44
45 * - `np.uintc`
46 - ``unsigned int``
47 - Platform-defined
48
49 * - `np.int_`
50 - ``long``
51 - Platform-defined
52
53 * - `np.uint`
54 - ``unsigned long``
55 - Platform-defined
56
57 * - `np.longlong`
58 - ``long long``
59 - Platform-defined
60
61 * - `np.ulonglong`
62 - ``unsigned long long``
63 - Platform-defined
64
65 * - `np.half` / `np.float16`
66 -
67 - Half precision float:
68 sign bit, 5 bits exponent, 10 bits mantissa
69
70 * - `np.single`
71 - ``float``
72 - Platform-defined single precision float:
73 typically sign bit, 8 bits exponent, 23 bits mantissa
74
75 * - `np.double`
76 - ``double``
77 - Platform-defined double precision float:
78 typically sign bit, 11 bits exponent, 52 bits mantissa.
79
80 * - `np.longdouble`
81 - ``long double``
82 - Platform-defined extended-precision float
83
84 * - `np.csingle`
85 - ``float complex``
86 - Complex number, represented by two single-precision floats (real and imaginary components)
87
88 * - `np.cdouble`
89 - ``double complex``
90 - Complex number, represented by two double-precision floats (real and imaginary components).
91
92 * - `np.clongdouble`
93 - ``long double complex``
94 - Complex number, represented by two extended-precision floats (real and imaginary components).
95
96
97 Since many of these have platform-dependent definitions, a set of fixed-size
98 aliases are provided:
99
100 .. list-table::
101 :header-rows: 1
102
103 * - Numpy type
104 - C type
105 - Description
106
107 * - `np.int8`
108 - ``int8_t``
109 - Byte (-128 to 127)
110
111 * - `np.int16`
112 - ``int16_t``
113 - Integer (-32768 to 32767)
114
115 * - `np.int32`
116 - ``int32_t``
117 - Integer (-2147483648 to 2147483647)
118
119 * - `np.int64`
120 - ``int64_t``
121 - Integer (-9223372036854775808 to 9223372036854775807)
122
123 * - `np.uint8`
124 - ``uint8_t``
125 - Unsigned integer (0 to 255)
126
127 * - `np.uint16`
128 - ``uint16_t``
129 - Unsigned integer (0 to 65535)
130
131 * - `np.uint32`
132 - ``uint32_t``
133 - Unsigned integer (0 to 4294967295)
134
135 * - `np.uint64`
136 - ``uint64_t``
137 - Unsigned integer (0 to 18446744073709551615)
138
139 * - `np.intp`
140 - ``intptr_t``
141 - Integer used for indexing, typically the same as ``ssize_t``
142
143 * - `np.uintp`
144 - ``uintptr_t``
145 - Integer large enough to hold a pointer
146
147 * - `np.float32`
148 - ``float``
149 -
150
151 * - `np.float64` / `np.float_`
152 - ``double``
153 - Note that this matches the precision of the builtin python `float`.
154
155 * - `np.complex64`
156 - ``float complex``
157 - Complex number, represented by two 32-bit floats (real and imaginary components)
158
159 * - `np.complex128` / `np.complex_`
160 - ``double complex``
161 - Note that this matches the precision of the builtin python `complex`.
162
163
164 NumPy numerical types are instances of ``dtype`` (data-type) objects, each
165 having unique characteristics. Once you have imported NumPy using
166
167 ::
168
169 >>> import numpy as np
170
171 the dtypes are available as ``np.bool_``, ``np.float32``, etc.
172
173 Advanced types, not listed in the table above, are explored in
174 section :ref:`structured_arrays`.
175
176 There are 5 basic numerical types representing booleans (bool), integers (int),
177 unsigned integers (uint) floating point (float) and complex. Those with numbers
178 in their name indicate the bitsize of the type (i.e. how many bits are needed
179 to represent a single value in memory). Some types, such as ``int`` and
180 ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
181 vs. 64-bit machines). This should be taken into account when interfacing
182 with low-level code (such as C or Fortran) where the raw memory is addressed.
183
184 Data-types can be used as functions to convert python numbers to array scalars
185 (see the array scalar section for an explanation), python sequences of numbers
186 to arrays of that type, or as arguments to the dtype keyword that many numpy
187 functions or methods accept. Some examples::
188
189 >>> import numpy as np
190 >>> x = np.float32(1.0)
191 >>> x
192 1.0
193 >>> y = np.int_([1,2,4])
194 >>> y
195 array([1, 2, 4])
196 >>> z = np.arange(3, dtype=np.uint8)
197 >>> z
198 array([0, 1, 2], dtype=uint8)
199
200 Array types can also be referred to by character codes, mostly to retain
201 backward compatibility with older packages such as Numeric. Some
202 documentation may still refer to these, for example::
203
204 >>> np.array([1, 2, 3], dtype='f')
205 array([ 1., 2., 3.], dtype=float32)
206
207 We recommend using dtype objects instead.
208
209 To convert the type of an array, use the .astype() method (preferred) or
210 the type itself as a function. For example: ::
211
212 >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
213 array([ 0., 1., 2.])
214 >>> np.int8(z)
215 array([0, 1, 2], dtype=int8)
216
217 Note that, above, we use the *Python* float object as a dtype. NumPy knows
218 that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``,
219 that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.
220 The other data-types do not have Python equivalents.
221
222 To determine the type of an array, look at the dtype attribute::
223
224 >>> z.dtype
225 dtype('uint8')
226
227 dtype objects also contain information about the type, such as its bit-width
228 and its byte-order. The data type can also be used indirectly to query
229 properties of the type, such as whether it is an integer::
230
231 >>> d = np.dtype(int)
232 >>> d
233 dtype('int32')
234
235 >>> np.issubdtype(d, np.integer)
236 True
237
238 >>> np.issubdtype(d, np.floating)
239 False
240
241
242 Array Scalars
243 =============
244
245 NumPy generally returns elements of arrays as array scalars (a scalar
246 with an associated dtype). Array scalars differ from Python scalars, but
247 for the most part they can be used interchangeably (the primary
248 exception is for versions of Python older than v2.x, where integer array
249 scalars cannot act as indices for lists and tuples). There are some
250 exceptions, such as when code requires very specific attributes of a scalar
251 or when it checks specifically whether a value is a Python scalar. Generally,
252 problems are easily fixed by explicitly converting array scalars
253 to Python scalars, using the corresponding Python type function
254 (e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
255
256 The primary advantage of using array scalars is that
257 they preserve the array type (Python may not have a matching scalar type
258 available, e.g. ``int16``). Therefore, the use of array scalars ensures
259 identical behaviour between arrays and scalars, irrespective of whether the
260 value is inside an array or not. NumPy scalars also have many of the same
261 methods arrays do.
262
263 Overflow Errors
264 ===============
265
266 The fixed size of NumPy numeric types may cause overflow errors when a value
267 requires more memory than available in the data type. For example,
268 `numpy.power` evaluates ``100 * 10 ** 8`` correctly for 64-bit integers,
269 but gives 1874919424 (incorrect) for a 32-bit integer.
270
271 >>> np.power(100, 8, dtype=np.int64)
272 10000000000000000
273 >>> np.power(100, 8, dtype=np.int32)
274 1874919424
275
276 The behaviour of NumPy and Python integer types differs significantly for
277 integer overflows and may confuse users expecting NumPy integers to behave
278 similar to Python's ``int``. Unlike NumPy, the size of Python's ``int`` is
279 flexible. This means Python integers may expand to accommodate any integer and
280 will not overflow.
281
282 NumPy provides `numpy.iinfo` and `numpy.finfo` to verify the
283 minimum or maximum values of NumPy integer and floating point values
284 respectively ::
285
286 >>> np.iinfo(int) # Bounds of the default integer on this system.
287 iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
288 >>> np.iinfo(np.int32) # Bounds of a 32-bit integer
289 iinfo(min=-2147483648, max=2147483647, dtype=int32)
290 >>> np.iinfo(np.int64) # Bounds of a 64-bit integer
291 iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
292
293 If 64-bit integers are still too small the result may be cast to a
294 floating point number. Floating point numbers offer a larger, but inexact,
295 range of possible values.
296
297 >>> np.power(100, 100, dtype=np.int64) # Incorrect even with 64-bit int
298 0
299 >>> np.power(100, 100, dtype=np.float64)
300 1e+200
301
302 Extended Precision
303 ==================
304
305 Python's floating-point numbers are usually 64-bit floating-point numbers,
306 nearly equivalent to ``np.float64``. In some unusual situations it may be
307 useful to use floating-point numbers with more precision. Whether this
308 is possible in numpy depends on the hardware and on the development
309 environment: specifically, x86 machines provide hardware floating-point
310 with 80-bit precision, and while most C compilers provide this as their
311 ``long double`` type, MSVC (standard for Windows builds) makes
312 ``long double`` identical to ``double`` (64 bits). NumPy makes the
313 compiler's ``long double`` available as ``np.longdouble`` (and
314 ``np.clongdouble`` for the complex numbers). You can find out what your
315 numpy provides with ``np.finfo(np.longdouble)``.
316
317 NumPy does not provide a dtype with more precision than C's
318 ``long double``\\; in particular, the 128-bit IEEE quad precision
319 data type (FORTRAN's ``REAL*16``\\) is not available.
320
321 For efficient memory alignment, ``np.longdouble`` is usually stored
322 padded with zero bits, either to 96 or 128 bits. Which is more efficient
323 depends on hardware and development environment; typically on 32-bit
324 systems they are padded to 96 bits, while on 64-bit systems they are
325 typically padded to 128 bits. ``np.longdouble`` is padded to the system
326 default; ``np.float96`` and ``np.float128`` are provided for users who
327 want specific padding. In spite of the names, ``np.float96`` and
328 ``np.float128`` provide only as much precision as ``np.longdouble``,
329 that is, 80 bits on most x86 machines and 64 bits in standard
330 Windows builds.
331
332 Be warned that even if ``np.longdouble`` offers more precision than
333 python ``float``, it is easy to lose that extra precision, since
334 python often forces values to pass through ``float``. For example,
335 the ``%`` formatting operator requires its arguments to be converted
336 to standard python types, and it is therefore impossible to preserve
337 extended precision even if many decimal places are requested. It can
338 be useful to test your code with the value
339 ``1 + np.finfo(np.longdouble).eps``.
340
341 """
342
[end of numpy/doc/basics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
671804035cda35211f97f6cf9f80d16152a872fd
|
DOC,BLD: Switch to `xelatex` engine for latex docs?
When building the latex/pdf version of the documentation, sphinx is currently configured to use `pdflatex` (the default) to build the pdfs of the user/reference manuals. There are several places in the documentation where the use of unicode characters causes `inputenc` warnings when building the pdf. In some cases these can be resolved by ensuring that extra latex packages are installed (e.g. `texlive-langgreek` adds the necessary mappings for some Greek Unicode characters), but in other cases (e.g. U+22EE vertical dots) it is more difficult to find an appropriate package (or the mapping can be done manually via `\DeclareUnicodeCharacter`).
In addition, there seem to be some URL ref problems that are avoided by `xelatex` (see sphinx-doc/sphinx#7723).
Please share any thoughts and possible pros/cons of switching to `xelatex` for the NumPy latex/pdf docs. Note that sphinx provides a `latex_engine` configuration option that makes this simple to try.
|
Could be a good idea, isn't the `textcomp` package good enough to use all of those unicode characters, though? In general I like `xelatex`, etc. though, and it gives some nice freedoms around fonts as well...
I like `xelatex` personally, but it can be much slower than `pdflatex`. Could that impact the docs build time significantly or is that irrelevant here?
@rossbar I assume this is the issue you discussed in the community meeting. Keep me informed, I'd like to get 1.19.0rc2 out this weekend.
@charris - correct, I will make sure everything is updated/labelled appropriately before Friday
> I like xelatex personally, but it can be much slower than pdflatex. Could that impact the docs build time significantly or is that irrelevant here?
Great point - I did some quick profiling to get an idea of the build times. The results of `time make -C build/latex` on my system
with `pdflatex`:
```
real 0m43.972s
user 0m43.154s
sys 0m0.552s
```
with `xelatex`:
```
real 1m13.522s
user 1m19.858s
sys 0m2.205s
```
So `xelatex` is definitely slower, but the total build time still isn't so bad, especially since I don't think building the latex docs from source is very common for most users. Note that in each case, the `latex->pdf` step is still shorter than the initial `rST->latex` step (~2m30s on my system).
Oh, then if it's irrelevant I'd say we go for it. `xelatex` is available in every major `LaTeX` distribution on whatever OS, and it does make everything easier with respect to unicode and font issues.
|
2020-05-27T23:16:04Z
|
<patch>
diff --git a/doc/source/conf.py b/doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -155,6 +155,9 @@ def setup(app):
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
+# XeLaTeX for better support of unicode characters
+latex_engine = 'xelatex'
+
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the NumPy community'
</patch>
|
[]
|
[]
| |||
celery__celery-5074
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Celery does not respect exception types when using a serializer different than pickle.
## Checklist
```
~ : celery -A analystick report
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://127.0.0.1:6379/1
```
## Steps to reproduce
(See example code below)
## Expected behavior
**When using a result serializer different than pickle**, the propagated exception should have the same type as the raised exception.
## Actual behavior
Celery does not respect the exception type but creates _a new type_ instead.
The main problem is that instead of using the actual type of the exception, celery will [reconstruct a type](https://github.com/celery/celery/blob/0f87321df385c5f3dca717ec2a4a9c0d25f88054/celery/utils/serialization.py#L43-L45) on the fly, but without respecting the original exception module.
For example, using the `yaml` result serializer (I believe it will be the same for `json`):
* if a task raises a `ValueError`, the caller will receive a `celery.backends.base.ValueError`
* if a task raises a `custom.module.CustomError`, the caller will receive a `celery.backends.base.CustomError`
This results in wrong behaviour when raising an exception from a task and trying to catch it from the caller.
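To make the failure mode concrete, here is a minimal standalone sketch (plain Python, not Celery's actual code) of what rebuilding an exception type by name under a different module does:
```python
# Toy illustration, independent of Celery itself.
class CustomError(Exception):
    pass

# Roughly what a non-pickle result backend ends up doing: it only has the
# type *name*, so it builds a brand-new class under its own module.
RebuiltError = type('CustomError', (Exception,), {'__module__': 'celery.backends.base'})

exc = RebuiltError('Custom exception', {'a': 1})
print(isinstance(exc, CustomError))  # False -- a different class object
print(type(exc))                     # <class 'celery.backends.base.CustomError'>
```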
### Minimal reproducible test
As an example, I've set up a minimal reproducible test, using a redis backend:
celery config (I can provide a full config if needed):
```python
CELERY_TASK_SERIALIZER = 'yaml'
CELERY_RESULT_SERIALIZER='yaml'
```
Tasks :
```python
# module myapp.tasks
from myapp import celery_app
@celery_app.task
def raises_valueerror():
raise ValueError('Builtin exception')
class CustomError(Exception):
pass
@celery_app.task
def raises_customerror():
raise CustomError('Custom exception', {'a':1})
```
Unittest :
```python
from myapp import tasks
from myapp.tasks import CustomError
def test_builtin_exception():
t = tasks.raises_valueerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.ValueError` instead of builtin ValueError
    assert isinstance(exc, ValueError), "Actual class %s" % (exc.__class__)
def test_custom_exception():
t = tasks.raises_customerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.CustomError` instead of builtin CustomError
assert isinstance(exc, CustomError), "1/2 Actual class is %s" % (exc.__class__)
assert isinstance(exc, tasks.CustomError), "2/2 Actual class is %s" % (exc.__class__)
```
These tests will fail with the following errors:
```
# ...
AssertionError: Actual class <class 'celery.backends.base.ValueError'>
# ...
AssertionError: 1/2 Actual class is <class 'celery.backends.base.CustomError'>
```
Another side effect of this problem is that code like the one below won't work if a subtask raises a `ValueError`, as the propagated exception won't be of the builtin type `ValueError` but `celery.backends.base.ValueError`:
```python
try:
r.get(propagate=True)
except ValueError as e:
# do something
```
The same problem applies to any custom exception.
While I'm not sure about the possible side effects, [I have a fix for this](https://github.com/jcsaaddupuy/celery/commit/8d4e613e24f6561fdaafd4e6ede582ceac882804) and I will gladly create a PR for this problem, as it seems pretty critical.
What do you think?
</issue>
<code>
[start of README.rst]
1 .. image:: http://docs.celeryproject.org/en/latest/_images/celery-banner-small.png
2
3 |build-status| |coverage| |license| |wheel| |pyversion| |pyimp| |ocbackerbadge| |ocsponsorbadge|
4
5 :Version: 4.2.1 (windowlicker)
6 :Web: http://celeryproject.org/
7 :Download: https://pypi.org/project/celery/
8 :Source: https://github.com/celery/celery/
9 :Keywords: task, queue, job, async, rabbitmq, amqp, redis,
10 python, distributed, actors
11
12 Sponsors
13 ========
14
15 |ImageLink|_
16
17 .. |ImageLink| image:: https://i.imgur.com/ULmQEib.png
18 .. _ImageLink: https://getstream.io/try-the-api/?utm_source=celery&utm_medium=banner&utm_campaign=github
19
20
21 What's a Task Queue?
22 ====================
23
24 Task queues are used as a mechanism to distribute work across threads or
25 machines.
26
27 A task queue's input is a unit of work, called a task, dedicated worker
28 processes then constantly monitor the queue for new work to perform.
29
30 Celery communicates via messages, usually using a broker
31 to mediate between clients and workers. To initiate a task a client puts a
32 message on the queue, the broker then delivers the message to a worker.
33
34 A Celery system can consist of multiple workers and brokers, giving way
35 to high availability and horizontal scaling.
36
37 Celery is written in Python, but the protocol can be implemented in any
38 language. In addition to Python there's node-celery_ for Node.js,
39 and a `PHP client`_.
40
41 Language interoperability can also be achieved by using webhooks
42 in such a way that the client enqueues an URL to be requested by a worker.
43
44 .. _node-celery: https://github.com/mher/node-celery
45 .. _`PHP client`: https://github.com/gjedeer/celery-php
46
47 What do I need?
48 ===============
49
50 Celery version 4.2 runs on,
51
52 - Python (2.7, 3.4, 3.5, 3.6)
53 - PyPy (5.8)
54
55
56 This is the last version to support Python 2.7,
57 and from the next version (Celery 5.x) Python 3.5 or newer is required.
58
59 If you're running an older version of Python, you need to be running
60 an older version of Celery:
61
62 - Python 2.6: Celery series 3.1 or earlier.
63 - Python 2.5: Celery series 3.0 or earlier.
64 - Python 2.4 was Celery series 2.2 or earlier.
65
66 Celery is a project with minimal funding,
67 so we don't support Microsoft Windows.
68 Please don't open any issues related to that platform.
69
70 *Celery* is usually used with a message broker to send and receive messages.
71 The RabbitMQ, Redis transports are feature complete,
72 but there's also experimental support for a myriad of other solutions, including
73 using SQLite for local development.
74
75 *Celery* can run on a single machine, on multiple machines, or even
76 across datacenters.
77
78 Get Started
79 ===========
80
81 If this is the first time you're trying to use Celery, or you're
82 new to Celery 4.2 coming from previous versions then you should read our
83 getting started tutorials:
84
85 - `First steps with Celery`_
86
87 Tutorial teaching you the bare minimum needed to get started with Celery.
88
89 - `Next steps`_
90
91 A more complete overview, showing more features.
92
93 .. _`First steps with Celery`:
94 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
95
96 .. _`Next steps`:
97 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
98
99 Celery is...
100 =============
101
102 - **Simple**
103
104 Celery is easy to use and maintain, and does *not need configuration files*.
105
106 It has an active, friendly community you can talk to for support,
107 like at our `mailing-list`_, or the IRC channel.
108
109 Here's one of the simplest applications you can make::
110
111 from celery import Celery
112
113 app = Celery('hello', broker='amqp://guest@localhost//')
114
115 @app.task
116 def hello():
117 return 'hello world'
118
119 - **Highly Available**
120
121 Workers and clients will automatically retry in the event
122 of connection loss or failure, and some brokers support
123 HA in way of *Primary/Primary* or *Primary/Replica* replication.
124
125 - **Fast**
126
127 A single Celery process can process millions of tasks a minute,
128 with sub-millisecond round-trip latency (using RabbitMQ,
129 py-librabbitmq, and optimized settings).
130
131 - **Flexible**
132
133 Almost every part of *Celery* can be extended or used on its own,
134 Custom pool implementations, serializers, compression schemes, logging,
135 schedulers, consumers, producers, broker transports, and much more.
136
137 It supports...
138 ================
139
140 - **Message Transports**
141
142 - RabbitMQ_, Redis_, Amazon SQS
143
144 - **Concurrency**
145
146 - Prefork, Eventlet_, gevent_, single threaded (``solo``)
147
148 - **Result Stores**
149
150 - AMQP, Redis
151 - memcached
152 - SQLAlchemy, Django ORM
153 - Apache Cassandra, IronCache, Elasticsearch
154
155 - **Serialization**
156
157 - *pickle*, *json*, *yaml*, *msgpack*.
158 - *zlib*, *bzip2* compression.
159 - Cryptographic message signing.
160
161 .. _`Eventlet`: http://eventlet.net/
162 .. _`gevent`: http://gevent.org/
163
164 .. _RabbitMQ: https://rabbitmq.com
165 .. _Redis: https://redis.io
166 .. _SQLAlchemy: http://sqlalchemy.org
167
168 Framework Integration
169 =====================
170
171 Celery is easy to integrate with web frameworks, some of which even have
172 integration packages:
173
174 +--------------------+------------------------+
175 | `Django`_ | not needed |
176 +--------------------+------------------------+
177 | `Pyramid`_ | `pyramid_celery`_ |
178 +--------------------+------------------------+
179 | `Pylons`_ | `celery-pylons`_ |
180 +--------------------+------------------------+
181 | `Flask`_ | not needed |
182 +--------------------+------------------------+
183 | `web2py`_ | `web2py-celery`_ |
184 +--------------------+------------------------+
185 | `Tornado`_ | `tornado-celery`_ |
186 +--------------------+------------------------+
187
188 The integration packages aren't strictly necessary, but they can make
189 development easier, and sometimes they add important hooks like closing
190 database connections at ``fork``.
191
192 .. _`Django`: https://djangoproject.com/
193 .. _`Pylons`: http://pylonsproject.org/
194 .. _`Flask`: http://flask.pocoo.org/
195 .. _`web2py`: http://web2py.com/
196 .. _`Bottle`: https://bottlepy.org/
197 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
198 .. _`pyramid_celery`: https://pypi.org/project/pyramid_celery/
199 .. _`celery-pylons`: https://pypi.org/project/celery-pylons/
200 .. _`web2py-celery`: https://code.google.com/p/web2py-celery/
201 .. _`Tornado`: http://www.tornadoweb.org/
202 .. _`tornado-celery`: https://github.com/mher/tornado-celery/
203
204 .. _celery-documentation:
205
206 Documentation
207 =============
208
209 The `latest documentation`_ is hosted at Read The Docs, containing user guides,
210 tutorials, and an API reference.
211
212 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
213
214 .. _celery-installation:
215
216 Installation
217 ============
218
219 You can install Celery either via the Python Package Index (PyPI)
220 or from source.
221
222 To install using ``pip``:
223
224 ::
225
226
227 $ pip install -U Celery
228
229 .. _bundles:
230
231 Bundles
232 -------
233
234 Celery also defines a group of bundles that can be used
235 to install Celery and the dependencies for a given feature.
236
237 You can specify these in your requirements or on the ``pip``
238 command-line by using brackets. Multiple bundles can be specified by
239 separating them by commas.
240
241 ::
242
243
244 $ pip install "celery[librabbitmq]"
245
246 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
247
248 The following bundles are available:
249
250 Serializers
251 ~~~~~~~~~~~
252
253 :``celery[auth]``:
254 for using the ``auth`` security serializer.
255
256 :``celery[msgpack]``:
257 for using the msgpack serializer.
258
259 :``celery[yaml]``:
260 for using the yaml serializer.
261
262 Concurrency
263 ~~~~~~~~~~~
264
265 :``celery[eventlet]``:
266 for using the ``eventlet`` pool.
267
268 :``celery[gevent]``:
269 for using the ``gevent`` pool.
270
271 Transports and Backends
272 ~~~~~~~~~~~~~~~~~~~~~~~
273
274 :``celery[librabbitmq]``:
275 for using the librabbitmq C library.
276
277 :``celery[redis]``:
278 for using Redis as a message transport or as a result backend.
279
280 :``celery[sqs]``:
281 for using Amazon SQS as a message transport.
282
283 :``celery[tblib``]:
284 for using the ``task_remote_tracebacks`` feature.
285
286 :``celery[memcache]``:
287 for using Memcached as a result backend (using ``pylibmc``)
288
289 :``celery[pymemcache]``:
290 for using Memcached as a result backend (pure-Python implementation).
291
292 :``celery[cassandra]``:
293 for using Apache Cassandra as a result backend with DataStax driver.
294
295 :``celery[azureblockblob]``:
296 for using Azure Storage as a result backend (using ``azure-storage``)
297
298 :``celery[couchbase]``:
299 for using Couchbase as a result backend.
300
301 :``celery[elasticsearch]``:
302 for using Elasticsearch as a result backend.
303
304 :``celery[riak]``:
305 for using Riak as a result backend.
306
307 :``celery[zookeeper]``:
308 for using Zookeeper as a message transport.
309
310 :``celery[sqlalchemy]``:
311 for using SQLAlchemy as a result backend (*supported*).
312
313 :``celery[pyro]``:
314 for using the Pyro4 message transport (*experimental*).
315
316 :``celery[slmq]``:
317 for using the SoftLayer Message Queue transport (*experimental*).
318
319 :``celery[consul]``:
320 for using the Consul.io Key/Value store as a message transport or result backend (*experimental*).
321
322 :``celery[django]``:
323 specifies the lowest version possible for Django support.
324
325 You should probably not use this in your requirements, it's here
326 for informational purposes only.
327
328
329 .. _celery-installing-from-source:
330
331 Downloading and installing from source
332 --------------------------------------
333
334 Download the latest version of Celery from PyPI:
335
336 https://pypi.org/project/celery/
337
338 You can install it by doing the following,:
339
340 ::
341
342
343 $ tar xvfz celery-0.0.0.tar.gz
344 $ cd celery-0.0.0
345 $ python setup.py build
346 # python setup.py install
347
348 The last command must be executed as a privileged user if
349 you aren't currently using a virtualenv.
350
351 .. _celery-installing-from-git:
352
353 Using the development version
354 -----------------------------
355
356 With pip
357 ~~~~~~~~
358
359 The Celery development version also requires the development
360 versions of ``kombu``, ``amqp``, ``billiard``, and ``vine``.
361
362 You can install the latest snapshot of these using the following
363 pip commands:
364
365 ::
366
367
368 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
369 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
370 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
371 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
372 $ pip install https://github.com/celery/vine/zipball/master#egg=vine
373
374 With git
375 ~~~~~~~~
376
377 Please see the Contributing section.
378
379 .. _getting-help:
380
381 Getting Help
382 ============
383
384 .. _mailing-list:
385
386 Mailing list
387 ------------
388
389 For discussions about the usage, development, and future of Celery,
390 please join the `celery-users`_ mailing list.
391
392 .. _`celery-users`: https://groups.google.com/group/celery-users/
393
394 .. _irc-channel:
395
396 IRC
397 ---
398
399 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
400 network.
401
402 .. _`Freenode`: https://freenode.net
403
404 .. _bug-tracker:
405
406 Bug tracker
407 ===========
408
409 If you have any suggestions, bug reports, or annoyances please report them
410 to our issue tracker at https://github.com/celery/celery/issues/
411
412 .. _wiki:
413
414 Wiki
415 ====
416
417 https://wiki.github.com/celery/celery/
418
419 Credits
420 =======
421
422 .. _contributing-short:
423
424 Contributors
425 ------------
426
427 This project exists thanks to all the people who contribute. Development of
428 `celery` happens at GitHub: https://github.com/celery/celery
429
430 You're highly encouraged to participate in the development
431 of `celery`. If you don't like GitHub (for some reason) you're welcome
432 to send regular patches.
433
434 Be sure to also read the `Contributing to Celery`_ section in the
435 documentation.
436
437 .. _`Contributing to Celery`:
438 http://docs.celeryproject.org/en/master/contributing.html
439
440 |oc-contributors|
441
442 .. |oc-contributors| image:: https://opencollective.com/celery/contributors.svg?width=890&button=false
443 :target: https://github.com/celery/celery/graphs/contributors
444
445 Backers
446 -------
447
448 Thank you to all our backers! 🙏 [`Become a backer`_]
449
450 .. _`Become a backer`: https://opencollective.com/celery#backer
451
452 |oc-backers|
453
454 .. |oc-backers| image:: https://opencollective.com/celery/backers.svg?width=890
455 :target: https://opencollective.com/celery#backers
456
457 Sponsors
458 --------
459
460 Support this project by becoming a sponsor. Your logo will show up here with a
461 link to your website. [`Become a sponsor`_]
462
463 .. _`Become a sponsor`: https://opencollective.com/celery#sponsor
464
465 |oc-sponsors|
466
467 .. |oc-sponsors| image:: https://opencollective.com/celery/sponsor/0/avatar.svg
468 :target: https://opencollective.com/celery/sponsor/0/website
469
470 .. _license:
471
472 License
473 =======
474
475 This software is licensed under the `New BSD License`. See the ``LICENSE``
476 file in the top distribution directory for the full license text.
477
478 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
479
480 .. |build-status| image:: https://secure.travis-ci.org/celery/celery.png?branch=master
481 :alt: Build status
482 :target: https://travis-ci.org/celery/celery
483
484 .. |coverage| image:: https://codecov.io/github/celery/celery/coverage.svg?branch=master
485 :target: https://codecov.io/github/celery/celery?branch=master
486
487 .. |license| image:: https://img.shields.io/pypi/l/celery.svg
488 :alt: BSD License
489 :target: https://opensource.org/licenses/BSD-3-Clause
490
491 .. |wheel| image:: https://img.shields.io/pypi/wheel/celery.svg
492 :alt: Celery can be installed via wheel
493 :target: https://pypi.org/project/celery/
494
495 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/celery.svg
496 :alt: Supported Python versions.
497 :target: https://pypi.org/project/celery/
498
499 .. |pyimp| image:: https://img.shields.io/pypi/implementation/celery.svg
500 :alt: Support Python implementations.
501 :target: https://pypi.org/project/celery/
502
503 .. |ocbackerbadge| image:: https://opencollective.com/celery/backers/badge.svg
504 :alt: Backers on Open Collective
505 :target: #backers
506
507 .. |ocsponsorbadge| image:: https://opencollective.com/celery/sponsors/badge.svg
508 :alt: Sponsors on Open Collective
509 :target: #sponsors
510
[end of README.rst]
[start of celery/platforms.py]
1 # -*- coding: utf-8 -*-
2 """Platforms.
3
4 Utilities dealing with platform specifics: signals, daemonization,
5 users, groups, and so on.
6 """
7 from __future__ import absolute_import, print_function, unicode_literals
8
9 import atexit
10 import errno
11 import math
12 import numbers
13 import os
14 import platform as _platform
15 import signal as _signal
16 import struct
17 import sys
18 import warnings
19 from collections import namedtuple
20 from contextlib import contextmanager
21
22 from billiard.compat import close_open_fds, get_fdmax
23 # fileno used to be in this module
24 from kombu.utils.compat import maybe_fileno
25 from kombu.utils.encoding import safe_str
26
27 from .exceptions import SecurityError
28 from .five import items, reraise, string_t
29 from .local import try_import
30
31 try:
32 from billiard.process import current_process
33 except ImportError: # pragma: no cover
34 current_process = None
35
36 _setproctitle = try_import('setproctitle')
37 resource = try_import('resource')
38 pwd = try_import('pwd')
39 grp = try_import('grp')
40 mputil = try_import('multiprocessing.util')
41
42 __all__ = (
43 'EX_OK', 'EX_FAILURE', 'EX_UNAVAILABLE', 'EX_USAGE', 'SYSTEM',
44 'IS_macOS', 'IS_WINDOWS', 'SIGMAP', 'pyimplementation', 'LockFailed',
45 'get_fdmax', 'Pidfile', 'create_pidlock', 'close_open_fds',
46 'DaemonContext', 'detached', 'parse_uid', 'parse_gid', 'setgroups',
47 'initgroups', 'setgid', 'setuid', 'maybe_drop_privileges', 'signals',
48 'signal_name', 'set_process_title', 'set_mp_process_title',
49 'get_errno_name', 'ignore_errno', 'fd_by_path', 'isatty',
50 )
51
52 # exitcodes
53 EX_OK = getattr(os, 'EX_OK', 0)
54 EX_FAILURE = 1
55 EX_UNAVAILABLE = getattr(os, 'EX_UNAVAILABLE', 69)
56 EX_USAGE = getattr(os, 'EX_USAGE', 64)
57 EX_CANTCREAT = getattr(os, 'EX_CANTCREAT', 73)
58
59 SYSTEM = _platform.system()
60 IS_macOS = SYSTEM == 'Darwin'
61 IS_WINDOWS = SYSTEM == 'Windows'
62
63 DAEMON_WORKDIR = '/'
64
65 PIDFILE_FLAGS = os.O_CREAT | os.O_EXCL | os.O_WRONLY
66 PIDFILE_MODE = ((os.R_OK | os.W_OK) << 6) | ((os.R_OK) << 3) | ((os.R_OK))
67
68 PIDLOCKED = """ERROR: Pidfile ({0}) already exists.
69 Seems we're already running? (pid: {1})"""
70
71 _range = namedtuple('_range', ('start', 'stop'))
72
73 C_FORCE_ROOT = os.environ.get('C_FORCE_ROOT', False)
74
75 ROOT_DISALLOWED = """\
76 Running a worker with superuser privileges when the
77 worker accepts messages serialized with pickle is a very bad idea!
78
79 If you really want to continue then you have to set the C_FORCE_ROOT
80 environment variable (but please think about this before you do).
81
82 User information: uid={uid} euid={euid} gid={gid} egid={egid}
83 """
84
85 ROOT_DISCOURAGED = """\
86 You're running the worker with superuser privileges: this is
87 absolutely not recommended!
88
89 Please specify a different user using the --uid option.
90
91 User information: uid={uid} euid={euid} gid={gid} egid={egid}
92 """
93
94 SIGNAMES = {
95 sig for sig in dir(_signal)
96 if sig.startswith('SIG') and '_' not in sig
97 }
98 SIGMAP = {getattr(_signal, name): name for name in SIGNAMES}
99
100
101 def isatty(fh):
102 """Return true if the process has a controlling terminal."""
103 try:
104 return fh.isatty()
105 except AttributeError:
106 pass
107
108
109 def pyimplementation():
110 """Return string identifying the current Python implementation."""
111 if hasattr(_platform, 'python_implementation'):
112 return _platform.python_implementation()
113 elif sys.platform.startswith('java'):
114 return 'Jython ' + sys.platform
115 elif hasattr(sys, 'pypy_version_info'):
116 v = '.'.join(str(p) for p in sys.pypy_version_info[:3])
117 if sys.pypy_version_info[3:]:
118 v += '-' + ''.join(str(p) for p in sys.pypy_version_info[3:])
119 return 'PyPy ' + v
120 else:
121 return 'CPython'
122
123
124 class LockFailed(Exception):
125 """Raised if a PID lock can't be acquired."""
126
127
128 class Pidfile(object):
129 """Pidfile.
130
131 This is the type returned by :func:`create_pidlock`.
132
133 See Also:
134 Best practice is to not use this directly but rather use
135 the :func:`create_pidlock` function instead:
136 more convenient and also removes stale pidfiles (when
137 the process holding the lock is no longer running).
138 """
139
140 #: Path to the pid lock file.
141 path = None
142
143 def __init__(self, path):
144 self.path = os.path.abspath(path)
145
146 def acquire(self):
147 """Acquire lock."""
148 try:
149 self.write_pid()
150 except OSError as exc:
151 reraise(LockFailed, LockFailed(str(exc)), sys.exc_info()[2])
152 return self
153 __enter__ = acquire
154
155 def is_locked(self):
156 """Return true if the pid lock exists."""
157 return os.path.exists(self.path)
158
159 def release(self, *args):
160 """Release lock."""
161 self.remove()
162 __exit__ = release
163
164 def read_pid(self):
165 """Read and return the current pid."""
166 with ignore_errno('ENOENT'):
167 with open(self.path, 'r') as fh:
168 line = fh.readline()
169 if line.strip() == line: # must contain '\n'
170 raise ValueError(
171 'Partial or invalid pidfile {0.path}'.format(self))
172
173 try:
174 return int(line.strip())
175 except ValueError:
176 raise ValueError(
177 'pidfile {0.path} contents invalid.'.format(self))
178
179 def remove(self):
180 """Remove the lock."""
181 with ignore_errno(errno.ENOENT, errno.EACCES):
182 os.unlink(self.path)
183
184 def remove_if_stale(self):
185 """Remove the lock if the process isn't running.
186
187 I.e. process does not respons to signal.
188 """
189 try:
190 pid = self.read_pid()
191 except ValueError as exc:
192 print('Broken pidfile found - Removing it.', file=sys.stderr)
193 self.remove()
194 return True
195 if not pid:
196 self.remove()
197 return True
198
199 try:
200 os.kill(pid, 0)
201 except os.error as exc:
202 if exc.errno == errno.ESRCH:
203 print('Stale pidfile exists - Removing it.', file=sys.stderr)
204 self.remove()
205 return True
206 except SystemError as exc:
207 print('Stale pidfile exists - Removing it.', file=sys.stderr)
208 self.remove()
209 return True
210 return False
211
212 def write_pid(self):
213 pid = os.getpid()
214 content = '{0}\n'.format(pid)
215
216 pidfile_fd = os.open(self.path, PIDFILE_FLAGS, PIDFILE_MODE)
217 pidfile = os.fdopen(pidfile_fd, 'w')
218 try:
219 pidfile.write(content)
220 # flush and sync so that the re-read below works.
221 pidfile.flush()
222 try:
223 os.fsync(pidfile_fd)
224 except AttributeError: # pragma: no cover
225 pass
226 finally:
227 pidfile.close()
228
229 rfh = open(self.path)
230 try:
231 if rfh.read() != content:
232 raise LockFailed(
233 "Inconsistency: Pidfile content doesn't match at re-read")
234 finally:
235 rfh.close()
236
237
238 PIDFile = Pidfile # noqa: E305 XXX compat alias
239
240
241 def create_pidlock(pidfile):
242 """Create and verify pidfile.
243
244 If the pidfile already exists the program exits with an error message,
245 however if the process it refers to isn't running anymore, the pidfile
246 is deleted and the program continues.
247
248 This function will automatically install an :mod:`atexit` handler
249 to release the lock at exit, you can skip this by calling
250 :func:`_create_pidlock` instead.
251
252 Returns:
253 Pidfile: used to manage the lock.
254
255 Example:
256 >>> pidlock = create_pidlock('/var/run/app.pid')
257 """
258 pidlock = _create_pidlock(pidfile)
259 atexit.register(pidlock.release)
260 return pidlock
261
262
263 def _create_pidlock(pidfile):
264 pidlock = Pidfile(pidfile)
265 if pidlock.is_locked() and not pidlock.remove_if_stale():
266 print(PIDLOCKED.format(pidfile, pidlock.read_pid()), file=sys.stderr)
267 raise SystemExit(EX_CANTCREAT)
268 pidlock.acquire()
269 return pidlock
270
271
272 def fd_by_path(paths):
273 """Return a list of file descriptors.
274
275 This method returns list of file descriptors corresponding to
276 file paths passed in paths variable.
277
278 Arguments:
279 paths: List[str]: List of file paths.
280
281 Returns:
282 List[int]: List of file descriptors.
283
284 Example:
285 >>> keep = fd_by_path(['/dev/urandom', '/my/precious/'])
286 """
287 stats = set()
288 for path in paths:
289 try:
290 fd = os.open(path, os.O_RDONLY)
291 except OSError:
292 continue
293 try:
294 stats.add(os.fstat(fd)[1:3])
295 finally:
296 os.close(fd)
297
298 def fd_in_stats(fd):
299 try:
300 return os.fstat(fd)[1:3] in stats
301 except OSError:
302 return False
303
304 return [_fd for _fd in range(get_fdmax(2048)) if fd_in_stats(_fd)]
305
306
307 class DaemonContext(object):
308 """Context manager daemonizing the process."""
309
310 _is_open = False
311
312 def __init__(self, pidfile=None, workdir=None, umask=None,
313 fake=False, after_chdir=None, after_forkers=True,
314 **kwargs):
315 if isinstance(umask, string_t):
316 # octal or decimal, depending on initial zero.
317 umask = int(umask, 8 if umask.startswith('0') else 10)
318 self.workdir = workdir or DAEMON_WORKDIR
319 self.umask = umask
320 self.fake = fake
321 self.after_chdir = after_chdir
322 self.after_forkers = after_forkers
323 self.stdfds = (sys.stdin, sys.stdout, sys.stderr)
324
325 def redirect_to_null(self, fd):
326 if fd is not None:
327 dest = os.open(os.devnull, os.O_RDWR)
328 os.dup2(dest, fd)
329
330 def open(self):
331 if not self._is_open:
332 if not self.fake:
333 self._detach()
334
335 os.chdir(self.workdir)
336 if self.umask is not None:
337 os.umask(self.umask)
338
339 if self.after_chdir:
340 self.after_chdir()
341
342 if not self.fake:
343 # We need to keep /dev/urandom from closing because
344 # shelve needs it, and Beat needs shelve to start.
345 keep = list(self.stdfds) + fd_by_path(['/dev/urandom'])
346 close_open_fds(keep)
347 for fd in self.stdfds:
348 self.redirect_to_null(maybe_fileno(fd))
349 if self.after_forkers and mputil is not None:
350 mputil._run_after_forkers()
351
352 self._is_open = True
353 __enter__ = open
354
355 def close(self, *args):
356 if self._is_open:
357 self._is_open = False
358 __exit__ = close
359
360 def _detach(self):
361 if os.fork() == 0: # first child
362 os.setsid() # create new session
363 if os.fork() > 0: # pragma: no cover
364 # second child
365 os._exit(0)
366 else:
367 os._exit(0)
368 return self
369
370
371 def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
372 workdir=None, fake=False, **opts):
373 """Detach the current process in the background (daemonize).
374
375 Arguments:
376 logfile (str): Optional log file.
377 The ability to write to this file
378 will be verified before the process is detached.
379 pidfile (str): Optional pid file.
380 The pidfile won't be created,
381 as this is the responsibility of the child. But the process will
382 exit if the pid lock exists and the pid written is still running.
383 uid (int, str): Optional user id or user name to change
384 effective privileges to.
385 gid (int, str): Optional group id or group name to change
386 effective privileges to.
387 umask (str, int): Optional umask that'll be effective in
388 the child process.
389 workdir (str): Optional new working directory.
390 fake (bool): Don't actually detach, intended for debugging purposes.
391 **opts (Any): Ignored.
392
393 Example:
394 >>> from celery.platforms import detached, create_pidlock
395 >>> with detached(
396 ... logfile='/var/log/app.log',
397 ... pidfile='/var/run/app.pid',
398 ... uid='nobody'):
399 ... # Now in detached child process with effective user set to nobody,
400 ... # and we know that our logfile can be written to, and that
401 ... # the pidfile isn't locked.
402 ... pidlock = create_pidlock('/var/run/app.pid')
403 ...
404 ... # Run the program
405 ... program.run(logfile='/var/log/app.log')
406 """
407 if not resource:
408 raise RuntimeError('This platform does not support detach.')
409 workdir = os.getcwd() if workdir is None else workdir
410
411 signals.reset('SIGCLD') # Make sure SIGCLD is using the default handler.
412 maybe_drop_privileges(uid=uid, gid=gid)
413
414 def after_chdir_do():
415 # Since without stderr any errors will be silently suppressed,
416 # we need to know that we have access to the logfile.
417 logfile and open(logfile, 'a').close()
418 # Doesn't actually create the pidfile, but makes sure it's not stale.
419 if pidfile:
420 _create_pidlock(pidfile).release()
421
422 return DaemonContext(
423 umask=umask, workdir=workdir, fake=fake, after_chdir=after_chdir_do,
424 )
425
426
427 def parse_uid(uid):
428 """Parse user id.
429
430 Arguments:
431 uid (str, int): Actual uid, or the username of a user.
432 Returns:
433 int: The actual uid.
434 """
435 try:
436 return int(uid)
437 except ValueError:
438 try:
439 return pwd.getpwnam(uid).pw_uid
440 except (AttributeError, KeyError):
441 raise KeyError('User does not exist: {0}'.format(uid))
442
443
444 def parse_gid(gid):
445 """Parse group id.
446
447 Arguments:
448 gid (str, int): Actual gid, or the name of a group.
449 Returns:
450 int: The actual gid of the group.
451 """
452 try:
453 return int(gid)
454 except ValueError:
455 try:
456 return grp.getgrnam(gid).gr_gid
457 except (AttributeError, KeyError):
458 raise KeyError('Group does not exist: {0}'.format(gid))
459
460
461 def _setgroups_hack(groups):
462 # :fun:`setgroups` may have a platform-dependent limit,
463 # and it's not always possible to know in advance what this limit
464 # is, so we use this ugly hack stolen from glibc.
465 groups = groups[:]
466
467 while 1:
468 try:
469 return os.setgroups(groups)
470 except ValueError: # error from Python's check.
471 if len(groups) <= 1:
472 raise
473 groups[:] = groups[:-1]
474 except OSError as exc: # error from the OS.
475 if exc.errno != errno.EINVAL or len(groups) <= 1:
476 raise
477 groups[:] = groups[:-1]
478
479
480 def setgroups(groups):
481 """Set active groups from a list of group ids."""
482 max_groups = None
483 try:
484 max_groups = os.sysconf('SC_NGROUPS_MAX')
485 except Exception: # pylint: disable=broad-except
486 pass
487 try:
488 return _setgroups_hack(groups[:max_groups])
489 except OSError as exc:
490 if exc.errno != errno.EPERM:
491 raise
492 if any(group not in groups for group in os.getgroups()):
493 # we shouldn't be allowed to change to this group.
494 raise
495
496
497 def initgroups(uid, gid):
498 """Init process group permissions.
499
500 Compat version of :func:`os.initgroups` that was first
501 added to Python 2.7.
502 """
503 if not pwd: # pragma: no cover
504 return
505 username = pwd.getpwuid(uid)[0]
506 if hasattr(os, 'initgroups'): # Python 2.7+
507 return os.initgroups(username, gid)
508 groups = [gr.gr_gid for gr in grp.getgrall()
509 if username in gr.gr_mem]
510 setgroups(groups)
511
512
513 def setgid(gid):
514 """Version of :func:`os.setgid` supporting group names."""
515 os.setgid(parse_gid(gid))
516
517
518 def setuid(uid):
519 """Version of :func:`os.setuid` supporting usernames."""
520 os.setuid(parse_uid(uid))
521
522
523 def maybe_drop_privileges(uid=None, gid=None):
524 """Change process privileges to new user/group.
525
526 If UID and GID is specified, the real user/group is changed.
527
528 If only UID is specified, the real user is changed, and the group is
529 changed to the users primary group.
530
531 If only GID is specified, only the group is changed.
532 """
533 if sys.platform == 'win32':
534 return
535 if os.geteuid():
536 # no point trying to setuid unless we're root.
537 if not os.getuid():
538 raise SecurityError('contact support')
539 uid = uid and parse_uid(uid)
540 gid = gid and parse_gid(gid)
541
542 if uid:
543 _setuid(uid, gid)
544 else:
545 gid and setgid(gid)
546
547 if uid and not os.getuid() and not os.geteuid():
548 raise SecurityError('Still root uid after drop privileges!')
549 if gid and not os.getgid() and not os.getegid():
550 raise SecurityError('Still root gid after drop privileges!')
551
552
553 def _setuid(uid, gid):
554 # If GID isn't defined, get the primary GID of the user.
555 if not gid and pwd:
556 gid = pwd.getpwuid(uid).pw_gid
557 # Must set the GID before initgroups(), as setgid()
558 # is known to zap the group list on some platforms.
559
560 # setgid must happen before setuid (otherwise the setgid operation
561 # may fail because of insufficient privileges and possibly stay
562 # in a privileged group).
563 setgid(gid)
564 initgroups(uid, gid)
565
566 # at last:
567 setuid(uid)
568 # ... and make sure privileges cannot be restored:
569 try:
570 setuid(0)
571 except OSError as exc:
572 if exc.errno != errno.EPERM:
573 raise
574 # we should get here: cannot restore privileges,
575 # everything was fine.
576 else:
577 raise SecurityError(
578 'non-root user able to restore privileges after setuid.')
579
580
581 class Signals(object):
582 """Convenience interface to :mod:`signals`.
583
584 If the requested signal isn't supported on the current platform,
585 the operation will be ignored.
586
587 Example:
588 >>> from celery.platforms import signals
589
590 >>> from proj.handlers import my_handler
591 >>> signals['INT'] = my_handler
592
593 >>> signals['INT']
594 my_handler
595
596 >>> signals.supported('INT')
597 True
598
599 >>> signals.signum('INT')
600 2
601
602 >>> signals.ignore('USR1')
603 >>> signals['USR1'] == signals.ignored
604 True
605
606 >>> signals.reset('USR1')
607 >>> signals['USR1'] == signals.default
608 True
609
610 >>> from proj.handlers import exit_handler, hup_handler
611 >>> signals.update(INT=exit_handler,
612 ... TERM=exit_handler,
613 ... HUP=hup_handler)
614 """
615
616 ignored = _signal.SIG_IGN
617 default = _signal.SIG_DFL
618
619 if hasattr(_signal, 'setitimer'):
620
621 def arm_alarm(self, seconds):
622 _signal.setitimer(_signal.ITIMER_REAL, seconds)
623 else: # pragma: no cover
624 try:
625 from itimer import alarm as _itimer_alarm # noqa
626 except ImportError:
627
628 def arm_alarm(self, seconds): # noqa
629 _signal.alarm(math.ceil(seconds))
630 else: # pragma: no cover
631
632 def arm_alarm(self, seconds): # noqa
633 return _itimer_alarm(seconds) # noqa
634
635 def reset_alarm(self):
636 return _signal.alarm(0)
637
638 def supported(self, name):
639 """Return true value if signal by ``name`` exists on this platform."""
640 try:
641 self.signum(name)
642 except AttributeError:
643 return False
644 else:
645 return True
646
647 def signum(self, name):
648 """Get signal number by name."""
649 if isinstance(name, numbers.Integral):
650 return name
651 if not isinstance(name, string_t) \
652 or not name.isupper():
653 raise TypeError('signal name must be uppercase string.')
654 if not name.startswith('SIG'):
655 name = 'SIG' + name
656 return getattr(_signal, name)
657
658 def reset(self, *signal_names):
659 """Reset signals to the default signal handler.
660
661 Does nothing if the platform has no support for signals,
662 or the specified signal in particular.
663 """
664 self.update((sig, self.default) for sig in signal_names)
665
666 def ignore(self, *names):
667 """Ignore signal using :const:`SIG_IGN`.
668
669 Does nothing if the platform has no support for signals,
670 or the specified signal in particular.
671 """
672 self.update((sig, self.ignored) for sig in names)
673
674 def __getitem__(self, name):
675 return _signal.getsignal(self.signum(name))
676
677 def __setitem__(self, name, handler):
678 """Install signal handler.
679
680 Does nothing if the current platform has no support for signals,
681 or the specified signal in particular.
682 """
683 try:
684 _signal.signal(self.signum(name), handler)
685 except (AttributeError, ValueError):
686 pass
687
688 def update(self, _d_=None, **sigmap):
689 """Set signal handlers from a mapping."""
690 for name, handler in items(dict(_d_ or {}, **sigmap)):
691 self[name] = handler
692
693
694 signals = Signals()
695 get_signal = signals.signum # compat
696 install_signal_handler = signals.__setitem__ # compat
697 reset_signal = signals.reset # compat
698 ignore_signal = signals.ignore # compat
699
700
701 def signal_name(signum):
702 """Return name of signal from signal number."""
703 return SIGMAP[signum][3:]
704
705
706 def strargv(argv):
707 arg_start = 2 if 'manage' in argv[0] else 1
708 if len(argv) > arg_start:
709 return ' '.join(argv[arg_start:])
710 return ''
711
712
713 def set_process_title(progname, info=None):
714 """Set the :command:`ps` name for the currently running process.
715
716 Only works if :pypi:`setproctitle` is installed.
717 """
718 proctitle = '[{0}]'.format(progname)
719 proctitle = '{0} {1}'.format(proctitle, info) if info else proctitle
720 if _setproctitle:
721 _setproctitle.setproctitle(safe_str(proctitle))
722 return proctitle
723
724
725 if os.environ.get('NOSETPS'): # pragma: no cover
726
727 def set_mp_process_title(*a, **k):
728 """Disabled feature."""
729 else:
730
731 def set_mp_process_title(progname, info=None, hostname=None): # noqa
732 """Set the :command:`ps` name from the current process name.
733
734 Only works if :pypi:`setproctitle` is installed.
735 """
736 if hostname:
737 progname = '{0}: {1}'.format(progname, hostname)
738 name = current_process().name if current_process else 'MainProcess'
739 return set_process_title('{0}:{1}'.format(progname, name), info=info)
740
741
742 def get_errno_name(n):
743 """Get errno for string (e.g., ``ENOENT``)."""
744 if isinstance(n, string_t):
745 return getattr(errno, n)
746 return n
747
748
749 @contextmanager
750 def ignore_errno(*errnos, **kwargs):
751 """Context manager to ignore specific POSIX error codes.
752
753 Takes a list of error codes to ignore: this can be either
754 the name of the code, or the code integer itself::
755
756 >>> with ignore_errno('ENOENT'):
757 ... with open('foo', 'r') as fh:
758 ... return fh.read()
759
760 >>> with ignore_errno(errno.ENOENT, errno.EPERM):
761 ... pass
762
763 Arguments:
764 types (Tuple[Exception]): A tuple of exceptions to ignore
765 (when the errno matches). Defaults to :exc:`Exception`.
766 """
767 types = kwargs.get('types') or (Exception,)
768 errnos = [get_errno_name(errno) for errno in errnos]
769 try:
770 yield
771 except types as exc:
772 if not hasattr(exc, 'errno'):
773 raise
774 if exc.errno not in errnos:
775 raise
776
777
778 def check_privileges(accept_content):
779 uid = os.getuid() if hasattr(os, 'getuid') else 65535
780 gid = os.getgid() if hasattr(os, 'getgid') else 65535
781 euid = os.geteuid() if hasattr(os, 'geteuid') else 65535
782 egid = os.getegid() if hasattr(os, 'getegid') else 65535
783
784 if hasattr(os, 'fchown'):
785 if not all(hasattr(os, attr)
786 for attr in ['getuid', 'getgid', 'geteuid', 'getegid']):
787 raise SecurityError('suspicious platform, contact support')
788
789 if not uid or not gid or not euid or not egid:
790 if ('pickle' in accept_content or
791 'application/x-python-serialize' in accept_content):
792 if not C_FORCE_ROOT:
793 try:
794 print(ROOT_DISALLOWED.format(
795 uid=uid, euid=euid, gid=gid, egid=egid,
796 ), file=sys.stderr)
797 finally:
798 sys.stderr.flush()
799 os._exit(1)
800 warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
801 uid=uid, euid=euid, gid=gid, egid=egid,
802 )))
803
804
805 if sys.version_info < (2, 7, 7): # pragma: no cover
806 import functools
807
808 def _to_bytes_arg(fun):
809 @functools.wraps(fun)
810 def _inner(s, *args, **kwargs):
811 return fun(s.encode(), *args, **kwargs)
812 return _inner
813
814 pack = _to_bytes_arg(struct.pack)
815 unpack = _to_bytes_arg(struct.unpack)
816 unpack_from = _to_bytes_arg(struct.unpack_from)
817 else:
818 pack = struct.pack
819 unpack = struct.unpack
820 unpack_from = struct.unpack_from
821
[end of celery/platforms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
bbacdfeb39a67bc05e571bddc01865f95efbbfcf
|
Celery does not respect exception types when using a serializer different than pickle.
## Checklist
```
~ : celery -A analystick report
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://127.0.0.1:6379/1
```
## Steps to reproduce
(See example code below)
## Expected behavior
**When using a result serializer different than pickle**, the propagated exception should have the same type as the raised exception.
## Actual behavior
Celery does not respect the exception type but creates _a new type_ instead.
The main problem is that instead of using the actual type of the exception, celery will [reconstruct a type](https://github.com/celery/celery/blob/0f87321df385c5f3dca717ec2a4a9c0d25f88054/celery/utils/serialization.py#L43-L45) on the fly, but without respecting the original exception module.
For example, using the `yaml` result serializer (I believe it will be the same for `json`):
* if a task raises a `ValueError`, the caller will receive a `celery.backends.base.ValueError`
* if a task raises a `custom.module.CustomError`, the caller will receive a `celery.backends.base.CustomError`
This results in wrong behaviour when raising an exception from a task and trying to catch it from the caller.
### Minimal reproducible test
As an example, I've set up a minimal reproducible test, using a redis backend:
celery config (I can provide a full config if needed):
```python
CELERY_TASK_SERIALIZER = 'yaml'
CELERY_RESULT_SERIALIZER='yaml'
```
Tasks :
```python
# module myapp.tasks
from myapp import celery_app
@celery_app.task
def raises_valueerror():
raise ValueError('Builtin exception')
class CustomError(Exception):
pass
@celery_app.task
def raises_customerror():
raise CustomError('Custom exception', {'a':1})
```
Unittest :
```python
from myapp import tasks
from myapp.tasks import CustomError
def test_builtin_exception():
t = tasks.raises_valueerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.ValueError` instead of builtin ValueError
    assert isinstance(exc, ValueError), "Actual class %s" % (exc.__class__)
def test_custom_exception():
t = tasks.raises_customerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.CustomError` instead of builtin CustomError
assert isinstance(exc, CustomError), "1/2 Actual class is %s" % (exc.__class__)
assert isinstance(exc, tasks.CustomError), "2/2 Actual class is %s" % (exc.__class__)
```
These tests will fail with the following errors:
```
# ...
AssertionError: Actual class <class 'celery.backends.base.ValueError'>
# ...
AssertionError: 1/2 Actual class is <class 'celery.backends.base.CustomError'>
```
Another side effect of this problem is that code like the one below won't work if a subtask raises a `ValueError`, as the propagated exception won't be of the builtin type `ValueError` but `celery.backends.base.ValueError`:
```python
try:
r.get(propagate=True)
except ValueError as e:
# do something
```
The same problem applies to any custom exception.
While I'm not sure about the possible side effects, [I have a fix for this](https://github.com/jcsaaddupuy/celery/commit/8d4e613e24f6561fdaafd4e6ede582ceac882804) and I will gladly create a PR for this problem, as it seems pretty critical.
What do you think?
|
This is actually deliberate, as none of json, yaml or msgpack are able to reconstruct exceptions.
The alternative is to load exceptions of any type, which opens up security issues similar to those of pickle.
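As a rough illustration (the field names below are made up for the example, not Celery's exact wire format): a JSON-style serializer can only carry exception metadata, never the class itself, so the receiving side has to decide how to turn that metadata back into a type:
```python
import json

class CustomError(Exception):
    pass

exc = CustomError('boom', {'a': 1})

# Only the name and the args survive serialization; the class object does not.
payload = json.dumps({'exc_type': type(exc).__name__, 'exc_args': exc.args})
print(payload)  # {"exc_type": "CustomError", "exc_args": ["boom", {"a": 1}]}

# Rebuilding an exception from this blindly would mean instantiating whatever
# type the message names, which is the security concern mentioned above.
```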
If this is deliberate, perhaps this issue should be closed? Or are there any plans to try to make it work, despite the security implications?
Is this situation documented? I was not able to find anything regarding this behaviour in the documentation.
@vladcalin @estan please check https://github.com/celery/celery/pull/3592 . As you can see this is still on-going. Any feedback is appreciated!
Hey here, will this issue be fixed one day? Should we stop using custom exceptions? Is there any workaround for now?
|
2018-09-26T05:28:12Z
|
<patch>
diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -9,6 +9,7 @@
from __future__ import absolute_import, unicode_literals
import datetime
+import inspect
import sys
import time
from collections import namedtuple
@@ -34,7 +35,6 @@
from celery.utils.functional import LRUCache, arity_greater
from celery.utils.log import get_logger
from celery.utils.serialization import (create_exception_cls,
- ensure_serializable,
get_pickleable_exception,
get_pickled_exception)
@@ -236,9 +236,14 @@ def prepare_exception(self, exc, serializer=None):
serializer = self.serializer if serializer is None else serializer
if serializer in EXCEPTION_ABLE_CODECS:
return get_pickleable_exception(exc)
+ # retrieve exception original module
+ exc_module = inspect.getmodule(type(exc))
+ if exc_module:
+ exc_module = exc_module.__name__
+
return {'exc_type': type(exc).__name__,
- 'exc_message': ensure_serializable(exc.args, self.encode),
- 'exc_module': type(exc).__module__}
+ 'exc_args': exc.args,
+ 'exc_module': exc_module}
def exception_to_python(self, exc):
"""Convert serialized exception to Python exception."""
diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py
--- a/celery/utils/serialization.py
+++ b/celery/utils/serialization.py
@@ -8,11 +8,11 @@
from base64 import b64decode as base64decode
from base64 import b64encode as base64encode
from functools import partial
+from importlib import import_module
from inspect import getmro
from itertools import takewhile
from kombu.utils.encoding import bytes_to_str, str_to_bytes
-
from celery.five import (bytes_if_py2, items, python_2_unicode_compatible,
reraise, string_t)
@@ -81,6 +81,14 @@ def itermro(cls, stop):
def create_exception_cls(name, module, parent=None):
"""Dynamically create an exception class."""
+ try:
+ mod = import_module(module)
+ exc_cls = getattr(mod, name, None)
+ if exc_cls and isinstance(exc_cls, type(BaseException)):
+ return exc_cls
+ except ImportError:
+ pass
+ # we could not find the exception, fallback and create a type.
if not parent:
parent = Exception
return subclass_exception(name, parent, module)
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-17385
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Run pytest on multiple files the pytest-way
`pants test` triggers pytest on a per-file basis. This behaves differently from running `pytest <foldername>`, because each pytest session only knows about a single test file. This results in:
- the pytest report summary on the terminal is created per test file, and the only really useful test summary is the one provided by pants, which provides less information than the pytest summary.
- pytest plugins that create summary reports (such as pytest-html, pytest-json) create a single report per file (instead of a single report for all executed tests).
- pytest session-scoped fixtures are initialized repeatedly, because a new pytest session is started for each test file.
I'm just speaking for myself, but I got so used to these points just working with pytest that running Python tests in pants as it is now feels like a downgrade to me. It would be great if there were a way to have the "vanilla" pytest experience when using pants as well.
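As a rough sketch of the session-scoped fixture point (a toy example with made-up names, not taken from any real test suite):
```python
# tests/conftest.py -- hypothetical example
import pytest

@pytest.fixture(scope="session")
def expensive_resource():
    print("starting expensive resource")   # runs once per pytest *session*
    yield object()
    print("tearing down expensive resource")
```
Running `pytest tests/` sets this fixture up once for the whole run, whereas invoking pytest once per test file (roughly what the current per-file execution amounts to) sets it up and tears it down again for every single file.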
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 # Requirements
20
21 To run Pants, you need:
22
23 * Linux or macOS.
24 * Python 3.7+ discoverable on your `PATH`.
25 * A C compiler, system headers and Python headers (to compile native Python modules).
26 * Internet access (so that Pants can fully bootstrap itself).
27
28 # Credits
29
30 We release to [PyPI](https://pypi.org/pypi)
31
32 [](https://pypi.org/pypi/pantsbuild.pants)
33 [](https://pypi.org/pypi/pantsbuild.pants)
34
35 <img width="150" height="61" src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png">
36
[end of README.md]
[start of src/python/pants/backend/python/goals/coverage_py.py]
1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import configparser
7 from dataclasses import dataclass
8 from enum import Enum
9 from io import StringIO
10 from pathlib import PurePath
11 from typing import Any, MutableMapping, cast
12
13 import toml
14
15 from pants.backend.python.goals import lockfile
16 from pants.backend.python.goals.lockfile import (
17 GeneratePythonLockfile,
18 GeneratePythonToolLockfileSentinel,
19 )
20 from pants.backend.python.subsystems.python_tool_base import PythonToolBase
21 from pants.backend.python.target_types import ConsoleScript
22 from pants.backend.python.util_rules.pex import PexRequest, VenvPex, VenvPexProcess
23 from pants.backend.python.util_rules.python_sources import (
24 PythonSourceFiles,
25 PythonSourceFilesRequest,
26 )
27 from pants.core.goals.generate_lockfiles import GenerateToolLockfileSentinel
28 from pants.core.goals.test import (
29 ConsoleCoverageReport,
30 CoverageData,
31 CoverageDataCollection,
32 CoverageReport,
33 CoverageReports,
34 FilesystemCoverageReport,
35 )
36 from pants.core.util_rules.config_files import ConfigFiles, ConfigFilesRequest
37 from pants.core.util_rules.distdir import DistDir
38 from pants.engine.addresses import Address
39 from pants.engine.fs import (
40 EMPTY_DIGEST,
41 AddPrefix,
42 CreateDigest,
43 Digest,
44 DigestContents,
45 FileContent,
46 MergeDigests,
47 PathGlobs,
48 Snapshot,
49 )
50 from pants.engine.process import FallibleProcessResult, ProcessExecutionFailure, ProcessResult
51 from pants.engine.rules import Get, MultiGet, collect_rules, rule
52 from pants.engine.target import TransitiveTargets, TransitiveTargetsRequest
53 from pants.engine.unions import UnionRule
54 from pants.option.global_options import KeepSandboxes
55 from pants.option.option_types import (
56 BoolOption,
57 EnumListOption,
58 FileOption,
59 FloatOption,
60 StrListOption,
61 StrOption,
62 )
63 from pants.source.source_root import AllSourceRoots
64 from pants.util.docutil import git_url
65 from pants.util.logging import LogLevel
66 from pants.util.strutil import softwrap
67
68 """
69 An overview:
70
71 Step 1: Run each test with the appropriate `--cov` arguments.
72 In `python_test_runner.py`, we pass options so that the pytest-cov plugin runs and records which
73 lines were encountered in the test. For each test, it will save a `.coverage` file (SQLite DB
74 format).
75
76 Step 2: Merge the results with `coverage combine`.
77 We now have a bunch of individual `PytestCoverageData` values, each with their own `.coverage` file.
78 We run `coverage combine` to convert this into a single `.coverage` file.
79
80 Step 3: Generate the report with `coverage {html,xml,console}`.
81 All the files in the single merged `.coverage` file are still stripped, and we want to generate a
82 report with the source roots restored. Coverage requires that the files it's reporting on be present
83 when it generates the report, so we populate all the source files.
84
85 Step 4: `test.py` outputs the final report.
86 """
87
88
89 class CoverageReportType(Enum):
90 CONSOLE = ("console", "report")
91 XML = ("xml", None)
92 HTML = ("html", None)
93 RAW = ("raw", None)
94 JSON = ("json", None)
95 LCOV = ("lcov", None)
96
97 _report_name: str
98
99 def __new__(cls, value: str, report_name: str | None = None) -> CoverageReportType:
100 member: CoverageReportType = object.__new__(cls)
101 member._value_ = value
102 member._report_name = report_name if report_name is not None else value
103 return member
104
105 @property
106 def report_name(self) -> str:
107 return self._report_name
108
109 @property
110 def value(self) -> str:
111 return cast(str, super().value)
112
113
114 class CoverageSubsystem(PythonToolBase):
115 options_scope = "coverage-py"
116 help = "Configuration for Python test coverage measurement."
117
118 default_version = "coverage[toml]>=6.5,<6.6"
119 default_main = ConsoleScript("coverage")
120
121 register_interpreter_constraints = True
122 default_interpreter_constraints = ["CPython>=3.7,<4"]
123
124 register_lockfile = True
125 default_lockfile_resource = ("pants.backend.python.subsystems", "coverage_py.lock")
126 default_lockfile_path = "src/python/pants/backend/python/subsystems/coverage_py.lock"
127 default_lockfile_url = git_url(default_lockfile_path)
128
129 filter = StrListOption(
130 help=softwrap(
131 """
132 A list of Python modules or filesystem paths to use in the coverage report, e.g.
133             `['helloworld_test', 'helloworld/util/dirutil']`.
134
135 Both modules and directory paths are recursive: any submodules or child paths,
136 respectively, will be included.
137
138 If you leave this off, the coverage report will include every file
139 in the transitive closure of the address/file arguments; for example, `test ::`
140 will include every Python file in your project, whereas
141 `test project/app_test.py` will include `app_test.py` and any of its transitive
142 dependencies.
143 """
144 ),
145 )
146 report = EnumListOption(
147 default=[CoverageReportType.CONSOLE],
148 help="Which coverage report type(s) to emit.",
149 )
150 _output_dir = StrOption(
151 default=str(PurePath("{distdir}", "coverage", "python")),
152 advanced=True,
153 help="Path to write the Pytest Coverage report to. Must be relative to the build root.",
154 )
155 config = FileOption(
156 default=None,
157 advanced=True,
158 help=lambda cls: softwrap(
159 f"""
160 Path to an INI or TOML config file understood by coverage.py
161 (https://coverage.readthedocs.io/en/stable/config.html).
162
163 Setting this option will disable `[{cls.options_scope}].config_discovery`. Use
164 this option if the config is located in a non-standard location.
165 """
166 ),
167 )
168 config_discovery = BoolOption(
169 default=True,
170 advanced=True,
171 help=lambda cls: softwrap(
172 f"""
173 If true, Pants will include any relevant config files during runs
174 (`.coveragerc`, `setup.cfg`, `tox.ini`, and `pyproject.toml`).
175
176 Use `[{cls.options_scope}].config` instead if your config is in a
177 non-standard location.
178 """
179 ),
180 )
181 global_report = BoolOption(
182 default=False,
183 help=softwrap(
184 """
185 If true, Pants will generate a global coverage report.
186
187 The global report will include all Python source files in the workspace and not just
188 those depended on by the tests that were run.
189 """
190 ),
191 )
192 fail_under = FloatOption(
193 default=None,
194 help=softwrap(
195 """
196 Fail if the total combined coverage percentage for all tests is less than this
197 number.
198
199 Use this instead of setting fail_under in a coverage.py config file,
200 as the config will apply to each test separately, while you typically want this
201 to apply to the combined coverage for all tests run.
202
203 Note that you must generate at least one (non-raw) coverage report for this
204 check to trigger.
205
206 Note also that if you specify a non-integral value, you must
207 also set [report] precision properly in the coverage.py config file to make use
208 of the decimal places. See https://coverage.readthedocs.io/en/latest/config.html.
209 """
210 ),
211 )
212
213 def output_dir(self, distdir: DistDir) -> PurePath:
214 return PurePath(self._output_dir.format(distdir=distdir.relpath))
215
216 @property
217 def config_request(self) -> ConfigFilesRequest:
218 # Refer to https://coverage.readthedocs.io/en/stable/config.html.
219 return ConfigFilesRequest(
220 specified=self.config,
221 specified_option_name=f"[{self.options_scope}].config",
222 discovery=self.config_discovery,
223 check_existence=[".coveragerc"],
224 check_content={
225 "setup.cfg": b"[coverage:",
226 "tox.ini": b"[coverage:]",
227 "pyproject.toml": b"[tool.coverage",
228 },
229 )
230
231
232 class CoveragePyLockfileSentinel(GeneratePythonToolLockfileSentinel):
233 resolve_name = CoverageSubsystem.options_scope
234
235
236 @rule
237 def setup_coverage_lockfile(
238 _: CoveragePyLockfileSentinel, coverage: CoverageSubsystem
239 ) -> GeneratePythonLockfile:
240 return GeneratePythonLockfile.from_tool(coverage)
241
242
243 @dataclass(frozen=True)
244 class PytestCoverageData(CoverageData):
245 address: Address
246 digest: Digest
247
248
249 class PytestCoverageDataCollection(CoverageDataCollection):
250 element_type = PytestCoverageData
251
252
253 @dataclass(frozen=True)
254 class CoverageConfig:
255 digest: Digest
256 path: str
257
258
259 class InvalidCoverageConfigError(Exception):
260 pass
261
262
263 def _parse_toml_config(fc: FileContent) -> MutableMapping[str, Any]:
264 try:
265 return toml.loads(fc.content.decode())
266 except toml.TomlDecodeError as exc:
267 raise InvalidCoverageConfigError(
268 softwrap(
269 f"""
270 Failed to parse the coverage.py config `{fc.path}` as TOML. Please either fix
271 the config or update `[coverage-py].config` and/or
272 `[coverage-py].config_discovery`.
273
274 Parse error: {repr(exc)}
275 """
276 )
277 )
278
279
280 def _parse_ini_config(fc: FileContent) -> configparser.ConfigParser:
281 cp = configparser.ConfigParser()
282 try:
283 cp.read_string(fc.content.decode())
284 return cp
285 except configparser.Error as exc:
286 raise InvalidCoverageConfigError(
287 softwrap(
288 f"""
289 Failed to parse the coverage.py config `{fc.path}` as INI. Please either fix
290 the config or update `[coverage-py].config` and/or `[coverage-py].config_discovery`.
291
292 Parse error: {repr(exc)}
293 """
294 )
295 )
296
297
298 def _update_config(fc: FileContent) -> FileContent:
299 if PurePath(fc.path).suffix == ".toml":
300 all_config = _parse_toml_config(fc)
301 tool = all_config.setdefault("tool", {})
302 coverage = tool.setdefault("coverage", {})
303 run = coverage.setdefault("run", {})
304 run["relative_files"] = True
305 if "pytest.pex/*" not in run.get("omit", []):
306 run["omit"] = [*run.get("omit", []), "pytest.pex/*"]
307 return FileContent(fc.path, toml.dumps(all_config).encode())
308
309 cp = _parse_ini_config(fc)
310 run_section = "coverage:run" if fc.path in ("tox.ini", "setup.cfg") else "run"
311 if not cp.has_section(run_section):
312 cp.add_section(run_section)
313 cp.set(run_section, "relative_files", "True")
314 omit_elements = cp[run_section].get("omit", "").split("\n") or ["\n"]
315 if "pytest.pex/*" not in omit_elements:
316 omit_elements.append("pytest.pex/*")
317 cp.set(run_section, "omit", "\n".join(omit_elements))
318 stream = StringIO()
319 cp.write(stream)
320 return FileContent(fc.path, stream.getvalue().encode())
321
322
323 def get_branch_value_from_config(fc: FileContent) -> bool:
324 # Note that coverage's default value for the branch setting is False, which we mirror here.
325 if PurePath(fc.path).suffix == ".toml":
326 all_config = _parse_toml_config(fc)
327 return bool(
328 all_config.get("tool", {}).get("coverage", {}).get("run", {}).get("branch", False)
329 )
330
331 cp = _parse_ini_config(fc)
332 run_section = "coverage:run" if fc.path in ("tox.ini", "setup.cfg") else "run"
333 if not cp.has_section(run_section):
334 return False
335 return cp.getboolean(run_section, "branch", fallback=False)
336
337
338 @rule
339 async def create_or_update_coverage_config(coverage: CoverageSubsystem) -> CoverageConfig:
340 config_files = await Get(ConfigFiles, ConfigFilesRequest, coverage.config_request)
341 if config_files.snapshot.files:
342 digest_contents = await Get(DigestContents, Digest, config_files.snapshot.digest)
343 file_content = _update_config(digest_contents[0])
344 else:
345 cp = configparser.ConfigParser()
346 cp.add_section("run")
347 cp.set("run", "relative_files", "True")
348 cp.set("run", "omit", "\npytest.pex/*")
349 stream = StringIO()
350 cp.write(stream)
351 # We know that .coveragerc doesn't exist, so it's fine to create one.
352 file_content = FileContent(".coveragerc", stream.getvalue().encode())
353 digest = await Get(Digest, CreateDigest([file_content]))
354 return CoverageConfig(digest, file_content.path)
355
356
357 @dataclass(frozen=True)
358 class CoverageSetup:
359 pex: VenvPex
360
361
362 @rule
363 async def setup_coverage(coverage: CoverageSubsystem) -> CoverageSetup:
364 pex = await Get(VenvPex, PexRequest, coverage.to_pex_request())
365 return CoverageSetup(pex)
366
367
368 @dataclass(frozen=True)
369 class MergedCoverageData:
370 coverage_data: Digest
371 addresses: tuple[Address, ...]
372
373
374 @rule(desc="Merge Pytest coverage data", level=LogLevel.DEBUG)
375 async def merge_coverage_data(
376 data_collection: PytestCoverageDataCollection,
377 coverage_setup: CoverageSetup,
378 coverage_config: CoverageConfig,
379 coverage: CoverageSubsystem,
380 source_roots: AllSourceRoots,
381 ) -> MergedCoverageData:
382 if len(data_collection) == 1 and not coverage.global_report:
383 coverage_data = data_collection[0]
384 return MergedCoverageData(coverage_data.digest, (coverage_data.address,))
385
386 coverage_digest_gets = []
387 coverage_data_file_paths = []
388 addresses = []
389 for data in data_collection:
390 # We prefix each .coverage file with its corresponding address to avoid collisions.
391 coverage_digest_gets.append(
392 Get(Digest, AddPrefix(data.digest, prefix=data.address.path_safe_spec))
393 )
394 coverage_data_file_paths.append(f"{data.address.path_safe_spec}/.coverage")
395 addresses.append(data.address)
396
397 if coverage.global_report:
398 # It's important to set the `branch` value in the empty base report to the value it will
399 # have when running on real inputs, so that the reports are of the same type, and can be
400 # merged successfully. Otherwise we may get "Can't combine arc data with line data" errors.
401 # See https://github.com/pantsbuild/pants/issues/14542 .
402 config_contents = await Get(DigestContents, Digest, coverage_config.digest)
403 branch = get_branch_value_from_config(config_contents[0]) if config_contents else False
404 global_coverage_base_dir = PurePath("__global_coverage__")
405 global_coverage_config_path = global_coverage_base_dir / "pyproject.toml"
406 global_coverage_config_content = toml.dumps(
407 {
408 "tool": {
409 "coverage": {
410 "run": {
411 "relative_files": True,
412 "source": [source_root.path for source_root in source_roots],
413 "branch": branch,
414 }
415 }
416 }
417 }
418 ).encode()
419
420 no_op_exe_py_path = global_coverage_base_dir / "no-op-exe.py"
421
422 all_sources_digest, no_op_exe_py_digest, global_coverage_config_digest = await MultiGet(
423 Get(
424 Digest,
425 PathGlobs(globs=[f"{source_root.path}/**/*.py" for source_root in source_roots]),
426 ),
427 Get(Digest, CreateDigest([FileContent(path=str(no_op_exe_py_path), content=b"")])),
428 Get(
429 Digest,
430 CreateDigest(
431 [
432 FileContent(
433 path=str(global_coverage_config_path),
434 content=global_coverage_config_content,
435 ),
436 ]
437 ),
438 ),
439 )
440 extra_sources_digest = await Get(
441 Digest, MergeDigests((all_sources_digest, no_op_exe_py_digest))
442 )
443 input_digest = await Get(
444 Digest, MergeDigests((extra_sources_digest, global_coverage_config_digest))
445 )
446 result = await Get(
447 ProcessResult,
448 VenvPexProcess(
449 coverage_setup.pex,
450 argv=("run", "--rcfile", str(global_coverage_config_path), str(no_op_exe_py_path)),
451 input_digest=input_digest,
452 output_files=(".coverage",),
453 description="Create base global Pytest coverage report.",
454 level=LogLevel.DEBUG,
455 ),
456 )
457 coverage_digest_gets.append(
458 Get(
459 Digest, AddPrefix(digest=result.output_digest, prefix=str(global_coverage_base_dir))
460 )
461 )
462 coverage_data_file_paths.append(str(global_coverage_base_dir / ".coverage"))
463 else:
464 extra_sources_digest = EMPTY_DIGEST
465
466 input_digest = await Get(Digest, MergeDigests(await MultiGet(coverage_digest_gets)))
467 result = await Get(
468 ProcessResult,
469 VenvPexProcess(
470 coverage_setup.pex,
471 # We tell combine to keep the original input files, to aid debugging in the sandbox.
472 argv=("combine", "--keep", *sorted(coverage_data_file_paths)),
473 input_digest=input_digest,
474 output_files=(".coverage",),
475 description=f"Merge {len(coverage_data_file_paths)} Pytest coverage reports.",
476 level=LogLevel.DEBUG,
477 ),
478 )
479 return MergedCoverageData(
480 await Get(Digest, MergeDigests((result.output_digest, extra_sources_digest))),
481 tuple(addresses),
482 )
483
484
485 @rule(desc="Generate Pytest coverage reports", level=LogLevel.DEBUG)
486 async def generate_coverage_reports(
487 merged_coverage_data: MergedCoverageData,
488 coverage_setup: CoverageSetup,
489 coverage_config: CoverageConfig,
490 coverage_subsystem: CoverageSubsystem,
491 keep_sandboxes: KeepSandboxes,
492 distdir: DistDir,
493 ) -> CoverageReports:
494 """Takes all Python test results and generates a single coverage report."""
495 transitive_targets = await Get(
496 TransitiveTargets, TransitiveTargetsRequest(merged_coverage_data.addresses)
497 )
498 sources = await Get(
499 PythonSourceFiles,
500 # Coverage sometimes includes non-Python files in its `.coverage` data. We need to
501 # ensure that they're present when generating the report. We include all the files included
502 # by `pytest_runner.py`.
503 PythonSourceFilesRequest(
504 transitive_targets.closure, include_files=True, include_resources=True
505 ),
506 )
507 input_digest = await Get(
508 Digest,
509 MergeDigests(
510 (
511 merged_coverage_data.coverage_data,
512 coverage_config.digest,
513 sources.source_files.snapshot.digest,
514 )
515 ),
516 )
517
518 pex_processes = []
519 report_types = []
520 result_snapshot = await Get(Snapshot, Digest, merged_coverage_data.coverage_data)
521 coverage_reports: list[CoverageReport] = []
522 output_dir: PurePath = coverage_subsystem.output_dir(distdir)
523 for report_type in coverage_subsystem.report:
524 if report_type == CoverageReportType.RAW:
525 coverage_reports.append(
526 FilesystemCoverageReport(
527 # We don't know yet if the coverage is sufficient, so we let some other report
528 # trigger the failure if necessary.
529 coverage_insufficient=False,
530 report_type=CoverageReportType.RAW.value,
531 result_snapshot=result_snapshot,
532 directory_to_materialize_to=output_dir,
533 report_file=output_dir / ".coverage",
534 )
535 )
536 continue
537
538 report_types.append(report_type)
539 output_file = (
540 f"coverage.{report_type.value}"
541 if report_type
542 in {CoverageReportType.XML, CoverageReportType.JSON, CoverageReportType.LCOV}
543 else None
544 )
545 args = [report_type.report_name, f"--rcfile={coverage_config.path}"]
546 if coverage_subsystem.fail_under is not None:
547 args.append(f"--fail-under={coverage_subsystem.fail_under}")
548 pex_processes.append(
549 VenvPexProcess(
550 coverage_setup.pex,
551 argv=tuple(args),
552 input_digest=input_digest,
553 output_directories=("htmlcov",) if report_type == CoverageReportType.HTML else None,
554 output_files=(output_file,) if output_file else None,
555 description=f"Generate Pytest {report_type.report_name} coverage report.",
556 level=LogLevel.DEBUG,
557 )
558 )
559 results = await MultiGet(
560 Get(FallibleProcessResult, VenvPexProcess, process) for process in pex_processes
561 )
562 for proc, res in zip(pex_processes, results):
563 if res.exit_code not in {0, 2}:
564 # coverage.py uses exit code 2 if --fail-under triggers, in which case the
565 # reports are still generated.
566 raise ProcessExecutionFailure(
567 res.exit_code,
568 res.stdout,
569 res.stderr,
570 proc.description,
571 keep_sandboxes=keep_sandboxes,
572 )
573
574 # In practice if one result triggers --fail-under, they all will, but no need to rely on that.
575 result_exit_codes = tuple(res.exit_code for res in results)
576 result_stdouts = tuple(res.stdout for res in results)
577 result_snapshots = await MultiGet(Get(Snapshot, Digest, res.output_digest) for res in results)
578
579 coverage_reports.extend(
580 _get_coverage_report(output_dir, report_type, exit_code != 0, stdout, snapshot)
581 for (report_type, exit_code, stdout, snapshot) in zip(
582 report_types, result_exit_codes, result_stdouts, result_snapshots
583 )
584 )
585
586 return CoverageReports(tuple(coverage_reports))
587
588
589 def _get_coverage_report(
590 output_dir: PurePath,
591 report_type: CoverageReportType,
592 coverage_insufficient: bool,
593 result_stdout: bytes,
594 result_snapshot: Snapshot,
595 ) -> CoverageReport:
596 if report_type == CoverageReportType.CONSOLE:
597 return ConsoleCoverageReport(coverage_insufficient, result_stdout.decode())
598
599 try:
600 report_file = {
601 CoverageReportType.HTML: output_dir / "htmlcov" / "index.html",
602 CoverageReportType.XML: output_dir / "coverage.xml",
603 CoverageReportType.JSON: output_dir / "coverage.json",
604 CoverageReportType.LCOV: output_dir / "coverage.lcov",
605 }[report_type]
606 except KeyError:
607 raise ValueError(f"Invalid coverage report type: {report_type}") from None
608
609 return FilesystemCoverageReport(
610 coverage_insufficient=coverage_insufficient,
611 report_type=report_type.value,
612 result_snapshot=result_snapshot,
613 directory_to_materialize_to=output_dir,
614 report_file=report_file,
615 )
616
617
618 def rules():
619 return [
620 *collect_rules(),
621 *lockfile.rules(),
622 UnionRule(CoverageDataCollection, PytestCoverageDataCollection),
623 UnionRule(GenerateToolLockfileSentinel, CoveragePyLockfileSentinel),
624 ]
625
[end of src/python/pants/backend/python/goals/coverage_py.py]
[start of src/python/pants/backend/python/subsystems/pytest.py]
1 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import os.path
7 from dataclasses import dataclass
8 from typing import Iterable
9
10 from packaging.utils import canonicalize_name as canonicalize_project_name
11
12 from pants.backend.python.goals import lockfile
13 from pants.backend.python.goals.export import ExportPythonTool, ExportPythonToolSentinel
14 from pants.backend.python.goals.lockfile import (
15 GeneratePythonLockfile,
16 GeneratePythonToolLockfileSentinel,
17 )
18 from pants.backend.python.pip_requirement import PipRequirement
19 from pants.backend.python.subsystems.python_tool_base import ExportToolOption, PythonToolBase
20 from pants.backend.python.subsystems.setup import PythonSetup
21 from pants.backend.python.target_types import (
22 ConsoleScript,
23 InterpreterConstraintsField,
24 PythonTestsExtraEnvVarsField,
25 PythonTestSourceField,
26 PythonTestsTimeoutField,
27 PythonTestsXdistConcurrencyField,
28 SkipPythonTestsField,
29 )
30 from pants.backend.python.util_rules.partition import _find_all_unique_interpreter_constraints
31 from pants.core.goals.generate_lockfiles import GenerateToolLockfileSentinel
32 from pants.core.goals.test import RuntimePackageDependenciesField, TestFieldSet
33 from pants.core.util_rules.config_files import ConfigFilesRequest
34 from pants.core.util_rules.environments import EnvironmentField
35 from pants.engine.rules import collect_rules, rule
36 from pants.engine.target import Target
37 from pants.engine.unions import UnionRule
38 from pants.option.option_types import ArgsListOption, BoolOption, FileOption, SkipOption, StrOption
39 from pants.util.docutil import bin_name, doc_url, git_url
40 from pants.util.logging import LogLevel
41 from pants.util.memo import memoized_method
42 from pants.util.strutil import softwrap
43
44
45 @dataclass(frozen=True)
46 class PythonTestFieldSet(TestFieldSet):
47 required_fields = (PythonTestSourceField,)
48
49 source: PythonTestSourceField
50 interpreter_constraints: InterpreterConstraintsField
51 timeout: PythonTestsTimeoutField
52 runtime_package_dependencies: RuntimePackageDependenciesField
53 extra_env_vars: PythonTestsExtraEnvVarsField
54 xdist_concurrency: PythonTestsXdistConcurrencyField
55 environment: EnvironmentField
56
57 @classmethod
58 def opt_out(cls, tgt: Target) -> bool:
59 return tgt.get(SkipPythonTestsField).value
60
61
62 class PyTest(PythonToolBase):
63 options_scope = "pytest"
64 name = "Pytest"
65 help = "The pytest Python test framework (https://docs.pytest.org/)."
66
67 # This should be compatible with requirements.txt, although it can be more precise.
68 # TODO: To fix this, we should allow using a `target_option` referring to a
69 # `python_requirement` to override the version.
70 # Pytest 7.1.0 introduced a significant bug that is apparently not fixed as of 7.1.1 (the most
71 # recent release at the time of writing). see https://github.com/pantsbuild/pants/issues/14990.
72 # TODO: Once this issue is fixed, loosen this to allow the version to float above the bad ones.
73 # E.g., as default_version = "pytest>=7,<8,!=7.1.0,!=7.1.1"
74 default_version = "pytest==7.0.1"
75 default_extra_requirements = ["pytest-cov>=2.12,!=2.12.1,<3.1", "pytest-xdist>=2.5,<3"]
76
77 default_main = ConsoleScript("pytest")
78
79 register_lockfile = True
80 default_lockfile_resource = ("pants.backend.python.subsystems", "pytest.lock")
81 default_lockfile_path = "src/python/pants/backend/python/subsystems/pytest.lock"
82 default_lockfile_url = git_url(default_lockfile_path)
83
84 args = ArgsListOption(example="-k test_foo --quiet", passthrough=True)
85 junit_family = StrOption(
86 default="xunit2",
87 advanced=True,
88 help=softwrap(
89 """
90 The format of generated junit XML files. See
91 https://docs.pytest.org/en/latest/reference.html#confval-junit_family.
92 """
93 ),
94 )
95 execution_slot_var = StrOption(
96 default=None,
97 advanced=True,
98 help=softwrap(
99 """
100 If a non-empty string, the process execution slot id (an integer) will be exposed
101 to tests under this environment variable name.
102 """
103 ),
104 )
105 config = FileOption(
106 default=None,
107 advanced=True,
108 help=lambda cls: softwrap(
109 f"""
110 Path to a config file understood by Pytest
111 (https://docs.pytest.org/en/latest/reference/customize.html#configuration-file-formats).
112 Setting this option will disable `[{cls.options_scope}].config_discovery`. Use
113 this option if the config is located in a non-standard location.
114 """
115 ),
116 )
117 config_discovery = BoolOption(
118 default=True,
119 advanced=True,
120 help=lambda cls: softwrap(
121 f"""
122 If true, Pants will include all relevant Pytest config files (e.g. `pytest.ini`)
123 during runs. See
124 https://docs.pytest.org/en/stable/customize.html#finding-the-rootdir for where
125 config files should be located for Pytest to discover them.
126
127 Use `[{cls.options_scope}].config` instead if your config is in a
128 non-standard location.
129 """
130 ),
131 )
132 xdist_enabled = BoolOption(
133 default=False,
134 advanced=False,
135 help=softwrap(
136 """
137 If true, Pants will use `pytest-xdist` (https://pytest-xdist.readthedocs.io/en/latest/)
138 to parallelize tests within each `python_test` target.
139
140 NOTE: Enabling `pytest-xdist` can cause high-level scoped fixtures (for example `session`)
141 to execute more than once. See the `pytest-xdist` docs for more info:
142 https://pypi.org/project/pytest-xdist/#making-session-scoped-fixtures-execute-only-once
143 """
144 ),
145 )
146
147 export = ExportToolOption()
148
149 skip = SkipOption("test")
150
151 @property
152 def all_requirements(self) -> tuple[str, ...]:
153 return (self.version, *self.extra_requirements)
154
155 def config_request(self, dirs: Iterable[str]) -> ConfigFilesRequest:
156 # Refer to https://docs.pytest.org/en/stable/customize.html#finding-the-rootdir for how
157 # config files are discovered.
158 check_existence = []
159 check_content = {}
160 for d in ("", *dirs):
161 check_existence.append(os.path.join(d, "pytest.ini"))
162 check_content[os.path.join(d, "pyproject.toml")] = b"[tool.pytest.ini_options]"
163 check_content[os.path.join(d, "tox.ini")] = b"[pytest]"
164 check_content[os.path.join(d, "setup.cfg")] = b"[tool:pytest]"
165
166 return ConfigFilesRequest(
167 specified=self.config,
168 specified_option_name=f"[{self.options_scope}].config",
169 discovery=self.config_discovery,
170 check_existence=check_existence,
171 check_content=check_content,
172 )
173
174 @memoized_method
175 def validate_pytest_cov_included(self) -> None:
176 for s in self.extra_requirements:
177 try:
178 req = PipRequirement.parse(s).project_name
179 except Exception as e:
180 raise ValueError(f"Invalid requirement '{s}' in `[pytest].extra_requirements`: {e}")
181 if canonicalize_project_name(req) == "pytest-cov":
182 return
183
184 raise ValueError(
185 softwrap(
186 f"""
187 You set `[test].use_coverage`, but `[pytest].extra_requirements` is missing
188 `pytest-cov`, which is needed to collect coverage data.
189
190 This happens when overriding the `extra_requirements` option. Please either explicitly
191 add back `pytest-cov` or use `extra_requirements.add` to keep Pants's default, rather than
192 overriding it. Run `{bin_name()} help-advanced pytest` to see the default version of
193 `pytest-cov` and see {doc_url('options#list-values')} for more on adding vs.
194 overriding list options.
195 """
196 )
197 )
198
199
200 class PytestLockfileSentinel(GeneratePythonToolLockfileSentinel):
201 resolve_name = PyTest.options_scope
202
203
204 @rule(
205 desc=softwrap(
206 """
207 Determine all Python interpreter versions used by Pytest in your project
208 (for lockfile generation)
209 """
210 ),
211 level=LogLevel.DEBUG,
212 )
213 async def setup_pytest_lockfile(
214 _: PytestLockfileSentinel, pytest: PyTest, python_setup: PythonSetup
215 ) -> GeneratePythonLockfile:
216 if not pytest.uses_custom_lockfile:
217 return GeneratePythonLockfile.from_tool(pytest)
218
219 constraints = await _find_all_unique_interpreter_constraints(python_setup, PythonTestFieldSet)
220 return GeneratePythonLockfile.from_tool(pytest, constraints)
221
222
223 class PytestExportSentinel(ExportPythonToolSentinel):
224 pass
225
226
227 @rule(
228 desc=softwrap(
229 """
230 Determine all Python interpreter versions used by Pytest in your project
231 (for `export` goal)
232 """
233 ),
234 level=LogLevel.DEBUG,
235 )
236 async def pytest_export(
237 _: PytestExportSentinel, pytest: PyTest, python_setup: PythonSetup
238 ) -> ExportPythonTool:
239 if not pytest.export:
240 return ExportPythonTool(resolve_name=pytest.options_scope, pex_request=None)
241 constraints = await _find_all_unique_interpreter_constraints(python_setup, PythonTestFieldSet)
242 return ExportPythonTool(
243 resolve_name=pytest.options_scope,
244 pex_request=pytest.to_pex_request(interpreter_constraints=constraints),
245 )
246
247
248 def rules():
249 return (
250 *collect_rules(),
251 *lockfile.rules(),
252 UnionRule(GenerateToolLockfileSentinel, PytestLockfileSentinel),
253 UnionRule(ExportPythonToolSentinel, PytestExportSentinel),
254 )
255
[end of src/python/pants/backend/python/subsystems/pytest.py]
[start of src/python/pants/backend/terraform/hcl2_parser.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import sys
5 from pathlib import PurePath
6 from typing import Set
7
8 #
9 # Note: This file is used as a pex entry point in the execution sandbox.
10 #
11
12
13 # PurePath does not have the Path.resolve method which resolves ".." components, thus we need to
14 # code our own version for PurePath's.
15 def resolve_pure_path(base: PurePath, relative_path: PurePath) -> PurePath:
16 parts = list(base.parts)
17 for component in relative_path.parts:
18 if component == ".":
19 pass
20 elif component == "..":
21 if not parts:
22 raise ValueError(f"Relative path {relative_path} escapes from path {base}.")
23 parts.pop()
24 else:
25 parts.append(component)
26
27 return PurePath(*parts)
28
29
30 def extract_module_source_paths(path: PurePath, raw_content: bytes) -> Set[str]:
31 # Import here so we can still test this file with pytest (since `hcl2` is not present in
32 # normal Pants venv.)
33 import hcl2 # type: ignore[import] # pants: no-infer-dep
34
35 content = raw_content.decode("utf-8")
36 parsed_content = hcl2.loads(content)
37
38 # Note: The `module` key is a list where each entry is a dict with a single entry where the key is the
39 # module name and the values are a dict for that module's actual values.
40 paths = set()
41 for wrapped_module in parsed_content.get("module", []):
42 values = list(wrapped_module.values())[
43 0
44 ] # the module is the sole entry in `wrapped_module`
45 source = values.get("source", "")
46
47 # Local paths to modules must begin with "." or ".." as per
48 # https://www.terraform.io/docs/language/modules/sources.html#local-paths.
49 if source.startswith("./") or source.startswith("../"):
50 try:
51 resolved_path = resolve_pure_path(path, PurePath(source))
52 paths.add(str(resolved_path))
53 except ValueError:
54 pass
55
56 return paths
57
58
59 def main(args):
60 paths = set()
61 for filename in args:
62 with open(filename, "rb") as f:
63 content = f.read()
64 paths |= extract_module_source_paths(PurePath(filename).parent, content)
65
66 for path in paths:
67 print(path)
68
69
70 if __name__ == "__main__":
71 main(sys.argv[1:])
72
[end of src/python/pants/backend/terraform/hcl2_parser.py]
[start of src/python/pants/conftest.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from pathlib import Path
5
6 import pytest
7
8 # The top-level `pants` module must be a namespace package, because we build two dists from it
9 # (pantsbuild.pants, and pantsbuild.pants.testutil) and consumers of these dists need to be
10 # able to import from both.
11 #
12 # In fact it is an *implicit* namespace package - that is, it has no __init__.py file.
13 # See https://packaging.python.org/guides/packaging-namespace-packages/ .
14 #
15 # Unfortunately, the presence or absence of __init__.py affects how pytest determines the
16 # module names for test code. For details see
17 # https://docs.pytest.org/en/stable/goodpractices.html#test-package-name .
18 #
19 # Usually this doesn't matter, as tests don't typically care about their own module name.
20 # But we have tests (notably those in src/python/pants/engine) that create @rules and
21 # expect them to have certain names. And @rule names are generated from the name of the module
22 # containing the rule function...
23 #
24 # To allow those tests to work naturally (with expected module names relative to `src/python`)
25 # we artificially create `src/python/pants/__init__.py` in the test sandbox, to force
26 # pytest to determine module names relative to `src/python` (instead of `src/python/pants`).
27 #
28 # Note that while this makes the (implicit) namespace package into a regular package,
29 # that is fine at test time. We don't consume testutil from a dist but from source, in the same
30 # source root (src/python). We only need `pants` to be a namespace package in the dists we create.
31 namespace_init_path = Path("src/python/pants/__init__.py")
32
33
34 def pytest_sessionstart(session) -> None:
35 if namespace_init_path.exists():
36 raise Exception(
37 f"In order for `pants` to be a namespace package, {namespace_init_path} must not "
38 f"exist on disk. See the explanation in {__file__}."
39 )
40 namespace_init_path.touch()
41
42
43 def pytest_sessionfinish(session) -> None:
44     # Technically unnecessary, but nice if people are running tests directly from repo
45 # (not using pants).
46 namespace_init_path.unlink()
47
48
49 @pytest.fixture(autouse=True, scope="session")
50 def dedicated_target_fields():
51 """Ensures we follow our convention of dedicated source and dependencies field per-target.
52
53     This helps ensure that plugin authors can do dependency inference on _specific_ field types, and
54 not have to filter targets using generic field types.
55
56 Note that this can't help us if a target type should use an _even more specialized_ dependencies
57 field type (E.g. 100 different target subclasses could use 1 custom dependencies field type,
58 when in reality there should be many more). However, this is a good sanity check.
59 """
60 from pants.engine.target import Dependencies, SourcesField, Target
61
62 for cls in Target.__subclasses__():
63 if hasattr(cls, "core_fields"):
64 for field_cls in cls.core_fields:
65 # NB: We want to check for all kinds of SourcesFields, like SingleSourceField and
66 # MultipleSourcesField.
67 if (
68 issubclass(field_cls, SourcesField)
69 and field_cls.__module__ is SourcesField.__module__
70 ):
71 raise ValueError(
72 f"{cls.__name__} should have a dedicated field type for the source(s) field."
73 )
74 if (
75 issubclass(field_cls, Dependencies)
76 and field_cls.__module__ is Dependencies.__module__
77 ):
78 raise ValueError(
79 f"{cls.__name__} should have a dedicated field type for the dependencies field."
80 )
81
[end of src/python/pants/conftest.py]
[start of src/python/pants/testutil/python_interpreter_selection.py]
1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import os
7 import subprocess
8 from functools import lru_cache
9 from typing import Iterable
10 from unittest import skipIf
11
12 import _pytest.mark.structures
13 import pytest
14
15 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
16
17 PY_2 = "2"
18 PY_3 = "3"
19
20 PY_27 = "2.7"
21 PY_36 = "3.6"
22 PY_37 = "3.7"
23 PY_38 = "3.8"
24 PY_39 = "3.9"
25
26
27 def has_python_version(version):
28 """Returns `True` if the current system has the specified version of python.
29
30 :param string version: A python version string, such as 2.7, 3.
31 """
32 # TODO: Tests that skip unless a python interpreter is present often need the path to that
33 # interpreter, and so end up calling python_interpreter_path again. Find a way to streamline this.
34 return python_interpreter_path(version) is not None
35
36
37 @lru_cache()
38 def python_interpreter_path(version):
39 """Returns the interpreter path if the current system has the specified version of python.
40
41 :param string version: A python version string, such as 2.7, 3.
42 :returns: the normalized path to the interpreter binary if found; otherwise `None`
43 :rtype: string
44 """
45 try:
46 command = [f"python{version}", "-c", "import sys; print(sys.executable)"]
47 py_path = subprocess.check_output(command).decode().strip()
48 return os.path.realpath(py_path)
49 except (subprocess.CalledProcessError, FileNotFoundError):
50 return None
51
52
53 def skip_unless_all_pythons_present(*versions):
54 """A decorator that only runs the decorated test method if all of the specified pythons are
55 present.
56
57 :param string *versions: Python version strings, such as 2.7, 3.
58 """
59 missing_versions = [v for v in versions if not has_python_version(v)]
60 if len(missing_versions) == 1:
61 return skipIf(True, f"Could not find python {missing_versions[0]} on system. Skipping.")
62 elif len(missing_versions) > 1:
63 return skipIf(
64 True,
65 "Skipping due to the following missing required pythons: {}".format(
66 ", ".join(missing_versions)
67 ),
68 )
69 else:
70 return skipIf(False, "All required pythons present, continuing with test!")
71
72
73 def skip_unless_python27_present(func):
74 """A test skip decorator that only runs a test method if python2.7 is present."""
75 return skip_unless_all_pythons_present(PY_27)(func)
76
77
78 def skip_unless_python3_present(func):
79 """A test skip decorator that only runs a test method if python3 is present."""
80 return skip_unless_all_pythons_present(PY_3)(func)
81
82
83 def skip_unless_python36_present(func):
84 """A test skip decorator that only runs a test method if python3.6 is present."""
85 return skip_unless_all_pythons_present(PY_36)(func)
86
87
88 def skip_unless_python37_present(func):
89 """A test skip decorator that only runs a test method if python3.7 is present."""
90 return skip_unless_all_pythons_present(PY_37)(func)
91
92
93 def skip_unless_python38_present(func):
94 """A test skip decorator that only runs a test method if python3.8 is present."""
95 return skip_unless_all_pythons_present(PY_38)(func)
96
97
98 def skip_unless_python39_present(func):
99 """A test skip decorator that only runs a test method if python3.9 is present."""
100 return skip_unless_all_pythons_present(PY_39)(func)
101
102
103 def skip_unless_python27_and_python3_present(func):
104 """A test skip decorator that only runs a test method if python2.7 and python3 are present."""
105 return skip_unless_all_pythons_present(PY_27, PY_3)(func)
106
107
108 def skip_unless_python27_and_python36_present(func):
109 """A test skip decorator that only runs a test method if python2.7 and python3.6 are present."""
110 return skip_unless_all_pythons_present(PY_27, PY_36)(func)
111
112
113 def skip_unless_python36_and_python37_present(func):
114 """A test skip decorator that only runs a test method if python3.6 and python3.7 are present."""
115 return skip_unless_all_pythons_present(PY_36, PY_37)(func)
116
117
118 def all_major_minor_python_versions(
119 constraints: Iterable[str],
120 ) -> tuple[_pytest.mark.structures.ParameterSet, ...]:
121 """All major.minor Python versions used by the interpreter constraints.
122
123 This is intended to be used with `@pytest.mark.parametrize()` to run a test with every relevant
124 Python interpreter.
125 """
126 versions = InterpreterConstraints(constraints).partition_into_major_minor_versions(
127 # Please update this when new stable Python versions are released to CI.
128 interpreter_universe=["2.7", "3.6", "3.7", "3.8", "3.9"]
129 )
130
131 return tuple(
132 pytest.param(
133 version,
134 marks=pytest.mark.skipif(
135 not has_python_version(version),
136 reason=f"Could not find python {version} on system. Skipping.",
137 ),
138 )
139 for version in versions
140 )
141
[end of src/python/pants/testutil/python_interpreter_selection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
421230db4724a4faa810872d1ac2185f206e0847
|
Run pytest on multiple files the pytest-way
`pants test` triggers pytest on a per-file basis. This behaves differently from running `pytest <foldername>`, because each pytest session only sees a single test file. This results in:
- the pytest report summary on the terminal is created per test file, so the only overall summary is the one provided by pants, which contains less information than the pytest summary.
- pytest plugins that create summary reports (such as pytest-html, pytest-json) create one report per file instead of a single report for all executed tests.
- pytest session-scoped fixtures are initialized repeatedly, because a new pytest session is started for each test file (see the sketch below).
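A minimal sketch of the last point (the fixture name and setup cost are invented): under per-file runs, the setup below is paid once per test file rather than once per run.

```
# conftest.py -- illustrative sketch only
import time

import pytest


@pytest.fixture(scope="session")
def expensive_service():
    # With `pytest <foldername>`, this setup runs once for the whole session.
    # With per-file runs, each test file gets its own pytest session, so the
    # setup runs once per file.
    time.sleep(5)  # stand-in for expensive setup, e.g. starting a service
    yield "service-handle"
```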
I'm just speaking for myself, but I got so used to these points just working when using pytest that running python tests in pants as it is now feels like a downgrade to me. It would be great if there were a way to get the "vanilla" pytest experience when using pants as well.
|
Also feeling the pain with session-wide fixtures while using `pytest-bdd`. Pants generates a `cucumber.json` report per file and overwrites the previous one - it's all rather unfortunate.
Related comment https://github.com/pantsbuild/pants/issues/15197#issuecomment-1107888762
We are also feeling the pain of per-file (vs. per-directory) `pytest` runs.
We run `django.setup()` in a `conftest.py` near the root of our project. In our pre-Pants CI system that runs `pytest <folder>` we see this takes a total of 120-150ms per workflow run. In our Pants-based CI, we see it takes a total of ~17640000-23500000ms (~294-392 minutes!!). We split our tests across 16 workers (using `PANTS_TEST_SHARD`) each with 14cpu and 26gb RAM, and we still see individual shards take upwards of 20 minutes to complete in CI 😞
More thoughts: A solution for this would (unfortunately) need to be more complex than a global config toggle. Everything under the `django.setup()` I mentioned above could be batched together, but we have another directory tree that `django.setup()`s with different settings in its own `conftest.py` - these would need to run in separate batches.
Thinking out loud: Could we have a new field on `python_test` (and maybe other `_test` targets?) for setting a "batch ID"? If set, targets with the same value are batched into the same `pytest` process. It could be paired with `__defaults__` to batch entire directories at a time.
> Thinking out loud: Could we have a new field on python_test (and maybe other _test targets?) for setting a "batch ID"? If set, targets with the same value are batched into the same pytest process. It could be paired with __defaults__ to batch entire directories at a time.
I really like an idea like that. What makes me nervous is when Pants tries to get too clever.
@danxmoran with your _specific_ use case, would you still want caching to be at the individual file level? We can run 5 test files in a single batch, and then still cache at the individual file level? If that is not essential for you, this actually becomes pretty easy to implement!
@Eric-Arellano per-file caching would be nice, but I think not essential for us in the short-term... all the code in this tree is so intertwined that if any one `python_source` changes, ~every `python_test` in the tree gets invalidated. I do think it'd be important for per-file change-detection to still work, though - i.e. if one test in batch "A" is changed I wouldn't want every other test marked with that batch to be immediately marked as changed, too. Not sure how much code reuse there is between the `changed` subsystem and the caching implementation.
How would you feel about a global "batch size" setting instead? The `test` rules could then deterministically group test targets with identical `Field` values into batches of that size. It's more magical, but explicit batching would be pretty boilerplatey.
More magic / less boilerplate sounds good to me 😄 though I'm not fully understanding how I could manually enforce that two groups of tests _never_ run in the same batch with just a batch-size parameter. For example, if I have:
```
src/
django_monolith_apps/
conftest.py
...
projects/
django_microservice1/
conftest.py
...
django_microservice2/
conftest.py
...
```
Everything under `src/django_monolith_apps/` can be batched together arbitrarily, and ditto for each of the `django_microserviceN`s, but it is _not_ safe to batch files across those trees (because their `conftest`s configure different test settings). How would that work with just a global "batch size" setting?
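For concreteness, a hypothetical sketch of how the per-directory "batch ID" idea from earlier in the thread could be paired with `__defaults__` to keep these trees apart (the `batch_compatibility_tag` name matches the field referenced in the patch below; the exact BUILD syntax here is illustrative):

```
# src/django_monolith_apps/BUILD
# Tag every python_tests target in this subtree so its files may share a
# pytest process, but never with files from the microservice trees.
__defaults__({python_tests: dict(batch_compatibility_tag="django-monolith")})

# projects/django_microservice1/BUILD
__defaults__({python_tests: dict(batch_compatibility_tag="microservice1")})

# projects/django_microservice2/BUILD
__defaults__({python_tests: dict(batch_compatibility_tag="microservice2")})
```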
> how I could manually enforce that two groups of tests _never_ run in the same batch with just a batch-size parameter.
I'm not sure that you need to. Assuming that `pytest` properly supports nested/overlapping `conftest.py` files overriding one another (...?), then batching across the boundary would be safe. Maybe slightly less efficient though, because both configurations would be used, but at different points in the batch run.
But automatic batching seems likely to make up the performance difference by virtue of not needing manual adjustment.
Ah, the need to keep the batches separate comes from Django, not `pytest` itself. AFAIK `django.setup()` (which we call in the `conftest.py`s) can only be called once per process - the 2nd+ calls are no-ops, so you can't swap back and forth between settings / sets of apps.
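A minimal sketch of the kind of `conftest.py` being described (the settings module name is invented); because only the first `django.setup()` in a process takes effect, files from trees with different settings cannot safely share a pytest process:

```
# src/django_monolith_apps/conftest.py -- illustrative sketch
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "monolith.settings")
django.setup()  # a later call with different settings would be a silent no-op
```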
@stuhood @Eric-Arellano is there anything I can do to help push this one forward while you're busy working on the docker-environment support? My first thought is to work through all the test-related types and replace single addresses with collections (i.e. in `TestResult` [here](https://github.com/pantsbuild/pants/blob/main/src/python/pants/core/goals/test.py#L64)), but leave all the collections as singletons for now. That way the plumbing is in place.
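As a rough illustration of that plumbing step, mirroring what the final patch below does for `PytestCoverageData`: the single-address field becomes a collection of addresses, which can stay a one-element tuple until batching lands.

```
from __future__ import annotations

from dataclasses import dataclass

from pants.core.goals.test import CoverageData
from pants.engine.addresses import Address
from pants.engine.fs import Digest


@dataclass(frozen=True)
class PytestCoverageData(CoverageData):
    # Was `address: Address`; a whole batch of test files can now share one
    # merged .coverage digest.
    addresses: tuple[Address, ...]
    digest: Digest
```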
> Ah, the need to keep the batches separate comes from Django, not `pytest` itself. AFAIK `django.setup()` (which we call in the `conftest.py`s) can only be called once per process - the 2nd+ calls are no-ops, so you can't swap back and forth between settings / sets of apps.
I didn't quite understand this. Ignoring Pants, if you run pytest directly on all your tests (across multiple conftests), does the right thing happen, somehow? Or would you have to manually batch the pytest invocations in that case as well?
> Or would you have to manually batch the pytest invocations in that case as well?
Yes, before Pants we had separate CI jobs/workflows for the different Django projects.
Hi, I've got similar problems. I am using pants inside a monorepo with hundreds of tests. When running `pants test` I noticed that each test takes about 7 secs. When using pytest it only takes 200ms. I saw someone else had similar problems, so I tried the following steps: https://app.slack.com/client/T046T6T8L/C046T6T9U/thread/C046T6T9U-1658183838.164319
I saw that running --setup-only makes nearly no difference in execution time, so clearly the setup takes too long and the actual test runs very fast. The output of --setup-only gets stuck at SETUP S django_db_setup (fixtures used: django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_use_migrations, django_test_environment) for roughly 6 seconds, which is 6/7 of the whole test duration. Running all tests in this project would take hours when every test needs 7-10 secs using pants. With pytest it takes 4 minutes.
pytest.ini
```
[pytest]
DJANGO_SETTINGS_MODULE = django_core.settings
# -- standard arguments:
addopts:
--nomigrations
--create-db
-vv
```
pants.toml
```
[pytest]
lockfile = "lockfiles/python/tools/pytest"
version = "pytest>=7.1.2,<7.2"
extra_requirements.add = [
"pytest-django==4.5.2",
"pytest-icdiff==0.5",
"mixer==7.2.1"
]
config_discovery = true
```
> > Or would you have to manually batch the pytest invocations in that case as well?
>
> Yes, before Pants we had separate CI jobs/workflows for the different Django projects.
Ouch! OK, so that's a wrinkle to take care of.
@danxmoran's design for this is over here: https://docs.google.com/document/d/1U0Q43bRod_EeVP4eQpcN36NMlxZ4CpRzTl2gMQ5HHvg/edit#
Update: I'm working on translating @thejcannon's recently-added pattern of partitioning `lint`/`fmt` targets into the `test` goal. I plan to land a PR refactoring the plumbing but still testing one-module-per-process, then follow up with a `pytest`-specific PR to implement the new partitioning/batching logic.
|
2022-10-28T16:11:23Z
|
<patch>
diff --git a/src/python/pants/backend/python/goals/coverage_py.py b/src/python/pants/backend/python/goals/coverage_py.py
--- a/src/python/pants/backend/python/goals/coverage_py.py
+++ b/src/python/pants/backend/python/goals/coverage_py.py
@@ -242,11 +242,11 @@ def setup_coverage_lockfile(
@dataclass(frozen=True)
class PytestCoverageData(CoverageData):
- address: Address
+ addresses: tuple[Address, ...]
digest: Digest
-class PytestCoverageDataCollection(CoverageDataCollection):
+class PytestCoverageDataCollection(CoverageDataCollection[PytestCoverageData]):
element_type = PytestCoverageData
@@ -381,18 +381,20 @@ async def merge_coverage_data(
) -> MergedCoverageData:
if len(data_collection) == 1 and not coverage.global_report:
coverage_data = data_collection[0]
- return MergedCoverageData(coverage_data.digest, (coverage_data.address,))
+ return MergedCoverageData(coverage_data.digest, coverage_data.addresses)
coverage_digest_gets = []
coverage_data_file_paths = []
- addresses = []
+ addresses: list[Address] = []
for data in data_collection:
+ path_prefix = data.addresses[0].path_safe_spec
+ if len(data.addresses) > 1:
+ path_prefix = f"{path_prefix}+{len(data.addresses)-1}-others"
+
# We prefix each .coverage file with its corresponding address to avoid collisions.
- coverage_digest_gets.append(
- Get(Digest, AddPrefix(data.digest, prefix=data.address.path_safe_spec))
- )
- coverage_data_file_paths.append(f"{data.address.path_safe_spec}/.coverage")
- addresses.append(data.address)
+ coverage_digest_gets.append(Get(Digest, AddPrefix(data.digest, prefix=path_prefix)))
+ coverage_data_file_paths.append(f"{path_prefix}/.coverage")
+ addresses.extend(data.addresses)
if coverage.global_report:
# It's important to set the `branch` value in the empty base report to the value it will
diff --git a/src/python/pants/backend/python/goals/pytest_runner.py b/src/python/pants/backend/python/goals/pytest_runner.py
--- a/src/python/pants/backend/python/goals/pytest_runner.py
+++ b/src/python/pants/backend/python/goals/pytest_runner.py
@@ -1,11 +1,14 @@
# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
+from __future__ import annotations
+
import logging
import re
from abc import ABC, abstractmethod
+from collections import defaultdict
from dataclasses import dataclass
-from typing import Any, Optional, Tuple
+from typing import Optional, Tuple
from pants.backend.python.goals.coverage_py import (
CoverageConfig,
@@ -39,6 +42,7 @@
)
from pants.core.subsystems.debug_adapter import DebugAdapterSubsystem
from pants.core.util_rules.config_files import ConfigFiles, ConfigFilesRequest
+from pants.core.util_rules.partitions import Partition, PartitionerType, Partitions
from pants.core.util_rules.source_files import SourceFiles, SourceFilesRequest
from pants.engine.addresses import Address
from pants.engine.collection import Collection
@@ -118,24 +122,25 @@ class AllPytestPluginSetups(Collection[PytestPluginSetup]):
# TODO: Why is this necessary? We should be able to use `PythonTestFieldSet` as the rule param.
@dataclass(frozen=True)
class AllPytestPluginSetupsRequest:
- address: Address
+ addresses: tuple[Address, ...]
@rule
async def run_all_setup_plugins(
request: AllPytestPluginSetupsRequest, union_membership: UnionMembership
) -> AllPytestPluginSetups:
- wrapped_tgt = await Get(
- WrappedTarget, WrappedTargetRequest(request.address, description_of_origin="<infallible>")
- )
- applicable_setup_request_types = tuple(
- request
- for request in union_membership.get(PytestPluginSetupRequest)
- if request.is_applicable(wrapped_tgt.target)
+ wrapped_tgts = await MultiGet(
+ Get(WrappedTarget, WrappedTargetRequest(address, description_of_origin="<infallible>"))
+ for address in request.addresses
)
+ setup_requests = [
+ request_type(wrapped_tgt.target) # type: ignore[abstract]
+ for request_type in union_membership.get(PytestPluginSetupRequest)
+ for wrapped_tgt in wrapped_tgts
+ if request_type.is_applicable(wrapped_tgt.target)
+ ]
setups = await MultiGet(
- Get(PytestPluginSetup, PytestPluginSetupRequest, request(wrapped_tgt.target)) # type: ignore[misc, abstract]
- for request in applicable_setup_request_types
+ Get(PytestPluginSetup, PytestPluginSetupRequest, request) for request in setup_requests
)
return AllPytestPluginSetups(setups)
@@ -151,9 +156,33 @@ async def run_all_setup_plugins(
_EXTRA_OUTPUT_DIR = "extra-output"
+@dataclass(frozen=True)
+class TestMetadata:
+ """Parameters that must be constant for all test targets in a `pytest` batch."""
+
+ interpreter_constraints: InterpreterConstraints
+ extra_env_vars: tuple[str, ...]
+ xdist_concurrency: int | None
+ resolve: str
+ environment: str
+ compatability_tag: str | None = None
+
+ # Prevent this class from being detected by pytest as a test class.
+ __test__ = False
+
+ @property
+ def description(self) -> str | None:
+ if not self.compatability_tag:
+ return None
+
+ # TODO: Put more info here.
+ return self.compatability_tag
+
+
@dataclass(frozen=True)
class TestSetupRequest:
- field_set: PythonTestFieldSet
+ field_sets: Tuple[PythonTestFieldSet, ...]
+ metadata: TestMetadata
is_debug: bool
main: Optional[MainSpecification] = None # Defaults to pytest.main
prepend_argv: Tuple[str, ...] = ()
@@ -181,23 +210,22 @@ async def setup_pytest_for_target(
request: TestSetupRequest,
pytest: PyTest,
test_subsystem: TestSubsystem,
- python_setup: PythonSetup,
coverage_config: CoverageConfig,
coverage_subsystem: CoverageSubsystem,
test_extra_env: TestExtraEnv,
global_options: GlobalOptions,
) -> TestSetup:
+ addresses = tuple(field_set.address for field_set in request.field_sets)
+
transitive_targets, plugin_setups = await MultiGet(
- Get(TransitiveTargets, TransitiveTargetsRequest([request.field_set.address])),
- Get(AllPytestPluginSetups, AllPytestPluginSetupsRequest(request.field_set.address)),
+ Get(TransitiveTargets, TransitiveTargetsRequest(addresses)),
+ Get(AllPytestPluginSetups, AllPytestPluginSetupsRequest(addresses)),
)
all_targets = transitive_targets.closure
- interpreter_constraints = InterpreterConstraints.create_from_compatibility_fields(
- [request.field_set.interpreter_constraints], python_setup
- )
+ interpreter_constraints = request.metadata.interpreter_constraints
- requirements_pex_get = Get(Pex, RequirementsPexRequest([request.field_set.address]))
+ requirements_pex_get = Get(Pex, RequirementsPexRequest(addresses))
pytest_pex_get = Get(
Pex,
PexRequest(
@@ -217,10 +245,12 @@ async def setup_pytest_for_target(
# Get the file names for the test_target so that we can specify to Pytest precisely which files
# to test, rather than using auto-discovery.
- field_set_source_files_get = Get(SourceFiles, SourceFilesRequest([request.field_set.source]))
+ field_set_source_files_get = Get(
+ SourceFiles, SourceFilesRequest([field_set.source for field_set in request.field_sets])
+ )
field_set_extra_env_get = Get(
- EnvironmentVars, EnvironmentVarsRequest(request.field_set.extra_env_vars.value or ())
+ EnvironmentVars, EnvironmentVarsRequest(request.metadata.extra_env_vars)
)
(
@@ -242,7 +272,7 @@ async def setup_pytest_for_target(
local_dists = await Get(
LocalDistsPex,
LocalDistsPexRequest(
- [request.field_set.address],
+ addresses,
internal_only=True,
interpreter_constraints=interpreter_constraints,
sources=prepared_sources,
@@ -297,7 +327,12 @@ async def setup_pytest_for_target(
results_file_name = None
if not request.is_debug:
- results_file_name = f"{request.field_set.address.path_safe_spec}.xml"
+ results_file_prefix = request.field_sets[0].address.path_safe_spec
+ if len(request.field_sets) > 1:
+ results_file_prefix = (
+ f"batch-of-{results_file_prefix}+{len(request.field_sets)-1}-files"
+ )
+ results_file_name = f"{results_file_prefix}.xml"
add_opts.extend(
(f"--junitxml={results_file_name}", "-o", f"junit_family={pytest.junit_family}")
)
@@ -340,12 +375,24 @@ async def setup_pytest_for_target(
xdist_concurrency = 0
if pytest.xdist_enabled and not request.is_debug:
- concurrency = request.field_set.xdist_concurrency.value
+ concurrency = request.metadata.xdist_concurrency
if concurrency is None:
contents = await Get(DigestContents, Digest, field_set_source_files.snapshot.digest)
concurrency = _count_pytest_tests(contents)
xdist_concurrency = concurrency
+ timeout_seconds: int | None = None
+ for field_set in request.field_sets:
+ timeout = field_set.timeout.calculate_from_global_options(test_subsystem, pytest)
+ if timeout:
+ if timeout_seconds:
+ timeout_seconds += timeout
+ else:
+ timeout_seconds = timeout
+
+ run_description = request.field_sets[0].address.spec
+ if len(request.field_sets) > 1:
+ run_description = f"batch of {run_description} and {len(request.field_sets)-1} other files"
process = await Get(
Process,
VenvPexProcess(
@@ -362,12 +409,10 @@ async def setup_pytest_for_target(
input_digest=input_digest,
output_directories=(_EXTRA_OUTPUT_DIR,),
output_files=output_files,
- timeout_seconds=request.field_set.timeout.calculate_from_global_options(
- test_subsystem, pytest
- ),
+ timeout_seconds=timeout_seconds,
execution_slot_variable=pytest.execution_slot_var,
concurrency_available=xdist_concurrency,
- description=f"Run Pytest for {request.field_set.address}",
+ description=f"Run Pytest for {run_description}",
level=LogLevel.DEBUG,
cache_scope=cache_scope,
),
@@ -378,27 +423,72 @@ async def setup_pytest_for_target(
class PyTestRequest(TestRequest):
tool_subsystem = PyTest
field_set_type = PythonTestFieldSet
+ partitioner_type = PartitionerType.CUSTOM
+
+
+@rule(desc="Partition Pytest", level=LogLevel.DEBUG)
+async def partition_python_tests(
+ request: PyTestRequest.PartitionRequest[PythonTestFieldSet],
+ python_setup: PythonSetup,
+) -> Partitions[PythonTestFieldSet, TestMetadata]:
+ partitions = []
+ compatible_tests = defaultdict(list)
+
+ for field_set in request.field_sets:
+ metadata = TestMetadata(
+ interpreter_constraints=InterpreterConstraints.create_from_compatibility_fields(
+ [field_set.interpreter_constraints], python_setup
+ ),
+ extra_env_vars=tuple(sorted(field_set.extra_env_vars.value or ())),
+ xdist_concurrency=field_set.xdist_concurrency.value,
+ resolve=field_set.resolve.normalized_value(python_setup),
+ environment=field_set.environment.value,
+ compatability_tag=field_set.batch_compatibility_tag.value,
+ )
+
+ if not metadata.compatability_tag:
+ # Tests without a compatibility tag are assumed to be incompatible with all others.
+ partitions.append(Partition((field_set,), metadata))
+ else:
+ # Group tests by their common metadata.
+ compatible_tests[metadata].append(field_set)
+
+ for metadata, field_sets in compatible_tests.items():
+ partitions.append(Partition(tuple(field_sets), metadata))
+
+ return Partitions(partitions)
@rule(desc="Run Pytest", level=LogLevel.DEBUG)
-async def run_python_test(
- batch: PyTestRequest.Batch[PythonTestFieldSet, Any],
+async def run_python_tests(
+ batch: PyTestRequest.Batch[PythonTestFieldSet, TestMetadata],
test_subsystem: TestSubsystem,
) -> TestResult:
- field_set = batch.single_element
- setup = await Get(TestSetup, TestSetupRequest(field_set, is_debug=False))
+ setup = await Get(
+ TestSetup, TestSetupRequest(batch.elements, batch.partition_metadata, is_debug=False)
+ )
result = await Get(FallibleProcessResult, Process, setup.process)
+ def warning_description() -> str:
+ description = batch.elements[0].address.spec
+ if len(batch.elements) > 1:
+ description = f"batch containing {description} and {len(batch.elements)-1} other files"
+ if batch.partition_metadata.description:
+ description = f"{description} ({batch.partition_metadata.description})"
+ return description
+
coverage_data = None
if test_subsystem.use_coverage:
coverage_snapshot = await Get(
Snapshot, DigestSubset(result.output_digest, PathGlobs([".coverage"]))
)
if coverage_snapshot.files == (".coverage",):
- coverage_data = PytestCoverageData(field_set.address, coverage_snapshot.digest)
+ coverage_data = PytestCoverageData(
+ tuple(field_set.address for field_set in batch.elements), coverage_snapshot.digest
+ )
else:
- logger.warning(f"Failed to generate coverage data for {field_set.address}.")
+ logger.warning(f"Failed to generate coverage data for {warning_description()}.")
xml_results_snapshot = None
if setup.results_file_name:
@@ -406,7 +496,7 @@ async def run_python_test(
Snapshot, DigestSubset(result.output_digest, PathGlobs([setup.results_file_name]))
)
if xml_results_snapshot.files != (setup.results_file_name,):
- logger.warning(f"Failed to generate JUnit XML data for {field_set.address}.")
+ logger.warning(f"Failed to generate JUnit XML data for {warning_description()}.")
extra_output_snapshot = await Get(
Snapshot, DigestSubset(result.output_digest, PathGlobs([f"{_EXTRA_OUTPUT_DIR}/**"]))
)
@@ -414,9 +504,9 @@ async def run_python_test(
Snapshot, RemovePrefix(extra_output_snapshot.digest, _EXTRA_OUTPUT_DIR)
)
- return TestResult.from_fallible_process_result(
+ return TestResult.from_batched_fallible_process_result(
result,
- address=field_set.address,
+ batch=batch,
output_setting=test_subsystem.output,
coverage_data=coverage_data,
xml_results=xml_results_snapshot,
@@ -426,9 +516,11 @@ async def run_python_test(
@rule(desc="Set up Pytest to run interactively", level=LogLevel.DEBUG)
async def debug_python_test(
- batch: PyTestRequest.Batch[PythonTestFieldSet, Any]
+ batch: PyTestRequest.Batch[PythonTestFieldSet, TestMetadata]
) -> TestDebugRequest:
- setup = await Get(TestSetup, TestSetupRequest(batch.single_element, is_debug=True))
+ setup = await Get(
+ TestSetup, TestSetupRequest(batch.elements, batch.partition_metadata, is_debug=True)
+ )
return TestDebugRequest(
InteractiveProcess.from_process(
setup.process, forward_signals_to_process=False, restartable=True
@@ -438,7 +530,7 @@ async def debug_python_test(
@rule(desc="Set up debugpy to run an interactive Pytest session", level=LogLevel.DEBUG)
async def debugpy_python_test(
- batch: PyTestRequest.Batch[PythonTestFieldSet, Any],
+ batch: PyTestRequest.Batch[PythonTestFieldSet, TestMetadata],
debugpy: DebugPy,
debug_adapter: DebugAdapterSubsystem,
pytest: PyTest,
@@ -448,7 +540,8 @@ async def debugpy_python_test(
setup = await Get(
TestSetup,
TestSetupRequest(
- batch.single_element,
+ batch.elements,
+ batch.partition_metadata,
is_debug=True,
main=debugpy.main,
prepend_argv=debugpy.get_args(debug_adapter, pytest.main),
diff --git a/src/python/pants/backend/python/subsystems/pytest.py b/src/python/pants/backend/python/subsystems/pytest.py
--- a/src/python/pants/backend/python/subsystems/pytest.py
+++ b/src/python/pants/backend/python/subsystems/pytest.py
@@ -21,6 +21,8 @@
from pants.backend.python.target_types import (
ConsoleScript,
InterpreterConstraintsField,
+ PythonResolveField,
+ PythonTestsBatchCompatibilityTagField,
PythonTestsExtraEnvVarsField,
PythonTestSourceField,
PythonTestsTimeoutField,
@@ -52,6 +54,8 @@ class PythonTestFieldSet(TestFieldSet):
runtime_package_dependencies: RuntimePackageDependenciesField
extra_env_vars: PythonTestsExtraEnvVarsField
xdist_concurrency: PythonTestsXdistConcurrencyField
+ batch_compatibility_tag: PythonTestsBatchCompatibilityTagField
+ resolve: PythonResolveField
environment: EnvironmentField
@classmethod
diff --git a/src/python/pants/backend/python/target_types.py b/src/python/pants/backend/python/target_types.py
--- a/src/python/pants/backend/python/target_types.py
+++ b/src/python/pants/backend/python/target_types.py
@@ -878,6 +878,43 @@ class PythonTestsXdistConcurrencyField(IntField):
)
+class PythonTestsBatchCompatibilityTagField(StringField):
+ alias = "batch_compatibility_tag"
+ help = softwrap(
+ """
+ An arbitrary value used to mark the test files belonging to this target as valid for
+ batched execution.
+
+ It's _sometimes_ safe to run multiple `python_test`s within a single `pytest` process,
+ and doing so can give significant wins by allowing reuse of expensive test setup /
+ teardown logic. To opt into this behavior, set this field to an arbitrary non-empty
+ string on all the `python_test` targets that are safe/compatible to run in the same
+ process.
+
+ If this field is left unset on a target, the target is assumed to be incompatible with
+ all others and will run in a dedicated `pytest` process.
+
+ If this field is set on a target, and its value is different from the value on some
+ other `python_test`, then the two targets are explicitly incompatible and are guaranteed
+ to not run in the same `pytest` process.
+
+ If this field is set on a target, and its value is the same as the value on some other
+ `python_test`, then the two targets are explicitly compatible and _may_ run in the same
+ `pytest` process. Compatible tests may not end up in the same `pytest` batch if:
+
+ * There are "too many" compatible tests in a partition, as determined by the \
+ `[test].batch_size` config parameter, or
+ * Compatible tests have some incompatibility in Pants metadata (i.e. different \
+ `resolve`s or `extra_env_vars`).
+
+ When tests with the same `batch_compatibility_tag` have incompatibilities in some other
+ Pants metadata, they will be automatically split into separate batches. This way you can
+ set a high-level `batch_compatibility_tag` using `__defaults__` and then have tests
+ continue to work as you tweak BUILD metadata on specific targets.
+ """
+ )
+
+
class SkipPythonTestsField(BoolField):
alias = "skip_tests"
default = False
@@ -890,6 +927,7 @@ class SkipPythonTestsField(BoolField):
PythonRunGoalUseSandboxField,
PythonTestsTimeoutField,
PythonTestsXdistConcurrencyField,
+ PythonTestsBatchCompatibilityTagField,
RuntimePackageDependenciesField,
PythonTestsExtraEnvVarsField,
InterpreterConstraintsField,
</patch>
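To illustrate the help text of the new `batch_compatibility_tag` field introduced by the patch above, here is a hypothetical BUILD snippet. The target names and tag value are invented, and the exact `__defaults__`/generator semantics should be checked against the Pants documentation; this is a sketch, not part of the patch.

```python
# BUILD (hypothetical): opt every python_test in this directory into batching.
__defaults__({python_test: dict(batch_compatibility_tag="shared-db-fixture")})

python_tests(name="tests")

# Tests generated here now share a compatibility tag and *may* run in a single
# pytest process, subject to [test].batch_size and to the rest of their
# metadata (resolve, extra_env_vars, ...) also matching.
```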
|
[]
|
[]
| |||
ipython__ipython-13745
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add completion type (_jupyter_types_experimental) for dictionary keys, file paths, etc
I would very much like IPython to return a completion type for all completions, not just for the completions from Jedi. This would not only make it possible to display the type to the user in frontends, but also allow users to create custom rules (e.g. show paths first or show paths last). I am happy to work on a PR and maintain this part of the codebase afterwards. Would you consider a refactor of the current completions to allow returning the type to be within the scope of IPython's current plans?
Currently completions are being passed around in three forms:
- the new (unstable) [Completion](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L355) class mostly used downstream of Jedi
- the Jedi Completion class which is an implementation detail of Jedi
- the [match tuples](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L2149-L2154) as generated from the results of _matchers_
Interestingly the match tuple contains `origin` which could _almost_ be used as a type for the completions (except that this is an identifier for debug purposes and not a user-friendly name). I would propose that in a similar fashion to how the [origin is being discovered from the name of the matcher method](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L2119-L2131), each of the non-jedi [completion matchers](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L1180-L1198) would get a "type" property. This could be set by a decorator, like so:
```python
def matcher(type):
    def decorate(func):
        func.completion_type = type
        return func
    return decorate


class IPCompleter(Completer):
    # [...]
    @matcher(type="magic")
    def magic_matches(self, text: str):
        """Match magics"""
```
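To make the proposal concrete, here is a minimal, self-contained sketch (not IPython's actual code) of how an aggregation step could consume the `completion_type` attribute set by such a decorator. The toy matcher body and the dict-shaped result are made up purely for illustration.

```python
def matcher(type):
    def decorate(func):
        func.completion_type = type
        return func
    return decorate


@matcher(type="magic")
def magic_matches(text):
    """Toy matcher that, like today's matchers, returns plain strings."""
    return [m for m in ("%time", "%timeit", "%debug") if m.startswith(text)]


def aggregate(text, matchers):
    """Tag each plain-string match with the type declared on its matcher."""
    results = []
    for m in matchers:
        completion_type = getattr(m, "completion_type", "<unknown>")
        results.extend({"text": t, "type": completion_type} for t in m(text))
    return results


print(aggregate("%ti", [magic_matches]))
# -> [{'text': '%time', 'type': 'magic'}, {'text': '%timeit', 'type': 'magic'}]
```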
Then the match could get formalized as a named tuple (so that the benefits of the current lightweight approach are kept but an additional benefit of readability and ability to annotate types is obtained):
```python
from typing import NamedTuple
# could also be named CompletionMatch or SimpleCompletion
class SimpleMatch(NamedTuple):
    text: str
    origin: str
    type: str
```
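As a follow-up sketch (again illustrative only; the `start`/`end` values are placeholders and the field names are an assumption mirroring today's `_jupyter_types_experimental` entries), typed matches could then be turned into the metadata a frontend sees:

```python
from typing import NamedTuple


class SimpleMatch(NamedTuple):
    text: str
    origin: str
    type: str


matches = [
    SimpleMatch(text="'a-key'", origin="IPCompleter.dict_key_matches", type="dict key"),
    SimpleMatch(text="%alias", origin="IPCompleter.magic_matches", type="magic"),
]

# start/end would normally come from the token bounds computed by the completer.
experimental = [{"start": 0, "end": 1, "text": m.text, "type": m.type} for m in matches]
print(experimental)
```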
I am not sure what should happen next though. The logic appears quite complex to me. I wonder if some of it could also be refactored to simplify `_CompleteResult` and what happens with it. I am sure that more than one good solution exists. I wonder if you have any suggestions or thoughts.
Improving dict autocomplete
We have found the dict autocomplete to show too many options at times. As an example, given this:
```
$ ls
a-dir-name a-file
$ ipython
```
The following appears:

Here you can see that a number of unhelpful things are suggested, including:
1. Files
2. Magics
This is made much worse by something like [pyflyby](https://github.com/deshaw/pyflyby) where all possible imports that start with `a` are also listed.
Ideas:
1. Only show valid keys of the dict (when possible)
2. Sort to show dict keys first and all other options after
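A minimal sketch of idea 1 from the list above, assuming the object before `[` can be looked up safely in the user namespace; quoting details, non-string keys, and evaluation safety are ignored here, and the function name is invented:

```python
def dict_key_completions(namespace, line):
    """Return only keys of the dict being indexed, instead of files/magics/etc."""
    obj_name, _, prefix = line.rpartition("[")
    obj = namespace.get(obj_name)
    if not isinstance(obj, dict):
        return []
    prefix = prefix.strip("'\"")
    return [repr(k) for k in obj if isinstance(k, str) and k.startswith(prefix)]


ns = {"a_dict": {"a-key": 1, "another-key": 2, "b": 3}}
print(dict_key_completions(ns, "a_dict['a"))  # ["'a-key'", "'another-key'"]
```

Sorting (idea 2) would then amount to emitting these matches ahead of the other matchers' results.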
Provisional Completer API Thoughts
I've been using the provisional completion API for an Emacs mode targeting inferior-iPython. The docs mention interest in feedback on the API, so I thought I'd open an issue to discuss it. Some thoughts and suggestions:
- The current provisional API was put in place 5 years ago. Are there any plans to change it in the near term? If not, perhaps the provisional context requirement should be dropped, and the docs should no longer indicate the functionality as experimental. After all, `[Tab]` has been providing these completions for 5 years, so they are pretty well vetted!
- It's very useful that the completion API provides the bounds of the string which it is completing. What would also be very helpful is getting the bounds of the full expression iPython + jedi are evaluating for the completion itself. For example, in `x.y[0].z[Tab]` IPCompleter will indicate `z` as the completion text to alter, but presumably iPython knows that it's evaluating an entire expression `x.y[0].z___`. This would be very useful for users of the API that want to query functions for docstrings, etc. during completion.
Thanks for iPython.
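For anyone wanting to experiment with the provisional API mentioned above, here is a small example meant to be run from inside an IPython session; the attribute names reflect the provisional `Completion` objects as currently documented, so treat the details as subject to change.

```python
from IPython.core.completer import provisionalcompleter

ip = get_ipython()  # available inside an IPython session
code = "x.y[0].z"
with provisionalcompleter():
    for c in ip.Completer.completions(code, len(code)):
        # Each Completion carries replacement bounds within `code`, the
        # suggested text, and a type (at the time of this issue, mostly
        # populated only for Jedi-derived results).
        print(c.start, c.end, c.text, c.type)
```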
</issue>
<code>
[start of README.rst]
1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=main
2 :target: https://codecov.io/github/ipython/ipython?branch=main
3
4 .. image:: https://img.shields.io/pypi/v/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://github.com/ipython/ipython/actions/workflows/test.yml/badge.svg
8 :target: https://github.com/ipython/ipython/actions/workflows/test.yml)
9
10 .. image:: https://www.codetriage.com/ipython/ipython/badges/users.svg
11 :target: https://www.codetriage.com/ipython/ipython/
12
13 .. image:: https://raster.shields.io/badge/Follows-NEP29-brightgreen.png
14 :target: https://numpy.org/neps/nep-0029-deprecation_policy.html
15
16
17 ===========================================
18 IPython: Productive Interactive Computing
19 ===========================================
20
21 Overview
22 ========
23
24 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io
25 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use, and
26 contribute to the project.
27 IPython (Interactive Python) is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers introspection, rich media, shell syntax, tab completion, and history.
28
29 **IPython versions and Python Support**
30
31 Starting with IPython 7.10, IPython follows `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_
32
33 **IPython 7.17+** requires Python version 3.7 and above.
34
35 **IPython 7.10+** requires Python version 3.6 and above.
36
37 **IPython 7.0** requires Python version 3.5 and above.
38
39 **IPython 6.x** requires Python version 3.3 and above.
40
41 **IPython 5.x LTS** is the compatible release for Python 2.7.
42 If you require Python 2 support, you **must** use IPython 5.x LTS. Please
43 update your project configurations and requirements as necessary.
44
45
46 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*.
47 See the `Jupyter installation docs <https://jupyter.readthedocs.io/en/latest/install.html>`__
48 if you want to use these.
49
50 Main features of IPython
51 ========================
52 Comprehensive object introspection.
53
54 Input history, persistent across sessions.
55
56 Caching of output results during a session with automatically generated references.
57
58 Extensible tab completion, with support by default for completion of python variables and keywords, filenames and function keywords.
59
60 Extensible system of ‘magic’ commands for controlling the environment and performing many tasks related to IPython or the operating system.
61
62 A rich configuration system with easy switching between different setups (simpler than changing $PYTHONSTARTUP environment variables every time).
63
64 Session logging and reloading.
65
66 Extensible syntax processing for special purpose situations.
67
68 Access to the system shell with user-extensible alias system.
69
70 Easily embeddable in other Python programs and GUIs.
71
72 Integrated access to the pdb debugger and the Python profiler.
73
74
75 Development and Instant running
76 ===============================
77
78 You can find the latest version of the development documentation on `readthedocs
79 <https://ipython.readthedocs.io/en/latest/>`_.
80
81 You can run IPython from this directory without even installing it system-wide
82 by typing at the terminal::
83
84 $ python -m IPython
85
86 Or see the `development installation docs
87 <https://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_
88 for the latest revision on read the docs.
89
90 Documentation and installation instructions for older version of IPython can be
91 found on the `IPython website <https://ipython.org/documentation.html>`_
92
93
94
95 IPython requires Python version 3 or above
96 ==========================================
97
98 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or
99 3.2.
100
101 For a version compatible with Python 2.7, please install the 5.x LTS Long Term
102 Support version.
103
104 If you are encountering this error message you are likely trying to install or
105 use IPython from source. You need to checkout the remote 5.x branch. If you are
106 using git the following should work::
107
108 $ git fetch origin
109 $ git checkout 5.x
110
111 If you encounter this error message with a regular install of IPython, then you
112 likely need to update your package manager, for example if you are using `pip`
113 check the version of pip with::
114
115 $ pip --version
116
117 You will need to update pip to the version 9.0.1 or greater. If you are not using
118 pip, please inquiry with the maintainers of the package for your package
119 manager.
120
121 For more information see one of our blog posts:
122
123 https://blog.jupyter.org/release-of-ipython-5-0-8ce60b8d2e8e
124
125 As well as the following Pull-Request for discussion:
126
127 https://github.com/ipython/ipython/pull/9900
128
129 This error does also occur if you are invoking ``setup.py`` directly – which you
130 should not – or are using ``easy_install`` If this is the case, use ``pip
131 install .`` instead of ``setup.py install`` , and ``pip install -e .`` instead
132 of ``setup.py develop`` If you are depending on IPython as a dependency you may
133 also want to have a conditional dependency on IPython depending on the Python
134 version::
135
136 install_req = ['ipython']
137 if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv:
138 install_req.remove('ipython')
139 install_req.append('ipython<6')
140
141 setup(
142 ...
143 install_requires=install_req
144 )
145
146 Alternatives to IPython
147 =======================
148
149 IPython may not be to your taste; if that's the case there might be similar
150 project that you might want to use:
151
152 - The classic Python REPL.
153 - `bpython <https://bpython-interpreter.org/>`_
154 - `mypython <https://www.asmeurer.com/mypython/>`_
155 - `ptpython and ptipython <https://pypi.org/project/ptpython/>`_
156 - `Xonsh <https://xon.sh/>`_
157
158 Ignoring commits with git blame.ignoreRevsFile
159 ==============================================
160
161 As of git 2.23, it is possible to make formatting changes without breaking
162 ``git blame``. See the `git documentation
163 <https://git-scm.com/docs/git-config#Documentation/git-config.txt-blameignoreRevsFile>`_
164 for more details.
165
166 To use this feature you must:
167
168 - Install git >= 2.23
169 - Configure your local git repo by running:
170 - POSIX: ``tools\configure-git-blame-ignore-revs.sh``
171 - Windows: ``tools\configure-git-blame-ignore-revs.bat``
172
[end of README.rst]
[start of IPython/core/magics/config.py]
1 """Implementation of configuration-related magic functions.
2 """
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012 The IPython Development Team.
5 #
6 # Distributed under the terms of the Modified BSD License.
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
10
11 #-----------------------------------------------------------------------------
12 # Imports
13 #-----------------------------------------------------------------------------
14
15 # Stdlib
16 import re
17
18 # Our own packages
19 from IPython.core.error import UsageError
20 from IPython.core.magic import Magics, magics_class, line_magic
21 from logging import error
22
23 #-----------------------------------------------------------------------------
24 # Magic implementation classes
25 #-----------------------------------------------------------------------------
26
27 reg = re.compile(r'^\w+\.\w+$')
28 @magics_class
29 class ConfigMagics(Magics):
30
31 def __init__(self, shell):
32 super(ConfigMagics, self).__init__(shell)
33 self.configurables = []
34
35 @line_magic
36 def config(self, s):
37 """configure IPython
38
39 %config Class[.trait=value]
40
41 This magic exposes most of the IPython config system. Any
42 Configurable class should be able to be configured with the simple
43 line::
44
45 %config Class.trait=value
46
47 Where `value` will be resolved in the user's namespace, if it is an
48 expression or variable name.
49
50 Examples
51 --------
52
53 To see what classes are available for config, pass no arguments::
54
55 In [1]: %config
56 Available objects for config:
57 AliasManager
58 DisplayFormatter
59 HistoryManager
60 IPCompleter
61 LoggingMagics
62 MagicsManager
63 OSMagics
64 PrefilterManager
65 ScriptMagics
66 TerminalInteractiveShell
67
68 To view what is configurable on a given class, just pass the class
69 name::
70
71 In [2]: %config IPCompleter
72 IPCompleter(Completer) options
73 ----------------------------
74 IPCompleter.backslash_combining_completions=<Bool>
75 Enable unicode completions, e.g. \\alpha<tab> . Includes completion of latex
76 commands, unicode names, and expanding unicode characters back to latex
77 commands.
78 Current: True
79 IPCompleter.debug=<Bool>
80 Enable debug for the Completer. Mostly print extra information for
81 experimental jedi integration.
82 Current: False
83 IPCompleter.greedy=<Bool>
84 Activate greedy completion
85 PENDING DEPRECATION. this is now mostly taken care of with Jedi.
86 This will enable completion on elements of lists, results of function calls, etc.,
87 but can be unsafe because the code is actually evaluated on TAB.
88 Current: False
89 IPCompleter.jedi_compute_type_timeout=<Int>
90 Experimental: restrict time (in milliseconds) during which Jedi can compute types.
91 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
92 performance by preventing jedi to build its cache.
93 Current: 400
94 IPCompleter.limit_to__all__=<Bool>
95 DEPRECATED as of version 5.0.
96 Instruct the completer to use __all__ for the completion
97 Specifically, when completing on ``object.<tab>``.
98 When True: only those names in obj.__all__ will be included.
99 When False [default]: the __all__ attribute is ignored
100 Current: False
101 IPCompleter.merge_completions=<Bool>
102 Whether to merge completion results into a single list
103 If False, only the completion results from the first non-empty
104 completer will be returned.
105 Current: True
106 IPCompleter.omit__names=<Enum>
107 Instruct the completer to omit private method names
108 Specifically, when completing on ``object.<tab>``.
109 When 2 [default]: all names that start with '_' will be excluded.
110 When 1: all 'magic' names (``__foo__``) will be excluded.
111 When 0: nothing will be excluded.
112 Choices: any of [0, 1, 2]
113 Current: 2
114 IPCompleter.profile_completions=<Bool>
115 If True, emit profiling data for completion subsystem using cProfile.
116 Current: False
117 IPCompleter.profiler_output_dir=<Unicode>
118 Template for path at which to output profile data for completions.
119 Current: '.completion_profiles'
120 IPCompleter.use_jedi=<Bool>
121 Experimental: Use Jedi to generate autocompletions. Default to True if jedi
122 is installed.
123 Current: True
124
125 but the real use is in setting values::
126
127 In [3]: %config IPCompleter.greedy = True
128
129 and these values are read from the user_ns if they are variables::
130
131 In [4]: feeling_greedy=False
132
133 In [5]: %config IPCompleter.greedy = feeling_greedy
134
135 """
136 from traitlets.config.loader import Config
137 # some IPython objects are Configurable, but do not yet have
138 # any configurable traits. Exclude them from the effects of
139 # this magic, as their presence is just noise:
140 configurables = sorted(set([ c for c in self.shell.configurables
141 if c.__class__.class_traits(config=True)
142 ]), key=lambda x: x.__class__.__name__)
143 classnames = [ c.__class__.__name__ for c in configurables ]
144
145 line = s.strip()
146 if not line:
147 # print available configurable names
148 print("Available objects for config:")
149 for name in classnames:
150 print(" ", name)
151 return
152 elif line in classnames:
153 # `%config TerminalInteractiveShell` will print trait info for
154 # TerminalInteractiveShell
155 c = configurables[classnames.index(line)]
156 cls = c.__class__
157 help = cls.class_get_help(c)
158 # strip leading '--' from cl-args:
159 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
160 print(help)
161 return
162 elif reg.match(line):
163 cls, attr = line.split('.')
164 return getattr(configurables[classnames.index(cls)],attr)
165 elif '=' not in line:
166 msg = "Invalid config statement: %r, "\
167 "should be `Class.trait = value`."
168
169 ll = line.lower()
170 for classname in classnames:
171 if ll == classname.lower():
172 msg = msg + '\nDid you mean %s (note the case)?' % classname
173 break
174
175 raise UsageError( msg % line)
176
177 # otherwise, assume we are setting configurables.
178 # leave quotes on args when splitting, because we want
179 # unquoted args to eval in user_ns
180 cfg = Config()
181 exec("cfg."+line, self.shell.user_ns, locals())
182
183 for configurable in configurables:
184 try:
185 configurable.update_config(cfg)
186 except Exception as e:
187 error(e)
188
[end of IPython/core/magics/config.py]
[start of IPython/core/magics/osm.py]
1 """Implementation of magic functions for interaction with the OS.
2
3 Note: this module is named 'osm' instead of 'os' to avoid a collision with the
4 builtin.
5 """
6 # Copyright (c) IPython Development Team.
7 # Distributed under the terms of the Modified BSD License.
8
9 import io
10 import os
11 import pathlib
12 import re
13 import sys
14 from pprint import pformat
15
16 from IPython.core import magic_arguments
17 from IPython.core import oinspect
18 from IPython.core import page
19 from IPython.core.alias import AliasError, Alias
20 from IPython.core.error import UsageError
21 from IPython.core.magic import (
22 Magics, compress_dhist, magics_class, line_magic, cell_magic, line_cell_magic
23 )
24 from IPython.testing.skipdoctest import skip_doctest
25 from IPython.utils.openpy import source_to_unicode
26 from IPython.utils.process import abbrev_cwd
27 from IPython.utils.terminal import set_term_title
28 from traitlets import Bool
29 from warnings import warn
30
31
32 @magics_class
33 class OSMagics(Magics):
34 """Magics to interact with the underlying OS (shell-type functionality).
35 """
36
37 cd_force_quiet = Bool(False,
38 help="Force %cd magic to be quiet even if -q is not passed."
39 ).tag(config=True)
40
41 def __init__(self, shell=None, **kwargs):
42
43 # Now define isexec in a cross platform manner.
44 self.is_posix = False
45 self.execre = None
46 if os.name == 'posix':
47 self.is_posix = True
48 else:
49 try:
50 winext = os.environ['pathext'].replace(';','|').replace('.','')
51 except KeyError:
52 winext = 'exe|com|bat|py'
53 try:
54 self.execre = re.compile(r'(.*)\.(%s)$' % winext,re.IGNORECASE)
55 except re.error:
56 warn("Seems like your pathext environmental "
57 "variable is malformed. Please check it to "
58 "enable a proper handle of file extensions "
59 "managed for your system")
60 winext = 'exe|com|bat|py'
61 self.execre = re.compile(r'(.*)\.(%s)$' % winext,re.IGNORECASE)
62
63 # call up the chain
64 super().__init__(shell=shell, **kwargs)
65
66
67 def _isexec_POSIX(self, file):
68 """
69 Test for executable on a POSIX system
70 """
71 if os.access(file.path, os.X_OK):
72 # will fail on maxOS if access is not X_OK
73 return file.is_file()
74 return False
75
76
77
78 def _isexec_WIN(self, file):
79 """
80 Test for executable file on non POSIX system
81 """
82 return file.is_file() and self.execre.match(file.name) is not None
83
84 def isexec(self, file):
85 """
86 Test for executable file on non POSIX system
87 """
88 if self.is_posix:
89 return self._isexec_POSIX(file)
90 else:
91 return self._isexec_WIN(file)
92
93
94 @skip_doctest
95 @line_magic
96 def alias(self, parameter_s=''):
97 """Define an alias for a system command.
98
99 '%alias alias_name cmd' defines 'alias_name' as an alias for 'cmd'
100
101 Then, typing 'alias_name params' will execute the system command 'cmd
102 params' (from your underlying operating system).
103
104 Aliases have lower precedence than magic functions and Python normal
105 variables, so if 'foo' is both a Python variable and an alias, the
106 alias can not be executed until 'del foo' removes the Python variable.
107
108 You can use the %l specifier in an alias definition to represent the
109 whole line when the alias is called. For example::
110
111 In [2]: alias bracket echo "Input in brackets: <%l>"
112 In [3]: bracket hello world
113 Input in brackets: <hello world>
114
115 You can also define aliases with parameters using %s specifiers (one
116 per parameter)::
117
118 In [1]: alias parts echo first %s second %s
119 In [2]: %parts A B
120 first A second B
121 In [3]: %parts A
122 Incorrect number of arguments: 2 expected.
123 parts is an alias to: 'echo first %s second %s'
124
125 Note that %l and %s are mutually exclusive. You can only use one or
126 the other in your aliases.
127
128 Aliases expand Python variables just like system calls using ! or !!
129 do: all expressions prefixed with '$' get expanded. For details of
130 the semantic rules, see PEP-215:
131 https://peps.python.org/pep-0215/. This is the library used by
132 IPython for variable expansion. If you want to access a true shell
133 variable, an extra $ is necessary to prevent its expansion by
134 IPython::
135
136 In [6]: alias show echo
137 In [7]: PATH='A Python string'
138 In [8]: show $PATH
139 A Python string
140 In [9]: show $$PATH
141 /usr/local/lf9560/bin:/usr/local/intel/compiler70/ia32/bin:...
142
143 You can use the alias facility to access all of $PATH. See the %rehashx
144 function, which automatically creates aliases for the contents of your
145 $PATH.
146
147 If called with no parameters, %alias prints the current alias table
148 for your system. For posix systems, the default aliases are 'cat',
149 'cp', 'mv', 'rm', 'rmdir', and 'mkdir', and other platform-specific
150 aliases are added. For windows-based systems, the default aliases are
151 'copy', 'ddir', 'echo', 'ls', 'ldir', 'mkdir', 'ren', and 'rmdir'.
152
153 You can see the definition of alias by adding a question mark in the
154 end::
155
156 In [1]: cat?
157 Repr: <alias cat for 'cat'>"""
158
159 par = parameter_s.strip()
160 if not par:
161 aliases = sorted(self.shell.alias_manager.aliases)
162 # stored = self.shell.db.get('stored_aliases', {} )
163 # for k, v in stored:
164 # atab.append(k, v[0])
165
166 print("Total number of aliases:", len(aliases))
167 sys.stdout.flush()
168 return aliases
169
170 # Now try to define a new one
171 try:
172 alias,cmd = par.split(None, 1)
173 except TypeError:
174 print(oinspect.getdoc(self.alias))
175 return
176
177 try:
178 self.shell.alias_manager.define_alias(alias, cmd)
179 except AliasError as e:
180 print(e)
181 # end magic_alias
182
183 @line_magic
184 def unalias(self, parameter_s=''):
185 """Remove an alias"""
186
187 aname = parameter_s.strip()
188 try:
189 self.shell.alias_manager.undefine_alias(aname)
190 except ValueError as e:
191 print(e)
192 return
193
194 stored = self.shell.db.get('stored_aliases', {} )
195 if aname in stored:
196 print("Removing %stored alias",aname)
197 del stored[aname]
198 self.shell.db['stored_aliases'] = stored
199
200 @line_magic
201 def rehashx(self, parameter_s=''):
202 """Update the alias table with all executable files in $PATH.
203
204 rehashx explicitly checks that every entry in $PATH is a file
205 with execute access (os.X_OK).
206
207 Under Windows, it checks executability as a match against a
208 '|'-separated string of extensions, stored in the IPython config
209 variable win_exec_ext. This defaults to 'exe|com|bat'.
210
211 This function also resets the root module cache of module completer,
212 used on slow filesystems.
213 """
214 from IPython.core.alias import InvalidAliasError
215
216 # for the benefit of module completer in ipy_completers.py
217 del self.shell.db['rootmodules_cache']
218
219 path = [os.path.abspath(os.path.expanduser(p)) for p in
220 os.environ.get('PATH','').split(os.pathsep)]
221
222 syscmdlist = []
223 savedir = os.getcwd()
224
225 # Now walk the paths looking for executables to alias.
226 try:
227 # write the whole loop for posix/Windows so we don't have an if in
228 # the innermost part
229 if self.is_posix:
230 for pdir in path:
231 try:
232 os.chdir(pdir)
233 except OSError:
234 continue
235
236 # for python 3.6+ rewrite to: with os.scandir(pdir) as dirlist:
237 dirlist = os.scandir(path=pdir)
238 for ff in dirlist:
239 if self.isexec(ff):
240 fname = ff.name
241 try:
242 # Removes dots from the name since ipython
243 # will assume names with dots to be python.
244 if not self.shell.alias_manager.is_alias(fname):
245 self.shell.alias_manager.define_alias(
246 fname.replace('.',''), fname)
247 except InvalidAliasError:
248 pass
249 else:
250 syscmdlist.append(fname)
251 else:
252 no_alias = Alias.blacklist
253 for pdir in path:
254 try:
255 os.chdir(pdir)
256 except OSError:
257 continue
258
259 # for python 3.6+ rewrite to: with os.scandir(pdir) as dirlist:
260 dirlist = os.scandir(pdir)
261 for ff in dirlist:
262 fname = ff.name
263 base, ext = os.path.splitext(fname)
264 if self.isexec(ff) and base.lower() not in no_alias:
265 if ext.lower() == '.exe':
266 fname = base
267 try:
268 # Removes dots from the name since ipython
269 # will assume names with dots to be python.
270 self.shell.alias_manager.define_alias(
271 base.lower().replace('.',''), fname)
272 except InvalidAliasError:
273 pass
274 syscmdlist.append(fname)
275
276 self.shell.db['syscmdlist'] = syscmdlist
277 finally:
278 os.chdir(savedir)
279
280 @skip_doctest
281 @line_magic
282 def pwd(self, parameter_s=''):
283 """Return the current working directory path.
284
285 Examples
286 --------
287 ::
288
289 In [9]: pwd
290 Out[9]: '/home/tsuser/sprint/ipython'
291 """
292 try:
293 return os.getcwd()
294 except FileNotFoundError as e:
295 raise UsageError("CWD no longer exists - please use %cd to change directory.") from e
296
297 @skip_doctest
298 @line_magic
299 def cd(self, parameter_s=''):
300 """Change the current working directory.
301
302 This command automatically maintains an internal list of directories
303 you visit during your IPython session, in the variable ``_dh``. The
304 command :magic:`%dhist` shows this history nicely formatted. You can
305 also do ``cd -<tab>`` to see directory history conveniently.
306 Usage:
307
308 - ``cd 'dir'``: changes to directory 'dir'.
309 - ``cd -``: changes to the last visited directory.
310 - ``cd -<n>``: changes to the n-th directory in the directory history.
311 - ``cd --foo``: change to directory that matches 'foo' in history
312 - ``cd -b <bookmark_name>``: jump to a bookmark set by %bookmark
313 - Hitting a tab key after ``cd -b`` allows you to tab-complete
314 bookmark names.
315
316 .. note::
317 ``cd <bookmark_name>`` is enough if there is no directory
318 ``<bookmark_name>``, but a bookmark with the name exists.
319
320 Options:
321
322 -q Be quiet. Do not print the working directory after the
323 cd command is executed. By default IPython's cd
324 command does print this directory, since the default
325 prompts do not display path information.
326
327 .. note::
328 Note that ``!cd`` doesn't work for this purpose because the shell
329 where ``!command`` runs is immediately discarded after executing
330 'command'.
331
332 Examples
333 --------
334 ::
335
336 In [10]: cd parent/child
337 /home/tsuser/parent/child
338 """
339
340 try:
341 oldcwd = os.getcwd()
342 except FileNotFoundError:
343 # Happens if the CWD has been deleted.
344 oldcwd = None
345
346 numcd = re.match(r'(-)(\d+)$',parameter_s)
347 # jump in directory history by number
348 if numcd:
349 nn = int(numcd.group(2))
350 try:
351 ps = self.shell.user_ns['_dh'][nn]
352 except IndexError:
353 print('The requested directory does not exist in history.')
354 return
355 else:
356 opts = {}
357 elif parameter_s.startswith('--'):
358 ps = None
359 fallback = None
360 pat = parameter_s[2:]
361 dh = self.shell.user_ns['_dh']
362 # first search only by basename (last component)
363 for ent in reversed(dh):
364 if pat in os.path.basename(ent) and os.path.isdir(ent):
365 ps = ent
366 break
367
368 if fallback is None and pat in ent and os.path.isdir(ent):
369 fallback = ent
370
371 # if we have no last part match, pick the first full path match
372 if ps is None:
373 ps = fallback
374
375 if ps is None:
376 print("No matching entry in directory history")
377 return
378 else:
379 opts = {}
380
381
382 else:
383 opts, ps = self.parse_options(parameter_s, 'qb', mode='string')
384 # jump to previous
385 if ps == '-':
386 try:
387 ps = self.shell.user_ns['_dh'][-2]
388 except IndexError as e:
389 raise UsageError('%cd -: No previous directory to change to.') from e
390 # jump to bookmark if needed
391 else:
392 if not os.path.isdir(ps) or 'b' in opts:
393 bkms = self.shell.db.get('bookmarks', {})
394
395 if ps in bkms:
396 target = bkms[ps]
397 print('(bookmark:%s) -> %s' % (ps, target))
398 ps = target
399 else:
400 if 'b' in opts:
401 raise UsageError("Bookmark '%s' not found. "
402 "Use '%%bookmark -l' to see your bookmarks." % ps)
403
404 # at this point ps should point to the target dir
405 if ps:
406 try:
407 os.chdir(os.path.expanduser(ps))
408 if hasattr(self.shell, 'term_title') and self.shell.term_title:
409 set_term_title(self.shell.term_title_format.format(cwd=abbrev_cwd()))
410 except OSError:
411 print(sys.exc_info()[1])
412 else:
413 cwd = pathlib.Path.cwd()
414 dhist = self.shell.user_ns['_dh']
415 if oldcwd != cwd:
416 dhist.append(cwd)
417 self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
418
419 else:
420 os.chdir(self.shell.home_dir)
421 if hasattr(self.shell, 'term_title') and self.shell.term_title:
422 set_term_title(self.shell.term_title_format.format(cwd="~"))
423 cwd = pathlib.Path.cwd()
424 dhist = self.shell.user_ns['_dh']
425
426 if oldcwd != cwd:
427 dhist.append(cwd)
428 self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
429 if not 'q' in opts and not self.cd_force_quiet and self.shell.user_ns['_dh']:
430 print(self.shell.user_ns['_dh'][-1])
431
432 @line_magic
433 def env(self, parameter_s=''):
434 """Get, set, or list environment variables.
435
436 Usage:\\
437
438 :``%env``: lists all environment variables/values
439 :``%env var``: get value for var
440 :``%env var val``: set value for var
441 :``%env var=val``: set value for var
442 :``%env var=$val``: set value for var, using python expansion if possible
443 """
444 if parameter_s.strip():
445 split = '=' if '=' in parameter_s else ' '
446 bits = parameter_s.split(split)
447 if len(bits) == 1:
448 key = parameter_s.strip()
449 if key in os.environ:
450 return os.environ[key]
451 else:
452 err = "Environment does not have key: {0}".format(key)
453 raise UsageError(err)
454 if len(bits) > 1:
455 return self.set_env(parameter_s)
456 env = dict(os.environ)
457 # hide likely secrets when printing the whole environment
458 for key in list(env):
459 if any(s in key.lower() for s in ('key', 'token', 'secret')):
460 env[key] = '<hidden>'
461
462 return env
463
464 @line_magic
465 def set_env(self, parameter_s):
466 """Set environment variables. Assumptions are that either "val" is a
467 name in the user namespace, or val is something that evaluates to a
468 string.
469
470 Usage:\\
471 %set_env var val: set value for var
472 %set_env var=val: set value for var
473 %set_env var=$val: set value for var, using python expansion if possible
474 """
475 split = '=' if '=' in parameter_s else ' '
476 bits = parameter_s.split(split, 1)
477 if not parameter_s.strip() or len(bits)<2:
478 raise UsageError("usage is 'set_env var=val'")
479 var = bits[0].strip()
480 val = bits[1].strip()
481 if re.match(r'.*\s.*', var):
482 # an environment variable with whitespace is almost certainly
483 # not what the user intended. what's more likely is the wrong
484 # split was chosen, ie for "set_env cmd_args A=B", we chose
485 # '=' for the split and should have chosen ' '. to get around
486 # this, users should just assign directly to os.environ or use
487 # standard magic {var} expansion.
488 err = "refusing to set env var with whitespace: '{0}'"
489 err = err.format(val)
490 raise UsageError(err)
491 os.environ[var] = val
492 print('env: {0}={1}'.format(var,val))
493
494 @line_magic
495 def pushd(self, parameter_s=''):
496 """Place the current dir on stack and change directory.
497
498 Usage:\\
499 %pushd ['dirname']
500 """
501
502 dir_s = self.shell.dir_stack
503 tgt = os.path.expanduser(parameter_s)
504 cwd = os.getcwd().replace(self.shell.home_dir,'~')
505 if tgt:
506 self.cd(parameter_s)
507 dir_s.insert(0,cwd)
508 return self.shell.run_line_magic('dirs', '')
509
510 @line_magic
511 def popd(self, parameter_s=''):
512 """Change to directory popped off the top of the stack.
513 """
514 if not self.shell.dir_stack:
515 raise UsageError("%popd on empty stack")
516 top = self.shell.dir_stack.pop(0)
517 self.cd(top)
518 print("popd ->",top)
519
520 @line_magic
521 def dirs(self, parameter_s=''):
522 """Return the current directory stack."""
523
524 return self.shell.dir_stack
525
526 @line_magic
527 def dhist(self, parameter_s=''):
528 """Print your history of visited directories.
529
530 %dhist -> print full history\\
531 %dhist n -> print last n entries only\\
532 %dhist n1 n2 -> print entries between n1 and n2 (n2 not included)\\
533
534 This history is automatically maintained by the %cd command, and
535 always available as the global list variable _dh. You can use %cd -<n>
536 to go to directory number <n>.
537
538 Note that most of time, you should view directory history by entering
539 cd -<TAB>.
540
541 """
542
543 dh = self.shell.user_ns['_dh']
544 if parameter_s:
545 try:
546 args = map(int,parameter_s.split())
547 except:
548 self.arg_err(self.dhist)
549 return
550 if len(args) == 1:
551 ini,fin = max(len(dh)-(args[0]),0),len(dh)
552 elif len(args) == 2:
553 ini,fin = args
554 fin = min(fin, len(dh))
555 else:
556 self.arg_err(self.dhist)
557 return
558 else:
559 ini,fin = 0,len(dh)
560 print('Directory history (kept in _dh)')
561 for i in range(ini, fin):
562 print("%d: %s" % (i, dh[i]))
563
564 @skip_doctest
565 @line_magic
566 def sc(self, parameter_s=''):
567 """Shell capture - run shell command and capture output (DEPRECATED use !).
568
569 DEPRECATED. Suboptimal, retained for backwards compatibility.
570
571 You should use the form 'var = !command' instead. Example:
572
573 "%sc -l myfiles = ls ~" should now be written as
574
575 "myfiles = !ls ~"
576
577 myfiles.s, myfiles.l and myfiles.n still apply as documented
578 below.
579
580 --
581 %sc [options] varname=command
582
583 IPython will run the given command using commands.getoutput(), and
584 will then update the user's interactive namespace with a variable
585 called varname, containing the value of the call. Your command can
586 contain shell wildcards, pipes, etc.
587
588 The '=' sign in the syntax is mandatory, and the variable name you
589 supply must follow Python's standard conventions for valid names.
590
591 (A special format without variable name exists for internal use)
592
593 Options:
594
595 -l: list output. Split the output on newlines into a list before
596 assigning it to the given variable. By default the output is stored
597 as a single string.
598
599 -v: verbose. Print the contents of the variable.
600
601 In most cases you should not need to split as a list, because the
602 returned value is a special type of string which can automatically
603 provide its contents either as a list (split on newlines) or as a
604 space-separated string. These are convenient, respectively, either
605 for sequential processing or to be passed to a shell command.
606
607 For example::
608
609 # Capture into variable a
610 In [1]: sc a=ls *py
611
612 # a is a string with embedded newlines
613 In [2]: a
614 Out[2]: 'setup.py\\nwin32_manual_post_install.py'
615
616 # which can be seen as a list:
617 In [3]: a.l
618 Out[3]: ['setup.py', 'win32_manual_post_install.py']
619
620 # or as a whitespace-separated string:
621 In [4]: a.s
622 Out[4]: 'setup.py win32_manual_post_install.py'
623
624 # a.s is useful to pass as a single command line:
625 In [5]: !wc -l $a.s
626 146 setup.py
627 130 win32_manual_post_install.py
628 276 total
629
630 # while the list form is useful to loop over:
631 In [6]: for f in a.l:
632 ...: !wc -l $f
633 ...:
634 146 setup.py
635 130 win32_manual_post_install.py
636
637 Similarly, the lists returned by the -l option are also special, in
638 the sense that you can equally invoke the .s attribute on them to
639 automatically get a whitespace-separated string from their contents::
640
641 In [7]: sc -l b=ls *py
642
643 In [8]: b
644 Out[8]: ['setup.py', 'win32_manual_post_install.py']
645
646 In [9]: b.s
647 Out[9]: 'setup.py win32_manual_post_install.py'
648
649 In summary, both the lists and strings used for output capture have
650 the following special attributes::
651
652 .l (or .list) : value as list.
653 .n (or .nlstr): value as newline-separated string.
654 .s (or .spstr): value as space-separated string.
655 """
656
657 opts,args = self.parse_options(parameter_s, 'lv')
658 # Try to get a variable name and command to run
659 try:
660 # the variable name must be obtained from the parse_options
661 # output, which uses shlex.split to strip options out.
662 var,_ = args.split('=', 1)
663 var = var.strip()
664 # But the command has to be extracted from the original input
665 # parameter_s, not on what parse_options returns, to avoid the
666 # quote stripping which shlex.split performs on it.
667 _,cmd = parameter_s.split('=', 1)
668 except ValueError:
669 var,cmd = '',''
670 # If all looks ok, proceed
671 split = 'l' in opts
672 out = self.shell.getoutput(cmd, split=split)
673 if 'v' in opts:
674 print('%s ==\n%s' % (var, pformat(out)))
675 if var:
676 self.shell.user_ns.update({var:out})
677 else:
678 return out
679
680 @line_cell_magic
681 def sx(self, line='', cell=None):
682 """Shell execute - run shell command and capture output (!! is short-hand).
683
684 %sx command
685
686 IPython will run the given command using commands.getoutput(), and
687 return the result formatted as a list (split on '\\n'). Since the
688 output is _returned_, it will be stored in ipython's regular output
689 cache Out[N] and in the '_N' automatic variables.
690
691 Notes:
692
693 1) If an input line begins with '!!', then %sx is automatically
694 invoked. That is, while::
695
696 !ls
697
698 causes ipython to simply issue system('ls'), typing::
699
700 !!ls
701
702 is a shorthand equivalent to::
703
704 %sx ls
705
706 2) %sx differs from %sc in that %sx automatically splits into a list,
707 like '%sc -l'. The reason for this is to make it as easy as possible
708 to process line-oriented shell output via further python commands.
709 %sc is meant to provide much finer control, but requires more
710 typing.
711
712 3) Just like %sc -l, this is a list with special attributes:
713 ::
714
715 .l (or .list) : value as list.
716 .n (or .nlstr): value as newline-separated string.
717 .s (or .spstr): value as whitespace-separated string.
718
719 This is very useful when trying to use such lists as arguments to
720 system commands."""
721
722 if cell is None:
723 # line magic
724 return self.shell.getoutput(line)
725 else:
726 opts,args = self.parse_options(line, '', 'out=')
727 output = self.shell.getoutput(cell)
728 out_name = opts.get('out', opts.get('o'))
729 if out_name:
730 self.shell.user_ns[out_name] = output
731 else:
732 return output
733
734 system = line_cell_magic('system')(sx)
735 bang = cell_magic('!')(sx)
736
737 @line_magic
738 def bookmark(self, parameter_s=''):
739 """Manage IPython's bookmark system.
740
741 %bookmark <name> - set bookmark to current dir
742 %bookmark <name> <dir> - set bookmark to <dir>
743 %bookmark -l - list all bookmarks
744 %bookmark -d <name> - remove bookmark
745 %bookmark -r - remove all bookmarks
746
747 You can later on access a bookmarked folder with::
748
749 %cd -b <name>
750
751 or simply '%cd <name>' if there is no directory called <name> AND
752 there is such a bookmark defined.
753
754 Your bookmarks persist through IPython sessions, but they are
755 associated with each profile."""
756
757 opts,args = self.parse_options(parameter_s,'drl',mode='list')
758 if len(args) > 2:
759 raise UsageError("%bookmark: too many arguments")
760
761 bkms = self.shell.db.get('bookmarks',{})
762
763 if 'd' in opts:
764 try:
765 todel = args[0]
766 except IndexError as e:
767 raise UsageError(
768 "%bookmark -d: must provide a bookmark to delete") from e
769 else:
770 try:
771 del bkms[todel]
772 except KeyError as e:
773 raise UsageError(
774 "%%bookmark -d: Can't delete bookmark '%s'" % todel) from e
775
776 elif 'r' in opts:
777 bkms = {}
778 elif 'l' in opts:
779 bks = sorted(bkms)
780 if bks:
781 size = max(map(len, bks))
782 else:
783 size = 0
784 fmt = '%-'+str(size)+'s -> %s'
785 print('Current bookmarks:')
786 for bk in bks:
787 print(fmt % (bk, bkms[bk]))
788 else:
789 if not args:
790 raise UsageError("%bookmark: You must specify the bookmark name")
791 elif len(args)==1:
792 bkms[args[0]] = os.getcwd()
793 elif len(args)==2:
794 bkms[args[0]] = args[1]
795 self.shell.db['bookmarks'] = bkms
796
797 @line_magic
798 def pycat(self, parameter_s=''):
799 """Show a syntax-highlighted file through a pager.
800
801 This magic is similar to the cat utility, but it will assume the file
802 to be Python source and will show it with syntax highlighting.
803
804 This magic command can either take a local filename, an url,
805 an history range (see %history) or a macro as argument.
806
807 If no parameter is given, prints out history of current session up to
808 this point. ::
809
810 %pycat myscript.py
811 %pycat 7-27
812 %pycat myMacro
813 %pycat http://www.example.com/myscript.py
814 """
815 try:
816 cont = self.shell.find_user_code(parameter_s, skip_encoding_cookie=False)
817 except (ValueError, IOError):
818 print("Error: no such file, variable, URL, history range or macro")
819 return
820
821 page.page(self.shell.pycolorize(source_to_unicode(cont)))
822
823 @magic_arguments.magic_arguments()
824 @magic_arguments.argument(
825 '-a', '--append', action='store_true', default=False,
826 help='Append contents of the cell to an existing file. '
827 'The file will be created if it does not exist.'
828 )
829 @magic_arguments.argument(
830 'filename', type=str,
831 help='file to write'
832 )
833 @cell_magic
834 def writefile(self, line, cell):
835 """Write the contents of the cell to a file.
836
837 The file will be overwritten unless the -a (--append) flag is specified.
838 """
839 args = magic_arguments.parse_argstring(self.writefile, line)
840 if re.match(r'^(\'.*\')|(".*")$', args.filename):
841 filename = os.path.expanduser(args.filename[1:-1])
842 else:
843 filename = os.path.expanduser(args.filename)
844
845 if os.path.exists(filename):
846 if args.append:
847 print("Appending to %s" % filename)
848 else:
849 print("Overwriting %s" % filename)
850 else:
851 print("Writing %s" % filename)
852
853 mode = 'a' if args.append else 'w'
854 with io.open(filename, mode, encoding='utf-8') as f:
855 f.write(cell)
856
[end of IPython/core/magics/osm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
ae0dfcd9d4fb221f0278f07b022f9837b0e51e67
|
Add completion type (_jupyter_types_experimental) for dictionary keys, file paths, etc
I would very much like the IPython to return completion type for all completions, not just for the completions from Jedi. This would not only make it possible to display the type ot the user in frontends, but also allow users to create custom rules (e.g. show paths first or show paths last). I am happy to work on a PR and maintain this part of the codebase afterwards. Would you consider a refactor of the current completions to allow for returning type in scope for IPython current plans?
Currently completions are being passed around in three forms:
- the new (unstable) [Completion](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L355) class mostly used downstream of Jedi
- the Jedi Completion class which is an implementation detail of Jedi
- the [match tuples](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L2149-L2154) as generated from the results of _matchers_
Interestingly the match tuple contains `origin` which could _almost_ be used as a type for the completions (except that this is an identifier for debug purposes and not a user-friendly name). I would propose that in a similar fashion to how the [origin is being discovered from the name of the matcher method](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L2119-L2131), each of the non-jedi [completion matchers](https://github.com/ipython/ipython/blob/167f683f56a900200f5bc13227639c2ebdfb1925/IPython/core/completer.py#L1180-L1198) would get a "type" property. This could be set by a decorator, like so:
```python
def matcher(type):
def decorate(func):
func.completion_type = type
return func
class IPCompleter(Completer):
# [...]
@matcher(type="magic")
def magic_matches(self, text:str):
"""Match magics"""
```
Then the match could get formalized as a named tuple (so that the benefits of the current lightweight approach are kept but an additional benefit of readability and ability to annotate types is obtained):
```python
from typing import NamedTuple
# could be also named CompletionMatch or SimpleCompletion
class SimpleMatch(NamedTuple):
text: str
origin: str
type: str
```
I am not sure what should happen next though. The logic appears quite complex to me. I wonder if some of it could be also refactored to simplify `_CompleteResult` and what happens with it. I am sure that more than one good solution exists. I wonder if you have any suggestions or thoughts.
Improving dict autocomplete
We have found the dict autocomplete to show too many option at times. As an example, given this:
```
$ ls
a-dir-name a-file
$ ipython
```
The following is appears:

Here you can see a number of things are suggested that are not helpful including:
1. Files
2. Magics
This is made much worse by something like [pyflyby](https://github.com/deshaw/pyflyby) where all possible imports that start with `a` are also listed.
Ideas:
1. Only show valid keys of the dict (when possible)
2. Sort to show dict keys first and all other options after
Provisional Completer API Thoughts
I've been using the provisional completion API for an Emacs mode targeting inferior-iPython. The docs mention interest in feedback on the API, so I thought to open an issue to discuss. Some thoughts and suggestions:
- The current provisional API was put in place 5 years ago. Are there any plans to change it in the near term? If not, perhaps the provisional context requirement should be dropped, and the docs should no longer indicate the functionality as experimental. After all, `[Tab]` has been providing these completions for 5 years, so they are pretty well vetted!
- It's very useful that the completion API provides the bounds of the string which it is completing. What would also be very helpful is to get the bounds of the full expression IPython + jedi are evaluating for the completion itself. For example, in `x.y[0].z[Tab]` IPCompleter will indicate `z` as the completion text to alter, but presumably IPython knows that it's evaluating an entire expression `x.y[0].z___`. This would be very useful for users of the API that want to query functions for docstrings, etc. during completion. (A small sketch of reading the currently exposed bounds follows below.)
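For reference, a small sketch of reading the existing bounds from the provisional API inside a running IPython session; the `point` variable is just a stand-in, and note that the full-expression bounds asked for above are not part of this output:
```python
from IPython.core.completer import provisionalcompleter

ip = get_ipython()              # available inside a running IPython session
ip.user_ns["point"] = 3 + 4j    # something concrete to complete on

code = "point.im"
with provisionalcompleter():    # opt in to the provisional Completion API
    completions = list(ip.Completer.completions(code, len(code)))

for c in completions:
    # c.start / c.end bound only the trailing fragment ("im" here), which is
    # exactly the limitation discussed in the bullet above
    print((c.start, c.end), c.text, c.type)
```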
Thanks for iPython.
|
An alternative approach would be to modify the matcher functions to return either a string representing the text (as today) or a (named) tuple of (text, origin, type). This alternative has the benefit of allowing a single matcher function to return completions of multiple types, but this is not what most of the built-in matchers were designed for.
A notable exception is the `python_matches` aggregator, which returns results from:
- `attr_matches` (which can return completions of type method, property, module, class, instance),
- `global_matches` (which can return anything).
Downstream, these appear to be indirectly modified by `pyflyby`. Therefore, allowing matchers to return either a sequence of strings or a sequence of SimpleCompletion objects might be a better solution.
From the commit history it appears that the intent was, on one hand, to drop python_matches (see the description of https://github.com/ipython/ipython/commit/3ff1be2ea8ef180a6f17a6a03a3f8452303b9abe) and, on the other, to adopt this alternative approach (though dicts were envisioned as the vehicle rather than named tuples):
https://github.com/ipython/ipython/blob/7f51a0332edd0c675c2d314ca3e62df7ef041281/IPython/core/completer.py#L2139-L2145
This is similar to a previous request to only show magics after `%` (https://github.com/ipython/ipython/issues/12959) which was solved by a simple check for `%` prefix in https://github.com/ipython/ipython/pull/13483. A similar issue about magics showing up in import completions was raised in https://github.com/ipython/ipython/issues/12987.
I was recently thinking about adding more capabilities to matchers in backward compatible way (https://github.com/ipython/ipython/issues/12820).
At a high level, the solution can be implemented by (expanding upon the ideas proposed above):
1. adding a method to check whether all other matchers should be suppressed. If two or more matchers say "suppress all other matchers", then we could take the union of completions from those.
2. adding `priority`/`rank` to each matcher or each completion item to be used for sorting (or `sort_text` if following LSP; this appears sub-optimal though and we can always convert to `sortText` downstream from a numeric rank)
At the implementation level, approach (A):
(1) Suppression can be implemented as a matcher-level function. The matcher would start to resemble a proper class with methods (not just a function). We could introduce this while maintaining backward compatibility by adding an optional `should_suppress_others` method via a decorator:
<details>
```python
def matcher(*, should_suppress_others: Callable):
    def wrapper(func):
        func.should_suppress_others = should_suppress_others
        return func
    return wrapper


class IPCompleter:
    # ...

    @matcher(should_suppress_others=lambda text: does_it_look_like_I_am_in_a_dict(text))
    def dict_key_matches(self):
        ...
```
</details>
The new `Matcher` could be typed as follows:
```python
class LegacyMatcher(Protocol):
    __call__: Callable[[str], list[str]]


class MatcherSupressionProtocol(Protocol):
    should_suppress_others: Callable[[str], bool]


Matcher = Union[LegacyMatcher, MatcherSupressionProtocol]
```
(2) Currently sorting is governed by hard-coded [`completions_sorting_key`](https://github.com/ipython/ipython/blob/7f51a0332edd0c675c2d314ca3e62df7ef041281/IPython/core/completer.py#L304-L333); it would be difficult to recognise a dictionary key using this existing approach alone.
Matcher-level priority/rank could be a solution if it took the request text in and returned a number depending on the situation (here answering "how likely it seems that we are in a dict"):
```python
class MatcherPrioritySystemDynamic(Protocol):
    priority: Callable[[str], float]


Matcher = Union[LegacyMatcher, MatcherSupressionProtocol, MatcherPrioritySystemDynamic]
```
Completion-level priority/rank would enable more granular control for matcher developers and could expand the API proposed in my [earlier comment](https://github.com/ipython/ipython/issues/12820#issuecomment-1222788271):
```python
class SimpleCompletion(NamedTuple): # or TypedDict with NotRequired
    __slots__ = ('text', 'origin', 'type', 'priority')
    text: str
    origin: Optional[str]
    type: Optional[str]
    priority: Optional[float]


class LegacyMatcher(Protocol):
    __call__: Callable[[str], Union[list[str], list[SimpleCompletion]]]
```
However, what follows from my comment above is the observation that the information about priority and suppression is context-specific (both `matcher.should_suppress_others(text: str)` and `matcher.priority(text: str)` require a `text` argument). Maybe, instead of trying to gradually turn matchers into classes as suggested in approach (A), we should use:
**Approach (B)**: matchers are just simple functions, but (optionally) return an extensible structure with metadata. In this scenario matchers would comply with the following typing:
```python
class MatchingResult(TypedDict):
    matches: list[SimpleCompletion]
    suppress_others: NotRequired[bool]
    priority: NotRequired[float]


LegacyMatcher = Callable[[str], list[str]]
NewMatcher = Callable[[str], MatchingResult]  # TODO: find better code name
Matcher = Union[LegacyMatcher, NewMatcher]
```
> Using `TypedDict` + `NotRequired`, or `Protocol`, corresponds to an implementation which will require duck typing or key checking respectively, but will allow us to add more properties in the future without breaking compatibility.
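To make the compatibility story concrete, a minimal, self-contained sketch of how an aggregator might consume both the legacy and the proposed return shapes; all names are the hypothetical ones from this thread rather than the shipped API, and `total=False` stands in for `NotRequired` only so the sketch runs on older Python versions:
```python
from typing import Callable, List, NamedTuple, Optional, TypedDict, Union


class SimpleCompletion(NamedTuple):
    text: str
    origin: Optional[str] = None
    type: Optional[str] = None
    priority: Optional[float] = None


class _MatchingResultOptional(TypedDict, total=False):
    suppress_others: bool
    priority: float


class MatchingResult(_MatchingResultOptional):
    matches: List[SimpleCompletion]


LegacyMatcher = Callable[[str], List[str]]
NewMatcher = Callable[[str], MatchingResult]
Matcher = Union[LegacyMatcher, NewMatcher]


def run_matchers(matchers: List[Matcher], text: str) -> List[SimpleCompletion]:
    collected: List[SimpleCompletion] = []
    for matcher in matchers:
        raw = matcher(text)
        if isinstance(raw, dict):
            # new-style result: a dict, so key checking is enough
            matches = list(raw["matches"])
            if raw.get("suppress_others", False) and matches:
                # simplified suppression semantics: keep only this matcher's results
                return matches
        else:
            # legacy result: a plain list of strings
            matches = [
                SimpleCompletion(text=t, origin=getattr(matcher, "__qualname__", None))
                for t in raw
            ]
        collected.extend(matches)
    return collected
```
With this shape, a matcher that returns a bare list keeps working untouched, and opting into suppression or priorities is just a matter of returning the dict.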
|
2022-09-05T06:47:45Z
|
<patch>
diff --git a/IPython/core/completer.py b/IPython/core/completer.py
--- a/IPython/core/completer.py
+++ b/IPython/core/completer.py
@@ -100,6 +100,73 @@
Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.
+
+Matchers
+========
+
+All completions routines are implemented using unified *Matchers* API.
+The matchers API is provisional and subject to change without notice.
+
+The built-in matchers include:
+
+- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
+- :any:`IPCompleter.magic_matcher`: completions for magics,
+- :any:`IPCompleter.unicode_name_matcher`,
+ :any:`IPCompleter.fwd_unicode_matcher`
+ and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
+- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
+- :any:`IPCompleter.file_matcher`: paths to files and directories,
+- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
+- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
+- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
+- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
+ implementation in :any:`InteractiveShell` which uses IPython hooks system
+ (`complete_command`) with string dispatch (including regular expressions).
+ Differently to other matchers, ``custom_completer_matcher`` will not suppress
+ Jedi results to match behaviour in earlier IPython versions.
+
+Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
+
+Matcher API
+-----------
+
+Simplifying some details, the ``Matcher`` interface can be described as
+
+.. code-block::
+
+ MatcherAPIv1 = Callable[[str], list[str]]
+ MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
+
+ Matcher = MatcherAPIv1 | MatcherAPIv2
+
+The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
+and remains supported as a simplest way for generating completions. This is also
+currently the only API supported by the IPython hooks system `complete_command`.
+
+To distinguish between matcher versions ``matcher_api_version`` attribute is used.
+More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers,
+and requires a literal ``2`` for v2 Matchers.
+
+Once the API stabilises future versions may relax the requirement for specifying
+``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
+please do not rely on the presence of ``matcher_api_version`` for any purposes.
+
+Suppression of competing matchers
+---------------------------------
+
+By default results from all matchers are combined, in the order determined by
+their priority. Matchers can request to suppress results from subsequent
+matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
+
+When multiple matchers simultaneously request suppression, the results from
+the matcher with higher priority will be returned.
+
+Sometimes it is desirable to suppress most but not all other matchers;
+this can be achieved by adding a list of identifiers of matchers which
+should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
+
+The suppression behaviour is user-configurable via
+:any:`IPCompleter.suppress_competing_matchers`.
"""
@@ -109,7 +176,7 @@
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org
-
+from __future__ import annotations
import builtins as builtin_mod
import glob
import inspect
@@ -124,9 +191,26 @@
import uuid
import warnings
from contextlib import contextmanager
+from dataclasses import dataclass
+from functools import cached_property, partial
from importlib import import_module
from types import SimpleNamespace
-from typing import Iterable, Iterator, List, Tuple, Union, Any, Sequence, Dict, NamedTuple, Pattern, Optional
+from typing import (
+ Iterable,
+ Iterator,
+ List,
+ Tuple,
+ Union,
+ Any,
+ Sequence,
+ Dict,
+ NamedTuple,
+ Pattern,
+ Optional,
+ TYPE_CHECKING,
+ Set,
+ Literal,
+)
from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
@@ -134,10 +218,22 @@
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
+from IPython.utils.decorators import sphinx_options
from IPython.utils.dir2 import dir2, get_real_method
+from IPython.utils.docs import GENERATING_DOCUMENTATION
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
-from traitlets import Bool, Enum, Int, List as ListTrait, Unicode, default, observe
+from traitlets import (
+ Bool,
+ Enum,
+ Int,
+ List as ListTrait,
+ Unicode,
+ Dict as DictTrait,
+ Union as UnionTrait,
+ default,
+ observe,
+)
from traitlets.config.configurable import Configurable
import __main__
@@ -145,6 +241,7 @@
# skip module docstests
__skip_doctest__ = True
+
try:
import jedi
jedi.settings.case_insensitive_completion = False
@@ -153,7 +250,26 @@
JEDI_INSTALLED = True
except ImportError:
JEDI_INSTALLED = False
-#-----------------------------------------------------------------------------
+
+
+if TYPE_CHECKING or GENERATING_DOCUMENTATION:
+ from typing import cast
+ from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias
+else:
+
+ def cast(obj, type_):
+ """Workaround for `TypeError: MatcherAPIv2() takes no arguments`"""
+ return obj
+
+ # do not require on runtime
+ NotRequired = Tuple # requires Python >=3.11
+ TypedDict = Dict # by extension of `NotRequired` requires 3.11 too
+ Protocol = object # requires Python >=3.8
+ TypeAlias = Any # requires Python >=3.10
+if GENERATING_DOCUMENTATION:
+ from typing import TypedDict
+
+# -----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------
@@ -166,7 +282,7 @@
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
# Public API
-__all__ = ['Completer','IPCompleter']
+__all__ = ["Completer", "IPCompleter"]
if sys.platform == 'win32':
PROTECTABLES = ' '
@@ -177,6 +293,8 @@
# may have trouble processing.
MATCHES_LIMIT = 500
+# Completion type reported when no type can be inferred.
+_UNKNOWN_TYPE = "<unknown>"
class ProvisionalCompleterWarning(FutureWarning):
"""
@@ -355,9 +473,12 @@ def __repr__(self):
return '<Fake completion object jedi has crashed>'
+_JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
+
+
class Completion:
"""
- Completion object used and return by IPython completers.
+ Completion object used and returned by IPython completers.
.. warning::
@@ -417,6 +538,188 @@ def __hash__(self):
return hash((self.start, self.end, self.text))
+class SimpleCompletion:
+ """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
+
+ .. warning::
+
+ Provisional
+
+ This class is used to describe the currently supported attributes of
+ simple completion items, and any additional implementation details
+ should not be relied on. Additional attributes may be included in
+ future versions, and meaning of text disambiguated from the current
+ dual meaning of "text to insert" and "text to used as a label".
+ """
+
+ __slots__ = ["text", "type"]
+
+ def __init__(self, text: str, *, type: str = None):
+ self.text = text
+ self.type = type
+
+ def __repr__(self):
+ return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
+
+
+class _MatcherResultBase(TypedDict):
+ """Definition of dictionary to be returned by new-style Matcher (API v2)."""
+
+ #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
+ matched_fragment: NotRequired[str]
+
+ #: Whether to suppress results from all other matchers (True), some
+ #: matchers (set of identifiers) or none (False); default is False.
+ suppress: NotRequired[Union[bool, Set[str]]]
+
+ #: Identifiers of matchers which should NOT be suppressed when this matcher
+ #: requests to suppress all other matchers; defaults to an empty set.
+ do_not_suppress: NotRequired[Set[str]]
+
+ #: Are completions already ordered and should be left as-is? default is False.
+ ordered: NotRequired[bool]
+
+
+@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
+class SimpleMatcherResult(_MatcherResultBase, TypedDict):
+ """Result of new-style completion matcher."""
+
+ # note: TypedDict is added again to the inheritance chain
+ # in order to get __orig_bases__ for documentation
+
+ #: List of candidate completions
+ completions: Sequence[SimpleCompletion]
+
+
+class _JediMatcherResult(_MatcherResultBase):
+ """Matching result returned by Jedi (will be processed differently)"""
+
+ #: list of candidate completions
+ completions: Iterable[_JediCompletionLike]
+
+
+@dataclass
+class CompletionContext:
+ """Completion context provided as an argument to matchers in the Matcher API v2."""
+
+ # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
+ # which was not explicitly visible as an argument of the matcher, making any refactor
+ # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
+ # from the completer, and make substituting them in sub-classes easier.
+
+ #: Relevant fragment of code directly preceding the cursor.
+ #: The extraction of token is implemented via splitter heuristic
+ #: (following readline behaviour for legacy reasons), which is user configurable
+ #: (by switching the greedy mode).
+ token: str
+
+ #: The full available content of the editor or buffer
+ full_text: str
+
+ #: Cursor position in the line (the same for ``full_text`` and ``text``).
+ cursor_position: int
+
+ #: Cursor line in ``full_text``.
+ cursor_line: int
+
+ #: The maximum number of completions that will be used downstream.
+ #: Matchers can use this information to abort early.
+ #: The built-in Jedi matcher is currently excepted from this limit.
+ # If not given, return all possible completions.
+ limit: Optional[int]
+
+ @cached_property
+ def text_until_cursor(self) -> str:
+ return self.line_with_cursor[: self.cursor_position]
+
+ @cached_property
+ def line_with_cursor(self) -> str:
+ return self.full_text.split("\n")[self.cursor_line]
+
+
+#: Matcher results for API v2.
+MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
+
+
+class _MatcherAPIv1Base(Protocol):
+ def __call__(self, text: str) -> list[str]:
+ """Call signature."""
+
+
+class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
+ #: API version
+ matcher_api_version: Optional[Literal[1]]
+
+ def __call__(self, text: str) -> list[str]:
+ """Call signature."""
+
+
+#: Protocol describing Matcher API v1.
+MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
+
+
+class MatcherAPIv2(Protocol):
+ """Protocol describing Matcher API v2."""
+
+ #: API version
+ matcher_api_version: Literal[2] = 2
+
+ def __call__(self, context: CompletionContext) -> MatcherResult:
+ """Call signature."""
+
+
+Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
+
+
+def completion_matcher(
+ *, priority: float = None, identifier: str = None, api_version: int = 1
+):
+ """Adds attributes describing the matcher.
+
+ Parameters
+ ----------
+ priority : Optional[float]
+ The priority of the matcher, determines the order of execution of matchers.
+ Higher priority means that the matcher will be executed first. Defaults to 0.
+ identifier : Optional[str]
+ identifier of the matcher allowing users to modify the behaviour via traitlets,
+ and also used to for debugging (will be passed as ``origin`` with the completions).
+ Defaults to matcher function ``__qualname__``.
+ api_version: Optional[int]
+ version of the Matcher API used by this matcher.
+ Currently supported values are 1 and 2.
+ Defaults to 1.
+ """
+
+ def wrapper(func: Matcher):
+ func.matcher_priority = priority or 0
+ func.matcher_identifier = identifier or func.__qualname__
+ func.matcher_api_version = api_version
+ if TYPE_CHECKING:
+ if api_version == 1:
+ func = cast(func, MatcherAPIv1)
+ elif api_version == 2:
+ func = cast(func, MatcherAPIv2)
+ return func
+
+ return wrapper
+
+
+def _get_matcher_priority(matcher: Matcher):
+ return getattr(matcher, "matcher_priority", 0)
+
+
+def _get_matcher_id(matcher: Matcher):
+ return getattr(matcher, "matcher_identifier", matcher.__qualname__)
+
+
+def _get_matcher_api_version(matcher):
+ return getattr(matcher, "matcher_api_version", 1)
+
+
+context_matcher = partial(completion_matcher, api_version=2)
+
+
_IC = Iterable[Completion]
@@ -924,7 +1227,20 @@ def _safe_isinstance(obj, module, class_name):
return (module in sys.modules and
isinstance(obj, getattr(import_module(module), class_name)))
-def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
+
+@context_matcher()
+def back_unicode_name_matcher(context: CompletionContext):
+ """Match Unicode characters back to Unicode name
+
+ Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
+ """
+ fragment, matches = back_unicode_name_matches(context.text_until_cursor)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="unicode", fragment=fragment, suppress_if_matches=True
+ )
+
+
+def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
"""Match Unicode characters back to Unicode name
This does ``☃`` -> ``\\snowman``
@@ -934,6 +1250,9 @@ def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
This will not either back-complete standard sequences like \\n, \\b ...
+ .. deprecated:: 8.6
+ You can use :meth:`back_unicode_name_matcher` instead.
+
Returns
=======
@@ -943,7 +1262,6 @@ def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
empty string,
- a sequence (of 1), name for the match Unicode character, preceded by
backslash, or empty if no match.
-
"""
if len(text)<2:
return '', ()
@@ -963,11 +1281,26 @@ def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
pass
return '', ()
-def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
+
+@context_matcher()
+def back_latex_name_matcher(context: CompletionContext):
+ """Match latex characters back to unicode name
+
+ Same as :any:`back_latex_name_matches`, but adopted to new Matcher API.
+ """
+ fragment, matches = back_latex_name_matches(context.text_until_cursor)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="latex", fragment=fragment, suppress_if_matches=True
+ )
+
+
+def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
"""Match latex characters back to unicode name
This does ``\\ℵ`` -> ``\\aleph``
+ .. deprecated:: 8.6
+ You can use :meth:`back_latex_name_matcher` instead.
"""
if len(text)<2:
return '', ()
@@ -1042,11 +1375,23 @@ def _make_signature(completion)-> str:
for p in signature.defined_names()) if f])
-class _CompleteResult(NamedTuple):
- matched_text : str
- matches: Sequence[str]
- matches_origin: Sequence[str]
- jedi_matches: Any
+_CompleteResult = Dict[str, MatcherResult]
+
+
+def _convert_matcher_v1_result_to_v2(
+ matches: Sequence[str],
+ type: str,
+ fragment: str = None,
+ suppress_if_matches: bool = False,
+) -> SimpleMatcherResult:
+ """Utility to help with transition"""
+ result = {
+ "completions": [SimpleCompletion(text=match, type=type) for match in matches],
+ "suppress": (True if matches else False) if suppress_if_matches else False,
+ }
+ if fragment is not None:
+ result["matched_fragment"] = fragment
+ return result
class IPCompleter(Completer):
@@ -1062,17 +1407,59 @@ def _greedy_changed(self, change):
else:
self.splitter.delims = DELIMS
- dict_keys_only = Bool(False,
- help="""Whether to show dict key matches only""")
+ dict_keys_only = Bool(
+ False,
+ help="""
+ Whether to show dict key matches only.
+
+ (disables all matchers except for `IPCompleter.dict_key_matcher`).
+ """,
+ )
+
+ suppress_competing_matchers = UnionTrait(
+ [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
+ default_value=None,
+ help="""
+ Whether to suppress completions from other *Matchers*.
+
+ When set to ``None`` (default) the matchers will attempt to auto-detect
+ whether suppression of other matchers is desirable. For example, at
+ the beginning of a line followed by `%` we expect a magic completion
+ to be the only applicable option, and after ``my_dict['`` we usually
+ expect a completion with an existing dictionary key.
+
+ If you want to disable this heuristic and see completions from all matchers,
+ set ``IPCompleter.suppress_competing_matchers = False``.
+ To disable the heuristic for specific matchers provide a dictionary mapping:
+ ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
+
+ Set ``IPCompleter.suppress_competing_matchers = True`` to limit
+ completions to the set of matchers with the highest priority;
+ this is equivalent to ``IPCompleter.merge_completions`` and
+ can be beneficial for performance, but will sometimes omit relevant
+ candidates from matchers further down the priority list.
+ """,
+ ).tag(config=True)
- merge_completions = Bool(True,
+ merge_completions = Bool(
+ True,
help="""Whether to merge completion results into a single list
If False, only the completion results from the first non-empty
completer will be returned.
- """
+
+ As of version 8.6.0, setting the value to ``False`` is an alias for:
+ ``IPCompleter.suppress_competing_matchers = True.``.
+ """,
+ ).tag(config=True)
+
+ disable_matchers = ListTrait(
+ Unicode(), help="""List of matchers to disable."""
).tag(config=True)
- omit__names = Enum((0,1,2), default_value=2,
+
+ omit__names = Enum(
+ (0, 1, 2),
+ default_value=2,
help="""Instruct the completer to omit private method names
Specifically, when completing on ``object.<tab>``.
@@ -1148,7 +1535,7 @@ def __init__(
namespace=namespace,
global_namespace=global_namespace,
config=config,
- **kwargs
+ **kwargs,
)
# List where completion matches will be stored
@@ -1177,8 +1564,8 @@ def __init__(
#= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
self.magic_arg_matchers = [
- self.magic_config_matches,
- self.magic_color_matches,
+ self.magic_config_matcher,
+ self.magic_color_matcher,
]
# This is set externally by InteractiveShell
@@ -1190,27 +1577,50 @@ def __init__(
# attribute through the `@unicode_names` property.
self._unicode_names = None
+ self._backslash_combining_matchers = [
+ self.latex_name_matcher,
+ self.unicode_name_matcher,
+ back_latex_name_matcher,
+ back_unicode_name_matcher,
+ self.fwd_unicode_matcher,
+ ]
+
+ if not self.backslash_combining_completions:
+ for matcher in self._backslash_combining_matchers:
+ self.disable_matchers.append(matcher.matcher_identifier)
+
+ if not self.merge_completions:
+ self.suppress_competing_matchers = True
+
@property
- def matchers(self) -> List[Any]:
+ def matchers(self) -> List[Matcher]:
"""All active matcher routines for completion"""
if self.dict_keys_only:
- return [self.dict_key_matches]
+ return [self.dict_key_matcher]
if self.use_jedi:
return [
*self.custom_matchers,
- self.dict_key_matches,
- self.file_matches,
- self.magic_matches,
+ *self._backslash_combining_matchers,
+ *self.magic_arg_matchers,
+ self.custom_completer_matcher,
+ self.magic_matcher,
+ self._jedi_matcher,
+ self.dict_key_matcher,
+ self.file_matcher,
]
else:
return [
*self.custom_matchers,
- self.dict_key_matches,
+ *self._backslash_combining_matchers,
+ *self.magic_arg_matchers,
+ self.custom_completer_matcher,
+ self.dict_key_matcher,
+ # TODO: convert python_matches to v2 API
+ self.magic_matcher,
self.python_matches,
- self.file_matches,
- self.magic_matches,
- self.python_func_kw_matches,
+ self.file_matcher,
+ self.python_func_kw_matcher,
]
def all_completions(self, text:str) -> List[str]:
@@ -1231,7 +1641,15 @@ def _clean_glob_win32(self, text:str):
return [f.replace("\\","/")
for f in self.glob("%s*" % text)]
- def file_matches(self, text:str)->List[str]:
+ @context_matcher()
+ def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Same as :any:`file_matches`, but adopted to new Matcher API."""
+ matches = self.file_matches(context.token)
+ # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
+ # starts with `/home/`, `C:\`, etc)
+ return _convert_matcher_v1_result_to_v2(matches, type="path")
+
+ def file_matches(self, text: str) -> List[str]:
"""Match filenames, expanding ~USER type strings.
Most of the seemingly convoluted logic in this completer is an
@@ -1243,7 +1661,11 @@ def file_matches(self, text:str)->List[str]:
only the parts after what's already been typed (instead of the
full completions, as is normally done). I don't think with the
current (as of Python 2.3) Python readline it's possible to do
- better."""
+ better.
+
+ .. deprecated:: 8.6
+ You can use :meth:`file_matcher` instead.
+ """
# chars that require escaping with backslash - i.e. chars
# that readline treats incorrectly as delimiters, but we
@@ -1313,8 +1735,22 @@ def file_matches(self, text:str)->List[str]:
# Mark directories in input list by appending '/' to their names.
return [x+'/' if os.path.isdir(x) else x for x in matches]
- def magic_matches(self, text:str):
- """Match magics"""
+ @context_matcher()
+ def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Match magics."""
+ text = context.token
+ matches = self.magic_matches(text)
+ result = _convert_matcher_v1_result_to_v2(matches, type="magic")
+ is_magic_prefix = len(text) > 0 and text[0] == "%"
+ result["suppress"] = is_magic_prefix and bool(result["completions"])
+ return result
+
+ def magic_matches(self, text: str):
+ """Match magics.
+
+ .. deprecated:: 8.6
+ You can use :meth:`magic_matcher` instead.
+ """
# Get all shell magics now rather than statically, so magics loaded at
# runtime show up too.
lsm = self.shell.magics_manager.lsmagic()
@@ -1355,8 +1791,19 @@ def matches(magic):
return comp
- def magic_config_matches(self, text:str) -> List[str]:
- """ Match class names and attributes for %config magic """
+ @context_matcher()
+ def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Match class names and attributes for %config magic."""
+ # NOTE: uses `line_buffer` equivalent for compatibility
+ matches = self.magic_config_matches(context.line_with_cursor)
+ return _convert_matcher_v1_result_to_v2(matches, type="param")
+
+ def magic_config_matches(self, text: str) -> List[str]:
+ """Match class names and attributes for %config magic.
+
+ .. deprecated:: 8.6
+ You can use :meth:`magic_config_matcher` instead.
+ """
texts = text.strip().split()
if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
@@ -1390,8 +1837,19 @@ def magic_config_matches(self, text:str) -> List[str]:
if attr.startswith(texts[1]) ]
return []
- def magic_color_matches(self, text:str) -> List[str] :
- """ Match color schemes for %colors magic"""
+ @context_matcher()
+ def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Match color schemes for %colors magic."""
+ # NOTE: uses `line_buffer` equivalent for compatibility
+ matches = self.magic_color_matches(context.line_with_cursor)
+ return _convert_matcher_v1_result_to_v2(matches, type="param")
+
+ def magic_color_matches(self, text: str) -> List[str]:
+ """Match color schemes for %colors magic.
+
+ .. deprecated:: 8.6
+ You can use :meth:`magic_color_matcher` instead.
+ """
texts = text.split()
if text.endswith(' '):
# .split() strips off the trailing whitespace. Add '' back
@@ -1404,9 +1862,24 @@ def magic_color_matches(self, text:str) -> List[str] :
if color.startswith(prefix) ]
return []
- def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
+ @context_matcher(identifier="IPCompleter.jedi_matcher")
+ def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
+ matches = self._jedi_matches(
+ cursor_column=context.cursor_position,
+ cursor_line=context.cursor_line,
+ text=context.full_text,
+ )
+ return {
+ "completions": matches,
+ # static analysis should not suppress other matchers
+ "suppress": False,
+ }
+
+ def _jedi_matches(
+ self, cursor_column: int, cursor_line: int, text: str
+ ) -> Iterable[_JediCompletionLike]:
"""
- Return a list of :any:`jedi.api.Completions` object from a ``text`` and
+ Return a list of :any:`jedi.api.Completion`s object from a ``text`` and
cursor position.
Parameters
@@ -1422,6 +1895,9 @@ def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterabl
-----
If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
object containing a string with the Jedi debug information attached.
+
+ .. deprecated:: 8.6
+ You can use :meth:`_jedi_matcher` instead.
"""
namespaces = [self.namespace]
if self.global_namespace is not None:
@@ -1558,8 +2034,18 @@ def _default_arguments(self, obj):
return list(set(ret))
+ @context_matcher()
+ def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Match named parameters (kwargs) of the last open function."""
+ matches = self.python_func_kw_matches(context.token)
+ return _convert_matcher_v1_result_to_v2(matches, type="param")
+
def python_func_kw_matches(self, text):
- """Match named parameters (kwargs) of the last open function"""
+ """Match named parameters (kwargs) of the last open function.
+
+ .. deprecated:: 8.6
+ You can use :meth:`python_func_kw_matcher` instead.
+ """
if "." in text: # a parameter cannot be dotted
return []
@@ -1654,9 +2140,20 @@ def _get_keys(obj: Any) -> List[Any]:
return obj.dtype.names or []
return []
- def dict_key_matches(self, text:str) -> List[str]:
- "Match string keys in a dictionary, after e.g. 'foo[' "
+ @context_matcher()
+ def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
+ """Match string keys in a dictionary, after e.g. ``foo[``."""
+ matches = self.dict_key_matches(context.token)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="dict key", suppress_if_matches=True
+ )
+
+ def dict_key_matches(self, text: str) -> List[str]:
+ """Match string keys in a dictionary, after e.g. ``foo[``.
+ .. deprecated:: 8.6
+ You can use :meth:`dict_key_matcher` instead.
+ """
if self.__dict_key_regexps is not None:
regexps = self.__dict_key_regexps
@@ -1758,8 +2255,16 @@ def dict_key_matches(self, text:str) -> List[str]:
return [leading + k + suf for k in matches]
+ @context_matcher()
+ def unicode_name_matcher(self, context: CompletionContext):
+ """Same as :any:`unicode_name_matches`, but adopted to new Matcher API."""
+ fragment, matches = self.unicode_name_matches(context.text_until_cursor)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="unicode", fragment=fragment, suppress_if_matches=True
+ )
+
@staticmethod
- def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
+ def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
"""Match Latex-like syntax for unicode characters base
on the name of the character.
@@ -1780,11 +2285,24 @@ def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
pass
return '', []
+ @context_matcher()
+ def latex_name_matcher(self, context: CompletionContext):
+ """Match Latex syntax for unicode characters.
- def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
+ This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
+ """
+ fragment, matches = self.latex_matches(context.text_until_cursor)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="latex", fragment=fragment, suppress_if_matches=True
+ )
+
+ def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
"""Match Latex syntax for unicode characters.
This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
+
+ .. deprecated:: 8.6
+ You can use :meth:`latex_name_matcher` instead.
"""
slashpos = text.rfind('\\')
if slashpos > -1:
@@ -1801,7 +2319,25 @@ def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
return s, matches
return '', ()
+ @context_matcher()
+ def custom_completer_matcher(self, context):
+ """Dispatch custom completer.
+
+ If a match is found, suppresses all other matchers except for Jedi.
+ """
+ matches = self.dispatch_custom_completer(context.token) or []
+ result = _convert_matcher_v1_result_to_v2(
+ matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
+ )
+ result["ordered"] = True
+ result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
+ return result
+
def dispatch_custom_completer(self, text):
+ """
+ .. deprecated:: 8.6
+ You can use :meth:`custom_completer_matcher` instead.
+ """
if not self.custom_completers:
return
@@ -1955,12 +2491,25 @@ def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Com
"""
deadline = time.monotonic() + _timeout
-
before = full_text[:offset]
cursor_line, cursor_column = position_to_cursor(full_text, offset)
- matched_text, matches, matches_origin, jedi_matches = self._complete(
- full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
+ jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
+
+ results = self._complete(
+ full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
+ )
+ non_jedi_results: Dict[str, SimpleMatcherResult] = {
+ identifier: result
+ for identifier, result in results.items()
+ if identifier != jedi_matcher_id
+ }
+
+ jedi_matches = (
+ cast(results[jedi_matcher_id], _JediMatcherResult)["completions"]
+ if jedi_matcher_id in results
+ else ()
+ )
iter_jm = iter(jedi_matches)
if _timeout:
@@ -1988,28 +2537,57 @@ def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Com
for jm in iter_jm:
delta = len(jm.name_with_symbols) - len(jm.complete)
- yield Completion(start=offset - delta,
- end=offset,
- text=jm.name_with_symbols,
- type='<unknown>', # don't compute type for speed
- _origin='jedi',
- signature='')
-
-
- start_offset = before.rfind(matched_text)
+ yield Completion(
+ start=offset - delta,
+ end=offset,
+ text=jm.name_with_symbols,
+ type=_UNKNOWN_TYPE, # don't compute type for speed
+ _origin="jedi",
+ signature="",
+ )
# TODO:
# Suppress this, right now just for debug.
- if jedi_matches and matches and self.debug:
- yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
- _origin='debug', type='none', signature='')
+ if jedi_matches and non_jedi_results and self.debug:
+ some_start_offset = before.rfind(
+ next(iter(non_jedi_results.values()))["matched_fragment"]
+ )
+ yield Completion(
+ start=some_start_offset,
+ end=offset,
+ text="--jedi/ipython--",
+ _origin="debug",
+ type="none",
+ signature="",
+ )
- # I'm unsure if this is always true, so let's assert and see if it
- # crash
- assert before.endswith(matched_text)
- for m, t in zip(matches, matches_origin):
- yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
+ ordered = []
+ sortable = []
+
+ for origin, result in non_jedi_results.items():
+ matched_text = result["matched_fragment"]
+ start_offset = before.rfind(matched_text)
+ is_ordered = result.get("ordered", False)
+ container = ordered if is_ordered else sortable
+
+ # I'm unsure if this is always true, so let's assert and see if it
+ # crash
+ assert before.endswith(matched_text)
+
+ for simple_completion in result["completions"]:
+ completion = Completion(
+ start=start_offset,
+ end=offset,
+ text=simple_completion.text,
+ _origin=origin,
+ signature="",
+ type=simple_completion.type or _UNKNOWN_TYPE,
+ )
+ container.append(completion)
+ yield from list(self._deduplicate(ordered + self._sort(sortable)))[
+ :MATCHES_LIMIT
+ ]
def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
"""Find completions for the given text and line context.
@@ -2050,7 +2628,56 @@ def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, S
PendingDeprecationWarning)
# potential todo, FOLD the 3rd throw away argument of _complete
# into the first 2 one.
- return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
+ # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
+ # TODO: should we deprecate now, or does it stay?
+
+ results = self._complete(
+ line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
+ )
+
+ jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
+
+ return self._arrange_and_extract(
+ results,
+ # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
+ skip_matchers={jedi_matcher_id},
+ # this API does not support different start/end positions (fragments of token).
+ abort_if_offset_changes=True,
+ )
+
+ def _arrange_and_extract(
+ self,
+ results: Dict[str, MatcherResult],
+ skip_matchers: Set[str],
+ abort_if_offset_changes: bool,
+ ):
+
+ sortable = []
+ ordered = []
+ most_recent_fragment = None
+ for identifier, result in results.items():
+ if identifier in skip_matchers:
+ continue
+ if not result["completions"]:
+ continue
+ if not most_recent_fragment:
+ most_recent_fragment = result["matched_fragment"]
+ if (
+ abort_if_offset_changes
+ and result["matched_fragment"] != most_recent_fragment
+ ):
+ break
+ if result.get("ordered", False):
+ ordered.extend(result["completions"])
+ else:
+ sortable.extend(result["completions"])
+
+ if not most_recent_fragment:
+ most_recent_fragment = "" # to satisfy typechecker (and just in case)
+
+ return most_recent_fragment, [
+ m.text for m in self._deduplicate(ordered + self._sort(sortable))
+ ]
def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
full_text=None) -> _CompleteResult:
@@ -2085,14 +2712,10 @@ def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
Returns
-------
- A tuple of N elements which are (likely):
- matched_text: ? the text that the complete matched
- matches: list of completions ?
- matches_origin: ? list same length as matches, and where each completion came from
- jedi_matches: list of Jedi matches, have it's own structure.
+ An ordered dictionary where keys are identifiers of completion
+ matchers and values are ``MatcherResult``s.
"""
-
# if the cursor position isn't given, the only sane assumption we can
# make is that it's at the end of the line (the common case)
if cursor_pos is None:
@@ -2104,98 +2727,156 @@ def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
# if text is either None or an empty string, rely on the line buffer
if (not line_buffer) and full_text:
line_buffer = full_text.split('\n')[cursor_line]
- if not text: # issue #11508: check line_buffer before calling split_line
- text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
-
- if self.backslash_combining_completions:
- # allow deactivation of these on windows.
- base_text = text if not line_buffer else line_buffer[:cursor_pos]
-
- for meth in (self.latex_matches,
- self.unicode_name_matches,
- back_latex_name_matches,
- back_unicode_name_matches,
- self.fwd_unicode_match):
- name_text, name_matches = meth(base_text)
- if name_text:
- return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
- [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
-
+ if not text: # issue #11508: check line_buffer before calling split_line
+ text = (
+ self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
+ )
# If no line buffer is given, assume the input text is all there was
if line_buffer is None:
line_buffer = text
+ # deprecated - do not use `line_buffer` in new code.
self.line_buffer = line_buffer
self.text_until_cursor = self.line_buffer[:cursor_pos]
- # Do magic arg matches
- for matcher in self.magic_arg_matchers:
- matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
- if matches:
- origins = [matcher.__qualname__] * len(matches)
- return _CompleteResult(text, matches, origins, ())
+ if not full_text:
+ full_text = line_buffer
+
+ context = CompletionContext(
+ full_text=full_text,
+ cursor_position=cursor_pos,
+ cursor_line=cursor_line,
+ token=text,
+ limit=MATCHES_LIMIT,
+ )
# Start with a clean slate of completions
- matches = []
+ results = {}
- # FIXME: we should extend our api to return a dict with completions for
- # different types of objects. The rlcomplete() method could then
- # simply collapse the dict into a list for readline, but we'd have
- # richer completion semantics in other environments.
- is_magic_prefix = len(text) > 0 and text[0] == "%"
- completions: Iterable[Any] = []
- if self.use_jedi and not is_magic_prefix:
- if not full_text:
- full_text = line_buffer
- completions = self._jedi_matches(
- cursor_pos, cursor_line, full_text)
-
- if self.merge_completions:
- matches = []
- for matcher in self.matchers:
- try:
- matches.extend([(m, matcher.__qualname__)
- for m in matcher(text)])
- except:
- # Show the ugly traceback if the matcher causes an
- # exception, but do NOT crash the kernel!
- sys.excepthook(*sys.exc_info())
- else:
- for matcher in self.matchers:
- matches = [(m, matcher.__qualname__)
- for m in matcher(text)]
- if matches:
- break
-
- seen = set()
- filtered_matches = set()
- for m in matches:
- t, c = m
- if t not in seen:
- filtered_matches.add(m)
- seen.add(t)
+ jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
- _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
+ suppressed_matchers = set()
- custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
-
- _filtered_matches = custom_res or _filtered_matches
-
- _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
- _matches = [m[0] for m in _filtered_matches]
- origins = [m[1] for m in _filtered_matches]
+ matchers = {
+ _get_matcher_id(matcher): matcher
+ for matcher in sorted(
+ self.matchers, key=_get_matcher_priority, reverse=True
+ )
+ }
- self.matches = _matches
+ for matcher_id, matcher in matchers.items():
+ api_version = _get_matcher_api_version(matcher)
+ matcher_id = _get_matcher_id(matcher)
- return _CompleteResult(text, _matches, origins, completions)
-
- def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
+ if matcher_id in self.disable_matchers:
+ continue
+
+ if matcher_id in results:
+ warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
+
+ if matcher_id in suppressed_matchers:
+ continue
+
+ try:
+ if api_version == 1:
+ result = _convert_matcher_v1_result_to_v2(
+ matcher(text), type=_UNKNOWN_TYPE
+ )
+ elif api_version == 2:
+ result = cast(matcher, MatcherAPIv2)(context)
+ else:
+ raise ValueError(f"Unsupported API version {api_version}")
+ except:
+ # Show the ugly traceback if the matcher causes an
+ # exception, but do NOT crash the kernel!
+ sys.excepthook(*sys.exc_info())
+ continue
+
+ # set default value for matched fragment if suffix was not selected.
+ result["matched_fragment"] = result.get("matched_fragment", context.token)
+
+ if not suppressed_matchers:
+ suppression_recommended = result.get("suppress", False)
+
+ suppression_config = (
+ self.suppress_competing_matchers.get(matcher_id, None)
+ if isinstance(self.suppress_competing_matchers, dict)
+ else self.suppress_competing_matchers
+ )
+ should_suppress = (
+ (suppression_config is True)
+ or (suppression_recommended and (suppression_config is not False))
+ ) and len(result["completions"])
+
+ if should_suppress:
+ suppression_exceptions = result.get("do_not_suppress", set())
+ try:
+ to_suppress = set(suppression_recommended)
+ except TypeError:
+ to_suppress = set(matchers)
+ suppressed_matchers = to_suppress - suppression_exceptions
+
+ new_results = {}
+ for previous_matcher_id, previous_result in results.items():
+ if previous_matcher_id not in suppressed_matchers:
+ new_results[previous_matcher_id] = previous_result
+ results = new_results
+
+ results[matcher_id] = result
+
+ _, matches = self._arrange_and_extract(
+ results,
+ # TODO Jedi completions non included in legacy stateful API; was this deliberate or omission?
+ # if it was omission, we can remove the filtering step, otherwise remove this comment.
+ skip_matchers={jedi_matcher_id},
+ abort_if_offset_changes=False,
+ )
+
+ # populate legacy stateful API
+ self.matches = matches
+
+ return results
+
+ @staticmethod
+ def _deduplicate(
+ matches: Sequence[SimpleCompletion],
+ ) -> Iterable[SimpleCompletion]:
+ filtered_matches = {}
+ for match in matches:
+ text = match.text
+ if (
+ text not in filtered_matches
+ or filtered_matches[text].type == _UNKNOWN_TYPE
+ ):
+ filtered_matches[text] = match
+
+ return filtered_matches.values()
+
+ @staticmethod
+ def _sort(matches: Sequence[SimpleCompletion]):
+ return sorted(matches, key=lambda x: completions_sorting_key(x.text))
+
+ @context_matcher()
+ def fwd_unicode_matcher(self, context: CompletionContext):
+ """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
+ # TODO: use `context.limit` to terminate early once we matched the maximum
+ # number that will be used downstream; can be added as an optional to
+ # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
+ fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
+ return _convert_matcher_v1_result_to_v2(
+ matches, type="unicode", fragment=fragment, suppress_if_matches=True
+ )
+
+ def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
"""
Forward match a string starting with a backslash with a list of
potential Unicode completions.
- Will compute list list of Unicode character names on first call and cache it.
+ Will compute list of Unicode character names on first call and cache it.
+
+ .. deprecated:: 8.6
+ You can use :meth:`fwd_unicode_matcher` instead.
Returns
-------
diff --git a/IPython/core/magics/config.py b/IPython/core/magics/config.py
--- a/IPython/core/magics/config.py
+++ b/IPython/core/magics/config.py
@@ -80,6 +80,9 @@ def config(self, s):
Enable debug for the Completer. Mostly print extra information for
experimental jedi integration.
Current: False
+ IPCompleter.disable_matchers=<list-item-1>...
+ List of matchers to disable.
+ Current: []
IPCompleter.greedy=<Bool>
Activate greedy completion
PENDING DEPRECATION. this is now mostly taken care of with Jedi.
@@ -102,6 +105,8 @@ def config(self, s):
Whether to merge completion results into a single list
If False, only the completion results from the first non-empty
completer will be returned.
+ As of version 8.6.0, setting the value to ``False`` is an alias for:
+ ``IPCompleter.suppress_competing_matchers = True.``.
Current: True
IPCompleter.omit__names=<Enum>
Instruct the completer to omit private method names
@@ -117,6 +122,24 @@ def config(self, s):
IPCompleter.profiler_output_dir=<Unicode>
Template for path at which to output profile data for completions.
Current: '.completion_profiles'
+ IPCompleter.suppress_competing_matchers=<Union>
+ Whether to suppress completions from other *Matchers*.
+ When set to ``None`` (default) the matchers will attempt to auto-detect
+ whether suppression of other matchers is desirable. For example, at the
+ beginning of a line followed by `%` we expect a magic completion to be the
+ only applicable option, and after ``my_dict['`` we usually expect a
+ completion with an existing dictionary key.
+ If you want to disable this heuristic and see completions from all matchers,
+ set ``IPCompleter.suppress_competing_matchers = False``. To disable the
+ heuristic for specific matchers provide a dictionary mapping:
+ ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher':
+ False}``.
+ Set ``IPCompleter.suppress_competing_matchers = True`` to limit completions
+ to the set of matchers with the highest priority; this is equivalent to
+ ``IPCompleter.merge_completions`` and can be beneficial for performance, but
+ will sometimes omit relevant candidates from matchers further down the
+ priority list.
+ Current: None
IPCompleter.use_jedi=<Bool>
Experimental: Use Jedi to generate autocompletions. Default to True if jedi
is installed.
diff --git a/IPython/utils/decorators.py b/IPython/utils/decorators.py
--- a/IPython/utils/decorators.py
+++ b/IPython/utils/decorators.py
@@ -2,7 +2,7 @@
"""Decorators that don't go anywhere else.
This module contains misc. decorators that don't really go with another module
-in :mod:`IPython.utils`. Beore putting something here please see if it should
+in :mod:`IPython.utils`. Before putting something here please see if it should
go into another topical module in :mod:`IPython.utils`.
"""
@@ -16,6 +16,10 @@
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
+from typing import Sequence
+
+from IPython.utils.docs import GENERATING_DOCUMENTATION
+
#-----------------------------------------------------------------------------
# Code
@@ -48,6 +52,7 @@ def wrapper(*args,**kw):
wrapper.__doc__ = func.__doc__
return wrapper
+
def undoc(func):
"""Mark a function or class as undocumented.
@@ -56,3 +61,23 @@ def undoc(func):
"""
return func
+
+def sphinx_options(
+ show_inheritance: bool = True,
+ show_inherited_members: bool = False,
+ exclude_inherited_from: Sequence[str] = tuple(),
+):
+ """Set sphinx options"""
+
+ def wrapper(func):
+ if not GENERATING_DOCUMENTATION:
+ return func
+
+ func._sphinx_options = dict(
+ show_inheritance=show_inheritance,
+ show_inherited_members=show_inherited_members,
+ exclude_inherited_from=exclude_inherited_from,
+ )
+ return func
+
+ return wrapper
diff --git a/IPython/utils/docs.py b/IPython/utils/docs.py
new file mode 100644
--- /dev/null
+++ b/IPython/utils/docs.py
@@ -0,0 +1,3 @@
+import os
+
+GENERATING_DOCUMENTATION = os.environ.get("IN_SPHINX_RUN", None) == "True"
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -41,6 +41,14 @@
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
+# Allow Python scripts to change behaviour during sphinx run
+os.environ["IN_SPHINX_RUN"] = "True"
+
+autodoc_type_aliases = {
+ "Matcher": " IPython.core.completer.Matcher",
+ "MatcherAPIv1": " IPython.core.completer.MatcherAPIv1",
+}
+
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
diff --git a/docs/sphinxext/apigen.py b/docs/sphinxext/apigen.py
--- a/docs/sphinxext/apigen.py
+++ b/docs/sphinxext/apigen.py
@@ -24,14 +24,9 @@
import os
import re
from importlib import import_module
+from types import SimpleNamespace as Obj
-class Obj(object):
- '''Namespace to hold arbitrary information.'''
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- setattr(self, k, v)
-
class FuncClsScanner(ast.NodeVisitor):
"""Scan a module for top-level functions and classes.
@@ -42,7 +37,7 @@ def __init__(self):
self.classes = []
self.classes_seen = set()
self.functions = []
-
+
@staticmethod
def has_undoc_decorator(node):
return any(isinstance(d, ast.Name) and d.id == 'undoc' \
@@ -62,11 +57,15 @@ def visit_FunctionDef(self, node):
self.functions.append(node.name)
def visit_ClassDef(self, node):
- if not (node.name.startswith('_') or self.has_undoc_decorator(node)) \
- and node.name not in self.classes_seen:
- cls = Obj(name=node.name)
- cls.has_init = any(isinstance(n, ast.FunctionDef) and \
- n.name=='__init__' for n in node.body)
+ if (
+ not (node.name.startswith("_") or self.has_undoc_decorator(node))
+ and node.name not in self.classes_seen
+ ):
+ cls = Obj(name=node.name, sphinx_options={})
+ cls.has_init = any(
+ isinstance(n, ast.FunctionDef) and n.name == "__init__"
+ for n in node.body
+ )
self.classes.append(cls)
self.classes_seen.add(node.name)
@@ -221,7 +220,11 @@ def _import_funcs_classes(self, uri):
funcs, classes = [], []
for name, obj in ns.items():
if inspect.isclass(obj):
- cls = Obj(name=name, has_init='__init__' in obj.__dict__)
+ cls = Obj(
+ name=name,
+ has_init="__init__" in obj.__dict__,
+ sphinx_options=getattr(obj, "_sphinx_options", {}),
+ )
classes.append(cls)
elif inspect.isfunction(obj):
funcs.append(name)
@@ -279,10 +282,18 @@ def generate_api_doc(self, uri):
self.rst_section_levels[2] * len(subhead) + '\n'
for c in classes:
- ad += '\n.. autoclass:: ' + c.name + '\n'
+ opts = c.sphinx_options
+ ad += "\n.. autoclass:: " + c.name + "\n"
# must NOT exclude from index to keep cross-refs working
- ad += ' :members:\n' \
- ' :show-inheritance:\n'
+ ad += " :members:\n"
+ if opts.get("show_inheritance", True):
+ ad += " :show-inheritance:\n"
+ if opts.get("show_inherited_members", False):
+ exclusions_list = opts.get("exclude_inherited_from", [])
+ exclusions = (
+ (" " + " ".join(exclusions_list)) if exclusions_list else ""
+ )
+ ad += f" :inherited-members:{exclusions}\n"
if c.has_init:
ad += '\n .. automethod:: __init__\n'
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-30175
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: consider undeprecating Series.item() ?
I recently ran into this as well (but forgot to open an issue), and it has now been raised by @alexiswl in https://github.com/pandas-dev/pandas/issues/18262#issuecomment-546746982
`Series.item()` was a consequence of (historically) inheriting from np.ndarray, and was deprecated (like a set of other ndarray-inherited methods/attributes) a while ago.
While `.item()` could also be used to select the "i-th" element (`.item(i)`), a use case that is certainly redundant (I am not arguing here to get that aspect back), there is one use case where `item()` can actually be useful: if you do not pass *i*, the method returns the element of the Series *only* if it has exactly one element, and otherwise it errors.
Such a situation can typically occur if you use boolean indexing (or `query`) to select a single element. Eg in cases like `s[s == 'val']` or `df.loc[df['col1'] == 'val', 'col2']` where you know the condition *should* yield a single element.
You then typically want the scalar element as result, but those two code snippets give you a Series of one element. In those cases, you could use `item()` to retrieve it: `s[s == 'val'].item()`.
I saw some people using `.item()` exactly for this use case, so I am wondering if it is worth keeping it for this (only the version without passing *i*).
The logical alternative is doing a `.iloc[0]`, but `.item()` has the advantage of guaranteeing there was actually only one result item.
</issue>
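Before the code listing, a minimal sketch of the behaviour the issue asks to keep (values are invented; it assumes a pandas version in which the argument-less `item()` is retained, as proposed here):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])

# Boolean indexing that is expected to match exactly one element:
match = s[s == 20]

match.iloc[0]   # 20, but would silently return the first hit even if the
                # mask had matched several rows
match.item()    # 20 as a plain scalar, and it guarantees the mask matched once

# s[s > 10].item() would raise
# ValueError: can only convert an array of size 1 to a Python scalar,
# which is exactly the safety net described above.
```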
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org)
168 - [python-dateutil](https://labix.org/python-dateutil)
169 - [pytz](https://pythonhosted.org/pytz)
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
172
173 ## Installation from sources
174 To install pandas from source you need Cython in addition to the normal
175 dependencies above. Cython can be installed from pypi:
176
177 ```sh
178 pip install cython
179 ```
180
181 In the `pandas` directory (same one where you found this file after
182 cloning the git repo), execute:
183
184 ```sh
185 python setup.py install
186 ```
187
188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
189
190
191 ```sh
192 python -m pip install -e . --no-build-isolation --no-use-pep517
193 ```
194
195 If you have `make`, you can also use `make develop` to run the same command.
196
197 or alternatively
198
199 ```sh
200 python setup.py develop
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
237 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
238
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
37dfcc1acf3b37a1ff5251fee3380a179da1f2ed
|
API: consider undeprecating Series.item() ?
I recently ran into this as well (but forgot to open an issue), and raised now by @alexiswl in https://github.com/pandas-dev/pandas/issues/18262#issuecomment-546746982
`Series.item()` was a consequence of (historically) inheriting from np.ndarray, and was deprecated (like a set of other ndarray-inherited methods/attributes) a while ago.
While `.item()` could also be used to select the "i-th" element (`.item(i)`), and this use case is certainly redundant (not arguing here to get that aspect back), there is one use case where `item()` can actually be useful: if you do not pass *i*, the method returns the element of the Series *only* if it has one element, otherwise it errors.
Such a situation can typically occur if you use boolean indexing (or `query`) to select a single element. Eg in cases like `s[s == 'val']` or `df.loc[df['col1'] == 'val', 'col2']` where you know the condition *should* yield a single element.
You then typically want the scalar element as result, but those two code snippets give you a Series of one element. In those cases, you could use `item()` to retrieve it: `s[s == 'val'].item()`.
I saw some people using `.item()` exactly for this use case, so wondering if it is worth to keep it for this (only the version without passing *i*).
The logical alternative is doing a `.iloc[0]`, but `.item()` has the advantage of guaranteeing there was actually only one result item.
|
Could an `errors` kwarg to `Series.squeeze` achieve this?
Yeah, I actually used `.squeeze()` myself for this use case before (e.g. in some of my tutorials), but after seeing somebody use `.item()` for this, I found that more elegant.
Actually, the docstring of `squeeze` gives an example for this use case: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.squeeze.html
For the actual suggestion: this might get complicated, as in `squeeze` you can have multiple dimensions. So when would an error be raised? If none of the dimensions equals 1 (and so no dimension can be squeezed), or if not all dimensions are 1 (and the result is not a scalar)? Given that there are possibly multiple ways to interpret it, it might not be the best keyword.
Agreed that an additional argument would need to be consistent with `DataFrame.squeeze` and yet not too complicated.
The rationale for this would be to avoid two ways of doing the same operation, with the difference being that one raises.
Intuitively I thought `.at` could do what you're looking for. This would also yield a simpler API than chaining `.loc` and `.item`/`.squeeze`.
It seems like `.at` can't deal with a boolean series index though:
```
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list("ab"))
>>> df.at[df["a"] == 1, "a"]
```
yields
`ValueError: At based indexing on an integer index can only have integer indexers`
I feel like this was possible before, so it was probably deprecated for a good reason that I'm not aware of. But if we're talking about un-deprecation, I think this should be considered as well.
Sorry if I'm reopening a closed discussion with that 😄
I'm fine with un-deprecating this. Is it a blocker for 1.0?
Would we add it to DataFrame, with a similar behavior to a 2D ndarray?
It's certainly not a blocker, but since it's deprecated, and if we want to do this, better sooner than later (and it should be a quick PR).
> Would we add it to DataFrame, with a similar behavior to a 2D ndarray?
That could be done yes, although I personally find that less needed (boolean indexing on the columns is much less common I think)
> Intuitively I thought .at could do what you search for. This would also yield a simpler API than chaining .loc and .item/.squeeze.
I don't think `.at` could ever do this in the past. It requires a single label.
But thinking about it, that might actually be a nice alternative. It's indeed shorter, and the *end result* (a single value out of the dataframe) is still the same as its current purpose.
It does complicate the API of `.at` though. Now it is very simple: a single label for each axis. That would be expanded with: a boolean mask with a single True value ..
The other use case for NumPy's `.item()` is to pull out a built-in Python scalar (e.g., `float`), rather than a NumPy scalar (e.g., `float64`). I think this could still make sense for pandas.
For that use case, I suppose you would want to keep the "full" behaviour of numpy? (so also `s.item(i)` , while for the above discussed use case (getting rid of the Series container for len-1 Series), `s.item()` without argument is enough).
I don't think it's really needed to support the full form of .item(i) with
an argument. That version is definitely redundant with indexing.
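A small illustration of the scalar-type point, and of why the indexed form adds nothing over positional indexing (invented values; assumes the argument-less `item()` is kept):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.5])

type(s.to_numpy()[0])   # <class 'numpy.float64'>, a NumPy scalar
type(s.item())          # <class 'float'>, a built-in Python scalar

# The indexed NumPy form, arr.item(i), duplicates positional indexing:
arr = np.asarray([1.5, 2.5])
arr.item(1)             # 2.5, the same thing .iloc[1] already gives on a Series
```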
|
2019-12-10T04:43:19Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -486,6 +486,7 @@ Documentation Improvements
Deprecations
~~~~~~~~~~~~
+- :meth:`Series.item` and :meth:`Index.item` have been _undeprecated_ (:issue:`29250`)
- ``Index.set_value`` has been deprecated. For a given index ``idx``, array ``arr``,
value in ``idx`` of ``idx_val`` and a new value of ``val``, ``idx.set_value(arr, idx_val, val)``
is equivalent to ``arr[idx.get_loc(idx_val)] = val``, which should be used instead (:issue:`28621`).
@@ -702,6 +703,8 @@ Datetimelike
- Bug in :attr:`Timestamp.resolution` being a property instead of a class attribute (:issue:`29910`)
- Bug in :func:`pandas.to_datetime` when called with ``None`` raising ``TypeError`` instead of returning ``NaT`` (:issue:`30011`)
- Bug in :func:`pandas.to_datetime` failing for `deques` when using ``cache=True`` (the default) (:issue:`29403`)
+- Bug in :meth:`Series.item` with ``datetime64`` or ``timedelta64`` dtype, :meth:`DatetimeIndex.item`, and :meth:`TimedeltaIndex.item` returning an integer instead of a :class:`Timestamp` or :class:`Timedelta` (:issue:`30175`)
+-
Timedelta
^^^^^^^^^
diff --git a/pandas/core/base.py b/pandas/core/base.py
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -5,7 +5,6 @@
from collections import OrderedDict
import textwrap
from typing import Dict, FrozenSet, List, Optional
-import warnings
import numpy as np
@@ -26,6 +25,7 @@
is_object_dtype,
is_scalar,
is_timedelta64_ns_dtype,
+ needs_i8_conversion,
)
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna
@@ -659,19 +659,27 @@ def item(self):
"""
Return the first element of the underlying data as a python scalar.
- .. deprecated:: 0.25.0
-
Returns
-------
scalar
The first element of %(klass)s.
+
+ Raises
+ ------
+ ValueError
+ If the data is not length-1.
"""
- warnings.warn(
- "`item` has been deprecated and will be removed in a future version",
- FutureWarning,
- stacklevel=2,
- )
- return self.values.item()
+ if not (
+ is_extension_array_dtype(self.dtype) or needs_i8_conversion(self.dtype)
+ ):
+ # numpy returns ints instead of datetime64/timedelta64 objects,
+ # which we need to wrap in Timestamp/Timedelta/Period regardless.
+ return self.values.item()
+
+ if len(self) == 1:
+ return next(iter(self))
+ else:
+ raise ValueError("can only convert an array of size 1 to a Python scalar")
@property
def nbytes(self):
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -1,5 +1,4 @@
from datetime import datetime, timedelta
-import warnings
import weakref
import numpy as np
@@ -862,27 +861,6 @@ def __setstate__(self, state):
_unpickle_compat = __setstate__
- def item(self):
- """
- Return the first element of the underlying data as a python
- scalar
-
- .. deprecated:: 0.25.0
-
- """
- warnings.warn(
- "`item` has been deprecated and will be removed in a future version",
- FutureWarning,
- stacklevel=2,
- )
- # TODO(DatetimeArray): remove
- if len(self) == 1:
- return self[0]
- else:
- # TODO: is this still necessary?
- # copy numpy's message here because Py26 raises an IndexError
- raise ValueError("can only convert an array of size 1 to a Python scalar")
-
def memory_usage(self, deep=False):
result = super().memory_usage(deep=deep)
if hasattr(self, "_cache") and "_int64index" in self._cache:
</patch>
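A short sketch of the datetime64 behaviour targeted by the whatsnew entry in the patch above (dates are illustrative; assumes the patched `item()`):

```python
import pandas as pd

dti = pd.date_range("2019-12-09", periods=1, tz="US/Eastern")

# Dtypes needing i8 conversion now go through iteration, so the boxed scalar
# type is preserved instead of leaking the raw int64 representation:
dti.item()              # Timestamp('2019-12-09 00:00:00-0500', tz='US/Eastern')
pd.Series(dti).item()   # the same Timestamp, not an integer
```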
|
[]
|
[]
| |||
ipython__ipython-4613
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove `Configurable.created` ?
What is the point of `Configurable.created`? It does not seem to be used anywhere:
```
$ ack '\.created' **/*.py
IPython/config/configurable.py 105: self.created = datetime.datetime.now()
IPython/nbformat/v2/nbbase.py 159: metadata.created = unicode_type(created)
IPython/nbformat/v2/tests/test_nbbase.py 109: self.assertEqual(md.created, u'today')
IPython/nbformat/v3/nbbase.py 195: metadata.created = cast_unicode(created)
IPython/nbformat/v3/tests/test_nbbase.py 139: self.assertEqual(md.created, u'today')
```
</issue>
<code>
[start of README.rst]
1 ===========================================
2 IPython: Productive Interactive Computing
3 ===========================================
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation is available on `our website
9 <http://ipython.org/documentation.html>`_; if you downloaded a built source
10 distribution the ``docs/source`` directory contains the plaintext version of
11 these manuals. If you have Sphinx installed, you can build them by typing
12 ``cd docs; make html`` for local browsing.
13
14
15 Dependencies and supported Python versions
16 ==========================================
17
18 For full details, see the installation section of the manual. The basic parts
19 of IPython only need the Python standard library, but much of its more advanced
20 functionality requires extra packages.
21
22 Officially, IPython requires Python version 2.7, or 3.3 and above.
23 IPython 1.x is the last IPython version to support Python 2.6 and 3.2.
24
25
26 Instant running
27 ===============
28
29 You can run IPython from this directory without even installing it system-wide
30 by typing at the terminal::
31
32 $ python -m IPython
33
34
35 Development installation
36 ========================
37
38 If you want to hack on certain parts, e.g. the IPython notebook, in a clean
39 environment (such as a virtualenv) you can use ``pip`` to grab the necessary
40 dependencies quickly::
41
42 $ git clone --recursive https://github.com/ipython/ipython.git
43 $ cd ipython
44 $ pip install -e ".[notebook]"
45
46 This installs the necessary packages and symlinks IPython into your current
47 environment so that you can work on your local repo copy and run it from anywhere::
48
49 $ ipython notebook
50
51 The same process applies for other parts, such as the qtconsole (the
52 ``extras_require`` attribute in the setup.py file lists all the possibilities).
53
54 Git Hooks and Submodules
55 ************************
56
57 IPython now uses git submodules to ship its javascript dependencies.
58 If you run IPython from git master, you may need to update submodules once in a while with::
59
60 $ git submodule update
61
62 or::
63
64 $ python setup.py submodule
65
66 We have some git hooks for helping keep your submodules always in sync,
67 see our ``git-hooks`` directory for more info.
68
[end of README.rst]
[start of IPython/nbformat/convert.py]
1 """API for converting notebooks between versions.
2
3 Authors:
4
5 * Jonathan Frederic
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2013 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 import re
20
21 from .reader import get_version, versions
22
23 #-----------------------------------------------------------------------------
24 # Functions
25 #-----------------------------------------------------------------------------
26
27 def convert(nb, to_version):
28 """Convert a notebook node object to a specific version. Assumes that
29 all the versions starting from 1 to the latest major X are implemented.
30 In other words, there should never be a case where v1 v2 v3 v5 exist without
31 a v4. Also assumes that all conversions can be made in one step increments
32 between major versions and ignores minor revisions.
33
34 Parameters
35 ----------
36 nb : NotebookNode
37 to_version : int
38 Major revision to convert the notebook to. Can either be an upgrade or
39 a downgrade.
40 """
41
42 # Get input notebook version.
43 (version, version_minor) = get_version(nb)
44
45 # Check if destination is current version, if so return contents
46 if version == to_version:
47 return nb
48
49     # If the version exists, try to convert to it one step at a time.
50 elif to_version in versions:
51
52         # Get the version that this recursion will convert to as a step
53 # closer to the final revision. Make sure the newer of the conversion
54 # functions is used to perform the conversion.
55 if to_version > version:
56 step_version = version + 1
57 convert_function = versions[step_version].upgrade
58 else:
59 step_version = version - 1
60 convert_function = versions[version].downgrade
61
62 # Convert and make sure version changed during conversion.
63 converted = convert_function(nb)
64 if converted.get('nbformat', 1) == version:
65             raise Exception("Cannot convert notebook from v%d to v%d. Operation " \
66                 "failed silently." % (version, step_version))
67
68 # Recursively convert until target version is reached.
69 return convert(converted, to_version)
70 else:
71 raise Exception("Cannot convert notebook to v%d because that " \
72 "version doesn't exist" % (to_version))
73
[end of IPython/nbformat/convert.py]
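Based only on the modules reproduced in this listing, an (untested) usage sketch of the one-major-version-at-a-time conversion:

```python
from IPython.nbformat.convert import convert
from IPython.nbformat.v2.nbbase import new_code_cell, new_notebook, new_worksheet

# Build a minimal v2 notebook; v2's new_notebook sets nbformat=2 itself.
nb_v2 = new_notebook(
    worksheets=[new_worksheet(cells=[new_code_cell(input=u"1 + 1")])]
)

nb_v3 = convert(nb_v2, to_version=3)   # dispatches to versions[3].upgrade
nb_v3.nbformat         # 3
nb_v3.orig_nbformat    # 2, recorded by the v3 upgrade step
```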
[start of IPython/nbformat/current.py]
1 """The official API for working with notebooks in the current format version.
2
3 Authors:
4
5 * Brian Granger
6 * Jonathan Frederic
7 """
8
9 #-----------------------------------------------------------------------------
10 # Copyright (C) 2008-2011 The IPython Development Team
11 #
12 # Distributed under the terms of the BSD License. The full license is in
13 # the file COPYING, distributed as part of this software.
14 #-----------------------------------------------------------------------------
15
16 #-----------------------------------------------------------------------------
17 # Imports
18 #-----------------------------------------------------------------------------
19
20 from __future__ import print_function
21
22 from xml.etree import ElementTree as ET
23 import re
24
25 from IPython.utils.py3compat import unicode_type
26
27 from IPython.nbformat.v3 import (
28 NotebookNode,
29 new_code_cell, new_text_cell, new_notebook, new_output, new_worksheet,
30 parse_filename, new_metadata, new_author, new_heading_cell, nbformat,
31 nbformat_minor, to_notebook_json
32 )
33 from IPython.nbformat import v3 as _v_latest
34
35 from .reader import reads as reader_reads
36 from .reader import versions
37 from .convert import convert
38
39 #-----------------------------------------------------------------------------
40 # Code
41 #-----------------------------------------------------------------------------
42
43 current_nbformat = nbformat
44 current_nbformat_minor = nbformat_minor
45 current_nbformat_module = _v_latest.__name__
46
47 def docstring_nbformat_mod(func):
48 """Decorator for docstrings referring to classes/functions accessed through
49 nbformat.current.
50
51 Put {nbformat_mod} in the docstring in place of 'IPython.nbformat.v3'.
52 """
53 func.__doc__ = func.__doc__.format(nbformat_mod=current_nbformat_module)
54 return func
55
56
57 class NBFormatError(ValueError):
58 pass
59
60
61 def parse_py(s, **kwargs):
62 """Parse a string into a (nbformat, string) tuple."""
63 nbf = current_nbformat
64 nbm = current_nbformat_minor
65
66 pattern = r'# <nbformat>(?P<nbformat>\d+[\.\d+]*)</nbformat>'
67 m = re.search(pattern,s)
68 if m is not None:
69 digits = m.group('nbformat').split('.')
70 nbf = int(digits[0])
71 if len(digits) > 1:
72 nbm = int(digits[1])
73
74 return nbf, nbm, s
75
76
77 def reads_json(s, **kwargs):
78 """Read a JSON notebook from a string and return the NotebookNode object."""
79 return convert(reader_reads(s), current_nbformat)
80
81
82 def writes_json(nb, **kwargs):
83 return versions[current_nbformat].writes_json(nb, **kwargs)
84
85
86 def reads_py(s, **kwargs):
87 """Read a .py notebook from a string and return the NotebookNode object."""
88 nbf, nbm, s = parse_py(s, **kwargs)
89 if nbf in (2, 3):
90 nb = versions[nbf].to_notebook_py(s, **kwargs)
91 else:
92 raise NBFormatError('Unsupported PY nbformat version: %i' % nbf)
93 return nb
94
95
96 def writes_py(nb, **kwargs):
97 # nbformat 3 is the latest format that supports py
98 return versions[3].writes_py(nb, **kwargs)
99
100
101 # High level API
102
103
104 def reads(s, format, **kwargs):
105 """Read a notebook from a string and return the NotebookNode object.
106
107 This function properly handles notebooks of any version. The notebook
108 returned will always be in the current version's format.
109
110 Parameters
111 ----------
112 s : unicode
113 The raw unicode string to read the notebook from.
114 format : (u'json', u'ipynb', u'py')
115 The format that the string is in.
116
117 Returns
118 -------
119 nb : NotebookNode
120 The notebook that was read.
121 """
122 format = unicode_type(format)
123 if format == u'json' or format == u'ipynb':
124 return reads_json(s, **kwargs)
125 elif format == u'py':
126 return reads_py(s, **kwargs)
127 else:
128 raise NBFormatError('Unsupported format: %s' % format)
129
130
131 def writes(nb, format, **kwargs):
132 """Write a notebook to a string in a given format in the current nbformat version.
133
134 This function always writes the notebook in the current nbformat version.
135
136 Parameters
137 ----------
138 nb : NotebookNode
139 The notebook to write.
140 format : (u'json', u'ipynb', u'py')
141 The format to write the notebook in.
142
143 Returns
144 -------
145 s : unicode
146 The notebook string.
147 """
148 format = unicode_type(format)
149 if format == u'json' or format == u'ipynb':
150 return writes_json(nb, **kwargs)
151 elif format == u'py':
152 return writes_py(nb, **kwargs)
153 else:
154 raise NBFormatError('Unsupported format: %s' % format)
155
156
157 def read(fp, format, **kwargs):
158 """Read a notebook from a file and return the NotebookNode object.
159
160 This function properly handles notebooks of any version. The notebook
161 returned will always be in the current version's format.
162
163 Parameters
164 ----------
165 fp : file
166 Any file-like object with a read method.
167 format : (u'json', u'ipynb', u'py')
168 The format that the string is in.
169
170 Returns
171 -------
172 nb : NotebookNode
173 The notebook that was read.
174 """
175 return reads(fp.read(), format, **kwargs)
176
177
178 def write(nb, fp, format, **kwargs):
179 """Write a notebook to a file in a given format in the current nbformat version.
180
181 This function always writes the notebook in the current nbformat version.
182
183 Parameters
184 ----------
185 nb : NotebookNode
186 The notebook to write.
187 fp : file
188 Any file-like object with a write method.
189 format : (u'json', u'ipynb', u'py')
190 The format to write the notebook in.
191
192 Returns
193 -------
194 s : unicode
195 The notebook string.
196 """
197 return fp.write(writes(nb, format, **kwargs))
198
199 def _convert_to_metadata():
200 """Convert to a notebook having notebook metadata."""
201 import glob
202 for fname in glob.glob('*.ipynb'):
203 print('Converting file:',fname)
204 with open(fname,'r') as f:
205 nb = read(f,u'json')
206 md = new_metadata()
207 if u'name' in nb:
208 md.name = nb.name
209 del nb[u'name']
210 nb.metadata = md
211 with open(fname,'w') as f:
212 write(nb, f, u'json')
213
214
[end of IPython/nbformat/current.py]
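A quick, self-contained illustration of the version sniffing done by `parse_py` above:

```python
from IPython.nbformat.current import parse_py

src = u"# <nbformat>3.0</nbformat>\n# <codecell>\nprint(1)\n"

nbf, nbm, s = parse_py(src)
(nbf, nbm)   # (3, 0): major/minor pulled from the "# <nbformat>" marker
s == src     # True: the source itself is passed through unchanged
```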
[start of IPython/nbformat/reader.py]
1 """API for reading notebooks.
2
3 Authors:
4
5 * Jonathan Frederic
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2013 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 import json
20
21 from . import v1
22 from . import v2
23 from . import v3
24
25 versions = {
26 1: v1,
27 2: v2,
28 3: v3,
29 }
30
31 #-----------------------------------------------------------------------------
32 # Code
33 #-----------------------------------------------------------------------------
34
35 class NotJSONError(ValueError):
36 pass
37
38 def parse_json(s, **kwargs):
39 """Parse a JSON string into a dict."""
40 try:
41 nb_dict = json.loads(s, **kwargs)
42 except ValueError:
43 # Limit the error message to 80 characters. Display whatever JSON will fit.
44 raise NotJSONError(("Notebook does not appear to be JSON: %r" % s)[:77] + "...")
45 return nb_dict
46
47 # High level API
48
49 def get_version(nb):
50 """Get the version of a notebook.
51
52 Parameters
53 ----------
54 nb : dict
55 NotebookNode or dict containing notebook data.
56
57 Returns
58 -------
59 Tuple containing major (int) and minor (int) version numbers
60 """
61 major = nb.get('nbformat', 1)
62 minor = nb.get('nbformat_minor', 0)
63 return (major, minor)
64
65
66 def reads(s, **kwargs):
67 """Read a notebook from a json string and return the
68 NotebookNode object.
69
70 This function properly reads notebooks of any version. No version
71 conversion is performed.
72
73 Parameters
74 ----------
75 s : unicode
76 The raw unicode string to read the notebook from.
77
78 Returns
79 -------
80 nb : NotebookNode
81 The notebook that was read.
82 """
83 nb_dict = parse_json(s, **kwargs)
84 (major, minor) = get_version(nb_dict)
85 if major in versions:
86 return versions[major].to_notebook_json(nb_dict, minor=minor)
87 else:
88 raise NBFormatError('Unsupported nbformat version %s' % major)
89
90
91 def read(fp, **kwargs):
92 """Read a notebook from a file and return the NotebookNode object.
93
94 This function properly reads notebooks of any version. No version
95 conversion is performed.
96
97 Parameters
98 ----------
99 fp : file
100 Any file-like object with a read method.
101
102 Returns
103 -------
104 nb : NotebookNode
105 The notebook that was read.
106 """
107 return reads(fp.read(), **kwargs)
108
[end of IPython/nbformat/reader.py]
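A brief sketch of the low-level helpers above:

```python
from IPython.nbformat.reader import get_version, parse_json

nb_dict = parse_json(u'{"nbformat": 3, "nbformat_minor": 0, "worksheets": []}')
get_version(nb_dict)   # (3, 0)

get_version({})        # (1, 0): anything without version markers is treated as v1

# parse_json(u"not a notebook") would raise NotJSONError, showing a truncated
# preview of the offending string.
```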
[start of IPython/nbformat/v1/nbbase.py]
1 """The basic dict based notebook format.
2
3 Authors:
4
5 * Brian Granger
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2008-2011 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 import pprint
20 import uuid
21
22 from IPython.utils.ipstruct import Struct
23 from IPython.utils.py3compat import unicode_type
24
25 #-----------------------------------------------------------------------------
26 # Code
27 #-----------------------------------------------------------------------------
28
29 class NotebookNode(Struct):
30 pass
31
32
33 def from_dict(d):
34 if isinstance(d, dict):
35 newd = NotebookNode()
36 for k,v in d.items():
37 newd[k] = from_dict(v)
38 return newd
39 elif isinstance(d, (tuple, list)):
40 return [from_dict(i) for i in d]
41 else:
42 return d
43
44
45 def new_code_cell(code=None, prompt_number=None):
46 """Create a new code cell with input and output"""
47 cell = NotebookNode()
48 cell.cell_type = u'code'
49 if code is not None:
50 cell.code = unicode_type(code)
51 if prompt_number is not None:
52 cell.prompt_number = int(prompt_number)
53 return cell
54
55
56 def new_text_cell(text=None):
57 """Create a new text cell."""
58 cell = NotebookNode()
59 if text is not None:
60 cell.text = unicode_type(text)
61 cell.cell_type = u'text'
62 return cell
63
64
65 def new_notebook(cells=None):
66 """Create a notebook by name, id and a list of worksheets."""
67 nb = NotebookNode()
68 if cells is not None:
69 nb.cells = cells
70 else:
71 nb.cells = []
72 return nb
73
74
[end of IPython/nbformat/v1/nbbase.py]
[start of IPython/nbformat/v2/__init__.py]
1 """The main API for the v2 notebook format.
2
3 Authors:
4
5 * Brian Granger
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2008-2011 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 from .nbbase import (
20 NotebookNode,
21 new_code_cell, new_text_cell, new_notebook, new_output, new_worksheet,
22 new_metadata, new_author
23 )
24
25 from .nbjson import reads as reads_json, writes as writes_json
26 from .nbjson import reads as read_json, writes as write_json
27 from .nbjson import to_notebook as to_notebook_json
28
29 from .nbxml import reads as reads_xml
30 from .nbxml import reads as read_xml
31 from .nbxml import to_notebook as to_notebook_xml
32
33 from .nbpy import reads as reads_py, writes as writes_py
34 from .nbpy import reads as read_py, writes as write_py
35 from .nbpy import to_notebook as to_notebook_py
36
37 from .convert import downgrade, upgrade
38
39 #-----------------------------------------------------------------------------
40 # Code
41 #-----------------------------------------------------------------------------
42
43 def parse_filename(fname):
44 """Parse a notebook filename.
45
46 This function takes a notebook filename and returns the notebook
47 format (json/py) and the notebook name. This logic can be
48 summarized as follows:
49
50 * notebook.ipynb -> (notebook.ipynb, notebook, json)
51 * notebook.json -> (notebook.json, notebook, json)
52 * notebook.py -> (notebook.py, notebook, py)
53 * notebook -> (notebook.ipynb, notebook, json)
54
55 Parameters
56 ----------
57 fname : unicode
58 The notebook filename. The filename can use a specific filename
59         extension (.ipynb, .json, .py) or none, in which case .ipynb will
60 be assumed.
61
62 Returns
63 -------
64 (fname, name, format) : (unicode, unicode, unicode)
65 The filename, notebook name and format.
66 """
67 if fname.endswith(u'.ipynb'):
68 format = u'json'
69 elif fname.endswith(u'.json'):
70 format = u'json'
71 elif fname.endswith(u'.py'):
72 format = u'py'
73 else:
74 fname = fname + u'.ipynb'
75 format = u'json'
76 name = fname.split('.')[0]
77 return fname, name, format
78
79
[end of IPython/nbformat/v2/__init__.py]
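A brief illustration of `parse_filename` (the v3 package below ships the same helper):

```python
from IPython.nbformat.v2 import parse_filename

parse_filename(u"analysis.ipynb")   # (u'analysis.ipynb', u'analysis', u'json')
parse_filename(u"analysis.py")      # (u'analysis.py', u'analysis', u'py')
parse_filename(u"analysis")         # (u'analysis.ipynb', u'analysis', u'json')

# Note the name is everything before the first dot, so u"a.b.ipynb" yields u'a'.
```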
[start of IPython/nbformat/v2/nbbase.py]
1 """The basic dict based notebook format.
2
3 The Python representation of a notebook is a nested structure of
4 dictionary subclasses that support attribute access
5 (IPython.utils.ipstruct.Struct). The functions in this module are merely
6 helpers to build the structs in the right form.
7
8 Authors:
9
10 * Brian Granger
11 """
12
13 #-----------------------------------------------------------------------------
14 # Copyright (C) 2008-2011 The IPython Development Team
15 #
16 # Distributed under the terms of the BSD License. The full license is in
17 # the file COPYING, distributed as part of this software.
18 #-----------------------------------------------------------------------------
19
20 #-----------------------------------------------------------------------------
21 # Imports
22 #-----------------------------------------------------------------------------
23
24 import pprint
25 import uuid
26
27 from IPython.utils.ipstruct import Struct
28 from IPython.utils.py3compat import unicode_type
29
30 #-----------------------------------------------------------------------------
31 # Code
32 #-----------------------------------------------------------------------------
33
34 class NotebookNode(Struct):
35 pass
36
37
38 def from_dict(d):
39 if isinstance(d, dict):
40 newd = NotebookNode()
41 for k,v in d.items():
42 newd[k] = from_dict(v)
43 return newd
44 elif isinstance(d, (tuple, list)):
45 return [from_dict(i) for i in d]
46 else:
47 return d
48
49
50 def new_output(output_type=None, output_text=None, output_png=None,
51 output_html=None, output_svg=None, output_latex=None, output_json=None,
52 output_javascript=None, output_jpeg=None, prompt_number=None,
53 etype=None, evalue=None, traceback=None):
54 """Create a new code cell with input and output"""
55 output = NotebookNode()
56 if output_type is not None:
57 output.output_type = unicode_type(output_type)
58
59 if output_type != 'pyerr':
60 if output_text is not None:
61 output.text = unicode_type(output_text)
62 if output_png is not None:
63 output.png = bytes(output_png)
64 if output_jpeg is not None:
65 output.jpeg = bytes(output_jpeg)
66 if output_html is not None:
67 output.html = unicode_type(output_html)
68 if output_svg is not None:
69 output.svg = unicode_type(output_svg)
70 if output_latex is not None:
71 output.latex = unicode_type(output_latex)
72 if output_json is not None:
73 output.json = unicode_type(output_json)
74 if output_javascript is not None:
75 output.javascript = unicode_type(output_javascript)
76
77 if output_type == u'pyout':
78 if prompt_number is not None:
79 output.prompt_number = int(prompt_number)
80
81 if output_type == u'pyerr':
82 if etype is not None:
83 output.etype = unicode_type(etype)
84 if evalue is not None:
85 output.evalue = unicode_type(evalue)
86 if traceback is not None:
87 output.traceback = [unicode_type(frame) for frame in list(traceback)]
88
89 return output
90
91
92 def new_code_cell(input=None, prompt_number=None, outputs=None,
93 language=u'python', collapsed=False):
94 """Create a new code cell with input and output"""
95 cell = NotebookNode()
96 cell.cell_type = u'code'
97 if language is not None:
98 cell.language = unicode_type(language)
99 if input is not None:
100 cell.input = unicode_type(input)
101 if prompt_number is not None:
102 cell.prompt_number = int(prompt_number)
103 if outputs is None:
104 cell.outputs = []
105 else:
106 cell.outputs = outputs
107 if collapsed is not None:
108 cell.collapsed = bool(collapsed)
109
110 return cell
111
112 def new_text_cell(cell_type, source=None, rendered=None):
113 """Create a new text cell."""
114 cell = NotebookNode()
115 if source is not None:
116 cell.source = unicode_type(source)
117 if rendered is not None:
118 cell.rendered = unicode_type(rendered)
119 cell.cell_type = cell_type
120 return cell
121
122
123 def new_worksheet(name=None, cells=None):
124     """Create a worksheet by name with a list of cells."""
125 ws = NotebookNode()
126 if name is not None:
127 ws.name = unicode_type(name)
128 if cells is None:
129 ws.cells = []
130 else:
131 ws.cells = list(cells)
132 return ws
133
134
135 def new_notebook(metadata=None, worksheets=None):
136 """Create a notebook by name, id and a list of worksheets."""
137 nb = NotebookNode()
138 nb.nbformat = 2
139 if worksheets is None:
140 nb.worksheets = []
141 else:
142 nb.worksheets = list(worksheets)
143 if metadata is None:
144 nb.metadata = new_metadata()
145 else:
146 nb.metadata = NotebookNode(metadata)
147 return nb
148
149
150 def new_metadata(name=None, authors=None, license=None, created=None,
151 modified=None, gistid=None):
152 """Create a new metadata node."""
153 metadata = NotebookNode()
154 if name is not None:
155 metadata.name = unicode_type(name)
156 if authors is not None:
157 metadata.authors = list(authors)
158 if created is not None:
159 metadata.created = unicode_type(created)
160 if modified is not None:
161 metadata.modified = unicode_type(modified)
162 if license is not None:
163 metadata.license = unicode_type(license)
164 if gistid is not None:
165 metadata.gistid = unicode_type(gistid)
166 return metadata
167
168 def new_author(name=None, email=None, affiliation=None, url=None):
169 """Create a new author."""
170 author = NotebookNode()
171 if name is not None:
172 author.name = unicode_type(name)
173 if email is not None:
174 author.email = unicode_type(email)
175 if affiliation is not None:
176 author.affiliation = unicode_type(affiliation)
177 if url is not None:
178 author.url = unicode_type(url)
179 return author
180
181
[end of IPython/nbformat/v2/nbbase.py]
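The `metadata.created` hit from the issue's ack output corresponds to `new_metadata` above. A minimal construction sketch (all values invented):

```python
from IPython.nbformat.v2.nbbase import (
    new_author, new_code_cell, new_metadata, new_notebook, new_worksheet,
)

md = new_metadata(
    name=u"example",
    authors=[new_author(name=u"Jane Doe")],
    created=u"2013-12-01",   # stored as plain unicode; nothing in nbbase parses it
)
ws = new_worksheet(name=u"sheet 1", cells=[new_code_cell(input=u"1 + 1")])
nb = new_notebook(metadata=md, worksheets=[ws])

nb.nbformat            # 2
nb.metadata.created    # u'2013-12-01'
```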
[start of IPython/nbformat/v2/nbpy.py]
1 """Read and write notebooks as regular .py files.
2
3 Authors:
4
5 * Brian Granger
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2008-2011 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 import re
20 from IPython.utils.py3compat import unicode_type
21 from .rwbase import NotebookReader, NotebookWriter
22 from .nbbase import new_code_cell, new_text_cell, new_worksheet, new_notebook
23
24 #-----------------------------------------------------------------------------
25 # Code
26 #-----------------------------------------------------------------------------
27
28 _encoding_declaration_re = re.compile(r"^#.*coding[:=]\s*([-\w.]+)")
29
30 class PyReaderError(Exception):
31 pass
32
33
34 class PyReader(NotebookReader):
35
36 def reads(self, s, **kwargs):
37 return self.to_notebook(s,**kwargs)
38
39 def to_notebook(self, s, **kwargs):
40 lines = s.splitlines()
41 cells = []
42 cell_lines = []
43 state = u'codecell'
44 for line in lines:
45 if line.startswith(u'# <nbformat>') or _encoding_declaration_re.match(line):
46 pass
47 elif line.startswith(u'# <codecell>'):
48 cell = self.new_cell(state, cell_lines)
49 if cell is not None:
50 cells.append(cell)
51 state = u'codecell'
52 cell_lines = []
53 elif line.startswith(u'# <htmlcell>'):
54 cell = self.new_cell(state, cell_lines)
55 if cell is not None:
56 cells.append(cell)
57 state = u'htmlcell'
58 cell_lines = []
59 elif line.startswith(u'# <markdowncell>'):
60 cell = self.new_cell(state, cell_lines)
61 if cell is not None:
62 cells.append(cell)
63 state = u'markdowncell'
64 cell_lines = []
65 else:
66 cell_lines.append(line)
67 if cell_lines and state == u'codecell':
68 cell = self.new_cell(state, cell_lines)
69 if cell is not None:
70 cells.append(cell)
71 ws = new_worksheet(cells=cells)
72 nb = new_notebook(worksheets=[ws])
73 return nb
74
75 def new_cell(self, state, lines):
76 if state == u'codecell':
77 input = u'\n'.join(lines)
78 input = input.strip(u'\n')
79 if input:
80 return new_code_cell(input=input)
81 elif state == u'htmlcell':
82 text = self._remove_comments(lines)
83 if text:
84 return new_text_cell(u'html',source=text)
85 elif state == u'markdowncell':
86 text = self._remove_comments(lines)
87 if text:
88 return new_text_cell(u'markdown',source=text)
89
90 def _remove_comments(self, lines):
91 new_lines = []
92 for line in lines:
93 if line.startswith(u'#'):
94 new_lines.append(line[2:])
95 else:
96 new_lines.append(line)
97 text = u'\n'.join(new_lines)
98 text = text.strip(u'\n')
99 return text
100
101 def split_lines_into_blocks(self, lines):
102 if len(lines) == 1:
103 yield lines[0]
104 raise StopIteration()
105 import ast
106 source = '\n'.join(lines)
107 code = ast.parse(source)
108 starts = [x.lineno-1 for x in code.body]
109 for i in range(len(starts)-1):
110 yield '\n'.join(lines[starts[i]:starts[i+1]]).strip('\n')
111 yield '\n'.join(lines[starts[-1]:]).strip('\n')
112
113
114 class PyWriter(NotebookWriter):
115
116 def writes(self, nb, **kwargs):
117 lines = [u'# -*- coding: utf-8 -*-']
118 lines.extend([u'# <nbformat>2</nbformat>',''])
119 for ws in nb.worksheets:
120 for cell in ws.cells:
121 if cell.cell_type == u'code':
122 input = cell.get(u'input')
123 if input is not None:
124 lines.extend([u'# <codecell>',u''])
125 lines.extend(input.splitlines())
126 lines.append(u'')
127 elif cell.cell_type == u'html':
128 input = cell.get(u'source')
129 if input is not None:
130 lines.extend([u'# <htmlcell>',u''])
131 lines.extend([u'# ' + line for line in input.splitlines()])
132 lines.append(u'')
133 elif cell.cell_type == u'markdown':
134 input = cell.get(u'source')
135 if input is not None:
136 lines.extend([u'# <markdowncell>',u''])
137 lines.extend([u'# ' + line for line in input.splitlines()])
138 lines.append(u'')
139 lines.append('')
140 return unicode_type('\n'.join(lines))
141
142
143 _reader = PyReader()
144 _writer = PyWriter()
145
146 reads = _reader.reads
147 read = _reader.read
148 to_notebook = _reader.to_notebook
149 write = _writer.write
150 writes = _writer.writes
151
152
[end of IPython/nbformat/v2/nbpy.py]
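A round-trip sketch for the .py reader/writer above. Note that `to_notebook` only flushes a trailing block when it is a code cell, so the example deliberately ends with one:

```python
from IPython.nbformat.v2.nbpy import reads, writes

src = u"""# <nbformat>2</nbformat>
# <markdowncell>
# Some *markdown* prose.
# <codecell>
print('hi')
"""

nb = reads(src)
[c.cell_type for c in nb.worksheets[0].cells]   # [u'markdown', u'code']

print(writes(nb))   # regenerates the commented .py form, prepending the coding
                    # line and the "# <nbformat>2</nbformat>" header
```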
[start of IPython/nbformat/v3/__init__.py]
1 """The main API for the v3 notebook format.
2
3 Authors:
4
5 * Brian Granger
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2008-2011 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 from .nbbase import (
20 NotebookNode,
21 new_code_cell, new_text_cell, new_notebook, new_output, new_worksheet,
22 new_metadata, new_author, new_heading_cell, nbformat, nbformat_minor
23 )
24
25 from .nbjson import reads as reads_json, writes as writes_json
26 from .nbjson import reads as read_json, writes as write_json
27 from .nbjson import to_notebook as to_notebook_json
28
29 from .nbpy import reads as reads_py, writes as writes_py
30 from .nbpy import reads as read_py, writes as write_py
31 from .nbpy import to_notebook as to_notebook_py
32
33 from .convert import downgrade, upgrade
34
35 #-----------------------------------------------------------------------------
36 # Code
37 #-----------------------------------------------------------------------------
38
39 def parse_filename(fname):
40 """Parse a notebook filename.
41
42 This function takes a notebook filename and returns the notebook
43 format (json/py) and the notebook name. This logic can be
44 summarized as follows:
45
46 * notebook.ipynb -> (notebook.ipynb, notebook, json)
47 * notebook.json -> (notebook.json, notebook, json)
48 * notebook.py -> (notebook.py, notebook, py)
49 * notebook -> (notebook.ipynb, notebook, json)
50
51 Parameters
52 ----------
53 fname : unicode
54 The notebook filename. The filename can use a specific filename
55         extension (.ipynb, .json, .py) or none, in which case .ipynb will
56 be assumed.
57
58 Returns
59 -------
60 (fname, name, format) : (unicode, unicode, unicode)
61 The filename, notebook name and format.
62 """
63 if fname.endswith(u'.ipynb'):
64 format = u'json'
65 elif fname.endswith(u'.json'):
66 format = u'json'
67 elif fname.endswith(u'.py'):
68 format = u'py'
69 else:
70 fname = fname + u'.ipynb'
71 format = u'json'
72 name = fname.split('.')[0]
73 return fname, name, format
74
75
[end of IPython/nbformat/v3/__init__.py]
[start of IPython/nbformat/v3/convert.py]
1 """Code for converting notebooks to and from the v2 format.
2
3 Authors:
4
5 * Brian Granger
6 * Min RK
7 * Jonathan Frederic
8 """
9
10 #-----------------------------------------------------------------------------
11 # Copyright (C) 2008-2011 The IPython Development Team
12 #
13 # Distributed under the terms of the BSD License. The full license is in
14 # the file COPYING, distributed as part of this software.
15 #-----------------------------------------------------------------------------
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 from .nbbase import (
22 new_code_cell, new_text_cell, new_worksheet, new_notebook, new_output,
23 nbformat, nbformat_minor
24 )
25
26 from IPython.nbformat import v2
27
28 #-----------------------------------------------------------------------------
29 # Code
30 #-----------------------------------------------------------------------------
31
32 def upgrade(nb, from_version=2, from_minor=0):
33 """Convert a notebook to v3.
34
35 Parameters
36 ----------
37 nb : NotebookNode
38 The Python representation of the notebook to convert.
39 from_version : int
40 The original version of the notebook to convert.
41 from_minor : int
42 The original minor version of the notebook to convert (only relevant for v >= 3).
43 """
44 if from_version == 2:
45 # Mark the original nbformat so consumers know it has been converted.
46 nb.nbformat = nbformat
47 nb.nbformat_minor = nbformat_minor
48
49 nb.orig_nbformat = 2
50 return nb
51 elif from_version == 3:
52 if from_minor != nbformat_minor:
53 nb.orig_nbformat_minor = from_minor
54 nb.nbformat_minor = nbformat_minor
55 return nb
56 else:
57 raise ValueError('Cannot convert a notebook directly from v%s to v3. ' \
58 'Try using the IPython.nbformat.convert module.' % from_version)
59
60
61 def heading_to_md(cell):
62 """turn heading cell into corresponding markdown"""
63 cell.cell_type = "markdown"
64 level = cell.pop('level', 1)
65 cell.source = '#'*level + ' ' + cell.source
66
67
68 def raw_to_md(cell):
69 """let raw passthrough as markdown"""
70 cell.cell_type = "markdown"
71
72
73 def downgrade(nb):
74 """Convert a v3 notebook to v2.
75
76 Parameters
77 ----------
78 nb : NotebookNode
79 The Python representation of the notebook to convert.
80 """
81 if nb.nbformat != 3:
82 return nb
83 nb.nbformat = 2
84 for ws in nb.worksheets:
85 for cell in ws.cells:
86 if cell.cell_type == 'heading':
87 heading_to_md(cell)
88 elif cell.cell_type == 'raw':
89 raw_to_md(cell)
90 return nb
[end of IPython/nbformat/v3/convert.py]
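A tiny sketch of the heading handling used by `downgrade` above:

```python
from IPython.nbformat.v3.convert import heading_to_md
from IPython.nbformat.v3.nbbase import NotebookNode

cell = NotebookNode()
cell.cell_type = u'heading'
cell.level = 2
cell.source = u'Results'

heading_to_md(cell)
cell.cell_type   # u'markdown'
cell.source      # u'## Results', i.e. '#' * level prepended to the source
```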
[start of IPython/nbformat/v3/nbbase.py]
1 """The basic dict based notebook format.
2
3 The Python representation of a notebook is a nested structure of
4 dictionary subclasses that support attribute access
5 (IPython.utils.ipstruct.Struct). The functions in this module are merely
6 helpers to build the structs in the right form.
7
8 Authors:
9
10 * Brian Granger
11 """
12
13 #-----------------------------------------------------------------------------
14 # Copyright (C) 2008-2011 The IPython Development Team
15 #
16 # Distributed under the terms of the BSD License. The full license is in
17 # the file COPYING, distributed as part of this software.
18 #-----------------------------------------------------------------------------
19
20 #-----------------------------------------------------------------------------
21 # Imports
22 #-----------------------------------------------------------------------------
23
24 import pprint
25 import uuid
26
27 from IPython.utils.ipstruct import Struct
28 from IPython.utils.py3compat import cast_unicode, unicode_type
29
30 #-----------------------------------------------------------------------------
31 # Code
32 #-----------------------------------------------------------------------------
33
34 # Change this when incrementing the nbformat version
35 nbformat = 3
36 nbformat_minor = 0
37
38 class NotebookNode(Struct):
39 pass
40
41
42 def from_dict(d):
43 if isinstance(d, dict):
44 newd = NotebookNode()
45 for k,v in d.items():
46 newd[k] = from_dict(v)
47 return newd
48 elif isinstance(d, (tuple, list)):
49 return [from_dict(i) for i in d]
50 else:
51 return d
52
53
54 def new_output(output_type=None, output_text=None, output_png=None,
55 output_html=None, output_svg=None, output_latex=None, output_json=None,
56 output_javascript=None, output_jpeg=None, prompt_number=None,
57 ename=None, evalue=None, traceback=None, stream=None, metadata=None):
58     """Create a new output."""
59 output = NotebookNode()
60 if output_type is not None:
61 output.output_type = unicode_type(output_type)
62
63 if metadata is None:
64 metadata = {}
65 if not isinstance(metadata, dict):
66 raise TypeError("metadata must be dict")
67 output.metadata = metadata
68
69 if output_type != 'pyerr':
70 if output_text is not None:
71 output.text = cast_unicode(output_text)
72 if output_png is not None:
73 output.png = cast_unicode(output_png)
74 if output_jpeg is not None:
75 output.jpeg = cast_unicode(output_jpeg)
76 if output_html is not None:
77 output.html = cast_unicode(output_html)
78 if output_svg is not None:
79 output.svg = cast_unicode(output_svg)
80 if output_latex is not None:
81 output.latex = cast_unicode(output_latex)
82 if output_json is not None:
83 output.json = cast_unicode(output_json)
84 if output_javascript is not None:
85 output.javascript = cast_unicode(output_javascript)
86
87 if output_type == u'pyout':
88 if prompt_number is not None:
89 output.prompt_number = int(prompt_number)
90
91 if output_type == u'pyerr':
92 if ename is not None:
93 output.ename = cast_unicode(ename)
94 if evalue is not None:
95 output.evalue = cast_unicode(evalue)
96 if traceback is not None:
97 output.traceback = [cast_unicode(frame) for frame in list(traceback)]
98
99 if output_type == u'stream':
100 output.stream = 'stdout' if stream is None else cast_unicode(stream)
101
102 return output
103
104
105 def new_code_cell(input=None, prompt_number=None, outputs=None,
106 language=u'python', collapsed=False, metadata=None):
107 """Create a new code cell with input and output"""
108 cell = NotebookNode()
109 cell.cell_type = u'code'
110 if language is not None:
111 cell.language = cast_unicode(language)
112 if input is not None:
113 cell.input = cast_unicode(input)
114 if prompt_number is not None:
115 cell.prompt_number = int(prompt_number)
116 if outputs is None:
117 cell.outputs = []
118 else:
119 cell.outputs = outputs
120 if collapsed is not None:
121 cell.collapsed = bool(collapsed)
122 cell.metadata = NotebookNode(metadata or {})
123
124 return cell
125
126 def new_text_cell(cell_type, source=None, rendered=None, metadata=None):
127 """Create a new text cell."""
128 cell = NotebookNode()
129 # VERSIONHACK: plaintext -> raw
130 # handle never-released plaintext name for raw cells
131 if cell_type == 'plaintext':
132 cell_type = 'raw'
133 if source is not None:
134 cell.source = cast_unicode(source)
135 if rendered is not None:
136 cell.rendered = cast_unicode(rendered)
137 cell.metadata = NotebookNode(metadata or {})
138 cell.cell_type = cell_type
139 return cell
140
141
142 def new_heading_cell(source=None, rendered=None, level=1, metadata=None):
143 """Create a new section cell with a given integer level."""
144 cell = NotebookNode()
145 cell.cell_type = u'heading'
146 if source is not None:
147 cell.source = cast_unicode(source)
148 if rendered is not None:
149 cell.rendered = cast_unicode(rendered)
150 cell.level = int(level)
151 cell.metadata = NotebookNode(metadata or {})
152 return cell
153
154
155 def new_worksheet(name=None, cells=None, metadata=None):
156     """Create a worksheet by name with a list of cells."""
157 ws = NotebookNode()
158 if name is not None:
159 ws.name = cast_unicode(name)
160 if cells is None:
161 ws.cells = []
162 else:
163 ws.cells = list(cells)
164 ws.metadata = NotebookNode(metadata or {})
165 return ws
166
167
168 def new_notebook(name=None, metadata=None, worksheets=None):
169     """Create a notebook by name with metadata and a list of worksheets."""
170 nb = NotebookNode()
171 nb.nbformat = nbformat
172 nb.nbformat_minor = nbformat_minor
173 if worksheets is None:
174 nb.worksheets = []
175 else:
176 nb.worksheets = list(worksheets)
177 if metadata is None:
178 nb.metadata = new_metadata()
179 else:
180 nb.metadata = NotebookNode(metadata)
181 if name is not None:
182 nb.metadata.name = cast_unicode(name)
183 return nb
184
185
186 def new_metadata(name=None, authors=None, license=None, created=None,
187 modified=None, gistid=None):
188 """Create a new metadata node."""
189 metadata = NotebookNode()
190 if name is not None:
191 metadata.name = cast_unicode(name)
192 if authors is not None:
193 metadata.authors = list(authors)
194 if created is not None:
195 metadata.created = cast_unicode(created)
196 if modified is not None:
197 metadata.modified = cast_unicode(modified)
198 if license is not None:
199 metadata.license = cast_unicode(license)
200 if gistid is not None:
201 metadata.gistid = cast_unicode(gistid)
202 return metadata
203
204 def new_author(name=None, email=None, affiliation=None, url=None):
205 """Create a new author."""
206 author = NotebookNode()
207 if name is not None:
208 author.name = cast_unicode(name)
209 if email is not None:
210 author.email = cast_unicode(email)
211 if affiliation is not None:
212 author.affiliation = cast_unicode(affiliation)
213 if url is not None:
214 author.url = cast_unicode(url)
215 return author
216
217
[end of IPython/nbformat/v3/nbbase.py]
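A minimal sketch (not part of the original file) of how the helpers above compose into a v3 notebook; attribute access works because every node is a `Struct`-backed `NotebookNode`:

```python
# Hypothetical example: assemble a small v3 notebook from the helper constructors.
from IPython.nbformat.v3.nbbase import (
    new_code_cell, new_notebook, new_output, new_text_cell, new_worksheet,
)

out = new_output(output_type="stream", output_text="hello\n", stream="stdout")
code = new_code_cell(input="print('hello')", prompt_number=1, outputs=[out])
text = new_text_cell("markdown", source="# Demo")

nb = new_notebook(name="demo", worksheets=[new_worksheet(cells=[text, code])])

assert nb.nbformat == 3 and nb.metadata.name == "demo"
assert nb.worksheets[0].cells[1].outputs[0].text == "hello\n"
```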
[start of IPython/nbformat/v3/rwbase.py]
1 """Base classes and utilities for readers and writers.
2
3 Authors:
4
5 * Brian Granger
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2008-2011 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 from base64 import encodestring, decodestring
20 import pprint
21
22 from IPython.utils import py3compat
23 from IPython.utils.py3compat import str_to_bytes, unicode_type, string_types
24
25 #-----------------------------------------------------------------------------
26 # Code
27 #-----------------------------------------------------------------------------
28
29 def restore_bytes(nb):
30 """Restore bytes of image data from unicode-only formats.
31
32 Base64 encoding is handled elsewhere. Bytes objects in the notebook are
33 always b64-encoded. We DO NOT encode/decode around file formats.
34
35 Note: this is never used
36 """
37 for ws in nb.worksheets:
38 for cell in ws.cells:
39 if cell.cell_type == 'code':
40 for output in cell.outputs:
41 if 'png' in output:
42 output.png = str_to_bytes(output.png, 'ascii')
43 if 'jpeg' in output:
44 output.jpeg = str_to_bytes(output.jpeg, 'ascii')
45 return nb
46
47 # output keys that are likely to have multiline values
48 _multiline_outputs = ['text', 'html', 'svg', 'latex', 'javascript', 'json']
49
50
51 # FIXME: workaround for old splitlines()
52 def _join_lines(lines):
53 """join lines that have been written by splitlines()
54
55 Has logic to protect against `splitlines()`, which
56 should have been `splitlines(True)`
57 """
58 if lines and lines[0].endswith(('\n', '\r')):
59 # created by splitlines(True)
60 return u''.join(lines)
61 else:
62 # created by splitlines()
63 return u'\n'.join(lines)
64
65
66 def rejoin_lines(nb):
67 """rejoin multiline text into strings
68
69 For reversing effects of ``split_lines(nb)``.
70
71 This only rejoins lines that have been split, so if text objects were not split
72 they will pass through unchanged.
73
74 Used when reading JSON files that may have been passed through split_lines.
75 """
76 for ws in nb.worksheets:
77 for cell in ws.cells:
78 if cell.cell_type == 'code':
79 if 'input' in cell and isinstance(cell.input, list):
80 cell.input = _join_lines(cell.input)
81 for output in cell.outputs:
82 for key in _multiline_outputs:
83 item = output.get(key, None)
84 if isinstance(item, list):
85 output[key] = _join_lines(item)
86 else: # text, heading cell
87 for key in ['source', 'rendered']:
88 item = cell.get(key, None)
89 if isinstance(item, list):
90 cell[key] = _join_lines(item)
91 return nb
92
93
94 def split_lines(nb):
95 """split likely multiline text into lists of strings
96
97 For file output more friendly to line-based VCS. ``rejoin_lines(nb)`` will
98 reverse the effects of ``split_lines(nb)``.
99
100 Used when writing JSON files.
101 """
102 for ws in nb.worksheets:
103 for cell in ws.cells:
104 if cell.cell_type == 'code':
105 if 'input' in cell and isinstance(cell.input, string_types):
106 cell.input = cell.input.splitlines(True)
107 for output in cell.outputs:
108 for key in _multiline_outputs:
109 item = output.get(key, None)
110 if isinstance(item, string_types):
111 output[key] = item.splitlines(True)
112 else: # text, heading cell
113 for key in ['source', 'rendered']:
114 item = cell.get(key, None)
115 if isinstance(item, string_types):
116 cell[key] = item.splitlines(True)
117 return nb
118
119 # b64 encode/decode are never actually used, because all bytes objects in
120 # the notebook are already b64-encoded, and we don't need/want to double-encode
121
122 def base64_decode(nb):
123 """Restore all bytes objects in the notebook from base64-encoded strings.
124
125 Note: This is never used
126 """
127 for ws in nb.worksheets:
128 for cell in ws.cells:
129 if cell.cell_type == 'code':
130 for output in cell.outputs:
131 if 'png' in output:
132 if isinstance(output.png, unicode_type):
133 output.png = output.png.encode('ascii')
134 output.png = decodestring(output.png)
135 if 'jpeg' in output:
136 if isinstance(output.jpeg, unicode_type):
137 output.jpeg = output.jpeg.encode('ascii')
138 output.jpeg = decodestring(output.jpeg)
139 return nb
140
141
142 def base64_encode(nb):
143 """Base64 encode all bytes objects in the notebook.
144
145 These will be b64-encoded unicode strings
146
147 Note: This is never used
148 """
149 for ws in nb.worksheets:
150 for cell in ws.cells:
151 if cell.cell_type == 'code':
152 for output in cell.outputs:
153 if 'png' in output:
154 output.png = encodestring(output.png).decode('ascii')
155 if 'jpeg' in output:
156 output.jpeg = encodestring(output.jpeg).decode('ascii')
157 return nb
158
159
160 class NotebookReader(object):
161 """A class for reading notebooks."""
162
163 def reads(self, s, **kwargs):
164 """Read a notebook from a string."""
165         raise NotImplementedError("reads must be implemented in a subclass")
166
167 def read(self, fp, **kwargs):
168 """Read a notebook from a file like object"""
169 nbs = fp.read()
170 if not py3compat.PY3 and not isinstance(nbs, unicode_type):
171 nbs = py3compat.str_to_unicode(nbs)
172 return self.reads(nbs, **kwargs)
173
174
175 class NotebookWriter(object):
176 """A class for writing notebooks."""
177
178 def writes(self, nb, **kwargs):
179 """Write a notebook to a string."""
180         raise NotImplementedError("writes must be implemented in a subclass")
181
182 def write(self, nb, fp, **kwargs):
183 """Write a notebook to a file like object"""
184 nbs = self.writes(nb,**kwargs)
185 if not py3compat.PY3 and not isinstance(nbs, unicode_type):
186 # this branch is likely only taken for JSON on Python 2
187 nbs = py3compat.str_to_unicode(nbs)
188 return fp.write(nbs)
189
190
191
192
[end of IPython/nbformat/v3/rwbase.py]
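A short sketch (not part of the original file) of the `split_lines`/`rejoin_lines` round trip described in the docstrings above, which keeps JSON output friendlier to line-based VCS:

```python
# Hypothetical round-trip: multiline strings become lists of lines and back.
from IPython.nbformat.v3.nbbase import new_code_cell, new_notebook, new_worksheet
from IPython.nbformat.v3.rwbase import rejoin_lines, split_lines

cell = new_code_cell(input="a = 1\nb = 2\n")
nb = new_notebook(worksheets=[new_worksheet(cells=[cell])])

split_lines(nb)
assert nb.worksheets[0].cells[0].input == ["a = 1\n", "b = 2\n"]

rejoin_lines(nb)
assert nb.worksheets[0].cells[0].input == "a = 1\nb = 2\n"
```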
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
a711e50f99398357504fabca16750cf331e12927
|
remove `Configurable.created` ?
What is the point of `Configurable.created`? It does not seem to be used anywhere:
```
$ ack '\.created' **/*.py
IPython/config/configurable.py 105: self.created = datetime.datetime.now()
IPython/nbformat/v2/nbbase.py 159: metadata.created = unicode_type(created)
IPython/nbformat/v2/tests/test_nbbase.py 109: self.assertEqual(md.created, u'today')
IPython/nbformat/v3/nbbase.py 195: metadata.created = cast_unicode(created)
IPython/nbformat/v3/tests/test_nbbase.py 139: self.assertEqual(md.created, u'today')
```
|
2013-11-29T23:44:47Z
|
<patch>
diff --git a/IPython/config/configurable.py b/IPython/config/configurable.py
--- a/IPython/config/configurable.py
+++ b/IPython/config/configurable.py
@@ -26,7 +26,6 @@
# Imports
#-----------------------------------------------------------------------------
-import datetime
from copy import deepcopy
from .loader import Config, LazyConfigValue
@@ -55,7 +54,6 @@ class Configurable(HasTraits):
config = Instance(Config, (), {})
parent = Instance('IPython.config.configurable.Configurable')
- created = None
def __init__(self, **kwargs):
"""Create a configurable given a config config.
@@ -102,7 +100,6 @@ def __init__(self, config=None):
# This should go second so individual keyword arguments override
# the values in config.
super(Configurable, self).__init__(**kwargs)
- self.created = datetime.datetime.now()
#-------------------------------------------------------------------------
# Static trait notifiations
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-36654
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API/DEPR: casting bool-indexer to slice in dt64/td64/period
We use special logic for dt64/td64/period dtypes that makes view/copy behavior different from other dtypes:
```
dti = pd.date_range("2016-01-01", periods=4, tz="US/Pacific")
key = np.array([True, True, False, False])
ser1 = pd.Series(dti._data)
ser2 = pd.Series(range(4))
res1 = ser1[key]
res2 = ser2[key]
>>> res1._values._data.base is None
False
>>> res2._values.base is None
True
```
cc @jorisvandenbossche IIRC you advocated not doing this special casing.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/core/arrays/timedeltas.py]
1 from datetime import timedelta
2 from typing import List, Optional, Union
3
4 import numpy as np
5
6 from pandas._libs import lib, tslibs
7 from pandas._libs.tslibs import (
8 BaseOffset,
9 NaT,
10 NaTType,
11 Period,
12 Tick,
13 Timedelta,
14 Timestamp,
15 iNaT,
16 to_offset,
17 )
18 from pandas._libs.tslibs.conversion import precision_from_unit
19 from pandas._libs.tslibs.fields import get_timedelta_field
20 from pandas._libs.tslibs.timedeltas import array_to_timedelta64, parse_timedelta_unit
21 from pandas.compat.numpy import function as nv
22
23 from pandas.core.dtypes.common import (
24 DT64NS_DTYPE,
25 TD64NS_DTYPE,
26 is_categorical_dtype,
27 is_dtype_equal,
28 is_float_dtype,
29 is_integer_dtype,
30 is_object_dtype,
31 is_scalar,
32 is_string_dtype,
33 is_timedelta64_dtype,
34 is_timedelta64_ns_dtype,
35 pandas_dtype,
36 )
37 from pandas.core.dtypes.dtypes import DatetimeTZDtype
38 from pandas.core.dtypes.generic import ABCSeries, ABCTimedeltaIndex
39 from pandas.core.dtypes.missing import isna
40
41 from pandas.core import nanops
42 from pandas.core.algorithms import checked_add_with_arr
43 from pandas.core.arrays import IntegerArray, datetimelike as dtl
44 from pandas.core.arrays._ranges import generate_regular_range
45 import pandas.core.common as com
46 from pandas.core.construction import extract_array
47 from pandas.core.ops.common import unpack_zerodim_and_defer
48
49
50 def _field_accessor(name: str, alias: str, docstring: str):
51 def f(self) -> np.ndarray:
52 values = self.asi8
53 result = get_timedelta_field(values, alias)
54 if self._hasnans:
55 result = self._maybe_mask_results(
56 result, fill_value=None, convert="float64"
57 )
58
59 return result
60
61 f.__name__ = name
62 f.__doc__ = f"\n{docstring}\n"
63 return property(f)
64
65
66 class TimedeltaArray(dtl.TimelikeOps):
67 """
68 Pandas ExtensionArray for timedelta data.
69
70 .. versionadded:: 0.24.0
71
72 .. warning::
73
74 TimedeltaArray is currently experimental, and its API may change
75 without warning. In particular, :attr:`TimedeltaArray.dtype` is
76 expected to change to be an instance of an ``ExtensionDtype``
77 subclass.
78
79 Parameters
80 ----------
81 values : array-like
82 The timedelta data.
83
84 dtype : numpy.dtype
85 Currently, only ``numpy.dtype("timedelta64[ns]")`` is accepted.
86 freq : Offset, optional
87 copy : bool, default False
88 Whether to copy the underlying array of data.
89
90 Attributes
91 ----------
92 None
93
94 Methods
95 -------
96 None
97 """
98
99 _typ = "timedeltaarray"
100 _scalar_type = Timedelta
101 _recognized_scalars = (timedelta, np.timedelta64, Tick)
102 _is_recognized_dtype = is_timedelta64_dtype
103
104 __array_priority__ = 1000
105 # define my properties & methods for delegation
106 _other_ops: List[str] = []
107 _bool_ops: List[str] = []
108 _object_ops = ["freq"]
109 _field_ops = ["days", "seconds", "microseconds", "nanoseconds"]
110 _datetimelike_ops = _field_ops + _object_ops + _bool_ops
111 _datetimelike_methods = [
112 "to_pytimedelta",
113 "total_seconds",
114 "round",
115 "floor",
116 "ceil",
117 ]
118
119 # Note: ndim must be defined to ensure NaT.__richcmp(TimedeltaArray)
120 # operates pointwise.
121
122 def _box_func(self, x) -> Union[Timedelta, NaTType]:
123 return Timedelta(x, unit="ns")
124
125 @property
126 def dtype(self) -> np.dtype:
127 """
128 The dtype for the TimedeltaArray.
129
130 .. warning::
131
132 A future version of pandas will change dtype to be an instance
133 of a :class:`pandas.api.extensions.ExtensionDtype` subclass,
134 not a ``numpy.dtype``.
135
136 Returns
137 -------
138 numpy.dtype
139 """
140 return TD64NS_DTYPE
141
142 # ----------------------------------------------------------------
143 # Constructors
144
145 def __init__(self, values, dtype=TD64NS_DTYPE, freq=lib.no_default, copy=False):
146 values = extract_array(values)
147
148 inferred_freq = getattr(values, "_freq", None)
149 explicit_none = freq is None
150 freq = freq if freq is not lib.no_default else None
151
152 if isinstance(values, type(self)):
153 if explicit_none:
154 # dont inherit from values
155 pass
156 elif freq is None:
157 freq = values.freq
158 elif freq and values.freq:
159 freq = to_offset(freq)
160 freq, _ = dtl.validate_inferred_freq(freq, values.freq, False)
161 values = values._data
162
163 if not isinstance(values, np.ndarray):
164 msg = (
165 f"Unexpected type '{type(values).__name__}'. 'values' must be a "
166 "TimedeltaArray ndarray, or Series or Index containing one of those."
167 )
168 raise ValueError(msg)
169 if values.ndim not in [1, 2]:
170 raise ValueError("Only 1-dimensional input arrays are supported.")
171
172 if values.dtype == "i8":
173 # for compat with datetime/timedelta/period shared methods,
174 # we can sometimes get here with int64 values. These represent
175 # nanosecond UTC (or tz-naive) unix timestamps
176 values = values.view(TD64NS_DTYPE)
177
178 _validate_td64_dtype(values.dtype)
179 dtype = _validate_td64_dtype(dtype)
180
181 if freq == "infer":
182 msg = (
183 "Frequency inference not allowed in TimedeltaArray.__init__. "
184 "Use 'pd.array()' instead."
185 )
186 raise ValueError(msg)
187
188 if copy:
189 values = values.copy()
190 if freq:
191 freq = to_offset(freq)
192
193 self._data = values
194 self._dtype = dtype
195 self._freq = freq
196
197 if inferred_freq is None and freq is not None:
198 type(self)._validate_frequency(self, freq)
199
200 @classmethod
201 def _simple_new(
202 cls, values, freq: Optional[BaseOffset] = None, dtype=TD64NS_DTYPE
203 ) -> "TimedeltaArray":
204 assert dtype == TD64NS_DTYPE, dtype
205 assert isinstance(values, np.ndarray), type(values)
206 if values.dtype != TD64NS_DTYPE:
207 assert values.dtype == "i8"
208 values = values.view(TD64NS_DTYPE)
209
210 result = object.__new__(cls)
211 result._data = values
212 result._freq = to_offset(freq)
213 result._dtype = TD64NS_DTYPE
214 return result
215
216 @classmethod
217 def _from_sequence(
218 cls, data, dtype=TD64NS_DTYPE, copy: bool = False
219 ) -> "TimedeltaArray":
220 if dtype:
221 _validate_td64_dtype(dtype)
222
223 data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=None)
224 freq, _ = dtl.validate_inferred_freq(None, inferred_freq, False)
225
226 result = cls._simple_new(data, freq=freq)
227 return result
228
229 @classmethod
230 def _from_sequence_not_strict(
231 cls,
232 data,
233 dtype=TD64NS_DTYPE,
234 copy: bool = False,
235 freq=lib.no_default,
236 unit=None,
237 ) -> "TimedeltaArray":
238 if dtype:
239 _validate_td64_dtype(dtype)
240
241 explicit_none = freq is None
242 freq = freq if freq is not lib.no_default else None
243
244 freq, freq_infer = dtl.maybe_infer_freq(freq)
245
246 data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)
247 freq, freq_infer = dtl.validate_inferred_freq(freq, inferred_freq, freq_infer)
248 if explicit_none:
249 freq = None
250
251 result = cls._simple_new(data, freq=freq)
252
253 if inferred_freq is None and freq is not None:
254 # this condition precludes `freq_infer`
255 cls._validate_frequency(result, freq)
256
257 elif freq_infer:
258 # Set _freq directly to bypass duplicative _validate_frequency
259 # check.
260 result._freq = to_offset(result.inferred_freq)
261
262 return result
263
264 @classmethod
265 def _generate_range(cls, start, end, periods, freq, closed=None):
266
267 periods = dtl.validate_periods(periods)
268 if freq is None and any(x is None for x in [periods, start, end]):
269 raise ValueError("Must provide freq argument if no data is supplied")
270
271 if com.count_not_none(start, end, periods, freq) != 3:
272 raise ValueError(
273 "Of the four parameters: start, end, periods, "
274 "and freq, exactly three must be specified"
275 )
276
277 if start is not None:
278 start = Timedelta(start)
279
280 if end is not None:
281 end = Timedelta(end)
282
283 left_closed, right_closed = dtl.validate_endpoints(closed)
284
285 if freq is not None:
286 index = generate_regular_range(start, end, periods, freq)
287 else:
288 index = np.linspace(start.value, end.value, periods).astype("i8")
289
290 if not left_closed:
291 index = index[1:]
292 if not right_closed:
293 index = index[:-1]
294
295 return cls._simple_new(index, freq=freq)
296
297 # ----------------------------------------------------------------
298 # DatetimeLike Interface
299
300 @classmethod
301 def _rebox_native(cls, value: int) -> np.timedelta64:
302 return np.int64(value).view("m8[ns]")
303
304 def _unbox_scalar(self, value, setitem: bool = False):
305 if not isinstance(value, self._scalar_type) and value is not NaT:
306 raise ValueError("'value' should be a Timedelta.")
307 self._check_compatible_with(value, setitem=setitem)
308 return value.value
309
310 def _scalar_from_string(self, value):
311 return Timedelta(value)
312
313 def _check_compatible_with(self, other, setitem: bool = False):
314 # we don't have anything to validate.
315 pass
316
317 def _maybe_clear_freq(self):
318 self._freq = None
319
320 # ----------------------------------------------------------------
321 # Array-Like / EA-Interface Methods
322
323 def astype(self, dtype, copy: bool = True):
324 # We handle
325 # --> timedelta64[ns]
326 # --> timedelta64
327 # DatetimeLikeArrayMixin super call handles other cases
328 dtype = pandas_dtype(dtype)
329
330 if is_timedelta64_dtype(dtype) and not is_timedelta64_ns_dtype(dtype):
331 # by pandas convention, converting to non-nano timedelta64
332 # returns an int64-dtyped array with ints representing multiples
333 # of the desired timedelta unit. This is essentially division
334 if self._hasnans:
335 # avoid double-copying
336 result = self._data.astype(dtype, copy=False)
337 values = self._maybe_mask_results(
338 result, fill_value=None, convert="float64"
339 )
340 return values
341 result = self._data.astype(dtype, copy=copy)
342 return result.astype("i8")
343 elif is_timedelta64_ns_dtype(dtype):
344 if copy:
345 return self.copy()
346 return self
347 return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy=copy)
348
349 # ----------------------------------------------------------------
350 # Reductions
351
352 def sum(
353 self,
354 axis=None,
355 dtype=None,
356 out=None,
357 keepdims: bool = False,
358 initial=None,
359 skipna: bool = True,
360 min_count: int = 0,
361 ):
362 nv.validate_sum(
363 (), dict(dtype=dtype, out=out, keepdims=keepdims, initial=initial)
364 )
365 if not len(self):
366 return NaT
367 if not skipna and self._hasnans:
368 return NaT
369
370 result = nanops.nansum(
371 self._data, axis=axis, skipna=skipna, min_count=min_count
372 )
373 return Timedelta(result)
374
375 def std(
376 self,
377 axis=None,
378 dtype=None,
379 out=None,
380 ddof: int = 1,
381 keepdims: bool = False,
382 skipna: bool = True,
383 ):
384 nv.validate_stat_ddof_func(
385 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="std"
386 )
387 if not len(self):
388 return NaT
389 if not skipna and self._hasnans:
390 return NaT
391
392 result = nanops.nanstd(self._data, axis=axis, skipna=skipna, ddof=ddof)
393 return Timedelta(result)
394
395 # ----------------------------------------------------------------
396 # Rendering Methods
397
398 def _formatter(self, boxed=False):
399 from pandas.io.formats.format import get_format_timedelta64
400
401 return get_format_timedelta64(self, box=True)
402
403 def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
404 from pandas.io.formats.format import get_format_timedelta64
405
406 formatter = get_format_timedelta64(self._data, na_rep)
407 return np.array([formatter(x) for x in self._data.ravel()]).reshape(self.shape)
408
409 # ----------------------------------------------------------------
410 # Arithmetic Methods
411
412 def _add_offset(self, other):
413 assert not isinstance(other, Tick)
414 raise TypeError(
415 f"cannot add the type {type(other).__name__} to a {type(self).__name__}"
416 )
417
418 def _add_period(self, other: Period):
419 """
420 Add a Period object.
421 """
422 # We will wrap in a PeriodArray and defer to the reversed operation
423 from .period import PeriodArray
424
425 i8vals = np.broadcast_to(other.ordinal, self.shape)
426 oth = PeriodArray(i8vals, freq=other.freq)
427 return oth + self
428
429 def _add_datetime_arraylike(self, other):
430 """
431 Add DatetimeArray/Index or ndarray[datetime64] to TimedeltaArray.
432 """
433 if isinstance(other, np.ndarray):
434 # At this point we have already checked that dtype is datetime64
435 from pandas.core.arrays import DatetimeArray
436
437 other = DatetimeArray(other)
438
439 # defer to implementation in DatetimeArray
440 return other + self
441
442 def _add_datetimelike_scalar(self, other):
443 # adding a timedeltaindex to a datetimelike
444 from pandas.core.arrays import DatetimeArray
445
446 assert other is not NaT
447 other = Timestamp(other)
448 if other is NaT:
449 # In this case we specifically interpret NaT as a datetime, not
450 # the timedelta interpretation we would get by returning self + NaT
451 result = self.asi8.view("m8[ms]") + NaT.to_datetime64()
452 return DatetimeArray(result)
453
454 i8 = self.asi8
455 result = checked_add_with_arr(i8, other.value, arr_mask=self._isnan)
456 result = self._maybe_mask_results(result)
457 dtype = DatetimeTZDtype(tz=other.tz) if other.tz else DT64NS_DTYPE
458 return DatetimeArray(result, dtype=dtype, freq=self.freq)
459
460 def _addsub_object_array(self, other, op):
461 # Add or subtract Array-like of objects
462 try:
463 # TimedeltaIndex can only operate with a subset of DateOffset
464 # subclasses. Incompatible classes will raise AttributeError,
465 # which we re-raise as TypeError
466 return super()._addsub_object_array(other, op)
467 except AttributeError as err:
468 raise TypeError(
469 f"Cannot add/subtract non-tick DateOffset to {type(self).__name__}"
470 ) from err
471
472 @unpack_zerodim_and_defer("__mul__")
473 def __mul__(self, other) -> "TimedeltaArray":
474 if is_scalar(other):
475 # numpy will accept float and int, raise TypeError for others
476 result = self._data * other
477 freq = None
478 if self.freq is not None and not isna(other):
479 freq = self.freq * other
480 return type(self)(result, freq=freq)
481
482 if not hasattr(other, "dtype"):
483 # list, tuple
484 other = np.array(other)
485 if len(other) != len(self) and not is_timedelta64_dtype(other.dtype):
486 # Exclude timedelta64 here so we correctly raise TypeError
487 # for that instead of ValueError
488 raise ValueError("Cannot multiply with unequal lengths")
489
490 if is_object_dtype(other.dtype):
491 # this multiplication will succeed only if all elements of other
492 # are int or float scalars, so we will end up with
493 # timedelta64[ns]-dtyped result
494 result = [self[n] * other[n] for n in range(len(self))]
495 result = np.array(result)
496 return type(self)(result)
497
498 # numpy will accept float or int dtype, raise TypeError for others
499 result = self._data * other
500 return type(self)(result)
501
502 __rmul__ = __mul__
503
504 @unpack_zerodim_and_defer("__truediv__")
505 def __truediv__(self, other):
506 # timedelta / X is well-defined for timedelta-like or numeric X
507
508 if isinstance(other, (timedelta, np.timedelta64, Tick)):
509 other = Timedelta(other)
510 if other is NaT:
511 # specifically timedelta64-NaT
512 result = np.empty(self.shape, dtype=np.float64)
513 result.fill(np.nan)
514 return result
515
516 # otherwise, dispatch to Timedelta implementation
517 return self._data / other
518
519 elif lib.is_scalar(other):
520 # assume it is numeric
521 result = self._data / other
522 freq = None
523 if self.freq is not None:
524 # Tick division is not implemented, so operate on Timedelta
525 freq = self.freq.delta / other
526 return type(self)(result, freq=freq)
527
528 if not hasattr(other, "dtype"):
529 # e.g. list, tuple
530 other = np.array(other)
531
532 if len(other) != len(self):
533 raise ValueError("Cannot divide vectors with unequal lengths")
534
535 elif is_timedelta64_dtype(other.dtype):
536 # let numpy handle it
537 return self._data / other
538
539 elif is_object_dtype(other.dtype):
540 # We operate on raveled arrays to avoid problems in inference
541 # on NaT
542 srav = self.ravel()
543 orav = other.ravel()
544 result = [srav[n] / orav[n] for n in range(len(srav))]
545 result = np.array(result).reshape(self.shape)
546
547 # We need to do dtype inference in order to keep DataFrame ops
548 # behavior consistent with Series behavior
549 inferred = lib.infer_dtype(result)
550 if inferred == "timedelta":
551 flat = result.ravel()
552 result = type(self)._from_sequence(flat).reshape(result.shape)
553 elif inferred == "floating":
554 result = result.astype(float)
555
556 return result
557
558 else:
559 result = self._data / other
560 return type(self)(result)
561
562 @unpack_zerodim_and_defer("__rtruediv__")
563 def __rtruediv__(self, other):
564 # X / timedelta is defined only for timedelta-like X
565 if isinstance(other, (timedelta, np.timedelta64, Tick)):
566 other = Timedelta(other)
567 if other is NaT:
568 # specifically timedelta64-NaT
569 result = np.empty(self.shape, dtype=np.float64)
570 result.fill(np.nan)
571 return result
572
573 # otherwise, dispatch to Timedelta implementation
574 return other / self._data
575
576 elif lib.is_scalar(other):
577 raise TypeError(
578 f"Cannot divide {type(other).__name__} by {type(self).__name__}"
579 )
580
581 if not hasattr(other, "dtype"):
582 # e.g. list, tuple
583 other = np.array(other)
584
585 if len(other) != len(self):
586 raise ValueError("Cannot divide vectors with unequal lengths")
587
588 elif is_timedelta64_dtype(other.dtype):
589 # let numpy handle it
590 return other / self._data
591
592 elif is_object_dtype(other.dtype):
593 # Note: unlike in __truediv__, we do not _need_ to do type
594 # inference on the result. It does not raise, a numeric array
595 # is returned. GH#23829
596 result = [other[n] / self[n] for n in range(len(self))]
597 return np.array(result)
598
599 else:
600 raise TypeError(
601 f"Cannot divide {other.dtype} data by {type(self).__name__}"
602 )
603
604 @unpack_zerodim_and_defer("__floordiv__")
605 def __floordiv__(self, other):
606
607 if is_scalar(other):
608 if isinstance(other, (timedelta, np.timedelta64, Tick)):
609 other = Timedelta(other)
610 if other is NaT:
611 # treat this specifically as timedelta-NaT
612 result = np.empty(self.shape, dtype=np.float64)
613 result.fill(np.nan)
614 return result
615
616 # dispatch to Timedelta implementation
617 result = other.__rfloordiv__(self._data)
618 return result
619
620 # at this point we should only have numeric scalars; anything
621 # else will raise
622 result = self.asi8 // other
623 result[self._isnan] = iNaT
624 freq = None
625 if self.freq is not None:
626 # Note: freq gets division, not floor-division
627 freq = self.freq / other
628 if freq.nanos == 0 and self.freq.nanos != 0:
629 # e.g. if self.freq is Nano(1) then dividing by 2
630 # rounds down to zero
631 freq = None
632 return type(self)(result.view("m8[ns]"), freq=freq)
633
634 if not hasattr(other, "dtype"):
635 # list, tuple
636 other = np.array(other)
637 if len(other) != len(self):
638 raise ValueError("Cannot divide with unequal lengths")
639
640 elif is_timedelta64_dtype(other.dtype):
641 other = type(self)(other)
642
643 # numpy timedelta64 does not natively support floordiv, so operate
644 # on the i8 values
645 result = self.asi8 // other.asi8
646 mask = self._isnan | other._isnan
647 if mask.any():
648 result = result.astype(np.float64)
649 result[mask] = np.nan
650 return result
651
652 elif is_object_dtype(other.dtype):
653 result = [self[n] // other[n] for n in range(len(self))]
654 result = np.array(result)
655 if lib.infer_dtype(result, skipna=False) == "timedelta":
656 result, _ = sequence_to_td64ns(result)
657 return type(self)(result)
658 return result
659
660 elif is_integer_dtype(other.dtype) or is_float_dtype(other.dtype):
661 result = self._data // other
662 return type(self)(result)
663
664 else:
665 dtype = getattr(other, "dtype", type(other).__name__)
666 raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}")
667
668 @unpack_zerodim_and_defer("__rfloordiv__")
669 def __rfloordiv__(self, other):
670
671 if is_scalar(other):
672 if isinstance(other, (timedelta, np.timedelta64, Tick)):
673 other = Timedelta(other)
674 if other is NaT:
675 # treat this specifically as timedelta-NaT
676 result = np.empty(self.shape, dtype=np.float64)
677 result.fill(np.nan)
678 return result
679
680 # dispatch to Timedelta implementation
681 result = other.__floordiv__(self._data)
682 return result
683
684 raise TypeError(
685 f"Cannot divide {type(other).__name__} by {type(self).__name__}"
686 )
687
688 if not hasattr(other, "dtype"):
689 # list, tuple
690 other = np.array(other)
691
692 if len(other) != len(self):
693 raise ValueError("Cannot divide with unequal lengths")
694
695 elif is_timedelta64_dtype(other.dtype):
696 other = type(self)(other)
697 # numpy timedelta64 does not natively support floordiv, so operate
698 # on the i8 values
699 result = other.asi8 // self.asi8
700 mask = self._isnan | other._isnan
701 if mask.any():
702 result = result.astype(np.float64)
703 result[mask] = np.nan
704 return result
705
706 elif is_object_dtype(other.dtype):
707 result = [other[n] // self[n] for n in range(len(self))]
708 result = np.array(result)
709 return result
710
711 else:
712 dtype = getattr(other, "dtype", type(other).__name__)
713 raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}")
714
715 @unpack_zerodim_and_defer("__mod__")
716 def __mod__(self, other):
717 # Note: This is a naive implementation, can likely be optimized
718 if isinstance(other, (timedelta, np.timedelta64, Tick)):
719 other = Timedelta(other)
720 return self - (self // other) * other
721
722 @unpack_zerodim_and_defer("__rmod__")
723 def __rmod__(self, other):
724 # Note: This is a naive implementation, can likely be optimized
725 if isinstance(other, (timedelta, np.timedelta64, Tick)):
726 other = Timedelta(other)
727 return other - (other // self) * self
728
729 @unpack_zerodim_and_defer("__divmod__")
730 def __divmod__(self, other):
731 # Note: This is a naive implementation, can likely be optimized
732 if isinstance(other, (timedelta, np.timedelta64, Tick)):
733 other = Timedelta(other)
734
735 res1 = self // other
736 res2 = self - res1 * other
737 return res1, res2
738
739 @unpack_zerodim_and_defer("__rdivmod__")
740 def __rdivmod__(self, other):
741 # Note: This is a naive implementation, can likely be optimized
742 if isinstance(other, (timedelta, np.timedelta64, Tick)):
743 other = Timedelta(other)
744
745 res1 = other // self
746 res2 = other - res1 * self
747 return res1, res2
748
749 def __neg__(self) -> "TimedeltaArray":
750 if self.freq is not None:
751 return type(self)(-self._data, freq=-self.freq)
752 return type(self)(-self._data)
753
754 def __pos__(self) -> "TimedeltaArray":
755 return type(self)(self._data, freq=self.freq)
756
757 def __abs__(self) -> "TimedeltaArray":
758 # Note: freq is not preserved
759 return type(self)(np.abs(self._data))
760
761 # ----------------------------------------------------------------
762 # Conversion Methods - Vectorized analogues of Timedelta methods
763
764 def total_seconds(self) -> np.ndarray:
765 """
766 Return total duration of each element expressed in seconds.
767
768 This method is available directly on TimedeltaArray, TimedeltaIndex
769 and on Series containing timedelta values under the ``.dt`` namespace.
770
771 Returns
772 -------
773 seconds : [ndarray, Float64Index, Series]
774 When the calling object is a TimedeltaArray, the return type
775 is ndarray. When the calling object is a TimedeltaIndex,
776 the return type is a Float64Index. When the calling object
777 is a Series, the return type is Series of type `float64` whose
778 index is the same as the original.
779
780 See Also
781 --------
782 datetime.timedelta.total_seconds : Standard library version
783 of this method.
784 TimedeltaIndex.components : Return a DataFrame with components of
785 each Timedelta.
786
787 Examples
788 --------
789 **Series**
790
791 >>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d'))
792 >>> s
793 0 0 days
794 1 1 days
795 2 2 days
796 3 3 days
797 4 4 days
798 dtype: timedelta64[ns]
799
800 >>> s.dt.total_seconds()
801 0 0.0
802 1 86400.0
803 2 172800.0
804 3 259200.0
805 4 345600.0
806 dtype: float64
807
808 **TimedeltaIndex**
809
810 >>> idx = pd.to_timedelta(np.arange(5), unit='d')
811 >>> idx
812 TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
813 dtype='timedelta64[ns]', freq=None)
814
815 >>> idx.total_seconds()
816 Float64Index([0.0, 86400.0, 172800.0, 259200.00000000003, 345600.0],
817 dtype='float64')
818 """
819 return self._maybe_mask_results(1e-9 * self.asi8, fill_value=None)
820
821 def to_pytimedelta(self) -> np.ndarray:
822 """
823 Return Timedelta Array/Index as object ndarray of datetime.timedelta
824 objects.
825
826 Returns
827 -------
828         timedeltas : ndarray
829 """
830 return tslibs.ints_to_pytimedelta(self.asi8)
831
832 days = _field_accessor("days", "days", "Number of days for each element.")
833 seconds = _field_accessor(
834 "seconds",
835 "seconds",
836 "Number of seconds (>= 0 and less than 1 day) for each element.",
837 )
838 microseconds = _field_accessor(
839 "microseconds",
840 "microseconds",
841 "Number of microseconds (>= 0 and less than 1 second) for each element.",
842 )
843 nanoseconds = _field_accessor(
844 "nanoseconds",
845 "nanoseconds",
846 "Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.",
847 )
848
849 @property
850 def components(self):
851 """
852 Return a dataframe of the components (days, hours, minutes,
853 seconds, milliseconds, microseconds, nanoseconds) of the Timedeltas.
854
855 Returns
856 -------
857 a DataFrame
858 """
859 from pandas import DataFrame
860
861 columns = [
862 "days",
863 "hours",
864 "minutes",
865 "seconds",
866 "milliseconds",
867 "microseconds",
868 "nanoseconds",
869 ]
870 hasnans = self._hasnans
871 if hasnans:
872
873 def f(x):
874 if isna(x):
875 return [np.nan] * len(columns)
876 return x.components
877
878 else:
879
880 def f(x):
881 return x.components
882
883 result = DataFrame([f(x) for x in self], columns=columns)
884 if not hasnans:
885 result = result.astype("int64")
886 return result
887
888
889 # ---------------------------------------------------------------------
890 # Constructor Helpers
891
892
893 def sequence_to_td64ns(data, copy=False, unit=None, errors="raise"):
894 """
895 Parameters
896 ----------
897 data : list-like
898 copy : bool, default False
899 unit : str, optional
900 The timedelta unit to treat integers as multiples of. For numeric
901 data this defaults to ``'ns'``.
902 Must be un-specified if the data contains a str and ``errors=="raise"``.
903 errors : {"raise", "coerce", "ignore"}, default "raise"
904 How to handle elements that cannot be converted to timedelta64[ns].
905 See ``pandas.to_timedelta`` for details.
906
907 Returns
908 -------
909 converted : numpy.ndarray
910 The sequence converted to a numpy array with dtype ``timedelta64[ns]``.
911 inferred_freq : Tick or None
912 The inferred frequency of the sequence.
913
914 Raises
915 ------
916 ValueError : Data cannot be converted to timedelta64[ns].
917
918 Notes
919 -----
920     Unlike `pandas.to_timedelta`, setting ``errors=ignore`` will not cause
921 errors to be ignored; they are caught and subsequently ignored at a
922 higher level.
923 """
924 inferred_freq = None
925 if unit is not None:
926 unit = parse_timedelta_unit(unit)
927
928 # Unwrap whatever we have into a np.ndarray
929 if not hasattr(data, "dtype"):
930 # e.g. list, tuple
931 if np.ndim(data) == 0:
932 # i.e. generator
933 data = list(data)
934 data = np.array(data, copy=False)
935 elif isinstance(data, ABCSeries):
936 data = data._values
937 elif isinstance(data, (ABCTimedeltaIndex, TimedeltaArray)):
938 inferred_freq = data.freq
939 data = data._data
940 elif isinstance(data, IntegerArray):
941 data = data.to_numpy("int64", na_value=tslibs.iNaT)
942 elif is_categorical_dtype(data.dtype):
943 data = data.categories.take(data.codes, fill_value=NaT)._values
944 copy = False
945
946 # Convert whatever we have into timedelta64[ns] dtype
947 if is_object_dtype(data.dtype) or is_string_dtype(data.dtype):
948 # no need to make a copy, need to convert if string-dtyped
949 data = objects_to_td64ns(data, unit=unit, errors=errors)
950 copy = False
951
952 elif is_integer_dtype(data.dtype):
953 # treat as multiples of the given unit
954 data, copy_made = ints_to_td64ns(data, unit=unit)
955 copy = copy and not copy_made
956
957 elif is_float_dtype(data.dtype):
958 # cast the unit, multiply base/frac separately
959 # to avoid precision issues from float -> int
960 mask = np.isnan(data)
961 m, p = precision_from_unit(unit or "ns")
962 base = data.astype(np.int64)
963 frac = data - base
964 if p:
965 frac = np.round(frac, p)
966 data = (base * m + (frac * m).astype(np.int64)).view("timedelta64[ns]")
967 data[mask] = iNaT
968 copy = False
969
970 elif is_timedelta64_dtype(data.dtype):
971 if data.dtype != TD64NS_DTYPE:
972 # non-nano unit
973 # TODO: watch out for overflows
974 data = data.astype(TD64NS_DTYPE)
975 copy = False
976
977 else:
978 # This includes datetime64-dtype, see GH#23539, GH#29794
979 raise TypeError(f"dtype {data.dtype} cannot be converted to timedelta64[ns]")
980
981 data = np.array(data, copy=copy)
982
983 assert data.dtype == "m8[ns]", data
984 return data, inferred_freq
985
986
987 def ints_to_td64ns(data, unit="ns"):
988 """
989 Convert an ndarray with integer-dtype to timedelta64[ns] dtype, treating
990 the integers as multiples of the given timedelta unit.
991
992 Parameters
993 ----------
994 data : numpy.ndarray with integer-dtype
995 unit : str, default "ns"
996 The timedelta unit to treat integers as multiples of.
997
998 Returns
999 -------
1000 numpy.ndarray : timedelta64[ns] array converted from data
1001 bool : whether a copy was made
1002 """
1003 copy_made = False
1004 unit = unit if unit is not None else "ns"
1005
1006 if data.dtype != np.int64:
1007 # converting to int64 makes a copy, so we can avoid
1008 # re-copying later
1009 data = data.astype(np.int64)
1010 copy_made = True
1011
1012 if unit != "ns":
1013 dtype_str = f"timedelta64[{unit}]"
1014 data = data.view(dtype_str)
1015
1016 # TODO: watch out for overflows when converting from lower-resolution
1017 data = data.astype("timedelta64[ns]")
1018 # the astype conversion makes a copy, so we can avoid re-copying later
1019 copy_made = True
1020
1021 else:
1022 data = data.view("timedelta64[ns]")
1023
1024 return data, copy_made
1025
1026
1027 def objects_to_td64ns(data, unit=None, errors="raise"):
1028 """
1029 Convert a object-dtyped or string-dtyped array into an
1030 timedelta64[ns]-dtyped array.
1031
1032 Parameters
1033 ----------
1034 data : ndarray or Index
1035 unit : str, default "ns"
1036 The timedelta unit to treat integers as multiples of.
1037 Must not be specified if the data contains a str.
1038 errors : {"raise", "coerce", "ignore"}, default "raise"
1039 How to handle elements that cannot be converted to timedelta64[ns].
1040 See ``pandas.to_timedelta`` for details.
1041
1042 Returns
1043 -------
1044 numpy.ndarray : timedelta64[ns] array converted from data
1045
1046 Raises
1047 ------
1048 ValueError : Data cannot be converted to timedelta64[ns].
1049
1050 Notes
1051 -----
1052     Unlike `pandas.to_timedelta`, setting `errors=ignore` will not cause
1053 errors to be ignored; they are caught and subsequently ignored at a
1054 higher level.
1055 """
1056 # coerce Index to np.ndarray, converting string-dtype if necessary
1057 values = np.array(data, dtype=np.object_, copy=False)
1058
1059 result = array_to_timedelta64(values, unit=unit, errors=errors)
1060 return result.view("timedelta64[ns]")
1061
1062
1063 def _validate_td64_dtype(dtype):
1064 dtype = pandas_dtype(dtype)
1065 if is_dtype_equal(dtype, np.dtype("timedelta64")):
1066 # no precision disallowed GH#24806
1067 msg = (
1068 "Passing in 'timedelta' dtype with no precision is not allowed. "
1069 "Please pass in 'timedelta64[ns]' instead."
1070 )
1071 raise ValueError(msg)
1072
1073 if not is_dtype_equal(dtype, TD64NS_DTYPE):
1074 raise ValueError(f"dtype {dtype} cannot be converted to timedelta64[ns]")
1075
1076 return dtype
1077
[end of pandas/core/arrays/timedeltas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
4253abd0a82feed2db405f2e25209b49666dfee3
|
API/DEPR: casting bool-indexer to slice in dt64/td64/period
We use special logic for dt64/td64/period dtypes that makes view/copy behavior different from other dtypes:
```
dti = pd.date_range("2016-01-01", periods=4, tz="US/Pacific")
key = np.array([True, True, False, False])
ser1 = pd.Series(dti._data)
ser2 = pd.Series(range(4))
res1 = ser1[key]
res2 = ser2[key]
>>> res1._values._data.base is None
False
>>> res2._values.base is None
True
```
cc @jorisvandenbossche IIRC you advocated not doing this special casing.
|
cc @jorisvandenbossche I haven't thought of any nice ways to deprecate this (s.t. we can say "do X to get the future behavior, do Y to get the old behavior"). Thoughts?
Hmm, I also can't think of any way to deprecate this ..
It could maybe be done within the more general copy/view discussion, but then it's not something on the short term.
I am inclined to say that in this case it might be fine to simply change (because it's a behaviour that only comes up in rare value-dependent cases, and when it happens can lead to silent bugs).
I checked what happened if I remove this behavior: 26 test failures, all about a freq mismatch.
Because the slice keeps the freq by definition, and when doing a mask it always loses the freq?
That was my thought too, but when I tried patching get_getitem_freq to account for that it didn't help.
update: got it working, was patching incorrectly in one place
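For intuition on the freq question: a boolean mask whose `True` values form a single contiguous run is equivalent to a slice, which is why the `freq` can still be recovered in that case. A rough pure-NumPy sketch of that check (illustrative only; the actual code path goes through the cython helper `lib.maybe_booleans_to_slice` used in the patch below):
```python
import numpy as np

def contiguous_mask_to_slice(mask):
    """Return an equivalent slice when the True values form one contiguous
    run; otherwise return the mask unchanged (rough sketch, not the real helper)."""
    idx = np.flatnonzero(np.asarray(mask))
    if len(idx) == 0:
        return slice(0, 0)
    if (np.diff(idx) == 1).all():
        return slice(int(idx[0]), int(idx[-1]) + 1)
    return mask

print(contiguous_mask_to_slice([True, True, False, False]))  # slice(0, 2, None)
print(contiguous_mask_to_slice([True, False, True, False]))  # not reducible, mask returned
```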
|
2020-09-26T02:46:43Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -393,6 +393,7 @@ Indexing
- Bug in :meth:`Index.get_indexer` and :meth:`Index.get_indexer_non_unique` where int64 arrays are returned instead of intp. (:issue:`36359`)
- Bug in :meth:`DataFrame.sort_index` where parameter ascending passed as a list on a single level index gives wrong result. (:issue:`32334`)
- Bug in :meth:`DataFrame.reset_index` was incorrectly raising a ``ValueError`` for input with a :class:`MultiIndex` with missing values in a level with ``Categorical`` dtype (:issue:`24206`)
+- Bug in indexing with boolean masks on datetime-like values sometimes returning a view instead of a copy (:issue:`36210`)
Missing
^^^^^^^
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -65,7 +65,7 @@
from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
import pandas.core.common as com
from pandas.core.construction import array, extract_array
-from pandas.core.indexers import check_array_indexer, check_setitem_lengths
+from pandas.core.indexers import check_setitem_lengths
from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.invalid import invalid_comparison, make_invalid_op
@@ -284,23 +284,6 @@ def __getitem__(self, key):
result._freq = self._get_getitem_freq(key)
return result
- def _validate_getitem_key(self, key):
- if com.is_bool_indexer(key):
- # first convert to boolean, because check_array_indexer doesn't
- # allow object dtype
- if is_object_dtype(key):
- key = np.asarray(key, dtype=bool)
-
- key = check_array_indexer(self, key)
- key = lib.maybe_booleans_to_slice(key.view(np.uint8))
- elif isinstance(key, list) and len(key) == 1 and isinstance(key[0], slice):
- # see https://github.com/pandas-dev/pandas/issues/31299, need to allow
- # this for now (would otherwise raise in check_array_indexer)
- pass
- else:
- key = super()._validate_getitem_key(key)
- return key
-
def _get_getitem_freq(self, key):
"""
Find the `freq` attribute to assign to the result of a __getitem__ lookup.
@@ -322,6 +305,10 @@ def _get_getitem_freq(self, key):
# GH#21282 indexing with Ellipsis is similar to a full slice,
# should preserve `freq` attribute
freq = self.freq
+ elif com.is_bool_indexer(key):
+ new_key = lib.maybe_booleans_to_slice(key.view(np.uint8))
+ if isinstance(new_key, slice):
+ return self._get_getitem_freq(new_key)
return freq
def __setitem__(
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -190,12 +190,14 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
indices = ensure_int64(indices)
maybe_slice = lib.maybe_indices_to_slice(indices, len(self))
- if isinstance(maybe_slice, slice):
- return self[maybe_slice]
- return ExtensionIndex.take(
+ result = ExtensionIndex.take(
self, indices, axis, allow_fill, fill_value, **kwargs
)
+ if isinstance(maybe_slice, slice):
+ freq = self._data._get_getitem_freq(maybe_slice)
+ result._data._freq = freq
+ return result
@doc(IndexOpsMixin.searchsorted, klass="Datetime-like Index")
def searchsorted(self, value, side="left", sorter=None):
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-2512
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
increase spacing of parameters in latex drawer
QISKit's visualization module is not properly drawing the **cu1** gate. With the following code,
```
from qiskit import QuantumProgram
from qiskit.tools.visualization import circuit_drawer
qp = QuantumProgram()
qr = qp.create_quantum_register('qr', 3)
cr = qp.create_classical_register('cr', 1)
qc = qp.create_circuit('test', [qr], [cr])
qc.cu1(2.3, qr[0], qr[2])
qc.measure(qr[1], cr[0])
circuit_drawer(qc)
```
The obtained image is,

Which is clearly corrupt.
### Informations
- **Qiskit (Python SDK) version**: 0.5.7
- **Python version**: 3.6.6
- **Operating system**: Ubuntu 18.04.1 LTS
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)
4
5 **Qiskit** is an open-source framework for working with Noisy Intermediate-Scale Quantum (NISQ) computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [contribution guidelines](.github/CONTRIBUTING.rst).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = execute(qc, backend_sim).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_accounts()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## License
113
114 [Apache License 2.0](LICENSE.txt)
115
[end of README.md]
[start of examples/python/circuit_draw.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Example showing how to draw a quantum circuit using Qiskit.
17 """
18
19 from qiskit import QuantumCircuit
20
21
22 def build_bell_circuit():
23 """Returns a circuit putting 2 qubits in the Bell state."""
24 qc = QuantumCircuit(2, 2)
25 qc.h(0)
26 qc.cx(0, 1)
27 qc.measure([0, 1], [0, 1])
28 return qc
29
30 # Create the circuit
31 bell_circuit = build_bell_circuit()
32
33 # Use the internal .draw() to print the circuit
34 print(bell_circuit)
35
[end of examples/python/circuit_draw.py]
[start of examples/python/stochastic_swap.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Example of using the StochasticSwap pass."""
16
17 from qiskit.transpiler.passes import StochasticSwap
18 from qiskit.transpiler import CouplingMap, Layout
19 from qiskit.converters import circuit_to_dag, dag_to_circuit
20 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
21
22 coupling = CouplingMap([[0, 1], [1, 2], [1, 3]])
23 qr = QuantumRegister(2, 'q')
24 ar = QuantumRegister(2, 'a')
25 cr = ClassicalRegister(4, 'c')
26 circ = QuantumCircuit(qr, ar, cr)
27 circ.cx(qr[1], ar[0])
28 circ.cx(qr[0], ar[1])
29 circ.measure(qr[0], cr[0])
30 circ.h(qr)
31 circ.h(ar)
32 circ.cx(qr[0], qr[1])
33 circ.cx(ar[0], ar[1])
34 circ.measure(qr[0], cr[0])
35 circ.measure(qr[1], cr[1])
36 circ.measure(ar[0], cr[2])
37 circ.measure(ar[1], cr[3])
38 dag = circuit_to_dag(circ)
39 # ┌─┐┌───┐ ┌─┐
40 # q_0: |0>─────────────────■──────────────────┤M├┤ H ├──■─────┤M├
41 # ┌───┐ │ └╥┘└───┘┌─┴─┐┌─┐└╥┘
42 # q_1: |0>──■───────┤ H ├──┼───────────────────╫──────┤ X ├┤M├─╫─
43 # ┌─┴─┐┌───┐└───┘ │ ┌─┐ ║ └───┘└╥┘ ║
44 # a_0: |0>┤ X ├┤ H ├───────┼─────────■─────┤M├─╫────────────╫──╫─
45 # └───┘└───┘ ┌─┴─┐┌───┐┌─┴─┐┌─┐└╥┘ ║ ║ ║
46 # a_1: |0>───────────────┤ X ├┤ H ├┤ X ├┤M├─╫──╫────────────╫──╫─
47 # └───┘└───┘└───┘└╥┘ ║ ║ ║ ║
48 # c_0: 0 ═══════════════════════════════╬══╬══╩════════════╬══╩═
49 # ║ ║ ║
50 # c_1: 0 ═══════════════════════════════╬══╬═══════════════╩════
51 # ║ ║
52 # c_2: 0 ═══════════════════════════════╬══╩════════════════════
53 # ║
54 # c_3: 0 ═══════════════════════════════╩═══════════════════════
55 #
56 # ┌─┐┌───┐ ┌─┐
57 # q_0: |0>────────────────────■──┤M├┤ H ├──────────────────■──┤M├──────
58 # ┌─┴─┐└╥┘└───┘┌───┐┌───┐ ┌─┴─┐└╥┘┌─┐
59 # q_1: |0>──■───X───────────┤ X ├─╫──────┤ H ├┤ X ├─X────┤ X ├─╫─┤M├───
60 # ┌─┴─┐ │ ┌───┐└───┘ ║ └───┘└─┬─┘ │ └───┘ ║ └╥┘┌─┐
61 # a_0: |0>┤ X ├─┼──────┤ H ├──────╫─────────────■───┼──────────╫──╫─┤M├
62 # └───┘ │ ┌───┐└───┘ ║ │ ┌─┐ ║ ║ └╥┘
63 # a_1: |0>──────X─┤ H ├───────────╫─────────────────X─┤M├──────╫──╫──╫─
64 # └───┘ ║ └╥┘ ║ ║ ║
65 # c_0: 0 ════════════════════════╩════════════════════╬═══════╩══╬══╬═
66 # ║ ║ ║
67 # c_1: 0 ═════════════════════════════════════════════╬══════════╩══╬═
68 # ║ ║
69 # c_2: 0 ═════════════════════════════════════════════╬═════════════╩═
70 # ║
71 # c_3: 0 ═════════════════════════════════════════════╩═══════════════
72 #
73 # Layout from mapper:
74 # {qr[0]: 0,
75 # qr[1]: 1,
76 # ar[0]: 2,
77 # ar[1]: 3}
78 #
79 # 2
80 # |
81 # 0 - 1 - 3
82 # Build the expected output to verify the pass worked
83 expected = QuantumCircuit(qr, ar, cr)
84 expected.cx(qr[1], ar[0])
85 expected.swap(qr[0], qr[1])
86 expected.cx(qr[1], ar[1])
87 expected.h(ar[1])
88 expected.h(ar[0])
89 expected.measure(qr[1], cr[0])
90 expected.h(qr[0])
91 expected.swap(qr[1], ar[1])
92 expected.h(ar[1])
93 expected.cx(ar[0], qr[1])
94 expected.measure(ar[0], cr[2])
95 expected.swap(qr[1], ar[1])
96 expected.measure(ar[1], cr[3])
97 expected.cx(qr[1], qr[0])
98 expected.measure(qr[1], cr[0])
99 expected.measure(qr[0], cr[1])
100 expected_dag = circuit_to_dag(expected)
101
102 layout = Layout({qr[0]: 0, qr[1]: 1, ar[0]: 2, ar[1]: 3})
103 # Run the pass on the dag from the input circuit
104 pass_ = StochasticSwap(coupling, layout, 20, 13)
105 after = pass_.run(dag)
106 # Verify the output of the pass matches our expectation
107 assert expected_dag == after
108
[end of examples/python/stochastic_swap.py]
[start of qiskit/circuit/quantumcircuit.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Quantum circuit object."""
16
17 from copy import deepcopy
18 import itertools
19 import sys
20 import multiprocessing as mp
21 from warnings import warn
22
23 from qiskit.circuit.instruction import Instruction
24 from qiskit.qasm.qasm import Qasm
25 from qiskit.exceptions import QiskitError
26 from qiskit.circuit.parameter import Parameter
27 from .quantumregister import QuantumRegister, Qubit
28 from .classicalregister import ClassicalRegister, Clbit
29 from .parametertable import ParameterTable
30 from .parametervector import ParameterVector
31 from .instructionset import InstructionSet
32 from .register import Register
33 from .bit import Bit
34
35
36 def _is_bit(obj):
37 """Determine if obj is a bit"""
38 # If there is a bit type this could be replaced by isinstance.
39 if isinstance(obj, tuple) and len(obj) == 2:
40 if isinstance(obj[0], Register) and isinstance(obj[1], int) and obj[1] < len(obj[0]):
41 warn('Referring to a bit as a tuple is being deprecated. '
42 'Instead of (qr, 0), use qr[0].', DeprecationWarning)
43 return True
44 return False
45
46
47 class QuantumCircuit:
48 """Quantum circuit."""
49 instances = 0
50 prefix = 'circuit'
51
52 # Class variable OPENQASM header
53 header = "OPENQASM 2.0;"
54 extension_lib = "include \"qelib1.inc\";"
55
56 def __init__(self, *regs, name=None):
57 """Create a new circuit.
58 A circuit is a list of instructions bound to some registers.
59 Args:
60 *regs (list(Register) or list(Int)): To be included in the circuit.
61 - If [Register], the QuantumRegister and/or ClassicalRegister
62 to include in the circuit.
63 E.g.: QuantumCircuit(QuantumRegister(4))
64 QuantumCircuit(QuantumRegister(4), ClassicalRegister(3))
65 QuantumCircuit(QuantumRegister(4, 'qr0'), QuantumRegister(2, 'qr1'))
66 - If [Int], the amount of qubits and/or classical bits to include
67 in the circuit. It can be (Int, ) or (Int, Int).
68 E.g.: QuantumCircuit(4) # A QuantumCircuit with 4 qubits
69 QuantumCircuit(4, 3) # A QuantumCircuit with 4 qubits and 3 classical bits
70 name (str or None): the name of the quantum circuit. If
71 None, an automatically generated string will be assigned.
72
73 Raises:
74 QiskitError: if the circuit name, if given, is not valid.
75 """
76 if name is None:
77 name = self.cls_prefix() + str(self.cls_instances())
78 # pylint: disable=not-callable
79 # (known pylint bug: https://github.com/PyCQA/pylint/issues/1699)
80 if sys.platform != "win32" and isinstance(mp.current_process(), mp.context.ForkProcess):
81 name += '-{}'.format(mp.current_process().pid)
82 self._increment_instances()
83
84 if not isinstance(name, str):
85 raise QiskitError("The circuit name should be a string "
86 "(or None to auto-generate a name).")
87
88 self.name = name
89
90 # Data contains a list of instructions and their contexts,
91 # in the order they were applied.
92 self.data = []
93
94 # This is a map of registers bound to this circuit, by name.
95 self.qregs = []
96 self.cregs = []
97 self.add_register(*regs)
98
99 # Parameter table tracks instructions with variable parameters.
100 self._parameter_table = ParameterTable()
101
102 def __str__(self):
103 return str(self.draw(output='text'))
104
105 def __eq__(self, other):
106 # TODO: remove the DAG from this function
107 from qiskit.converters import circuit_to_dag
108 return circuit_to_dag(self) == circuit_to_dag(other)
109
110 @classmethod
111 def _increment_instances(cls):
112 cls.instances += 1
113
114 @classmethod
115 def cls_instances(cls):
116 """Return the current number of instances of this class,
117 useful for auto naming."""
118 return cls.instances
119
120 @classmethod
121 def cls_prefix(cls):
122 """Return the prefix to use for auto naming."""
123 return cls.prefix
124
125 def has_register(self, register):
126 """
127 Test if this circuit has the register r.
128
129 Args:
130 register (Register): a quantum or classical register.
131
132 Returns:
133 bool: True if the register is contained in this circuit.
134 """
135 has_reg = False
136 if (isinstance(register, QuantumRegister) and
137 register in self.qregs):
138 has_reg = True
139 elif (isinstance(register, ClassicalRegister) and
140 register in self.cregs):
141 has_reg = True
142 return has_reg
143
144 def mirror(self):
145 """Mirror the circuit by reversing the instructions.
146
147 This is done by recursively mirroring all instructions.
148 It does not invert any gate.
149
150 Returns:
151 QuantumCircuit: the mirrored circuit
152 """
153 reverse_circ = self.copy(name=self.name + '_mirror')
154 reverse_circ.data = []
155 for inst, qargs, cargs in reversed(self.data):
156 reverse_circ.data.append((inst.mirror(), qargs, cargs))
157 return reverse_circ
158
159 def inverse(self):
160 """Invert this circuit.
161
162 This is done by recursively inverting all gates.
163
164 Returns:
165 QuantumCircuit: the inverted circuit
166
167 Raises:
168 QiskitError: if the circuit cannot be inverted.
169 """
170 inverse_circ = self.copy(name=self.name + '_dg')
171 inverse_circ.data = []
172 for inst, qargs, cargs in reversed(self.data):
173 inverse_circ.data.append((inst.inverse(), qargs, cargs))
174 return inverse_circ
175
176 def combine(self, rhs):
177 """
178 Append rhs to self if self contains compatible registers.
179
180 Two circuits are compatible if they contain the same registers
181 or if they contain different registers with unique names. The
182 returned circuit will contain all unique registers between both
183 circuits.
184
185 Return self + rhs as a new object.
186 """
187 # Check registers in LHS are compatible with RHS
188 self._check_compatible_regs(rhs)
189
190 # Make new circuit with combined registers
191 combined_qregs = deepcopy(self.qregs)
192 combined_cregs = deepcopy(self.cregs)
193
194 for element in rhs.qregs:
195 if element not in self.qregs:
196 combined_qregs.append(element)
197 for element in rhs.cregs:
198 if element not in self.cregs:
199 combined_cregs.append(element)
200 circuit = QuantumCircuit(*combined_qregs, *combined_cregs)
201 for instruction_context in itertools.chain(self.data, rhs.data):
202 circuit.append(*instruction_context)
203 return circuit
204
205 def extend(self, rhs):
206 """
207 Append rhs to self if self contains compatible registers.
208
209 Two circuits are compatible if they contain the same registers
210 or if they contain different registers with unique names. The
211 returned circuit will contain all unique registers between both
212 circuits.
213
214 Modify and return self.
215 """
216 # Check registers in LHS are compatible with RHS
217 self._check_compatible_regs(rhs)
218
219 # Add new registers
220 for element in rhs.qregs:
221 if element not in self.qregs:
222 self.qregs.append(element)
223 for element in rhs.cregs:
224 if element not in self.cregs:
225 self.cregs.append(element)
226
227 # Add new gates
228 for instruction_context in rhs.data:
229 self.append(*instruction_context)
230 return self
231
232 @property
233 def qubits(self):
234 """
235 Returns a list of quantum bits in the order that the registers had been added.
236 """
237 return [qbit for qreg in self.qregs for qbit in qreg]
238
239 @property
240 def clbits(self):
241 """
242 Returns a list of classical bits in the order that the registers had been added.
243 """
244 return [cbit for creg in self.cregs for cbit in creg]
245
246 def __add__(self, rhs):
247 """Overload + to implement self.combine."""
248 return self.combine(rhs)
249
250 def __iadd__(self, rhs):
251 """Overload += to implement self.extend."""
252 return self.extend(rhs)
253
254 def __len__(self):
255 """Return number of operations in circuit."""
256 return len(self.data)
257
258 def __getitem__(self, item):
259 """Return indexed operation."""
260 return self.data[item]
261
262 @staticmethod
263 def cast(value, _type):
264 """Best effort to cast value to type. Otherwise, returns the value."""
265 try:
266 return _type(value)
267 except (ValueError, TypeError):
268 return value
269
270 @staticmethod
271 def _bit_argument_conversion(bit_representation, in_array):
272 ret = None
273 try:
274 if isinstance(bit_representation, Bit):
275 # circuit.h(qr[0]) -> circuit.h([qr[0]])
276 ret = [bit_representation]
277 elif isinstance(bit_representation, Register):
278 # circuit.h(qr) -> circuit.h([qr[0], qr[1]])
279 ret = bit_representation[:]
280 elif isinstance(QuantumCircuit.cast(bit_representation, int), int):
281 # circuit.h(0) -> circuit.h([qr[0]])
282 ret = [in_array[bit_representation]]
283 elif isinstance(bit_representation, slice):
284 # circuit.h(slice(0,2)) -> circuit.h([qr[0], qr[1]])
285 ret = in_array[bit_representation]
286 elif _is_bit(bit_representation):
287 # circuit.h((qr, 0)) -> circuit.h([qr[0]])
288 ret = [bit_representation[0][bit_representation[1]]]
289 elif isinstance(bit_representation, list) and \
290 all(_is_bit(bit) for bit in bit_representation):
291 ret = [bit[0][bit[1]] for bit in bit_representation]
292 elif isinstance(bit_representation, list) and \
293 all(isinstance(bit, Bit) for bit in bit_representation):
294 # circuit.h([qr[0], qr[1]]) -> circuit.h([qr[0], qr[1]])
295 ret = bit_representation
296 elif isinstance(QuantumCircuit.cast(bit_representation, list), (range, list)):
297 # circuit.h([0, 1]) -> circuit.h([qr[0], qr[1]])
298 # circuit.h(range(0,2)) -> circuit.h([qr[0], qr[1]])
299 ret = [in_array[index] for index in bit_representation]
300 else:
301 raise QiskitError('Not able to expand a %s (%s)' % (bit_representation,
302 type(bit_representation)))
303 except IndexError:
304 raise QiskitError('Index out of range.')
305 except TypeError:
306 raise QiskitError('Type error handling %s (%s)' % (bit_representation,
307 type(bit_representation)))
308 return ret
309
310 def qbit_argument_conversion(self, qubit_representation):
311 """
312 Converts several qubit representations (such as indexes, range, etc)
313 into a list of qubits.
314
315 Args:
316 qubit_representation (Object): representation to expand
317
318 Returns:
319 List(tuple): Where each tuple is a qubit.
320 """
321 return QuantumCircuit._bit_argument_conversion(qubit_representation, self.qubits)
322
323 def cbit_argument_conversion(self, clbit_representation):
324 """
325 Converts several classical bit representations (such as indexes, range, etc)
326 into a list of classical bits.
327
328 Args:
329 clbit_representation (Object): representation to expand
330
331 Returns:
332 List(tuple): Where each tuple is a classical bit.
333 """
334 return QuantumCircuit._bit_argument_conversion(clbit_representation, self.clbits)
335
336 def append(self, instruction, qargs=None, cargs=None):
337 """Append one or more instructions to the end of the circuit, modifying
338 the circuit in place. Expands qargs and cargs.
339
340 Args:
341 instruction (Instruction or Operation): Instruction instance to append
342 qargs (list(argument)): qubits to attach instruction to
343 cargs (list(argument)): clbits to attach instruction to
344
345 Returns:
346 Instruction: a handle to the instruction that was just added
347 """
348 # Convert input to instruction
349 if not isinstance(instruction, Instruction) and hasattr(instruction, 'to_instruction'):
350 instruction = instruction.to_instruction()
351
352 expanded_qargs = [self.qbit_argument_conversion(qarg) for qarg in qargs or []]
353 expanded_cargs = [self.cbit_argument_conversion(carg) for carg in cargs or []]
354
355 instructions = InstructionSet()
356 for (qarg, carg) in instruction.broadcast_arguments(expanded_qargs, expanded_cargs):
357 instructions.add(self._append(instruction, qarg, carg), qarg, carg)
358 return instructions
359
360 def _append(self, instruction, qargs, cargs):
361 """Append an instruction to the end of the circuit, modifying
362 the circuit in place.
363
364 Args:
365 instruction (Instruction or Operator): Instruction instance to append
366 qargs (list(tuple)): qubits to attach instruction to
367 cargs (list(tuple)): clbits to attach instruction to
368
369 Returns:
370 Instruction: a handle to the instruction that was just added
371
372 Raises:
373 QiskitError: if the gate is of a different shape than the wires
374 it is being attached to.
375 """
376 if not isinstance(instruction, Instruction):
377 raise QiskitError('object is not an Instruction.')
378
379 # do some compatibility checks
380 self._check_dups(qargs)
381 self._check_qargs(qargs)
382 self._check_cargs(cargs)
383
384 # add the instruction onto the given wires
385 instruction_context = instruction, qargs, cargs
386 self.data.append(instruction_context)
387
388 # track variable parameters in instruction
389 for param_index, param in enumerate(instruction.params):
390 if isinstance(param, Parameter):
391 current_symbols = self.parameters
392
393 if param in current_symbols:
394 self._parameter_table[param].append((instruction, param_index))
395 else:
396 if param.name in {p.name for p in current_symbols}:
397 raise QiskitError(
398 'Name conflict on adding parameter: {}'.format(param.name))
399 self._parameter_table[param] = [(instruction, param_index)]
400
401 return instruction
402
403 def add_register(self, *regs):
404 """Add registers."""
405 if not regs:
406 return
407
408 if any([isinstance(reg, int) for reg in regs]):
409 # QuantumCircuit defined without registers
410 if len(regs) == 1 and isinstance(regs[0], int):
411 # QuantumCircuit with anonymous quantum wires e.g. QuantumCircuit(2)
412 regs = (QuantumRegister(regs[0], 'q'),)
413 elif len(regs) == 2 and all([isinstance(reg, int) for reg in regs]):
414 # QuantumCircuit with anonymous wires e.g. QuantumCircuit(2, 3)
415 regs = (QuantumRegister(regs[0], 'q'), ClassicalRegister(regs[1], 'c'))
416 else:
417 raise QiskitError("QuantumCircuit parameters can be Registers or Integers."
418 " If Integers, up to 2 arguments. QuantumCircuit was called"
419 " with %s." % (regs,))
420
421 for register in regs:
422 if register.name in [reg.name for reg in self.qregs + self.cregs]:
423 raise QiskitError("register name \"%s\" already exists"
424 % register.name)
425 if isinstance(register, QuantumRegister):
426 self.qregs.append(register)
427 elif isinstance(register, ClassicalRegister):
428 self.cregs.append(register)
429 else:
430 raise QiskitError("expected a register")
431
432 def _check_dups(self, qubits):
433 """Raise exception if list of qubits contains duplicates."""
434 squbits = set(qubits)
435 if len(squbits) != len(qubits):
436 raise QiskitError("duplicate qubit arguments")
437
438 def _check_qargs(self, qargs):
439 """Raise exception if a qarg is not in this circuit or bad format."""
440 if not all(isinstance(i, Qubit) for i in qargs):
441 raise QiskitError("qarg is not a Qubit")
442 if not all(self.has_register(i.register) for i in qargs):
443 raise QiskitError("register not in this circuit")
444
445 def _check_cargs(self, cargs):
446 """Raise exception if clbit is not in this circuit or bad format."""
447 if not all(isinstance(i, Clbit) for i in cargs):
448 raise QiskitError("carg is not a Clbit")
449 if not all(self.has_register(i.register) for i in cargs):
450 raise QiskitError("register not in this circuit")
451
452 def to_instruction(self, parameter_map=None):
453 """Create an Instruction out of this circuit.
454
455 Args:
456 parameter_map(dict): For parameterized circuits, a mapping from
457 parameters in the circuit to parameters to be used in the
458 instruction. If None, existing circuit parameters will also
459 parameterize the instruction.
460
461 Returns:
462 Instruction: a composite instruction encapsulating this circuit
463 (can be decomposed back)
464 """
465 from qiskit.converters.circuit_to_instruction import circuit_to_instruction
466 return circuit_to_instruction(self, parameter_map)
467
468 def decompose(self):
469 """Call a decomposition pass on this circuit,
470 to decompose one level (shallow decompose).
471
472 Returns:
473 QuantumCircuit: a circuit one level decomposed
474 """
475 from qiskit.transpiler.passes.decompose import Decompose
476 from qiskit.converters.circuit_to_dag import circuit_to_dag
477 from qiskit.converters.dag_to_circuit import dag_to_circuit
478 pass_ = Decompose()
479 decomposed_dag = pass_.run(circuit_to_dag(self))
480 return dag_to_circuit(decomposed_dag)
481
482 def _check_compatible_regs(self, rhs):
483 """Raise exception if the circuits are defined on incompatible registers"""
484 list1 = self.qregs + self.cregs
485 list2 = rhs.qregs + rhs.cregs
486 for element1 in list1:
487 for element2 in list2:
488 if element2.name == element1.name:
489 if element1 != element2:
490 raise QiskitError("circuits are not compatible")
491
492 def qasm(self):
493 """Return OpenQASM string."""
494 string_temp = self.header + "\n"
495 string_temp += self.extension_lib + "\n"
496 for register in self.qregs:
497 string_temp += register.qasm() + "\n"
498 for register in self.cregs:
499 string_temp += register.qasm() + "\n"
500 for instruction, qargs, cargs in self.data:
501 if instruction.name == 'measure':
502 qubit = qargs[0]
503 clbit = cargs[0]
504 string_temp += "%s %s[%d] -> %s[%d];\n" % (instruction.qasm(),
505 qubit.register.name, qubit.index,
506 clbit.register.name, clbit.index)
507 else:
508 string_temp += "%s %s;\n" % (instruction.qasm(),
509 ",".join(["%s[%d]" % (j.register.name, j.index)
510 for j in qargs + cargs]))
511 return string_temp
512
513 def draw(self, scale=0.7, filename=None, style=None, output=None,
514 interactive=False, line_length=None, plot_barriers=True,
515 reverse_bits=False, justify=None, vertical_compression='medium'):
516 """Draw the quantum circuit
517
518 Using the output parameter you can specify the format. The choices are:
519 0. text: ASCII art string
520 1. latex: high-quality images, but heavy external software dependencies
521 2. matplotlib: purely in Python with no external dependencies
522
523 Defaults to an overcomplete basis, in order to not alter gates.
524
525 Args:
526 scale (float): scale of image to draw (shrink if < 1)
527 filename (str): file path to save image to
528 style (dict or str): dictionary of style or file name of style
529 file. You can refer to the
530 :ref:`Style Dict Doc <style-dict-doc>` for more information
531 on the contents.
532 output (str): Select the output method to use for drawing the
533 circuit. Valid choices are `text`, `latex`, `latex_source`,
534 `mpl`. By default the 'text' drawer is used unless a user
535 config file has an alternative backend set as the default. If
536 the output is passed in that backend will always be used.
537 interactive (bool): when set true show the circuit in a new window
538 (for `mpl` this depends on the matplotlib backend being used
539 supporting this). Note when used with either the `text` or the
540 `latex_source` output type this has no effect and will be
541 silently ignored.
542 line_length (int): sets the length of the lines generated by `text`
543 reverse_bits (bool): When set to True reverse the bit order inside
544 registers for the output visualization.
545 plot_barriers (bool): Enable/disable drawing barriers in the output
546 circuit. Defaults to True.
547 justify (string): Options are `left`, `right` or `none`, if anything
548 else is supplied it defaults to left justified. It refers to where
549 gates should be placed in the output circuit if there is an option.
550 `none` results in each gate being placed in its own column. Currently
551 only supported by text drawer.
552 vertical_compression (string): `high`, `medium` or `low`. It merges the
553 lines generated by `text` so the drawing will take less vertical room.
554 Default is `medium`. It is ignored if output is not `text`.
555 Returns:
556 PIL.Image or matplotlib.figure or str or TextDrawing:
557 * PIL.Image: (output `latex`) an in-memory representation of the
558 image of the circuit diagram.
559 * matplotlib.figure: (output `mpl`) a matplotlib figure object
560 for the circuit diagram.
561 * str: (output `latex_source`). The LaTeX source code.
562 * TextDrawing: (output `text`). A drawing that can be printed as
563 ascii art
564
565 Raises:
566 VisualizationError: when an invalid output method is selected
567 """
568 # pylint: disable=cyclic-import
569 from qiskit.visualization import circuit_drawer
570 return circuit_drawer(self, scale=scale,
571 filename=filename, style=style,
572 output=output,
573 interactive=interactive,
574 line_length=line_length,
575 plot_barriers=plot_barriers,
576 reverse_bits=reverse_bits,
577 justify=justify,
578 vertical_compression=vertical_compression)
579
580 def size(self):
581 """Returns total number of gate operations in circuit.
582
583 Returns:
584 int: Total number of gate operations.
585 """
586 gate_ops = 0
587 for instr, _, _ in self.data:
588 if instr.name not in ['barrier', 'snapshot']:
589 gate_ops += 1
590 return gate_ops
591
592 def depth(self):
593 """Return circuit depth (i.e. length of critical path).
594 This does not include compiler or simulator directives
595 such as 'barrier' or 'snapshot'.
596
597 Returns:
598 int: Depth of circuit.
599
600 Notes:
601 The circuit depth and the DAG depth need not be the
602 same.
603 """
604 # Labels the registers by ints
605 # and then the qubit position in
606 # a register is given by reg_int+qubit_num
607 reg_offset = 0
608 reg_map = {}
609 for reg in self.qregs + self.cregs:
610 reg_map[reg.name] = reg_offset
611 reg_offset += reg.size
612
613 # A list that holds the height of each qubit
614 # and classical bit.
615 op_stack = [0] * reg_offset
616 # Here we are playing a modified version of
617 # Tetris where we stack gates, but multi-qubit
618 # gates, or measurements have a block for each
619 # qubit or cbit that are connected by a virtual
620 # line so that they all stacked at the same depth.
621 # Conditional gates act on all cbits in the register
622 # they are conditioned on.
623 # We do not consider barriers or snapshots as
624 # They are transpiler and simulator directives.
625 # The max stack height is the circuit depth.
626 for instr, qargs, cargs in self.data:
627 if instr.name not in ['barrier', 'snapshot']:
628 levels = []
629 reg_ints = []
630 for ind, reg in enumerate(qargs + cargs):
631 # Add to the stacks of the qubits and
632 # cbits used in the gate.
633 reg_ints.append(reg_map[reg.register.name] + reg.index)
634 levels.append(op_stack[reg_ints[ind]] + 1)
635 if instr.control:
636 # Controls operate over all bits in the
637 # classical register they use.
638 cint = reg_map[instr.control[0].name]
639 for off in range(instr.control[0].size):
640 if cint + off not in reg_ints:
641 reg_ints.append(cint + off)
642 levels.append(op_stack[cint + off] + 1)
643
644 max_level = max(levels)
645 for ind in reg_ints:
646 op_stack[ind] = max_level
647 return max(op_stack)
648
649 def width(self):
650 """Return number of qubits plus clbits in circuit.
651
652 Returns:
653 int: Width of circuit.
654
655 """
656 return sum(reg.size for reg in self.qregs + self.cregs)
657
658 def count_ops(self):
659 """Count each operation kind in the circuit.
660
661 Returns:
662 dict: a breakdown of how many operations of each kind.
663 """
664 count_ops = {}
665 for instr, _, _ in self.data:
666 if instr.name in count_ops.keys():
667 count_ops[instr.name] += 1
668 else:
669 count_ops[instr.name] = 1
670 return count_ops
671
672 def num_connected_components(self, unitary_only=False):
673 """How many non-entangled subcircuits can the circuit be factored to.
674
675 Args:
676 unitary_only (bool): Compute only unitary part of graph.
677
678 Returns:
679 int: Number of connected components in circuit.
680 """
681 # Convert registers to ints (as done in depth).
682 reg_offset = 0
683 reg_map = {}
684
685 if unitary_only:
686 regs = self.qregs
687 else:
688 regs = self.qregs + self.cregs
689
690 for reg in regs:
691 reg_map[reg.name] = reg_offset
692 reg_offset += reg.size
693 # Start with each qubit or cbit being its own subgraph.
694 sub_graphs = [[bit] for bit in range(reg_offset)]
695
696 num_sub_graphs = len(sub_graphs)
697
698 # Here we are traversing the gates and looking to see
699 # which of the sub_graphs the gate joins together.
700 for instr, qargs, cargs in self.data:
701 if unitary_only:
702 args = qargs
703 num_qargs = len(args)
704 else:
705 args = qargs + cargs
706 num_qargs = len(args) + (1 if instr.control else 0)
707
708 if num_qargs >= 2 and instr.name not in ['barrier', 'snapshot']:
709 graphs_touched = []
710 num_touched = 0
711 # Controls necessarily join all the cbits in the
712 # register that they use.
713 if instr.control and not unitary_only:
714 creg = instr.control[0]
715 creg_int = reg_map[creg.name]
716 for coff in range(creg.size):
717 temp_int = creg_int + coff
718 for k in range(num_sub_graphs):
719 if temp_int in sub_graphs[k]:
720 graphs_touched.append(k)
721 num_touched += 1
722 break
723
724 for item in args:
725 reg_int = reg_map[item.register.name] + item.index
726 for k in range(num_sub_graphs):
727 if reg_int in sub_graphs[k]:
728 if k not in graphs_touched:
729 graphs_touched.append(k)
730 num_touched += 1
731 break
732
733 # If the gate touches more than one subgraph
734 # join those graphs together and return
735 # reduced number of subgraphs
736 if num_touched > 1:
737 connections = []
738 for idx in graphs_touched:
739 connections.extend(sub_graphs[idx])
740 _sub_graphs = []
741 for idx in range(num_sub_graphs):
742 if idx not in graphs_touched:
743 _sub_graphs.append(sub_graphs[idx])
744 _sub_graphs.append(connections)
745 sub_graphs = _sub_graphs
746 num_sub_graphs -= (num_touched - 1)
747 # Cannot go lower than one so break
748 if num_sub_graphs == 1:
749 break
750 return num_sub_graphs
751
752 def num_unitary_factors(self):
753 """Computes the number of tensor factors in the unitary
754 (quantum) part of the circuit only.
755 """
756 return self.num_connected_components(unitary_only=True)
757
758 def num_tensor_factors(self):
759 """Computes the number of tensor factors in the unitary
760 (quantum) part of the circuit only.
761
762 Notes:
763 This is here for backwards compatibility, and will be
764 removed in a future release of qiskit. You should call
765 `num_unitary_factors` instead.
766 """
767 return self.num_unitary_factors()
768
769 def copy(self, name=None):
770 """
771 Args:
772 name (str): name to be given to the copied circuit, if None then the name stays the same
773 Returns:
774 QuantumCircuit: a deepcopy of the current circuit, with the name updated if
775 it was provided
776 """
777 cpy = deepcopy(self)
778 if name:
779 cpy.name = name
780 return cpy
781
782 @staticmethod
783 def from_qasm_file(path):
784 """Take in a QASM file and generate a QuantumCircuit object.
785
786 Args:
787 path (str): Path to the file for a QASM program
788 Return:
789 QuantumCircuit: The QuantumCircuit object for the input QASM
790 """
791 qasm = Qasm(filename=path)
792 return _circuit_from_qasm(qasm)
793
794 @staticmethod
795 def from_qasm_str(qasm_str):
796 """Take in a QASM string and generate a QuantumCircuit object.
797
798 Args:
799 qasm_str (str): A QASM program string
800 Return:
801 QuantumCircuit: The QuantumCircuit object for the input QASM
802 """
803 qasm = Qasm(data=qasm_str)
804 return _circuit_from_qasm(qasm)
805
806 @property
807 def parameters(self):
808 """convenience function to get the parameters defined in the parameter table"""
809 return set(self._parameter_table.keys())
810
811 def bind_parameters(self, value_dict):
812 """Assign parameters to values yielding a new circuit.
813
814 Args:
815 value_dict (dict): {parameter: value, ...}
816
817 Raises:
818 QiskitError: If value_dict contains parameters not present in the circuit
819
820 Returns:
821 QuantumCircuit: copy of self with assignment substitution.
822 """
823 new_circuit = self.copy()
824 unrolled_value_dict = self._unroll_param_dict(value_dict)
825
826 if unrolled_value_dict.keys() > self.parameters:
827 raise QiskitError('Cannot bind parameters ({}) not present in the circuit.'.format(
828 [str(p) for p in value_dict.keys() - self.parameters]))
829
830 for parameter, value in unrolled_value_dict.items():
831 new_circuit._bind_parameter(parameter, value)
832 # clear evaluated expressions
833 for parameter in unrolled_value_dict:
834 del new_circuit._parameter_table[parameter]
835 return new_circuit
836
837 def _unroll_param_dict(self, value_dict):
838 unrolled_value_dict = {}
839 for (param, value) in value_dict.items():
840 if isinstance(param, Parameter):
841 unrolled_value_dict[param] = value
842 if isinstance(param, ParameterVector):
843 if not len(param) == len(value):
844 raise QiskitError('ParameterVector {} has length {}, which '
845 'differs from value list {} of '
846 'len {}'.format(param, len(param), value, len(value)))
847 unrolled_value_dict.update(zip(param, value))
848 return unrolled_value_dict
849
850 def _bind_parameter(self, parameter, value):
851 """Assigns a parameter value to matching instructions in-place."""
852 for (instr, param_index) in self._parameter_table[parameter]:
853 instr.params[param_index] = value
854
855 def _substitute_parameters(self, parameter_map):
856 """For every {existing_parameter: replacement_parameter} pair in
857 parameter_map, substitute replacement for existing in all
858 circuit instructions and the parameter table.
859 """
860 for old_parameter, new_parameter in parameter_map.items():
861 self._bind_parameter(old_parameter, new_parameter)
862 self._parameter_table[new_parameter] = self._parameter_table.pop(old_parameter)
863
864
865 def _circuit_from_qasm(qasm):
866 # pylint: disable=cyclic-import
867 from qiskit.converters import ast_to_dag
868 from qiskit.converters import dag_to_circuit
869 ast = qasm.parse()
870 dag = ast_to_dag(ast)
871 return dag_to_circuit(dag)
872
[end of qiskit/circuit/quantumcircuit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
8f406d5a15c6aedd19d829541691a79489aa77bb
|
increase spacing of parameters in latex drawer
QISKit's visualization module is not properly drawing the **cu1** gate. With the following code,
```
from qiskit import QuantumProgram
from qiskit.tools.visualization import circuit_drawer
qp = QuantumProgram()
qr = qp.create_quantum_register('qr', 3)
cr = qp.create_classical_register('cr', 1)
qc = qp.create_circuit('test', [qr], [cr])
qc.cu1(2.3, qr[0], qr[2])
qc.measure(qr[1], cr[0])
circuit_drawer(qc)
```
The obtained image is,

Which is clearly corrupt.
### Informations
- **Qiskit (Python SDK) version**: 0.5.7
- **Python version**: 3.6.6
- **Operating system**: Ubuntu 18.04.1 LTS
|
It is correct but it needs more space before the measurement.
It also happens without the measurement.
Hi @cruzpmmq
I am not sure what you think is incorrect. The number is the phase that is added if both qubits are in the 1 state. I agree that in the picture you uploaded it is too far away and looks like it is part of the measurement.
Hi @jaygambetta
I was expecting the usual boxed notation for controlled operations. Also, the following snippet implements the fourier transform on three qubits and the figure representation doesn't look so nice...
```
from qiskit import QuantumProgram
from qiskit.tools.visualization import circuit_drawer
from numpy import pi
qp = QuantumProgram()
qr = qp.create_quantum_register('qr', 3)
qc = qp.create_circuit('qc', [qr])
c_rk = lambda k, ctl, tgt: qc.cu1(2*pi/(2**k), ctl, tgt)
qc.h(qr[0])
c_rk(2, qr[1], qr[0])
c_rk(3, qr[2], qr[0])
qc.h(qr[1])
c_rk(2, qr[1], qr[2])
qc.h(qr[2])
circuit_drawer(qc)
```

@cruzpmmq the CU1 gate is symmetric. So this is the correct notation. A boxed notation would imply that control and target are different.
I agree the angles should be spaced out better. It is due to latex's column layout where the label of one column runs into the other column. I'll try to adjust it.
@ajavadia
Ok, I understand. I was just mentioning that because this operation is sometimes represented with a boxed notation (e.g. Fig.5.1 Nielsen & Chuang), and also because cu1() has ctl and tgt parameters. Thanks for looking into it.
By the way, on another subject: one very helpful feature to add to these drawing methods would be a parameter letting the user choose whether the numbering of the qubits in the diagram goes 0,1,2,.. from top to bottom (as it is) or bottom-up. That would be very helpful.
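(In the code base shown above this is exposed as the `reverse_bits` argument of `QuantumCircuit.draw` / `circuit_drawer`; a minimal, self-contained illustration against that API:)
```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 2)

# reverse_bits=True flips the qubit ordering in the rendered diagram.
print(qc.draw(output='text', reverse_bits=True))
```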
You may consider using `matplotlib_circuit_drawer`.
```
from qiskit import QuantumProgram
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from numpy import pi
%matplotlib inline
%config InlineBackend.figure_format = "svg"
qp = QuantumProgram()
qr = qp.create_quantum_register('qr', 3)
qc = qp.create_circuit('qc', [qr])
c_rk = lambda k, ctl, tgt: qc.cu1(2*pi/(2**k), ctl, tgt)
qc.h(qr[0])
c_rk(2, qr[1], qr[0])
c_rk(3, qr[2], qr[0])
qc.h(qr[1])
c_rk(2, qr[1], qr[2])
qc.h(qr[2])
my_style = {"usepiformat":True}
circuit_drawer(qc, style=my_style)
```

It allows you to draw the boxed control notation.
Please refer to the tutorial here:
https://nbviewer.jupyter.org/github/Qiskit/qiskit-tutorial/blob/master/reference/tools/matplotlib_circuit_drawer.ipynb
```
from qiskit import QuantumProgram
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from numpy import pi
%matplotlib inline
%config InlineBackend.figure_format = "jpeg"
qp = QuantumProgram()
qr = qp.create_quantum_register('qr', 3)
qc = qp.create_circuit('qc', [qr])
c_rk = lambda k, ctl, tgt: qc.cu1(2*pi/(2**k), ctl, tgt)
qc.h(qr[0])
c_rk(2, qr[1], qr[0])
c_rk(3, qr[2], qr[0])
qc.h(qr[1])
c_rk(2, qr[1], qr[2])
qc.h(qr[2])
my_style = {"usepiformat":True, "latexdrawerstyle":False}
circuit_drawer(qc, style=my_style)
```

@diego-plan9 the latex drawer requires a latex installation (several GBs) and the installation can be technically difficult in various environments. I suggest we close this issue by making the matplotlib drawer the default drawer.
> @diego-plan9 the latex drawer requires a latex installation (several GBs) and the installation can be technically difficult in various environments. I suggest we close this issue by making the matplotlib drawer the default drawer.
Hmm, currently if using `circuit_drawer()`, the latex drawer [is tried first, but falls back to the matplotlib one](https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/tools/visualization/_circuit_visualization.py#L76); and the user can choose which one to use by calling the function individually (`latex_circuit_drawer()` or `matplotlib_circuit_drawer()`).
Which I think covers the case you mention, if I'm reading it right: if the user is not willing or able to install latex and the dependencies (which indeed are huge), in practice the matplotlib one will be used as the "default" when calling `circuit_drawer()`. Does this behaviour map onto your request? If not, could you comment on https://github.com/Qiskit/qiskit-terra/issues/612, which covers general improvements to the circuit drawer as a whole?
A summary about this issue:
There is an error in the spacing of `cu1` gate in the LaTeX drawer (as explained in https://github.com/Qiskit/qiskit-terra/issues/693#issuecomment-408645734). @ajavadia "will try to adjust it" (I'm assigning @ajavadia, feel free to remove yourself). Also, @qruzar mentioned the `reversebits` issue, which should be addressed by merged PR #762.
The only thing left to do for this issue is to increase spacing of parameters for better readability. I adjusted the title accordingly.
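To inspect the spacing problem without a full LaTeX toolchain, the generated source can be dumped directly; a minimal sketch using the `QuantumCircuit` / `draw` API from the `quantumcircuit.py` listing above (the `\Qcircuit` header it prints, with its `@C=...em`/`@R=...em` spacing and `@!R` modifier, is what the patch below adjusts):
```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 1)
qc.cu1(2.3, 0, 2)
qc.measure(1, 0)

# 'latex_source' returns the LaTeX string instead of rendering an image,
# so the \Qcircuit spacing options can be read off directly.
print(qc.draw(output='latex_source'))
```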
|
2019-05-25T18:13:31Z
|
<patch>
diff --git a/qiskit/visualization/latex.py b/qiskit/visualization/latex.py
--- a/qiskit/visualization/latex.py
+++ b/qiskit/visualization/latex.py
@@ -113,7 +113,7 @@ def __init__(self, qregs, cregs, ops, scale, style=None,
self.column_separation = 0.5
# em points of separation between circuit row
- self.row_separation = 0.0
+ self.row_separation = 0
# presence of "box" or "target" determines row spacing
self.has_box = False
@@ -162,7 +162,7 @@ def latex(self, aliases=None):
\begin{document}
\begin{equation*}"""
qcircuit_line = r"""
- \Qcircuit @C=%.1fem @R=%.1fem @!R {
+ \Qcircuit @C=%.1fem @R=%.1fem @! {
"""
output = io.StringIO()
output.write(header_1)
</patch>
|
[]
|
[]
| |||
Lightning-AI__lightning-1271
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for loading flattened meta_tags.csv
## 🚀 Feature
### Motivation
PL+TensorBoard can log a hierarchical dict after #1152; however, `meta_tags.csv` has been disabled by the change.
### Pitch
- Turn `meta_tags.csv` back into a hierarchical dict by splitting the flattened keys on their delimiter (see the sketch below).
### Alternatives
62de7948634b1cd8c7e494f4cd9a02625cd3f602
### Additional context
1. We can consider deprecating `meta_tags.csv` and adopting `config.yaml` instead.
1. We can interpret primitive-type parameters through these files, so we need to rethink parameter sanitization or update the docs.
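A minimal sketch of the un-flattening step (illustrative only: the helper name is made up here, and it assumes the logger flattened nested keys with a `/`-style delimiter):
```python
def unflatten_dict(flat, delimiter='/'):
    """Rebuild a nested dict from flattened keys such as 'optimizer/lr'."""
    nested = {}
    for key, value in flat.items():
        *parents, leaf = key.split(delimiter)
        node = nested
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return nested

# e.g. rows read back from meta_tags.csv
print(unflatten_dict({'optimizer/lr': 0.02, 'optimizer/name': 'adam', 'batch_size': 32}))
# {'optimizer': {'lr': 0.02, 'name': 'adam'}, 'batch_size': 32}
```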
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.**
8
9
10 [](https://badge.fury.io/py/pytorch-lightning)
11 [](https://pepy.tech/project/pytorch-lightning)
12 [](https://codecov.io/gh/PyTorchLightning/pytorch-lightning)
13 [](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
14
15 [](https://pytorch-lightning.readthedocs.io/en/stable/)
16 [](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ)
17 [](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
18 [](https://shields.io/)
19
20 <!--
21 removed until codecov badge isn't empty. Likely a config error showing nothing on master.
22 [](https://codecov.io/gh/Borda/pytorch-lightning)
23 -->
24 </div>
25
26 ---
27 ## Continuous Integration
28 <center>
29
30 | System / PyTorch ver. | 1.1 (min. reg) | 1.2 | 1.3 | 1.4 | 1.5 (latest) |
31 | :---: | :---: | :---: | :---: | :---: | :---: |
32 | Linux py3.6 [CPU] | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) |
33 | Linux py3.7 [GPU] | - | - | - | - | [](http://35.192.60.23/PyTorchLightning/pytorch-lightning) |
34 | Linux py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
35 | OSX py3.6 / py3.7 / py3.8| [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
36 | Windows py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - |
37
38 </center>
39
40 Simple installation from PyPI
41 ```bash
42 pip install pytorch-lightning
43 ```
44
45 ## Docs
46 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
47 - [0.7.5](https://pytorch-lightning.readthedocs.io/en/0.7.5/)
48 - [0.7.3](https://pytorch-lightning.readthedocs.io/en/0.7.3/)
49 - [0.7.1](https://pytorch-lightning.readthedocs.io/en/0.7.1/)
50 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/)
51 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/)
52
53 ## Refactoring your PyTorch code + benefits + full walk-through
54 [](https://www.youtube.com/watch?v=QHww1JH7IDU)
55
56 ## Demo
57 Here's a minimal example without a validation or test loop.
58
59 ```python
60 # this is just a plain nn.Module with some structure
61
62 class LitClassifier(pl.LightningModule):
63
64 def __init__(self):
65 super().__init__()
66 self.l1 = torch.nn.Linear(28 * 28, 10)
67
68 def forward(self, x):
69 return torch.relu(self.l1(x.view(x.size(0), -1)))
70
71 def training_step(self, batch, batch_nb):
72 x, y = batch
73 loss = F.cross_entropy(self(x), y)
74 tensorboard_logs = {'train_loss': loss}
75 return {'loss': loss, 'log': tensorboard_logs}
76
77 def configure_optimizers(self):
78 return torch.optim.Adam(self.parameters(), lr=0.02)
79
80 # train!
81 train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
82
83 model = LitClassifier()
84 trainer = pl.Trainer(gpus=8, precision=16)
85 trainer.fit(model, train_loader)
86 ```
87
88 Other examples:
89 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5)
90 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj)
91 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5)
92 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3)
93
94 ## What is it?
95 [READ THIS QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html)
96
97 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering.
98 It's more of a PyTorch style-guide than a framework.
99
100 In Lightning, you organize your code into 3 distinct categories:
101
102 1. Research code (goes in the LightningModule).
103 2. Engineering code (you delete, and is handled by the Trainer).
104 3. Non-essential research code (logging, etc... this goes in Callbacks).
105
106 Here's an example of how to refactor your research code into a [LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html).
107
108 
109
110 The rest of the code is automated by the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)!
111 
112
113 ## Testing Rigour
114 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests).
115
116 In fact, we also train a few models using a vanilla PyTorch loop and compare with the same model trained using the Trainer to make sure we achieve the EXACT same results. [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).
117
118 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.
119
120 ## How flexible is it?
121 As you see, you're just organizing your PyTorch code - there's no abstraction.
122
123 And for the stuff that the Trainer abstracts out you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backwards pass.
124
125 For example, here you could do your own backward pass
126
127 ```python
128 class LitModel(LightningModule):
129 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,
130 second_order_closure=None):
131 optimizer.step()
132 optimizer.zero_grad()
133 ```
134
135 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer.
136
137 ## Who is Lightning for?
138 - Professional researchers
139 - PhD students
140 - Corporate production teams
141
142 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :)
143
144 ## What does lightning control for me?
145
146 Everything in Blue!
147 This is how lightning separates the science (red) from the engineering (blue).
148
149 
150
151 ## How much effort is it to convert?
152 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour.
153 If your code IS a mess, then you needed to clean it up anyhow ;)
154
155 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).
156 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU).
157
158
159 ## Starting a new project?
160 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
161
162 ## Why do I want to use lightning?
163 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you.
164
165 ## Support
166 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, PhD students from top AI labs.
167 - 100+ community contributors.
168
169 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.
170
171 ---
172
173 ## README Table of Contents
174 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it)
175 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me)
176 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard)
177 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable)
178 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples)
179 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials)
180 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help)
181 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md)
182 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge)
183 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles)
184 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team)
185 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq)
186
187 ---
188
189 ## Realistic example
190 Here's how you would organize a realistic PyTorch project into Lightning.
191
192 
193
194 The LightningModule defines a *system* such as seq-2-seq, GAN, etc...
195 It can ALSO define a simple classifier.
196
197 In summary, you:
198
199 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html)
200 ```python
201 class LitSystem(pl.LightningModule):
202
203 def __init__(self):
204 super().__init__()
205 # not the best model...
206 self.l1 = torch.nn.Linear(28 * 28, 10)
207
208 def forward(self, x):
209 return torch.relu(self.l1(x.view(x.size(0), -1)))
210
211 def training_step(self, batch, batch_idx):
212 ...
213 ```
214
215 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)
216 ```python
217 from pytorch_lightning import Trainer
218
219 model = LitSystem()
220
221 # most basic trainer, uses good defaults
222 trainer = Trainer()
223 trainer.fit(model)
224 ```
225
226 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
227
228 ## What types of research works?
229 Anything! Remember that this is just organized PyTorch code.
230 The Training step defines the core complexity found in the training loop.
231
232 #### Could be as complex as a seq2seq
233
234 ```python
235 # define what happens for training here
236 def training_step(self, batch, batch_idx):
237 x, y = batch
238
239 # define your own forward and loss calculation
240 hidden_states = self.encoder(x)
241
242 # even as complex as a seq-2-seq + attn model
243 # (this is just a toy, non-working example to illustrate)
244 start_token = '<SOS>'
245 last_hidden = torch.zeros(...)
246 loss = 0
247 for step in range(max_seq_len):
248 attn_context = self.attention_nn(hidden_states, start_token)
249 pred = self.decoder(start_token, attn_context, last_hidden)
250 last_hidden = pred
251 pred = self.predict_nn(pred)
252 loss += self.loss(last_hidden, y[step])
253
254 #toy example as well
255 loss = loss / max_seq_len
256 return {'loss': loss}
257 ```
258
259 #### Or as basic as CNN image classification
260
261 ```python
262 # define what happens for validation here
263 def validation_step(self, batch, batch_idx):
264 x, y = batch
265
266 # or as basic as a CNN classification
267 out = self(x)
268 loss = my_loss(out, y)
269 return {'loss': loss}
270 ```
271
272 And without changing a single line of code, you could run on CPUs
273 ```python
274 trainer = Trainer(max_epochs=1)
275 ```
276
277
278 Or GPUs
279 ```python
280 # 8 GPUs
281 trainer = Trainer(max_epochs=1, gpus=8)
282
283 # 256 GPUs
284 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
285 ```
286
287 Or TPUs
288 ```python
289 trainer = Trainer(num_tpu_cores=8)
290 ```
291
292 When you're done training, run the test accuracy
293 ```python
294 trainer.test()
295 ```
296
297 ## Visualization
298 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks
299
300 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html)
301 - [MLFlow](https://mlflow.org/)
302 - [Neptune.ai](https://neptune.ai/)
303 - [Comet.ml](https://www.comet.ml/site/)
304 - [Wandb](https://www.wandb.com/)
305 - [Trains](https://github.com/allegroai/trains)
306 - ...
307
308 
309
310
311 ## Lightning automates 40+ parts of DL/ML research
312 - GPU training
313 - Distributed GPU (cluster) training
314 - TPU training
315 - EarlyStopping
316 - Logging/Visualizing
317 - Checkpointing
318 - Experiment management
319 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases)
320
321
322 ## Examples
323 Check out this awesome list of research papers and implementations done with Lightning.
324
325 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer)
326 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0)
327 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py)
328 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting)
329 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_)
330 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D)
331 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture)
332 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes)
333 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks)
334 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning)
335 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf)
336 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification)
337 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE)
338
339 ## Tutorials
340 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started.
341 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials).
342
343 ---
344
345 ## Asking for help
346 Welcome to the Lightning community!
347
348 If you have any questions, feel free to:
349 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
350 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
351 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
352 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ).
353
354 ---
355 ## FAQ
356 **How do I use Lightning for rapid research?**
357 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html)
358
359 **Why was Lightning created?**
360 Lightning has 3 goals in mind:
361
362 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
363 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
364 3. Democratizing PyTorch power user features. Distributed training? 16-bit? Know you need them but don't want to take the time to implement? All good... these come built into Lightning.
365
366 **How does Lightning compare with Ignite and fast.ai?**
367 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
368
369 **Is this another library I have to learn?**
370 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions!
371
372 **Are there plans to support Python 2?**
373 Nope.
374
375 **Are there plans to support virtualenv?**
376 Nope. Please use anaconda or miniconda.
377 ```bash
378 conda activate my_env
379 pip install pytorch-lightning
380 ```
381
382 **Which PyTorch versions do you support?**
383 - **PyTorch 1.1.0**
384 ```bash
385 # install pytorch 1.1.0 using the official instructions
386
387 # install test-tube 0.6.7.6 which supports 1.1.0
388 pip install test-tube==0.6.7.6
389
390 # install latest Lightning version without upgrading deps
391 pip install -U --no-deps pytorch-lightning
392 ```
393 - **PyTorch 1.2.0+**
394 ```python
395 pip install pytorch-lightning
396 ```
397
398 ## Custom installation
399
400 ### Bleeding edge
401
402 If you can't wait for the next release, install the most up to date code with:
403 * using GIT (locally clone whole repo with full history)
404 ```bash
405 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
406 ```
407 * using instant zip (last state of the repo without git history)
408 ```bash
409 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade
410 ```
411
412 ### Any release installation
413
414 You can also install any past release `0.X.Y` from this repository:
415 ```bash
416 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade
417 ```
418
419 ### Lightning team
420
421 #### Leads
422 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder)
423 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :)
424 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder)
425 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder)
426 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite)
427
428 #### Core Maintainers
429
430 - Nick Eggert [(neggert)](https://github.com/neggert)
431 - Jeff Ling [(jeffling)](https://github.com/jeffling)
432 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan)
433 - Tullie Murrell [(tullie)](https://github.com/tullie)
434 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli)
435
436 #### Funding
437 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can
438 hire a full-time staff, attend conferences, and move faster through implementing features you request.
439
440 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects
441 have gone on to fund operations through things like support and special help for big corporations!
442
443 If you are one of these corporations, please feel free to reach out to [email protected]!
444
445 ## Bibtex
446 If you want to cite the framework feel free to use this (but only if you loved it 😊):
447
448 ```bibtex
449 @article{falcon2019pytorch,
450 title={PyTorch Lightning},
451 author={Falcon, WA},
452 journal={GitHub. Note: https://github.com/williamFalcon/pytorch-lightning Cited by},
453 volume={3},
454 year={2019}
455 }
456 ```
457
[end of README.md]
[start of pytorch_lightning/loggers/tensorboard.py]
1 """
2 TensorBoard
3 -----------
4 """
5
6 import csv
7 import os
8 from argparse import Namespace
9 from typing import Optional, Dict, Union, Any
10 from warnings import warn
11
12 import torch
13 from pkg_resources import parse_version
14 from torch.utils.tensorboard import SummaryWriter
15
16 from pytorch_lightning import _logger as log
17 from pytorch_lightning.loggers.base import LightningLoggerBase
18 from pytorch_lightning.utilities import rank_zero_only
19
20
21 class TensorBoardLogger(LightningLoggerBase):
22 r"""
23 Log to local file system in `TensorBoard <https://www.tensorflow.org/tensorboard>`_ format.
24 Implemented using :class:`~torch.utils.tensorboard.SummaryWriter`. Logs are saved to
25 ``os.path.join(save_dir, name, version)``. This is the default logger in Lightning, it comes
26 preinstalled.
27
28 Example:
29 >>> from pytorch_lightning import Trainer
30 >>> from pytorch_lightning.loggers import TensorBoardLogger
31 >>> logger = TensorBoardLogger("tb_logs", name="my_model")
32 >>> trainer = Trainer(logger=logger)
33
34 Args:
35 save_dir: Save directory
36 name: Experiment name. Defaults to ``'default'``. If it is the empty string then no per-experiment
37 subdirectory is used.
38 version: Experiment version. If version is not specified the logger inspects the save
39 directory for existing versions, then automatically assigns the next available version.
40 If it is a string then it is used as the run-specific subdirectory name,
41 otherwise ``'version_${version}'`` is used.
42 \**kwargs: Other arguments are passed directly to the :class:`SummaryWriter` constructor.
43
44 """
45 NAME_CSV_TAGS = 'meta_tags.csv'
46
47 def __init__(self,
48 save_dir: str,
49 name: Optional[str] = "default",
50 version: Optional[Union[int, str]] = None,
51 **kwargs):
52 super().__init__()
53 self.save_dir = save_dir
54 self._name = name
55 self._version = version
56
57 self._experiment = None
58 self.tags = {}
59 self._kwargs = kwargs
60
61 @property
62 def root_dir(self) -> str:
63 """
64 Parent directory for all tensorboard checkpoint subdirectories.
65 If the experiment name parameter is ``None`` or the empty string, no experiment subdirectory is used
66 and the checkpoint will be saved in "save_dir/version_dir"
67 """
68 if self.name is None or len(self.name) == 0:
69 return self.save_dir
70 else:
71 return os.path.join(self.save_dir, self.name)
72
73 @property
74 def log_dir(self) -> str:
75 """
76 The directory for this run's tensorboard checkpoint. By default, it is named
77 ``'version_${self.version}'`` but it can be overridden by passing a string value
78 for the constructor's version parameter instead of ``None`` or an int.
79 """
80 # create a pseudo standard path ala test-tube
81 version = self.version if isinstance(self.version, str) else f"version_{self.version}"
82 log_dir = os.path.join(self.root_dir, version)
83 return log_dir
84
85 @property
86 def experiment(self) -> SummaryWriter:
87 r"""
88 Actual tensorboard object. To use TensorBoard features in your
89 :class:`~pytorch_lightning.core.lightning.LightningModule` do the following.
90
91 Example::
92
93 self.logger.experiment.some_tensorboard_function()
94
95 """
96 if self._experiment is not None:
97 return self._experiment
98
99 os.makedirs(self.root_dir, exist_ok=True)
100 self._experiment = SummaryWriter(log_dir=self.log_dir, **self._kwargs)
101 return self._experiment
102
103 @rank_zero_only
104 def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],
105 metrics: Optional[Dict[str, Any]] = None) -> None:
106 params = self._convert_params(params)
107 params = self._flatten_dict(params)
108 sanitized_params = self._sanitize_params(params)
109
110 if parse_version(torch.__version__) < parse_version("1.3.0"):
111 warn(
112 f"Hyperparameter logging is not available for Torch version {torch.__version__}."
113 " Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable"
114 " hyperparameter logging."
115 )
116 else:
117 from torch.utils.tensorboard.summary import hparams
118
119 if metrics is None:
120 metrics = {}
121 exp, ssi, sei = hparams(sanitized_params, metrics)
122 writer = self.experiment._get_file_writer()
123 writer.add_summary(exp)
124 writer.add_summary(ssi)
125 writer.add_summary(sei)
126
127 if metrics:
128 # necessary for hparam comparison with metrics
129 self.log_metrics(metrics)
130
131 # some alternative should be added
132 self.tags.update(sanitized_params)
133
134 @rank_zero_only
135 def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
136 for k, v in metrics.items():
137 if isinstance(v, torch.Tensor):
138 v = v.item()
139 self.experiment.add_scalar(k, v, step)
140
141 @rank_zero_only
142 def save(self) -> None:
143 super().save()
144 try:
145 self.experiment.flush()
146 except AttributeError:
147 # you are using a PT version (<v1.2) which does not implement flush
148 self.experiment._get_file_writer().flush()
149
150 dir_path = self.log_dir
151 if not os.path.isdir(dir_path):
152 dir_path = self.save_dir
153
154 # prepare the file path
155 meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)
156
157 # save the metatags file
158 with open(meta_tags_path, 'w', newline='') as csvfile:
159 fieldnames = ['key', 'value']
160 writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
161 writer.writerow({'key': 'key', 'value': 'value'})
162 for k, v in self.tags.items():
163 writer.writerow({'key': k, 'value': v})
164
165 @rank_zero_only
166 def finalize(self, status: str) -> None:
167 self.save()
168
169 @property
170 def name(self) -> str:
171 return self._name
172
173 @property
174 def version(self) -> int:
175 if self._version is None:
176 self._version = self._get_next_version()
177 return self._version
178
179 def _get_next_version(self):
180 root_dir = os.path.join(self.save_dir, self.name)
181
182 if not os.path.isdir(root_dir):
183 log.warning('Missing logger folder: %s', root_dir)
184 return 0
185
186 existing_versions = []
187 for d in os.listdir(root_dir):
188 if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
189 existing_versions.append(int(d.split("_")[1]))
190
191 if len(existing_versions) == 0:
192 return 0
193
194 return max(existing_versions) + 1
195
[end of pytorch_lightning/loggers/tensorboard.py]
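For reference, a short sketch (an illustration of how the file above is laid out, not part of the logger) of reading the flat `meta_tags.csv` that `TensorBoardLogger.save()` writes back into a dict:

```python
# Reads the csv written by TensorBoardLogger.save() above: a 'key,value' header
# followed by one row per tag. Values come back as strings and any nesting is
# lost, which is exactly the limitation raised in the issue.
import csv

def read_meta_tags(path: str) -> dict:
    with open(path, newline='') as fp:
        rows = list(csv.reader(fp))
    return {key: value for key, value in rows[1:]}  # skip the header row
```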
[start of pytorch_lightning/trainer/training_tricks.py]
1 import math
2 import sys
3 from abc import ABC, abstractmethod
4 import gc
5 import os
6 from typing import Optional
7
8 import torch
9 from torch import Tensor
10 from torch.utils.data import DataLoader
11
12 from pytorch_lightning import _logger as log
13 from pytorch_lightning.core.lightning import LightningModule
14 from pytorch_lightning.callbacks import GradientAccumulationScheduler
15 from pytorch_lightning.utilities.exceptions import MisconfigurationException
16 from pytorch_lightning.utilities.memory import is_oom_error, garbage_collection_cuda
17
18 EPSILON = 1e-6
19 EPSILON_FP16 = 1e-5
20
21
22 class TrainerTrainingTricksMixin(ABC):
23
24 # this is just a summary on variables used in this abstract class,
25 # the proper values/initialisation should be done in child class
26 gradient_clip_val: ...
27 precision: ...
28 on_gpu: bool
29
30 @abstractmethod
31 def get_model(self):
32 """Warning: this is just empty shell for code implemented in other class."""
33
34 @abstractmethod
35 def save_checkpoint(self, *args):
36 """Warning: this is just empty shell for code implemented in other class."""
37
38 @abstractmethod
39 def restore(self, *args):
40 """Warning: this is just empty shell for code implemented in other class."""
41
42 @abstractmethod
43 def fit(self, *args):
44 """Warning: this is just empty shell for code implemented in other class."""
45
46 def clip_gradients(self):
47
48 # this code is a modification of torch.nn.utils.clip_grad_norm_
49 # with TPU support based on https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
50 if self.gradient_clip_val > 0:
51 model = self.get_model()
52 parameters = model.parameters()
53 max_norm = float(self.gradient_clip_val)
54 norm_type = float(2.0)
55 if isinstance(parameters, torch.Tensor):
56 parameters = [parameters]
57 parameters = list(filter(lambda p: p.grad is not None, parameters))
58 if norm_type == math.inf:
59 total_norm = max(p.grad.data.abs().max() for p in parameters)
60 else:
61 device = parameters[0].device
62 total_norm = torch.zeros([], device=device if parameters else None)
63 for p in parameters:
64 param_norm = p.grad.data.pow(norm_type).sum()
65 total_norm.add_(param_norm)
66 total_norm = (total_norm ** (1. / norm_type))
67 eps = EPSILON_FP16 if self.precision == 16 else EPSILON
68 clip_coef = torch.tensor(max_norm, device=device) / (total_norm + eps)
69 for p in parameters:
70 p.grad.data.mul_(torch.where(clip_coef < 1, clip_coef, torch.tensor(1., device=device)))
71
72 def print_nan_gradients(self) -> None:
73 model = self.get_model()
74 for param in model.parameters():
75 if (param.grad is not None) and torch.isnan(param.grad.float()).any():
76 log.info(param, param.grad)
77
78 def detect_nan_tensors(self, loss: Tensor) -> None:
79 model = self.get_model()
80
81 # check if loss is nan
82 if not torch.isfinite(loss).all():
83 raise ValueError(
84 'The loss returned in `training_step` is nan or inf.'
85 )
86 # check if a network weight is nan
87 for name, param in model.named_parameters():
88 if not torch.isfinite(param).all():
89 self.print_nan_gradients()
90 raise ValueError(
91 f'Detected nan and/or inf values in `{name}`.'
92 ' Check your forward pass for numerically unstable operations.'
93 )
94
95 def configure_accumulated_gradients(self, accumulate_grad_batches):
96 if isinstance(accumulate_grad_batches, dict):
97 self.accumulation_scheduler = GradientAccumulationScheduler(accumulate_grad_batches)
98 elif isinstance(accumulate_grad_batches, int):
99 schedule = {1: accumulate_grad_batches}
100 self.accumulation_scheduler = GradientAccumulationScheduler(schedule)
101 else:
102 raise TypeError("Gradient accumulation supports only int and dict types")
103
104 def scale_batch_size(self,
105 model: LightningModule,
106 mode: str = 'power',
107 steps_per_trial: int = 3,
108 init_val: int = 2,
109 max_trials: int = 25,
110 batch_arg_name: str = 'batch_size'):
111 r"""
112 Will iteratively try to find the largest batch size for a given model
113 that does not give an out of memory (OOM) error.
114
115 Args:
116 model: Model to fit.
117
118 mode: string setting the search mode. Either `power` or `binsearch`.
119 If mode is `power` we keep multiplying the batch size by 2, until
120 we get an OOM error. If mode is 'binsearch', we will initially
121 also keep multiplying by 2 and after encountering an OOM error
122 do a binary search between the last successful batch size and the
123 batch size that failed.
124
125 steps_per_trial: number of steps to run with a given batch size.
126 Ideally 1 should be enough to test if an OOM error occurs,
127 however in practice a few are needed
128
129 init_val: initial batch size to start the search with
130
131 max_trials: max number of batch size increases done before the
132 algorithm is terminated
133
134 """
135 if not hasattr(model.hparams, batch_arg_name):
136 raise MisconfigurationException(f'Field {batch_arg_name} not found in `model.hparams`')
137
138 if hasattr(model.train_dataloader, 'patch_loader_code'):
139 raise MisconfigurationException('The batch scaling feature cannot be used with dataloaders'
140 ' passed directly to `.fit()`. Please disable the feature or'
141 ' incorporate the dataloader into the model.')
142
143 # Arguments we adjust during the batch size finder, save for restoring
144 self.__scale_batch_dump_params()
145
146 # Set to values that are required by the algorithm
147 self.__scale_batch_reset_params(model, steps_per_trial)
148
149 # Save initial model, that is loaded after batch size is found
150 save_path = os.path.join(self.default_root_dir, 'temp_model.ckpt')
151 self.save_checkpoint(str(save_path))
152
153 if self.progress_bar_callback:
154 self.progress_bar_callback.disable()
155
156 # Initially we just double in size until an OOM is encountered
157 new_size = _adjust_batch_size(self, value=init_val) # initially set to init_val
158 if mode == 'power':
159 new_size = _run_power_scaling(self, model, new_size, batch_arg_name, max_trials)
160 elif mode == 'binsearch':
161 new_size = _run_binsearch_scaling(self, model, new_size, batch_arg_name, max_trials)
162 else:
163 raise ValueError('mode in method `scale_batch_size` can only be `power` or `binsearch`')
164
165 garbage_collection_cuda()
166 log.info(f'Finished batch size finder, will continue with full run using batch size {new_size}')
167
168 # Restore initial state of model
169 self.restore(str(save_path), on_gpu=self.on_gpu)
170 os.remove(save_path)
171
172 # Finish by resetting variables so trainer is ready to fit model
173 self.__scale_batch_restore_params()
174 if self.progress_bar_callback:
175 self.progress_bar_callback.enable()
176
177 return new_size
178
179 def __scale_batch_dump_params(self):
180 # Prevent going into infinite loop
181 self.__dumped_params = {
182 'max_steps': self.max_steps,
183 'weights_summary': self.weights_summary,
184 'logger': self.logger,
185 'callbacks': self.callbacks,
186 'checkpoint_callback': self.checkpoint_callback,
187 'early_stop_callback': self.early_stop_callback,
188 'enable_early_stop': self.enable_early_stop,
189 'auto_scale_batch_size': self.auto_scale_batch_size,
190 'train_percent_check': self.train_percent_check,
191 'model': self.model,
192 }
193
194 def __scale_batch_reset_params(self, model, steps_per_trial):
195 self.auto_scale_batch_size = None # prevent recursion
196 self.max_steps = steps_per_trial # take few steps
197 self.weights_summary = None # not needed before full run
198 self.logger = None # not needed before full run
199 self.callbacks = [] # not needed before full run
200 self.checkpoint_callback = False # required for saving
201 self.early_stop_callback = None
202 self.enable_early_stop = False
203 self.train_percent_check = 1.0
204 self.optimizers, self.schedulers = [], [] # required for saving
205 self.model = model # required for saving
206
207 def __scale_batch_restore_params(self):
208 self.max_steps = self.__dumped_params['max_steps']
209 self.weights_summary = self.__dumped_params['weights_summary']
210 self.logger = self.__dumped_params['logger']
211 self.callbacks = self.__dumped_params['callbacks']
212 self.checkpoint_callback = self.__dumped_params['checkpoint_callback']
213 self.auto_scale_batch_size = self.__dumped_params['auto_scale_batch_size']
214 self.early_stop_callback = self.__dumped_params['early_stop_callback']
215 self.enable_early_stop = self.__dumped_params['enable_early_stop']
216 self.train_percent_check = self.__dumped_params['train_percent_check']
217 self.model = self.__dumped_params['model']
218 del self.__dumped_params
219
220
221 def _adjust_batch_size(trainer,
222 batch_arg_name: str = 'batch_size',
223 factor: float = 1.0,
224 value: Optional[int] = None,
225 desc: str = None):
226 """ Function for adjusting the batch size. It is expected that the user
227 has provided a model that has a hparam field called `batch_size` i.e.
228 `model.hparams.batch_size` should exist.
229
230 Args:
231 trainer: instance of pytorch_lightning.Trainer
232
233 batch_arg_name: field where batch_size is stored in `model.hparams`
234
235 factor: value which the old batch size is multiplied by to get the
236 new batch size
237
238 value: if a value is given, will override the batch size with this value.
239 Note that the value of `factor` will not have an effect in this case
240
241 desc: either `succeeded` or `failed`. Used purely for logging
242
243 """
244 model = trainer.get_model()
245 batch_size = getattr(model.hparams, batch_arg_name)
246 if value:
247 setattr(model.hparams, batch_arg_name, value)
248 new_size = value
249 if desc:
250 log.info(f'Batch size {batch_size} {desc}, trying batch size {new_size}')
251 else:
252 new_size = int(batch_size * factor)
253 if desc:
254 log.info(f'Batch size {batch_size} {desc}, trying batch size {new_size}')
255 setattr(model.hparams, batch_arg_name, new_size)
256 return new_size
257
258
259 def _run_power_scaling(trainer, model, new_size, batch_arg_name, max_trials):
260 """ Batch scaling mode where the size is doubled at each iteration until an
261 OOM error is encountered. """
262 for _ in range(max_trials):
263 garbage_collection_cuda()
264 trainer.global_step = 0 # reset after each try
265 try:
266 # Try fit
267 trainer.fit(model)
268 # Double in size
269 new_size = _adjust_batch_size(trainer, batch_arg_name, factor=2.0, desc='succeeded')
270 except RuntimeError as exception:
271 # Only these errors should trigger an adjustment
272 if is_oom_error(exception):
273 # If we fail in power mode, halve the size and return
274 garbage_collection_cuda()
275 new_size = _adjust_batch_size(trainer, batch_arg_name, factor=0.5, desc='failed')
276 break
277 else:
278 raise # some other error not memory related
279 return new_size
280
281
282 def _run_binsearch_scaling(trainer, model, new_size, batch_arg_name, max_trials):
283 """ Batch scaling mode where the size is initially is doubled at each iteration
284 until an OOM error is encountered. Hereafter, the batch size is further
285 refined using a binary search """
286 high = None
287 count = 0
288 while True:
289 garbage_collection_cuda()
290 trainer.global_step = 0 # reset after each try
291 try:
292 # Try fit
293 trainer.fit(model)
294 count += 1
295 if count > max_trials:
296 break
297 # Double in size
298 low = new_size
299 if high:
300 if high - low <= 1:
301 break
302 midval = (high + low) // 2
303 new_size = _adjust_batch_size(trainer, batch_arg_name, value=midval, desc='succeeded')
304 else:
305 new_size = _adjust_batch_size(trainer, batch_arg_name, factor=2.0, desc='succeeded')
306 except RuntimeError as exception:
307 # Only these errors should trigger an adjustment
308 if is_oom_error(exception):
309 # If we fail, record the upper bound and binary-search downwards
310 garbage_collection_cuda()
311 high = new_size
312 midval = (high + low) // 2
313 new_size = _adjust_batch_size(trainer, value=midval, desc='failed')
314 if high - low <= 1:
315 break
316 else:
317 raise # some other error not memory related
318 return new_size
319
[end of pytorch_lightning/trainer/training_tricks.py]
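The `power`/`binsearch` strategies used by `scale_batch_size` above reduce to the following standalone sketch (a simplified illustration with an abstract `probe` callable, not the Trainer code):

```python
# Double until `probe` fails (power phase), then binary-search between the last
# success and the first failure (binsearch phase).
def find_max(probe, init=2, max_trials=25):
    low, high, size = None, None, init
    for _ in range(max_trials):
        if probe(size):
            low = size
            if high is None:
                size *= 2                 # power phase: keep doubling
            elif high - low <= 1:
                break                     # search window is closed
            else:
                size = (low + high) // 2  # binsearch phase
        else:
            high = size
            if low is None:
                size = max(1, size // 2)  # never succeeded yet, back off
            else:
                size = (low + high) // 2
                if high - low <= 1:
                    break
    return low

print(find_max(lambda s: s <= 100))  # -> 100
```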
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Lightning-AI/lightning
|
35fe2efe270d21059727fefa5df149d99e4ce33c
|
Add support for loading flattened meta_tags.csv
## 🚀 Feature
### Motivation
PL+TensorBoard can log a hierarchical dict after #1152; however, `meta_tags.csv` has been disabled by the change.
### Pitch
- Turn `meta_tags.csv` back into a hierarchical dict based on its delimiter.
### Alternatives
62de7948634b1cd8c7e494f4cd9a02625cd3f602
### Additional context
1. We can consider the deprecation of `meta_tags.csv`, then adopt `config.yaml`.
1. We can interpret primitive-type parameters from the files, so we need to rethink parameter sanitization or update the docs.
|
2020-03-28T09:34:55Z
|
<patch>
diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -1,6 +1,7 @@
import collections
import inspect
import os
+import warnings
from abc import ABC, abstractmethod
from argparse import Namespace
from typing import Any, Callable, Dict, List, Optional, Tuple, Union, Sequence
@@ -16,7 +17,7 @@
from pytorch_lightning.core.grads import GradInformation
from pytorch_lightning.core.hooks import ModelHooks
from pytorch_lightning.core.memory import ModelSummary
-from pytorch_lightning.core.saving import ModelIO, load_hparams_from_tags_csv, update_hparams
+from pytorch_lightning.core.saving import ModelIO, load_hparams_from_tags_csv, load_hparams_from_yaml, update_hparams
from pytorch_lightning.core.properties import DeviceDtypeModuleMixin
from pytorch_lightning.overrides.data_parallel import LightningDistributedDataParallel
from pytorch_lightning.utilities.exceptions import MisconfigurationException
@@ -1438,29 +1439,49 @@ def load_from_checkpoint(
cls,
checkpoint_path: str,
map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None,
- tags_csv: Optional[str] = None,
+ hparams_file: Optional[str] = None,
+ tags_csv: Optional[str] = None, # backward compatible, todo: remove in v0.9.0
hparam_overrides: Optional[Dict] = None,
*args, **kwargs
) -> 'LightningModule':
r"""
Primary way of loading a model from a checkpoint. When Lightning saves a checkpoint
it stores the hyperparameters in the checkpoint if you initialized your :class:`LightningModule`
- with an argument called ``hparams`` which is a :class:`~argparse.Namespace`
- (output of :meth:`~argparse.ArgumentParser.parse_args` when parsing command line arguments).
+ with an argument called ``hparams`` which is an object of :class:`~dict` or
+ :class:`~argparse.Namespace` (output of :meth:`~argparse.ArgumentParser.parse_args`
+ when parsing command line arguments).
+ If you want `hparams` to have a hierarchical structure, you have to define it as :class:`~dict`.
Any other arguments specified through \*args and \*\*kwargs will be passed to the model.
Example:
.. code-block:: python
+ # define hparams as Namespace
from argparse import Namespace
hparams = Namespace(**{'learning_rate': 0.1})
model = MyModel(hparams)
class MyModel(LightningModule):
- def __init__(self, hparams):
+ def __init__(self, hparams: Namespace):
self.learning_rate = hparams.learning_rate
+ # ----------
+
+ # define hparams as dict
+ hparams = {
+ drop_prob: 0.2,
+ dataloader: {
+ batch_size: 32
+ }
+ }
+
+ model = MyModel(hparams)
+
+ class MyModel(LightningModule):
+ def __init__(self, hparams: dict):
+ self.learning_rate = hparams['learning_rate']
+
Args:
checkpoint_path: Path to checkpoint.
model_args: Any keyword args needed to init the model.
@@ -1468,19 +1489,38 @@ def __init__(self, hparams):
If your checkpoint saved a GPU model and you now load on CPUs
or a different number of GPUs, use this to map to the new setup.
The behaviour is the same as in :func:`torch.load`.
- tags_csv: Optional path to a .csv file with two columns (key, value)
+ hparams_file: Optional path to a .yaml file with hierarchical structure
as in this example::
- key,value
- drop_prob,0.2
- batch_size,32
+ drop_prob: 0.2
+ dataloader:
+ batch_size: 32
You most likely won't need this since Lightning will always save the hyperparameters
to the checkpoint.
However, if your checkpoint weights don't have the hyperparameters saved,
- use this method to pass in a .csv file with the hparams you'd like to use.
- These will be converted into a :class:`~argparse.Namespace` and passed into your
+ use this method to pass in a .yaml file with the hparams you'd like to use.
+ These will be converted into a :class:`~dict` and passed into your
:class:`LightningModule` for use.
+
+ If your model's `hparams` argument is :class:`~argparse.Namespace`
+ and .yaml file has hierarchical structure, you need to refactor your model to treat
+ `hparams` as :class:`~dict`.
+
+ .csv files are acceptable here till v0.9.0, see tags_csv argument for detailed usage.
+ tags_csv:
+ .. warning:: .. deprecated:: 0.7.6
+
+ `tags_csv` argument is deprecated in v0.7.6. Will be removed v0.9.0.
+
+ Optional path to a .csv file with two columns (key, value)
+ as in this example::
+
+ key,value
+ drop_prob,0.2
+ batch_size,32
+
+ Use this method to pass in a .csv file with the hparams you'd like to use.
hparam_overrides: A dictionary with keys to override in the hparams
Return:
@@ -1502,7 +1542,7 @@ def __init__(self, hparams):
# or load weights and hyperparameters from separate files.
MyLightningModule.load_from_checkpoint(
'path/to/checkpoint.ckpt',
- tags_csv='/path/to/hparams_file.csv'
+ hparams_file='/path/to/hparams_file.yaml'
)
# override some of the params with new values
@@ -1531,9 +1571,22 @@ def __init__(self, hparams):
# add the hparams from csv file to checkpoint
if tags_csv is not None:
- hparams = load_hparams_from_tags_csv(tags_csv)
- hparams.__setattr__('on_gpu', False)
- checkpoint['hparams'] = vars(hparams)
+ hparams_file = tags_csv
+ rank_zero_warn('`tags_csv` argument is deprecated in v0.7.6. Will be removed v0.9.0', DeprecationWarning)
+
+ if hparams_file is not None:
+ extension = hparams_file.split('.')[-1]
+ if extension.lower() in ('csv'):
+ hparams = load_hparams_from_tags_csv(hparams_file)
+ elif extension.lower() in ('yml', 'yaml'):
+ hparams = load_hparams_from_yaml(hparams_file)
+ else:
+ raise ValueError('.csv, .yml or .yaml is required for `hparams_file`')
+
+ hparams['on_gpu'] = False
+
+ # overwrite hparams by the given file
+ checkpoint['hparams'] = hparams
# override the hparam keys that were passed in
if hparam_overrides is not None:
@@ -1549,15 +1602,18 @@ def _load_model_state(cls, checkpoint: Dict[str, Any], *args, **kwargs) -> 'Ligh
if cls_takes_hparams:
if ckpt_hparams is not None:
- is_namespace = checkpoint.get('hparams_type', 'namespace') == 'namespace'
- hparams = Namespace(**ckpt_hparams) if is_namespace else ckpt_hparams
+ hparams_type = checkpoint.get('hparams_type', 'Namespace')
+ if hparams_type.lower() == 'dict':
+ hparams = ckpt_hparams
+ elif hparams_type.lower() == 'namespace':
+ hparams = Namespace(**ckpt_hparams)
else:
rank_zero_warn(
f"Checkpoint does not contain hyperparameters but {cls.__name__}'s __init__"
" contains argument 'hparams'. Will pass in an empty Namespace instead."
" Did you forget to store your model hyperparameters in self.hparams?"
)
- hparams = Namespace()
+ hparams = {}
else: # The user's LightningModule does not define a hparams argument
if ckpt_hparams is None:
hparams = None
@@ -1568,7 +1624,7 @@ def _load_model_state(cls, checkpoint: Dict[str, Any], *args, **kwargs) -> 'Ligh
)
# load the state_dict on the model automatically
- if hparams:
+ if cls_takes_hparams:
kwargs.update(hparams=hparams)
model = cls(*args, **kwargs)
model.load_state_dict(checkpoint['state_dict'])
diff --git a/pytorch_lightning/core/saving.py b/pytorch_lightning/core/saving.py
--- a/pytorch_lightning/core/saving.py
+++ b/pytorch_lightning/core/saving.py
@@ -1,9 +1,12 @@
+import ast
import csv
import os
+import yaml
from argparse import Namespace
from typing import Union, Dict, Any
from pytorch_lightning import _logger as log
+from pytorch_lightning.utilities import rank_zero_warn
class ModelIO(object):
@@ -79,30 +82,78 @@ def update_hparams(hparams: dict, updates: dict) -> None:
hparams.update({k: v})
-def load_hparams_from_tags_csv(tags_csv: str) -> Namespace:
+def load_hparams_from_tags_csv(tags_csv: str) -> Dict[str, Any]:
+ """Load hparams from a file.
+
+ >>> hparams = Namespace(batch_size=32, learning_rate=0.001, data_root='./any/path/here')
+ >>> path_csv = './testing-hparams.csv'
+ >>> save_hparams_to_tags_csv(path_csv, hparams)
+ >>> hparams_new = load_hparams_from_tags_csv(path_csv)
+ >>> vars(hparams) == hparams_new
+ True
+ >>> os.remove(path_csv)
+ """
if not os.path.isfile(tags_csv):
- log.warning(f'Missing Tags: {tags_csv}.')
- return Namespace()
+ rank_zero_warn(f'Missing Tags: {tags_csv}.', RuntimeWarning)
+ return {}
- with open(tags_csv) as f:
- csv_reader = csv.reader(f, delimiter=',')
+ with open(tags_csv) as fp:
+ csv_reader = csv.reader(fp, delimiter=',')
tags = {row[0]: convert(row[1]) for row in list(csv_reader)[1:]}
- ns = Namespace(**tags)
- return ns
+
+ return tags
+
+
+def save_hparams_to_tags_csv(tags_csv: str, hparams: Union[dict, Namespace]) -> None:
+ if not os.path.isdir(os.path.dirname(tags_csv)):
+ raise RuntimeError(f'Missing folder: {os.path.dirname(tags_csv)}.')
+
+ if isinstance(hparams, Namespace):
+ hparams = vars(hparams)
+
+ with open(tags_csv, 'w') as fp:
+ fieldnames = ['key', 'value']
+ writer = csv.DictWriter(fp, fieldnames=fieldnames)
+ writer.writerow({'key': 'key', 'value': 'value'})
+ for k, v in hparams.items():
+ writer.writerow({'key': k, 'value': v})
+
+
+def load_hparams_from_yaml(config_yaml: str) -> Dict[str, Any]:
+ """Load hparams from a file.
+
+ >>> hparams = Namespace(batch_size=32, learning_rate=0.001, data_root='./any/path/here')
+ >>> path_yaml = './testing-hparams.yaml'
+ >>> save_hparams_to_yaml(path_yaml, hparams)
+ >>> hparams_new = load_hparams_from_yaml(path_yaml)
+ >>> vars(hparams) == hparams_new
+ True
+ >>> os.remove(path_yaml)
+ """
+ if not os.path.isfile(config_yaml):
+ rank_zero_warn(f'Missing Tags: {config_yaml}.', RuntimeWarning)
+ return {}
+
+ with open(config_yaml) as fp:
+ tags = yaml.load(fp, Loader=yaml.SafeLoader)
+
+ return tags
+
+
+def save_hparams_to_yaml(config_yaml, hparams: Union[dict, Namespace]) -> None:
+ if not os.path.isdir(os.path.dirname(config_yaml)):
+ raise RuntimeError(f'Missing folder: {os.path.dirname(config_yaml)}.')
+
+ if isinstance(hparams, Namespace):
+ hparams = vars(hparams)
+
+ with open(config_yaml, 'w', newline='') as fp:
+ yaml.dump(hparams, fp)
def convert(val: str) -> Union[int, float, bool, str]:
- constructors = [int, float, str]
-
- if isinstance(val, str):
- if val.lower() == 'true':
- return True
- if val.lower() == 'false':
- return False
-
- for c in constructors:
- try:
- return c(val)
- except ValueError:
- pass
- return val
+ try:
+ return ast.literal_eval(val)
+ except (ValueError, SyntaxError) as e:
+ log.debug(e)
+ return val
diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py
--- a/pytorch_lightning/loggers/tensorboard.py
+++ b/pytorch_lightning/loggers/tensorboard.py
@@ -3,8 +3,8 @@
-----------
"""
-import csv
import os
+import yaml
from argparse import Namespace
from typing import Optional, Dict, Union, Any
from warnings import warn
@@ -14,6 +14,7 @@
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning import _logger as log
+from pytorch_lightning.core.saving import save_hparams_to_yaml
from pytorch_lightning.loggers.base import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only
@@ -42,7 +43,7 @@ class TensorBoardLogger(LightningLoggerBase):
\**kwargs: Other arguments are passed directly to the :class:`SummaryWriter` constructor.
"""
- NAME_CSV_TAGS = 'meta_tags.csv'
+ NAME_HPARAMS_FILE = 'hparams.yaml'
def __init__(self,
save_dir: str,
@@ -55,7 +56,7 @@ def __init__(self,
self._version = version
self._experiment = None
- self.tags = {}
+ self.hparams = {}
self._kwargs = kwargs
@property
@@ -104,8 +105,13 @@ def experiment(self) -> SummaryWriter:
def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],
metrics: Optional[Dict[str, Any]] = None) -> None:
params = self._convert_params(params)
+
+ # store params to output
+ self.hparams.update(params)
+
+ # format params into the suitable for tensorboard
params = self._flatten_dict(params)
- sanitized_params = self._sanitize_params(params)
+ params = self._sanitize_params(params)
if parse_version(torch.__version__) < parse_version("1.3.0"):
warn(
@@ -118,7 +124,7 @@ def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],
if metrics is None:
metrics = {}
- exp, ssi, sei = hparams(sanitized_params, metrics)
+ exp, ssi, sei = hparams(params, metrics)
writer = self.experiment._get_file_writer()
writer.add_summary(exp)
writer.add_summary(ssi)
@@ -128,9 +134,6 @@ def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],
# necessary for hparam comparison with metrics
self.log_metrics(metrics)
- # some alternative should be added
- self.tags.update(sanitized_params)
-
@rank_zero_only
def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
for k, v in metrics.items():
@@ -152,15 +155,10 @@ def save(self) -> None:
dir_path = self.save_dir
# prepare the file path
- meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)
+ hparams_file = os.path.join(dir_path, self.NAME_HPARAMS_FILE)
# save the metatags file
- with open(meta_tags_path, 'w', newline='') as csvfile:
- fieldnames = ['key', 'value']
- writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
- writer.writerow({'key': 'key', 'value': 'value'})
- for k, v in self.tags.items():
- writer.writerow({'key': k, 'value': v})
+ save_hparams_to_yaml(hparams_file, self.hparams)
@rank_zero_only
def finalize(self, status: str) -> None:
diff --git a/pytorch_lightning/trainer/evaluation_loop.py b/pytorch_lightning/trainer/evaluation_loop.py
--- a/pytorch_lightning/trainer/evaluation_loop.py
+++ b/pytorch_lightning/trainer/evaluation_loop.py
@@ -105,10 +105,9 @@
.. code-block:: python
- model = MyLightningModule.load_from_metrics(
- weights_path='/path/to/pytorch_checkpoint.ckpt',
- tags_csv='/path/to/test_tube/experiment/version/meta_tags.csv',
- on_gpu=True,
+ model = MyLightningModule.load_from_checkpoint(
+ checkpoint_path='/path/to/pytorch_checkpoint.ckpt',
+ hparams_file='/path/to/test_tube/experiment/version/hparams.yaml',
map_location=None
)
diff --git a/pytorch_lightning/trainer/training_io.py b/pytorch_lightning/trainer/training_io.py
--- a/pytorch_lightning/trainer/training_io.py
+++ b/pytorch_lightning/trainer/training_io.py
@@ -344,9 +344,16 @@ def dump_checkpoint(self):
if hasattr(model, "hparams"):
parsing.clean_namespace(model.hparams)
- is_namespace = isinstance(model.hparams, Namespace)
- checkpoint['hparams'] = vars(model.hparams) if is_namespace else model.hparams
- checkpoint['hparams_type'] = 'namespace' if is_namespace else 'dict'
+ checkpoint['hparams_type'] = model.hparams.__class__.__name__
+ if checkpoint['hparams_type'] == 'dict':
+ checkpoint['hparams'] = model.hparams
+ elif checkpoint['hparams_type'] == 'Namespace':
+ checkpoint['hparams'] = vars(model.hparams)
+ else:
+ raise ValueError(
+ 'The acceptable hparams type is dict or argparse.Namespace,',
+ f' not {checkpoint["hparams_type"]}'
+ )
else:
rank_zero_warn(
"Did not find hyperparameters at model hparams. Saving checkpoint without hyperparameters."
</patch>
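For reference, the behaviour of the `ast.literal_eval`-based `convert()` introduced in the patch above on typical tag values (a quick standalone check that mirrors the patched helper):

```python
import ast

def convert(val):
    # Mirrors the patched convert(): parse Python literals, fall back to the raw string.
    try:
        return ast.literal_eval(val)
    except (ValueError, SyntaxError):
        return val

print(convert('32'), convert('0.2'), convert('True'), convert('./any/path/here'))
# 32 0.2 True ./any/path/here
```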
|
[]
|
[]
| ||||
celery__celery-6357
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logger set to 'ascii' instead of 'utf-8': UnicodeEncodeError 'ascii'
## Checklist
Report:
```
software -> celery:4.2.1 (windowlicker) kombu:4.2.1 py:3.6.5
billiard:3.5.0.4 py-amqp:2.3.2
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:indicore.celery_app:Backend
broker_url: 'amqp://indico:********@rabbitmq:5672//'
task_serializer: 'msgpack-numpy'
result_serializer: 'msgpack-numpy'
enable_utc: True
worker_send_task_events: True
result_expires: 86400
task_always_eager: False
accept_content: ['application/x-msgpack']
result_backend: 'indicore.celery_app:Backend'
redis_port: 6379
redis_host: 'celery-redis'
redis_max_connections: 1000
broker_transport_options: {
'confirm_publish': True}
broker_heartbeat: 20
broker_connection_max_retries: None
task_queue_ha_policy: 'all'
```
## Steps to reproduce
Celery is logging the success result of a task that includes characters outside the ASCII range.
## Expected behavior
I expect the celery logger to use 'utf-8' encoding rather than ascii. I haven't touched the celery logging, nor do I have a separate python logging setup at the moment. I am using Python 3.
## Actual behavior
I receive the following traceback:
```
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] --- Logging error ---
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] Traceback (most recent call last):
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/logging/__init__.py", line 994, in emit
stream.write(msg)
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] UnicodeEncodeError: 'ascii' codec can't encode characters in position 923-928: ordinal not in range(128)
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] Call stack:
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/__main__.py", line 20, in <module>
main()
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/__main__.py", line 16, in main
_main()
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 496, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 488, in handle_argv
return self.execute(command, argv)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 420, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/worker.py", line 223, in run_from_argv
return self(*args, **options)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/base.py", line 238, in __call__
ret = self.run(*args, **kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/worker.py", line 258, in run
worker.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/worker/worker.py", line 205, in start
self.blueprint.start(self)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/base.py", line 131, in start
self.on_start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/prefork.py", line 112, in on_start
**self.options)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/asynpool.py", line 432, in __init__
super(AsynPool, self).__init__(processes, *args, **kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 1007, in __init__
self._create_worker_process(i)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/asynpool.py", line 449, in _create_worker_process
return super(AsynPool, self)._create_worker_process(i)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 1116, in _create_worker_process
w.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 124, in start
self._popen = self._Popen(self)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/context.py", line 333, in _Popen
return Popen(process_obj)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/popen_fork.py", line 24, in __init__
self._launch(process_obj)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/popen_fork.py", line 79, in _launch
code = process_obj._bootstrap()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 327, in _bootstrap
self.run()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 289, in __call__
sys.exit(self.workloop(pid=pid))
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 358, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 549, in _fast_trace_task
uuid, args, kwargs, request,
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 458, in trace_task
'runtime': T,
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 124, in info
logger.info(fmt, context, extra={'data': context})
```
# Additional Info
- I ran `sys.getdefaultencoding()` just before `celery/app/trace.py` line 124 and the value is "utf-8" as I expected.
- I do have the LANG set up properly in the machine
- the machine's default locale is also set the same
- I also added the LANG in front of the celery bin in celeryd.
- I can manually print `context` once I've decoded it "utf-8", which is successfully redirected to the celery logger from what I can see.
</issue>
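As editorial context for the report above, here is a minimal reproduction sketch. It is not part of the original issue; the broker URL, task name and non-ASCII payload are illustrative assumptions.

```python
# repro.py -- hypothetical reproduction sketch for the reported UnicodeEncodeError
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery('repro', broker='amqp://guest@localhost//')  # assumed broker URL
logger = get_task_logger(__name__)


@app.task
def echo_unicode():
    # The task itself succeeds; the failure happens when the worker logs the
    # result through a handler whose file/stream encoding falls back to ASCII
    # (for example under a C/POSIX locale), which raises UnicodeEncodeError.
    logger.info('payload: %s', '테스트 héllo wörld')
    return '테스트 héllo wörld'
```

Starting a worker with `--logfile` pointing at a file and calling `echo_unicode.delay()` should then produce a traceback like the one quoted in the report.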
<code>
[start of README.rst]
1 .. image:: http://docs.celeryproject.org/en/latest/_images/celery-banner-small.png
2
3 |build-status| |coverage| |license| |wheel| |pyversion| |pyimp| |ocbackerbadge| |ocsponsorbadge|
4
5 :Version: 5.0.0rc3 (cliffs)
6 :Web: http://celeryproject.org/
7 :Download: https://pypi.org/project/celery/
8 :Source: https://github.com/celery/celery/
9 :Keywords: task, queue, job, async, rabbitmq, amqp, redis,
10 python, distributed, actors
11
12 Donations
13 =========
14
15 This project relies on your generous donations.
16
17 If you are using Celery to create a commercial product, please consider becoming our `backer`_ or our `sponsor`_ to ensure Celery's future.
18
19 .. _`backer`: https://opencollective.com/celery#backer
20 .. _`sponsor`: https://opencollective.com/celery#sponsor
21
22 For enterprise
23 ==============
24
25 Available as part of the Tidelift Subscription.
26
27 The maintainers of ``celery`` and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. `Learn more. <https://tidelift.com/subscription/pkg/pypi-celery?utm_source=pypi-celery&utm_medium=referral&utm_campaign=enterprise&utm_term=repo>`_
28
29 What's a Task Queue?
30 ====================
31
32 Task queues are used as a mechanism to distribute work across threads or
33 machines.
34
35 A task queue's input is a unit of work, called a task. Dedicated worker
36 processes then constantly monitor the queue for new work to perform.
37
38 Celery communicates via messages, usually using a broker
39 to mediate between clients and workers. To initiate a task a client puts a
40 message on the queue, the broker then delivers the message to a worker.
41
42 A Celery system can consist of multiple workers and brokers, giving way
43 to high availability and horizontal scaling.
44
45 Celery is written in Python, but the protocol can be implemented in any
46 language. In addition to Python there's node-celery_ for Node.js,
47 a `PHP client`_, `gocelery`_ for golang, and rusty-celery_ for Rust.
48
49 Language interoperability can also be achieved by using webhooks
50 in such a way that the client enqueues a URL to be requested by a worker.
51
52 .. _node-celery: https://github.com/mher/node-celery
53 .. _`PHP client`: https://github.com/gjedeer/celery-php
54 .. _`gocelery`: https://github.com/gocelery/gocelery
55 .. _rusty-celery: https://github.com/rusty-celery/rusty-celery
56
57 What do I need?
58 ===============
59
60 Celery version 5.0.0rc3 runs on,
61
62 - Python (3.6, 3.7, 3.8)
63 - PyPy3.6 (7.6)
64
65
66 This is the next version of celery, which will support Python 3.6 or newer.
67
68 If you're running an older version of Python, you need to be running
69 an older version of Celery:
70
71 - Python 2.6: Celery series 3.1 or earlier.
72 - Python 2.5: Celery series 3.0 or earlier.
73 - Python 2.4 was Celery series 2.2 or earlier.
74 - Python 2.7: Celery 4.x series.
75
76 Celery is a project with minimal funding,
77 so we don't support Microsoft Windows.
78 Please don't open any issues related to that platform.
79
80 *Celery* is usually used with a message broker to send and receive messages.
81 The RabbitMQ and Redis transports are feature complete,
82 but there's also experimental support for a myriad of other solutions, including
83 using SQLite for local development.
84
85 *Celery* can run on a single machine, on multiple machines, or even
86 across datacenters.
87
88 Get Started
89 ===========
90
91 If this is the first time you're trying to use Celery, or you're
92 new to Celery 5.0.0rc3 coming from previous versions, then you should read our
93 getting started tutorials:
94
95 - `First steps with Celery`_
96
97 Tutorial teaching you the bare minimum needed to get started with Celery.
98
99 - `Next steps`_
100
101 A more complete overview, showing more features.
102
103 .. _`First steps with Celery`:
104 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
105
106 .. _`Next steps`:
107 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
108
109 Celery is...
110 =============
111
112 - **Simple**
113
114 Celery is easy to use and maintain, and does *not need configuration files*.
115
116 It has an active, friendly community you can talk to for support,
117 like at our `mailing-list`_, or the IRC channel.
118
119 Here's one of the simplest applications you can make::
120
121 from celery import Celery
122
123 app = Celery('hello', broker='amqp://guest@localhost//')
124
125 @app.task
126 def hello():
127 return 'hello world'
128
129 - **Highly Available**
130
131 Workers and clients will automatically retry in the event
132 of connection loss or failure, and some brokers support
133     HA in the form of *Primary/Primary* or *Primary/Replica* replication.
134
135 - **Fast**
136
137 A single Celery process can process millions of tasks a minute,
138 with sub-millisecond round-trip latency (using RabbitMQ,
139 py-librabbitmq, and optimized settings).
140
141 - **Flexible**
142
143     Almost every part of *Celery* can be extended or used on its own:
144     custom pool implementations, serializers, compression schemes, logging,
145     schedulers, consumers, producers, broker transports, and much more.
146
147 It supports...
148 ================
149
150 - **Message Transports**
151
152 - RabbitMQ_, Redis_, Amazon SQS
153
154 - **Concurrency**
155
156 - Prefork, Eventlet_, gevent_, single threaded (``solo``)
157
158 - **Result Stores**
159
160 - AMQP, Redis
161 - memcached
162 - SQLAlchemy, Django ORM
163 - Apache Cassandra, IronCache, Elasticsearch
164
165 - **Serialization**
166
167 - *pickle*, *json*, *yaml*, *msgpack*.
168 - *zlib*, *bzip2* compression.
169 - Cryptographic message signing.
170
171 .. _`Eventlet`: http://eventlet.net/
172 .. _`gevent`: http://gevent.org/
173
174 .. _RabbitMQ: https://rabbitmq.com
175 .. _Redis: https://redis.io
176 .. _SQLAlchemy: http://sqlalchemy.org
177
178 Framework Integration
179 =====================
180
181 Celery is easy to integrate with web frameworks, some of which even have
182 integration packages:
183
184 +--------------------+------------------------+
185 | `Django`_ | not needed |
186 +--------------------+------------------------+
187 | `Pyramid`_ | `pyramid_celery`_ |
188 +--------------------+------------------------+
189 | `Pylons`_ | `celery-pylons`_ |
190 +--------------------+------------------------+
191 | `Flask`_ | not needed |
192 +--------------------+------------------------+
193 | `web2py`_ | `web2py-celery`_ |
194 +--------------------+------------------------+
195 | `Tornado`_ | `tornado-celery`_ |
196 +--------------------+------------------------+
197
198 The integration packages aren't strictly necessary, but they can make
199 development easier, and sometimes they add important hooks like closing
200 database connections at ``fork``.
201
202 .. _`Django`: https://djangoproject.com/
203 .. _`Pylons`: http://pylonsproject.org/
204 .. _`Flask`: http://flask.pocoo.org/
205 .. _`web2py`: http://web2py.com/
206 .. _`Bottle`: https://bottlepy.org/
207 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
208 .. _`pyramid_celery`: https://pypi.org/project/pyramid_celery/
209 .. _`celery-pylons`: https://pypi.org/project/celery-pylons/
210 .. _`web2py-celery`: https://code.google.com/p/web2py-celery/
211 .. _`Tornado`: http://www.tornadoweb.org/
212 .. _`tornado-celery`: https://github.com/mher/tornado-celery/
213
214 .. _celery-documentation:
215
216 Documentation
217 =============
218
219 The `latest documentation`_ is hosted at Read The Docs, containing user guides,
220 tutorials, and an API reference.
221
222 最新的中文文档托管在 https://www.celerycn.io/ 中,包含用户指南、教程、API接口等。
223
224 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
225
226 .. _celery-installation:
227
228 Installation
229 ============
230
231 You can install Celery either via the Python Package Index (PyPI)
232 or from source.
233
234 To install using ``pip``:
235
236 ::
237
238
239 $ pip install -U Celery
240
241 .. _bundles:
242
243 Bundles
244 -------
245
246 Celery also defines a group of bundles that can be used
247 to install Celery and the dependencies for a given feature.
248
249 You can specify these in your requirements or on the ``pip``
250 command-line by using brackets. Multiple bundles can be specified by
251 separating them by commas.
252
253 ::
254
255
256 $ pip install "celery[librabbitmq]"
257
258 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
259
260 The following bundles are available:
261
262 Serializers
263 ~~~~~~~~~~~
264
265 :``celery[auth]``:
266 for using the ``auth`` security serializer.
267
268 :``celery[msgpack]``:
269 for using the msgpack serializer.
270
271 :``celery[yaml]``:
272 for using the yaml serializer.
273
274 Concurrency
275 ~~~~~~~~~~~
276
277 :``celery[eventlet]``:
278 for using the ``eventlet`` pool.
279
280 :``celery[gevent]``:
281 for using the ``gevent`` pool.
282
283 Transports and Backends
284 ~~~~~~~~~~~~~~~~~~~~~~~
285
286 :``celery[librabbitmq]``:
287 for using the librabbitmq C library.
288
289 :``celery[redis]``:
290 for using Redis as a message transport or as a result backend.
291
292 :``celery[sqs]``:
293 for using Amazon SQS as a message transport.
294
295 :``celery[tblib]``:
296 for using the ``task_remote_tracebacks`` feature.
297
298 :``celery[memcache]``:
299 for using Memcached as a result backend (using ``pylibmc``)
300
301 :``celery[pymemcache]``:
302 for using Memcached as a result backend (pure-Python implementation).
303
304 :``celery[cassandra]``:
305 for using Apache Cassandra as a result backend with DataStax driver.
306
307 :``celery[azureblockblob]``:
308 for using Azure Storage as a result backend (using ``azure-storage``)
309
310 :``celery[s3]``:
311 for using S3 Storage as a result backend.
312
313 :``celery[couchbase]``:
314 for using Couchbase as a result backend.
315
316 :``celery[arangodb]``:
317 for using ArangoDB as a result backend.
318
319 :``celery[elasticsearch]``:
320 for using Elasticsearch as a result backend.
321
322 :``celery[riak]``:
323 for using Riak as a result backend.
324
325 :``celery[cosmosdbsql]``:
326 for using Azure Cosmos DB as a result backend (using ``pydocumentdb``)
327
328 :``celery[zookeeper]``:
329 for using Zookeeper as a message transport.
330
331 :``celery[sqlalchemy]``:
332 for using SQLAlchemy as a result backend (*supported*).
333
334 :``celery[pyro]``:
335 for using the Pyro4 message transport (*experimental*).
336
337 :``celery[slmq]``:
338 for using the SoftLayer Message Queue transport (*experimental*).
339
340 :``celery[consul]``:
341 for using the Consul.io Key/Value store as a message transport or result backend (*experimental*).
342
343 :``celery[django]``:
344 specifies the lowest version possible for Django support.
345
346 You should probably not use this in your requirements, it's here
347 for informational purposes only.
348
349
350 .. _celery-installing-from-source:
351
352 Downloading and installing from source
353 --------------------------------------
354
355 Download the latest version of Celery from PyPI:
356
357 https://pypi.org/project/celery/
358
359 You can install it by doing the following:
360
361 ::
362
363
364 $ tar xvfz celery-0.0.0.tar.gz
365 $ cd celery-0.0.0
366 $ python setup.py build
367 # python setup.py install
368
369 The last command must be executed as a privileged user if
370 you aren't currently using a virtualenv.
371
372 .. _celery-installing-from-git:
373
374 Using the development version
375 -----------------------------
376
377 With pip
378 ~~~~~~~~
379
380 The Celery development version also requires the development
381 versions of ``kombu``, ``amqp``, ``billiard``, and ``vine``.
382
383 You can install the latest snapshot of these using the following
384 pip commands:
385
386 ::
387
388
389 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
390 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
391 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
392 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
393 $ pip install https://github.com/celery/vine/zipball/master#egg=vine
394
395 With git
396 ~~~~~~~~
397
398 Please see the Contributing section.
399
400 .. _getting-help:
401
402 Getting Help
403 ============
404
405 .. _mailing-list:
406
407 Mailing list
408 ------------
409
410 For discussions about the usage, development, and future of Celery,
411 please join the `celery-users`_ mailing list.
412
413 .. _`celery-users`: https://groups.google.com/group/celery-users/
414
415 .. _irc-channel:
416
417 IRC
418 ---
419
420 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
421 network.
422
423 .. _`Freenode`: https://freenode.net
424
425 .. _bug-tracker:
426
427 Bug tracker
428 ===========
429
430 If you have any suggestions, bug reports, or annoyances please report them
431 to our issue tracker at https://github.com/celery/celery/issues/
432
433 .. _wiki:
434
435 Wiki
436 ====
437
438 https://github.com/celery/celery/wiki
439
440 Credits
441 =======
442
443 .. _contributing-short:
444
445 Contributors
446 ------------
447
448 This project exists thanks to all the people who contribute. Development of
449 `celery` happens at GitHub: https://github.com/celery/celery
450
451 You're highly encouraged to participate in the development
452 of `celery`. If you don't like GitHub (for some reason) you're welcome
453 to send regular patches.
454
455 Be sure to also read the `Contributing to Celery`_ section in the
456 documentation.
457
458 .. _`Contributing to Celery`:
459 http://docs.celeryproject.org/en/master/contributing.html
460
461 |oc-contributors|
462
463 .. |oc-contributors| image:: https://opencollective.com/celery/contributors.svg?width=890&button=false
464 :target: https://github.com/celery/celery/graphs/contributors
465
466 Backers
467 -------
468
469 Thank you to all our backers! 🙏 [`Become a backer`_]
470
471 .. _`Become a backer`: https://opencollective.com/celery#backer
472
473 |oc-backers|
474
475 .. |oc-backers| image:: https://opencollective.com/celery/backers.svg?width=890
476 :target: https://opencollective.com/celery#backers
477
478 Sponsors
479 --------
480
481 Support this project by becoming a sponsor. Your logo will show up here with a
482 link to your website. [`Become a sponsor`_]
483
484 .. _`Become a sponsor`: https://opencollective.com/celery#sponsor
485
486 |oc-sponsors|
487
488 .. |oc-sponsors| image:: https://opencollective.com/celery/sponsor/0/avatar.svg
489 :target: https://opencollective.com/celery/sponsor/0/website
490
491 .. _license:
492
493 License
494 =======
495
496 This software is licensed under the `New BSD License`. See the ``LICENSE``
497 file in the top distribution directory for the full license text.
498
499 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
500
501 .. |build-status| image:: https://secure.travis-ci.org/celery/celery.png?branch=master
502 :alt: Build status
503 :target: https://travis-ci.org/celery/celery
504
505 .. |coverage| image:: https://codecov.io/github/celery/celery/coverage.svg?branch=master
506 :target: https://codecov.io/github/celery/celery?branch=master
507
508 .. |license| image:: https://img.shields.io/pypi/l/celery.svg
509 :alt: BSD License
510 :target: https://opensource.org/licenses/BSD-3-Clause
511
512 .. |wheel| image:: https://img.shields.io/pypi/wheel/celery.svg
513 :alt: Celery can be installed via wheel
514 :target: https://pypi.org/project/celery/
515
516 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/celery.svg
517 :alt: Supported Python versions.
518 :target: https://pypi.org/project/celery/
519
520 .. |pyimp| image:: https://img.shields.io/pypi/implementation/celery.svg
521 :alt: Support Python implementations.
522 :target: https://pypi.org/project/celery/
523
524 .. |ocbackerbadge| image:: https://opencollective.com/celery/backers/badge.svg
525 :alt: Backers on Open Collective
526 :target: #backers
527
528 .. |ocsponsorbadge| image:: https://opencollective.com/celery/sponsors/badge.svg
529 :alt: Sponsors on Open Collective
530 :target: #sponsors
531
532 .. |downloads| image:: https://pepy.tech/badge/celery
533 :alt: Downloads
534 :target: https://pepy.tech/project/celery
535
[end of README.rst]
[start of celery/utils/functional.py]
1 """Functional-style utilities."""
2 import inspect
3 import sys
4 from collections import UserList
5 from functools import partial
6 from itertools import chain, islice
7
8 from kombu.utils.functional import (LRUCache, dictfilter, is_list, lazy,
9 maybe_evaluate, maybe_list, memoize)
10 from vine import promise
11
12 __all__ = (
13 'LRUCache', 'is_list', 'maybe_list', 'memoize', 'mlazy', 'noop',
14 'first', 'firstmethod', 'chunks', 'padlist', 'mattrgetter', 'uniq',
15 'regen', 'dictfilter', 'lazy', 'maybe_evaluate', 'head_from_fun',
16 'maybe', 'fun_accepts_kwargs',
17 )
18
19 FUNHEAD_TEMPLATE = """
20 def {fun_name}({fun_args}):
21 return {fun_value}
22 """
23
24
25 class DummyContext:
26
27 def __enter__(self):
28 return self
29
30 def __exit__(self, *exc_info):
31 pass
32
33
34 class mlazy(lazy):
35 """Memoized lazy evaluation.
36
37 The function is only evaluated once, every subsequent access
38 will return the same value.
39 """
40
41 #: Set to :const:`True` after the object has been evaluated.
42 evaluated = False
43 _value = None
44
45 def evaluate(self):
46 if not self.evaluated:
47 self._value = super().evaluate()
48 self.evaluated = True
49 return self._value
50
51
52 def noop(*args, **kwargs):
53 """No operation.
54
55 Takes any arguments/keyword arguments and does nothing.
56 """
57
58
59 def pass1(arg, *args, **kwargs):
60 """Return the first positional argument."""
61 return arg
62
63
64 def evaluate_promises(it):
65 for value in it:
66 if isinstance(value, promise):
67 value = value()
68 yield value
69
70
71 def first(predicate, it):
72 """Return the first element in ``it`` that ``predicate`` accepts.
73
74 If ``predicate`` is None it will return the first item that's not
75 :const:`None`.
76 """
77 return next(
78 (v for v in evaluate_promises(it) if (
79 predicate(v) if predicate is not None else v is not None)),
80 None,
81 )
82
83
84 def firstmethod(method, on_call=None):
85 """Multiple dispatch.
86
87 Return a function that with a list of instances,
88 finds the first instance that gives a value for the given method.
89
90 The list can also contain lazy instances
91 (:class:`~kombu.utils.functional.lazy`.)
92 """
93 def _matcher(it, *args, **kwargs):
94 for obj in it:
95 try:
96 meth = getattr(maybe_evaluate(obj), method)
97 reply = (on_call(meth, *args, **kwargs) if on_call
98 else meth(*args, **kwargs))
99 except AttributeError:
100 pass
101 else:
102 if reply is not None:
103 return reply
104 return _matcher
105
106
107 def chunks(it, n):
108 """Split an iterator into chunks with `n` elements each.
109
110 Warning:
111         ``it`` must be an actual iterator: passing a concrete sequence
112         will get you repeating elements.
113
114 So ``chunks(iter(range(1000)), 10)`` is fine, but
115 ``chunks(range(1000), 10)`` is not.
116
117 Example:
118 # n == 2
119 >>> x = chunks(iter([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), 2)
120 >>> list(x)
121 [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10]]
122
123 # n == 3
124 >>> x = chunks(iter([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), 3)
125 >>> list(x)
126 [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
127 """
128 for item in it:
129 yield [item] + list(islice(it, n - 1))
130
131
132 def padlist(container, size, default=None):
133 """Pad list with default elements.
134
135 Example:
136 >>> first, last, city = padlist(['George', 'Costanza', 'NYC'], 3)
137 ('George', 'Costanza', 'NYC')
138 >>> first, last, city = padlist(['George', 'Costanza'], 3)
139 ('George', 'Costanza', None)
140 >>> first, last, city, planet = padlist(
141 ... ['George', 'Costanza', 'NYC'], 4, default='Earth',
142 ... )
143 ('George', 'Costanza', 'NYC', 'Earth')
144 """
145 return list(container)[:size] + [default] * (size - len(container))
146
147
148 def mattrgetter(*attrs):
149 """Get attributes, ignoring attribute errors.
150
151     Like :func:`operator.attrgetter` but returns :const:`None` on missing
152 attributes instead of raising :exc:`AttributeError`.
153 """
154 return lambda obj: {attr: getattr(obj, attr, None) for attr in attrs}
155
156
157 def uniq(it):
158 """Return all unique elements in ``it``, preserving order."""
159 seen = set()
160 return (seen.add(obj) or obj for obj in it if obj not in seen)
161
162
163 def regen(it):
164 """Convert iterator to an object that can be consumed multiple times.
165
166     ``Regen`` takes any iterable, and if the object is a
167 generator it will cache the evaluated list on first access,
168 so that the generator can be "consumed" multiple times.
169 """
170 if isinstance(it, (list, tuple)):
171 return it
172 return _regen(it)
173
174
175 class _regen(UserList, list):
176 # must be subclass of list so that json can encode.
177
178 def __init__(self, it):
179 # pylint: disable=super-init-not-called
180 # UserList creates a new list and sets .data, so we don't
181 # want to call init here.
182 self.__it = it
183 self.__index = 0
184 self.__consumed = []
185
186 def __reduce__(self):
187 return list, (self.data,)
188
189 def __length_hint__(self):
190 return self.__it.__length_hint__()
191
192 def __iter__(self):
193 return chain(self.__consumed, self.__it)
194
195 def __getitem__(self, index):
196 if index < 0:
197 return self.data[index]
198 try:
199 return self.__consumed[index]
200 except IndexError:
201 try:
202 for _ in range(self.__index, index + 1):
203 self.__consumed.append(next(self.__it))
204 except StopIteration:
205 raise IndexError(index)
206 else:
207 return self.__consumed[index]
208
209 @property
210 def data(self):
211 try:
212 self.__consumed.extend(list(self.__it))
213 except StopIteration:
214 pass
215 return self.__consumed
216
217
218 def _argsfromspec(spec, replace_defaults=True):
219 if spec.defaults:
220 split = len(spec.defaults)
221 defaults = (list(range(len(spec.defaults))) if replace_defaults
222 else spec.defaults)
223 positional = spec.args[:-split]
224 optional = list(zip(spec.args[-split:], defaults))
225 else:
226 positional, optional = spec.args, []
227
228 varargs = spec.varargs
229 varkw = spec.varkw
230 if spec.kwonlydefaults:
231 split = len(spec.kwonlydefaults)
232 kwonlyargs = spec.kwonlyargs[:-split]
233 if replace_defaults:
234 kwonlyargs_optional = [
235 (kw, i) for i, kw in enumerate(spec.kwonlyargs[-split:])]
236 else:
237 kwonlyargs_optional = list(spec.kwonlydefaults.items())
238 else:
239 kwonlyargs, kwonlyargs_optional = spec.kwonlyargs, []
240
241 return ', '.join(filter(None, [
242 ', '.join(positional),
243 ', '.join(f'{k}={v}' for k, v in optional),
244 f'*{varargs}' if varargs else None,
245 '*' if (kwonlyargs or kwonlyargs_optional) and not varargs else None,
246 ', '.join(kwonlyargs) if kwonlyargs else None,
247 ', '.join(f'{k}="{v}"' for k, v in kwonlyargs_optional),
248 f'**{varkw}' if varkw else None,
249 ]))
250
251
252 def head_from_fun(fun, bound=False, debug=False):
253 """Generate signature function from actual function."""
254 # we could use inspect.Signature here, but that implementation
255 # is very slow since it implements the argument checking
256 # in pure-Python. Instead we use exec to create a new function
257 # with an empty body, meaning it has the same performance as
258     # just calling a function.
259 is_function = inspect.isfunction(fun)
260 is_callable = hasattr(fun, '__call__')
261 is_cython = fun.__class__.__name__ == 'cython_function_or_method'
262 is_method = inspect.ismethod(fun)
263
264 if not is_function and is_callable and not is_method and not is_cython:
265 name, fun = fun.__class__.__name__, fun.__call__
266 else:
267 name = fun.__name__
268 definition = FUNHEAD_TEMPLATE.format(
269 fun_name=name,
270 fun_args=_argsfromspec(inspect.getfullargspec(fun)),
271 fun_value=1,
272 )
273 if debug: # pragma: no cover
274 print(definition, file=sys.stderr)
275 namespace = {'__name__': fun.__module__}
276 # pylint: disable=exec-used
277 # Tasks are rarely, if ever, created at runtime - exec here is fine.
278 exec(definition, namespace)
279 result = namespace[name]
280 result._source = definition
281 if bound:
282 return partial(result, object())
283 return result
284
285
286 def arity_greater(fun, n):
287 argspec = inspect.getfullargspec(fun)
288 return argspec.varargs or len(argspec.args) > n
289
290
291 def fun_takes_argument(name, fun, position=None):
292 spec = inspect.getfullargspec(fun)
293 return (
294 spec.varkw or spec.varargs or
295 (len(spec.args) >= position if position else name in spec.args)
296 )
297
298
299 if hasattr(inspect, 'signature'):
300 def fun_accepts_kwargs(fun):
301 """Return true if function accepts arbitrary keyword arguments."""
302 return any(
303 p for p in inspect.signature(fun).parameters.values()
304 if p.kind == p.VAR_KEYWORD
305 )
306 else:
307 def fun_accepts_kwargs(fun): # noqa
308 """Return true if function accepts arbitrary keyword arguments."""
309 try:
310 argspec = inspect.getargspec(fun)
311 except TypeError:
312 try:
313 argspec = inspect.getargspec(fun.__call__)
314 except (TypeError, AttributeError):
315 return
316 return not argspec or argspec[2] is not None
317
318
319 def maybe(typ, val):
320 """Call typ on value if val is defined."""
321 return typ(val) if val is not None else val
322
323
324 def seq_concat_item(seq, item):
325 """Return copy of sequence seq with item added.
326
327 Returns:
328 Sequence: if seq is a tuple, the result will be a tuple,
329 otherwise it depends on the implementation of ``__add__``.
330 """
331 return seq + (item,) if isinstance(seq, tuple) else seq + [item]
332
333
334 def seq_concat_seq(a, b):
335 """Concatenate two sequences: ``a + b``.
336
337 Returns:
338 Sequence: The return value will depend on the largest sequence
339 - if b is larger and is a tuple, the return value will be a tuple.
340 - if a is larger and is a list, the return value will be a list,
341 """
342 # find the type of the largest sequence
343 prefer = type(max([a, b], key=len))
344 # convert the smallest list to the type of the largest sequence.
345 if not isinstance(a, prefer):
346 a = prefer(a)
347 if not isinstance(b, prefer):
348 b = prefer(b)
349 return a + b
350
[end of celery/utils/functional.py]
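As an editorial aside, the following sketch shows how a few of the helpers defined above behave. The sample values are arbitrary assumptions and the snippet only requires that `celery` is importable.

```python
# functional_demo.py -- illustrative use of celery.utils.functional helpers
from celery.utils.functional import chunks, head_from_fun, padlist, uniq


def add(x, y, z=3):
    return x + y + z


# head_from_fun() builds an empty-bodied function with the same signature,
# so bad call signatures are rejected without running the real function body.
head = head_from_fun(add)
head(1, 2)        # accepted: matches the signature of add()
try:
    head(1)       # rejected: missing required argument 'y'
except TypeError as exc:
    print(exc)

print(list(chunks(iter(range(7)), 3)))      # [[0, 1, 2], [3, 4, 5], [6]]
print(padlist(['George', 'Costanza'], 3))   # ['George', 'Costanza', None]
print(list(uniq([1, 2, 1, 3, 2])))          # [1, 2, 3]
```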
[start of setup.py]
1 #!/usr/bin/env python
2 import codecs
3 import os
4 import re
5 import sys
6
7 import setuptools
8 import setuptools.command.test
9
10 NAME = 'celery'
11
12 # -*- Extras -*-
13
14 EXTENSIONS = {
15 'arangodb',
16 'auth',
17 'azureblockblob',
18 'brotli',
19 'cassandra',
20 'consul',
21 'cosmosdbsql',
22 'couchbase',
23 'couchdb',
24 'django',
25 'dynamodb',
26 'elasticsearch',
27 'eventlet',
28 'gevent',
29 'librabbitmq',
30 'lzma',
31 'memcache',
32 'mongodb',
33 'msgpack',
34 'pymemcache',
35 'pyro',
36 'redis',
37 's3',
38 'slmq',
39 'solar',
40 'sqlalchemy',
41 'sqs',
42 'tblib',
43 'yaml',
44 'zookeeper',
45 'zstd'
46 }
47
48 # -*- Distribution Meta -*-
49
50 re_meta = re.compile(r'__(\w+?)__\s*=\s*(.*)')
51 re_doc = re.compile(r'^"""(.+?)"""')
52
53
54 def _add_default(m):
55 attr_name, attr_value = m.groups()
56 return ((attr_name, attr_value.strip("\"'")),)
57
58
59 def _add_doc(m):
60 return (('doc', m.groups()[0]),)
61
62
63 def parse_dist_meta():
64 """Extract metadata information from ``$dist/__init__.py``."""
65 pats = {re_meta: _add_default, re_doc: _add_doc}
66 here = os.path.abspath(os.path.dirname(__file__))
67 with open(os.path.join(here, NAME, '__init__.py')) as meta_fh:
68 distmeta = {}
69 for line in meta_fh:
70 if line.strip() == '# -eof meta-':
71 break
72 for pattern, handler in pats.items():
73 m = pattern.match(line.strip())
74 if m:
75 distmeta.update(handler(m))
76 return distmeta
77
78 # -*- Requirements -*-
79
80
81 def _strip_comments(l):
82 return l.split('#', 1)[0].strip()
83
84
85 def _pip_requirement(req):
86 if req.startswith('-r '):
87 _, path = req.split()
88 return reqs(*path.split('/'))
89 return [req]
90
91
92 def _reqs(*f):
93 return [
94 _pip_requirement(r) for r in (
95 _strip_comments(l) for l in open(
96 os.path.join(os.getcwd(), 'requirements', *f)).readlines()
97 ) if r]
98
99
100 def reqs(*f):
101 """Parse requirement file.
102
103 Example:
104 reqs('default.txt') # requirements/default.txt
105 reqs('extras', 'redis.txt') # requirements/extras/redis.txt
106 Returns:
107 List[str]: list of requirements specified in the file.
108 """
109 return [req for subreq in _reqs(*f) for req in subreq]
110
111
112 def extras(*p):
113 """Parse requirement in the requirements/extras/ directory."""
114 return reqs('extras', *p)
115
116
117 def install_requires():
118 """Get list of requirements required for installation."""
119 return reqs('default.txt')
120
121
122 def extras_require():
123 """Get map of all extra requirements."""
124 return {x: extras(x + '.txt') for x in EXTENSIONS}
125
126 # -*- Long Description -*-
127
128
129 def long_description():
130 try:
131 return codecs.open('README.rst', 'r', 'utf-8').read()
132 except OSError:
133 return 'Long description error: Missing README.rst file'
134
135 # -*- Command: setup.py test -*-
136
137
138 class pytest(setuptools.command.test.test):
139 user_options = [('pytest-args=', 'a', 'Arguments to pass to pytest')]
140
141 def initialize_options(self):
142 setuptools.command.test.test.initialize_options(self)
143 self.pytest_args = []
144
145 def run_tests(self):
146 import pytest as _pytest
147 sys.exit(_pytest.main(self.pytest_args))
148
149 # -*- %%% -*-
150
151
152 meta = parse_dist_meta()
153 setuptools.setup(
154 name=NAME,
155 packages=setuptools.find_packages(exclude=['t', 't.*']),
156 version=meta['version'],
157 description=meta['doc'],
158 long_description=long_description(),
159 keywords=meta['keywords'],
160 author=meta['author'],
161 author_email=meta['contact'],
162 url=meta['homepage'],
163 license='BSD',
164 platforms=['any'],
165 install_requires=install_requires(),
166 python_requires=">=3.6,",
167 tests_require=reqs('test.txt'),
168 extras_require=extras_require(),
169 cmdclass={'test': pytest},
170 include_package_data=True,
171 zip_safe=False,
172 entry_points={
173 'console_scripts': [
174 'celery = celery.__main__:main',
175 ]
176 },
177 project_urls={
178 "Documentation": "http://docs.celeryproject.org/en/latest/index.html",
179 "Code": "https://github.com/celery/celery",
180 "Tracker": "https://github.com/celery/celery/issues",
181 "Funding": "https://opencollective.com/celery"
182 },
183 classifiers=[
184 "Development Status :: 5 - Production/Stable",
185 "License :: OSI Approved :: BSD License",
186 "Topic :: System :: Distributed Computing",
187 "Topic :: Software Development :: Object Brokering",
188 "Programming Language :: Python",
189 "Programming Language :: Python :: 3 :: Only",
190 "Programming Language :: Python :: 3",
191 "Programming Language :: Python :: 3.6",
192 "Programming Language :: Python :: 3.7",
193 "Programming Language :: Python :: 3.8",
194 "Programming Language :: Python :: Implementation :: CPython",
195 "Programming Language :: Python :: Implementation :: PyPy",
196 "Operating System :: OS Independent"
197 ]
198 )
199
[end of setup.py]
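To make the metadata extraction in `parse_dist_meta()` above concrete, here is a standalone editorial sketch. The sample header lines are assumptions for illustration, not quoted from `celery/__init__.py`.

```python
# dist_meta_demo.py -- how the re_meta / re_doc patterns pick apart a module header
import re

re_meta = re.compile(r'__(\w+?)__\s*=\s*(.*)')
re_doc = re.compile(r'^"""(.+?)"""')

sample_header = [
    '"""Distributed Task Queue."""',   # matched by re_doc  -> ('doc', ...)
    "__version__ = '5.0.0rc3'",        # matched by re_meta -> ('version', ...)
    "__author__ = 'Ask Solem'",        # matched by re_meta -> ('author', ...)
    '# -eof meta-',                    # parsing stops here
    "__ignored__ = 'never reached'",
]

meta = {}
for line in sample_header:
    line = line.strip()
    if line == '# -eof meta-':
        break
    m = re_doc.match(line)
    if m:
        meta['doc'] = m.group(1)
        continue
    m = re_meta.match(line)
    if m:
        name, value = m.groups()
        meta[name] = value.strip('"\'')

print(meta)
# {'doc': 'Distributed Task Queue.', 'version': '5.0.0rc3', 'author': 'Ask Solem'}
```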
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
5a0c45857640f2415567736ad7ad2b7ae69e1304
|
Logger set to 'ascii' instead of 'utf-8': UnicodeDecodeError 'ascii'
## Checklist
Report:
```
software -> celery:4.2.1 (windowlicker) kombu:4.2.1 py:3.6.5
billiard:3.5.0.4 py-amqp:2.3.2
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:indicore.celery_app:Backend
broker_url: 'amqp://indico:********@rabbitmq:5672//'
task_serializer: 'msgpack-numpy'
result_serializer: 'msgpack-numpy'
enable_utc: True
worker_send_task_events: True
result_expires: 86400
task_always_eager: False
accept_content: ['application/x-msgpack']
result_backend: 'indicore.celery_app:Backend'
redis_port: 6379
redis_host: 'celery-redis'
redis_max_connections: 1000
broker_transport_options: {
'confirm_publish': True}
broker_heartbeat: 20
broker_connection_max_retries: None
task_queue_ha_policy: 'all'
```
## Steps to reproduce
Celery is logging the success result of a task that includes characters outside ascii encoding.
## Expected behavior
I expect the celery logger to use 'utf-8' encoding rather than ascii. I haven't touched the celery logging, nor do I have a python logging setup separately at the moment. I am using Python3.
## Actual behavior
I receive the following traceback:
```
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] --- Logging error ---
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] Traceback (most recent call last):
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/logging/__init__.py", line 994, in emit
stream.write(msg)
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] UnicodeEncodeError: 'ascii' codec can't encode characters in position 923-928: ordinal not in range(128)
[2018-10-24 15:35:00,541: WARNING/ForkPoolWorker-7] Call stack:
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/__main__.py", line 20, in <module>
main()
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/__main__.py", line 16, in main
_main()
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 496, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 488, in handle_argv
return self.execute(command, argv)
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/celery.py", line 420, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
[2018-10-24 15:35:00,542: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/worker.py", line 223, in run_from_argv
return self(*args, **options)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/base.py", line 238, in __call__
ret = self.run(*args, **kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bin/worker.py", line 258, in run
worker.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/worker/worker.py", line 205, in start
self.blueprint.start(self)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/base.py", line 131, in start
self.on_start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/prefork.py", line 112, in on_start
**self.options)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/asynpool.py", line 432, in __init__
super(AsynPool, self).__init__(processes, *args, **kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 1007, in __init__
self._create_worker_process(i)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/concurrency/asynpool.py", line 449, in _create_worker_process
return super(AsynPool, self)._create_worker_process(i)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 1116, in _create_worker_process
w.start()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 124, in start
self._popen = self._Popen(self)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/context.py", line 333, in _Popen
return Popen(process_obj)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/popen_fork.py", line 24, in __init__
self._launch(process_obj)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/popen_fork.py", line 79, in _launch
code = process_obj._bootstrap()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 327, in _bootstrap
self.run()
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
[2018-10-24 15:35:00,543: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 289, in __call__
sys.exit(self.workloop(pid=pid))
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/billiard/pool.py", line 358, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 549, in _fast_trace_task
uuid, args, kwargs, request,
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 458, in trace_task
'runtime': T,
[2018-10-24 15:35:00,544: WARNING/ForkPoolWorker-7] File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 124, in info
logger.info(fmt, context, extra={'data': context})
```
# Additional Info
- I ran `sys.getdefaultencoding()` just before `celery/app/trace.py` line 124 and the value is "utf-8" as I expected.
- I do have the LANG set up properly in the machine
- the machine's default locale is also set the same
- I also added the LANG in front of the celery bin in celeryd.
- I can manually print `context` once I've decoded it "utf-8", which is successfully redirected to the celery logger from what I can see.
|
I've done some more debugging and found that `context` becomes a string here. Msgpack serializes my return value as bytes, but somewhere along the way to the logger, the bytes seem to be cast to a string. It looks something like `"b'.. bytes...'"`
At the moment, I'm working around this by patching the `info` method in `celery/app/trace.py` with the following:
```python3
def info(fmt, context):
"""Log 'fmt % context' with severity 'INFO'.
'context' is also passed in extra with key 'data' for custom handlers.
"""
if isinstance(context["return_value"], str):
context_copy = copy.deepcopy(context)
context_copy["return_value"] = context_copy["return_value"].encode("utf-8")
return orig_info(fmt, context_copy)
return orig_info(fmt, context)
```
There must be a way to resolve this issue that I am unaware of. I would love guidance on where I should look. Thank you!
Do you happen to have a test case for reproducing this issue?
2 years too late, but I'm currently suffering from this issue, so here is a test case.
```
from celery import shared_task
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
@shared_task
def test():
logger.debug("테스트")
```
We test this in our integration suite.
Does this always reproduce or only when writing to a file?
|
2020-09-21T10:36:52Z
|
<patch>
diff --git a/celery/app/log.py b/celery/app/log.py
--- a/celery/app/log.py
+++ b/celery/app/log.py
@@ -221,7 +221,7 @@ def _detect_handler(self, logfile=None):
logfile = sys.__stderr__ if logfile is None else logfile
if hasattr(logfile, 'write'):
return logging.StreamHandler(logfile)
- return WatchedFileHandler(logfile)
+ return WatchedFileHandler(logfile, encoding='utf-8')
def _has_handler(self, logger):
return any(
</patch>
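To illustrate what the one-line change above affects, here is a short editorial sketch; the log path and message are illustrative assumptions.

```python
# encoding_demo.py -- why passing encoding='utf-8' to the file handler matters
import logging
from logging.handlers import WatchedFileHandler

# Without an explicit encoding, the handler opens the log file with the
# locale's preferred encoding; under a C/POSIX (ASCII) locale, emitting a
# record that contains non-ASCII text then raises UnicodeEncodeError.
handler = WatchedFileHandler('/tmp/worker.log', encoding='utf-8')

logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('Task succeeded with result: %r', '테스트')  # written as UTF-8
```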
|
[]
|
[]
| |||
ytdl-org__youtube-dl-18228
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for nzz.ch
rudolffischer@BueroPC-RF:~$ youtube-dl "http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209" -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209', '-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.6 - Linux-3.13.0-39-generic-x86_64-with-Ubuntu-14.04-trusty
[debug] exe versions: rtmpdump 2.4
[debug] Proxy map: {}
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Downloading webpage
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Extracting information
ERROR: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 651, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 1425, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 2, column 42
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 241, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1044, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
rudolffischer@BueroPC-RF:~$
</issue>
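For orientation, adding support for a site like nzz.ch to youtube-dl typically means writing a new extractor under `youtube_dl/extractor/` and importing it in `youtube_dl/extractor/extractors.py`. The sketch below is an editorial illustration of that pattern only: the URL pattern is inferred from the reported link, and the iframe-based parsing is a generic placeholder, not the actual NZZ page logic.

```python
# youtube_dl/extractor/nzz.py -- hypothetical skeleton, not the actual fix
from __future__ import unicode_literals

import re

from .common import InfoExtractor


class NZZIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?nzz\.ch/(?:[^/]+/)*[^/?#]+\.(?P<id>\d+)'

    def _real_extract(self, url):
        page_id = self._match_id(url)
        webpage = self._download_webpage(url, page_id)

        # Placeholder parsing: collect embedded player iframes and hand each
        # of them to whatever extractor matches.  The real article markup
        # would need to be inspected to find the actual video references.
        entries = [
            self.url_result(iframe_url)
            for iframe_url in re.findall(
                r'<iframe[^>]+src=["\']([^"\']+)', webpage)
        ]
        return self.playlist_result(entries, page_id)
```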
<code>
[start of README.md]
1 [](https://travis-ci.org/rg3/youtube-dl)
2
3 youtube-dl - download videos from youtube.com or other video platforms
4
5 - [INSTALLATION](#installation)
6 - [DESCRIPTION](#description)
7 - [OPTIONS](#options)
8 - [CONFIGURATION](#configuration)
9 - [OUTPUT TEMPLATE](#output-template)
10 - [FORMAT SELECTION](#format-selection)
11 - [VIDEO SELECTION](#video-selection)
12 - [FAQ](#faq)
13 - [DEVELOPER INSTRUCTIONS](#developer-instructions)
14 - [EMBEDDING YOUTUBE-DL](#embedding-youtube-dl)
15 - [BUGS](#bugs)
16 - [COPYRIGHT](#copyright)
17
18 # INSTALLATION
19
20 To install it right away for all UNIX users (Linux, macOS, etc.), type:
21
22 sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
23 sudo chmod a+rx /usr/local/bin/youtube-dl
24
25 If you do not have curl, you can alternatively use a recent wget:
26
27 sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
28 sudo chmod a+rx /usr/local/bin/youtube-dl
29
30 Windows users can [download an .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in any location on their [PATH](https://en.wikipedia.org/wiki/PATH_%28variable%29) except for `%SYSTEMROOT%\System32` (e.g. **do not** put in `C:\Windows\System32`).
31
32 You can also use pip:
33
34 sudo -H pip install --upgrade youtube-dl
35
36 This command will update youtube-dl if you have already installed it. See the [pypi page](https://pypi.python.org/pypi/youtube_dl) for more information.
37
38 macOS users can install youtube-dl with [Homebrew](https://brew.sh/):
39
40 brew install youtube-dl
41
42 Or with [MacPorts](https://www.macports.org/):
43
44 sudo port install youtube-dl
45
46 Alternatively, refer to the [developer instructions](#developer-instructions) for how to check out and work with the git repository. For further options, including PGP signatures, see the [youtube-dl Download Page](https://rg3.github.io/youtube-dl/download.html).
47
48 # DESCRIPTION
49 **youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
50
51 youtube-dl [OPTIONS] URL [URL...]
52
53 # OPTIONS
54 -h, --help Print this help text and exit
55 --version Print program version and exit
56 -U, --update Update this program to latest version. Make
57 sure that you have sufficient permissions
58 (run with sudo if needed)
59 -i, --ignore-errors Continue on download errors, for example to
60 skip unavailable videos in a playlist
61 --abort-on-error Abort downloading of further videos (in the
62 playlist or the command line) if an error
63 occurs
64 --dump-user-agent Display the current browser identification
65 --list-extractors List all supported extractors
66 --extractor-descriptions Output descriptions of all supported
67 extractors
68 --force-generic-extractor Force extraction to use the generic
69 extractor
70 --default-search PREFIX Use this prefix for unqualified URLs. For
71 example "gvsearch2:" downloads two videos
72 from google videos for youtube-dl "large
73 apple". Use the value "auto" to let
74 youtube-dl guess ("auto_warning" to emit a
75 warning when guessing). "error" just throws
76 an error. The default value "fixup_error"
77 repairs broken URLs, but emits an error if
78 this is not possible instead of searching.
79 --ignore-config Do not read configuration files. When given
80 in the global configuration file
81 /etc/youtube-dl.conf: Do not read the user
82 configuration in ~/.config/youtube-
83 dl/config (%APPDATA%/youtube-dl/config.txt
84 on Windows)
85 --config-location PATH Location of the configuration file; either
86 the path to the config or its containing
87 directory.
88 --flat-playlist Do not extract the videos of a playlist,
89 only list them.
90 --mark-watched Mark videos watched (YouTube only)
91 --no-mark-watched Do not mark videos watched (YouTube only)
92 --no-color Do not emit color codes in output
93
94 ## Network Options:
95 --proxy URL Use the specified HTTP/HTTPS/SOCKS proxy.
96 To enable SOCKS proxy, specify a proper
97 scheme. For example
98 socks5://127.0.0.1:1080/. Pass in an empty
99 string (--proxy "") for direct connection
100 --socket-timeout SECONDS Time to wait before giving up, in seconds
101 --source-address IP Client-side IP address to bind to
102 -4, --force-ipv4 Make all connections via IPv4
103 -6, --force-ipv6 Make all connections via IPv6
104
105 ## Geo Restriction:
106 --geo-verification-proxy URL Use this proxy to verify the IP address for
107 some geo-restricted sites. The default
108 proxy specified by --proxy (or none, if the
109 option is not present) is used for the
110 actual downloading.
111 --geo-bypass Bypass geographic restriction via faking
112 X-Forwarded-For HTTP header
113 --no-geo-bypass Do not bypass geographic restriction via
114 faking X-Forwarded-For HTTP header
115 --geo-bypass-country CODE Force bypass geographic restriction with
116 explicitly provided two-letter ISO 3166-2
117 country code
118 --geo-bypass-ip-block IP_BLOCK Force bypass geographic restriction with
119 explicitly provided IP block in CIDR
120 notation
121
122 ## Video Selection:
123 --playlist-start NUMBER Playlist video to start at (default is 1)
124 --playlist-end NUMBER Playlist video to end at (default is last)
125 --playlist-items ITEM_SPEC Playlist video items to download. Specify
126 indices of the videos in the playlist
127 separated by commas like: "--playlist-items
128 1,2,5,8" if you want to download videos
129 indexed 1, 2, 5, 8 in the playlist. You can
130 specify range: "--playlist-items
131 1-3,7,10-13", it will download the videos
132 at index 1, 2, 3, 7, 10, 11, 12 and 13.
133 --match-title REGEX Download only matching titles (regex or
134 caseless sub-string)
135 --reject-title REGEX Skip download for matching titles (regex or
136 caseless sub-string)
137 --max-downloads NUMBER Abort after downloading NUMBER files
138 --min-filesize SIZE Do not download any videos smaller than
139 SIZE (e.g. 50k or 44.6m)
140 --max-filesize SIZE Do not download any videos larger than SIZE
141 (e.g. 50k or 44.6m)
142 --date DATE Download only videos uploaded in this date
143 --datebefore DATE Download only videos uploaded on or before
144 this date (i.e. inclusive)
145 --dateafter DATE Download only videos uploaded on or after
146 this date (i.e. inclusive)
147 --min-views COUNT Do not download any videos with less than
148 COUNT views
149 --max-views COUNT Do not download any videos with more than
150 COUNT views
151 --match-filter FILTER Generic video filter. Specify any key (see
152 the "OUTPUT TEMPLATE" for a list of
153 available keys) to match if the key is
154 present, !key to check if the key is not
155 present, key > NUMBER (like "comment_count
156 > 12", also works with >=, <, <=, !=, =) to
157 compare against a number, key = 'LITERAL'
158 (like "uploader = 'Mike Smith'", also works
159 with !=) to match against a string literal
160 and & to require multiple matches. Values
161 which are not known are excluded unless you
162 put a question mark (?) after the operator.
163 For example, to only match videos that have
164 been liked more than 100 times and disliked
165 less than 50 times (or the dislike
166 functionality is not available at the given
167 service), but who also have a description,
168 use --match-filter "like_count > 100 &
169 dislike_count <? 50 & description" .
170 --no-playlist Download only the video, if the URL refers
171 to a video and a playlist.
172 --yes-playlist Download the playlist, if the URL refers to
173 a video and a playlist.
174 --age-limit YEARS Download only videos suitable for the given
175 age
176 --download-archive FILE Download only videos not listed in the
177 archive file. Record the IDs of all
178 downloaded videos in it.
179 --include-ads Download advertisements as well
180 (experimental)
181
182 ## Download Options:
183 -r, --limit-rate RATE Maximum download rate in bytes per second
184 (e.g. 50K or 4.2M)
185 -R, --retries RETRIES Number of retries (default is 10), or
186 "infinite".
187 --fragment-retries RETRIES Number of retries for a fragment (default
188 is 10), or "infinite" (DASH, hlsnative and
189 ISM)
190 --skip-unavailable-fragments Skip unavailable fragments (DASH, hlsnative
191 and ISM)
192 --abort-on-unavailable-fragment Abort downloading when some fragment is not
193 available
194 --keep-fragments Keep downloaded fragments on disk after
195 downloading is finished; fragments are
196 erased by default
197 --buffer-size SIZE Size of download buffer (e.g. 1024 or 16K)
198 (default is 1024)
199 --no-resize-buffer Do not automatically adjust the buffer
200 size. By default, the buffer size is
201 automatically resized from an initial value
202 of SIZE.
203 --http-chunk-size SIZE Size of a chunk for chunk-based HTTP
204 downloading (e.g. 10485760 or 10M) (default
205 is disabled). May be useful for bypassing
206 bandwidth throttling imposed by a webserver
207 (experimental)
208 --playlist-reverse Download playlist videos in reverse order
209 --playlist-random Download playlist videos in random order
210 --xattr-set-filesize Set file xattribute ytdl.filesize with
211 expected file size
212 --hls-prefer-native Use the native HLS downloader instead of
213 ffmpeg
214 --hls-prefer-ffmpeg Use ffmpeg instead of the native HLS
215 downloader
216 --hls-use-mpegts Use the mpegts container for HLS videos,
217 allowing to play the video while
218 downloading (some players may not be able
219 to play it)
220 --external-downloader COMMAND Use the specified external downloader.
221 Currently supports
222 aria2c,avconv,axel,curl,ffmpeg,httpie,wget
223 --external-downloader-args ARGS Give these arguments to the external
224 downloader
225
226 ## Filesystem Options:
227 -a, --batch-file FILE File containing URLs to download ('-' for
228 stdin), one URL per line. Lines starting
229 with '#', ';' or ']' are considered as
230 comments and ignored.
231 --id Use only video ID in file name
232 -o, --output TEMPLATE Output filename template, see the "OUTPUT
233 TEMPLATE" for all the info
234 --autonumber-start NUMBER Specify the start value for %(autonumber)s
235 (default is 1)
236 --restrict-filenames Restrict filenames to only ASCII
237 characters, and avoid "&" and spaces in
238 filenames
239 -w, --no-overwrites Do not overwrite files
240 -c, --continue Force resume of partially downloaded files.
241 By default, youtube-dl will resume
242 downloads if possible.
243 --no-continue Do not resume partially downloaded files
244 (restart from beginning)
245 --no-part Do not use .part files - write directly
246 into output file
247 --no-mtime Do not use the Last-modified header to set
248 the file modification time
249 --write-description Write video description to a .description
250 file
251 --write-info-json Write video metadata to a .info.json file
252 --write-annotations Write video annotations to a
253 .annotations.xml file
254 --load-info-json FILE JSON file containing the video information
255 (created with the "--write-info-json"
256 option)
257 --cookies FILE File to read cookies from and dump cookie
258 jar in
259 --cache-dir DIR Location in the filesystem where youtube-dl
260 can store some downloaded information
261 permanently. By default
262 $XDG_CACHE_HOME/youtube-dl or
263 ~/.cache/youtube-dl . At the moment, only
264 YouTube player files (for videos with
265 obfuscated signatures) are cached, but that
266 may change.
267 --no-cache-dir Disable filesystem caching
268 --rm-cache-dir Delete all filesystem cache files
269
270 ## Thumbnail images:
271 --write-thumbnail Write thumbnail image to disk
272 --write-all-thumbnails Write all thumbnail image formats to disk
273 --list-thumbnails Simulate and list all available thumbnail
274 formats
275
276 ## Verbosity / Simulation Options:
277 -q, --quiet Activate quiet mode
278 --no-warnings Ignore warnings
279 -s, --simulate Do not download the video and do not write
280 anything to disk
281 --skip-download Do not download the video
282 -g, --get-url Simulate, quiet but print URL
283 -e, --get-title Simulate, quiet but print title
284 --get-id Simulate, quiet but print id
285 --get-thumbnail Simulate, quiet but print thumbnail URL
286 --get-description Simulate, quiet but print video description
287 --get-duration Simulate, quiet but print video length
288 --get-filename Simulate, quiet but print output filename
289 --get-format Simulate, quiet but print output format
290 -j, --dump-json Simulate, quiet but print JSON information.
291 See the "OUTPUT TEMPLATE" for a description
292 of available keys.
293 -J, --dump-single-json Simulate, quiet but print JSON information
294 for each command-line argument. If the URL
295 refers to a playlist, dump the whole
296 playlist information in a single line.
297 --print-json Be quiet and print the video information as
298 JSON (video is still being downloaded).
299 --newline Output progress bar as new lines
300 --no-progress Do not print progress bar
301 --console-title Display progress in console titlebar
302 -v, --verbose Print various debugging information
303 --dump-pages Print downloaded pages encoded using base64
304 to debug problems (very verbose)
305 --write-pages Write downloaded intermediary pages to
306 files in the current directory to debug
307 problems
308 --print-traffic Display sent and read HTTP traffic
309 -C, --call-home Contact the youtube-dl server for debugging
310 --no-call-home Do NOT contact the youtube-dl server for
311 debugging
312
313 ## Workarounds:
314 --encoding ENCODING Force the specified encoding (experimental)
315 --no-check-certificate Suppress HTTPS certificate validation
316 --prefer-insecure Use an unencrypted connection to retrieve
317 information about the video. (Currently
318 supported only for YouTube)
319 --user-agent UA Specify a custom user agent
320 --referer URL Specify a custom referer, use if the video
321 access is restricted to one domain
322 --add-header FIELD:VALUE Specify a custom HTTP header and its value,
323 separated by a colon ':'. You can use this
324 option multiple times
325 --bidi-workaround Work around terminals that lack
326 bidirectional text support. Requires bidiv
327 or fribidi executable in PATH
328 --sleep-interval SECONDS Number of seconds to sleep before each
329 download when used alone or a lower bound
330 of a range for randomized sleep before each
331 download (minimum possible number of
332 seconds to sleep) when used along with
333 --max-sleep-interval.
334 --max-sleep-interval SECONDS Upper bound of a range for randomized sleep
335 before each download (maximum possible
336 number of seconds to sleep). Must only be
337 used along with --min-sleep-interval.
338
339 ## Video Format Options:
340 -f, --format FORMAT Video format code, see the "FORMAT
341 SELECTION" for all the info
342 --all-formats Download all available video formats
343 --prefer-free-formats Prefer free video formats unless a specific
344 one is requested
345 -F, --list-formats List all available formats of requested
346 videos
347 --youtube-skip-dash-manifest Do not download the DASH manifests and
348 related data on YouTube videos
349 --merge-output-format FORMAT If a merge is required (e.g.
350 bestvideo+bestaudio), output to given
351 container format. One of mkv, mp4, ogg,
352 webm, flv. Ignored if no merge is required
353
354 ## Subtitle Options:
355 --write-sub Write subtitle file
356 --write-auto-sub Write automatically generated subtitle file
357 (YouTube only)
358 --all-subs Download all the available subtitles of the
359 video
360 --list-subs List all available subtitles for the video
361 --sub-format FORMAT Subtitle format, accepts formats
362 preference, for example: "srt" or
363 "ass/srt/best"
364 --sub-lang LANGS Languages of the subtitles to download
365 (optional) separated by commas, use --list-
366 subs for available language tags
367
368 ## Authentication Options:
369 -u, --username USERNAME Login with this account ID
370 -p, --password PASSWORD Account password. If this option is left
371 out, youtube-dl will ask interactively.
372 -2, --twofactor TWOFACTOR Two-factor authentication code
373 -n, --netrc Use .netrc authentication data
374 --video-password PASSWORD Video password (vimeo, smotri, youku)
375
376 ## Adobe Pass Options:
377 --ap-mso MSO Adobe Pass multiple-system operator (TV
378 provider) identifier, use --ap-list-mso for
379 a list of available MSOs
380 --ap-username USERNAME Multiple-system operator account login
381 --ap-password PASSWORD Multiple-system operator account password.
382 If this option is left out, youtube-dl will
383 ask interactively.
384 --ap-list-mso List all supported multiple-system
385 operators
386
387 ## Post-processing Options:
388 -x, --extract-audio Convert video files to audio-only files
389 (requires ffmpeg or avconv and ffprobe or
390 avprobe)
391 --audio-format FORMAT Specify audio format: "best", "aac",
392 "flac", "mp3", "m4a", "opus", "vorbis", or
393 "wav"; "best" by default; No effect without
394 -x
395 --audio-quality QUALITY Specify ffmpeg/avconv audio quality, insert
396 a value between 0 (better) and 9 (worse)
397 for VBR or a specific bitrate like 128K
398 (default 5)
399 --recode-video FORMAT Encode the video to another format if
400 necessary (currently supported:
401 mp4|flv|ogg|webm|mkv|avi)
402 --postprocessor-args ARGS Give these arguments to the postprocessor
403 -k, --keep-video Keep the video file on disk after the post-
404 processing; the video is erased by default
405 --no-post-overwrites Do not overwrite post-processed files; the
406 post-processed files are overwritten by
407 default
408 --embed-subs Embed subtitles in the video (only for mp4,
409 webm and mkv videos)
410 --embed-thumbnail Embed thumbnail in the audio as cover art
411 --add-metadata Write metadata to the video file
412 --metadata-from-title FORMAT Parse additional metadata like song title /
413 artist from the video title. The format
414 syntax is the same as --output. Regular
415 expression with named capture groups may
416 also be used. The parsed parameters replace
417 existing values. Example: --metadata-from-
418 title "%(artist)s - %(title)s" matches a
419 title like "Coldplay - Paradise". Example
420 (regex): --metadata-from-title
421 "(?P<artist>.+?) - (?P<title>.+)"
422 --xattrs Write metadata to the video file's xattrs
423 (using dublin core and xdg standards)
424 --fixup POLICY Automatically correct known faults of the
425 file. One of never (do nothing), warn (only
426 emit a warning), detect_or_warn (the
427 default; fix file if we can, warn
428 otherwise)
429 --prefer-avconv Prefer avconv over ffmpeg for running the
430 postprocessors
431 --prefer-ffmpeg Prefer ffmpeg over avconv for running the
432 postprocessors (default)
433 --ffmpeg-location PATH Location of the ffmpeg/avconv binary;
434 either the path to the binary or its
435 containing directory.
436 --exec CMD Execute a command on the file after
437 downloading, similar to find's -exec
438 syntax. Example: --exec 'adb push {}
439 /sdcard/Music/ && rm {}'
440 --convert-subs FORMAT Convert the subtitles to other format
441 (currently supported: srt|ass|vtt|lrc)
442
443 # CONFIGURATION
444
445 You can configure youtube-dl by placing any supported command line option in a configuration file. On Linux and macOS, the system-wide configuration file is located at `/etc/youtube-dl.conf` and the user-wide configuration file at `~/.config/youtube-dl/config`. On Windows, the user-wide configuration file locations are `%APPDATA%\youtube-dl\config.txt` or `C:\Users\<user name>\youtube-dl.conf`. Note that the configuration file may not exist by default, so you may need to create it yourself.
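
On Linux or macOS, a minimal sketch for creating an empty user-wide configuration file at the location mentioned above (adjust the path to your setup):

```bash
# Create the user-wide configuration file
mkdir -p ~/.config/youtube-dl
touch ~/.config/youtube-dl/config
```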
446
447 For example, with the following configuration file youtube-dl will always extract the audio, not copy the mtime, use a proxy and save all videos under `Movies` directory in your home directory:
448 ```
449 # Lines starting with # are comments
450
451 # Always extract audio
452 -x
453
454 # Do not copy the mtime
455 --no-mtime
456
457 # Use this proxy
458 --proxy 127.0.0.1:3128
459
460 # Save all videos under Movies directory in your home directory
461 -o ~/Movies/%(title)s.%(ext)s
462 ```
463
464 Note that options in a configuration file are just the same options (switches) used in regular command line calls, so there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy`, but not `- o` or `-- proxy`.
465
466 You can use `--ignore-config` if you want to disable the configuration file for a particular youtube-dl run.
467
468 You can also use `--config-location` if you want to use a custom configuration file for a particular youtube-dl run.
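
For example, a quick sketch of both options; the configuration file path is a placeholder:

```bash
# Ignore all configuration files for this run only
youtube-dl --ignore-config "https://www.youtube.com/watch?v=BaW_jenozKc"

# Use a custom configuration file for this run only (hypothetical path)
youtube-dl --config-location ~/yt-dl/audio-only.conf "https://www.youtube.com/watch?v=BaW_jenozKc"
```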
469
470 ### Authentication with `.netrc` file
471
472 You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every youtube-dl execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](https://stackoverflow.com/tags/.netrc/info) on a per extractor basis. For that you will need to create a `.netrc` file in your `$HOME` and restrict permissions to read/write by only you:
473 ```
474 touch $HOME/.netrc
475 chmod a-rwx,u+rw $HOME/.netrc
476 ```
477 After that you can add credentials for an extractor in the following format, where *extractor* is the name of the extractor in lowercase:
478 ```
479 machine <extractor> login <login> password <password>
480 ```
481 For example:
482 ```
483 machine youtube login [email protected] password my_youtube_password
484 machine twitch login my_twitch_account_name password my_twitch_password
485 ```
486 To activate authentication with the `.netrc` file you should pass `--netrc` to youtube-dl or place it in the [configuration file](#configuration).
487
488 On Windows you may also need to set up the `%HOME%` environment variable manually. For example:
489 ```
490 set HOME=%USERPROFILE%
491 ```
492
493 # OUTPUT TEMPLATE
494
495 The `-o` option allows users to indicate a template for the output file names.
496
497 **tl;dr:** [navigate me to examples](#output-template-examples).
498
499 The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "https://some/video"`. However, the template may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [python string formatting operations](https://docs.python.org/2/library/stdtypes.html#string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations. Allowed names along with sequence type are:
500
501 - `id` (string): Video identifier
502 - `title` (string): Video title
503 - `url` (string): Video URL
504 - `ext` (string): Video filename extension
505 - `alt_title` (string): A secondary title of the video
506 - `display_id` (string): An alternative identifier for the video
507 - `uploader` (string): Full name of the video uploader
508 - `license` (string): License name the video is licensed under
509 - `creator` (string): The creator of the video
510 - `release_date` (string): The date (YYYYMMDD) when the video was released
511 - `timestamp` (numeric): UNIX timestamp of the moment the video became available
512 - `upload_date` (string): Video upload date (YYYYMMDD)
513 - `uploader_id` (string): Nickname or id of the video uploader
514 - `channel` (string): Full name of the channel the video is uploaded on
515 - `channel_id` (string): Id of the channel
516 - `location` (string): Physical location where the video was filmed
517 - `duration` (numeric): Length of the video in seconds
518 - `view_count` (numeric): How many users have watched the video on the platform
519 - `like_count` (numeric): Number of positive ratings of the video
520 - `dislike_count` (numeric): Number of negative ratings of the video
521 - `repost_count` (numeric): Number of reposts of the video
522 - `average_rating` (numeric): Average rating given by users; the scale used depends on the webpage
523 - `comment_count` (numeric): Number of comments on the video
524 - `age_limit` (numeric): Age restriction for the video (years)
525 - `is_live` (boolean): Whether this video is a live stream or a fixed-length video
526 - `start_time` (numeric): Time in seconds where the reproduction should start, as specified in the URL
527 - `end_time` (numeric): Time in seconds where the reproduction should end, as specified in the URL
528 - `format` (string): A human-readable description of the format
529 - `format_id` (string): Format code specified by `--format`
530 - `format_note` (string): Additional info about the format
531 - `width` (numeric): Width of the video
532 - `height` (numeric): Height of the video
533 - `resolution` (string): Textual description of width and height
534 - `tbr` (numeric): Average bitrate of audio and video in KBit/s
535 - `abr` (numeric): Average audio bitrate in KBit/s
536 - `acodec` (string): Name of the audio codec in use
537 - `asr` (numeric): Audio sampling rate in Hertz
538 - `vbr` (numeric): Average video bitrate in KBit/s
539 - `fps` (numeric): Frame rate
540 - `vcodec` (string): Name of the video codec in use
541 - `container` (string): Name of the container format
542 - `filesize` (numeric): The number of bytes, if known in advance
543 - `filesize_approx` (numeric): An estimate for the number of bytes
544 - `protocol` (string): The protocol that will be used for the actual download
545 - `extractor` (string): Name of the extractor
546 - `extractor_key` (string): Key name of the extractor
547 - `epoch` (numeric): Unix epoch when creating the file
548 - `autonumber` (numeric): Five-digit number that will be increased with each download, starting at zero
549 - `playlist` (string): Name or id of the playlist that contains the video
550 - `playlist_index` (numeric): Index of the video in the playlist padded with leading zeros according to the total length of the playlist
551 - `playlist_id` (string): Playlist identifier
552 - `playlist_title` (string): Playlist title
553 - `playlist_uploader` (string): Full name of the playlist uploader
554 - `playlist_uploader_id` (string): Nickname or id of the playlist uploader
555
556 Available for the video that belongs to some logical chapter or section:
557
558 - `chapter` (string): Name or title of the chapter the video belongs to
559 - `chapter_number` (numeric): Number of the chapter the video belongs to
560 - `chapter_id` (string): Id of the chapter the video belongs to
561
562 Available for the video that is an episode of some series or programme:
563
564 - `series` (string): Title of the series or programme the video episode belongs to
565 - `season` (string): Title of the season the video episode belongs to
566 - `season_number` (numeric): Number of the season the video episode belongs to
567 - `season_id` (string): Id of the season the video episode belongs to
568 - `episode` (string): Title of the video episode
569 - `episode_number` (numeric): Number of the video episode within a season
570 - `episode_id` (string): Id of the video episode
571
572 Available for the media that is a track or a part of a music album:
573
574 - `track` (string): Title of the track
575 - `track_number` (numeric): Number of the track within an album or a disc
576 - `track_id` (string): Id of the track
577 - `artist` (string): Artist(s) of the track
578 - `genre` (string): Genre(s) of the track
579 - `album` (string): Title of the album the track belongs to
580 - `album_type` (string): Type of the album
581 - `album_artist` (string): List of all artists appeared on the album
582 - `disc_number` (numeric): Number of the disc or other physical medium the track belongs to
583 - `release_year` (numeric): Year (YYYY) when the album was released
584
585 Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. Note that some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with `NA`.
586
587 For example for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `youtube-dl test video` and id `BaW_jenozKcj`, this will result in a `youtube-dl test video-BaW_jenozKcj.mp4` file created in the current directory.
588
589 For numeric sequences you can use numeric related formatting, for example, `%(view_count)05d` will result in a string with view count padded with zeros up to 5 characters, like in `00042`.
590
591 Output templates can also contain an arbitrary hierarchical path, e.g. `-o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s'`, which will result in downloading each video into a directory corresponding to this path template. Any missing directory will be automatically created for you.
592
593 To use percent literals in an output template use `%%`. To output to stdout use `-o -`.
594
595 The current default template is `%(title)s-%(id)s.%(ext)s`.
596
597 In some cases, you don't want special characters such as 中, spaces, or &, for example when transferring the downloaded filename to a Windows system or through an 8-bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title (see the [output template examples](#output-template-examples) below).
598
599 #### Output template and Windows batch files
600
601 If you are using an output template inside a Windows batch file then you must escape plain percent characters (`%`) by doubling, so that `-o "%(title)s-%(id)s.%(ext)s"` should become `-o "%%(title)s-%%(id)s.%%(ext)s"`. However you should not touch `%`'s that are not plain characters, e.g. environment variables for expansion should stay intact: `-o "C:\%HOMEPATH%\Desktop\%%(title)s.%%(ext)s"`.
602
603 #### Output template examples
604
605 Note that on Windows you may need to use double quotes instead of single.
606
607 ```bash
608 $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc
609 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
610
611 $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc --restrict-filenames
612 youtube-dl_test_video_.mp4 # A simple file name
613
614 # Download YouTube playlist videos in separate directory indexed by video order in a playlist
615 $ youtube-dl -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re
616
617 # Download all playlists of YouTube channel/user keeping each playlist in separate directory:
618 $ youtube-dl -o '%(uploader)s/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/user/TheLinuxFoundation/playlists
619
620 # Download Udemy course keeping each chapter in separate directory under MyVideos directory in your home
621 $ youtube-dl -u user -p password -o '~/MyVideos/%(playlist)s/%(chapter_number)s - %(chapter)s/%(title)s.%(ext)s' https://www.udemy.com/java-tutorial/
622
623 # Download entire series season keeping each series and each season in separate directory under C:/MyVideos
624 $ youtube-dl -o "C:/MyVideos/%(series)s/%(season_number)s - %(season)s/%(episode_number)s - %(episode)s.%(ext)s" https://videomore.ru/kino_v_detalayah/5_sezon/367617
625
626 # Stream the video being downloaded to stdout
627 $ youtube-dl -o - BaW_jenozKc
628 ```
629
630 # FORMAT SELECTION
631
632 By default youtube-dl tries to download the best available quality, i.e. if you want the best quality you **don't need** to pass any special options; youtube-dl will guess it for you by **default**.
633
634 But sometimes you may want to download in a different format, for example when you are on a slow or intermittent connection. The key mechanism for achieving this is the so-called *format selection*, with which you can explicitly specify the desired format, select formats based on some criterion or criteria, set up precedence and much more.
635
636 The general syntax for format selection is `--format FORMAT` or, shorter, `-f FORMAT`, where `FORMAT` is a *selector expression*, i.e. an expression that describes the format or formats you would like to download.
637
638 **tl;dr:** [navigate me to examples](#format-selection-examples).
639
640 The simplest case is requesting a specific format; for example, with `-f 22` you can download the format with format code equal to 22. You can get the list of available format codes for a particular video using `--list-formats` or `-F`. Note that these format codes are extractor specific.
641
642 You can also use a file extension (currently `3gp`, `aac`, `flv`, `m4a`, `mp3`, `mp4`, `ogg`, `wav`, `webm` are supported) to download the best quality format of a particular file extension served as a single file, e.g. `-f webm` will download the best quality format with the `webm` extension served as a single file.
643
644 You can also use special names to select particular edge case formats:
645 - `best`: Select the best quality format represented by a single file with video and audio.
646 - `worst`: Select the worst quality format represented by a single file with video and audio.
647 - `bestvideo`: Select the best quality video-only format (e.g. DASH video). May not be available.
648 - `worstvideo`: Select the worst quality video-only format. May not be available.
649 - `bestaudio`: Select the best quality audio-only format. May not be available.
650 - `worstaudio`: Select the worst quality audio-only format. May not be available.
651
652 For example, to download the worst quality video-only format you can use `-f worstvideo`.
653
654 If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes. Note that slash is left-associative, i.e. formats on the left hand side are preferred, for example `-f 22/17/18` will download format 22 if it's available, otherwise it will download format 17 if it's available, otherwise it will download format 18 if it's available, otherwise it will complain that no suitable formats are available for download.
655
656 If you want to download several formats of the same video use a comma as a separator, e.g. `-f 22,17,18` will download all three of these formats if they are available. Or a more sophisticated example combined with the precedence feature: `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`.
657
658 You can also filter the video formats by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`).
659
660 The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `>=`, `=` (equals), `!=` (not equals):
661 - `filesize`: The number of bytes, if known in advance
662 - `width`: Width of the video, if known
663 - `height`: Height of the video, if known
664 - `tbr`: Average bitrate of audio and video in KBit/s
665 - `abr`: Average audio bitrate in KBit/s
666 - `vbr`: Average video bitrate in KBit/s
667 - `asr`: Audio sampling rate in Hertz
668 - `fps`: Frame rate
669
670 Filtering also works for the comparisons `=` (equals), `!=` (not equals), `^=` (begins with), `$=` (ends with), `*=` (contains) and the following string meta fields:
671 - `ext`: File extension
672 - `acodec`: Name of the audio codec in use
673 - `vcodec`: Name of the video codec in use
674 - `container`: Name of the container format
675 - `protocol`: The protocol that will be used for the actual download, lower-case (`http`, `https`, `rtsp`, `rtmp`, `rtmpe`, `mms`, `f4m`, `ism`, `http_dash_segments`, `m3u8`, or `m3u8_native`)
676 - `format_id`: A short description of the format
677
678 Note that none of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by a particular extractor, i.e. the metadata offered by the video hoster.
679
680 Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s.
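
As a sketch, string filters can be chained in the same way; the `avc1` prefix used here is a typical value for H.264 video but depends on what the site actually reports:

```bash
# Prefer an mp4 file whose video codec starts with avc1, with a fallback after the slash
youtube-dl -f 'best[ext=mp4][vcodec^=avc1]/best' "https://some/video"
```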
681
682 You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv installed), for example `-f bestvideo+bestaudio` will download the best video-only format, the best audio-only format and mux them together with ffmpeg/avconv.
683
684 Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
685
686 Since the end of April 2015 and version 2015.04.26, youtube-dl uses `-f bestvideo+bestaudio/best` as the default format selection (see [#5447](https://github.com/rg3/youtube-dl/issues/5447), [#5456](https://github.com/rg3/youtube-dl/issues/5456)). If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some DASH formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery immediately to your player and not to wait until `bestvideo` and `bestaudio` are downloaded and muxed.
687
688 If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with `-f best`. You may want to add it to the [configuration file](#configuration) in order not to type it every time you run youtube-dl.
689
690 #### Format selection examples
691
692 Note that on Windows you may need to use double quotes instead of single.
693
694 ```bash
695 # Download best mp4 format available or any other best if no mp4 available
696 $ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best'
697
698 # Download best format available but not better than 480p
699 $ youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]'
700
701 # Download best video only format but no bigger than 50 MB
702 $ youtube-dl -f 'best[filesize<50M]'
703
704 # Download best format available via direct link over HTTP/HTTPS protocol
705 $ youtube-dl -f '(bestvideo+bestaudio/best)[protocol^=http]'
706
707 # Download the best video format and the best audio format without merging them
708 $ youtube-dl -f 'bestvideo,bestaudio' -o '%(title)s.f%(format_id)s.%(ext)s'
709 ```
710 Note that in the last example, an output template is recommended as bestvideo and bestaudio may have the same file name.
711
712
713 # VIDEO SELECTION
714
715 Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`. They accept dates in two formats:
716
717 - Absolute dates: Dates in the format `YYYYMMDD`.
718 - Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
719
720 Examples:
721
722 ```bash
723 # Download only the videos uploaded in the last 6 months
724 $ youtube-dl --dateafter now-6months
725
726 # Download only the videos uploaded on January 1, 1970
727 $ youtube-dl --date 19700101
728
729 # Download only the videos uploaded in the 200x decade
730 $ youtube-dl --dateafter 20000101 --datebefore 20091231
731 ```
732
733 # FAQ
734
735 ### How do I update youtube-dl?
736
737 If you've followed [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html), you can simply run `youtube-dl -U` (or, on Linux, `sudo youtube-dl -U`).
738
739 If you have used pip, a simple `sudo pip install -U youtube-dl` is sufficient to update.
740
741 If you have installed youtube-dl using a package manager like *apt-get* or *yum*, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily. Simply go to https://yt-dl.org to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.
742
743 As a last resort, you can also uninstall the version installed by your package manager and follow our manual installation instructions. For that, remove the distribution's package, with a line like
744
745 sudo apt-get remove -y youtube-dl
746
747 Afterwards, simply follow [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html):
748
749 ```
750 sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
751 sudo chmod a+x /usr/local/bin/youtube-dl
752 hash -r
753 ```
754
755 Again, from then on you'll be able to update with `sudo youtube-dl -U`.
756
757 ### youtube-dl is extremely slow to start on Windows
758
759 Add a file exclusion for `youtube-dl.exe` in Windows Defender settings.
760
761 ### I'm getting an error `Unable to extract OpenGraph title` on YouTube playlists
762
763 YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.
764
765 If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to [report bugs](https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the [Ubuntu packaging people](mailto:[email protected]?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update.
766
767 ### I'm getting an error when trying to use output template: `error: using output template conflicts with using title, video ID or auto number`
768
769 Make sure you are not using `-o` together with any of the options `-t`, `--title`, `--id`, `-A` or `--auto-number` set on the command line or in a configuration file. Remove the latter if present.
770
771 ### Do I always have to pass `-citw`?
772
773 By default, youtube-dl intends to have the best options (incidentally, if you have a convincing case that these should be different, [please file an issue where you explain that](https://yt-dl.org/bug)). Therefore, it is unnecessary and sometimes harmful to copy long option strings from webpages. In particular, the only option out of `-citw` that is regularly useful is `-i`.
774
775 ### Can you please put the `-b` option back?
776
777 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the `-b` option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the `-f` option and youtube-dl will try to download it.
778
779 ### I get HTTP error 402 when trying to download a video. What's this?
780
781 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering providing a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is to point a web browser at the YouTube URL, solve the CAPTCHA, and restart youtube-dl.
782
783 ### Do I need any other programs?
784
785 youtube-dl works fine on its own on most sites. However, if you want to convert video/audio, you'll need [avconv](https://libav.org/) or [ffmpeg](https://www.ffmpeg.org/). On some sites - most notably YouTube - videos can be retrieved in a higher quality format without sound. youtube-dl will detect whether avconv/ffmpeg is present and automatically pick the best option.
786
787 Videos or video formats streamed via RTMP protocol can only be downloaded when [rtmpdump](https://rtmpdump.mplayerhq.hu/) is installed. Downloading MMS and RTSP videos requires either [mplayer](https://mplayerhq.hu/) or [mpv](https://mpv.io/) to be installed.
788
789 ### I have downloaded a video but how can I play it?
790
791 Once the video is fully downloaded, use any video player, such as [mpv](https://mpv.io/), [vlc](https://www.videolan.org/) or [mplayer](https://www.mplayerhq.hu/).
792
793 ### I extracted a video URL with `-g`, but it does not play on another machine / in my web browser.
794
795 It depends a lot on the service. In many cases, requests for the video (to download/play it) must come from the same IP address and with the same cookies and/or HTTP headers. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use `--dump-user-agent` to see the one in use by youtube-dl. You can also get necessary cookies and HTTP headers from JSON output obtained with `--dump-json`.
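
A sketch of that workflow, using wget purely as an illustrative downloader (file names are placeholders):

```bash
# Let youtube-dl write the cookie jar and print the direct media URL
youtube-dl --cookies cookies.txt -g "https://some/video" > url.txt

# Reuse the same cookies when fetching that URL with another tool
wget --load-cookies cookies.txt -i url.txt
```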
796
797 It may be beneficial to use IPv6; in some cases, the restrictions are only applied to IPv4. Some services (sometimes only for a subset of videos) do not restrict the video URL by IP address, cookie, or user-agent, but these are the exception rather than the rule.
798
799 Please bear in mind that some URL protocols are **not** supported by browsers out of the box, including RTMP. If you are using `-g`, your own downloader must support these as well.
800
801 If you want to play the video on a machine that is not running youtube-dl, you can relay the video content from the machine that runs youtube-dl. You can use `-o -` to let youtube-dl stream a video to stdout, or simply allow the player to download the files written by youtube-dl in turn.
802
803 ### ERROR: no fmt_url_map or conn information found in video info
804
805 YouTube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
806
807 ### ERROR: unable to download video
808
809 YouTube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
810
811 ### Video URL contains an ampersand and I'm getting some strange output `[1] 2839` or `'v' is not recognized as an internal or external command`
812
813 That's actually the output from your shell. Since ampersand is one of the special shell characters it's interpreted by the shell preventing you from passing the whole URL to youtube-dl. To disable your shell from interpreting the ampersands (or any other special characters) you have to either put the whole URL in quotes or escape them with a backslash (which approach will work depends on your shell).
814
815 For example if your URL is https://www.youtube.com/watch?t=4&v=BaW_jenozKc you should end up with following command:
816
817 ```youtube-dl 'https://www.youtube.com/watch?t=4&v=BaW_jenozKc'```
818
819 or
820
821 ```youtube-dl https://www.youtube.com/watch?t=4\&v=BaW_jenozKc```
822
823 For Windows you have to use the double quotes:
824
825 ```youtube-dl "https://www.youtube.com/watch?t=4&v=BaW_jenozKc"```
826
827 ### ExtractorError: Could not find JS function u'OF'
828
829 In February 2015, the new YouTube player contained a character sequence in a string that was misinterpreted by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
830
831 ### HTTP Error 429: Too Many Requests or 402: Payment Required
832
833 These two error codes indicate that the service is blocking your IP address because of overuse. Contact the service and ask them to unblock your IP address, or - if you have acquired a whitelisted IP address already - use the [`--proxy` or `--source-address` options](#network-options) to select another IP address.
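
For example, a sketch of both options (the proxy address and the source IP are placeholders):

```bash
# Route all requests through a SOCKS5 proxy
youtube-dl --proxy socks5://127.0.0.1:1080/ "https://some/video"

# Or bind to a specific client-side IP address
youtube-dl --source-address 192.0.2.10 "https://some/video"
```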
834
835 ### SyntaxError: Non-ASCII character
836
837 The error
838
839 File "youtube-dl", line 2
840 SyntaxError: Non-ASCII character '\x93' ...
841
842 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
843
844 ### What is this binary file? Where has the code gone?
845
846 Since June 2012 ([#342](https://github.com/rg3/youtube-dl/issues/342)) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
847
848 ### The exe throws an error due to missing `MSVCR100.dll`
849
850 To run the exe you need to install first the [Microsoft Visual C++ 2010 Redistributable Package (x86)](https://www.microsoft.com/en-US/download/details.aspx?id=5555).
851
852 ### On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?
853
854 If you put youtube-dl and ffmpeg in the same directory that you're running the command from, it will work, but that's rather cumbersome.
855
856 To make a different directory work - either for ffmpeg, or for youtube-dl, or for both - simply create the directory (say, `C:\bin`, or `C:\Users\<User name>\bin`), put all the executables directly in there, and then [set your PATH environment variable](https://www.java.com/en/download/help/path.xml) to include that directory.
857
858 From then on, after restarting your shell, you will be able to access both youtube-dl and ffmpeg (and youtube-dl will be able to find ffmpeg) by simply typing `youtube-dl` or `ffmpeg`, no matter what directory you're in.
859
860 ### How do I put downloads into a specific folder?
861
862 Use the `-o` to specify an [output template](#output-template), for example `-o "/home/user/videos/%(title)s-%(id)s.%(ext)s"`. If you want this for all of your downloads, put the option into your [configuration file](#configuration).
863
864 ### How do I download a video starting with a `-`?
865
866 Either prepend `https://www.youtube.com/watch?v=` or separate the ID from the options with `--`:
867
868 youtube-dl -- -wNyEUrxzFU
869 youtube-dl "https://www.youtube.com/watch?v=-wNyEUrxzFU"
870
871 ### How do I pass cookies to youtube-dl?
872
873 Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`.
874
875 In order to extract cookies from your browser, use any conforming browser extension for exporting cookies. For example, [cookies.txt](https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg) (for Chrome) or [cookies.txt](https://addons.mozilla.org/en-US/firefox/addon/cookies-txt/) (for Firefox).
876
877 Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows and `LF` (`\n`) for Unix and Unix-like systems (Linux, macOS, etc.). `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format.
878
879 Passing cookies to youtube-dl is a good way to work around login when a particular extractor does not implement it explicitly. Another use case is working around the [CAPTCHA](https://en.wikipedia.org/wiki/CAPTCHA) that some websites require you to solve in particular cases in order to get access (e.g. YouTube, CloudFlare).
880
881 ### How do I stream directly to media player?
882
883 You will first need to tell youtube-dl to stream media to stdout with `-o -`, and also tell your media player to read from stdin (it must be capable of this for streaming) and then pipe former to latter. For example, streaming to [vlc](https://www.videolan.org/) can be achieved with:
884
885 youtube-dl -o - "https://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -
886
887 ### How do I download only new videos from a playlist?
888
889 Use the download-archive feature. With this feature you should initially download the complete playlist with `--download-archive /path/to/download/archive/file.txt`, which will record the identifiers of all the videos in a special file. Each subsequent run with the same `--download-archive` will download only new videos and skip all videos that have been downloaded before. Note that only successful downloads are recorded in the file.
890
891 For example, at first,
892
893 youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"
894
895 will download the complete `PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re` playlist and create a file `archive.txt`. Each subsequent run will only download new videos if any:
896
897 youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"
898
899 ### Should I add `--hls-prefer-native` into my config?
900
901 When youtube-dl detects an HLS video, it can download it either with the built-in downloader or ffmpeg. Since many HLS streams are slightly invalid and ffmpeg/youtube-dl each handle some invalid cases better than the other, there is an option to switch the downloader if needed.
902
903 When youtube-dl knows that one particular downloader works better for a given website, that downloader will be picked. Otherwise, youtube-dl will pick the best downloader for general compatibility, which at the moment happens to be ffmpeg. This choice may change in future versions of youtube-dl, with improvements of the built-in downloader and/or ffmpeg.
904
905 In particular, the generic extractor (used when your website is not in the [list of supported sites by youtube-dl](https://rg3.github.io/youtube-dl/supportedsites.html)) cannot mandate one specific downloader.
906
907 If you put either `--hls-prefer-native` or `--hls-prefer-ffmpeg` into your configuration, a different subset of videos will fail to download correctly. Instead, it is much better to [file an issue](https://yt-dl.org/bug) or a pull request which details why the native or the ffmpeg HLS downloader is a better choice for your use case.
908
909 ### Can you add support for this anime video site, or site which shows current movies for free?
910
911 As a matter of policy (as well as legality), youtube-dl does not include support for services that specialize in infringing copyright. As a rule of thumb, if you cannot easily find a video that the service is quite obviously allowed to distribute (i.e. that has been uploaded by the creator, the creator's distributor, or is published under a free license), the service is probably unfit for inclusion to youtube-dl.
912
913 A note on the service that they don't host the infringing content, but just link to those who do, is evidence that the service should **not** be included into youtube-dl. The same goes for any DMCA note when the whole front page of the service is filled with videos they are not allowed to distribute. A "fair use" note is equally unconvincing if the service shows copyright-protected videos in full without authorization.
914
915 Support requests for services that **do** purchase the rights to distribute their content are perfectly fine though. If in doubt, you can simply include a source that mentions the legitimate purchase of content.
916
917 ### How can I speed up work on my issue?
918
919 (Also known as: Help, my important issue is not being solved!) The youtube-dl core developer team is quite small. While we do our best to solve as many issues as possible, sometimes that can take quite a while. To speed up the handling of your issue, here's what you can do:
920
921 First of all, please do report the issue [at our issue tracker](https://yt-dl.org/bugs). That allows us to coordinate all efforts by users and developers, and serves as a unified point. Unfortunately, the youtube-dl project has grown too large to use personal email as an effective communication channel.
922
923 Please read the [bug reporting instructions](#bugs) below. A lot of bugs lack all the necessary information. If you can, offer proxy, VPN, or shell access to the youtube-dl developers. If you are able to, test the issue from multiple computers in multiple countries to exclude local censorship or misconfiguration issues.
924
925 If nobody is interested in solving your issue, you are welcome to take matters into your own hands and submit a pull request (or coerce/pay somebody else to do so).
926
927 Feel free to bump the issue from time to time by writing a small comment ("Issue is still present in youtube-dl version ...from France, but fixed from Belgium"), but please not more than once a month. Please do not declare your issue as `important` or `urgent`.
928
929 ### How can I detect whether a given URL is supported by youtube-dl?
930
931 For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that a site changes its URL scheme (say, from https://example.com/video/1234567 to https://example.com/v/1234567) and youtube-dl reports a URL of a service in that list as unsupported. In that case, simply report a bug.
932
933 It is *not* possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches **all** URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that it's hosting itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.
934
935 If you want to find out whether a given URL is supported, simply call youtube-dl with it. If you get no videos back, chances are the URL is either not referring to a video or unsupported. You can find out which by examining the output (if you run youtube-dl on the console) or catching an `UnsupportedError` exception if you run it from a Python program.
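
On the console, such a probe can be as simple as the following sketch (the URL is a placeholder; a non-zero exit status together with the error output tells you what went wrong):

```bash
# Simulate extraction only; nothing is downloaded or written to disk
youtube-dl --simulate "https://example.com/maybe-a-video" || echo "extraction failed, check the output above"
```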
936
937 # Why do I need to go through that much red tape when filing bugs?
938
939 Before we had the issue template, despite our extensive [bug reporting instructions](#bugs), about 80% of the issue reports we got were useless, for instance because people used ancient versions hundreds of releases old, because of simple syntactic errors (not in youtube-dl but in general shell usage), because the problem was already reported multiple times before, because people did not actually read an error message, even if it said "please install ffmpeg", because people did not mention the URL they were trying to download, and many more simple, easy-to-avoid problems, many of which were totally unrelated to youtube-dl.
940
941 youtube-dl is an open-source project manned by too few volunteers, so we'd rather spend time fixing bugs where we are certain none of those simple problems apply, and where we can be reasonably confident to be able to reproduce the issue without asking the reporter repeatedly. As such, the output of `youtube-dl -v YOUR_URL_HERE` is really all that's required to file an issue. The issue template also guides you through some basic steps you can do, such as checking that your version of youtube-dl is current.
942
943 # DEVELOPER INSTRUCTIONS
944
945 Most users do not need to build youtube-dl and can [download the builds](https://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
946
947 To run youtube-dl as a developer, you don't need to build anything either. Simply execute
948
949 python -m youtube_dl
950
951 To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
952
953 python -m unittest discover
954 python test/test_download.py
955 nosetests
956
957 See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
958
959 If you want to create a build of youtube-dl yourself, you'll need
960
961 * python
962 * make (only GNU make is supported)
963 * pandoc
964 * zip
965 * nosetests
966
967 ### Adding support for a new site
968
969 If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites thus pull requests adding support for them **will be rejected**.
970
971 After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):
972
973 1. [Fork this repository](https://github.com/rg3/youtube-dl/fork)
974 2. Check out the source code with:
975
976 git clone [email protected]:YOUR_GITHUB_USERNAME/youtube-dl.git
977
978 3. Start a new git branch with
979
980 cd youtube-dl
981 git checkout -b yourextractor
982
983 4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
984
985 ```python
986 # coding: utf-8
987 from __future__ import unicode_literals
988
989 from .common import InfoExtractor
990
991
992 class YourExtractorIE(InfoExtractor):
993 _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
994 _TEST = {
995 'url': 'https://yourextractor.com/watch/42',
996 'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
997 'info_dict': {
998 'id': '42',
999 'ext': 'mp4',
1000 'title': 'Video title goes here',
1001 'thumbnail': r're:^https?://.*\.jpg$',
1002 # TODO more properties, either as:
1003 # * A value
1004 # * MD5 checksum; start the string with md5:
1005 # * A regular expression; start the string with re:
1006 # * Any Python type (for example int or float)
1007 }
1008 }
1009
1010 def _real_extract(self, url):
1011 video_id = self._match_id(url)
1012 webpage = self._download_webpage(url, video_id)
1013
1014 # TODO more code goes here, for example ...
1015 title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')
1016
1017 return {
1018 'id': video_id,
1019 'title': title,
1020 'description': self._og_search_description(webpage),
1021 'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
1022 # TODO more properties (see youtube_dl/extractor/common.py)
1023 }
1024 ```
1025 5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
1026 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not counted.
1027 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L74-L252). Add tests and code for as many as you want.
1028 8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://pypi.python.org/pypi/flake8). Also make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
1029 9. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
1030
1031 $ git add youtube_dl/extractor/extractors.py
1032 $ git add youtube_dl/extractor/yourextractor.py
1033 $ git commit -m '[yourextractor] Add new extractor'
1034 $ git push origin yourextractor
1035
1036 10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
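As a hypothetical illustration of step 6 (this sketch is editorial and not part of the original template; the second URL is a placeholder), a `_TESTS` list might look like this:

```python
class YourExtractorIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
    # Multiple tests: each dict is run as TestDownload.test_YourExtractor,
    # TestDownload.test_YourExtractor_1, and so on.
    _TESTS = [{
        'url': 'https://yourextractor.com/watch/42',
        'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
        'info_dict': {
            'id': '42',
            'ext': 'mp4',
            'title': 'Video title goes here',
        },
    }, {
        # URL-matching-only test: it is checked against _VALID_URL,
        # never downloaded, and not counted in the test numbering.
        'url': 'https://yourextractor.com/watch/1337',
        'only_matching': True,
    }]
```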
1037
1038 In any case, thank you very much for your contributions!
1039
1040 ## youtube-dl coding conventions
1041
1042 This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
1043
1044 Extractors are fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters, which is out of your control and tends to change. As an extractor implementer your task is not only to write code that extracts media links and metadata correctly, but also to minimize dependency on the source's layout and even to anticipate potential future changes. This is important because it allows the extractor to keep working across minor layout changes, thus keeping old youtube-dl versions working. Even though this kind of breakage is easily fixed by releasing a new version of youtube-dl with a fix incorporated, all the previous versions remain broken in all repositories and distros' packages that may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.
1045
1046 ### Mandatory and optional metafields
1047
1048 For extraction to work, youtube-dl relies on the metadata your extractor extracts and provides to youtube-dl, expressed as an [information dictionary](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L75-L257) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction:
1049
1050 - `id` (media identifier)
1051 - `title` (media title)
1052 - `url` (media download URL) or `formats`
1053
1054 In fact, only the last option is technically mandatory (i.e. if you can't figure out the download location of the media, the extraction does not make any sense). But by convention youtube-dl also treats `id` and `title` as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make sense; if any of them fails to be extracted, the extractor is considered completely broken.
1055
1056 [Any field](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L149-L257) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations in which sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** so as not to break the extraction of the general-purpose mandatory fields.
1057
1058 #### Example
1059
1060 Say you have some source dictionary `meta` that you've fetched as JSON with an HTTP request, and it has a key `summary`:
1061
1062 ```python
1063 meta = self._download_json(url, video_id)
1064 ```
1065
1066 Assume at this point `meta`'s layout is:
1067
1068 ```python
1069 {
1070 ...
1071 "summary": "some fancy summary text",
1072 ...
1073 }
1074 ```
1075
1076 Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field, you should be prepared for this key to be missing from the `meta` dict, so you should extract it like this:
1077
1078 ```python
1079 description = meta.get('summary') # correct
1080 ```
1081
1082 and not like:
1083
1084 ```python
1085 description = meta['summary'] # incorrect
1086 ```
1087
1088 The latter will break the extraction process with a `KeyError` if `summary` disappears from `meta` at some later time, while with the former approach extraction will simply continue with `description` set to `None`, which is perfectly fine (remember that `None` is equivalent to the absence of data).
1089
1090 Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:
1091
1092 ```python
1093 description = self._search_regex(
1094 r'<span[^>]+id="title"[^>]*>([^<]+)<',
1095 webpage, 'description', fatal=False)
1096 ```
1097
1098 With `fatal` set to `False`, if `_search_regex` fails to extract `description` it will emit a warning and continue extraction.
1099
1100 You can also pass `default=<some fallback value>`, for example:
1101
1102 ```python
1103 description = self._search_regex(
1104 r'<span[^>]+id="title"[^>]*>([^<]+)<',
1105 webpage, 'description', default=None)
1106 ```
1107
1108 On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
1109
1110 ### Provide fallbacks
1111
1112 When extracting metadata, try to do so from multiple sources. For example, if `title` is present in several places, try extracting it from at least some of them. This makes the extractor more future-proof in case some of the sources become unavailable.
1113
1114 #### Example
1115
1116 Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:
1117
1118 ```python
1119 title = meta['title']
1120 ```
1121
1122 If `title` disappears from `meta` in the future due to some change on the hoster's side, the extraction will fail since `title` is mandatory. That's expected.
1123
1124 Assume that you have another source you can extract `title` from, for example the `og:title` HTML meta tag of the `webpage`. In this case you can provide a fallback scenario:
1125
1126 ```python
1127 title = meta.get('title') or self._og_search_title(webpage)
1128 ```
1129
1130 This code will try to extract `title` from `meta` first and, if that fails, fall back to extracting `og:title` from the `webpage`.
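If there are even more potential sources, the same pattern extends naturally. The following is only an illustrative sketch (not from the original guide) that assumes a hypothetical `<h1 class="title">` element on the page as a third, non-fatal fallback:

```python
title = (
    meta.get('title')
    or self._og_search_title(webpage, default=None)
    # Hypothetical page element; fatal=False so a miss only emits a warning.
    or self._html_search_regex(
        r'<h1[^>]+class="title"[^>]*>([^<]+)',
        webpage, 'title', fatal=False))
```

Each step only runs if the previous one returned a false-y value, so a missing source degrades gracefully instead of aborting the extraction.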
1131
1132 ### Make regular expressions flexible
1133
1134 When using regular expressions, try to write them in a fuzzy and flexible way.
1135
1136 #### Example
1137
1138 Say you need to extract `title` from the following HTML code:
1139
1140 ```html
1141 <span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
1142 ```
1143
1144 The code for that task should look similar to:
1145
1146 ```python
1147 title = self._search_regex(
1148 r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
1149 ```
1150
1151 Or even better:
1152
1153 ```python
1154 title = self._search_regex(
1155 r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
1156 webpage, 'title', group='title')
1157 ```
1158
1159 Note how this tolerates potential changes in the `style` attribute's value or a switch from double quotes to single quotes for the `class` attribute.
1160
1161 The code definitely should not look like:
1162
1163 ```python
1164 title = self._search_regex(
1165 r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
1166     webpage, 'title')
1167 ```
1168
1169 ### Use safe conversion functions
1170
1171 Wrap all extracted numeric data into safe functions from [`youtube_dl/utils.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/utils.py): `int_or_none`, `float_or_none`. Use them for string-to-number conversions as well.
1172
1173 Use `url_or_none` for safe URL processing.
1174
1175 Use `try_get` for safe metadata extraction from parsed JSON.
1176
1177 Explore [`youtube_dl/utils.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.
1178
1179 #### More examples
1180
1181 ##### Safely extract optional description from parsed JSON
1182 ```python
1183 description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)
1184 ```
1185
1186 ##### Safely extract more optional metadata
1187 ```python
1188 video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
1189 description = video.get('summary')
1190 duration = float_or_none(video.get('durationMs'), scale=1000)
1191 view_count = int_or_none(video.get('views'))
1192 ```
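##### Safely process an optional URL

The following extra example is an editorial sketch rather than part of the original guide; the `downloadUrl` key is hypothetical. It combines `try_get` with `url_or_none` so that a missing or malformed URL simply yields `None`:

```python
# Returns the URL string if it looks valid, otherwise None.
video_url = url_or_none(try_get(
    response, lambda x: x['result']['video'][0]['downloadUrl'], compat_str))
```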
1193
1194 # EMBEDDING YOUTUBE-DL
1195
1196 youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/rg3/youtube-dl/issues/new).
1197
1198 From a Python program, you can embed youtube-dl in a more powerful fashion, like this:
1199
1200 ```python
1201 from __future__ import unicode_literals
1202 import youtube_dl
1203
1204 ydl_opts = {}
1205 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
1206 ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
1207 ```
1208
1209 Most likely, you'll want to use various options. For a list of options available, have a look at [`youtube_dl/YoutubeDL.py`](https://github.com/rg3/youtube-dl/blob/3e4cedf9e8cd3157df2457df7274d0c842421945/youtube_dl/YoutubeDL.py#L137-L312). For a start, if you want to intercept youtube-dl's output, set a `logger` object.
1210
1211 Here's a more complete example of a program that outputs only errors (and a short message after the download is finished), and downloads/converts the video to an mp3 file:
1212
1213 ```python
1214 from __future__ import unicode_literals
1215 import youtube_dl
1216
1217
1218 class MyLogger(object):
1219 def debug(self, msg):
1220 pass
1221
1222 def warning(self, msg):
1223 pass
1224
1225 def error(self, msg):
1226 print(msg)
1227
1228
1229 def my_hook(d):
1230 if d['status'] == 'finished':
1231 print('Done downloading, now converting ...')
1232
1233
1234 ydl_opts = {
1235 'format': 'bestaudio/best',
1236 'postprocessors': [{
1237 'key': 'FFmpegExtractAudio',
1238 'preferredcodec': 'mp3',
1239 'preferredquality': '192',
1240 }],
1241 'logger': MyLogger(),
1242 'progress_hooks': [my_hook],
1243 }
1244 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
1245 ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
1246 ```
1247
1248 # BUGS
1249
1250 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel [#youtube-dl](irc://chat.freenode.net/#youtube-dl) on freenode ([webchat](https://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).
1251
1252 **Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
1253 ```
1254 $ youtube-dl -v <your command line>
1255 [debug] System config: []
1256 [debug] User config: []
1257 [debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
1258 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
1259 [debug] youtube-dl version 2015.12.06
1260 [debug] Git HEAD: 135392e
1261 [debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
1262 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
1263 [debug] Proxy map: {}
1264 ...
1265 ```
1266 **Do not post screenshots of verbose logs; only plain text is acceptable.**
1267
1268 The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
1269
1270 Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):
1271
1272 ### Is the description of the issue itself sufficient?
1273
1274 We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
1275
1276 So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
1277
1278 - What the problem is
1279 - How it could be fixed
1280 - What your proposed solution would look like
1281
1282 If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
1283
1284 For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
1285
1286 If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).
1287
1288 **Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.
1289
1290 ### Are you using the latest version?
1291
1292 Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive concern issues that have already been fixed, but people are using outdated versions. This goes for feature requests as well.
1293
1294 ### Is the issue already documented?
1295
1296 Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/rg3/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post in them often spurs rapid activity.
1297
1298 ### Why are existing options not enough?
1299
1300 Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
1301
1302 ### Is there enough context in your bug report?
1303
1304 People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) into a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).
1305
1306 We are then presented with a very complicated request when the original problem could be solved far more easily, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
1307
1308 ### Does the issue involve one problem, and one problem only?
1309
1310 Some of our users seem to think there is a limit to the number of issues they can or should open. There is no limit to the number of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
1311
1312 In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White House podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
1313
1314 ### Is anyone going to need the feature?
1315
1316 Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
1317
1318 ### Is your question about youtube-dl?
1319
1320 It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
1321
1322 # COPYRIGHT
1323
1324 youtube-dl is released into the public domain by the copyright holders.
1325
1326 This README file was originally written by [Daniel Bolton](https://github.com/dbbolton) and is likewise released into the public domain.
1327
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
8578ea4dcb17834ee3843e0e337c15af706f9803
|
Add support for nzz.ch
rudolffischer@BueroPC-RF:~$ youtube-dl "http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209" -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209', '-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.6 - Linux-3.13.0-39-generic-x86_64-with-Ubuntu-14.04-trusty
[debug] exe versions: rtmpdump 2.4
[debug] Proxy map: {}
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Downloading webpage
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Extracting information
ERROR: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 651, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 1425, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 2, column 42
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 241, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1044, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
rudolffischer@BueroPC-RF:~$
|
2018-11-18T12:03:20Z
|
<patch>
diff --git a/youtube_dl/extractor/nzz.py b/youtube_dl/extractor/nzz.py
--- a/youtube_dl/extractor/nzz.py
+++ b/youtube_dl/extractor/nzz.py
@@ -11,20 +11,27 @@
class NZZIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?nzz\.ch/(?:[^/]+/)*[^/?#]+-ld\.(?P<id>\d+)'
- _TEST = {
+ _TESTS = [{
'url': 'http://www.nzz.ch/zuerich/gymizyte/gymizyte-schreiben-schueler-heute-noch-diktate-ld.9153',
'info_dict': {
'id': '9153',
},
'playlist_mincount': 6,
- }
+ }, {
+ 'url': 'https://www.nzz.ch/video/nzz-standpunkte/cvp-auf-der-suche-nach-dem-mass-der-mitte-ld.1368112',
+ 'info_dict': {
+ 'id': '1368112',
+ },
+ 'playlist_count': 1,
+ }]
def _real_extract(self, url):
page_id = self._match_id(url)
webpage = self._download_webpage(url, page_id)
entries = []
- for player_element in re.findall(r'(<[^>]+class="kalturaPlayer"[^>]*>)', webpage):
+ for player_element in re.findall(
+ r'(<[^>]+class="kalturaPlayer[^"]*"[^>]*>)', webpage):
player_params = extract_attributes(player_element)
if player_params.get('data-type') not in ('kaltura_singleArticle',):
self.report_warning('Unsupported player type')
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-31910
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
assert_numpy_array_equal raises TypeError for pd.NA
Discovered in #31799
```python
>>> arr1 = np.array([True, False])
>>> arr2 = np.array([True, pd.NA])
>>> tm.assert_numpy_array_equal(arr1, arr2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/_testing.py", line 1001, in assert_numpy_array_equal
if not array_equivalent(left, right, strict_nan=strict_nan):
File "/Users/williamayd/clones/pandas/pandas/core/dtypes/missing.py", line 447, in array_equivalent
ensure_object(left.ravel()), ensure_object(right.ravel())
File "pandas/_libs/lib.pyx", line 583, in pandas._libs.lib.array_equivalent_object
raise
File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.array_equivalent_object
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
raise TypeError("boolean value of NA is ambiguous")
TypeError: boolean value of NA is ambiguous
```
Should yield an AssertionError instead of a TypeError
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://pypi.org/project/pandas/)
11 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
12 [](https://travis-ci.org/pandas-dev/pandas)
13 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
14 [](https://codecov.io/gh/pandas-dev/pandas)
15 [](https://pandas.pydata.org)
16 [](https://gitter.im/pydata/pandas)
17 [](https://numfocus.org)
18
19 ## What is it?
20
21 **pandas** is a Python package providing fast, flexible, and expressive data
22 structures designed to make working with "relational" or "labeled" data both
23 easy and intuitive. It aims to be the fundamental high-level building block for
24 doing practical, **real world** data analysis in Python. Additionally, it has
25 the broader goal of becoming **the most powerful and flexible open source data
26 analysis / manipulation tool available in any language**. It is already well on
27 its way towards this goal.
28
29 ## Main Features
30 Here are just a few of the things that pandas does well:
31
32 - Easy handling of [**missing data**][missing-data] (represented as
33 `NaN`) in floating point as well as non-floating point data
34 - Size mutability: columns can be [**inserted and
35 deleted**][insertion-deletion] from DataFrame and higher dimensional
36 objects
37 - Automatic and explicit [**data alignment**][alignment]: objects can
38 be explicitly aligned to a set of labels, or the user can simply
39 ignore the labels and let `Series`, `DataFrame`, etc. automatically
40 align the data for you in computations
41 - Powerful, flexible [**group by**][groupby] functionality to perform
42 split-apply-combine operations on data sets, for both aggregating
43 and transforming data
44 - Make it [**easy to convert**][conversion] ragged,
45 differently-indexed data in other Python and NumPy data structures
46 into DataFrame objects
47 - Intelligent label-based [**slicing**][slicing], [**fancy
48 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
49 large data sets
50 - Intuitive [**merging**][merging] and [**joining**][joining] data
51 sets
52 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
53 data sets
54 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
55 labels per tick)
56 - Robust IO tools for loading data from [**flat files**][flat-files]
57 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
58 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
59 - [**Time series**][timeseries]-specific functionality: date range
60 generation and frequency conversion, moving window statistics,
61 date shifting and lagging.
62
63
64 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
65 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
66 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
67 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
68 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
69 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
70 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
71 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
72 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
73 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
74 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
75 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
76 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
77 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
78 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
79 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
80 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
81 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
82
83 ## Where to get it
84 The source code is currently hosted on GitHub at:
85 https://github.com/pandas-dev/pandas
86
87 Binary installers for the latest released version are available at the [Python
88 package index](https://pypi.org/project/pandas) and on conda.
89
90 ```sh
91 # conda
92 conda install pandas
93 ```
94
95 ```sh
96 # or PyPI
97 pip install pandas
98 ```
99
100 ## Dependencies
101 - [NumPy](https://www.numpy.org)
102 - [python-dateutil](https://labix.org/python-dateutil)
103 - [pytz](https://pythonhosted.org/pytz)
104
105 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
106
107 ## Installation from sources
108 To install pandas from source you need Cython in addition to the normal
109 dependencies above. Cython can be installed from pypi:
110
111 ```sh
112 pip install cython
113 ```
114
115 In the `pandas` directory (same one where you found this file after
116 cloning the git repo), execute:
117
118 ```sh
119 python setup.py install
120 ```
121
122 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
123
124
125 ```sh
126 python -m pip install -e . --no-build-isolation --no-use-pep517
127 ```
128
129 If you have `make`, you can also use `make develop` to run the same command.
130
131 or alternatively
132
133 ```sh
134 python setup.py develop
135 ```
136
137 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
138
139 ## License
140 [BSD 3](LICENSE)
141
142 ## Documentation
143 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
144
145 ## Background
146 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
147 has been under active development since then.
148
149 ## Getting Help
150
151 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
152 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
153
154 ## Discussion and Development
155 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
156
157 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
158
159 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
160
161 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
162
163 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
164
165 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
166
167 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
168
169 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
170
171 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
172
[end of README.md]
[start of pandas/core/dtypes/missing.py]
1 """
2 missing types & inference
3 """
4 import numpy as np
5
6 from pandas._config import get_option
7
8 from pandas._libs import lib
9 import pandas._libs.missing as libmissing
10 from pandas._libs.tslibs import NaT, iNaT
11 from pandas._typing import DtypeObj
12
13 from pandas.core.dtypes.common import (
14 _NS_DTYPE,
15 _TD_DTYPE,
16 ensure_object,
17 is_bool_dtype,
18 is_complex_dtype,
19 is_datetime64_dtype,
20 is_datetime64tz_dtype,
21 is_datetimelike_v_numeric,
22 is_dtype_equal,
23 is_extension_array_dtype,
24 is_float_dtype,
25 is_integer_dtype,
26 is_object_dtype,
27 is_period_dtype,
28 is_scalar,
29 is_string_dtype,
30 is_string_like_dtype,
31 is_timedelta64_dtype,
32 needs_i8_conversion,
33 pandas_dtype,
34 )
35 from pandas.core.dtypes.generic import (
36 ABCDatetimeArray,
37 ABCExtensionArray,
38 ABCGeneric,
39 ABCIndexClass,
40 ABCMultiIndex,
41 ABCSeries,
42 ABCTimedeltaArray,
43 )
44 from pandas.core.dtypes.inference import is_list_like
45
46 isposinf_scalar = libmissing.isposinf_scalar
47 isneginf_scalar = libmissing.isneginf_scalar
48
49
50 def isna(obj):
51 """
52 Detect missing values for an array-like object.
53
54 This function takes a scalar or array-like object and indicates
55 whether values are missing (``NaN`` in numeric arrays, ``None`` or ``NaN``
56 in object arrays, ``NaT`` in datetimelike).
57
58 Parameters
59 ----------
60 obj : scalar or array-like
61 Object to check for null or missing values.
62
63 Returns
64 -------
65 bool or array-like of bool
66 For scalar input, returns a scalar boolean.
67 For array input, returns an array of boolean indicating whether each
68 corresponding element is missing.
69
70 See Also
71 --------
72 notna : Boolean inverse of pandas.isna.
73 Series.isna : Detect missing values in a Series.
74 DataFrame.isna : Detect missing values in a DataFrame.
75 Index.isna : Detect missing values in an Index.
76
77 Examples
78 --------
79 Scalar arguments (including strings) result in a scalar boolean.
80
81 >>> pd.isna('dog')
82 False
83
84 >>> pd.isna(pd.NA)
85 True
86
87 >>> pd.isna(np.nan)
88 True
89
90 ndarrays result in an ndarray of booleans.
91
92 >>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
93 >>> array
94 array([[ 1., nan, 3.],
95 [ 4., 5., nan]])
96 >>> pd.isna(array)
97 array([[False, True, False],
98 [False, False, True]])
99
100 For indexes, an ndarray of booleans is returned.
101
102 >>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
103 ... "2017-07-08"])
104 >>> index
105 DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
106 dtype='datetime64[ns]', freq=None)
107 >>> pd.isna(index)
108 array([False, False, True, False])
109
110 For Series and DataFrame, the same type is returned, containing booleans.
111
112 >>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
113 >>> df
114 0 1 2
115 0 ant bee cat
116 1 dog None fly
117 >>> pd.isna(df)
118 0 1 2
119 0 False False False
120 1 False True False
121
122 >>> pd.isna(df[1])
123 0 False
124 1 True
125 Name: 1, dtype: bool
126 """
127 return _isna(obj)
128
129
130 isnull = isna
131
132
133 def _isna_new(obj):
134
135 if is_scalar(obj):
136 return libmissing.checknull(obj)
137 # hack (for now) because MI registers as ndarray
138 elif isinstance(obj, ABCMultiIndex):
139 raise NotImplementedError("isna is not defined for MultiIndex")
140 elif isinstance(obj, type):
141 return False
142 elif isinstance(
143 obj,
144 (
145 ABCSeries,
146 np.ndarray,
147 ABCIndexClass,
148 ABCExtensionArray,
149 ABCDatetimeArray,
150 ABCTimedeltaArray,
151 ),
152 ):
153 return _isna_ndarraylike(obj)
154 elif isinstance(obj, ABCGeneric):
155 return obj._constructor(obj._data.isna(func=isna))
156 elif isinstance(obj, list):
157 return _isna_ndarraylike(np.asarray(obj, dtype=object))
158 elif hasattr(obj, "__array__"):
159 return _isna_ndarraylike(np.asarray(obj))
160 else:
161 return obj is None
162
163
164 def _isna_old(obj):
165 """
166 Detect missing values, treating None, NaN, INF, -INF as null.
167
168 Parameters
169 ----------
170 arr: ndarray or object value
171
172 Returns
173 -------
174 boolean ndarray or boolean
175 """
176 if is_scalar(obj):
177 return libmissing.checknull_old(obj)
178 # hack (for now) because MI registers as ndarray
179 elif isinstance(obj, ABCMultiIndex):
180 raise NotImplementedError("isna is not defined for MultiIndex")
181 elif isinstance(obj, type):
182 return False
183 elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass, ABCExtensionArray)):
184 return _isna_ndarraylike_old(obj)
185 elif isinstance(obj, ABCGeneric):
186 return obj._constructor(obj._data.isna(func=_isna_old))
187 elif isinstance(obj, list):
188 return _isna_ndarraylike_old(np.asarray(obj, dtype=object))
189 elif hasattr(obj, "__array__"):
190 return _isna_ndarraylike_old(np.asarray(obj))
191 else:
192 return obj is None
193
194
195 _isna = _isna_new
196
197
198 def _use_inf_as_na(key):
199 """
200 Option change callback for na/inf behaviour.
201
202 Choose which replacement for numpy.isnan / -numpy.isfinite is used.
203
204 Parameters
205 ----------
206 flag: bool
207 True means treat None, NaN, INF, -INF as null (old way),
208 False means None and NaN are null, but INF, -INF are not null
209 (new way).
210
211 Notes
212 -----
213 This approach to setting global module values is discussed and
214 approved here:
215
216 * https://stackoverflow.com/questions/4859217/
217 programmatically-creating-variables-in-python/4859312#4859312
218 """
219 flag = get_option(key)
220 if flag:
221 globals()["_isna"] = _isna_old
222 else:
223 globals()["_isna"] = _isna_new
224
225
226 def _isna_ndarraylike(obj):
227 is_extension = is_extension_array_dtype(obj)
228
229 if not is_extension:
230 # Avoid accessing `.values` on things like
231 # PeriodIndex, which may be expensive.
232 values = getattr(obj, "values", obj)
233 else:
234 values = obj
235
236 dtype = values.dtype
237
238 if is_extension:
239 if isinstance(obj, (ABCIndexClass, ABCSeries)):
240 values = obj._values
241 else:
242 values = obj
243 result = values.isna()
244 elif isinstance(obj, ABCDatetimeArray):
245 return obj.isna()
246 elif is_string_dtype(dtype):
247 # Working around NumPy ticket 1542
248 shape = values.shape
249
250 if is_string_like_dtype(dtype):
251 # object array of strings
252 result = np.zeros(values.shape, dtype=bool)
253 else:
254 # object array of non-strings
255 result = np.empty(shape, dtype=bool)
256 vec = libmissing.isnaobj(values.ravel())
257 result[...] = vec.reshape(shape)
258
259 elif needs_i8_conversion(dtype):
260 # this is the NaT pattern
261 result = values.view("i8") == iNaT
262 else:
263 result = np.isnan(values)
264
265 # box
266 if isinstance(obj, ABCSeries):
267 result = obj._constructor(result, index=obj.index, name=obj.name, copy=False)
268
269 return result
270
271
272 def _isna_ndarraylike_old(obj):
273 values = getattr(obj, "values", obj)
274 dtype = values.dtype
275
276 if is_string_dtype(dtype):
277 # Working around NumPy ticket 1542
278 shape = values.shape
279
280 if is_string_like_dtype(dtype):
281 result = np.zeros(values.shape, dtype=bool)
282 else:
283 result = np.empty(shape, dtype=bool)
284 vec = libmissing.isnaobj_old(values.ravel())
285 result[:] = vec.reshape(shape)
286
287 elif is_datetime64_dtype(dtype):
288 # this is the NaT pattern
289 result = values.view("i8") == iNaT
290 else:
291 result = ~np.isfinite(values)
292
293 # box
294 if isinstance(obj, ABCSeries):
295 result = obj._constructor(result, index=obj.index, name=obj.name, copy=False)
296
297 return result
298
299
300 def notna(obj):
301 """
302 Detect non-missing values for an array-like object.
303
304 This function takes a scalar or array-like object and indicates
305 whether values are valid (not missing, which is ``NaN`` in numeric
306 arrays, ``None`` or ``NaN`` in object arrays, ``NaT`` in datetimelike).
307
308 Parameters
309 ----------
310 obj : array-like or object value
311 Object to check for *not* null or *non*-missing values.
312
313 Returns
314 -------
315 bool or array-like of bool
316 For scalar input, returns a scalar boolean.
317 For array input, returns an array of boolean indicating whether each
318 corresponding element is valid.
319
320 See Also
321 --------
322 isna : Boolean inverse of pandas.notna.
323 Series.notna : Detect valid values in a Series.
324 DataFrame.notna : Detect valid values in a DataFrame.
325 Index.notna : Detect valid values in an Index.
326
327 Examples
328 --------
329 Scalar arguments (including strings) result in a scalar boolean.
330
331 >>> pd.notna('dog')
332 True
333
334 >>> pd.notna(pd.NA)
335 False
336
337 >>> pd.notna(np.nan)
338 False
339
340 ndarrays result in an ndarray of booleans.
341
342 >>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
343 >>> array
344 array([[ 1., nan, 3.],
345 [ 4., 5., nan]])
346 >>> pd.notna(array)
347 array([[ True, False, True],
348 [ True, True, False]])
349
350 For indexes, an ndarray of booleans is returned.
351
352 >>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
353 ... "2017-07-08"])
354 >>> index
355 DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
356 dtype='datetime64[ns]', freq=None)
357 >>> pd.notna(index)
358 array([ True, True, False, True])
359
360 For Series and DataFrame, the same type is returned, containing booleans.
361
362 >>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
363 >>> df
364 0 1 2
365 0 ant bee cat
366 1 dog None fly
367 >>> pd.notna(df)
368 0 1 2
369 0 True True True
370 1 True False True
371
372 >>> pd.notna(df[1])
373 0 True
374 1 False
375 Name: 1, dtype: bool
376 """
377 res = isna(obj)
378 if is_scalar(res):
379 return not res
380 return ~res
381
382
383 notnull = notna
384
385
386 def _isna_compat(arr, fill_value=np.nan) -> bool:
387 """
388 Parameters
389 ----------
390 arr: a numpy array
391 fill_value: fill value, default to np.nan
392
393 Returns
394 -------
395 True if we can fill using this fill_value
396 """
397 dtype = arr.dtype
398 if isna(fill_value):
399 return not (is_bool_dtype(dtype) or is_integer_dtype(dtype))
400 return True
401
402
403 def array_equivalent(left, right, strict_nan: bool = False) -> bool:
404 """
405 True if two arrays, left and right, have equal non-NaN elements, and NaNs
406 in corresponding locations. False otherwise. It is assumed that left and
407 right are NumPy arrays of the same dtype. The behavior of this function
408 (particularly with respect to NaNs) is not defined if the dtypes are
409 different.
410
411 Parameters
412 ----------
413 left, right : ndarrays
414 strict_nan : bool, default False
415 If True, consider NaN and None to be different.
416
417 Returns
418 -------
419 b : bool
420 Returns True if the arrays are equivalent.
421
422 Examples
423 --------
424 >>> array_equivalent(
425 ... np.array([1, 2, np.nan]),
426 ... np.array([1, 2, np.nan]))
427 True
428 >>> array_equivalent(
429 ... np.array([1, np.nan, 2]),
430 ... np.array([1, 2, np.nan]))
431 False
432 """
433
434 left, right = np.asarray(left), np.asarray(right)
435
436 # shape compat
437 if left.shape != right.shape:
438 return False
439
440 # Object arrays can contain None, NaN and NaT.
441 # string dtypes must be come to this path for NumPy 1.7.1 compat
442 if is_string_dtype(left) or is_string_dtype(right):
443
444 if not strict_nan:
445 # isna considers NaN and None to be equivalent.
446 return lib.array_equivalent_object(
447 ensure_object(left.ravel()), ensure_object(right.ravel())
448 )
449
450 for left_value, right_value in zip(left, right):
451 if left_value is NaT and right_value is not NaT:
452 return False
453
454 elif left_value is libmissing.NA and right_value is not libmissing.NA:
455 return False
456
457 elif isinstance(left_value, float) and np.isnan(left_value):
458 if not isinstance(right_value, float) or not np.isnan(right_value):
459 return False
460 else:
461 try:
462 if np.any(np.asarray(left_value != right_value)):
463 return False
464 except TypeError as err:
465 if "Cannot compare tz-naive" in str(err):
466 # tzawareness compat failure, see GH#28507
467 return False
468 elif "boolean value of NA is ambiguous" in str(err):
469 return False
470 raise
471 return True
472
473 # NaNs can occur in float and complex arrays.
474 if is_float_dtype(left) or is_complex_dtype(left):
475
476 # empty
477 if not (np.prod(left.shape) and np.prod(right.shape)):
478 return True
479 return ((left == right) | (isna(left) & isna(right))).all()
480
481 elif is_datetimelike_v_numeric(left, right):
482 # GH#29553 avoid numpy deprecation warning
483 return False
484
485 elif needs_i8_conversion(left) or needs_i8_conversion(right):
486 # datetime64, timedelta64, Period
487 if not is_dtype_equal(left.dtype, right.dtype):
488 return False
489
490 left = left.view("i8")
491 right = right.view("i8")
492
493 # if we have structured dtypes, compare first
494 if left.dtype.type is np.void or right.dtype.type is np.void:
495 if left.dtype != right.dtype:
496 return False
497
498 return np.array_equal(left, right)
499
500
501 def _infer_fill_value(val):
502 """
503 infer the fill value for the nan/NaT from the provided
504 scalar/ndarray/list-like if we are a NaT, return the correct dtyped
505 element to provide proper block construction
506 """
507
508 if not is_list_like(val):
509 val = [val]
510 val = np.array(val, copy=False)
511 if needs_i8_conversion(val):
512 return np.array("NaT", dtype=val.dtype)
513 elif is_object_dtype(val.dtype):
514 dtype = lib.infer_dtype(ensure_object(val), skipna=False)
515 if dtype in ["datetime", "datetime64"]:
516 return np.array("NaT", dtype=_NS_DTYPE)
517 elif dtype in ["timedelta", "timedelta64"]:
518 return np.array("NaT", dtype=_TD_DTYPE)
519 return np.nan
520
521
522 def _maybe_fill(arr, fill_value=np.nan):
523 """
524 if we have a compatible fill_value and arr dtype, then fill
525 """
526 if _isna_compat(arr, fill_value):
527 arr.fill(fill_value)
528 return arr
529
530
531 def na_value_for_dtype(dtype, compat: bool = True):
532 """
533 Return a dtype compat na value
534
535 Parameters
536 ----------
537 dtype : string / dtype
538 compat : bool, default True
539
540 Returns
541 -------
542 np.dtype or a pandas dtype
543
544 Examples
545 --------
546 >>> na_value_for_dtype(np.dtype('int64'))
547 0
548 >>> na_value_for_dtype(np.dtype('int64'), compat=False)
549 nan
550 >>> na_value_for_dtype(np.dtype('float64'))
551 nan
552 >>> na_value_for_dtype(np.dtype('bool'))
553 False
554 >>> na_value_for_dtype(np.dtype('datetime64[ns]'))
555 NaT
556 """
557 dtype = pandas_dtype(dtype)
558
559 if is_extension_array_dtype(dtype):
560 return dtype.na_value
561 if (
562 is_datetime64_dtype(dtype)
563 or is_datetime64tz_dtype(dtype)
564 or is_timedelta64_dtype(dtype)
565 or is_period_dtype(dtype)
566 ):
567 return NaT
568 elif is_float_dtype(dtype):
569 return np.nan
570 elif is_integer_dtype(dtype):
571 if compat:
572 return 0
573 return np.nan
574 elif is_bool_dtype(dtype):
575 return False
576 return np.nan
577
578
579 def remove_na_arraylike(arr):
580 """
581 Return array-like containing only true/non-NaN values, possibly empty.
582 """
583 if is_extension_array_dtype(arr):
584 return arr[notna(arr)]
585 else:
586 return arr[notna(lib.values_from_object(arr))]
587
588
589 def is_valid_nat_for_dtype(obj, dtype: DtypeObj) -> bool:
590 """
591 isna check that excludes incompatible dtypes
592
593 Parameters
594 ----------
595 obj : object
596 dtype : np.datetime64, np.timedelta64, DatetimeTZDtype, or PeriodDtype
597
598 Returns
599 -------
600 bool
601 """
602 if not lib.is_scalar(obj) or not isna(obj):
603 return False
604 if dtype.kind == "M":
605 return not isinstance(obj, np.timedelta64)
606 if dtype.kind == "m":
607 return not isinstance(obj, np.datetime64)
608
609 # must be PeriodDType
610 return not isinstance(obj, (np.datetime64, np.timedelta64))
611
[end of pandas/core/dtypes/missing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
361a938dd82fdb5fdc1f7b1fff97de39326421e7
|
assert_numpy_array_equal raises TypeError for pd.NA
Discovered in #31799
```python
>>> arr1 = np.array([True, False])
>>> arr2 = np.array([True, pd.NA])
>>> tm.assert_numpy_array_equal(arr1, arr2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/_testing.py", line 1001, in assert_numpy_array_equal
if not array_equivalent(left, right, strict_nan=strict_nan):
File "/Users/williamayd/clones/pandas/pandas/core/dtypes/missing.py", line 447, in array_equivalent
ensure_object(left.ravel()), ensure_object(right.ravel())
File "pandas/_libs/lib.pyx", line 583, in pandas._libs.lib.array_equivalent_object
raise
File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.array_equivalent_object
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
raise TypeError("boolean value of NA is ambiguous")
TypeError: boolean value of NA is ambiguous
```
Should yield an AssertionError instead of a TypeError
|
2020-02-12T03:29:38Z
|
<patch>
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -571,6 +571,8 @@ def array_equivalent_object(left: object[:], right: object[:]) -> bool:
if PyArray_Check(x) and PyArray_Check(y):
if not array_equivalent_object(x, y):
return False
+ elif (x is C_NA) ^ (y is C_NA):
+ return False
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
(x is None or is_nan(x)) and (y is None or is_nan(y))):
return False
</patch>
|
[]
|
[]
| ||||
Qiskit__qiskit-2167
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pulse naming issues
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
We have several naming issues in Pulse-frontend classes/modules/methods:
- [ ] Inconsistency in Channel, PulseCommand, Schedule: PulseCommand -> Command?
- [ ] common module -> remove and move contents to top-level?
- [ ] Inconsistency in DriveInstruction applied to OutputChannel: OutputInstruction or rename OutputChannel? Rename both (to PulseInstruction and PulseChannel)?
- [ ] channels module have more than channels, it's more like device module: pulse/channels -> pulse/device?
(I guess we have more)
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)
4
5 **Qiskit** is an open-source framework for working with Noisy Intermediate-Scale Quantum (NISQ) computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [contribution guidelines](.github/CONTRIBUTING.rst).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> q = QuantumRegister(2)
35 >>> c = ClassicalRegister(2)
36 >>> qc = QuantumCircuit(q, c)
37 >>> qc.h(q[0])
38 >>> qc.cx(q[0], q[1])
39 >>> qc.measure(q, c)
40 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
41 >>> result = execute(qc, backend_sim).result()
42 >>> print(result.get_counts(qc))
43 ```
44
45 In this case, the output will be:
46
47 ```python
48 {'00': 513, '11': 511}
49 ```
50
51 A script is available [here](examples/python/hello_quantum.py), where we also show how to
52 run the same program on a real quantum computer via IBMQ.
53
54 ### Executing your code on a real quantum chip
55
56 You can also use Qiskit to execute your code on a
57 **real quantum chip**.
58 In order to do so, you need to configure Qiskit for using the credentials in
59 your IBM Q account:
60
61 #### Configure your IBMQ credentials
62
63 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
64
65 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_.
66
67 3. Take your token from step 2, here called `MY_API_TOKEN`, and run:
68
69 ```python
70 >>> from qiskit import IBMQ
71 >>> IBMQ.save_account('MY_API_TOKEN')
72 ```
73
74 4. If you have access to the IBM Q Network features, you also need to pass the
75 URL listed on your IBM Q account page to `save_account`.
76
77 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
78 Once they are stored, at any point in the future you can load and use them
79 in your program simply via:
80
81 ```python
82 >>> from qiskit import IBMQ
83 >>> IBMQ.load_accounts()
84 ```
85
86 Those who do not want to save their credentials to disk should use instead:
87
88 ```python
89 >>> from qiskit import IBMQ
90 >>> IBMQ.enable_account('MY_API_TOKEN')
91 ```
92
93 and the token will only be active for the session. For examples using Terra with real
94 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
95 the levels.
96
97 ## Contribution Guidelines
98
99 If you'd like to contribute to Qiskit Terra, please take a look at our
100 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expected to uphold this code.
101
102 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
103 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk)
104 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
105 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
106
107 ## Next Steps
108
109 Now you're set up and ready to check out some of the other examples from our
110 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
111
112 ## Authors and Citation
113
114 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
115 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
116
117 ## License
118
119 [Apache License 2.0](LICENSE.txt)
120
[end of README.md]
[start of qiskit/execute.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Helper module for simplified Qiskit usage.
10
11 In general we recommend using the SDK modules directly. However, to get something
12 running quickly we have provided this wrapper module.
13 """
14 import warnings
15 import logging
16
17 from qiskit.compiler import transpile, assemble
18
19 logger = logging.getLogger(__name__)
20
21
22 def execute(experiments, backend,
23 basis_gates=None, coupling_map=None, # circuit transpile options
24 backend_properties=None, initial_layout=None,
25 seed_transpiler=None, optimization_level=None, pass_manager=None,
26 qobj_id=None, qobj_header=None, shots=1024, # common run options
27 memory=False, max_credits=10, seed_simulator=None,
28 default_qubit_los=None, default_meas_los=None, # schedule run options
29 schedule_los=None, meas_level=2, meas_return='avg',
30 memory_slots=None, memory_slot_size=100, rep_time=None,
31 seed=None, seed_mapper=None, # deprecated
32 config=None, circuits=None,
33 **run_config):
34 """Execute a list of circuits or pulse schedules on a backend.
35
36 The execution is asynchronous, and a handle to a job instance is returned.
37
38 Args:
39 experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
40 Circuit(s) or pulse schedule(s) to execute
41
42 backend (BaseBackend):
43 Backend to execute circuits on.
44 Transpiler options are automatically grabbed from
45 backend.configuration() and backend.properties().
46 If any other option is explicitly set (e.g. coupling_map), it
47 will override the backend's.
48
49 basis_gates (list[str]):
50 List of basis gate names to unroll to.
51 e.g:
52 ['u1', 'u2', 'u3', 'cx']
53 If None, do not unroll.
54
55 coupling_map (CouplingMap or list):
56 Coupling map (perhaps custom) to target in mapping.
57 Multiple formats are supported:
58 a. CouplingMap instance
59
60 b. list
61 Must be given as an adjacency matrix, where each entry
62 specifies all two-qubit interactions supported by backend
63 e.g:
64 [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]
65
66 backend_properties (BackendProperties):
67 Properties returned by a backend, including information on gate
68 errors, readout errors, qubit coherence times, etc. For a backend
69 that provides this information, it can be obtained with:
70 ``backend.properties()``
71
72 initial_layout (Layout or dict or list):
73 Initial position of virtual qubits on physical qubits.
74 If this layout makes the circuit compatible with the coupling_map
75 constraints, it will be used.
76 The final layout is not guaranteed to be the same, as the transpiler
77 may permute qubits through swaps or other means.
78
79 Multiple formats are supported:
80 a. Layout instance
81
82 b. dict
83 virtual to physical:
84 {qr[0]: 0,
85 qr[1]: 3,
86 qr[2]: 5}
87
88 physical to virtual:
89 {0: qr[0],
90 3: qr[1],
91 5: qr[2]}
92
93 c. list
94 virtual to physical:
95 [0, 3, 5] # virtual qubits are ordered (in addition to named)
96
97 physical to virtual:
98 [qr[0], None, None, qr[1], None, qr[2]]
99
100 seed_transpiler (int):
101 Sets random seed for the stochastic parts of the transpiler
102
103 optimization_level (int):
104 How much optimization to perform on the circuits.
105 Higher levels generate more optimized circuits,
106 at the expense of longer transpilation time.
107 0: no optimization
108 1: light optimization
109 2: heavy optimization
110
111 pass_manager (PassManager):
112 The pass manager to use during transpilation. If this arg is present,
113 auto-selection of pass manager based on the transpile options will be
114 turned off and this pass manager will be used directly.
115
116 qobj_id (str):
117 String identifier to annotate the Qobj
118
119 qobj_header (QobjHeader or dict):
120 User input that will be inserted in Qobj header, and will also be
121 copied to the corresponding Result header. Headers do not affect the run.
122
123 shots (int):
124             Number of repetitions of each circuit, for sampling. Default: 1024
125
126 memory (bool):
127 If True, per-shot measurement bitstrings are returned as well
128 (provided the backend supports it). For OpenPulse jobs, only
129 measurement level 2 supports this option. Default: False
130
131 max_credits (int):
132 Maximum credits to spend on job. Default: 10
133
134 seed_simulator (int):
135 Random seed to control sampling, for when backend is a simulator
136
137 default_qubit_los (list):
138 List of default qubit lo frequencies
139
140 default_meas_los (list):
141 List of default meas lo frequencies
142
143 schedule_los (None or list[Union[Dict[OutputChannel, float], LoConfig]] or
144 Union[Dict[OutputChannel, float], LoConfig]):
145 Experiment LO configurations
146
147 meas_level (int):
148 Set the appropriate level of the measurement output for pulse experiments.
149
150 meas_return (str):
151 Level of measurement data for the backend to return
152 For `meas_level` 0 and 1:
153 "single" returns information from every shot.
154 "avg" returns average measurement output (averaged over number of shots).
155
156 memory_slots (int):
157 Number of classical memory slots used in this job.
158
159 memory_slot_size (int):
160 Size of each memory slot if the output is Level 0.
161
162 rep_time (int): repetition time of the experiment in μs.
163 The delay between experiments will be rep_time.
164 Must be from the list provided by the device.
165
166 seed (int):
167 DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead
168
169 seed_mapper (int):
170 DEPRECATED in 0.8: use ``seed_transpiler`` kwarg instead
171
172 config (dict):
173 DEPRECATED in 0.8: use run_config instead
174
175 circuits (QuantumCircuit or list[QuantumCircuit]):
176 DEPRECATED in 0.8: use ``experiments`` kwarg instead.
177
178 run_config (dict):
179 Extra arguments used to configure the run (e.g. for Aer configurable backends)
180 Refer to the backend documentation for details on these arguments
181 Note: for now, these keyword arguments will both be copied to the
182 Qobj config, and passed to backend.run()
183
184 Returns:
185 BaseJob: returns job instance derived from BaseJob
186
187 Raises:
188 QiskitError: if the execution cannot be interpreted as either circuits or schedules
189 """
190 if circuits is not None:
191 experiments = circuits
192 warnings.warn("the `circuits` arg in `execute()` has been deprecated. "
193 "please use `experiments`, which can handle both circuit "
194 "and pulse Schedules", DeprecationWarning)
195
196 # transpiling the circuits using given transpile options
197 experiments = transpile(experiments,
198 basis_gates=basis_gates,
199 coupling_map=coupling_map,
200 backend_properties=backend_properties,
201 initial_layout=initial_layout,
202 seed_transpiler=seed_transpiler,
203 optimization_level=optimization_level,
204 backend=backend,
205 pass_manager=pass_manager,
206 seed_mapper=seed_mapper, # deprecated
207 )
208
209 # assembling the circuits into a qobj to be run on the backend
210 qobj = assemble(experiments,
211 qobj_id=qobj_id,
212 qobj_header=qobj_header,
213 shots=shots,
214 memory=memory,
215 max_credits=max_credits,
216 seed_simulator=seed_simulator,
217 default_qubit_los=default_qubit_los,
218 default_meas_los=default_meas_los,
219 schedule_los=schedule_los,
220 meas_level=meas_level,
221 meas_return=meas_return,
222 memory_slots=memory_slots,
223 memory_slot_size=memory_slot_size,
224 rep_time=rep_time,
225 backend=backend,
226 config=config, # deprecated
227 seed=seed, # deprecated
228 run_config=run_config
229 )
230
231 # executing the circuits on the backend and returning the job
232 return backend.run(qobj, **run_config)
233
[end of qiskit/execute.py]
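As an illustration of how the options documented in `execute()` fit together, here is a minimal sketch (not part of the repository) that reuses the entangled-state circuit from the README and routes a few transpile and run options through the single entry point. The specific values (`optimization_level=1`, `shots=2048`, `seed_simulator=42`) are arbitrary choices for the example.

```python
from qiskit import (QuantumRegister, ClassicalRegister, QuantumCircuit,
                    BasicAer, execute)

# Same Bell-state circuit as in the README example.
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c)

backend = BasicAer.get_backend('qasm_simulator')

# Transpile options (optimization_level) and run options (shots, memory,
# seed_simulator) all go through the same execute() call; unspecified
# options fall back to the backend's configuration and properties.
job = execute(qc, backend,
              optimization_level=1,
              shots=2048,
              memory=False,
              seed_simulator=42)
print(job.result().get_counts(qc))
```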
[start of qiskit/pulse/channels/pulse_channel.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Channels.
10 """
11 from abc import ABCMeta, abstractmethod
12
13
14 class Channel(metaclass=ABCMeta):
15 """Pulse channel."""
16
17 prefix = None
18
19 @abstractmethod
20 def __init__(self, index: int = None):
21 self._index = index
22
23 @property
24 def index(self) -> int:
25 """Return the index of this channel."""
26 return self._index
27
28 @property
29 def name(self) -> str:
30 """Return the name of this channel."""
31 return '%s%d' % (self.__class__.prefix, self._index)
32
33 def __repr__(self):
34 return '%s(%s)' % (self.__class__.__name__, self._index)
35
36 def __eq__(self, other):
37 """Two channels are the same if they are of the same type, and have the same index.
38
39 Args:
40 other (Channel): other PulseChannel
41
42 Returns:
43 bool: are self and other equal.
44 """
45 if type(self) is type(other) and \
46 self._index == other._index:
47 return True
48 return False
49
50 def __hash__(self):
51 return hash((type(self), self._index))
52
53
54 class AcquireChannel(Channel):
55 """Acquire channel."""
56
57 prefix = 'a'
58
59 def __init__(self, index):
60 """Create new acquire channel.
61
62 Args:
63 index (int): Index of the channel.
64 """
65 super().__init__(index)
66
67
68 class SnapshotChannel(Channel):
69 """Snapshot channel."""
70
71 prefix = 's'
72
73 def __init__(self):
74 """Create new snapshot channel."""
75 super().__init__(0)
76
77
78 class MemorySlot(Channel):
79 """Memory slot."""
80
81 prefix = 'm'
82
83 def __init__(self, index):
84 """Create new memory slot.
85
86 Args:
87 index (int): Index of the channel.
88 """
89 super().__init__(index)
90
91
92 class RegisterSlot(Channel):
93     """Classical register slot channel."""
94
95 prefix = 'c'
96
97 def __init__(self, index):
98 """Create new register slot.
99
100 Args:
101 index (int): Index of the channel.
102 """
103 super().__init__(index)
104
[end of qiskit/pulse/channels/pulse_channel.py]
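To make the `prefix`/`index` bookkeeping above concrete, here is a short usage sketch (not from the repository) exercising the concrete channel types exported by `qiskit.pulse.channels`:

```python
from qiskit.pulse.channels import AcquireChannel, MemorySlot, SnapshotChannel

acq = AcquireChannel(0)
mem = MemorySlot(2)
snap = SnapshotChannel()       # always created with index 0

print(acq.name)                # 'a0'  -> class prefix + index
print(mem.name)                # 'm2'
print(snap.name)               # 's0'

# Equality and hashing are defined by (type, index), so channels behave
# well as set members and dictionary keys.
assert AcquireChannel(0) == acq
assert AcquireChannel(1) != acq
print(len({acq, AcquireChannel(0), mem}))   # 2
```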
[start of qiskit/pulse/commands/acquire.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Acquire.
10 """
11 from typing import Union, List
12
13 from qiskit.pulse.channels import Qubit, MemorySlot, RegisterSlot, AcquireChannel
14 from qiskit.pulse.exceptions import PulseError
15 from .instruction import Instruction
16 from .meas_opts import Discriminator, Kernel
17 from .pulse_command import PulseCommand
18
19
20 class Acquire(PulseCommand):
21 """Acquire."""
22
23 def __init__(self, duration, discriminator=None, kernel=None):
24 """Create new acquire command.
25
26 Args:
27 duration (int): Duration of acquisition.
28 discriminator (Discriminator): Discriminators to be used
29                 (from the list of available discriminators) if the measurement level is 2.
30 kernel (Kernel): The data structures defining the measurement kernels
31 to be used (from the list of available kernels) and set of parameters
32 (if applicable) if the measurement level is 1 or 2.
33
34 Raises:
35 PulseError: when invalid discriminator or kernel object is input.
36 """
37 super().__init__(duration=duration)
38
39 if discriminator:
40 if isinstance(discriminator, Discriminator):
41 self._discriminator = discriminator
42 else:
43 raise PulseError('Invalid discriminator object is specified.')
44 else:
45 self._discriminator = None
46
47 if kernel:
48 if isinstance(kernel, Kernel):
49 self._kernel = kernel
50 else:
51 raise PulseError('Invalid kernel object is specified.')
52 else:
53 self._kernel = None
54
55 @property
56 def kernel(self):
57 """Return kernel settings."""
58 return self._kernel
59
60 @property
61 def discriminator(self):
62 """Return discrimination settings."""
63 return self._discriminator
64
65 def __eq__(self, other):
66 """Two Acquires are the same if they are of the same type
67 and have the same kernel and discriminator.
68
69 Args:
70 other (Acquire): Other Acquire
71
72 Returns:
73 bool: are self and other equal.
74 """
75 if type(self) is type(other) and \
76 self.kernel == other.kernel and \
77 self.discriminator == other.discriminator:
78 return True
79 return False
80
81 def __repr__(self):
82 return '%s(%s, duration=%d, kernel=%s, discriminator=%s)' % \
83 (self.__class__.__name__, self.name, self.duration,
84 self.kernel, self.discriminator)
85
86 # pylint: disable=arguments-differ
87 def to_instruction(self,
88 qubits: Union[Qubit, List[Qubit]],
89 mem_slots: Union[MemorySlot, List[MemorySlot]] = None,
90 reg_slots: Union[RegisterSlot, List[RegisterSlot]] = None,
91 name=None) -> 'AcquireInstruction':
92 return AcquireInstruction(self, qubits, mem_slots, reg_slots, name=name)
93 # pylint: enable=arguments-differ
94
95
96 class AcquireInstruction(Instruction):
97 """Pulse to acquire measurement result. """
98
99 def __init__(self,
100 command: Acquire,
101 qubits: Union[Qubit, AcquireChannel, List[Qubit], List[AcquireChannel]],
102 mem_slots: Union[MemorySlot, List[MemorySlot]],
103 reg_slots: Union[RegisterSlot, List[RegisterSlot]] = None,
104 name=None):
105
106 if isinstance(qubits, (Qubit, AcquireChannel)):
107 qubits = [qubits]
108
109 if not (mem_slots or reg_slots):
110             raise PulseError('Neither memory slots nor register slots were supplied')
111
112 if mem_slots:
113 if isinstance(mem_slots, MemorySlot):
114 mem_slots = [mem_slots]
115 elif len(qubits) != len(mem_slots):
116                 raise PulseError("#mem_slots must be equal to #qubits")
117
118 if reg_slots:
119 if isinstance(reg_slots, RegisterSlot):
120 reg_slots = [reg_slots]
121 if len(qubits) != len(reg_slots):
122                 raise PulseError("#reg_slots must be equal to #qubits")
123 else:
124 reg_slots = []
125
126 # extract acquire channels
127 acquires = []
128 for q in qubits:
129 if isinstance(q, Qubit):
130 q = q.acquire
131 acquires.append(q)
132
133 super().__init__(command, *acquires, *mem_slots, *reg_slots, name=name)
134
135 self._acquires = acquires
136 self._mem_slots = mem_slots
137 self._reg_slots = reg_slots
138
139 @property
140 def acquires(self):
141 """Acquire channels to be acquired on. """
142 return self._acquires
143
144 @property
145 def mem_slots(self):
146 """MemorySlots. """
147 return self._mem_slots
148
149 @property
150 def reg_slots(self):
151 """RegisterSlots. """
152 return self._reg_slots
153
[end of qiskit/pulse/commands/acquire.py]
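A hedged sketch (not from the repository) of turning the `Acquire` command into an `AcquireInstruction`; it imports directly from the module shown above rather than the package `__init__`, whose export list is only partially visible here.

```python
from qiskit.pulse.channels import AcquireChannel, MemorySlot
from qiskit.pulse.commands.acquire import Acquire

# A 100-sample acquisition window; discriminator and kernel stay at None.
acquire_cmd = Acquire(duration=100)

# Calling the command forwards to to_instruction(), pairing each acquire
# channel with a memory slot at the same position.
acq_inst = acquire_cmd(AcquireChannel(0), mem_slots=MemorySlot(0))

print(acq_inst.duration)    # 100
print(acq_inst.acquires)    # [AcquireChannel(0)]
print(acq_inst.mem_slots)   # [MemorySlot(0)]
```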
[start of qiskit/pulse/commands/frame_change.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Frame change pulse.
10 """
11
12 from qiskit.pulse.channels import OutputChannel
13 from .instruction import Instruction
14 from .pulse_command import PulseCommand
15
16
17 class FrameChange(PulseCommand):
18 """Frame change pulse."""
19
20 def __init__(self, phase):
21 """Create new frame change pulse.
22
23 Args:
24 phase (float): Frame change phase in radians.
25 The allowable precision is device specific.
26 """
27 super().__init__(duration=0)
28 self._phase = phase
29
30 @property
31 def phase(self):
32 """Framechange phase."""
33 return self._phase
34
35 def __eq__(self, other):
36 """Two FrameChanges are the same if they are of the same type
37 and have the same phase.
38
39 Args:
40 other (FrameChange): other FrameChange
41
42 Returns:
43 bool: are self and other equal.
44 """
45 if type(self) is type(other) and \
46 self.phase == other.phase:
47 return True
48 return False
49
50 def __repr__(self):
51 return '%s(%s, phase=%.3f)' % (self.__class__.__name__, self.name, self.phase)
52
53 # pylint: disable=arguments-differ
54 def to_instruction(self, channel: OutputChannel, name=None) -> 'FrameChangeInstruction':
55 return FrameChangeInstruction(self, channel, name=name)
56 # pylint: enable=arguments-differ
57
58
59 class FrameChangeInstruction(Instruction):
60 """Instruction to change frame of an `OutputChannel`. """
61
62 def __init__(self, command: FrameChange, channel: OutputChannel, name=None):
63 super().__init__(command, channel, name=name)
64
[end of qiskit/pulse/commands/frame_change.py]
[start of qiskit/pulse/commands/instruction.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Instruction = Leaf node of schedule.
10 """
11 import logging
12 from typing import Tuple, List, Iterable
13
14 from qiskit.pulse import ops
15 from qiskit.pulse.channels import Channel
16 from qiskit.pulse.interfaces import ScheduleComponent
17 from qiskit.pulse.timeslots import Interval, Timeslot, TimeslotCollection
18 from qiskit.pulse.exceptions import PulseError
19
20
21 logger = logging.getLogger(__name__)
22
23 # pylint: disable=missing-return-doc
24
25
26 class Instruction(ScheduleComponent):
27 """An abstract class for leaf nodes of schedule."""
28
29 def __init__(self, command, *channels: List[Channel],
30 timeslots: TimeslotCollection = None, name=None):
31 """
32 command (PulseCommand): Pulse command to schedule
33 *channels: List of pulse channels to schedule with command
34 timeslots: Optional list of timeslots. If channels are supplied timeslots
35 cannot also be given
36 name: Name of Instruction
37 """
38 self._command = command
39 self._name = name if name else self._command.name
40
41 if timeslots and channels:
42 raise PulseError('Channels and timeslots may not both be supplied.')
43
44 if not timeslots:
45 duration = command.duration
46 self._timeslots = TimeslotCollection(*(Timeslot(Interval(0, duration), channel)
47 for channel in channels))
48 else:
49 self._timeslots = timeslots
50
51 @property
52 def name(self) -> str:
53 """Name of this instruction."""
54 return self._name
55
56 @property
57 def command(self):
58 """Acquire command.
59
60 Returns: PulseCommand
61 """
62 return self._command
63
64 @property
65 def channels(self) -> Tuple[Channel]:
66 """Returns channels that this schedule uses."""
67 return self.timeslots.channels
68
69 @property
70 def timeslots(self) -> TimeslotCollection:
71 """Occupied time slots by this instruction. """
72 return self._timeslots
73
74 @property
75 def start_time(self) -> int:
76 """Relative begin time of this instruction. """
77 return self.timeslots.start_time
78
79 @property
80 def stop_time(self) -> int:
81 """Relative end time of this instruction. """
82 return self.timeslots.stop_time
83
84 @property
85 def duration(self) -> int:
86 """Duration of this instruction. """
87 return self.timeslots.duration
88
89 @property
90 def children(self) -> Tuple[ScheduleComponent]:
91 """Instruction has no child nodes. """
92 return ()
93
94 def ch_duration(self, *channels: List[Channel]) -> int:
95 """Return duration of the supplied channels in this Instruction.
96
97 Args:
98 *channels: Supplied channels
99 """
100 return self.timeslots.ch_duration(*channels)
101
102 def ch_start_time(self, *channels: List[Channel]) -> int:
103 """Return minimum start time for supplied channels.
104
105 Args:
106 *channels: Supplied channels
107 """
108 return self.timeslots.ch_start_time(*channels)
109
110 def ch_stop_time(self, *channels: List[Channel]) -> int:
111         """Return maximum stop time for supplied channels.
112
113 Args:
114 *channels: Supplied channels
115 """
116 return self.timeslots.ch_stop_time(*channels)
117
118 def union(self, *schedules: List[ScheduleComponent]) -> 'ScheduleComponent':
119 """Return a new schedule which is the union of `self` and `schedule`.
120
121 Args:
122 *schedules: Schedules to be take the union with the parent `Schedule`.
123 """
124 return ops.union(self, *schedules)
125
126 def shift(self: ScheduleComponent, time: int) -> 'ScheduleComponent':
127 """Return a new schedule shifted forward by `time`.
128
129 Args:
130 time: Time to shift by
131 """
132 return ops.shift(self, time)
133
134 def insert(self, start_time: int, schedule: ScheduleComponent) -> 'ScheduleComponent':
135 """Return a new schedule with `schedule` inserted within `self` at `start_time`.
136
137 Args:
138 start_time: time to be inserted
139 schedule: schedule to be inserted
140 """
141 return ops.insert(self, start_time, schedule)
142
143 def append(self, schedule: ScheduleComponent) -> 'ScheduleComponent':
144 """Return a new schedule with `schedule` inserted at the maximum time over
145 all channels shared between `self` and `schedule`.
146
147 Args:
148 schedule: schedule to be appended
149 """
150 return ops.append(self, schedule)
151
152 def flatten(self, time: int = 0) -> Iterable[Tuple[int, ScheduleComponent]]:
153 """Iterable for flattening Schedule tree.
154
155 Args:
156 time: Shifted time of this node due to parent
157
158 Yields:
159 Tuple[int, ScheduleComponent]: Tuple containing time `ScheduleComponent` starts
160 at and the flattened `ScheduleComponent`.
161 """
162 yield (time, self)
163
164 def __add__(self, schedule: ScheduleComponent) -> 'ScheduleComponent':
165 """Return a new schedule with `schedule` inserted within `self` at `start_time`."""
166 return self.append(schedule)
167
168 def __or__(self, schedule: ScheduleComponent) -> 'ScheduleComponent':
169 """Return a new schedule which is the union of `self` and `schedule`."""
170 return self.union(schedule)
171
172 def __lshift__(self, time: int) -> 'ScheduleComponent':
173 """Return a new schedule which is shifted forward by `time`."""
174 return self.shift(time)
175
176 def __repr__(self):
177 return "%s" % (self._command)
178
[end of qiskit/pulse/commands/instruction.py]
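The operator overloads above (`+`, `|`, `<<`) delegate to `pulse.ops`. Below is a hedged composition sketch (not from the repository) building on the `Acquire` example from the previous file; the commented values assume `shift` simply offsets the instruction's timeslots, as its docstring describes.

```python
from qiskit.pulse.channels import AcquireChannel, MemorySlot
from qiskit.pulse.commands.acquire import Acquire

inst_a = Acquire(duration=100)(AcquireChannel(0), mem_slots=MemorySlot(0))
inst_b = Acquire(duration=100)(AcquireChannel(1), mem_slots=MemorySlot(1))

# `<<` shifts a schedule component forward in time,
# `|` takes the union of two components,
# `+` appends one component after the other on shared channels.
later = inst_a << 50                  # expected: start_time 50, stop_time 150
together = inst_a | (inst_b << 25)    # both acquisitions in a single schedule

print(later.start_time, later.stop_time)
print(together.duration)
```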
[start of qiskit/pulse/commands/persistent_value.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Persistent value.
10 """
11
12 from qiskit.pulse.channels import OutputChannel
13 from qiskit.pulse.exceptions import PulseError
14 from .instruction import Instruction
15 from .pulse_command import PulseCommand
16
17
18 class PersistentValue(PulseCommand):
19 """Persistent value."""
20
21 def __init__(self, value):
22         """Create a new persistent value command.
23
24 Args:
25 value (complex): Complex value to apply, bounded by an absolute value of 1.
26 The allowable precision is device specific.
27 Raises:
28             PulseError: when the absolute value of the input exceeds 1.
29 """
30 super().__init__(duration=0)
31
32 if abs(value) > 1:
33 raise PulseError("Absolute value of PV amplitude exceeds 1.")
34
35 self._value = complex(value)
36
37 @property
38 def value(self):
39 """Persistent value amplitude."""
40 return self._value
41
42 def __eq__(self, other):
43 """Two PersistentValues are the same if they are of the same type
44 and have the same value.
45
46 Args:
47 other (PersistentValue): other PersistentValue
48
49 Returns:
50 bool: are self and other equal.
51 """
52 if type(self) is type(other) and \
53 self.value == other.value:
54 return True
55 return False
56
57 def __repr__(self):
58 return '%s(%s, value=%s)' % (self.__class__.__name__, self.name, self.value)
59
60 # pylint: disable=arguments-differ
61 def to_instruction(self, channel: OutputChannel, name=None) -> 'PersistentValueInstruction':
62 return PersistentValueInstruction(self, channel, name=name)
63 # pylint: enable=arguments-differ
64
65
66 class PersistentValueInstruction(Instruction):
67 """Instruction to keep persistent value. """
68
69 def __init__(self, command: PersistentValue, channel: OutputChannel, name=None):
70 super().__init__(command, channel, name=name)
71
[end of qiskit/pulse/commands/persistent_value.py]
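Both `FrameChange` and `PersistentValue` are zero-duration commands targeting an `OutputChannel`. A heavily hedged sketch (not from the repository) follows; it assumes `DriveChannel` can be constructed from an index alone, since its LO-frequency arguments appear to be optional keyword arguments in this version.

```python
from qiskit.pulse.channels import DriveChannel
from qiskit.pulse.commands import FrameChange, PersistentValue

d0 = DriveChannel(0)                     # assumption: index-only construction

fc = FrameChange(phase=1.57)             # phase shift in radians
pv = PersistentValue(value=0.3 + 0.1j)   # |value| must not exceed 1

fc_inst = fc(d0)                         # FrameChangeInstruction via __call__
pv_inst = pv(d0)                         # PersistentValueInstruction

print(fc_inst.duration, pv_inst.duration)   # both 0
```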
[start of qiskit/pulse/commands/pulse_command.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Base command.
10 """
11 from abc import ABCMeta, abstractmethod
12
13 from qiskit.pulse.exceptions import PulseError
14
15 from .instruction import Instruction
16
17
18 class PulseCommand(metaclass=ABCMeta):
19 """Super abstract class of command group."""
20
21 pulseIndex = 0
22
23 @abstractmethod
24 def __init__(self, duration: int = None, name: str = None):
25 """Create new pulse commands.
26
27 Args:
28 duration (int): Duration of pulse.
29 name (str): Name of pulse command.
30 Raises:
31 PulseError: when duration is not number of points.
32 """
33 if isinstance(duration, int):
34 self._duration = duration
35 else:
36 raise PulseError('Pulse duration should be integer.')
37
38 if name:
39 self._name = name
40 else:
41 self._name = 'p%d' % PulseCommand.pulseIndex
42 PulseCommand.pulseIndex += 1
43
44 @property
45 def duration(self) -> int:
46 """Duration of this command. """
47 return self._duration
48
49 @property
50 def name(self) -> str:
51 """Name of this command. """
52 return self._name
53
54 @abstractmethod
55 def to_instruction(self, command, *channels, timeslots=None, name=None) -> Instruction:
56 """Create an instruction from command."""
57 pass
58
59 def __call__(self, *args, **kwargs):
60 """Creates an Instruction obtained from call to `to_instruction` wrapped in a Schedule."""
61 return self.to_instruction(*args, **kwargs)
62
63 def __eq__(self, other):
64 """Two PulseCommands are the same if they are of the same type
65 and have the same duration and name.
66
67 Args:
68 other (PulseCommand): other PulseCommand.
69
70 Returns:
71 bool: are self and other equal.
72 """
73 if type(self) is type(other) and \
74 self._duration == other._duration and \
75 self._name == other._name:
76 return True
77 return False
78
79 def __hash__(self):
80 return hash((type(self), self._duration, self._name))
81
82 def __repr__(self):
83 return '%s(name=%s, duration=%d)' % (self.__class__.__name__,
84 self._name, self._duration)
85
[end of qiskit/pulse/commands/pulse_command.py]
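The abstract base above handles duration/name bookkeeping and routes `__call__` to `to_instruction`. Below is a hedged sketch (not from the repository) of a minimal concrete subclass, following the same pattern as `FrameChange` and `PersistentValue`; the `Marker` command and `MarkerInstruction` class are hypothetical names introduced only for illustration.

```python
from qiskit.pulse.channels import AcquireChannel, Channel
from qiskit.pulse.commands import PulseCommand
from qiskit.pulse.commands.instruction import Instruction


class MarkerInstruction(Instruction):
    """Instruction produced by the hypothetical Marker command."""


class Marker(PulseCommand):
    """Hypothetical zero-duration command, used only for illustration."""

    def __init__(self, label: str):
        super().__init__(duration=0, name=label)   # duration must be an int

    # pylint: disable=arguments-differ
    def to_instruction(self, channel: Channel, name=None) -> MarkerInstruction:
        return MarkerInstruction(self, channel, name=name)


# PulseCommand.__call__ forwards straight to to_instruction():
sync = Marker('sync')(AcquireChannel(0))
print(sync.name, sync.duration)   # 'sync' 0
```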
[start of qiskit/pulse/commands/sample_pulse.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Sample pulse.
10 """
11 import numpy as np
12
13 from qiskit.pulse.channels import OutputChannel
14 from qiskit.pulse.exceptions import PulseError
15 from .instruction import Instruction
16 from .pulse_command import PulseCommand
17
18
19 class SamplePulse(PulseCommand):
20 """Container for functional pulse."""
21
22 def __init__(self, samples, name=None):
23 """Create new sample pulse command.
24
25 Args:
26 samples (ndarray): Complex array of pulse envelope.
27 name (str): Unique name to identify the pulse.
28 Raises:
29 PulseError: when pulse envelope amplitude exceeds 1.
30 """
31 super().__init__(duration=len(samples), name=name)
32
33 if np.any(np.abs(samples) > 1):
34 raise PulseError('Absolute value of pulse envelope amplitude exceeds 1.')
35
36 self._samples = samples
37
38 @property
39 def samples(self):
40 """Return sample values."""
41 return self._samples
42
43 def draw(self, **kwargs):
44 """Plot the interpolated envelope of pulse.
45
46 Keyword Args:
47 dt (float): Time interval of samples.
48 interp_method (str): Method of interpolation
49                 (set to `None` to turn off interpolation).
50 filename (str): Name required to save pulse image.
51             interactive (bool): When set to True, show the plot in a new window
52 (this depends on the matplotlib backend being used supporting this).
53 dpi (int): Resolution of saved image.
54 nop (int): Data points for interpolation.
55 size (tuple): Size of figure.
56 """
57 from qiskit.tools.visualization import pulse_drawer
58
59 return pulse_drawer(self._samples, self.duration, **kwargs)
60
61 def __eq__(self, other):
62 """Two SamplePulses are the same if they are of the same type
63 and have the same name and samples.
64
65 Args:
66 other (SamplePulse): other SamplePulse
67
68 Returns:
69 bool: are self and other equal.
70 """
71 if super().__eq__(other) and \
72 (self._samples == other._samples).all():
73 return True
74 return False
75
76 def __hash__(self):
77 return hash((super().__hash__(), self._samples.tostring()))
78
79 def __repr__(self):
80 return '%s(%s, duration=%d)' % (self.__class__.__name__, self.name, self.duration)
81
82 # pylint: disable=arguments-differ
83 def to_instruction(self, channel: OutputChannel, name=None) -> 'DriveInstruction':
84 return DriveInstruction(self, channel, name=name)
85 # pylint: enable=arguments-differ
86
87
88 class DriveInstruction(Instruction):
89 """Instruction to drive a pulse to an `OutputChannel`. """
90
91 def __init__(self, command: SamplePulse, channel: OutputChannel, name=None):
92 super().__init__(command, channel, name=name)
93
[end of qiskit/pulse/commands/sample_pulse.py]
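A short, hedged example (not from the repository) of building a `SamplePulse` directly from a sample array; the constructor above rejects any envelope whose amplitude exceeds 1.

```python
import numpy as np

from qiskit.pulse.commands import SamplePulse
from qiskit.pulse.exceptions import PulseError

# A 128-sample Gaussian envelope, scaled to stay within unit amplitude.
times = np.arange(128)
samples = 0.8 * np.exp(-0.5 * ((times - 64) / 16) ** 2).astype(np.complex_)

gauss = SamplePulse(samples, name='gauss_128')
print(gauss.duration)   # 128 -- one sample per time step
print(gauss.name)       # 'gauss_128'

# Exceeding unit amplitude raises PulseError:
try:
    SamplePulse(2.0 * samples)
except PulseError as err:
    print(err)
```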
[start of qiskit/pulse/samplers/decorators.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 # pylint: disable=missing-return-doc
9
10 """Sampler decorator module for sampling of continuous pulses to discrete pulses to be
11 exposed to user.
12
13 Some atypical boilerplate has been added to solve the problem of decorators not preserving
14 their wrapped function signatures. Below we explain the problem that samplers solve and how
15 we implement this.
16
17 A sampler is a function that takes a continuous pulse function with signature:
18 ```python
19 def f(times: np.ndarray, *args, **kwargs) -> np.ndarray:
20 ...
21 ```
22 and returns a new function:
23 def f(duration: int, *args, **kwargs) -> SamplePulse:
24 ...
25
26 Samplers are used to build up pulse commands from continuous pulse functions.
27
28 In Python, creating a dynamic function that wraps another function will cause
29 the signature and documentation of the wrapped function to be overwritten.
30 In order to circumvent this issue the Python standard library provides the decorator
31 `functools.wraps` which allows the programmer to expose the names and signature of the
32 wrapped function as those of the dynamic function.
33
34 Samplers are implemented by creating a function with signature
35 @sampler
36 def left(continuous_pulse: Callable, duration: int, *args, **kwargs)
37 ...
38
39 This will create a sampler function for `left`. Since it is a dynamic function it would not
40 have the docstring of `left` available to `help`. This could be fixed by wrapping with
41 `functools.wraps` in the `sampler`, but this would then cause the signature to be that of the
42 sampler function which is called on the continuous pulse, below:
43 `(continuous_pulse: Callable, duration: int, *args, **kwargs)``
44 This is not correct for the sampler, as the output sampled functions should not accept a continuous function as their first argument.
45 For the standard sampler we get around this by not using `functools.wraps` and
46 explicitly defining our samplers such as `left`, `right` and `midpoint` and
47 calling `sampler` internally on the function that implements the sampling schemes such as
48 `left_sample`, `right_sample` and `midpoint_sample` respectively. See `left` for an example of this.
49
50
51 In this way our standard samplers will expose the proper help signature, but a user can
52 still create their own sampler with
53 @sampler
54 def custom_sampler(time, *args, **kwargs):
55 ...
56 However, in this case it will be missing documentation of the underlying sampling methods.
57 We believe that the definition of custom samplers will be rather infrequent.
58
59 However, users will frequently apply sampler instances to continuous pulses. Therefore, a different
60 approach was required for sampled continuous functions (the output of a continuous pulse function
61 decorated by a sampler instance).
62
63 A sampler instance is a decorator that may be used to wrap continuous pulse functions such as
64 linear below:
65 ```python
66 @left
67 def linear(times: np.ndarray, m: float, b: float) -> np.ndarray:
68         """Linear test function
69 Args:
70 times: Input times.
71 m: Slope.
72 b: Intercept
73 Returns:
74 np.ndarray
75         """
76 return m*times+b
77 ```
78 Which after decoration may be called with a duration rather than an array of times
79 ```python
80 duration = 10
81 pulse_command = linear(duration, 0.1, 0.1)
82 ```
83 If one calls help on `linear` they will find
84 ```
85 linear(duration:int, *args, **kwargs) -> numpy.ndarray
86 Discretized continuous pulse function: `linear` using
87 sampler: `_left`.
88
89 The first argument (time) of the continuous pulse function has been replaced with
90 a discretized `duration` of type (int).
91
92 Args:
93 duration (int)
94 *args: Remaining arguments of continuous pulse function.
95 See continuous pulse function documentation below.
96 **kwargs: Remaining kwargs of continuous pulse function.
97 See continuous pulse function documentation below.
98
99 Sampled continuous function:
100
101 function linear in module test.python.pulse.test_samplers
102 linear(x:numpy.ndarray, m:float, b:float) -> numpy.ndarray
103 Linear test function
104 Args:
105 x: Input times.
106 m: Slope.
107 b: Intercept
108 Returns:
109 np.ndarray
110 ```
111 This is partly because `functools.wraps` has been used on the underlying function.
112 This in itself is not sufficient as the signature of the sampled function has
113 `duration`, whereas the signature of the continuous function is `time`.
114
115 This is achieved by removing `__wrapped__` set by `functools.wraps` in order to preserve
116 the correct signature and also applying `_update_annotations` and `_update_docstring`
117 to the generated function which corrects the function annotations and adds an informative
118 docstring respectively.
119
120 The user therefore has access to the correct sampled function docstring in its entirety, while
121 still seeing the signature for the continuous pulse function and all of its arguments.
122 """
123
124 import functools
125 from typing import Callable
126 import textwrap
127 import pydoc
128
129 import numpy as np
130
131 from qiskit.pulse.samplers import strategies
132 import qiskit.pulse.commands as commands
133
134
135 def _update_annotations(discretized_pulse: Callable) -> Callable:
136 """Update annotations of discretized continuous pulse function with duration.
137
138 Args:
139 discretized_pulse: Discretized decorated continuous pulse.
140 """
141 undecorated_annotations = list(discretized_pulse.__annotations__.items())
142 decorated_annotations = undecorated_annotations[1:]
143 decorated_annotations.insert(0, ('duration', int))
144 discretized_pulse.__annotations__ = dict(decorated_annotations)
145 return discretized_pulse
146
147
148 def _update_docstring(discretized_pulse: Callable, sampler_inst: Callable) -> Callable:
149 """Update annotations of discretized continuous pulse function.
150
151 Args:
152 discretized_pulse: Discretized decorated continuous pulse.
153 sampler_inst: Applied sampler.
154 """
155 wrapped_docstring = pydoc.render_doc(discretized_pulse, '%s')
156 header, body = wrapped_docstring.split('\n', 1)
157 body = textwrap.indent(body, ' ')
158 wrapped_docstring = header+body
159 updated_ds = """
160 Discretized continuous pulse function: `{continuous_name}` using
161 sampler: `{sampler_name}`.
162
163 The first argument (time) of the continuous pulse function has been replaced with
164 a discretized `duration` of type (int).
165
166 Args:
167 duration (int)
168 *args: Remaining arguments of continuous pulse function.
169 See continuous pulse function documentation below.
170 **kwargs: Remaining kwargs of continuous pulse function.
171 See continuous pulse function documentation below.
172
173 Sampled continuous function:
174
175 {continuous_doc}
176 """.format(continuous_name=discretized_pulse.__name__,
177 sampler_name=sampler_inst.__name__,
178 continuous_doc=wrapped_docstring)
179
180 discretized_pulse.__doc__ = updated_ds
181 return discretized_pulse
182
183
184 def sampler(sample_function: Callable) -> Callable:
185 """Sampler decorator base method.
186
187     Samplers are used for converting a continuous function to a discretized pulse.
188
189 They operate on a function with the signature:
190 `def f(times: np.ndarray, *args, **kwargs) -> np.ndarray`
191 Where `times` is a numpy array of floats with length n_times and the output array
192 is a complex numpy array with length n_times. The output of the decorator is an
193 instance of `FunctionalPulse` with signature:
194 `def g(duration: int, *args, **kwargs) -> SamplePulse`
195
196 Note if your continuous pulse function outputs a `complex` scalar rather than a
197 `np.ndarray`, you should first vectorize it before applying a sampler.
198
199
200     This function implements the boilerplate shared by all samplers.
201
202 Args:
203 sample_function: A sampler function to be decorated.
204 """
205
206 def generate_sampler(continuous_pulse: Callable) -> Callable:
207 """Return a decorated sampler function."""
208
209 @functools.wraps(continuous_pulse)
210 def call_sampler(duration: int, *args, **kwargs) -> commands.SamplePulse:
211 """Replace the call to the continuous function with a call to the sampler applied
212             to the analytic pulse function."""
213 sampled_pulse = sample_function(continuous_pulse, duration, *args, **kwargs)
214 return np.asarray(sampled_pulse, dtype=np.complex_)
215
216 # Update type annotations for wrapped continuous function to be discrete
217 call_sampler = _update_annotations(call_sampler)
218 # Update docstring with that of the sampler and include sampled function documentation.
219 call_sampler = _update_docstring(call_sampler, sample_function)
220 # Unset wrapped to return base sampler signature
221 # but still get rest of benefits of wraps
222 # such as __name__, __qualname__
223 call_sampler.__dict__.pop('__wrapped__')
224 # wrap with functional pulse
225 return commands.functional_pulse(call_sampler)
226
227 return generate_sampler
228
229
230 def left(continuous_pulse: Callable) -> Callable:
231 r"""Left sampling strategy decorator.
232
233 See `pulse.samplers.sampler` for more information.
234
235 For `duration`, return:
236 $$\{f(t) \in \mathbb{C} | t \in \mathbb{Z} \wedge 0<=t<\texttt{duration}\}$$
237
238 Args:
239 continuous_pulse: To sample.
240 """
241
242 return sampler(strategies.left_sample)(continuous_pulse)
243
244
245 def right(continuous_pulse: Callable) -> Callable:
246 r"""Right sampling strategy decorator.
247
248 See `pulse.samplers.sampler` for more information.
249
250 For `duration`, return:
251 $$\{f(t) \in \mathbb{C} | t \in \mathbb{Z} \wedge 0<t<=\texttt{duration}\}$$
252
253 Args:
254 continuous_pulse: To sample.
255 """
256
257 return sampler(strategies.right_sample)(continuous_pulse)
258
259
260 def midpoint(continuous_pulse: Callable) -> Callable:
261 r"""Midpoint sampling strategy decorator.
262
263 See `pulse.samplers.sampler` for more information.
264
265 For `duration`, return:
266 $$\{f(t+0.5) \in \mathbb{C} | t \in \mathbb{Z} \wedge 0<=t<\texttt{duration}\}$$
267
268 Args:
269 continuous_pulse: To sample.
270 """
271 return sampler(strategies.midpoint_sample)(continuous_pulse)
272
[end of qiskit/pulse/samplers/decorators.py]
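To round off the module docstring above, here is a hedged sketch (not from the repository) of decorating a continuous pulse function with the `left` sampler; per the docstring, the decorated function takes an integer `duration` and returns a `SamplePulse`-like command exposing `duration` and `samples`.

```python
import numpy as np

from qiskit.pulse.samplers.decorators import left


@left
def linear(times: np.ndarray, m: float, b: float) -> np.ndarray:
    """Linear envelope m * t + b evaluated at the sampled times."""
    return m * times + b


# After decoration, the first argument is a duration rather than a time
# array, and the result is a pulse command built from the sampled values.
pulse_cmd = linear(10, 0.02, 0.1)
print(pulse_cmd.duration)      # 10
print(pulse_cmd.samples[:3])   # [0.1, 0.12, 0.14] as complex values
```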
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
a85c741a6ebf8955cdf7ad1bcc78e772a30bdf31
|
Pulse naming issues
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
We have several naming issues in Pulse-frontend classes/modules/methods:
- [ ] Inconsistency in Channel, PulseCommand, Schedule: PulseCommand -> Command?
- [ ] common module -> remove and move contents to top-level?
- [ ] Inconsistency in DriveInstruction applied to OutputChannel: OutputInstruction or rename OutputChannel? Rename both (to PulseInstruction and PulseChannel)?
- [ ] channels module have more than channels, it's more like device module: pulse/channels -> pulse/device?
(I guess we have more)
|
Agreed on all counts.
> Inconsistency in DriveInstruction applied to OutputChannel: OutputInstruction or rename OutputChannel? Rename both (to PulseInstruction and PulseChannel)?
I think PulseInstruction and PulseChannel make the most sense. However, they may be confused with the `pulse` module? If this is the case maybe `OutputInstruction` and `OutputChannel`? What do you think?
|
2019-04-22T08:54:29Z
|
<patch>
diff --git a/qiskit/compiler/assembler.py b/qiskit/compiler/assembler.py
--- a/qiskit/compiler/assembler.py
+++ b/qiskit/compiler/assembler.py
@@ -13,7 +13,7 @@
from qiskit.circuit import QuantumCircuit
from qiskit.exceptions import QiskitError
from qiskit.pulse import Schedule, LoConfig
-from qiskit.pulse.commands import DriveInstruction
+from qiskit.pulse.commands import PulseInstruction
from qiskit.compiler.run_config import RunConfig
from qiskit.qobj import (QasmQobj, PulseQobj, QobjExperimentHeader, QobjHeader,
QasmQobjInstruction, QasmQobjExperimentConfig, QasmQobjExperiment,
@@ -151,16 +151,13 @@ def assemble_circuits(circuits, qobj_id=None, qobj_header=None, run_config=None)
def assemble_schedules(schedules, qobj_id=None, qobj_header=None, run_config=None):
"""Assembles a list of schedules into a qobj which can be run on the backend.
-
Args:
schedules (list[Schedule]): schedules to assemble
qobj_id (int): identifier for the generated qobj
qobj_header (QobjHeader): header to pass to the results
run_config (RunConfig): configuration of the runtime environment
-
Returns:
PulseQobj: the Qobj to be run on the backends
-
Raises:
QiskitError: when invalid schedules or configs are provided
"""
@@ -184,7 +181,7 @@ def assemble_schedules(schedules, qobj_id=None, qobj_header=None, run_config=Non
for shift, instruction in list(schedule.flatten()):
# TODO: support conditional gate
qobj_instructions.append(instruction_converter(shift, instruction))
- if isinstance(instruction, DriveInstruction):
+ if isinstance(instruction, PulseInstruction):
# add samples to pulse library
user_pulselib.add(instruction.command)
# experiment header
@@ -308,8 +305,8 @@ def assemble(experiments,
default_meas_los (list):
List of default meas lo frequencies
- schedule_los (None or list[Union[Dict[OutputChannel, float], LoConfig]] or
- Union[Dict[OutputChannel, float], LoConfig]):
+ schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
+ Union[Dict[PulseChannel, float], LoConfig]):
Experiment LO configurations
meas_level (int):
diff --git a/qiskit/execute.py b/qiskit/execute.py
--- a/qiskit/execute.py
+++ b/qiskit/execute.py
@@ -140,8 +140,8 @@ def execute(experiments, backend,
default_meas_los (list):
List of default meas lo frequencies
- schedule_los (None or list[Union[Dict[OutputChannel, float], LoConfig]] or
- Union[Dict[OutputChannel, float], LoConfig]):
+ schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
+ Union[Dict[PulseChannel, float], LoConfig]):
Experiment LO configurations
meas_level (int):
diff --git a/qiskit/providers/models/pulsedefaults.py b/qiskit/providers/models/pulsedefaults.py
--- a/qiskit/providers/models/pulsedefaults.py
+++ b/qiskit/providers/models/pulsedefaults.py
@@ -40,8 +40,8 @@ class DiscriminatorSchema(BaseSchema):
params = Nested(ObjSchema)
-class PulseCommandSchema(BaseSchema):
- """Schema for PulseCommand."""
+class CommandSchema(BaseSchema):
+ """Schema for Command."""
# Required properties.
name = String(required=True)
@@ -60,7 +60,7 @@ class PulseDefaultsSchema(BaseSchema):
meas_freq_est = List(Number(), required=True, validate=Length(min=1))
buffer = Integer(required=True, validate=Range(min=0))
pulse_library = Nested(PulseLibraryItemSchema, required=True, many=True)
- cmd_def = Nested(PulseCommandSchema, many=True, required=True)
+ cmd_def = Nested(CommandSchema, many=True, required=True)
# Optional properties.
meas_kernel = Nested(MeasurementKernelSchema)
@@ -105,12 +105,12 @@ class Discriminator(BaseModel):
pass
-@bind_schema(PulseCommandSchema)
-class PulseCommand(BaseModel):
- """Model for PulseCommand.
+@bind_schema(CommandSchema)
+class Command(BaseModel):
+ """Model for Command.
Please note that this class only describes the required fields. For the
- full description of the model, please check ``PulseCommandSchema``.
+ full description of the model, please check ``CommandSchema``.
Attributes:
name (str): Pulse command name.
@@ -134,7 +134,7 @@ class PulseDefaults(BaseModel):
in GHz.
buffer (int): Default buffer time (in units of dt) between pulses.
pulse_library (list[PulseLibraryItem]): Backend pulse library.
- cmd_def (list[PulseCommand]): Backend command definition.
+ cmd_def (list[Command]): Backend command definition.
"""
def __init__(self, qubit_freq_est, meas_freq_est, buffer,
diff --git a/qiskit/pulse/channels/__init__.py b/qiskit/pulse/channels/__init__.py
--- a/qiskit/pulse/channels/__init__.py
+++ b/qiskit/pulse/channels/__init__.py
@@ -5,11 +5,11 @@
# This source code is licensed under the Apache License, Version 2.0 found in
# the LICENSE.txt file in the root directory of this source tree.
-"""Channel classes for pulse."""
+"""Device-related classes for pulse."""
from .device_specification import DeviceSpecification
-from .output_channel import DriveChannel, ControlChannel, MeasureChannel
-from .output_channel import OutputChannel
-from .pulse_channel import AcquireChannel, MemorySlot, RegisterSlot, SnapshotChannel
-from .pulse_channel import Channel
+from .pulse_channels import DriveChannel, ControlChannel, MeasureChannel
+from .pulse_channels import PulseChannel
+from .channels import AcquireChannel, MemorySlot, RegisterSlot, SnapshotChannel
+from .channels import Channel
from .qubit import Qubit
diff --git a/qiskit/pulse/channels/pulse_channel.py b/qiskit/pulse/channels/channels.py
similarity index 95%
rename from qiskit/pulse/channels/pulse_channel.py
rename to qiskit/pulse/channels/channels.py
--- a/qiskit/pulse/channels/pulse_channel.py
+++ b/qiskit/pulse/channels/channels.py
@@ -12,7 +12,7 @@
class Channel(metaclass=ABCMeta):
- """Pulse channel."""
+ """Base class of channels."""
prefix = None
@@ -37,7 +37,7 @@ def __eq__(self, other):
"""Two channels are the same if they are of the same type, and have the same index.
Args:
- other (Channel): other PulseChannel
+ other (Channel): other Channel
Returns:
bool: are self and other equal.
@@ -76,7 +76,7 @@ def __init__(self):
class MemorySlot(Channel):
- """Memory slot."""
+ """Memory slot channel."""
prefix = 'm'
diff --git a/qiskit/pulse/channels/device_specification.py b/qiskit/pulse/channels/device_specification.py
--- a/qiskit/pulse/channels/device_specification.py
+++ b/qiskit/pulse/channels/device_specification.py
@@ -13,8 +13,8 @@
from qiskit.pulse.exceptions import PulseError
from qiskit.validation.exceptions import ModelValidationError
-from .output_channel import DriveChannel, ControlChannel, MeasureChannel
-from .pulse_channel import AcquireChannel, MemorySlot, RegisterSlot
+from .pulse_channels import DriveChannel, ControlChannel, MeasureChannel
+from .channels import AcquireChannel, MemorySlot, RegisterSlot
from .qubit import Qubit
logger = logging.getLogger(__name__)
diff --git a/qiskit/pulse/channels/output_channel.py b/qiskit/pulse/channels/pulse_channels.py
similarity index 94%
rename from qiskit/pulse/channels/output_channel.py
rename to qiskit/pulse/channels/pulse_channels.py
--- a/qiskit/pulse/channels/output_channel.py
+++ b/qiskit/pulse/channels/pulse_channels.py
@@ -12,7 +12,7 @@
from typing import Tuple
from qiskit.pulse.exceptions import PulseError
-from .pulse_channel import Channel
+from .channels import Channel
class LoRange:
@@ -65,8 +65,8 @@ def __eq__(self, other):
return False
-class OutputChannel(Channel):
- """Output Channel."""
+class PulseChannel(Channel):
+ """Base class of Channel supporting pulse output."""
@abstractmethod
def __init__(self,
@@ -96,7 +96,7 @@ def lo_freq_range(self) -> LoRange:
return self._lo_freq_range
-class DriveChannel(OutputChannel):
+class DriveChannel(PulseChannel):
"""Drive Channel."""
prefix = 'd'
@@ -114,7 +114,7 @@ def __init__(self, index: int,
super().__init__(index, lo_freq, lo_freq_range)
-class ControlChannel(OutputChannel):
+class ControlChannel(PulseChannel):
"""Control Channel."""
prefix = 'u'
@@ -132,7 +132,7 @@ def __init__(self, index: int,
super().__init__(index, lo_freq, lo_freq_range)
-class MeasureChannel(OutputChannel):
+class MeasureChannel(PulseChannel):
"""Measure Channel."""
prefix = 'm'
diff --git a/qiskit/pulse/channels/qubit.py b/qiskit/pulse/channels/qubit.py
--- a/qiskit/pulse/channels/qubit.py
+++ b/qiskit/pulse/channels/qubit.py
@@ -11,8 +11,8 @@
from typing import List
from qiskit.pulse.exceptions import PulseError
-from .output_channel import DriveChannel, ControlChannel, MeasureChannel
-from .pulse_channel import AcquireChannel
+from .pulse_channels import DriveChannel, ControlChannel, MeasureChannel
+from .channels import AcquireChannel
class Qubit:
diff --git a/qiskit/pulse/commands/__init__.py b/qiskit/pulse/commands/__init__.py
--- a/qiskit/pulse/commands/__init__.py
+++ b/qiskit/pulse/commands/__init__.py
@@ -12,7 +12,7 @@
from .frame_change import FrameChange, FrameChangeInstruction
from .meas_opts import Discriminator, Kernel
from .persistent_value import PersistentValue, PersistentValueInstruction
-from .pulse_command import PulseCommand
+from .command import Command
from .pulse_decorators import functional_pulse
-from .sample_pulse import SamplePulse, DriveInstruction
+from .sample_pulse import SamplePulse, PulseInstruction
from .snapshot import Snapshot
diff --git a/qiskit/pulse/commands/acquire.py b/qiskit/pulse/commands/acquire.py
--- a/qiskit/pulse/commands/acquire.py
+++ b/qiskit/pulse/commands/acquire.py
@@ -14,10 +14,10 @@
from qiskit.pulse.exceptions import PulseError
from .instruction import Instruction
from .meas_opts import Discriminator, Kernel
-from .pulse_command import PulseCommand
+from .command import Command
-class Acquire(PulseCommand):
+class Acquire(Command):
"""Acquire."""
def __init__(self, duration, discriminator=None, kernel=None):
diff --git a/qiskit/pulse/commands/pulse_command.py b/qiskit/pulse/commands/command.py
similarity index 83%
rename from qiskit/pulse/commands/pulse_command.py
rename to qiskit/pulse/commands/command.py
--- a/qiskit/pulse/commands/pulse_command.py
+++ b/qiskit/pulse/commands/command.py
@@ -15,18 +15,18 @@
from .instruction import Instruction
-class PulseCommand(metaclass=ABCMeta):
+class Command(metaclass=ABCMeta):
"""Super abstract class of command group."""
pulseIndex = 0
@abstractmethod
def __init__(self, duration: int = None, name: str = None):
- """Create new pulse commands.
+ """Create a new command.
Args:
- duration (int): Duration of pulse.
- name (str): Name of pulse command.
+ duration (int): Duration of this command.
+ name (str): Name of this command.
Raises:
PulseError: when duration is not number of points.
"""
@@ -38,8 +38,8 @@ def __init__(self, duration: int = None, name: str = None):
if name:
self._name = name
else:
- self._name = 'p%d' % PulseCommand.pulseIndex
- PulseCommand.pulseIndex += 1
+ self._name = 'p%d' % Command.pulseIndex
+ Command.pulseIndex += 1
@property
def duration(self) -> int:
@@ -61,11 +61,11 @@ def __call__(self, *args, **kwargs):
return self.to_instruction(*args, **kwargs)
def __eq__(self, other):
- """Two PulseCommands are the same if they are of the same type
+ """Two Commands are the same if they are of the same type
and have the same duration and name.
Args:
- other (PulseCommand): other PulseCommand.
+ other (Command): other Command.
Returns:
bool: are self and other equal.
diff --git a/qiskit/pulse/commands/frame_change.py b/qiskit/pulse/commands/frame_change.py
--- a/qiskit/pulse/commands/frame_change.py
+++ b/qiskit/pulse/commands/frame_change.py
@@ -9,12 +9,12 @@
Frame change pulse.
"""
-from qiskit.pulse.channels import OutputChannel
+from qiskit.pulse.channels import PulseChannel
from .instruction import Instruction
-from .pulse_command import PulseCommand
+from .command import Command
-class FrameChange(PulseCommand):
+class FrameChange(Command):
"""Frame change pulse."""
def __init__(self, phase):
@@ -51,13 +51,13 @@ def __repr__(self):
return '%s(%s, phase=%.3f)' % (self.__class__.__name__, self.name, self.phase)
# pylint: disable=arguments-differ
- def to_instruction(self, channel: OutputChannel, name=None) -> 'FrameChangeInstruction':
+ def to_instruction(self, channel: PulseChannel, name=None) -> 'FrameChangeInstruction':
return FrameChangeInstruction(self, channel, name=name)
# pylint: enable=arguments-differ
class FrameChangeInstruction(Instruction):
- """Instruction to change frame of an `OutputChannel`. """
+ """Instruction to change frame of an `PulseChannel`. """
- def __init__(self, command: FrameChange, channel: OutputChannel, name=None):
+ def __init__(self, command: FrameChange, channel: PulseChannel, name=None):
super().__init__(command, channel, name=name)
diff --git a/qiskit/pulse/commands/instruction.py b/qiskit/pulse/commands/instruction.py
--- a/qiskit/pulse/commands/instruction.py
+++ b/qiskit/pulse/commands/instruction.py
@@ -17,7 +17,6 @@
from qiskit.pulse.timeslots import Interval, Timeslot, TimeslotCollection
from qiskit.pulse.exceptions import PulseError
-
logger = logging.getLogger(__name__)
# pylint: disable=missing-return-doc
@@ -29,7 +28,7 @@ class Instruction(ScheduleComponent):
def __init__(self, command, *channels: List[Channel],
timeslots: TimeslotCollection = None, name=None):
"""
- command (PulseCommand): Pulse command to schedule
+ command (Command): Pulse command to schedule
*channels: List of pulse channels to schedule with command
timeslots: Optional list of timeslots. If channels are supplied timeslots
cannot also be given
@@ -57,7 +56,7 @@ def name(self) -> str:
def command(self):
"""Acquire command.
- Returns: PulseCommand
+ Returns: Command
"""
return self._command
diff --git a/qiskit/pulse/commands/persistent_value.py b/qiskit/pulse/commands/persistent_value.py
--- a/qiskit/pulse/commands/persistent_value.py
+++ b/qiskit/pulse/commands/persistent_value.py
@@ -9,13 +9,13 @@
Persistent value.
"""
-from qiskit.pulse.channels import OutputChannel
+from qiskit.pulse.channels import PulseChannel
from qiskit.pulse.exceptions import PulseError
from .instruction import Instruction
-from .pulse_command import PulseCommand
+from .command import Command
-class PersistentValue(PulseCommand):
+class PersistentValue(Command):
"""Persistent value."""
def __init__(self, value):
@@ -58,7 +58,7 @@ def __repr__(self):
return '%s(%s, value=%s)' % (self.__class__.__name__, self.name, self.value)
# pylint: disable=arguments-differ
- def to_instruction(self, channel: OutputChannel, name=None) -> 'PersistentValueInstruction':
+ def to_instruction(self, channel: PulseChannel, name=None) -> 'PersistentValueInstruction':
return PersistentValueInstruction(self, channel, name=name)
# pylint: enable=arguments-differ
@@ -66,5 +66,5 @@ def to_instruction(self, channel: OutputChannel, name=None) -> 'PersistentValueI
class PersistentValueInstruction(Instruction):
"""Instruction to keep persistent value. """
- def __init__(self, command: PersistentValue, channel: OutputChannel, name=None):
+ def __init__(self, command: PersistentValue, channel: PulseChannel, name=None):
super().__init__(command, channel, name=name)
diff --git a/qiskit/pulse/commands/sample_pulse.py b/qiskit/pulse/commands/sample_pulse.py
--- a/qiskit/pulse/commands/sample_pulse.py
+++ b/qiskit/pulse/commands/sample_pulse.py
@@ -10,13 +10,13 @@
"""
import numpy as np
-from qiskit.pulse.channels import OutputChannel
+from qiskit.pulse.channels import PulseChannel
from qiskit.pulse.exceptions import PulseError
from .instruction import Instruction
-from .pulse_command import PulseCommand
+from .command import Command
-class SamplePulse(PulseCommand):
+class SamplePulse(Command):
"""Container for functional pulse."""
def __init__(self, samples, name=None):
@@ -80,13 +80,13 @@ def __repr__(self):
return '%s(%s, duration=%d)' % (self.__class__.__name__, self.name, self.duration)
# pylint: disable=arguments-differ
- def to_instruction(self, channel: OutputChannel, name=None) -> 'DriveInstruction':
- return DriveInstruction(self, channel, name=name)
+ def to_instruction(self, channel: PulseChannel, name=None) -> 'PulseInstruction':
+ return PulseInstruction(self, channel, name=name)
# pylint: enable=arguments-differ
-class DriveInstruction(Instruction):
- """Instruction to drive a pulse to an `OutputChannel`. """
+class PulseInstruction(Instruction):
+ """Instruction to drive a pulse to an `PulseChannel`. """
- def __init__(self, command: SamplePulse, channel: OutputChannel, name=None):
+ def __init__(self, command: SamplePulse, channel: PulseChannel, name=None):
super().__init__(command, channel, name=name)
diff --git a/qiskit/pulse/commands/snapshot.py b/qiskit/pulse/commands/snapshot.py
--- a/qiskit/pulse/commands/snapshot.py
+++ b/qiskit/pulse/commands/snapshot.py
@@ -11,10 +11,10 @@
from qiskit.pulse.channels import SnapshotChannel
from .instruction import Instruction
-from .pulse_command import PulseCommand
+from .command import Command
-class Snapshot(PulseCommand, Instruction):
+class Snapshot(Command, Instruction):
"""Snapshot."""
def __init__(self, name: str, snap_type: str):
@@ -28,7 +28,7 @@ def __init__(self, name: str, snap_type: str):
"""
self._type = snap_type
self._channel = SnapshotChannel()
- PulseCommand.__init__(self, duration=0, name=name)
+ Command.__init__(self, duration=0, name=name)
Instruction.__init__(self, self, self._channel, name=name)
@property
diff --git a/qiskit/pulse/configuration.py b/qiskit/pulse/configuration.py
--- a/qiskit/pulse/configuration.py
+++ b/qiskit/pulse/configuration.py
@@ -10,14 +10,14 @@
"""
from typing import Dict
-from .channels import OutputChannel, DriveChannel, MeasureChannel
+from .channels import PulseChannel, DriveChannel, MeasureChannel
from .exceptions import PulseError
class LoConfig:
- """Output channel LO frequency container."""
+ """Pulse channel LO frequency container."""
- def __init__(self, user_lo_dic: Dict[OutputChannel, float] = None):
+ def __init__(self, user_lo_dic: Dict[PulseChannel, float] = None):
self._q_lo_freq = {}
self._m_lo_freq = {}
diff --git a/qiskit/qobj/converters/pulse_instruction.py b/qiskit/qobj/converters/pulse_instruction.py
--- a/qiskit/qobj/converters/pulse_instruction.py
+++ b/qiskit/qobj/converters/pulse_instruction.py
@@ -192,13 +192,13 @@ def convert_persistent_value(self, shift, instruction):
}
return self._qobj_model(**command_dict)
- @bind_instruction(commands.DriveInstruction)
+ @bind_instruction(commands.PulseInstruction)
def convert_drive(self, shift, instruction):
- """Return converted `DriveInstruction`.
+ """Return converted `PulseInstruction`.
Args:
shift(int): Offset time.
- instruction (DriveInstruction): drive instruction.
+ instruction (PulseInstruction): drive instruction.
Returns:
dict: Dictionary of required parameters.
"""
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-32068
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Equivalent of Series.map for DataFrame
I think there should be a way to take a DataFrame and use the values in its columns as indices into a MultiIndexed Series. Here is an example:
```
>>> d = pandas.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
>>> d
Let Num
0 A 1
1 B 2
2 C 3
>>> ser = pandas.Series(
... ['a', 'b', 'c', 'd', 'e', 'f'],
... index=pandas.MultiIndex.from_arrays([["A", "B", "C"]*2, [1, 2, 3, 4, 5, 6]])
... )
>>> ser
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
```
With this data, you should be able to do `d.map(ser)` (or whatever method name instead of `map`) and get the same result as this:
```
>>> ser.ix[d.apply(tuple, axis=1)]
A 1 a
B 2 b
C 3 c
dtype: object
```
You currently cannot do this without converting the rows to tuples (`ser[d]` gives `ValueError: Cannot index with multidimensional key`). Converting to tuple is an awkward way to do a very natural task, which is using a tabular array of data to look up values in a tabular index (i.e., a MultiIndex).
I'm creating an issue to resurrect this request from much earlier discussion [here](https://groups.google.com/forum/#!searchin/pydata/dataframe$20map/pydata/sibkQ5Ea-Hc/RKMwiNsH7TMJ) and [here](http://stackoverflow.com/questions/22293683/equivalent-of-series-map-for-dataframe).
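For illustration only (not part of the original report), here is a minimal sketch of the workarounds available today: the tuple conversion described above, spelled with `.loc` instead of the deprecated `.ix`, plus a variant using `pd.MultiIndex.from_frame`, which assumes pandas >= 0.24.

```python
import pandas as pd

d = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
ser = pd.Series(
    ["a", "b", "c", "d", "e", "f"],
    index=pd.MultiIndex.from_arrays([["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]]),
)

# Workaround 1: convert each row of the DataFrame to a tuple and use .loc
# (the non-deprecated spelling of ser.ix[d.apply(tuple, axis=1)] above).
keys = list(d.itertuples(index=False, name=None))
print(ser.loc[keys])

# Workaround 2: build a MultiIndex straight from the DataFrame and reindex;
# rows without a match come back as NaN instead of raising.
print(ser.reindex(pd.MultiIndex.from_frame(d)))
```

Either way, the request here is for a built-in method that hides this boilerplate.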
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://pypi.org/project/pandas/)
11 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
12 [](https://travis-ci.org/pandas-dev/pandas)
13 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
14 [](https://codecov.io/gh/pandas-dev/pandas)
15 [](https://pandas.pydata.org)
16 [](https://gitter.im/pydata/pandas)
17 [](https://numfocus.org)
18
19 ## What is it?
20
21 **pandas** is a Python package providing fast, flexible, and expressive data
22 structures designed to make working with "relational" or "labeled" data both
23 easy and intuitive. It aims to be the fundamental high-level building block for
24 doing practical, **real world** data analysis in Python. Additionally, it has
25 the broader goal of becoming **the most powerful and flexible open source data
26 analysis / manipulation tool available in any language**. It is already well on
27 its way towards this goal.
28
29 ## Main Features
30 Here are just a few of the things that pandas does well:
31
32 - Easy handling of [**missing data**][missing-data] (represented as
33 `NaN`) in floating point as well as non-floating point data
34 - Size mutability: columns can be [**inserted and
35 deleted**][insertion-deletion] from DataFrame and higher dimensional
36 objects
37 - Automatic and explicit [**data alignment**][alignment]: objects can
38 be explicitly aligned to a set of labels, or the user can simply
39 ignore the labels and let `Series`, `DataFrame`, etc. automatically
40 align the data for you in computations
41 - Powerful, flexible [**group by**][groupby] functionality to perform
42 split-apply-combine operations on data sets, for both aggregating
43 and transforming data
44 - Make it [**easy to convert**][conversion] ragged,
45 differently-indexed data in other Python and NumPy data structures
46 into DataFrame objects
47 - Intelligent label-based [**slicing**][slicing], [**fancy
48 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
49 large data sets
50 - Intuitive [**merging**][merging] and [**joining**][joining] data
51 sets
52 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
53 data sets
54 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
55 labels per tick)
56 - Robust IO tools for loading data from [**flat files**][flat-files]
57 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
58 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
59 - [**Time series**][timeseries]-specific functionality: date range
60 generation and frequency conversion, moving window statistics,
61 date shifting and lagging.
62
63
64 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
65 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
66 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
67 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
68 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
69 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
70 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
71 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
72 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
73 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
74 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
75 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
76 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
77 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
78 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
79 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
80 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
81 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
82
83 ## Where to get it
84 The source code is currently hosted on GitHub at:
85 https://github.com/pandas-dev/pandas
86
87 Binary installers for the latest released version are available at the [Python
88 package index](https://pypi.org/project/pandas) and on conda.
89
90 ```sh
91 # conda
92 conda install pandas
93 ```
94
95 ```sh
96 # or PyPI
97 pip install pandas
98 ```
99
100 ## Dependencies
101 - [NumPy](https://www.numpy.org)
102 - [python-dateutil](https://labix.org/python-dateutil)
103 - [pytz](https://pythonhosted.org/pytz)
104
105 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
106
107 ## Installation from sources
108 To install pandas from source you need Cython in addition to the normal
109 dependencies above. Cython can be installed from pypi:
110
111 ```sh
112 pip install cython
113 ```
114
115 In the `pandas` directory (same one where you found this file after
116 cloning the git repo), execute:
117
118 ```sh
119 python setup.py install
120 ```
121
122 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
123
124
125 ```sh
126 python -m pip install -e . --no-build-isolation --no-use-pep517
127 ```
128
129 If you have `make`, you can also use `make develop` to run the same command.
130
131 or alternatively
132
133 ```sh
134 python setup.py develop
135 ```
136
137 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
138
139 ## License
140 [BSD 3](LICENSE)
141
142 ## Documentation
143 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
144
145 ## Background
146 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
147 has been under active development since then.
148
149 ## Getting Help
150
151 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
152 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
153
154 ## Discussion and Development
155 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
156
157 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
158
159 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
160
161 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
162
163 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
164
165 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
166
167 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
168
169 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
170
171 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
172
[end of README.md]
[start of pandas/core/reshape/concat.py]
1 """
2 Concat routines.
3 """
4
5 from typing import Hashable, Iterable, List, Mapping, Optional, Union, overload
6
7 import numpy as np
8
9 from pandas._typing import FrameOrSeriesUnion
10
11 from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
12
13 from pandas import DataFrame, Index, MultiIndex, Series
14 from pandas.core.arrays.categorical import (
15 factorize_from_iterable,
16 factorize_from_iterables,
17 )
18 import pandas.core.common as com
19 from pandas.core.generic import NDFrame
20 from pandas.core.indexes.api import (
21 all_indexes_same,
22 ensure_index,
23 get_consensus_names,
24 get_objs_combined_axis,
25 )
26 import pandas.core.indexes.base as ibase
27 from pandas.core.internals import concatenate_block_managers
28
29 # ---------------------------------------------------------------------
30 # Concatenate DataFrame objects
31
32
33 @overload
34 def concat(
35 objs: Union[Iterable["DataFrame"], Mapping[Optional[Hashable], "DataFrame"]],
36 axis=0,
37 join: str = "outer",
38 ignore_index: bool = False,
39 keys=None,
40 levels=None,
41 names=None,
42 verify_integrity: bool = False,
43 sort: bool = False,
44 copy: bool = True,
45 ) -> "DataFrame":
46 ...
47
48
49 @overload
50 def concat(
51 objs: Union[
52 Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
53 ],
54 axis=0,
55 join: str = "outer",
56 ignore_index: bool = False,
57 keys=None,
58 levels=None,
59 names=None,
60 verify_integrity: bool = False,
61 sort: bool = False,
62 copy: bool = True,
63 ) -> FrameOrSeriesUnion:
64 ...
65
66
67 def concat(
68 objs: Union[
69 Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
70 ],
71 axis=0,
72 join="outer",
73 ignore_index: bool = False,
74 keys=None,
75 levels=None,
76 names=None,
77 verify_integrity: bool = False,
78 sort: bool = False,
79 copy: bool = True,
80 ) -> FrameOrSeriesUnion:
81 """
82 Concatenate pandas objects along a particular axis with optional set logic
83 along the other axes.
84
85 Can also add a layer of hierarchical indexing on the concatenation axis,
86 which may be useful if the labels are the same (or overlapping) on
87 the passed axis number.
88
89 Parameters
90 ----------
91 objs : a sequence or mapping of Series or DataFrame objects
92 If a dict is passed, the sorted keys will be used as the `keys`
93 argument, unless it is passed, in which case the values will be
94 selected (see below). Any None objects will be dropped silently unless
95 they are all None in which case a ValueError will be raised.
96 axis : {0/'index', 1/'columns'}, default 0
97 The axis to concatenate along.
98 join : {'inner', 'outer'}, default 'outer'
99 How to handle indexes on other axis (or axes).
100 ignore_index : bool, default False
101 If True, do not use the index values along the concatenation axis. The
102 resulting axis will be labeled 0, ..., n - 1. This is useful if you are
103 concatenating objects where the concatenation axis does not have
104 meaningful indexing information. Note the index values on the other
105 axes are still respected in the join.
106 keys : sequence, default None
107 If multiple levels passed, should contain tuples. Construct
108 hierarchical index using the passed keys as the outermost level.
109 levels : list of sequences, default None
110 Specific levels (unique values) to use for constructing a
111 MultiIndex. Otherwise they will be inferred from the keys.
112 names : list, default None
113 Names for the levels in the resulting hierarchical index.
114 verify_integrity : bool, default False
115 Check whether the new concatenated axis contains duplicates. This can
116 be very expensive relative to the actual data concatenation.
117 sort : bool, default False
118 Sort non-concatenation axis if it is not already aligned when `join`
119 is 'outer'.
120 This has no effect when ``join='inner'``, which already preserves
121 the order of the non-concatenation axis.
122
123 .. versionadded:: 0.23.0
124 .. versionchanged:: 1.0.0
125
126 Changed to not sort by default.
127
128 copy : bool, default True
129 If False, do not copy data unnecessarily.
130
131 Returns
132 -------
133 object, type of objs
134 When concatenating all ``Series`` along the index (axis=0), a
135 ``Series`` is returned. When ``objs`` contains at least one
136 ``DataFrame``, a ``DataFrame`` is returned. When concatenating along
137 the columns (axis=1), a ``DataFrame`` is returned.
138
139 See Also
140 --------
141 Series.append : Concatenate Series.
142 DataFrame.append : Concatenate DataFrames.
143 DataFrame.join : Join DataFrames using indexes.
144 DataFrame.merge : Merge DataFrames by indexes or columns.
145
146 Notes
147 -----
148 The keys, levels, and names arguments are all optional.
149
150 A walkthrough of how this method fits in with other tools for combining
151 pandas objects can be found `here
152 <https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html>`__.
153
154 Examples
155 --------
156 Combine two ``Series``.
157
158 >>> s1 = pd.Series(['a', 'b'])
159 >>> s2 = pd.Series(['c', 'd'])
160 >>> pd.concat([s1, s2])
161 0 a
162 1 b
163 0 c
164 1 d
165 dtype: object
166
167 Clear the existing index and reset it in the result
168 by setting the ``ignore_index`` option to ``True``.
169
170 >>> pd.concat([s1, s2], ignore_index=True)
171 0 a
172 1 b
173 2 c
174 3 d
175 dtype: object
176
177 Add a hierarchical index at the outermost level of
178 the data with the ``keys`` option.
179
180 >>> pd.concat([s1, s2], keys=['s1', 's2'])
181 s1 0 a
182 1 b
183 s2 0 c
184 1 d
185 dtype: object
186
187 Label the index keys you create with the ``names`` option.
188
189 >>> pd.concat([s1, s2], keys=['s1', 's2'],
190 ... names=['Series name', 'Row ID'])
191 Series name Row ID
192 s1 0 a
193 1 b
194 s2 0 c
195 1 d
196 dtype: object
197
198 Combine two ``DataFrame`` objects with identical columns.
199
200 >>> df1 = pd.DataFrame([['a', 1], ['b', 2]],
201 ... columns=['letter', 'number'])
202 >>> df1
203 letter number
204 0 a 1
205 1 b 2
206 >>> df2 = pd.DataFrame([['c', 3], ['d', 4]],
207 ... columns=['letter', 'number'])
208 >>> df2
209 letter number
210 0 c 3
211 1 d 4
212 >>> pd.concat([df1, df2])
213 letter number
214 0 a 1
215 1 b 2
216 0 c 3
217 1 d 4
218
219 Combine ``DataFrame`` objects with overlapping columns
220 and return everything. Columns outside the intersection will
221 be filled with ``NaN`` values.
222
223 >>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],
224 ... columns=['letter', 'number', 'animal'])
225 >>> df3
226 letter number animal
227 0 c 3 cat
228 1 d 4 dog
229 >>> pd.concat([df1, df3], sort=False)
230 letter number animal
231 0 a 1 NaN
232 1 b 2 NaN
233 0 c 3 cat
234 1 d 4 dog
235
236 Combine ``DataFrame`` objects with overlapping columns
237 and return only those that are shared by passing ``inner`` to
238 the ``join`` keyword argument.
239
240 >>> pd.concat([df1, df3], join="inner")
241 letter number
242 0 a 1
243 1 b 2
244 0 c 3
245 1 d 4
246
247 Combine ``DataFrame`` objects horizontally along the x axis by
248 passing in ``axis=1``.
249
250 >>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']],
251 ... columns=['animal', 'name'])
252 >>> pd.concat([df1, df4], axis=1)
253 letter number animal name
254 0 a 1 bird polly
255 1 b 2 monkey george
256
257 Prevent the result from including duplicate index values with the
258 ``verify_integrity`` option.
259
260 >>> df5 = pd.DataFrame([1], index=['a'])
261 >>> df5
262 0
263 a 1
264 >>> df6 = pd.DataFrame([2], index=['a'])
265 >>> df6
266 0
267 a 2
268 >>> pd.concat([df5, df6], verify_integrity=True)
269 Traceback (most recent call last):
270 ...
271 ValueError: Indexes have overlapping values: ['a']
272 """
273 op = _Concatenator(
274 objs,
275 axis=axis,
276 ignore_index=ignore_index,
277 join=join,
278 keys=keys,
279 levels=levels,
280 names=names,
281 verify_integrity=verify_integrity,
282 copy=copy,
283 sort=sort,
284 )
285
286 return op.get_result()
287
288
289 class _Concatenator:
290 """
291 Orchestrates a concatenation operation for BlockManagers
292 """
293
294 def __init__(
295 self,
296 objs,
297 axis=0,
298 join: str = "outer",
299 keys=None,
300 levels=None,
301 names=None,
302 ignore_index: bool = False,
303 verify_integrity: bool = False,
304 copy: bool = True,
305 sort=False,
306 ):
307 if isinstance(objs, (NDFrame, str)):
308 raise TypeError(
309 "first argument must be an iterable of pandas "
310 f'objects, you passed an object of type "{type(objs).__name__}"'
311 )
312
313 if join == "outer":
314 self.intersect = False
315 elif join == "inner":
316 self.intersect = True
317 else: # pragma: no cover
318 raise ValueError(
319 "Only can inner (intersect) or outer (union) join the other axis"
320 )
321
322 if isinstance(objs, dict):
323 if keys is None:
324 keys = list(objs.keys())
325 objs = [objs[k] for k in keys]
326 else:
327 objs = list(objs)
328
329 if len(objs) == 0:
330 raise ValueError("No objects to concatenate")
331
332 if keys is None:
333 objs = list(com.not_none(*objs))
334 else:
335 # #1649
336 clean_keys = []
337 clean_objs = []
338 for k, v in zip(keys, objs):
339 if v is None:
340 continue
341 clean_keys.append(k)
342 clean_objs.append(v)
343 objs = clean_objs
344 name = getattr(keys, "name", None)
345 keys = Index(clean_keys, name=name)
346
347 if len(objs) == 0:
348 raise ValueError("All objects passed were None")
349
350 # consolidate data & figure out what our result ndim is going to be
351 ndims = set()
352 for obj in objs:
353 if not isinstance(obj, (Series, DataFrame)):
354 msg = (
355 f"cannot concatenate object of type '{type(obj)}'; "
356 "only Series and DataFrame objs are valid"
357 )
358 raise TypeError(msg)
359
360 # consolidate
361 obj._consolidate(inplace=True)
362 ndims.add(obj.ndim)
363
364 # get the sample
365 # want the highest ndim that we have, and must be non-empty
366 # unless all objs are empty
367 sample = None
368 if len(ndims) > 1:
369 max_ndim = max(ndims)
370 for obj in objs:
371 if obj.ndim == max_ndim and np.sum(obj.shape):
372 sample = obj
373 break
374
375 else:
376 # filter out the empties if we have not multi-index possibilities
377 # note to keep empty Series as it affect to result columns / name
378 non_empties = [
379 obj for obj in objs if sum(obj.shape) > 0 or isinstance(obj, Series)
380 ]
381
382 if len(non_empties) and (
383 keys is None and names is None and levels is None and not self.intersect
384 ):
385 objs = non_empties
386 sample = objs[0]
387
388 if sample is None:
389 sample = objs[0]
390 self.objs = objs
391
392 # Standardize axis parameter to int
393 if isinstance(sample, Series):
394 axis = DataFrame._get_axis_number(axis)
395 else:
396 axis = sample._get_axis_number(axis)
397
398 # Need to flip BlockManager axis in the DataFrame special case
399 self._is_frame = isinstance(sample, ABCDataFrame)
400 if self._is_frame:
401 axis = 1 if axis == 0 else 0
402
403 self._is_series = isinstance(sample, ABCSeries)
404 if not 0 <= axis <= sample.ndim:
405 raise AssertionError(
406 f"axis must be between 0 and {sample.ndim}, input was {axis}"
407 )
408
409 # if we have mixed ndims, then convert to highest ndim
410 # creating column numbers as needed
411 if len(ndims) > 1:
412 current_column = 0
413 max_ndim = sample.ndim
414 self.objs, objs = [], self.objs
415 for obj in objs:
416
417 ndim = obj.ndim
418 if ndim == max_ndim:
419 pass
420
421 elif ndim != max_ndim - 1:
422 raise ValueError(
423 "cannot concatenate unaligned mixed "
424 "dimensional NDFrame objects"
425 )
426
427 else:
428 name = getattr(obj, "name", None)
429 if ignore_index or name is None:
430 name = current_column
431 current_column += 1
432
433 # doing a row-wise concatenation so need everything
434 # to line up
435 if self._is_frame and axis == 1:
436 name = 0
437 obj = sample._constructor({name: obj})
438
439 self.objs.append(obj)
440
441 # note: this is the BlockManager axis (since DataFrame is transposed)
442 self.axis = axis
443 self.keys = keys
444 self.names = names or getattr(keys, "names", None)
445 self.levels = levels
446 self.sort = sort
447
448 self.ignore_index = ignore_index
449 self.verify_integrity = verify_integrity
450 self.copy = copy
451
452 self.new_axes = self._get_new_axes()
453
454 def get_result(self):
455
456 # series only
457 if self._is_series:
458
459 # stack blocks
460 if self.axis == 0:
461 name = com.consensus_name_attr(self.objs)
462
463 mgr = self.objs[0]._data.concat(
464 [x._data for x in self.objs], self.new_axes
465 )
466 cons = self.objs[0]._constructor
467 return cons(mgr, name=name).__finalize__(self, method="concat")
468
469 # combine as columns in a frame
470 else:
471 data = dict(zip(range(len(self.objs)), self.objs))
472 cons = DataFrame
473
474 index, columns = self.new_axes
475 df = cons(data, index=index)
476 df.columns = columns
477 return df.__finalize__(self, method="concat")
478
479 # combine block managers
480 else:
481 mgrs_indexers = []
482 for obj in self.objs:
483 mgr = obj._data
484 indexers = {}
485 for ax, new_labels in enumerate(self.new_axes):
486 if ax == self.axis:
487 # Suppress reindexing on concat axis
488 continue
489
490 obj_labels = mgr.axes[ax]
491 if not new_labels.equals(obj_labels):
492 indexers[ax] = obj_labels.reindex(new_labels)[1]
493
494 mgrs_indexers.append((obj._data, indexers))
495
496 new_data = concatenate_block_managers(
497 mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
498 )
499 if not self.copy:
500 new_data._consolidate_inplace()
501
502 cons = self.objs[0]._constructor
503 return cons(new_data).__finalize__(self, method="concat")
504
505 def _get_result_dim(self) -> int:
506 if self._is_series and self.axis == 1:
507 return 2
508 else:
509 return self.objs[0].ndim
510
511 def _get_new_axes(self) -> List[Index]:
512 ndim = self._get_result_dim()
513 return [
514 self._get_concat_axis() if i == self.axis else self._get_comb_axis(i)
515 for i in range(ndim)
516 ]
517
518 def _get_comb_axis(self, i: int) -> Index:
519 data_axis = self.objs[0]._get_block_manager_axis(i)
520 return get_objs_combined_axis(
521 self.objs,
522 axis=data_axis,
523 intersect=self.intersect,
524 sort=self.sort,
525 copy=self.copy,
526 )
527
528 def _get_concat_axis(self) -> Index:
529 """
530 Return index to be used along concatenation axis.
531 """
532 if self._is_series:
533 if self.axis == 0:
534 indexes = [x.index for x in self.objs]
535 elif self.ignore_index:
536 idx = ibase.default_index(len(self.objs))
537 return idx
538 elif self.keys is None:
539 names: List[Optional[Hashable]] = [None] * len(self.objs)
540 num = 0
541 has_names = False
542 for i, x in enumerate(self.objs):
543 if not isinstance(x, Series):
544 raise TypeError(
545 f"Cannot concatenate type 'Series' with "
546 f"object of type '{type(x).__name__}'"
547 )
548 if x.name is not None:
549 names[i] = x.name
550 has_names = True
551 else:
552 names[i] = num
553 num += 1
554 if has_names:
555 return Index(names)
556 else:
557 return ibase.default_index(len(self.objs))
558 else:
559 return ensure_index(self.keys).set_names(self.names)
560 else:
561 indexes = [x._data.axes[self.axis] for x in self.objs]
562
563 if self.ignore_index:
564 idx = ibase.default_index(sum(len(i) for i in indexes))
565 return idx
566
567 if self.keys is None:
568 concat_axis = _concat_indexes(indexes)
569 else:
570 concat_axis = _make_concat_multiindex(
571 indexes, self.keys, self.levels, self.names
572 )
573
574 self._maybe_check_integrity(concat_axis)
575
576 return concat_axis
577
578 def _maybe_check_integrity(self, concat_index: Index):
579 if self.verify_integrity:
580 if not concat_index.is_unique:
581 overlap = concat_index[concat_index.duplicated()].unique()
582 raise ValueError(f"Indexes have overlapping values: {overlap}")
583
584
585 def _concat_indexes(indexes) -> Index:
586 return indexes[0].append(indexes[1:])
587
588
589 def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiIndex:
590
591 if (levels is None and isinstance(keys[0], tuple)) or (
592 levels is not None and len(levels) > 1
593 ):
594 zipped = list(zip(*keys))
595 if names is None:
596 names = [None] * len(zipped)
597
598 if levels is None:
599 _, levels = factorize_from_iterables(zipped)
600 else:
601 levels = [ensure_index(x) for x in levels]
602 else:
603 zipped = [keys]
604 if names is None:
605 names = [None]
606
607 if levels is None:
608 levels = [ensure_index(keys)]
609 else:
610 levels = [ensure_index(x) for x in levels]
611
612 if not all_indexes_same(indexes):
613 codes_list = []
614
615 # things are potentially different sizes, so compute the exact codes
616 # for each level and pass those to MultiIndex.from_arrays
617
618 for hlevel, level in zip(zipped, levels):
619 to_concat = []
620 for key, index in zip(hlevel, indexes):
621 try:
622 i = level.get_loc(key)
623 except KeyError:
624 raise ValueError(f"Key {key} not in level {level}")
625
626 to_concat.append(np.repeat(i, len(index)))
627 codes_list.append(np.concatenate(to_concat))
628
629 concat_index = _concat_indexes(indexes)
630
631 # these go at the end
632 if isinstance(concat_index, MultiIndex):
633 levels.extend(concat_index.levels)
634 codes_list.extend(concat_index.codes)
635 else:
636 codes, categories = factorize_from_iterable(concat_index)
637 levels.append(categories)
638 codes_list.append(codes)
639
640 if len(names) == len(levels):
641 names = list(names)
642 else:
643 # make sure that all of the passed indices have the same nlevels
644 if not len({idx.nlevels for idx in indexes}) == 1:
645 raise AssertionError(
646 "Cannot concat indices that do not have the same number of levels"
647 )
648
649 # also copies
650 names = names + get_consensus_names(indexes)
651
652 return MultiIndex(
653 levels=levels, codes=codes_list, names=names, verify_integrity=False
654 )
655
656 new_index = indexes[0]
657 n = len(new_index)
658 kpieces = len(indexes)
659
660 # also copies
661 new_names = list(names)
662 new_levels = list(levels)
663
664 # construct codes
665 new_codes = []
666
667 # do something a bit more speedy
668
669 for hlevel, level in zip(zipped, levels):
670 hlevel = ensure_index(hlevel)
671 mapped = level.get_indexer(hlevel)
672
673 mask = mapped == -1
674 if mask.any():
675 raise ValueError(f"Values not found in passed level: {hlevel[mask]!s}")
676
677 new_codes.append(np.repeat(mapped, n))
678
679 if isinstance(new_index, MultiIndex):
680 new_levels.extend(new_index.levels)
681 new_codes.extend([np.tile(lab, kpieces) for lab in new_index.codes])
682 else:
683 new_levels.append(new_index)
684 new_codes.append(np.tile(np.arange(n), kpieces))
685
686 if len(new_names) < len(new_levels):
687 new_names.extend(new_index.names)
688
689 return MultiIndex(
690 levels=new_levels, codes=new_codes, names=new_names, verify_integrity=False
691 )
692
[end of pandas/core/reshape/concat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
b4cbc196bae1efbd9d989204afc87f4b2fc456b3
|
Equivalent of Series.map for DataFrame
I think there should be a way to take a DataFrame and use the values in its columns as indices into a MultiIndexed Series. Here is an example:
```
>>> d = pandas.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
>>> d
Let Num
0 A 1
1 B 2
2 C 3
>>> ser = pandas.Series(
... ['a', 'b', 'c', 'd', 'e', 'f'],
... index=pandas.MultiIndex.from_arrays([["A", "B", "C"]*2, [1, 2, 3, 4, 5, 6]])
... )
>>> ser
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
```
With this data, you should be able to do `d.map(ser)` (or whatever method name instead of `map`) and get the same result as this:
```
>>> ser.ix[d.apply(tuple, axis=1)]
A 1 a
B 2 b
C 3 c
dtype: object
```
You currently cannot do this without converting the rows to tuples (`ser[d]` gives `ValueError: Cannot index with multidimensional key`). Converting to tuple is an awkward way to do a very natural task, which is using a tabular array of data to look up values in a tabular index (i.e., a MultiIndex).
I'm creating an issue to resurrect this request from much earlier discussion [here](https://groups.google.com/forum/#!searchin/pydata/dataframe$20map/pydata/sibkQ5Ea-Hc/RKMwiNsH7TMJ) and [here](http://stackoverflow.com/questions/22293683/equivalent-of-series-map-for-dataframe).
|
`map` is just a fancy way of doing a `merge`
```
In [24]: d
Out[24]:
Let Num
0 A 1
1 B 2
2 C 3
In [25]: ser
Out[25]:
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [26]: ser.index.names=['Let','Num']
In [27]: pd.merge(d, ser.reset_index(), on=['Let','Num'])
Out[27]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
```
I suppose this could be shown in a doc example and the merge docs (and maybe cross-ref `.map` with `.merge`).
@BrenBarn do you want to do a doc-example for this?
I find myself doing something like this all the time, so I wrote my own function for it.
In terms of functionality, it's like a (vectorized and more generic) `VLOOKUP` in Excel. The implementation is just a wrapper around `pd.DataFrame.merge` (or `join` for some cases)
`df.vlookup(other, drop_missing_rows=False, drop_lookup_cols=False )`.
Similar to `pd.DataFrame.merge` and `pd.Series.map`, but:
* You may (or may not) want to keep the columns you're joining on (`Series.map` drops them, `df.merge` keeps them)
* always join on `other.index.names`, but you may want to use the index of `df` (like join) or the columns (like merge) or a combination.
* just like `map`, it raises when `other.index` rows are not unique
* you may want to drop the rows that don't have an entry in `other` (inner join) or insert NAs (left join).
Happy to submit if it helps
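For concreteness, below is a minimal sketch of the `vlookup`-style helper described in the list above, written as a thin wrapper around `DataFrame.merge`. The name `vlookup`, its keyword arguments, and the sample data are illustrative assumptions, not an existing pandas API or the commenter's actual code.

```python
import pandas as pd


def vlookup(df, other, drop_missing_rows=False, drop_lookup_cols=False):
    """Look up rows of ``df`` in the (Multi)Index of ``other``, VLOOKUP-style.

    ``other`` is a Series/DataFrame whose index level names match columns of
    ``df``. Like ``Series.map``, it raises if that index is not unique.
    """
    if not other.index.is_unique:
        raise ValueError("lookup index must be unique")
    keys = list(other.index.names)
    how = "inner" if drop_missing_rows else "left"
    result = df.merge(other.reset_index(), on=keys, how=how)
    if drop_lookup_cols:
        result = result.drop(columns=keys)
    return result


d = pd.DataFrame({"Let": ["A", "B", "C", "Z"], "Num": [1, 2, 3, 9]})
ser = pd.Series(
    ["a", "b", "c", "d", "e", "f"],
    index=pd.MultiIndex.from_arrays(
        [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
    ),
    name="val",
)

print(vlookup(d, ser))                          # left join: the Z row gets NaN
print(vlookup(d, ser, drop_missing_rows=True))  # inner join: the Z row is dropped
```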
I'll write up an example of this in the merging docs.
|
2020-02-18T00:05:32Z
|
<patch>
diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst
--- a/doc/source/user_guide/merging.rst
+++ b/doc/source/user_guide/merging.rst
@@ -724,6 +724,27 @@ either the left or right tables, the values in the joined table will be
labels=['left', 'right'], vertical=False);
plt.close('all');
+You can merge a multi-indexed Series and a DataFrame, if the names of
+the MultiIndex correspond to the columns from the DataFrame. Transform
+the Series to a DataFrame using :meth:`Series.reset_index` before merging,
+as shown in the following example.
+
+.. ipython:: python
+
+ df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
+ df
+
+ ser = pd.Series(
+ ["a", "b", "c", "d", "e", "f"],
+ index=pd.MultiIndex.from_arrays(
+ [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
+ ),
+ )
+ ser
+
+ result = pd.merge(df, ser.reset_index(), on=['Let', 'Num'])
+
+
Here is another example with duplicate join keys in DataFrames:
.. ipython:: python
</patch>
|
[]
|
[]
| |||
huggingface__transformers-11406
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Distributed DataSampler has fixed data order despite random seeds.
When using a distributed data loader with `shuffle = True` in the Hugging Face trainer, it calls the underlying torch data loader. If `shuffle` is set to True, the data loader seeds the generator with `seed + epoch` ([here](https://github.com/pytorch/pytorch/blob/f84a50109f794d4feab922056b77d7c358076776/torch/utils/data/distributed.py#L100)).
When calling the data loader in the HF trainer ([here](https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/src/transformers/trainer.py#L553)), the seed is _not_ passed to the torch data loader and thereby gets set to the default seed of 0. This means the data loader's generator will always get initialized to just the epoch, regardless of the seed passed to HF.
I would think we'd want the data order to be random, too.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (with DeepSpeed)
### Who can help
@sgugger (trainer)
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* The hugging face trainer with a distributed data sampler
The tasks I am working on is:
* Training GPT2 from scratch using DDP with DeepSpeed
## To reproduce
Steps to reproduce the behavior:
Using a different seed with the distributed data loader does not change the data order.
## Expected behavior
The random seed should be passed to the data loader so that the data order is randomized when the seed changes.
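For illustration (not from the original report), here is a small self-contained sketch of the behaviour described above: `torch.utils.data.distributed.DistributedSampler` seeds its generator with `seed + epoch`, so leaving `seed` at its default of 0 always reproduces the same order, while a different `seed` changes it. The `seed` argument assumes PyTorch >= 1.6; `num_replicas`/`rank` are given explicitly so the snippet runs without initializing a process group.

```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(10))

def sample_order(seed, epoch=0):
    sampler = DistributedSampler(
        dataset, num_replicas=1, rank=0, shuffle=True, seed=seed
    )
    sampler.set_epoch(epoch)  # the generator is seeded with seed + epoch
    return list(sampler)

print(sample_order(seed=0))   # what the Trainer currently gets (default seed 0)
print(sample_order(seed=0))   # identical order on every run
print(sample_order(seed=42))  # a different seed finally changes the order
```

Passing the training seed through when the Trainer constructs its distributed sampler would make the shuffle depend on that seed as expected.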
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/transformers/index.html">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 </p>
39
40 <h3 align="center">
41 <p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0
42 </h3>
43
44 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation, etc in 100+ languages. Its aim is to make cutting-edge NLP easier to use for everyone.
45
46 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture can be used as a standalone and modified to enable quick research experiments.
47
48 🤗 Transformers is backed by the two most popular deep learning libraries, [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them, allowing you to train your models with one then load it for inference with the other.
49
50 ## Online demos
51
52 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) to use those models.
53
54 Here are a few examples:
55 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
56 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
57 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
58 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
59 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
60 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
61 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
62
63 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
64
65 ## Quick tour
66
67 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model training. Here is how to quickly use a pipeline to classify positive versus negative texts
68
69 ```python
70 >>> from transformers import pipeline
71
72 # Allocate a pipeline for sentiment-analysis
73 >>> classifier = pipeline('sentiment-analysis')
74 >>> classifier('We are very happy to include pipeline into the transformers repository.')
75 [{'label': 'POSITIVE', 'score': 0.9978193640708923}]
76 ```
77
78 The second line of code downloads and caches the pretrained model used by the pipeline, the third line evaluates it on the given text. Here the answer is "positive" with a confidence of 99.8%.
79
80 This is another example of a pipeline, used here to extract an answer to a question from some context:
81
82 ``` python
83 >>> from transformers import pipeline
84
85 # Allocate a pipeline for question-answering
86 >>> question_answerer = pipeline('question-answering')
87 >>> question_answerer({
88 ... 'question': 'What is the name of the repository ?',
89 ... 'context': 'Pipeline have been included in the huggingface/transformers repository'
90 ... })
91 {'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
92
93 ```
94
95 On top of the answer, the pretrained model used here returned its confidence score, along with the start position and its end position in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/transformers/task_summary.html).
96
97 To download and use any of the pretrained models on your given task, you just need to use those three lines of codes (PyTorch version):
98 ```python
99 >>> from transformers import AutoTokenizer, AutoModel
100
101 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
102 >>> model = AutoModel.from_pretrained("bert-base-uncased")
103
104 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
105 >>> outputs = model(**inputs)
106 ```
107 or for TensorFlow:
108 ```python
109 >>> from transformers import AutoTokenizer, TFAutoModel
110
111 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
112 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
113
114 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
115 >>> outputs = model(**inputs)
116 ```
117
118 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on one (or list) of texts (as we can see on the fourth line of both code examples). It will output a dictionary you can directly pass to your model (which is done on the fifth line).
119
120 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. For instance, [this tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model in a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
121
122 ## Why should I use transformers?
123
124 1. Easy-to-use state-of-the-art models:
125 - High performance on NLU and NLG tasks.
126 - Low barrier to entry for educators and practitioners.
127 - Few user-facing abstractions with just three classes to learn.
128 - A unified API for using all our pretrained models.
129
130 1. Lower compute costs, smaller carbon footprint:
131 - Researchers can share trained models instead of always retraining.
132 - Practitioners can reduce compute time and production costs.
133 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
134
135 1. Choose the right framework for every part of a model's lifetime:
136 - Train state-of-the-art models in 3 lines of code.
137 - Move a single model between TF2.0/PyTorch frameworks at will.
138 - Seamlessly pick the right framework for training, evaluation, production.
139
140 1. Easily customize a model or an example to your needs:
141 - Examples for each architecture to reproduce the results by the official authors of said architecture.
142 - Expose the models internal as consistently as possible.
143 - Model files can be used independently of the library for quick experiments.
144
145 ## Why shouldn't I use transformers?
146
147 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving in additional abstractions/files.
148 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
149 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
150
151 ## Installation
152
153 ### With pip
154
155 This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for [examples](https://github.com/huggingface/transformers/tree/master/examples)) and TensorFlow 2.0.
156
157 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
158
159 First, create a virtual environment with the version of Python you're going to use and activate it.
160
161 Then, you will need to install at least one of TensorFlow 2.0, PyTorch or Flax.
162 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform and/or [Flax installation page](https://github.com/google/flax#quick-install).
163
164 When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows:
165
166 ```bash
167 pip install transformers
168 ```
169
170 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/transformers/installation.html#installing-from-source).
171
172 ### With conda
173
174 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
175
176 🤗 Transformers can be installed using conda as follows:
177
178 ```shell script
179 conda install -c huggingface transformers
180 ```
181
182 Follow the installation pages of TensorFlow, PyTorch or Flax to see how to install them with conda.
183
184 ## Models architectures
185
186 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
187
188 Current number of checkpoints: 
189
190 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/transformers/model_summary.html) for a high-level summary of each them):
191
192 1. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
193 1. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
194 1. **[BARThez](https://huggingface.co/transformers/model_doc/barthez.html)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
195 1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
196 1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
197 1. **[BigBird-RoBERTa](https://huggingface.co/transformers/model_doc/bigbird.html)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
198 1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
199 1. **[BlenderbotSmall](https://huggingface.co/transformers/model_doc/blenderbot_small.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
200 1. **[BORT](https://huggingface.co/transformers/model_doc/bort.html)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
201 1. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
202 1. **[ConvBERT](https://huggingface.co/transformers/model_doc/convbert.html)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
203 1. **[CPM](https://huggingface.co/transformers/model_doc/cpm.html)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
204 1. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
205 1. **[DeBERTa](https://huggingface.co/transformers/model_doc/deberta.html)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
206 1. **[DeBERTa-v2](https://huggingface.co/transformers/model_doc/deberta_v2.html)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
207 1. **[DeiT](https://huggingface.co/transformers/model_doc/deit.html)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
208 1. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
209 1. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
210 1. **[DPR](https://huggingface.co/transformers/model_doc/dpr.html)** (from Facebook) released with the paper [Dense Passage Retrieval
211 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
212 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
213 1. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
214 1. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
215 1. **[Funnel Transformer](https://huggingface.co/transformers/model_doc/funnel.html)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
216 1. **[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
217 1. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
218 1. **[GPT Neo](https://huggingface.co/transformers/model_doc/gpt_neo.html)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
219 1. **[I-BERT](https://huggingface.co/transformers/model_doc/ibert.html)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer
220 1. **[LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
221 1. **[LED](https://huggingface.co/transformers/model_doc/led.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
222 1. **[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
223 1. **[LXMERT](https://huggingface.co/transformers/model_doc/lxmert.html)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
224 1. **[M2M100](https://huggingface.co/transformers/model_doc/m2m_100.html)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
225 1. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
226 1. **[MBart](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
227 1. **[MBart-50](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
228 1. **[Megatron-BERT](https://huggingface.co/transformers/model_doc/megatron_bert.html)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
229 1. **[Megatron-GPT2](https://huggingface.co/transformers/model_doc/megatron_gpt2.html)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
230 1. **[MPNet](https://huggingface.co/transformers/model_doc/mpnet.html)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
231 1. **[MT5](https://huggingface.co/transformers/model_doc/mt5.html)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
232 1. **[Pegasus](https://huggingface.co/transformers/model_doc/pegasus.html)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
233 1. **[ProphetNet](https://huggingface.co/transformers/model_doc/prophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
234 1. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
235 1. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
236 1. **[SpeechToTextTransformer](https://huggingface.co/transformers/model_doc/speech_to_text.html)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
237 1. **[SqueezeBert](https://huggingface.co/transformers/model_doc/squeezebert.html)** released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
238 1. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
239 1. **[TAPAS](https://huggingface.co/transformers/model_doc/tapas.html)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
240 1. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
241 1. **[Vision Transformer (ViT)](https://huggingface.co/transformers/model_doc/vit.html)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
242 1. **[Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
243 1. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
244 1. **[XLM-ProphetNet](https://huggingface.co/transformers/model_doc/xlmprophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
245 1. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
246 1. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
247 1. **[XLSR-Wav2Vec2](https://huggingface.co/transformers/model_doc/xlsr_wav2vec2.html)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
248 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
249
250 To check if each model has an implementation in PyTorch/TensorFlow/Flax or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#bigtable).
251
252 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
253
254
255 ## Learn more
256
257 | Section | Description |
258 |-|-|
259 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
260 | [Task summary](https://huggingface.co/transformers/task_summary.html) | Tasks supported by 🤗 Transformers |
261 | [Preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) | Using the `Tokenizer` class to prepare data for the models |
262 | [Training and fine-tuning](https://huggingface.co/transformers/training.html) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
263 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
264 | [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) | Upload and share your fine-tuned models with the community |
265 | [Migration](https://huggingface.co/transformers/migration.html) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
266
267 ## Citation
268
269 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
270 ```bibtex
271 @inproceedings{wolf-etal-2020-transformers,
272 title = "Transformers: State-of-the-Art Natural Language Processing",
273 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
274 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
275 month = oct,
276 year = "2020",
277 address = "Online",
278 publisher = "Association for Computational Linguistics",
279 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
280 pages = "38--45"
281 }
282 ```
283
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
b24ead87e1be6bce17e4ec5c953b6d028e4b3af7
|
Distributed DataSampler has fixed data order despite random seeds.
When using a distributed data loader with `shuffle = True` in the Hugging Face trainer, it calls the underlying torch data loader. If `shuffle` is set to True, the data loader seeds the generator with `seed + epoch` ([here](https://github.com/pytorch/pytorch/blob/f84a50109f794d4feab922056b77d7c358076776/torch/utils/data/distributed.py#L100)).
When calling the data loader in the HF trainer ([here](https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/src/transformers/trainer.py#L553)), the seed is _not_ passed to the torch data loader and thereby gets set to the default seed of 0. This means the data loader generator will always get seeded with just the epoch, regardless of the seed given to HF.
I would think we'd want the data order to be random, too.
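A minimal sketch of the effect (assuming only that `torch` is installed; the toy dataset is a stand-in): because `DistributedSampler` shuffles with a generator seeded by `seed + epoch`, and the Trainer never forwards its seed, the resulting order is identical across runs unless the seed is passed through.
```python
# Shows that the shuffle order of DistributedSampler depends only on the `seed`
# it is constructed with (default 0) plus the epoch set via set_epoch().
from torch.utils.data import DistributedSampler

dataset = list(range(8))  # toy stand-in for a real Dataset

def first_epoch_order(sampler_seed):
    sampler = DistributedSampler(dataset, num_replicas=1, rank=0,
                                 shuffle=True, seed=sampler_seed)
    sampler.set_epoch(0)
    return list(sampler)

print(first_epoch_order(0))   # what the Trainer effectively does today
print(first_epoch_order(42))  # what forwarding args.seed would give instead
```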
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (with DeepSpeed)
### Who can help
@sgugger (trainer)
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* The hugging face trainer with a distributed data sampler
The tasks I am working on is:
* Training GPT2 from scratch using DDP with DeepSpeed
## To reproduce
Steps to reproduce the behavior:
Using a different seed with distributed data loader does not change the data order.
## Expected behavior
The random seed should be passed to the data loader so that the data order is randomized when the seed changes.
|
Follow-up note (part of @lorr1's team that encountered this). This is particularly insidious for any sort of code that tries training with multiple random seeds; there's an assumption that across seeds, weight initialization (for pre-training, fine-tuning weights), dropout, AND data order are all different (and all do have significant bearing on results).
Consistent data order (as in the existing code) runs counter to that expectation.
I'm guessing we want a new argument to control that seed though, not the current `args.seed` that is set at the beginning of training, what do you think?
I think the seed set at the beginning of training would be fine -- that would be the expected behavior (weights get randomly initialized, then data order is random _conditioned on a single seed_).
Adding a separate seed just for data order means it's just one more thing you need to keep track of.
There's a backwards compatibility issue here possibly (if folks doing multiple random seeds worth of runs have been relying on/reporting those results), but this feels like the simplest solution?
It's definitely the easiest solution. For the backward compatibility issue, I hope users save their version of Transformers and PyTorch along the seeds for reproducibility. PyTorch does not guarantee the same results across versions (we had an issue with multinomial that changed behavior for instance). So I think it's fine to change the behavior, especially as it's a bug fix.
Will make a PR with the change.
|
2021-04-23T21:35:35Z
|
<patch>
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -547,6 +547,7 @@ def _get_train_sampler(self) -> Optional[torch.utils.data.sampler.Sampler]:
rank=self.args.process_index,
lengths=lengths,
model_input_name=model_input_name,
+ seed=self.args.seed,
)
else:
@@ -562,10 +563,14 @@ def _get_train_sampler(self) -> Optional[torch.utils.data.sampler.Sampler]:
batch_size=self.args.per_device_train_batch_size,
num_replicas=self.args.world_size,
rank=self.args.process_index,
+ seed=self.args.seed,
)
else:
return DistributedSampler(
- self.train_dataset, num_replicas=self.args.world_size, rank=self.args.process_index
+ self.train_dataset,
+ num_replicas=self.args.world_size,
+ rank=self.args.process_index,
+ seed=self.args.seed,
)
def get_train_dataloader(self) -> DataLoader:
</patch>
|
[]
|
[]
| |||
conan-io__conan-13610
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Take a look into normalizing the log levels of conan
Currently, it's not clear whether verbose or notice/status is the default one, and if their order is correct. We should take a look into normalizing all the log levels to be consistent across
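For reference while reading, the mapping currently encoded in `conan/cli/command.py::_process_log_level_args` (shown later in this snapshot; numeric values copied from its inline comments) is summarized below. `status` is the argparse default when no `-v` flag is given at all, while a bare `-v` maps to `verbose`.
```python
# Editorial summary, not part of the codebase: -v argument -> output level,
# as currently implemented in conan/cli/command.py.
ARG_TO_LEVEL = {
    "quiet": 80, "error": 70, "warning": 60, "notice": 50,
    "status": 40,               # default when -v is not passed
    None: 30, "verbose": 30,    # bare -v
    "debug": 20, "v": 20,       # -vv
    "trace": 10, "vv": 10,      # -vvv
}
```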
</issue>
<code>
[start of README.md]
1 <picture>
2 <!-- These are also used for https://github.com/conan-io/.github/blob/main/profile/README.md -->
3 <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-for-dark.svg">
4 <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-for-light.svg">
5 <img alt="JFrog | Conan 2.0 Logo" src="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-with-bg.svg">
6 </picture>
7
8 # Conan
9
10 Decentralized, open-source (MIT), C/C++ package manager.
11
12 - Homepage: https://conan.io/
13 - Github: https://github.com/conan-io/conan
14 - Docs: https://docs.conan.io
15 - Slack: https://cpplang.slack.com (#conan channel)
16 - Twitter: https://twitter.com/conan_io
17
18
19 Conan is a package manager for C and C++ developers:
20
21 - It is fully decentralized. Users can host their packages on their servers, privately. Integrates with Artifactory and Bintray.
22 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first-class support, WSL, MinGW),
23   Solaris, FreeBSD, embedded and cross-compiling, and Docker.
24 - Manage binaries. It can create, upload and download binaries for any configuration and platform,
25 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility can be configured
26 and customized. Manage all your artifacts in the same way on all platforms.
27 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems
28 (CMake, MSBuild, Makefiles, Meson, etc).
29 - Extensible: its Python-based recipes, together with extension points, allow for great power and flexibility.
30 - Large and active community, especially on GitHub (https://github.com/conan-io/conan) and Slack (https://cpplang-inviter.cppalliance.org/ #conan channel).
31 This community also creates and maintains packages in ConanCenter and Bincrafters repositories in Bintray.
32 - Stable. Used in production by many companies; since 1.0 there is a commitment not to break package recipes or documented behavior.
33
34
35 This is the **developer/maintainer** documentation. For user documentation, go to https://docs.conan.io
36
37
38 | **develop2** |
39 |-------------------------|
40 | [](https://ci.conan.io/blue/organizations/jenkins/ConanTestSuitev2/activity) |
41
42
43
44 ## Setup
45
46 You can run Conan from source in Windows, MacOS, and Linux:
47
48 - **Install pip following** [pip docs](https://pip.pypa.io/en/stable/installation/).
49
50 - **Clone Conan repository:**
51
52 ```bash
53 $ git clone https://github.com/conan-io/conan.git conan-io
54 ```
55
56 > **Note**: the repository directory name matters; some directory names (e.g. `conan`) are known to be problematic when running the tests. The `conan-io` directory name was tested and is guaranteed to work.
57
58 - **Install in editable mode**
59
60 ```bash
61 $ cd conan-io && sudo pip install -e .
62 ```
63
64    If you are on Windows, using ``sudo`` is not required.
65
66 - **You are ready, try to run Conan:**
67
68 ```bash
69 $ conan --help
70
71 Consumer commands
72 install Installs the requirements specified in a recipe (conanfile.py or conanfile.txt).
73 ...
74
75 Conan commands. Type "conan <command> -h" for help
76 ```
77
78 ## Contributing to the project
79
80
81 Feedback and contribution are always welcome in this project.
82 Please read our [contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
83 Also, if you plan to contribute, please add some testing for your changes. You can read the [Conan
84 tests guidelines section](https://github.com/conan-io/conan/blob/develop/conans/test/README.md) for
85 some advice on how to write tests for Conan.
86
87 ### Running the tests
88
89
90 **Install python requirements**
91
92 ```bash
93 $ python -m pip install -r conans/requirements_server.txt
94 $ python -m pip install -r conans/requirements_dev.txt
95 ```
96
97 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
98 commands using `sudo`.
99
100 Before you can run the tests, you need to set a few environment variables first.
101
102 ```bash
103 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
104 ```
105
106 On Windows it would be (while being in the Conan root directory):
107
108 ```bash
109 $ set PYTHONPATH=.
110 ```
111
112 The Conan test suite defines and configures some required tools (CMake, Ninja, etc.) in
113 ``conftest.py`` and allows defining a custom ``conftest_user.py``.
114 Some specific versions, such as cmake>=3.15, are required.
115
116
117 You can run the tests like this:
118
119 ```bash
120 $ python -m pytest .
121 ```
122
123 A few minutes later it should print ``OK``:
124
125 ```bash
126 ............................................................................................
127 ----------------------------------------------------------------------
128 Ran 146 tests in 50.993s
129
130 OK
131 ```
132
133 To run specific tests, you can specify the test name too, something like:
134
135 ```bash
136 $ python -m pytest conans/test/unittests/client/cmd/export_test.py::ExportTest::test_export_warning -s
137 ```
138
139 The `-s` argument can be useful to see some output that otherwise is captured by pytest.
140
141 Also, you can run tests against an instance of Artifactory. Those tests should add the attribute
142 `artifactory_ready`.
143
144 ```bash
145 $ python -m pytest . -m artifactory_ready
146 ```
147
148 Some environment variables have to be defined to run them. For example, for an
149 Artifactory instance that is running on the localhost with default user and password configured, the
150 variables could take the values:
151
152 ```bash
153 $ export CONAN_TEST_WITH_ARTIFACTORY=1
154 $ export ARTIFACTORY_DEFAULT_URL=http://localhost:8081/artifactory
155 $ export ARTIFACTORY_DEFAULT_USER=admin
156 $ export ARTIFACTORY_DEFAULT_PASSWORD=password
157 ```
158
159 `ARTIFACTORY_DEFAULT_URL` is the base URL for the Artifactory instance, not the URL of a specific
160 repository. Running the tests against a real Artifactory instance will create repositories on the fly, so please
161 use a separate server for testing purposes.
162
163 ## License
164
165 [MIT LICENSE](LICENSE.md)
166
[end of README.md]
[start of conan/api/subapi/search.py]
1 from conan.internal.conan_app import ConanApp
2 from conans.search.search import search_recipes
3
4
5 class SearchAPI:
6
7 def __init__(self, conan_api):
8 self.conan_api = conan_api
9
10 def recipes(self, query: str, remote=None):
11 only_none_user_channel = False
12 if query and query.endswith("@"):
13 only_none_user_channel = True
14 query = query[:-1]
15
16 app = ConanApp(self.conan_api.cache_folder)
17 if remote:
18 refs = app.remote_manager.search_recipes(remote, query)
19 else:
20 references = search_recipes(app.cache, query)
21 # For consistency with the remote search, we return references without revisions
22             # users can then use the API further to look up the revisions
23 refs = []
24 for r in references:
25 r.revision = None
26 r.timestamp = None
27 if r not in refs:
28 refs.append(r)
29 ret = []
30 for r in refs:
31 if not only_none_user_channel or (r.user is None and r.channel is None):
32 ret.append(r)
33 return ret
34
[end of conan/api/subapi/search.py]
[start of conan/cli/command.py]
1 import argparse
2 import textwrap
3
4 from conan.errors import ConanException
5
6
7 class OnceArgument(argparse.Action):
8     """Allows declaring a parameter that can have only one value; by default argparse takes the
9     latest declared one, which is very confusing.
10 """
11
12 def __call__(self, parser, namespace, values, option_string=None):
13 if getattr(namespace, self.dest) is not None and self.default is None:
14 msg = '{o} can only be specified once'.format(o=option_string)
15 raise argparse.ArgumentError(None, msg)
16 setattr(namespace, self.dest, values)
17
18
19 class SmartFormatter(argparse.HelpFormatter):
20
21 def _fill_text(self, text, width, indent):
22 text = textwrap.dedent(text)
23 return ''.join(indent + line for line in text.splitlines(True))
24
25
26 class BaseConanCommand:
27 def __init__(self, method, formatters=None):
28 self._formatters = {"text": lambda x: None}
29 self._method = method
30 self._name = None
31 self._parser = None
32 if formatters:
33 for kind, action in formatters.items():
34 if callable(action):
35 self._formatters[kind] = action
36 else:
37                     raise ConanException("Invalid formatter for {}. The formatter must be "
38 "a valid function".format(kind))
39 if method.__doc__:
40 self._doc = method.__doc__
41 else:
42 raise ConanException("No documentation string defined for command: '{}'. Conan "
43 "commands should provide a documentation string explaining "
44 "its use briefly.".format(self._name))
45
46 def _init_log_levels(self):
47 self._parser.add_argument("-v", default="status", nargs='?',
48 help="Level of detail of the output. Valid options from less verbose "
49 "to more verbose: -vquiet, -verror, -vwarning, -vnotice, -vstatus, "
50 "-v or -vverbose, -vv or -vdebug, -vvv or -vtrace")
51
52 @property
53 def _help_formatters(self):
54 """
55 Formatters that are shown as available in help, 'text' formatter
56 should not appear
57 """
58 return [formatter for formatter in list(self._formatters) if formatter != "text"]
59
60 def _init_formatters(self):
61 if self._help_formatters:
62 help_message = "Select the output format: {}".format(", ".join(list(self._help_formatters)))
63 self._parser.add_argument('-f', '--format', action=OnceArgument, help=help_message)
64
65 @property
66 def name(self):
67 return self._name
68
69 @property
70 def method(self):
71 return self._method
72
73 @property
74 def doc(self):
75 return self._doc
76
77 @property
78 def parser(self):
79 return self._parser
80
81 def _format(self, parser, info, *args):
82 parser_args, _ = parser.parse_known_args(*args)
83
84 default_format = "text"
85 try:
86 formatarg = parser_args.format or default_format
87 except AttributeError:
88 formatarg = default_format
89
90 try:
91 formatter = self._formatters[formatarg]
92 except KeyError:
93 raise ConanException("{} is not a known format. Supported formatters are: {}".format(
94 formatarg, ", ".join(self._help_formatters)))
95
96 formatter(info)
97
98
99 class ConanArgumentParser(argparse.ArgumentParser):
100
101 def __init__(self, *args, **kwargs):
102 super().__init__(*args, **kwargs)
103
104 def parse_args(self, args=None, namespace=None):
105 args = super().parse_args(args)
106 self._process_log_level_args(args)
107 return args
108
109 @staticmethod
110 def _process_log_level_args(args):
111 from conan.api import output
112 from conan.api.output import LEVEL_QUIET, LEVEL_ERROR, LEVEL_WARNING, LEVEL_NOTICE, \
113 LEVEL_STATUS, LEVEL_VERBOSE, LEVEL_DEBUG, LEVEL_TRACE
114
115 levels = {"quiet": LEVEL_QUIET, # -vquiet 80
116 "error": LEVEL_ERROR, # -verror 70
117                   "warning": LEVEL_WARNING,  # -vwarning 60
118 "notice": LEVEL_NOTICE, # -vnotice 50
119 "status": LEVEL_STATUS, # -vstatus 40
120 "verbose": LEVEL_VERBOSE, # -vverbose 30
121 None: LEVEL_VERBOSE, # -v 30
122 "debug": LEVEL_DEBUG, # -vdebug 20
123 "v": LEVEL_DEBUG, # -vv 20
124 "trace": LEVEL_TRACE, # -vtrace 10
125 "vv": LEVEL_TRACE, # -vvv 10
126 }
127
128 level = levels.get(args.v)
129 if not level:
130 raise ConanException(f"Invalid argument '-v{args.v}'")
131 output.conan_output_level = level
132
133
134 class ConanCommand(BaseConanCommand):
135 def __init__(self, method, group=None, formatters=None):
136 super().__init__(method, formatters=formatters)
137 self._subcommands = {}
138 self._subcommand_parser = None
139 self._group = group or "Other"
140 self._name = method.__name__.replace("_", "-")
141 self._parser = ConanArgumentParser(description=self._doc,
142 prog="conan {}".format(self._name),
143 formatter_class=SmartFormatter)
144 self._init_formatters()
145 self._init_log_levels()
146
147 def add_subcommand(self, subcommand):
148 if not self._subcommand_parser:
149 self._subcommand_parser = self._parser.add_subparsers(dest='subcommand',
150 help='sub-command help')
151 self._subcommand_parser.required = True
152 subcommand.set_name(self.name)
153 subcommand.set_parser(self._parser, self._subcommand_parser)
154 self._subcommands[subcommand.name] = subcommand
155
156 def run(self, conan_api, parser, *args):
157 info = self._method(conan_api, parser, *args)
158
159 if not self._subcommands:
160 self._format(self._parser, info, *args)
161 else:
162 subcommand = args[0][0] if args[0] else None
163 if subcommand in self._subcommands:
164 self._subcommands[subcommand].run(conan_api, *args)
165 else:
166 self._parser.parse_args(*args)
167
168 @property
169 def group(self):
170 return self._group
171
172
173 class ConanSubCommand(BaseConanCommand):
174 def __init__(self, method, formatters=None):
175 super().__init__(method, formatters=formatters)
176 self._parent_parser = None
177 self._parser = None
178 self._subcommand_name = method.__name__.replace('_', '-')
179
180 def run(self, conan_api, *args):
181 info = self._method(conan_api, self._parent_parser, self._parser, *args)
182 # It is necessary to do it after calling the "method" otherwise parser not complete
183 self._format(self._parent_parser, info, *args)
184
185 def set_name(self, parent_name):
186 self._name = self._subcommand_name.replace(f'{parent_name}-', '', 1)
187
188 def set_parser(self, parent_parser, subcommand_parser):
189 self._parser = subcommand_parser.add_parser(self._name, help=self._doc)
190 self._parser.description = self._doc
191 self._parent_parser = parent_parser
192 self._init_formatters()
193 self._init_log_levels()
194
195
196 def conan_command(group=None, formatters=None):
197 return lambda f: ConanCommand(f, group, formatters=formatters)
198
199
200 def conan_subcommand(formatters=None):
201 return lambda f: ConanSubCommand(f, formatters=formatters)
202
[end of conan/cli/command.py]
[start of conan/tools/microsoft/msbuild.py]
1 from conans.errors import ConanException
2
3
4 def msbuild_verbosity_cmd_line_arg(conanfile):
5 verbosity = conanfile.conf.get("tools.build:verbosity")
6 if verbosity:
7 if verbosity not in ("quiet", "error", "warning", "notice", "status", "verbose",
8 "normal", "debug", "v", "trace", "vv"):
9 raise ConanException(f"Unknown value '{verbosity}' for 'tools.build:verbosity'")
10 else:
11 # "Quiet", "Minimal", "Normal", "Detailed", "Diagnostic"
12 verbosity = {
13 "quiet": "Quiet",
14 "error": "Minimal",
15 "warning": "Minimal",
16 "notice": "Minimal",
17 "status": "Normal",
18 "verbose": "Normal",
19 "normal": "Normal",
20 "debug": "Detailed",
21 "v": "Detailed",
22 "trace": "Diagnostic",
23 "vv": "Diagnostic"
24 }.get(verbosity)
25 return '/verbosity:{}'.format(verbosity)
26
27
28 def msbuild_arch(arch):
29 return {'x86': 'x86',
30 'x86_64': 'x64',
31 'armv7': 'ARM',
32 'armv8': 'ARM64'}.get(str(arch))
33
34
35 class MSBuild(object):
36 """
37 MSBuild build helper class
38 """
39
40 def __init__(self, conanfile):
41 """
42 :param conanfile: ``< ConanFile object >`` The current recipe object. Always use ``self``.
43 """
44 self._conanfile = conanfile
45 #: Defines the build type. By default, ``settings.build_type``.
46 self.build_type = conanfile.settings.get_safe("build_type")
47 # if platforms:
48 # msvc_arch.update(platforms)
49 arch = conanfile.settings.get_safe("arch")
50 msvc_arch = msbuild_arch(arch)
51 if conanfile.settings.get_safe("os") == "WindowsCE":
52 msvc_arch = conanfile.settings.get_safe("os.platform")
53 #: Defines the platform name, e.g., ``ARM`` if ``settings.arch == "armv7"``.
54 self.platform = msvc_arch
55
56 def command(self, sln, targets=None):
57 """
58 Gets the ``msbuild`` command line. For instance,
59 :command:`msbuild "MyProject.sln" /p:Configuration=<conf> /p:Platform=<platform>`.
60
61 :param sln: ``str`` name of Visual Studio ``*.sln`` file
62 :param targets: ``targets`` is an optional argument, defaults to ``None``, and otherwise it is a list of targets to build
63 :return: ``str`` msbuild command line.
64 """
65 # TODO: Enable output_binary_log via config
66 cmd = ('msbuild "%s" /p:Configuration="%s" /p:Platform=%s'
67 % (sln, self.build_type, self.platform))
68
69 verbosity = msbuild_verbosity_cmd_line_arg(self._conanfile)
70 if verbosity:
71 cmd += " {}".format(verbosity)
72
73 maxcpucount = self._conanfile.conf.get("tools.microsoft.msbuild:max_cpu_count",
74 check_type=int)
75 if maxcpucount:
76 cmd += " /m:{}".format(maxcpucount)
77
78 if targets:
79 if not isinstance(targets, list):
80 raise ConanException("targets argument should be a list")
81 cmd += " /target:{}".format(";".join(targets))
82
83 return cmd
84
85 def build(self, sln, targets=None):
86 """
87 Runs the ``msbuild`` command line obtained from ``self.command(sln)``.
88
89 :param sln: ``str`` name of Visual Studio ``*.sln`` file
90 :param targets: ``targets`` is an optional argument, defaults to ``None``, and otherwise it is a list of targets to build
91 """
92 cmd = self.command(sln, targets=targets)
93 self._conanfile.run(cmd)
94
95 @staticmethod
96 def get_version(_):
97 return NotImplementedError("get_version() method is not supported in MSBuild "
98 "toolchain helper")
99
[end of conan/tools/microsoft/msbuild.py]
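A typical way to drive the `MSBuild` helper above from a recipe, based on its docstrings (an illustrative fragment; the solution file name is just an example):
```python
# Illustrative recipe fragment: build a Visual Studio solution with the helper.
from conan import ConanFile
from conan.tools.microsoft import MSBuild


class PkgConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"

    def build(self):
        msbuild = MSBuild(self)
        # Runs: msbuild "MyProject.sln" /p:Configuration=<build_type> /p:Platform=<arch> [...]
        msbuild.build("MyProject.sln")
```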
[start of conans/model/conf.py]
1 import re
2 import os
3 import fnmatch
4
5 from collections import OrderedDict
6
7
8 from conans.errors import ConanException
9 from conans.model.recipe_ref import ref_matches
10
11 BUILT_IN_CONFS = {
12 "core:required_conan_version": "Raise if current version does not match the defined range.",
13 "core:non_interactive": "Disable interactive user input, raises error if input necessary",
14 "core:default_profile": "Defines the default host profile ('default' by default)",
15 "core:default_build_profile": "Defines the default build profile (None by default)",
16 "core:allow_uppercase_pkg_names": "Temporarily (will be removed in 2.X) allow uppercase names",
17 "core.version_ranges:resolve_prereleases": "Whether version ranges can resolve to pre-releases or not",
18 "core.upload:retry": "Number of retries in case of failure when uploading to Conan server",
19 "core.upload:retry_wait": "Seconds to wait between upload attempts to Conan server",
20 "core.download:parallel": "Number of concurrent threads to download packages",
21 "core.download:retry": "Number of retries in case of failure when downloading from Conan server",
22 "core.download:retry_wait": "Seconds to wait between download attempts from Conan server",
23 "core.download:download_cache": "Define path to a file download cache",
24 "core.cache:storage_path": "Absolute path where the packages and database are stored",
25 # Sources backup
26 "core.sources:download_cache": "Folder to store the sources backup",
27 "core.sources:download_urls": "List of URLs to download backup sources from",
28 "core.sources:upload_url": "Remote URL to upload backup sources to",
29 # Package ID
30 "core.package_id:default_unknown_mode": "By default, 'semver_mode'",
31 "core.package_id:default_non_embed_mode": "By default, 'minor_mode'",
32 "core.package_id:default_embed_mode": "By default, 'full_mode'",
33 "core.package_id:default_python_mode": "By default, 'minor_mode'",
34 "core.package_id:default_build_mode": "By default, 'None'",
35 # General HTTP(python-requests) configuration
36 "core.net.http:max_retries": "Maximum number of connection retries (requests library)",
37 "core.net.http:timeout": "Number of seconds without response to timeout (requests library)",
38 "core.net.http:no_proxy_match": "List of urls to skip from proxies configuration",
39 "core.net.http:proxies": "Dictionary containing the proxy configuration",
40 "core.net.http:cacert_path": "Path containing a custom Cacert file",
41 "core.net.http:client_cert": "Path or tuple of files containing a client cert (and key)",
42 "core.net.http:clean_system_proxy": "If defined, the proxies system env-vars will be discarded",
43 # Gzip compression
44     "core.gzip:compresslevel": "The Gzip compression level for Conan artifacts (default=9)",
45 # Tools
46 "tools.android:ndk_path": "Argument for the CMAKE_ANDROID_NDK",
47 "tools.android:cmake_legacy_toolchain": "Define to explicitly pass ANDROID_USE_LEGACY_TOOLCHAIN_FILE in CMake toolchain",
48 "tools.build:skip_test": "Do not execute CMake.test() and Meson.test() when enabled",
49 "tools.build:download_source": "Force download of sources for every package",
50 "tools.build:jobs": "Default compile jobs number -jX Ninja, Make, /MP VS (default: max CPUs)",
51 "tools.build:sysroot": "Pass the --sysroot=<tools.build:sysroot> flag if available. (None by default)",
52 "tools.build.cross_building:can_run": "Bool value that indicates whether is possible to run a non-native "
53 "app on the same architecture. It's used by 'can_run' tool",
54 "tools.build:verbosity": "Verbosity of MSBuild and XCodeBuild build systems. "
55 "Possible values are 'quiet', 'error', 'warning', 'notice', 'status', 'verbose', 'normal', 'debug', 'v', 'trace' and 'vv'",
56 "tools.cmake.cmaketoolchain:generator": "User defined CMake generator to use instead of default",
57 "tools.cmake.cmaketoolchain:find_package_prefer_config": "Argument for the CMAKE_FIND_PACKAGE_PREFER_CONFIG",
58 "tools.cmake.cmaketoolchain:toolchain_file": "Use other existing file rather than conan_toolchain.cmake one",
59 "tools.cmake.cmaketoolchain:user_toolchain": "Inject existing user toolchains at the beginning of conan_toolchain.cmake",
60 "tools.cmake.cmaketoolchain:system_name": "Define CMAKE_SYSTEM_NAME in CMakeToolchain",
61 "tools.cmake.cmaketoolchain:system_version": "Define CMAKE_SYSTEM_VERSION in CMakeToolchain",
62 "tools.cmake.cmaketoolchain:system_processor": "Define CMAKE_SYSTEM_PROCESSOR in CMakeToolchain",
63 "tools.cmake.cmaketoolchain:toolset_arch": "Toolset architecture to be used as part of CMAKE_GENERATOR_TOOLSET in CMakeToolchain",
64 "tools.cmake.cmake_layout:build_folder_vars": "Settings and Options that will produce a different build folder and different CMake presets names",
65 "tools.files.download:retry": "Number of retries in case of failure when downloading",
66 "tools.files.download:retry_wait": "Seconds to wait between download attempts",
67 "tools.gnu:make_program": "Indicate path to make program",
68 "tools.gnu:define_libcxx11_abi": "Force definition of GLIBCXX_USE_CXX11_ABI=1 for libstdc++11",
69 "tools.gnu:pkg_config": "Path to pkg-config executable used by PkgConfig build helper",
70 "tools.gnu:host_triplet": "Custom host triplet to pass to Autotools scripts",
71 "tools.google.bazel:configs": "Define Bazel config file",
72 "tools.google.bazel:bazelrc_path": "Defines Bazel rc-path",
73 "tools.meson.mesontoolchain:backend": "Any Meson backend: ninja, vs, vs2010, vs2012, vs2013, vs2015, vs2017, vs2019, xcode",
74 "tools.meson.mesontoolchain:extra_machine_files": "List of paths for any additional native/cross file references to be appended to the existing Conan ones",
75 "tools.microsoft.msbuild:vs_version": "Defines the IDE version when using the new msvc compiler",
76 "tools.microsoft.msbuild:max_cpu_count": "Argument for the /m when running msvc to build parallel projects",
77 "tools.microsoft.msbuild:installation_path": "VS install path, to avoid auto-detect via vswhere, like C:/Program Files (x86)/Microsoft Visual Studio/2019/Community. Use empty string to disable",
78 "tools.microsoft.msbuilddeps:exclude_code_analysis": "Suppress MSBuild code analysis for patterns",
79 "tools.microsoft.msbuildtoolchain:compile_options": "Dictionary with MSBuild compiler options",
80 "tools.microsoft.bash:subsystem": "The subsystem to be used when conanfile.win_bash==True. Possible values: msys2, msys, cygwin, wsl, sfu",
81 "tools.microsoft.bash:path": "The path to the shell to run when conanfile.win_bash==True",
82 "tools.microsoft.bash:active": "If Conan is already running inside bash terminal in Windows",
83 "tools.intel:installation_path": "Defines the Intel oneAPI installation root path",
84 "tools.intel:setvars_args": "Custom arguments to be passed onto the setvars.sh|bat script from Intel oneAPI",
85 "tools.system.package_manager:tool": "Default package manager tool: 'apt-get', 'yum', 'dnf', 'brew', 'pacman', 'choco', 'zypper', 'pkg' or 'pkgutil'",
86 "tools.system.package_manager:mode": "Mode for package_manager tools: 'check' or 'install'",
87 "tools.system.package_manager:sudo": "Use 'sudo' when invoking the package manager tools in Linux (False by default)",
88 "tools.system.package_manager:sudo_askpass": "Use the '-A' argument if using sudo in Linux to invoke the system package manager (False by default)",
89 "tools.apple:sdk_path": "Path to the SDK to be used",
90 "tools.apple:enable_bitcode": "(boolean) Enable/Disable Bitcode Apple Clang flags",
91 "tools.apple:enable_arc": "(boolean) Enable/Disable ARC Apple Clang flags",
92 "tools.apple:enable_visibility": "(boolean) Enable/Disable Visibility Apple Clang flags",
93 "tools.env.virtualenv:powershell": "If it is set to True it will generate powershell launchers if os=Windows",
94 # Compilers/Flags configurations
95 "tools.build:compiler_executables": "Defines a Python dict-like with the compilers path to be used. Allowed keys {'c', 'cpp', 'cuda', 'objc', 'objcxx', 'rc', 'fortran', 'asm', 'hip', 'ispc'}",
96 "tools.build:cxxflags": "List of extra CXX flags used by different toolchains like CMakeToolchain, AutotoolsToolchain and MesonToolchain",
97 "tools.build:cflags": "List of extra C flags used by different toolchains like CMakeToolchain, AutotoolsToolchain and MesonToolchain",
98 "tools.build:defines": "List of extra definition flags used by different toolchains like CMakeToolchain and AutotoolsToolchain",
99 "tools.build:sharedlinkflags": "List of extra flags used by CMakeToolchain for CMAKE_SHARED_LINKER_FLAGS_INIT variable",
100 "tools.build:exelinkflags": "List of extra flags used by CMakeToolchain for CMAKE_EXE_LINKER_FLAGS_INIT variable",
101 "tools.build:linker_scripts": "List of linker script files to pass to the linker used by different toolchains like CMakeToolchain, AutotoolsToolchain, and MesonToolchain",
102 # Package ID composition
103 "tools.info.package_id:confs": "List of existing configuration to be part of the package ID",
104 }
105
106 BUILT_IN_CONFS = {key: value for key, value in sorted(BUILT_IN_CONFS.items())}
107
108
109 CORE_CONF_PATTERN = re.compile(r"^core[.:]")
110 TOOLS_CONF_PATTERN = re.compile(r"^tools[.:]")
111 USER_CONF_PATTERN = re.compile(r"^user[.:]")
112
113
114 def _is_profile_module(module_name):
115 # These are the modules that are propagated to profiles and user recipes
116 _profiles_modules_patterns = USER_CONF_PATTERN, TOOLS_CONF_PATTERN
117 return any(pattern.match(module_name) for pattern in _profiles_modules_patterns)
118
119
120 # FIXME: Refactor all the next classes because they are mostly the same as
121 # conan.tools.env.environment ones
122 class _ConfVarPlaceHolder:
123 pass
124
125
126 class _ConfValue(object):
127
128 def __init__(self, name, value, path=False, update=None):
129 if name != name.lower():
130 raise ConanException("Conf '{}' must be lowercase".format(name))
131 self._name = name
132 self._value = value
133 self._value_type = type(value)
134 self._path = path
135 self._update = update
136
137 def __repr__(self):
138 return repr(self._value)
139
140 @property
141 def value(self):
142 if self._value_type is list and _ConfVarPlaceHolder in self._value:
143 v = self._value[:]
144 v.remove(_ConfVarPlaceHolder)
145 return v
146 return self._value
147
148 def copy(self):
149 return _ConfValue(self._name, self._value, self._path, self._update)
150
151 def dumps(self):
152 if self._value is None:
153 return "{}=!".format(self._name) # unset
154 elif self._value_type is list and _ConfVarPlaceHolder in self._value:
155 v = self._value[:]
156 v.remove(_ConfVarPlaceHolder)
157 return "{}={}".format(self._name, v)
158 else:
159 return "{}={}".format(self._name, self._value)
160
161 def serialize(self):
162 if self._value is None:
163 _value = "!" # unset
164 elif self._value_type is list and _ConfVarPlaceHolder in self._value:
165 v = self._value[:]
166 v.remove(_ConfVarPlaceHolder)
167 _value = v
168 else:
169 _value = self._value
170 return {self._name: _value}
171
172 def update(self, value):
173 assert self._value_type is dict, "Only dicts can be updated"
174 assert isinstance(value, dict), "Only dicts can update"
175 self._value.update(value)
176
177 def remove(self, value):
178 if self._value_type is list:
179 self._value.remove(value)
180 elif self._value_type is dict:
181 self._value.pop(value, None)
182
183 def append(self, value):
184 if self._value_type is not list:
185 raise ConanException("Only list-like values can append other values.")
186
187 if isinstance(value, list):
188 self._value.extend(value)
189 else:
190 self._value.append(value)
191
192 def prepend(self, value):
193 if self._value_type is not list:
194 raise ConanException("Only list-like values can prepend other values.")
195
196 if isinstance(value, list):
197 self._value = value + self._value
198 else:
199 self._value.insert(0, value)
200
201 def compose_conf_value(self, other):
202 """
203 self has precedence, the "other" will add/append if possible and not conflicting, but
204 self mandates what to do. If self has define(), without placeholder, that will remain.
205 :type other: _ConfValue
206 """
207 v_type = self._value_type
208 o_type = other._value_type
209 if v_type is list and o_type is list:
210 try:
211 index = self._value.index(_ConfVarPlaceHolder)
212 except ValueError: # It doesn't have placeholder
213 pass
214 else:
215 new_value = self._value[:] # do a copy
216 new_value[index:index + 1] = other._value # replace the placeholder
217 self._value = new_value
218 elif v_type is dict and o_type is dict:
219 if self._update:
220 # only if the current one is marked as "*=" update, otherwise it remains
221 # as this is a "compose" operation, self has priority, it is the one updating
222 new_value = other._value.copy()
223 new_value.update(self._value)
224 self._value = new_value
225 elif self._value is None or other._value is None:
226 # It means any of those values were an "unset" so doing nothing because we don't
227 # really know the original value type
228 pass
229 elif o_type != v_type:
230 raise ConanException("It's not possible to compose {} values "
231 "and {} ones.".format(v_type.__name__, o_type.__name__))
232 # TODO: In case of any other object types?
233
234 def set_relative_base_folder(self, folder):
235 if not self._path:
236 return
237 if isinstance(self._value, list):
238 self._value = [os.path.join(folder, v) if v != _ConfVarPlaceHolder else v
239 for v in self._value]
240 if isinstance(self._value, dict):
241 self._value = {k: os.path.join(folder, v) for k, v in self._value.items()}
242 elif isinstance(self._value, str):
243 self._value = os.path.join(folder, self._value)
244
245
246 class Conf:
247
248 # Putting some default expressions to check that any value could be false
249 boolean_false_expressions = ("0", '"0"', "false", '"false"', "off")
250
251 def __init__(self):
252 # It being ordered allows for Windows case-insensitive composition
253 self._values = OrderedDict() # {var_name: [] of values, including separators}
254
255 def __bool__(self):
256 return bool(self._values)
257
258 def __repr__(self):
259 return "Conf: " + repr(self._values)
260
261 def __eq__(self, other):
262 """
263 :type other: Conf
264 """
265 return other._values == self._values
266
267 def validate(self):
268 for conf in self._values:
269 if conf.startswith("tools") or conf.startswith("core"):
270 if conf not in BUILT_IN_CONFS:
271 raise ConanException(f"Unknown conf '{conf}'. Use 'conan config list' to "
272 "display existing configurations")
273
274 def items(self):
275 # FIXME: Keeping backward compatibility
276 for k, v in self._values.items():
277 yield k, v.value
278
279 def get(self, conf_name, default=None, check_type=None):
280 """
281 Get all the values of the given configuration name.
282
283 :param conf_name: Name of the configuration.
284 :param default: Default value in case of conf does not have the conf_name key.
285 :param check_type: Check the conf type(value) is the same as the given by this param.
286 There are two default smart conversions for bool and str types.
287 """
288         # Skip this check only for user.* configurations
289 if USER_CONF_PATTERN.match(conf_name) is None and conf_name not in BUILT_IN_CONFS:
290 raise ConanException(f"[conf] '{conf_name}' does not exist in configuration list. "
291 f" Run 'conan config list' to see all the available confs.")
292
293 conf_value = self._values.get(conf_name)
294 if conf_value:
295 v = conf_value.value
296 # Some smart conversions
297 if check_type is bool and not isinstance(v, bool):
298 # Perhaps, user has introduced a "false", "0" or even "off"
299 return str(v).lower() not in Conf.boolean_false_expressions
300 elif check_type is str and not isinstance(v, str):
301 return str(v)
302 elif v is None: # value was unset
303 return default
304 elif check_type is not None and not isinstance(v, check_type):
305 raise ConanException(f"[conf] {conf_name} must be a "
306 f"{check_type.__name__}-like object. The value '{v}' "
307 f"introduced is a {type(v).__name__} object")
308 return v
309 else:
310 return default
311
312 def pop(self, conf_name, default=None):
313 """
314 Remove the given configuration, returning its value.
315
316 :param conf_name: Name of the configuration.
317 :param default: Default value to return in case the configuration doesn't exist.
318 :return:
319 """
320 value = self.get(conf_name, default=default)
321 self._values.pop(conf_name, None)
322 return value
323
324 def show(self, fnpattern, pattern=""):
325 return {key: self.get(key)
326 for key in self._values.keys()
327 if fnmatch.fnmatch(pattern + key, fnpattern)}
328
329 def copy(self):
330 c = Conf()
331 c._values = self._values.copy()
332 return c
333
334 def dumps(self):
335 """
336 Returns a string with the format ``name=conf-value``
337 """
338 return "\n".join([v.dumps() for v in reversed(self._values.values())])
339
340 def serialize(self):
341 """
342 Returns a dict-like object, e.g., ``{"tools.xxxx": "value1"}``
343 """
344 ret = {}
345 for v in self._values.values():
346 ret.update(v.serialize())
347 return ret
348
349 def define(self, name, value):
350 """
351 Define a value for the given configuration name.
352
353 :param name: Name of the configuration.
354 :param value: Value of the configuration.
355 """
356 self._values[name] = _ConfValue(name, value)
357
358 def define_path(self, name, value):
359 self._values[name] = _ConfValue(name, value, path=True)
360
361 def unset(self, name):
362 """
363 Clears the variable, equivalent to a unset or set XXX=
364
365 :param name: Name of the configuration.
366 """
367 self._values[name] = _ConfValue(name, None)
368
369 def update(self, name, value):
370 """
371 Update the value to the given configuration name.
372
373 :param name: Name of the configuration.
374 :param value: Value of the configuration.
375 """
376 # Placeholder trick is not good for dict update, so we need to explicitly update=True
377 conf_value = _ConfValue(name, {}, update=True)
378 self._values.setdefault(name, conf_value).update(value)
379
380 def update_path(self, name, value):
381 conf_value = _ConfValue(name, {}, path=True, update=True)
382 self._values.setdefault(name, conf_value).update(value)
383
384 def append(self, name, value):
385 """
386 Append a value to the given configuration name.
387
388 :param name: Name of the configuration.
389 :param value: Value to append.
390 """
391 conf_value = _ConfValue(name, [_ConfVarPlaceHolder])
392 self._values.setdefault(name, conf_value).append(value)
393
394 def append_path(self, name, value):
395 conf_value = _ConfValue(name, [_ConfVarPlaceHolder], path=True)
396 self._values.setdefault(name, conf_value).append(value)
397
398 def prepend(self, name, value):
399 """
400 Prepend a value to the given configuration name.
401
402 :param name: Name of the configuration.
403 :param value: Value to prepend.
404 """
405 conf_value = _ConfValue(name, [_ConfVarPlaceHolder])
406 self._values.setdefault(name, conf_value).prepend(value)
407
408 def prepend_path(self, name, value):
409 conf_value = _ConfValue(name, [_ConfVarPlaceHolder], path=True)
410 self._values.setdefault(name, conf_value).prepend(value)
411
412 def remove(self, name, value):
413 """
414 Remove a value from the given configuration name.
415
416 :param name: Name of the configuration.
417 :param value: Value to remove.
418 """
419 conf_value = self._values.get(name)
420 if conf_value:
421 conf_value.remove(value)
422 else:
423 raise ConanException("Conf {} does not exist.".format(name))
424
425 def compose_conf(self, other):
426 """
427 :param other: other has less priority than current one
428 :type other: Conf
429 """
430 for k, v in other._values.items():
431 existing = self._values.get(k)
432 if existing is None:
433 self._values[k] = v.copy()
434 else:
435 existing.compose_conf_value(v)
436 return self
437
438 def filter_user_modules(self):
439 result = Conf()
440 for k, v in self._values.items():
441 if _is_profile_module(k):
442 result._values[k] = v
443 return result
444
445 def copy_conaninfo_conf(self):
446 """
447 Get a new `Conf()` object with all the configurations required by the consumer
448 to be included in the final `ConanInfo().package_id()` computation. For instance, let's
449 suppose that we have this Conan `profile`:
450
451 ```
452 ...
453 [conf]
454 tools.info.package_id:confs=["tools.build:cxxflags", "tools.build:cflags"]
455 tools.build:cxxflags=["flag1xx"]
456 tools.build:cflags=["flag1"]
457 tools.build:defines=["DEF1"]
458 ...
459
460 Then, the resulting `Conf()` will have only these configuration lines:
461
462 tools.build:cxxflags=["flag1xx"]
463 tools.build:cflags=["flag1"]
464 ```
465
466 :return: a new `< Conf object >` with the configuration selected by `tools.info.package_id:confs`.
467 """
468 result = Conf()
469 # Reading the list of all the configurations selected by the user to use for the package_id
470 package_id_confs = self.get("tools.info.package_id:confs", default=[], check_type=list)
471 for conf_name in package_id_confs:
472 value = self.get(conf_name)
473 # Pruning any empty values, those should not affect package ID
474 if value:
475 result.define(conf_name, value)
476 return result
477
478 def set_relative_base_folder(self, folder):
479 for v in self._values.values():
480 v.set_relative_base_folder(folder)
481
482
483 class ConfDefinition:
484
485 # Order is important, "define" must be latest
486 actions = (("+=", "append"), ("=+", "prepend"),
487 ("=!", "unset"), ("*=", "update"), ("=", "define"))
488
489 def __init__(self):
490 self._pattern_confs = OrderedDict()
491
492 def __repr__(self):
493 return "ConfDefinition: " + repr(self._pattern_confs)
494
495 def __bool__(self):
496 return bool(self._pattern_confs)
497
498 def get(self, conf_name, default=None, check_type=None):
499 """
500 Get the value of the conf name requested and convert it to the [type]-like passed.
501 """
502 pattern, name = self._split_pattern_name(conf_name)
503 return self._pattern_confs.get(pattern, Conf()).get(name, default=default,
504 check_type=check_type)
505
506 def show(self, fnpattern):
507 """
508 Get the value of the confs that match the requested pattern
509 """
510 result = {}
511
512 for patter_key, patter_conf in self._pattern_confs.items():
513 if patter_key is None:
514 patter_key = ""
515 else:
516 patter_key += ":"
517
518 pattern_values = patter_conf.show(fnpattern, patter_key)
519 result.update({patter_key + pattern_subkey: pattern_subvalue
520 for pattern_subkey, pattern_subvalue in pattern_values.items()})
521
522 return result
523
524 def pop(self, conf_name, default=None):
525 """
526 Remove the conf name passed.
527 """
528 pattern, name = self._split_pattern_name(conf_name)
529 return self._pattern_confs.get(pattern, Conf()).pop(name, default=default)
530
531 @staticmethod
532 def _split_pattern_name(pattern_name):
533 if pattern_name.count(":") >= 2:
534 pattern, name = pattern_name.split(":", 1)
535 else:
536 pattern, name = None, pattern_name
537 return pattern, name
538
539 def get_conanfile_conf(self, ref, is_consumer=False):
540 """ computes package-specific Conf
541 it is only called when conanfile.buildenv is called
542 the last one found in the profile file has top priority
543 """
544 result = Conf()
545 for pattern, conf in self._pattern_confs.items():
546 if pattern is None or ref_matches(ref, pattern, is_consumer):
547 # Latest declared has priority, copy() necessary to not destroy data
548 result = conf.copy().compose_conf(result)
549 return result
550
551 def update_conf_definition(self, other):
552 """
553 :type other: ConfDefinition
554 :param other: The argument profile has priority/precedence over the current one.
555 """
556 for pattern, conf in other._pattern_confs.items():
557 self._update_conf_definition(pattern, conf)
558
559 def _update_conf_definition(self, pattern, conf):
560 existing = self._pattern_confs.get(pattern)
561 if existing:
562 self._pattern_confs[pattern] = conf.compose_conf(existing)
563 else:
564 self._pattern_confs[pattern] = conf
565
566 def rebase_conf_definition(self, other):
567 """
568 for taking the new global.conf and composing with the profile [conf]
569 :type other: ConfDefinition
570 """
571 for pattern, conf in other._pattern_confs.items():
572 new_conf = conf.filter_user_modules() # Creates a copy, filtered
573 existing = self._pattern_confs.get(pattern)
574 if existing:
575 existing.compose_conf(new_conf)
576 else:
577 self._pattern_confs[pattern] = new_conf
578
579 def update(self, key, value, profile=False, method="define"):
580 """
581 Define/append/prepend/unset any Conf line
582 >> update("tools.build:verbosity", "verbose")
583 """
584 pattern, name = self._split_pattern_name(key)
585
586 if not _is_profile_module(name):
587 if profile:
588 raise ConanException("[conf] '{}' not allowed in profiles".format(key))
589 if pattern is not None:
590 raise ConanException("Conf '{}' cannot have a package pattern".format(key))
591
592 # strip whitespaces before/after =
593 # values are not strip() unless they are a path, to preserve potential whitespaces
594 name = name.strip()
595
596 # When loading from profile file, latest line has priority
597 conf = Conf()
598 if method == "unset":
599 conf.unset(name)
600 else:
601 getattr(conf, method)(name, value)
602 # Update
603 self._update_conf_definition(pattern, conf)
604
605 def dumps(self):
606 result = []
607 for pattern, conf in self._pattern_confs.items():
608 if pattern is None:
609 result.append(conf.dumps())
610 else:
611 result.append("\n".join("{}:{}".format(pattern, line) if line else ""
612 for line in conf.dumps().splitlines()))
613 if result:
614 result.append("")
615 return "\n".join(result)
616
617 def serialize(self):
618 result = {}
619 for pattern, conf in self._pattern_confs.items():
620 if pattern is None:
621 result.update(conf.serialize())
622 else:
623                 for k, v in conf.serialize().items():
624 result[f"{pattern}:{k}"] = v
625 return result
626
627 @staticmethod
628 def _get_evaluated_value(__v):
629 """
630 Function to avoid eval() catching local variables
631 """
632 try:
633 # Isolated eval
634 parsed_value = eval(__v)
635 if isinstance(parsed_value, str): # xxx:xxx = "my string"
636 # Let's respect the quotes introduced by any user
637 parsed_value = '"{}"'.format(parsed_value)
638 except:
639 # It means eval() failed because of a string without quotes
640 parsed_value = __v.strip()
641 return parsed_value
642
643 def loads(self, text, profile=False):
644 self._pattern_confs = {}
645
646 for line in text.splitlines():
647 line = line.strip()
648 if not line or line.startswith("#"):
649 continue
650 for op, method in ConfDefinition.actions:
651 tokens = line.split(op, 1)
652 if len(tokens) != 2:
653 continue
654 pattern_name, value = tokens
655 parsed_value = ConfDefinition._get_evaluated_value(value)
656 self.update(pattern_name, parsed_value, profile=profile, method=method)
657 break
658 else:
659 raise ConanException("Bad conf definition: {}".format(line))
660
661 def validate(self):
662 for conf in self._pattern_confs.values():
663 conf.validate()
664
[end of conans/model/conf.py]
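For orientation, here is a minimal usage sketch of the `Conf` / `ConfDefinition` API listed above. It is illustrative only and not part of the repository: it assumes the Conan sources are importable, and it reuses built-in conf names that already appear in the `copy_conaninfo_conf` docstring.

```python
# Illustrative sketch only; assumes `conans` is importable from a Conan checkout.
from conans.model.conf import ConfDefinition

profile_conf = ConfDefinition()
profile_conf.loads(
    "tools.build:cxxflags=['-O2']\n"   # "=" is handled by define()
    "tools.build:cflags+=['-Wall']\n"  # "+=" is handled by append()
)

global_conf = ConfDefinition()
global_conf.loads("tools.build:cxxflags=['-O0']\n")

# The argument of update_conf_definition has priority, so the profile value
# for cxxflags should survive the composition with global.conf.
global_conf.update_conf_definition(profile_conf)
print(global_conf.get("tools.build:cxxflags", check_type=list))  # expected: ['-O2']
print(global_conf.dumps())  # the composed "name=value" lines
```

The `+=` line goes through the `_ConfVarPlaceHolder` mechanism, which is what marks where a lower-priority value would be spliced in during a later composition.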
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: conan-io/conan
base_commit: 0c1624d2dd3b0278c1cf6f66f8dcc7bd1aa9ec48
problem_statement:
Take a look into normalizing the log levels of conan
Currently, it's not clear whether verbose or notice/status is the default one, and if their order is correct. We should take a look into normalizing all the log levels to be consistent across
created_at: 2023-04-04T13:31:15Z
patch:
<patch>
diff --git a/conan/cli/command.py b/conan/cli/command.py
--- a/conan/cli/command.py
+++ b/conan/cli/command.py
@@ -117,8 +117,8 @@ def _process_log_level_args(args):
"warning": LEVEL_WARNING, # -vwaring 60
"notice": LEVEL_NOTICE, # -vnotice 50
"status": LEVEL_STATUS, # -vstatus 40
+ None: LEVEL_STATUS, # -v 40
"verbose": LEVEL_VERBOSE, # -vverbose 30
- None: LEVEL_VERBOSE, # -v 30
"debug": LEVEL_DEBUG, # -vdebug 20
"v": LEVEL_DEBUG, # -vv 20
"trace": LEVEL_TRACE, # -vtrace 10
</patch>
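To make the behavioural change concrete, here is a small, self-contained sketch of the verbosity mapping that the patch adjusts. The numeric values and the `-v` argument names are taken from the comments in the diff above; the `is_shown` helper is only one way to read those numbers, not Conan's actual output code.

```python
# Levels exactly as in the diff comments; lower numbers mean more verbose output.
LEVEL_WARNING = 60
LEVEL_NOTICE = 50
LEVEL_STATUS = 40
LEVEL_VERBOSE = 30
LEVEL_DEBUG = 20
LEVEL_TRACE = 10

# Partial mirror of _process_log_level_args after the patch (only the entries
# visible in the hunk above): a bare -v now selects STATUS instead of VERBOSE.
VERBOSITY_TO_LEVEL = {
    "warning": LEVEL_WARNING,
    "notice": LEVEL_NOTICE,
    "status": LEVEL_STATUS,
    None: LEVEL_STATUS,        # plain -v
    "verbose": LEVEL_VERBOSE,  # -vverbose
    "debug": LEVEL_DEBUG,      # -vdebug
    "v": LEVEL_DEBUG,          # -vv
    "trace": LEVEL_TRACE,      # -vtrace
}

def is_shown(message_level, verbosity=None):
    # A message is shown when its level is at or above the selected threshold.
    return message_level >= VERBOSITY_TO_LEVEL[verbosity]

print(is_shown(LEVEL_VERBOSE))             # False: below the threshold picked by a bare -v
print(is_shown(LEVEL_VERBOSE, "verbose"))  # True: shown once -vverbose is requested
```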
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pypa__pip-9775
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New resolver: Failure when package only specified with extras is not available from index
<!--
Please provide as much information as you can about your failure, so that we can understand the root cause.
Try if your issue has been fixed in the in-development version of pip. Use the following command to install pip from master:
python -m pip install -U "pip @ https://github.com/pypa/pip/archive/master.zip"
-->
**What did you want to do?**
<!-- Include any inputs you gave to pip, for example:
* Package requirements: any CLI arguments and/or your requirements.txt file
* Already installed packages, outputted via `pip freeze`
-->
`pip install --use-feature=2020-resolver -r requirements.txt --no-deps`
`pip install --use-feature=2020-resolver -r requirements.txt`
requirements.txt only contains a list of packages to be installed in editable mode, with some depending on each other.
This is in a fresh miniconda environment and occurred on both 20.2.2 and master.
I ran pip using `--no-deps` first since in my experience, installing multiple editable mode packages with dependencies on each other fails otherwise. However, running just the normal install command directly in a fresh environment still fails with the new resolver, as below.
```
ERROR: Could not find a version that satisfies the requirement azureml-dataset-runtime[fuse]~=0.1.0.0 (from azureml-defaults)
ERROR: No matching distribution found for azureml-dataset-runtime[fuse]~=0.1.0.0
```
**Output**
This output is after running the 2nd pip install command, to actually install the package dependencies after installing the editable mode packages using `--no-deps`. Output has been slightly edited to remove full file paths.
```
ERROR: Cannot install azureml-dataset-runtime 0.1.0.0 (from src\azureml-dataset-runtime), -r requirements.txt (line 9), -r requirements.txt (line 16) and azureml-dataset-runtime[fuse] 0.1.0.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested azureml-dataset-runtime 0.1.0.0 (from src\azureml-dataset-runtime)
azureml-automl-core 0.1.0.0 depends on azureml-dataset-runtime~=0.1.0.0
azureml-train-automl-client 0.1.0.0 depends on azureml-dataset-runtime~=0.1.0.0
azureml-dataset-runtime[fuse] 0.1.0.0 depends on azureml-dataset-runtime 0.1.0.0 (Installed)
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
**Additional information**
<!--
It would be great if you could also include your dependency tree. For this you can use pipdeptree: https://pypi.org/project/pipdeptree/
For users installing packages from a private repository or local directory, please try your best to describe your setup. We'd like to understand how to reproduce the error locally, so would need (at a minimum) a description of the packages you are trying to install, and a list of dependencies for each package.
-->
The requirements file looks like this (all below packages should be available from pypi as well):
```
-e "src\azureml-core\."
-e "src\azureml-dataset-runtime\."
-e "src\azureml-defaults\."
-e "src\azureml-telemetry\."
-e "src\azureml-opendatasets\."
-e "src\azureml-pipeline\."
-e "src\azureml-pipeline-core\."
-e "src\azureml-pipeline-steps\."
-e "src\azureml-automl-core\."
-e "src\azureml-automl-runtime\."
-e "src\azureml-interpret\."
-e "src\azureml-explain-model\."
-e "src\azureml-train-restclients-hyperdrive\."
-e "src\azureml-train-core\."
-e "src\azureml-train\."
-e "src\azureml-train-automl-client\."
-e "src\azureml-train-automl-runtime\."
```
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
16
17 We release updates regularly, with a new version every 3 months. Find more details in our documentation:
18
19 * `Release notes`_
20 * `Release process`_
21
22 In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right.
23
24 **Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3.
25
26 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms:
27
28 * `Issue tracking`_
29 * `Discourse channel`_
30 * `User IRC`_
31
32 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms:
33
34 * `GitHub page`_
35 * `Development documentation`_
36 * `Development mailing list`_
37 * `Development IRC`_
38
39 Code of Conduct
40 ---------------
41
42 Everyone interacting in the pip project's codebases, issue trackers, chat
43 rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.
44
45 .. _package installer: https://packaging.python.org/guides/tool-recommendations/
46 .. _Python Package Index: https://pypi.org
47 .. _Installation: https://pip.pypa.io/en/stable/installing.html
48 .. _Usage: https://pip.pypa.io/en/stable/
49 .. _Release notes: https://pip.pypa.io/en/stable/news.html
50 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/
51 .. _GitHub page: https://github.com/pypa/pip
52 .. _Development documentation: https://pip.pypa.io/en/latest/development
53 .. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html
54 .. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020
55 .. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html
56 .. _Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support
57 .. _Issue tracking: https://github.com/pypa/pip/issues
58 .. _Discourse channel: https://discuss.python.org/c/packaging
59 .. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/
60 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa
61 .. _Development IRC: https://webchat.freenode.net/?channels=%23pypa-dev
62 .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md
63
[end of README.rst]
[start of src/pip/_internal/cli/cmdoptions.py]
1 """
2 shared options and groups
3
4 The principle here is to define options once, but *not* instantiate them
5 globally. One reason being that options with action='append' can carry state
6 between parses. pip parses general options twice internally, and shouldn't
7 pass on state. To be consistent, all options will follow this design.
8 """
9
10 # The following comment should be removed at some point in the future.
11 # mypy: strict-optional=False
12
13 import os
14 import textwrap
15 import warnings
16 from functools import partial
17 from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values
18 from textwrap import dedent
19 from typing import Any, Callable, Dict, Optional, Tuple
20
21 from pip._vendor.packaging.utils import canonicalize_name
22
23 from pip._internal.cli.parser import ConfigOptionParser
24 from pip._internal.cli.progress_bars import BAR_TYPES
25 from pip._internal.exceptions import CommandError
26 from pip._internal.locations import USER_CACHE_DIR, get_src_prefix
27 from pip._internal.models.format_control import FormatControl
28 from pip._internal.models.index import PyPI
29 from pip._internal.models.target_python import TargetPython
30 from pip._internal.utils.hashes import STRONG_HASHES
31 from pip._internal.utils.misc import strtobool
32
33
34 def raise_option_error(parser, option, msg):
35 # type: (OptionParser, Option, str) -> None
36 """
37 Raise an option parsing error using parser.error().
38
39 Args:
40 parser: an OptionParser instance.
41 option: an Option instance.
42 msg: the error text.
43 """
44 msg = f"{option} error: {msg}"
45 msg = textwrap.fill(" ".join(msg.split()))
46 parser.error(msg)
47
48
49 def make_option_group(group, parser):
50 # type: (Dict[str, Any], ConfigOptionParser) -> OptionGroup
51 """
52 Return an OptionGroup object
53 group -- assumed to be dict with 'name' and 'options' keys
54 parser -- an optparse Parser
55 """
56 option_group = OptionGroup(parser, group["name"])
57 for option in group["options"]:
58 option_group.add_option(option())
59 return option_group
60
61
62 def check_install_build_global(options, check_options=None):
63 # type: (Values, Optional[Values]) -> None
64 """Disable wheels if per-setup.py call options are set.
65
66 :param options: The OptionParser options to update.
67 :param check_options: The options to check, if not supplied defaults to
68 options.
69 """
70 if check_options is None:
71 check_options = options
72
73 def getname(n):
74 # type: (str) -> Optional[Any]
75 return getattr(check_options, n, None)
76
77 names = ["build_options", "global_options", "install_options"]
78 if any(map(getname, names)):
79 control = options.format_control
80 control.disallow_binaries()
81 warnings.warn(
82 "Disabling all use of wheels due to the use of --build-option "
83 "/ --global-option / --install-option.",
84 stacklevel=2,
85 )
86
87
88 def check_dist_restriction(options, check_target=False):
89 # type: (Values, bool) -> None
90 """Function for determining if custom platform options are allowed.
91
92 :param options: The OptionParser options.
93 :param check_target: Whether or not to check if --target is being used.
94 """
95 dist_restriction_set = any(
96 [
97 options.python_version,
98 options.platforms,
99 options.abis,
100 options.implementation,
101 ]
102 )
103
104 binary_only = FormatControl(set(), {":all:"})
105 sdist_dependencies_allowed = (
106 options.format_control != binary_only and not options.ignore_dependencies
107 )
108
109 # Installations or downloads using dist restrictions must not combine
110 # source distributions and dist-specific wheels, as they are not
111 # guaranteed to be locally compatible.
112 if dist_restriction_set and sdist_dependencies_allowed:
113 raise CommandError(
114 "When restricting platform and interpreter constraints using "
115 "--python-version, --platform, --abi, or --implementation, "
116 "either --no-deps must be set, or --only-binary=:all: must be "
117 "set and --no-binary must not be set (or must be set to "
118 ":none:)."
119 )
120
121 if check_target:
122 if dist_restriction_set and not options.target_dir:
123 raise CommandError(
124 "Can not use any platform or abi specific options unless "
125 "installing via '--target'"
126 )
127
128
129 def _path_option_check(option, opt, value):
130 # type: (Option, str, str) -> str
131 return os.path.expanduser(value)
132
133
134 def _package_name_option_check(option, opt, value):
135 # type: (Option, str, str) -> str
136 return canonicalize_name(value)
137
138
139 class PipOption(Option):
140 TYPES = Option.TYPES + ("path", "package_name")
141 TYPE_CHECKER = Option.TYPE_CHECKER.copy()
142 TYPE_CHECKER["package_name"] = _package_name_option_check
143 TYPE_CHECKER["path"] = _path_option_check
144
145
146 ###########
147 # options #
148 ###########
149
150 help_ = partial(
151 Option,
152 "-h",
153 "--help",
154 dest="help",
155 action="help",
156 help="Show help.",
157 ) # type: Callable[..., Option]
158
159 isolated_mode = partial(
160 Option,
161 "--isolated",
162 dest="isolated_mode",
163 action="store_true",
164 default=False,
165 help=(
166 "Run pip in an isolated mode, ignoring environment variables and user "
167 "configuration."
168 ),
169 ) # type: Callable[..., Option]
170
171 require_virtualenv = partial(
172 Option,
173 # Run only if inside a virtualenv, bail if not.
174 "--require-virtualenv",
175 "--require-venv",
176 dest="require_venv",
177 action="store_true",
178 default=False,
179 help=SUPPRESS_HELP,
180 ) # type: Callable[..., Option]
181
182 verbose = partial(
183 Option,
184 "-v",
185 "--verbose",
186 dest="verbose",
187 action="count",
188 default=0,
189 help="Give more output. Option is additive, and can be used up to 3 times.",
190 ) # type: Callable[..., Option]
191
192 no_color = partial(
193 Option,
194 "--no-color",
195 dest="no_color",
196 action="store_true",
197 default=False,
198 help="Suppress colored output.",
199 ) # type: Callable[..., Option]
200
201 version = partial(
202 Option,
203 "-V",
204 "--version",
205 dest="version",
206 action="store_true",
207 help="Show version and exit.",
208 ) # type: Callable[..., Option]
209
210 quiet = partial(
211 Option,
212 "-q",
213 "--quiet",
214 dest="quiet",
215 action="count",
216 default=0,
217 help=(
218 "Give less output. Option is additive, and can be used up to 3"
219 " times (corresponding to WARNING, ERROR, and CRITICAL logging"
220 " levels)."
221 ),
222 ) # type: Callable[..., Option]
223
224 progress_bar = partial(
225 Option,
226 "--progress-bar",
227 dest="progress_bar",
228 type="choice",
229 choices=list(BAR_TYPES.keys()),
230 default="on",
231 help=(
232 "Specify type of progress to be displayed ["
233 + "|".join(BAR_TYPES.keys())
234 + "] (default: %default)"
235 ),
236 ) # type: Callable[..., Option]
237
238 log = partial(
239 PipOption,
240 "--log",
241 "--log-file",
242 "--local-log",
243 dest="log",
244 metavar="path",
245 type="path",
246 help="Path to a verbose appending log.",
247 ) # type: Callable[..., Option]
248
249 no_input = partial(
250 Option,
251 # Don't ask for input
252 "--no-input",
253 dest="no_input",
254 action="store_true",
255 default=False,
256 help="Disable prompting for input.",
257 ) # type: Callable[..., Option]
258
259 proxy = partial(
260 Option,
261 "--proxy",
262 dest="proxy",
263 type="str",
264 default="",
265 help="Specify a proxy in the form [user:passwd@]proxy.server:port.",
266 ) # type: Callable[..., Option]
267
268 retries = partial(
269 Option,
270 "--retries",
271 dest="retries",
272 type="int",
273 default=5,
274 help="Maximum number of retries each connection should attempt "
275 "(default %default times).",
276 ) # type: Callable[..., Option]
277
278 timeout = partial(
279 Option,
280 "--timeout",
281 "--default-timeout",
282 metavar="sec",
283 dest="timeout",
284 type="float",
285 default=15,
286 help="Set the socket timeout (default %default seconds).",
287 ) # type: Callable[..., Option]
288
289
290 def exists_action():
291 # type: () -> Option
292 return Option(
293 # Option when path already exist
294 "--exists-action",
295 dest="exists_action",
296 type="choice",
297 choices=["s", "i", "w", "b", "a"],
298 default=[],
299 action="append",
300 metavar="action",
301 help="Default action when a path already exists: "
302 "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.",
303 )
304
305
306 cert = partial(
307 PipOption,
308 "--cert",
309 dest="cert",
310 type="path",
311 metavar="path",
312 help=(
313 "Path to PEM-encoded CA certificate bundle. "
314 "If provided, overrides the default. "
315 "See 'SSL Certificate Verification' in pip documentation "
316 "for more information."
317 ),
318 ) # type: Callable[..., Option]
319
320 client_cert = partial(
321 PipOption,
322 "--client-cert",
323 dest="client_cert",
324 type="path",
325 default=None,
326 metavar="path",
327 help="Path to SSL client certificate, a single file containing the "
328 "private key and the certificate in PEM format.",
329 ) # type: Callable[..., Option]
330
331 index_url = partial(
332 Option,
333 "-i",
334 "--index-url",
335 "--pypi-url",
336 dest="index_url",
337 metavar="URL",
338 default=PyPI.simple_url,
339 help="Base URL of the Python Package Index (default %default). "
340 "This should point to a repository compliant with PEP 503 "
341 "(the simple repository API) or a local directory laid out "
342 "in the same format.",
343 ) # type: Callable[..., Option]
344
345
346 def extra_index_url():
347 # type: () -> Option
348 return Option(
349 "--extra-index-url",
350 dest="extra_index_urls",
351 metavar="URL",
352 action="append",
353 default=[],
354 help="Extra URLs of package indexes to use in addition to "
355 "--index-url. Should follow the same rules as "
356 "--index-url.",
357 )
358
359
360 no_index = partial(
361 Option,
362 "--no-index",
363 dest="no_index",
364 action="store_true",
365 default=False,
366 help="Ignore package index (only looking at --find-links URLs instead).",
367 ) # type: Callable[..., Option]
368
369
370 def find_links():
371 # type: () -> Option
372 return Option(
373 "-f",
374 "--find-links",
375 dest="find_links",
376 action="append",
377 default=[],
378 metavar="url",
379 help="If a URL or path to an html file, then parse for links to "
380 "archives such as sdist (.tar.gz) or wheel (.whl) files. "
381 "If a local path or file:// URL that's a directory, "
382 "then look for archives in the directory listing. "
383 "Links to VCS project URLs are not supported.",
384 )
385
386
387 def trusted_host():
388 # type: () -> Option
389 return Option(
390 "--trusted-host",
391 dest="trusted_hosts",
392 action="append",
393 metavar="HOSTNAME",
394 default=[],
395 help="Mark this host or host:port pair as trusted, even though it "
396 "does not have valid or any HTTPS.",
397 )
398
399
400 def constraints():
401 # type: () -> Option
402 return Option(
403 "-c",
404 "--constraint",
405 dest="constraints",
406 action="append",
407 default=[],
408 metavar="file",
409 help="Constrain versions using the given constraints file. "
410 "This option can be used multiple times.",
411 )
412
413
414 def requirements():
415 # type: () -> Option
416 return Option(
417 "-r",
418 "--requirement",
419 dest="requirements",
420 action="append",
421 default=[],
422 metavar="file",
423 help="Install from the given requirements file. "
424 "This option can be used multiple times.",
425 )
426
427
428 def editable():
429 # type: () -> Option
430 return Option(
431 "-e",
432 "--editable",
433 dest="editables",
434 action="append",
435 default=[],
436 metavar="path/url",
437 help=(
438 "Install a project in editable mode (i.e. setuptools "
439 '"develop mode") from a local project path or a VCS url.'
440 ),
441 )
442
443
444 def _handle_src(option, opt_str, value, parser):
445 # type: (Option, str, str, OptionParser) -> None
446 value = os.path.abspath(value)
447 setattr(parser.values, option.dest, value)
448
449
450 src = partial(
451 PipOption,
452 "--src",
453 "--source",
454 "--source-dir",
455 "--source-directory",
456 dest="src_dir",
457 type="path",
458 metavar="dir",
459 default=get_src_prefix(),
460 action="callback",
461 callback=_handle_src,
462 help="Directory to check out editable projects into. "
463 'The default in a virtualenv is "<venv path>/src". '
464 'The default for global installs is "<current dir>/src".',
465 ) # type: Callable[..., Option]
466
467
468 def _get_format_control(values, option):
469 # type: (Values, Option) -> Any
470 """Get a format_control object."""
471 return getattr(values, option.dest)
472
473
474 def _handle_no_binary(option, opt_str, value, parser):
475 # type: (Option, str, str, OptionParser) -> None
476 existing = _get_format_control(parser.values, option)
477 FormatControl.handle_mutual_excludes(
478 value,
479 existing.no_binary,
480 existing.only_binary,
481 )
482
483
484 def _handle_only_binary(option, opt_str, value, parser):
485 # type: (Option, str, str, OptionParser) -> None
486 existing = _get_format_control(parser.values, option)
487 FormatControl.handle_mutual_excludes(
488 value,
489 existing.only_binary,
490 existing.no_binary,
491 )
492
493
494 def no_binary():
495 # type: () -> Option
496 format_control = FormatControl(set(), set())
497 return Option(
498 "--no-binary",
499 dest="format_control",
500 action="callback",
501 callback=_handle_no_binary,
502 type="str",
503 default=format_control,
504 help="Do not use binary packages. Can be supplied multiple times, and "
505 'each time adds to the existing value. Accepts either ":all:" to '
506 'disable all binary packages, ":none:" to empty the set (notice '
507 "the colons), or one or more package names with commas between "
508 "them (no colons). Note that some packages are tricky to compile "
509 "and may fail to install when this option is used on them.",
510 )
511
512
513 def only_binary():
514 # type: () -> Option
515 format_control = FormatControl(set(), set())
516 return Option(
517 "--only-binary",
518 dest="format_control",
519 action="callback",
520 callback=_handle_only_binary,
521 type="str",
522 default=format_control,
523 help="Do not use source packages. Can be supplied multiple times, and "
524 'each time adds to the existing value. Accepts either ":all:" to '
525 'disable all source packages, ":none:" to empty the set, or one '
526 "or more package names with commas between them. Packages "
527 "without binary distributions will fail to install when this "
528 "option is used on them.",
529 )
530
531
532 platforms = partial(
533 Option,
534 "--platform",
535 dest="platforms",
536 metavar="platform",
537 action="append",
538 default=None,
539 help=(
540 "Only use wheels compatible with <platform>. Defaults to the "
541 "platform of the running system. Use this option multiple times to "
542 "specify multiple platforms supported by the target interpreter."
543 ),
544 ) # type: Callable[..., Option]
545
546
547 # This was made a separate function for unit-testing purposes.
548 def _convert_python_version(value):
549 # type: (str) -> Tuple[Tuple[int, ...], Optional[str]]
550 """
551 Convert a version string like "3", "37", or "3.7.3" into a tuple of ints.
552
553 :return: A 2-tuple (version_info, error_msg), where `error_msg` is
554 non-None if and only if there was a parsing error.
555 """
556 if not value:
557 # The empty string is the same as not providing a value.
558 return (None, None)
559
560 parts = value.split(".")
561 if len(parts) > 3:
562 return ((), "at most three version parts are allowed")
563
564 if len(parts) == 1:
565 # Then we are in the case of "3" or "37".
566 value = parts[0]
567 if len(value) > 1:
568 parts = [value[0], value[1:]]
569
570 try:
571 version_info = tuple(int(part) for part in parts)
572 except ValueError:
573 return ((), "each version part must be an integer")
574
575 return (version_info, None)
576
577
578 def _handle_python_version(option, opt_str, value, parser):
579 # type: (Option, str, str, OptionParser) -> None
580 """
581 Handle a provided --python-version value.
582 """
583 version_info, error_msg = _convert_python_version(value)
584 if error_msg is not None:
585 msg = "invalid --python-version value: {!r}: {}".format(
586 value,
587 error_msg,
588 )
589 raise_option_error(parser, option=option, msg=msg)
590
591 parser.values.python_version = version_info
592
593
594 python_version = partial(
595 Option,
596 "--python-version",
597 dest="python_version",
598 metavar="python_version",
599 action="callback",
600 callback=_handle_python_version,
601 type="str",
602 default=None,
603 help=dedent(
604 """\
605 The Python interpreter version to use for wheel and "Requires-Python"
606 compatibility checks. Defaults to a version derived from the running
607 interpreter. The version can be specified using up to three dot-separated
608 integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor
609 version can also be given as a string without dots (e.g. "37" for 3.7.0).
610 """
611 ),
612 ) # type: Callable[..., Option]
613
614
615 implementation = partial(
616 Option,
617 "--implementation",
618 dest="implementation",
619 metavar="implementation",
620 default=None,
621 help=(
622 "Only use wheels compatible with Python "
623 "implementation <implementation>, e.g. 'pp', 'jy', 'cp', "
624 " or 'ip'. If not specified, then the current "
625 "interpreter implementation is used. Use 'py' to force "
626 "implementation-agnostic wheels."
627 ),
628 ) # type: Callable[..., Option]
629
630
631 abis = partial(
632 Option,
633 "--abi",
634 dest="abis",
635 metavar="abi",
636 action="append",
637 default=None,
638 help=(
639 "Only use wheels compatible with Python abi <abi>, e.g. 'pypy_41'. "
640 "If not specified, then the current interpreter abi tag is used. "
641 "Use this option multiple times to specify multiple abis supported "
642 "by the target interpreter. Generally you will need to specify "
643 "--implementation, --platform, and --python-version when using this "
644 "option."
645 ),
646 ) # type: Callable[..., Option]
647
648
649 def add_target_python_options(cmd_opts):
650 # type: (OptionGroup) -> None
651 cmd_opts.add_option(platforms())
652 cmd_opts.add_option(python_version())
653 cmd_opts.add_option(implementation())
654 cmd_opts.add_option(abis())
655
656
657 def make_target_python(options):
658 # type: (Values) -> TargetPython
659 target_python = TargetPython(
660 platforms=options.platforms,
661 py_version_info=options.python_version,
662 abis=options.abis,
663 implementation=options.implementation,
664 )
665
666 return target_python
667
668
669 def prefer_binary():
670 # type: () -> Option
671 return Option(
672 "--prefer-binary",
673 dest="prefer_binary",
674 action="store_true",
675 default=False,
676 help="Prefer older binary packages over newer source packages.",
677 )
678
679
680 cache_dir = partial(
681 PipOption,
682 "--cache-dir",
683 dest="cache_dir",
684 default=USER_CACHE_DIR,
685 metavar="dir",
686 type="path",
687 help="Store the cache data in <dir>.",
688 ) # type: Callable[..., Option]
689
690
691 def _handle_no_cache_dir(option, opt, value, parser):
692 # type: (Option, str, str, OptionParser) -> None
693 """
694 Process a value provided for the --no-cache-dir option.
695
696 This is an optparse.Option callback for the --no-cache-dir option.
697 """
698 # The value argument will be None if --no-cache-dir is passed via the
699 # command-line, since the option doesn't accept arguments. However,
700 # the value can be non-None if the option is triggered e.g. by an
701 # environment variable, like PIP_NO_CACHE_DIR=true.
702 if value is not None:
703 # Then parse the string value to get argument error-checking.
704 try:
705 strtobool(value)
706 except ValueError as exc:
707 raise_option_error(parser, option=option, msg=str(exc))
708
709 # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
710 # converted to 0 (like "false" or "no") caused cache_dir to be disabled
711 # rather than enabled (logic would say the latter). Thus, we disable
712 # the cache directory not just on values that parse to True, but (for
713 # backwards compatibility reasons) also on values that parse to False.
714 # In other words, always set it to False if the option is provided in
715 # some (valid) form.
716 parser.values.cache_dir = False
717
718
719 no_cache = partial(
720 Option,
721 "--no-cache-dir",
722 dest="cache_dir",
723 action="callback",
724 callback=_handle_no_cache_dir,
725 help="Disable the cache.",
726 ) # type: Callable[..., Option]
727
728 no_deps = partial(
729 Option,
730 "--no-deps",
731 "--no-dependencies",
732 dest="ignore_dependencies",
733 action="store_true",
734 default=False,
735 help="Don't install package dependencies.",
736 ) # type: Callable[..., Option]
737
738 build_dir = partial(
739 PipOption,
740 "-b",
741 "--build",
742 "--build-dir",
743 "--build-directory",
744 dest="build_dir",
745 type="path",
746 metavar="dir",
747 help=SUPPRESS_HELP,
748 ) # type: Callable[..., Option]
749
750 ignore_requires_python = partial(
751 Option,
752 "--ignore-requires-python",
753 dest="ignore_requires_python",
754 action="store_true",
755 help="Ignore the Requires-Python information.",
756 ) # type: Callable[..., Option]
757
758 no_build_isolation = partial(
759 Option,
760 "--no-build-isolation",
761 dest="build_isolation",
762 action="store_false",
763 default=True,
764 help="Disable isolation when building a modern source distribution. "
765 "Build dependencies specified by PEP 518 must be already installed "
766 "if this option is used.",
767 ) # type: Callable[..., Option]
768
769
770 def _handle_no_use_pep517(option, opt, value, parser):
771 # type: (Option, str, str, OptionParser) -> None
772 """
773 Process a value provided for the --no-use-pep517 option.
774
775 This is an optparse.Option callback for the no_use_pep517 option.
776 """
777 # Since --no-use-pep517 doesn't accept arguments, the value argument
778 # will be None if --no-use-pep517 is passed via the command-line.
779 # However, the value can be non-None if the option is triggered e.g.
780 # by an environment variable, for example "PIP_NO_USE_PEP517=true".
781 if value is not None:
782 msg = """A value was passed for --no-use-pep517,
783 probably using either the PIP_NO_USE_PEP517 environment variable
784 or the "no-use-pep517" config file option. Use an appropriate value
785 of the PIP_USE_PEP517 environment variable or the "use-pep517"
786 config file option instead.
787 """
788 raise_option_error(parser, option=option, msg=msg)
789
790 # Otherwise, --no-use-pep517 was passed via the command-line.
791 parser.values.use_pep517 = False
792
793
794 use_pep517 = partial(
795 Option,
796 "--use-pep517",
797 dest="use_pep517",
798 action="store_true",
799 default=None,
800 help="Use PEP 517 for building source distributions "
801 "(use --no-use-pep517 to force legacy behaviour).",
802 ) # type: Any
803
804 no_use_pep517 = partial(
805 Option,
806 "--no-use-pep517",
807 dest="use_pep517",
808 action="callback",
809 callback=_handle_no_use_pep517,
810 default=None,
811 help=SUPPRESS_HELP,
812 ) # type: Any
813
814 install_options = partial(
815 Option,
816 "--install-option",
817 dest="install_options",
818 action="append",
819 metavar="options",
820 help="Extra arguments to be supplied to the setup.py install "
821 'command (use like --install-option="--install-scripts=/usr/local/'
822 'bin"). Use multiple --install-option options to pass multiple '
823 "options to setup.py install. If you are using an option with a "
824 "directory path, be sure to use absolute path.",
825 ) # type: Callable[..., Option]
826
827 build_options = partial(
828 Option,
829 "--build-option",
830 dest="build_options",
831 metavar="options",
832 action="append",
833 help="Extra arguments to be supplied to 'setup.py bdist_wheel'.",
834 ) # type: Callable[..., Option]
835
836 global_options = partial(
837 Option,
838 "--global-option",
839 dest="global_options",
840 action="append",
841 metavar="options",
842 help="Extra global options to be supplied to the setup.py "
843 "call before the install or bdist_wheel command.",
844 ) # type: Callable[..., Option]
845
846 no_clean = partial(
847 Option,
848 "--no-clean",
849 action="store_true",
850 default=False,
851 help="Don't clean up build directories.",
852 ) # type: Callable[..., Option]
853
854 pre = partial(
855 Option,
856 "--pre",
857 action="store_true",
858 default=False,
859 help="Include pre-release and development versions. By default, "
860 "pip only finds stable versions.",
861 ) # type: Callable[..., Option]
862
863 disable_pip_version_check = partial(
864 Option,
865 "--disable-pip-version-check",
866 dest="disable_pip_version_check",
867 action="store_true",
868 default=False,
869 help="Don't periodically check PyPI to determine whether a new version "
870 "of pip is available for download. Implied with --no-index.",
871 ) # type: Callable[..., Option]
872
873
874 def _handle_merge_hash(option, opt_str, value, parser):
875 # type: (Option, str, str, OptionParser) -> None
876 """Given a value spelled "algo:digest", append the digest to a list
877 pointed to in a dict by the algo name."""
878 if not parser.values.hashes:
879 parser.values.hashes = {}
880 try:
881 algo, digest = value.split(":", 1)
882 except ValueError:
883 parser.error(
884 "Arguments to {} must be a hash name " # noqa
885 "followed by a value, like --hash=sha256:"
886 "abcde...".format(opt_str)
887 )
888 if algo not in STRONG_HASHES:
889 parser.error(
890 "Allowed hash algorithms for {} are {}.".format( # noqa
891 opt_str, ", ".join(STRONG_HASHES)
892 )
893 )
894 parser.values.hashes.setdefault(algo, []).append(digest)
895
896
897 hash = partial(
898 Option,
899 "--hash",
900 # Hash values eventually end up in InstallRequirement.hashes due to
901 # __dict__ copying in process_line().
902 dest="hashes",
903 action="callback",
904 callback=_handle_merge_hash,
905 type="string",
906 help="Verify that the package's archive matches this "
907 "hash before installing. Example: --hash=sha256:abcdef...",
908 ) # type: Callable[..., Option]
909
910
911 require_hashes = partial(
912 Option,
913 "--require-hashes",
914 dest="require_hashes",
915 action="store_true",
916 default=False,
917 help="Require a hash to check each requirement against, for "
918 "repeatable installs. This option is implied when any package in a "
919 "requirements file has a --hash option.",
920 ) # type: Callable[..., Option]
921
922
923 list_path = partial(
924 PipOption,
925 "--path",
926 dest="path",
927 type="path",
928 action="append",
929 help="Restrict to the specified installation path for listing "
930 "packages (can be used multiple times).",
931 ) # type: Callable[..., Option]
932
933
934 def check_list_path_option(options):
935 # type: (Values) -> None
936 if options.path and (options.user or options.local):
937 raise CommandError("Cannot combine '--path' with '--user' or '--local'")
938
939
940 list_exclude = partial(
941 PipOption,
942 "--exclude",
943 dest="excludes",
944 action="append",
945 metavar="package",
946 type="package_name",
947 help="Exclude specified package from the output",
948 ) # type: Callable[..., Option]
949
950
951 no_python_version_warning = partial(
952 Option,
953 "--no-python-version-warning",
954 dest="no_python_version_warning",
955 action="store_true",
956 default=False,
957 help="Silence deprecation warnings for upcoming unsupported Pythons.",
958 ) # type: Callable[..., Option]
959
960
961 use_new_feature = partial(
962 Option,
963 "--use-feature",
964 dest="features_enabled",
965 metavar="feature",
966 action="append",
967 default=[],
968 choices=["2020-resolver", "fast-deps", "in-tree-build"],
969 help="Enable new functionality, that may be backward incompatible.",
970 ) # type: Callable[..., Option]
971
972 use_deprecated_feature = partial(
973 Option,
974 "--use-deprecated",
975 dest="deprecated_features_enabled",
976 metavar="feature",
977 action="append",
978 default=[],
979 choices=["legacy-resolver"],
980 help=("Enable deprecated functionality, that will be removed in the future."),
981 ) # type: Callable[..., Option]
982
983
984 ##########
985 # groups #
986 ##########
987
988 general_group = {
989 "name": "General Options",
990 "options": [
991 help_,
992 isolated_mode,
993 require_virtualenv,
994 verbose,
995 version,
996 quiet,
997 log,
998 no_input,
999 proxy,
1000 retries,
1001 timeout,
1002 exists_action,
1003 trusted_host,
1004 cert,
1005 client_cert,
1006 cache_dir,
1007 no_cache,
1008 disable_pip_version_check,
1009 no_color,
1010 no_python_version_warning,
1011 use_new_feature,
1012 use_deprecated_feature,
1013 ],
1014 } # type: Dict[str, Any]
1015
1016 index_group = {
1017 "name": "Package Index Options",
1018 "options": [
1019 index_url,
1020 extra_index_url,
1021 no_index,
1022 find_links,
1023 ],
1024 } # type: Dict[str, Any]
1025
[end of src/pip/_internal/cli/cmdoptions.py]
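As a side note on the module docstring at the top of `cmdoptions.py` ("define options once, but *not* instantiate them globally"), a minimal, self-contained illustration of that pattern with plain `optparse` and a made-up `--find-links`-style option might look like this; it is a sketch of the idea, not pip's actual wiring:

```python
from optparse import Option, OptionParser

def find_links():
    # A new Option, and crucially a new default list, for every call, so one
    # parse cannot append values into the defaults seen by a later parse.
    return Option(
        "-f", "--find-links",
        dest="find_links",
        action="append",
        default=[],
        help="Extra location to look for packages (illustrative).",
    )

def build_parser():
    parser = OptionParser()
    parser.add_option(find_links())
    return parser

opts1, _ = build_parser().parse_args(["-f", "https://example.com/simple"])
opts2, _ = build_parser().parse_args([])
print(opts1.find_links)  # ['https://example.com/simple']
print(opts2.find_links)  # [] : no state carried over between parses
```

This is the same shape as the `find_links()`, `requirements()` and `constraints()` factories above, which return a fresh `Option` (and a fresh `default=[]`) each time they are called.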
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pypa/pip
base_commit: 8fc65ea53fa7a4a052a31a040721ba68145ecc25
problem_statement:
New resolver: Failure when package only specified with extras is not available from index
<!--
Please provide as much information as you can about your failure, so that we can understand the root cause.
Try if your issue has been fixed in the in-development version of pip. Use the following command to install pip from master:
python -m pip install -U "pip @ https://github.com/pypa/pip/archive/master.zip"
-->
**What did you want to do?**
<!-- Include any inputs you gave to pip, for example:
* Package requirements: any CLI arguments and/or your requirements.txt file
* Already installed packages, outputted via `pip freeze`
-->
`pip install --use-feature=2020-resolver -r requirements.txt --no-deps`
`pip install --use-feature=2020-resolver -r requirements.txt`
requirements.txt only contains a list of packages to be installed in editable mode, with some depending on each other.
This is in a fresh miniconda environment and occurred on both 20.2.2 and master.
I ran pip using `--no-deps` first since in my experience, installing multiple editable mode packages with dependencies on each other fails otherwise. However, running just the normal install command directly in a fresh environment still fails with the new resolver, as below.
```
ERROR: Could not find a version that satisfies the requirement azureml-dataset-runtime[fuse]~=0.1.0.0 (from azureml-defaults)
ERROR: No matching distribution found for azureml-dataset-runtime[fuse]~=0.1.0.0
```
**Output**
This output is after running the 2nd pip install command, to actually install the package dependencies after installing the editable mode packages using `--no-deps`. Output has been slightly edited to remove full file paths.
```
ERROR: Cannot install azureml-dataset-runtime 0.1.0.0 (from src\azureml-dataset-runtime), -r requirements.txt (line 9), -r requirements.txt (line 16) and azureml-dataset-runtime[fuse] 0.1.0.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested azureml-dataset-runtime 0.1.0.0 (from src\azureml-dataset-runtime)
azureml-automl-core 0.1.0.0 depends on azureml-dataset-runtime~=0.1.0.0
azureml-train-automl-client 0.1.0.0 depends on azureml-dataset-runtime~=0.1.0.0
azureml-dataset-runtime[fuse] 0.1.0.0 depends on azureml-dataset-runtime 0.1.0.0 (Installed)
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
**Additional information**
<!--
It would be great if you could also include your dependency tree. For this you can use pipdeptree: https://pypi.org/project/pipdeptree/
For users installing packages from a private repository or local directory, please try your best to describe your setup. We'd like to understand how to reproduce the error locally, so would need (at a minimum) a description of the packages you are trying to install, and a list of dependencies for each package.
-->
The requirements file looks like this (all below packages should be available from pypi as well):
```
-e "src\azureml-core\."
-e "src\azureml-dataset-runtime\."
-e "src\azureml-defaults\."
-e "src\azureml-telemetry\."
-e "src\azureml-opendatasets\."
-e "src\azureml-pipeline\."
-e "src\azureml-pipeline-core\."
-e "src\azureml-pipeline-steps\."
-e "src\azureml-automl-core\."
-e "src\azureml-automl-runtime\."
-e "src\azureml-interpret\."
-e "src\azureml-explain-model\."
-e "src\azureml-train-restclients-hyperdrive\."
-e "src\azureml-train-core\."
-e "src\azureml-train\."
-e "src\azureml-train-automl-client\."
-e "src\azureml-train-automl-runtime\."
```
|
Can you provide either `setup.py` for all the listed projects (or at least `azureml-dataset-runtime` and `azureml-train-automl-client`), or a reduced setup that can reproduce the same error? It is impossible to tell what is going on without any context.
Sure. The below is a trimmed down version of setup.py for `azureml-dataset-runtime`.
`azureml-train-automl-client` takes a dependency on this via this entry for install_requires:
`'{}~={}'.format('azureml-dataset-runtime', VERSION)`
```
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from setuptools import setup, find_packages
import os
import shutil
SELFVERSION = '0.1.0.0'
DATAPREP_VERSION = '>=2.0.1a,<2.1.0a'
REQUIRES = [
'azureml-dataprep{}'.format(DATAPREP_VERSION),
'pyarrow>=0.17.0,<2.0.0'
]
with open('README.md', 'r', encoding='utf-8') as f:
long_description = f.read()
with open('../.inlinelicense', 'r', encoding='utf-8') as f:
inline_license = f.read()
setup(
name="azureml-dataset-runtime",
version=SELFVERSION,
description='',
long_description=long_description,
long_description_content_type='text/markdown',
author='Microsoft Corp',
license=inline_license,
url='https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py',
packages=find_packages(exclude=["*tests*"]),
install_requires=REQUIRES,
extras_require={
'pandas': ['numpy>=1.14.0,<2.0.0', 'pandas>=0.23.4,<2.0.0'],
'parquet': [], # Keeping as no-op to avoid breaking scenarios where this extra is used.
'pyspark': ['pyspark==2.3.0'],
'fuse': ['fusepy>=3.0.1,<4.0.0'],
'scipy': ['scipy>=1.1.0,<2.0.0']
}
)
```
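As an aside, the `~=0.1.0.0` constraint generated above is a compatible-release specifier. A quick check of what it matches (a small sketch using the third-party `packaging` library; this snippet is not part of the original report):
```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=0.1.0.0")  # equivalent to >=0.1.0.0, ==0.1.0.*
print("0.1.0.0" in spec)  # True
print("0.1.0.5" in spec)  # True
print("0.1.1.0" in spec)  # False
```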
I was able to reproduce the error with the following:
```python
# a/setup.py
from setuptools import setup
setup(name="a", version="0.1.0.0", extras_require={"z": ["c"]})
```
```python
# b/setup.py
from setuptools import setup
setup(name="b", version="0.1.0.0", install_requires=["a[z]~=0.1.0.0"])
```
```python
# c/setup.py
from setuptools import setup
setup(name="c", version="0.1.0.0")
```
```console
$ pip install --use-feature=2020-resolver --no-deps ./a ./b
(snip, this works)
$ pip install --use-feature=2020-resolver ./a ./b
Processing file:///.../a
Processing file:///.../b
Requirement already satisfied: a[z]~=0.1.0.0 in .../a (from b==0.1.0.0) (0.1.0.0)
ERROR: Cannot install a 0.1.0.0 (from .../a) and a[z] 0.1.0.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested a 0.1.0.0 (from .../a)
a[z] 0.1.0.0 depends on a 0.1.0.0 (Installed)
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
----
A few observations. First, editable-ness does not matter.
Second, the extra indirection is needed to trigger this. `pip install ./a ./b` would work if `b` depends on `a` directly (instead of `a[z]`).
Third, the error if you don’t `install --no-deps` first is
```
ERROR: Could not find a version that satisfies the requirement a[z]~=0.1.0.0 (from b)
ERROR: No matching distribution found for a[z]~=0.1.0.0
```
Again, the extra indirection is needed to trigger the error. `pip install ./a ./b` works if `b` depends on `a` directly (instead of `a[z]`).
So the problem is that we’re doing something wrong when handling dependencies to an extra-ed requirement.
OK, I think I know what’s going on here. When a package requests dependencies, the resolver passes all the currently known specifiers to the provider to find candidates with. If `b` requests `a` (no extras), the resolver would pass `['a', 'a @ ./a']` since it knows that there is an additional specifier provided by the user.
When `b` requests `a[z]`, however, the resolver does not know that `a @ ./a` *also* provides `a[z]`, and thus only passes `['a[z]']` to the provider, resulting in it failing to find a match.
The provider API will need to be tweaked slightly for it to be able to “tell” the resolver that `a` actually provides candidates for both `a` and `a[z]`. I will update resolvelib and come back to this afterwards.
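As a very rough illustration of the identifier mismatch described above (a toy sketch with made-up names, not pip’s or resolvelib’s actual code):
```python
# Toy model: the resolver files user-supplied candidates under an
# "identifier", and "a" and "a[z]" are two different identifiers, so the
# explicit ./a candidate is never seen while "a[z]" is being resolved.
user_supplied = {
    "a": ["a @ file:///.../a"],  # explicit URL candidate, known only under "a"
}

def find_candidates(identifier):
    # Hypothetical lookup: only entries filed under this exact identifier.
    return user_supplied.get(identifier, [])

print(find_candidates("a"))     # ['a @ file:///.../a']  -> resolvable
print(find_candidates("a[z]"))  # []  -> "No matching distribution found"
```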
---
The ResolutionImpossible exception seen when `a` was installed beforehand has the same root cause. But instead of not being able to find any candidates, the provider now is able to find exactly one match (the previously-installed distribution) but then rejects it, since the installed distribution does not satisfy the direct link requirement. How this should be fixed (or not fixed) would depend on our resolution to #5780. `Factory.find_candidates()` will need to be updated to reflect that decision and make an AlreadyInstalledCandidate able to satisfy a SpecifierRequirement if the URL matches.
> I will update resolvelib and come back to this afterwards.
@uranusjr Any updates on this? I am not 100% sure how this would work and whether we *have* to make changes for this issue, prior to the new-resolver-is-default release.
I have a couple of plans to make this work (not yet decided which is best; the plans I came up with thus far are all a bit ugly). It’s not the end of the world if the release is made without fixing this, but we should try to get a fix in if possible.
I came up with a way to solve this without tweaking resolvelib. (See the PR linked above for the solution that does involve changing resolvelib.)
The main idea is, when the provider gets a dependency `a @ URL`, it should also emit a constraint on `a[x] @ URL` to the resolver. This makes the resolver aware of the URL spec when dealing with `a[x]`, but only if the user does supply `a[x]` somewhere as a concrete requirement. The problem then becomes making constraints flow through in the resolvelib resolver, instead of only dealt externally in pip code. The trick is to make constraints a subclass of `base.Requirement`, so they can be returned by `get_dependencies()`. When `find_matches()` receives only constraints and not concrete requirements, it returns a special “no-op” candidate that does not have any dependencies, satisfies anything, but ultimately installs nothing. (Regular candidates are returned when at least one concrete requirement is passed into `find_matches()`.) This should correctly represent the logical meaning of constraints in the resolver.
The problem with this solution, however, is that the provider will need to emit all combinations of extras, even if they may never be needed in the dependency graph. The object count would grow quite quickly (`factorial(len(extras) + 1) - 1`), potentially slowing down resolution even further. It would be best if we could generate the constraints lazily, but I haven’t thought of a way without duplicating a backtracking stack in the provider, which is probably a very bad idea.
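A minimal sketch of the "no-op" candidate idea described in the comment above (hypothetical class and method names, purely for illustration; this is not pip's actual implementation):
```python
class ConstraintOnlyCandidate:
    """Stand-in returned when find_matches() sees only constraints.

    Per the plan above, it satisfies any requirement, declares no
    dependencies, and installing it is a no-op, so it can never pull in
    packages the user did not explicitly ask for.
    """

    def iter_dependencies(self):
        return iter(())  # no dependencies

    def satisfies(self, requirement):
        return True  # compatible with any requirement

    def install(self):
        pass  # installs nothing
```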
Per [today's meeting](https://wiki.python.org/psf/PackagingWG/2020-10-06-pip-teamwidemeeting) it looks like the next step here is for @uranusjr and @pradyunsg to have a chat about what to do next. This is something we want to get taken care of before the 20.3 release in a few weeks.
Just been wondering if there's been any updates to this, since 20.3/new resolver will definitely break us in various ways unless this is fixed.
Per [notes from a meeting this week](https://wiki.python.org/psf/PackagingWG/2020-10-12-pip-teamwide-meeting) I believe Pradyun and Tzu-ping had a private conversation about this issue -- could one of you please share those notes here in the issue so we know what needs to happen next? Thanks!
Sorry for the delay. The issue happens when pip sees a direct URL requirement, and then a requirement of the same name with different extras. Since a requirement with different extras represents a different set of dependencies, the resolver treats these two as different requirements, and is not able to resolve the non-URL requirement into using that URL.
The workaround to this would be to install the editable/URL requirements first, like this:
```
$ pip install --use-feature=2020-resolver -r requirements.txt --no-deps
$ pip install --use-feature=2020-resolver -r requirements.txt
```
The first `--no-deps` installation would skip all the resolver stuff (thus avoiding the error). Once all those packages are in site-packages, subsequent `install` calls would work correctly.
Some additional technical context that might make more sense to those into pip internals: Since `a[x]` provides a different set of packages from `a`, and that set may be different depending on the version of `a` chosen, the resolver internally creates a “fake” package that depends on the dependencies specified by extra `x`, plus `a` at the same version as itself. This makes pip aware of the correct version of `a` when it sees `a[x]`. But this does not work the other way around: if you provide `a` first, the resolver does not know it needs `a[x]` when it sees `a` (it can’t just pull it in, otherwise the user would get unwanted packages when they don’t want `a[x]` but only `a`), and we don’t have a good way to inform the resolver that the direct URL can also be used to satisfy `a[x]` later on. This is why installing the packages without dependencies works around the issue. The on-disk `.dist-info` can be used to represent that `a` package, so we don’t need to use the URL.
Ultimately, this is one of those issues that are probably fixable, but would require resources that are more usefully spent elsewhere. I would be happy to offer advice if anyone really wants to work on this, but am not personally very motivated to do the work myself.
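To make the "fake" extras package idea above a little more concrete, here is a hedged sketch (an illustrative function only, not pip's internal extras machinery):
```python
# Hypothetical sketch: a[x] is modelled as a package whose dependencies are
# the base project pinned to the same version, plus whatever the extra
# itself pulls in.
def extras_dependencies(name, version, extra, extras_require):
    deps = [f"{name}=={version}"]               # ties a[x] to the matching a
    deps.extend(extras_require.get(extra, []))  # plus the extra's own deps
    return deps

print(extras_dependencies("a", "0.1.0.0", "z", {"z": ["c"]}))
# ['a==0.1.0.0', 'c']
```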
Hmm, I just re-read the top post, and it seems like the `--no-deps` trick is not working for OP? @pradyunsg this seems to contradict our findings yesterday; maybe something changed that makes things work now (the `get_preference` thing, maybe)?
Damn it, I was all over the place. To sum up things again: there are actually *two* issues here. The first is the direct-URL-with-different-extras thing I talked about in the previous comment, which is hit if you don’t run `--no-deps` first. We decided to not put too much resource on it, and recommend the `--no-deps` workaround instead.
The error you get if you `--no-deps` first is not related to the extras issue, but about the resolver’s upgrade strategy regarding direct URL requirements (a variant of #8711, blocked by #5780). We should (will?) fix that.
@pradyunsg could you dive into this before tomorrow's meeting, or before Wednesday's?
Per [today's meeting](https://wiki.python.org/psf/PackagingWG/2020-11-18-pip-team-meeting), Pradyun has decided that improving this behavior is not a blocker to the 20.3 release (in which the new resolver will be on by default); the new behavior does better than the old resolver, though not as well as we want.
Apparently the new `--use-feature=2020-resolver` is already activated in `20.3`? It broke my docker deployments.
Docker pipeline edits setup.py to use CI_JOB_TOKEN to clone deps:
```
ham/setup.py # gitlabci changes git+ssh to git+https
-> git+https://gitlab.com/bar/[email protected]
-> git+https://gitlab.com/bar/[email protected]
foo/setup.py # points still to repo via ssh
-> git+ssh://gitlab.com/bar/[email protected]
```
Under 20.2.4:
It clones `foo` and `lau` via https git and installs them
Under 20.3:
It clones `foo` and `lau` via https git and installs them. It also tries to install `lau` again via ssh
Under 20.2.4 + `--use-feature=2020-resolver`:
Same error as 20.3.
@delijati The reason for your error is not the same as the one described in this issue. The two `lau` URLs are different. Yes, the two URLs ultimately download the same code, but pip does not have any special knowledge about GitLab, and the two are therefore (correctly) treated as different packages.
@delijati Yes, the new resolver is activated by default in pip 20.3. [Here's more info on that, including how to specify the old resolver temporarily](https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020) as a workaround.
@uranusjr Yup, the new dependency resolver just made it transparent that there was an error all along. Thanks, I hope I can finally convince the team to use an artifactory store ;) or I’ll have to stick with `--use-deprecated=legacy-resolver`.
I have merged #8939, #9143, and #9204 here, which all have the same root cause, described above (https://github.com/pypa/pip/issues/8785#issuecomment-678885871).
I've attempted to use the workaround described in #9143 (use `legacy-resolver`). However, this approach has another downside. In python/importlib_metadata@c769ba8fae245265bdb3f173a41e0e8c4f2a2d4b, I implemented the workaround, but now when I run `tox` on my local workstation, it doesn't work at all:
```
importlib_metadata main $ tox
python create: /Users/jaraco/code/public/importlib_metadata/.tox/python
python develop-inst: /Users/jaraco/code/public/importlib_metadata
ERROR: invocation failed (exit code 3), logfile: /Users/jaraco/code/public/importlib_metadata/.tox/python/log/python-1.log
=============================================================== log start ===============================================================
An error occurred during configuration: option use-deprecated: invalid choice: 'legacy-resolver' (choose from )
================================================================ log end ================================================================
________________________________________________________________ summary ________________________________________________________________
ERROR: python: InvocationError for command /Users/jaraco/code/public/importlib_metadata/.tox/python/bin/python -m pip install --exists-action w -e '/Users/jaraco/code/public/importlib_metadata[testing]' (exited with code 3)
```
I have the latest tox installed with the latest virtualenv:
```
importlib_metadata main $ which tox
/Users/jaraco/.local/bin/tox
importlib_metadata main $ ~/.local/pipx/venvs/tox/bin/python -m pip list
Package Version
--------------- -------
appdirs 1.4.4
distlib 0.3.1
filelock 3.0.12
packaging 20.8
pip 20.3.3
pluggy 0.13.1
py 1.10.0
pyparsing 2.4.7
setuptools 51.1.2
six 1.15.0
toml 0.10.2
tox 3.21.0
tox-pip-version 0.0.7
virtualenv 20.3.0
wheel 0.36.2
```
Yet, somehow, pip 20.2.4 is getting used.
I tried forcing a later pip with `tox-pip-version`:
```diff
diff --git a/tox.ini b/tox.ini
index 11f52d7..46676f3 100644
--- a/tox.ini
+++ b/tox.ini
@@ -4,6 +4,8 @@ minversion = 3.2
# https://github.com/jaraco/skeleton/issues/6
tox_pip_extensions_ext_venv_update = true
toxworkdir={env:TOX_WORK_DIR:.tox}
+requires =
+ tox-pip-version
[testenv]
@@ -15,6 +17,8 @@ extras = testing
setenv =
# workaround pypa/pip#9143
PIP_USE_DEPRECATED=legacy-resolver
+pip_version =
+ pip>=20.3.1
[testenv:docs]
```
But even with that, pip fails to upgrade itself with the same error. It's proving difficult to implement the workaround in a reliable way. I somehow need a way for tox to invoke pip first without the workaround to upgrade pip, then invoke it with the workaround to install the project and its dependencies.
I probably could spend some more time researching how tox and virtualenv work together to install different pip versions in different environments and maybe come up with a workaround, but for now what I'm doing is manually removing the workaround on my local environments when creating tox environments, then undoing that change.
I'm open to suggestions, but unblocked and done for the day, so no reply is needed. I mainly just wanted to capture this secondary issue.
|
2021-04-04T14:08:26Z
|
<patch>
diff --git a/src/pip/_internal/resolution/resolvelib/candidates.py b/src/pip/_internal/resolution/resolvelib/candidates.py
--- a/src/pip/_internal/resolution/resolvelib/candidates.py
+++ b/src/pip/_internal/resolution/resolvelib/candidates.py
@@ -33,6 +33,18 @@
]
+def as_base_candidate(candidate: Candidate) -> Optional[BaseCandidate]:
+ """The runtime version of BaseCandidate."""
+ base_candidate_classes = (
+ AlreadyInstalledCandidate,
+ EditableCandidate,
+ LinkCandidate,
+ )
+ if isinstance(candidate, base_candidate_classes):
+ return candidate
+ return None
+
+
def make_install_req_from_link(link, template):
# type: (Link, InstallRequirement) -> InstallRequirement
assert not template.editable, "template is editable"
diff --git a/src/pip/_internal/resolution/resolvelib/factory.py b/src/pip/_internal/resolution/resolvelib/factory.py
--- a/src/pip/_internal/resolution/resolvelib/factory.py
+++ b/src/pip/_internal/resolution/resolvelib/factory.py
@@ -1,3 +1,4 @@
+import contextlib
import functools
import logging
from typing import (
@@ -16,6 +17,8 @@
cast,
)
+from pip._vendor.packaging.requirements import InvalidRequirement
+from pip._vendor.packaging.requirements import Requirement as PackagingRequirement
from pip._vendor.packaging.specifiers import SpecifierSet
from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
from pip._vendor.pkg_resources import Distribution
@@ -54,6 +57,7 @@
ExtrasCandidate,
LinkCandidate,
RequiresPythonCandidate,
+ as_base_candidate,
)
from .found_candidates import FoundCandidates, IndexCandidateInfo
from .requirements import (
@@ -123,6 +127,15 @@ def force_reinstall(self):
# type: () -> bool
return self._force_reinstall
+ def _fail_if_link_is_unsupported_wheel(self, link: Link) -> None:
+ if not link.is_wheel:
+ return
+ wheel = Wheel(link.filename)
+ if wheel.supported(self._finder.target_python.get_tags()):
+ return
+ msg = f"{link.filename} is not a supported wheel on this platform."
+ raise UnsupportedWheel(msg)
+
def _make_extras_candidate(self, base, extras):
# type: (BaseCandidate, FrozenSet[str]) -> ExtrasCandidate
cache_key = (id(base), extras)
@@ -275,6 +288,51 @@ def iter_index_candidate_infos():
incompatible_ids,
)
+ def _iter_explicit_candidates_from_base(
+ self,
+ base_requirements: Iterable[Requirement],
+ extras: FrozenSet[str],
+ ) -> Iterator[Candidate]:
+ """Produce explicit candidates from the base given an extra-ed package.
+
+ :param base_requirements: Requirements known to the resolver. The
+ requirements are guaranteed to not have extras.
+ :param extras: The extras to inject into the explicit requirements'
+ candidates.
+ """
+ for req in base_requirements:
+ lookup_cand, _ = req.get_candidate_lookup()
+ if lookup_cand is None: # Not explicit.
+ continue
+ # We've stripped extras from the identifier, and should always
+ # get a BaseCandidate here, unless there's a bug elsewhere.
+ base_cand = as_base_candidate(lookup_cand)
+ assert base_cand is not None, "no extras here"
+ yield self._make_extras_candidate(base_cand, extras)
+
+ def _iter_candidates_from_constraints(
+ self,
+ identifier: str,
+ constraint: Constraint,
+ template: InstallRequirement,
+ ) -> Iterator[Candidate]:
+ """Produce explicit candidates from constraints.
+
+ This creates "fake" InstallRequirement objects that are basically clones
+ of what "should" be the template, but with original_link set to link.
+ """
+ for link in constraint.links:
+ self._fail_if_link_is_unsupported_wheel(link)
+ candidate = self._make_candidate_from_link(
+ link,
+ extras=frozenset(),
+ template=install_req_from_link_and_ireq(link, template),
+ name=canonicalize_name(identifier),
+ version=None,
+ )
+ if candidate:
+ yield candidate
+
def find_candidates(
self,
identifier: str,
@@ -283,59 +341,48 @@ def find_candidates(
constraint: Constraint,
prefers_installed: bool,
) -> Iterable[Candidate]:
-
- # Since we cache all the candidates, incompatibility identification
- # can be made quicker by comparing only the id() values.
- incompat_ids = {id(c) for c in incompatibilities.get(identifier, ())}
-
+ # Collect basic lookup information from the requirements.
explicit_candidates = set() # type: Set[Candidate]
ireqs = [] # type: List[InstallRequirement]
for req in requirements[identifier]:
cand, ireq = req.get_candidate_lookup()
- if cand is not None and id(cand) not in incompat_ids:
+ if cand is not None:
explicit_candidates.add(cand)
if ireq is not None:
ireqs.append(ireq)
- for link in constraint.links:
- if not ireqs:
- # If we hit this condition, then we cannot construct a candidate.
- # However, if we hit this condition, then none of the requirements
- # provided an ireq, so they must have provided an explicit candidate.
- # In that case, either the candidate matches, in which case this loop
- # doesn't need to do anything, or it doesn't, in which case there's
- # nothing this loop can do to recover.
- break
- if link.is_wheel:
- wheel = Wheel(link.filename)
- # Check whether the provided wheel is compatible with the target
- # platform.
- if not wheel.supported(self._finder.target_python.get_tags()):
- # We are constrained to install a wheel that is incompatible with
- # the target architecture, so there are no valid candidates.
- # Return early, with no candidates.
- return ()
- # Create a "fake" InstallRequirement that's basically a clone of
- # what "should" be the template, but with original_link set to link.
- # Using the given requirement is necessary for preserving hash
- # requirements, but without the original_link, direct_url.json
- # won't be created.
- ireq = install_req_from_link_and_ireq(link, ireqs[0])
- candidate = self._make_candidate_from_link(
- link,
- extras=frozenset(),
- template=ireq,
- name=canonicalize_name(ireq.name) if ireq.name else None,
- version=None,
+ # If the current identifier contains extras, add explicit candidates
+ # from entries from extra-less identifier.
+ with contextlib.suppress(InvalidRequirement):
+ parsed_requirement = PackagingRequirement(identifier)
+ explicit_candidates.update(
+ self._iter_explicit_candidates_from_base(
+ requirements.get(parsed_requirement.name, ()),
+ frozenset(parsed_requirement.extras),
+ ),
)
- if candidate is None:
- # _make_candidate_from_link returns None if the wheel fails to build.
- # We are constrained to install this wheel, so there are no valid
- # candidates.
- # Return early, with no candidates.
+
+ # Add explicit candidates from constraints. We only do this if there are
+ # known ireqs, which represent requirements not already explicit. If
+ # there are no ireqs, we're constraining already-explicit requirements,
+ # which is handled later when we return the explicit candidates.
+ if ireqs:
+ try:
+ explicit_candidates.update(
+ self._iter_candidates_from_constraints(
+ identifier,
+ constraint,
+ template=ireqs[0],
+ ),
+ )
+ except UnsupportedWheel:
+ # If we're constrained to install a wheel incompatible with the
+ # target architecture, no candidates will ever be valid.
return ()
- explicit_candidates.add(candidate)
+ # Since we cache all the candidates, incompatibility identification
+ # can be made quicker by comparing only the id() values.
+ incompat_ids = {id(c) for c in incompatibilities.get(identifier, ())}
# If none of the requirements want an explicit candidate, we can ask
# the finder for candidates.
@@ -351,7 +398,8 @@ def find_candidates(
return (
c
for c in explicit_candidates
- if constraint.is_satisfied_by(c)
+ if id(c) not in incompat_ids
+ and constraint.is_satisfied_by(c)
and all(req.is_satisfied_by(c) for req in requirements[identifier])
)
@@ -366,13 +414,7 @@ def make_requirement_from_install_req(self, ireq, requested_extras):
return None
if not ireq.link:
return SpecifierRequirement(ireq)
- if ireq.link.is_wheel:
- wheel = Wheel(ireq.link.filename)
- if not wheel.supported(self._finder.target_python.get_tags()):
- msg = "{} is not a supported wheel on this platform.".format(
- wheel.filename,
- )
- raise UnsupportedWheel(msg)
+ self._fail_if_link_is_unsupported_wheel(ireq.link)
cand = self._make_candidate_from_link(
ireq.link,
extras=frozenset(ireq.extras),
</patch>
|
[]
|
[]
| |||
apache__airflow-19854
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove some more redundant parentheses
Removed redundant parentheses in airflow/_vendor/connexion/apis/flask_api.py and in many more files
</issue>
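For context, a brief hypothetical example of the kind of redundant parentheses the issue refers to (illustrative only, not taken from the actual Airflow diff):
```python
def clamp_before(x):
    if (x < 0):       # parentheses around the condition are redundant
        return (0)    # parentheses around the return value are redundant
    return (x)

def clamp_after(x):
    if x < 0:
        return 0
    return x

assert clamp_before(-5) == clamp_after(-5) == 0
```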
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://codecov.io/github/apache/airflow?branch=main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34
35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
36
37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
38
39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
40
41 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
43 **Table of contents**
44
45 - [Project Focus](#project-focus)
46 - [Principles](#principles)
47 - [Requirements](#requirements)
48 - [Getting started](#getting-started)
49 - [Installing from PyPI](#installing-from-pypi)
50 - [Official source code](#official-source-code)
51 - [Convenience packages](#convenience-packages)
52 - [User Interface](#user-interface)
53 - [Semantic versioning](#semantic-versioning)
54 - [Version Life Cycle](#version-life-cycle)
55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
56 - [Contributing](#contributing)
57 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
60 - [Airflow merchandise](#airflow-merchandise)
61 - [Links](#links)
62 - [Sponsors](#sponsors)
63
64 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
65
66 ## Project Focus
67
68 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
69
70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
71
72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
73
74 ## Principles
75
76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
77 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
80
81 ## Requirements
82
83 Apache Airflow is tested with:
84
85 | | Main version (dev) | Stable version (2.2.2) |
86 | -------------------- | ------------------------- | ------------------------ |
87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 |
88 | Kubernetes | 1.20, 1.21 | 1.18, 1.19, 1.20 |
89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 |
90 | MySQL | 5.7, 8 | 5.7, 8 |
91 | SQLite | 3.15.0+ | 3.15.0+ |
92 | MSSQL(Experimental) | 2017, 2019 | |
93
94 **Note**: MySQL 5.x versions are unable to or have limitations with
95 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
96 MariaDB is not tested/recommended.
97
98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
99 using the latest stable version of SQLite for local development.
100
101 **Note**: Python v3.10 is not supported yet. For details, see [#19059](https://github.com/apache/airflow/issues/19059).
102
103 ## Getting started
104
105 Visit the official Airflow website documentation (latest **stable** release) for help with
106 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
107 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking
108 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
109
110 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
111
112 For more information on Airflow Improvement Proposals (AIPs), visit
113 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals).
114
115 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/).
116
117 ## Installing from PyPI
118
119 We publish Apache Airflow as the `apache-airflow` package on PyPI. Installing it, however, might sometimes be tricky
120 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open, and
121 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
122 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
123 if needed. This means that `pip install apache-airflow` will not work from time to time or will
124 produce unusable Airflow installation.
125
126 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint
127 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
128 constraints files separately per major/minor Python version.
129 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
130 correct Airflow tag/version/branch and Python versions in the URL.
131
132
133 1. Installing just Airflow:
134
135 > Note: Only `pip` installation is currently officially supported.
136
137 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
138 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
139 `pip` - especially when it comes to constraint vs. requirements management.
140 Installing via `Poetry` or `pip-tools` is not currently supported.
141
142 If you wish to install Airflow using those tools, you should use the constraint files and convert
143 them to the appropriate format and workflow that your tool requires.
144
145
146 ```bash
147 pip install 'apache-airflow==2.2.2' \
148 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.2/constraints-3.7.txt"
149 ```
150
151 2. Installing with extras (i.e., postgres, google)
152
153 ```bash
154 pip install 'apache-airflow[postgres,google]==2.2.2' \
155 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.2/constraints-3.7.txt"
156 ```
157
158 For information on installing provider packages, check
159 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
160
161 ## Official source code
162
163 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
164 and our official source code releases:
165
166 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
167 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
168 - Are cryptographically signed by the release manager
169 - Are officially voted on by the PMC members during the
170 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
171
172 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
173 release provided they have access to the appropriate platform and tools.
174
175 ## Convenience packages
176
177 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
178 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users
179 who do not want to build the software themselves.
180
181 Those are - in the order of most common ways people install Airflow:
182
183 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
184 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
185 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
186 read more about using, customising, and extending the images in the
187 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
188 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
189 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
190 were used to generate official source packages via git
191
192 All those artifacts are not official releases, but they are prepared using officially released sources.
193 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
194 following the ASF Policy.
195
196 ## User Interface
197
198 - **DAGs**: Overview of all DAGs in your environment.
199
200 
201
202 - **Tree**: Tree representation of a DAG that spans across time.
203
204 
205
206 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
207
208 
209
210 - **Task Duration**: Total time spent on different tasks over time.
211
212 
213
214 - **Gantt**: Duration and overlap of a DAG.
215
216 
217
218 - **Code**: Quick way to view source code of a DAG.
219
220 
221
222 ## Semantic versioning
223
224 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
225
226 There are few specific rules that we agreed to that define details of versioning of the different
227 packages:
228
229 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
230 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
231 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
232 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
233 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
234 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
235 they are present in providers as `install_requires` limitations. We aim to keep backwards
236 compatibility of providers with all previously released Airflow 2 versions but
237 there will sometimes be breaking changes that might make some, or all
238 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version
239 is a breaking change for provider because installing the new provider might automatically
240 upgrade Airflow (which might be an undesired side effect of upgrading provider).
241 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
242 versions for the chart are independent from the Airflow version. We aim to keep backwards
243 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
244 only work starting from specific Airflow releases. We might however limit the Helm
245 Chart to depend on minimal Airflow version.
246 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
247 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
248 all clients. The clients then can release their own PATCH releases with bugfixes,
249 independently of Airflow PATCH releases.
250
251 ## Version Life Cycle
252
253 Apache Airflow version life cycle:
254
255 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
256 |---------|---------------------|-----------|---------------|-----------------|----------------|
257 | 2 | 2.2.2 | Supported | Dec 17, 2020 | TBD | TBD |
258 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
259 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
260 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
261 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
262
263 Limited support versions will be supported with security and critical bug fix only.
264 EOL versions will not get any fixes nor support.
265 We always recommend that all users run the latest available minor release for whatever major version is in use.
266 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
267
268 ## Support for Python and Kubernetes versions
269
270 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
271 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
272 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
273 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
274
275 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those
276 EOL versions in main right after EOL date, and it is effectively removed when we release the
277 first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow
278 For example, for Python 3.6 it means that we drop support in main right after 23.12.2021, and the first
279 MAJOR or MINOR version of Airflow released after will not have it.
280
281 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to
282 later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using this
283 default version and the default reference image available. Currently `apache/airflow:latest`
284 and `apache/airflow:2.2.1` images are Python 3.7 images, as we are preparing for 23.12.2021 when
285 Python 3.6 reaches end of life.
286
287 3. We support a new version of Python/Kubernetes in main after they are officially released. As soon as we
288 make them work in our CI pipeline (which might not be immediate, mostly due to dependencies catching up with
289 new versions of Python), we release new images/support in Airflow based on the working CI setup.
290
291 ### Additional notes on Python version requirements
292
293 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3
294 when using Python 3.
295
296 ## Contributing
297
298 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
299
300 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
301
302 ## Who uses Apache Airflow?
303
304 More than 400 organizations are using Apache Airflow
305 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
306
307 ## Who Maintains Apache Airflow?
308
309 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
310 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
311 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
312 If you would like to become a maintainer, please review the Apache Airflow
313 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
314
315 ## Can I use the Apache Airflow logo in my presentation?
316
317 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
318
319 ## Airflow merchandise
320
321 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out
322 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
323
324 ## Links
325
326 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
327 - [Chat](https://s.apache.org/airflow-slack)
328
329 ## Sponsors
330
331 The CI infrastructure for Apache Airflow has been sponsored by:
332
333 <!-- Ordered by most recently "funded" -->
334
335 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
336 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
337
[end of README.md]
[start of airflow/_vendor/connexion/apis/__init__.py]
1 from .abstract import AbstractAPI # NOQA
2
[end of airflow/_vendor/connexion/apis/__init__.py]
[start of airflow/_vendor/connexion/apis/flask_utils.py]
1 import functools
2 import random
3 import re
4 import string
5
6 import flask
7 import werkzeug.wrappers
8
9 PATH_PARAMETER = re.compile(r'\{([^}]*)\}')
10
11 # map Swagger type to flask path converter
12 # see http://flask.pocoo.org/docs/0.10/api/#url-route-registrations
13 PATH_PARAMETER_CONVERTERS = {
14 'integer': 'int',
15 'number': 'float',
16 'path': 'path'
17 }
18
19
20 def flaskify_endpoint(identifier, randomize=None):
21 """
22 Converts the provided identifier in a valid flask endpoint name
23
24 :type identifier: str
25 :param randomize: If specified, add this many random characters (upper case
26 and digits) to the endpoint name, separated by a pipe character.
27 :type randomize: int | None
28 :rtype: str
29 """
30 result = identifier.replace('.', '_')
31 if randomize is None:
32 return result
33
34 chars = string.ascii_uppercase + string.digits
35 return "{result}|{random_string}".format(
36 result=result,
37 random_string=''.join(random.SystemRandom().choice(chars) for _ in range(randomize)))
38
39
40 def convert_path_parameter(match, types):
41 name = match.group(1)
42 swagger_type = types.get(name)
43 converter = PATH_PARAMETER_CONVERTERS.get(swagger_type)
44 return '<{0}{1}{2}>'.format(converter or '',
45 ':' if converter else '',
46 name.replace('-', '_'))
47
48
49 def flaskify_path(swagger_path, types=None):
50 """
51 Convert swagger path templates to flask path templates
52
53 :type swagger_path: str
54 :type types: dict
55 :rtype: str
56
57 >>> flaskify_path('/foo-bar/{my-param}')
58 '/foo-bar/<my_param>'
59
60 >>> flaskify_path('/foo/{someint}', {'someint': 'int'})
61 '/foo/<int:someint>'
62 """
63 if types is None:
64 types = {}
65 convert_match = functools.partial(convert_path_parameter, types=types)
66 return PATH_PARAMETER.sub(convert_match, swagger_path)
67
68
69 def is_flask_response(obj):
70 """
71 Verifies if obj is a default Flask response instance.
72
73 :type obj: object
74 :rtype bool
75
76 >>> is_flask_response(redirect('http://example.com/'))
77 True
78 >>> is_flask_response(flask.Response())
79 True
80 """
81 return isinstance(obj, flask.Response) or isinstance(obj, werkzeug.wrappers.Response)
82
[end of airflow/_vendor/connexion/apis/flask_utils.py]
[start of airflow/_vendor/connexion/decorators/decorator.py]
1 import functools
2 import logging
3
4 from ..utils import has_coroutine
5
6 logger = logging.getLogger('connexion.decorators.decorator')
7
8
9 class BaseDecorator(object):
10
11 def __call__(self, function):
12 """
13 :type function: types.FunctionType
14 :rtype: types.FunctionType
15 """
16 return function
17
18 def __repr__(self): # pragma: no cover
19 """
20 :rtype: str
21 """
22 return '<BaseDecorator>'
23
24
25 class RequestResponseDecorator(BaseDecorator):
26 """Manages the lifecycle of the request internally in Connexion.
27 Filter the ConnexionRequest instance to return the corresponding
28 framework specific object.
29 """
30
31 def __init__(self, api, mimetype):
32 self.api = api
33 self.mimetype = mimetype
34
35 def __call__(self, function):
36 """
37 :type function: types.FunctionType
38 :rtype: types.FunctionType
39 """
40 if has_coroutine(function, self.api):
41 from .coroutine_wrappers import get_request_life_cycle_wrapper
42 wrapper = get_request_life_cycle_wrapper(function, self.api, self.mimetype)
43
44 else: # pragma: 3 no cover
45 @functools.wraps(function)
46 def wrapper(*args, **kwargs):
47 request = self.api.get_request(*args, **kwargs)
48 response = function(request)
49 return self.api.get_response(response, self.mimetype, request)
50
51 return wrapper
52
[end of airflow/_vendor/connexion/decorators/decorator.py]
[start of airflow/_vendor/connexion/decorators/response.py]
1 # Decorators to change the return type of endpoints
2 import functools
3 import logging
4
5 from jsonschema import ValidationError
6
7 from ..exceptions import (NonConformingResponseBody,
8 NonConformingResponseHeaders)
9 from ..utils import all_json, has_coroutine
10 from .decorator import BaseDecorator
11 from .validation import ResponseBodyValidator
12
13 logger = logging.getLogger('connexion.decorators.response')
14
15
16 class ResponseValidator(BaseDecorator):
17 def __init__(self, operation, mimetype, validator=None):
18 """
19 :type operation: Operation
20 :type mimetype: str
21 :param validator: Validator class that should be used to validate passed data
22 against API schema. Default is jsonschema.Draft4Validator.
23 :type validator: jsonschema.IValidator
24 """
25 self.operation = operation
26 self.mimetype = mimetype
27 self.validator = validator
28
29 def validate_response(self, data, status_code, headers, url):
30 """
31 Validates the Response object based on what has been declared in the specification.
32 Ensures the response body matches the declared schema.
33 :type data: dict
34 :type status_code: int
35 :type headers: dict
36 :rtype bool | None
37 """
38 # check against returned header, fall back to expected mimetype
39 content_type = headers.get("Content-Type", self.mimetype)
40 content_type = content_type.rsplit(";", 1)[0] # remove things like utf8 metadata
41
42 response_definition = self.operation.response_definition(str(status_code), content_type)
43 response_schema = self.operation.response_schema(str(status_code), content_type)
44
45 if self.is_json_schema_compatible(response_schema):
46 v = ResponseBodyValidator(response_schema, validator=self.validator)
47 try:
48 data = self.operation.json_loads(data)
49 v.validate_schema(data, url)
50 except ValidationError as e:
51 raise NonConformingResponseBody(message=str(e))
52
53 if response_definition and response_definition.get("headers"):
54 # converting to set is needed to support python 2.7
55 response_definition_header_keys = set(response_definition.get("headers").keys())
56 header_keys = set(headers.keys())
57 missing_keys = response_definition_header_keys - header_keys
58 if missing_keys:
59 pretty_list = ', '.join(missing_keys)
60 msg = ("Keys in header don't match response specification. "
61 "Difference: {0}").format(pretty_list)
62 raise NonConformingResponseHeaders(message=msg)
63 return True
64
65 def is_json_schema_compatible(self, response_schema):
66 """
67 Verify if the specified operation responses are JSON schema
68 compatible.
69
70 All operations that specify a JSON schema and have content
71 type "application/json" or "text/plain" can be validated using
72 json_schema package.
73
74 :type response_schema: dict
75 :rtype bool
76 """
77 if not response_schema:
78 return False
79 return all_json([self.mimetype]) or self.mimetype == 'text/plain'
80
81 def __call__(self, function):
82 """
83 :type function: types.FunctionType
84 :rtype: types.FunctionType
85 """
86
87 def _wrapper(request, response):
88 connexion_response = \
89 self.operation.api.get_connexion_response(response, self.mimetype)
90 self.validate_response(
91 connexion_response.body, connexion_response.status_code,
92 connexion_response.headers, request.url)
93
94 return response
95
96 if has_coroutine(function):
97 from .coroutine_wrappers import get_response_validator_wrapper
98 wrapper = get_response_validator_wrapper(function, _wrapper)
99
100 else: # pragma: 3 no cover
101 @functools.wraps(function)
102 def wrapper(request):
103 response = function(request)
104 return _wrapper(request, response)
105
106 return wrapper
107
108 def __repr__(self):
109 """
110 :rtype: str
111 """
112 return '<ResponseValidator>' # pragma: no cover
113
[end of airflow/_vendor/connexion/decorators/response.py]
[start of airflow/_vendor/connexion/operations/secure.py]
1 import functools
2 import logging
3
4 from ..decorators.decorator import RequestResponseDecorator
5 from ..decorators.security import (get_apikeyinfo_func, get_basicinfo_func,
6 get_bearerinfo_func,
7 get_scope_validate_func, get_tokeninfo_func,
8 security_deny, security_passthrough,
9 verify_apikey, verify_basic, verify_bearer,
10 verify_none, verify_oauth, verify_security)
11
12 logger = logging.getLogger("connexion.operations.secure")
13
14 DEFAULT_MIMETYPE = 'application/json'
15
16
17 class SecureOperation(object):
18
19 def __init__(self, api, security, security_schemes):
20 """
21 :param security: list of security rules the application uses by default
22 :type security: list
23 :param security_definitions: `Security Definitions Object
24 <https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md#security-definitions-object>`_
25 :type security_definitions: dict
26 """
27 self._api = api
28 self._security = security
29 self._security_schemes = security_schemes
30
31 @property
32 def api(self):
33 return self._api
34
35 @property
36 def security(self):
37 return self._security
38
39 @property
40 def security_schemes(self):
41 return self._security_schemes
42
43 @property
44 def security_decorator(self):
45 """
46 Gets the security decorator for operation
47
48 From Swagger Specification:
49
50 **Security Definitions Object**
51
52 A declaration of the security schemes available to be used in the specification.
53
54 This does not enforce the security schemes on the operations and only serves to provide the relevant details
55 for each scheme.
56
57
58 **Operation Object -> security**
59
60 A declaration of which security schemes are applied for this operation. The list of values describes alternative
61 security schemes that can be used (that is, there is a logical OR between the security requirements).
62 This definition overrides any declared top-level security. To remove a top-level security declaration,
63 an empty array can be used.
64
65
66 **Security Requirement Object**
67
68 Lists the required security schemes to execute this operation. The object can have multiple security schemes
69 declared in it which are all required (that is, there is a logical AND between the schemes).
70
71 The name used for each property **MUST** correspond to a security scheme declared in the Security Definitions.
72
73 :rtype: types.FunctionType
74 """
75 logger.debug('... Security: %s', self.security, extra=vars(self))
76 if not self.security:
77 return security_passthrough
78
79 auth_funcs = []
80 required_scopes = None
81 for security_req in self.security:
82 if not security_req:
83 auth_funcs.append(verify_none())
84 continue
85 elif len(security_req) > 1:
86 logger.warning("... More than one security scheme in security requirement defined. "
87 "**DENYING ALL REQUESTS**", extra=vars(self))
88 return security_deny
89
90 scheme_name, scopes = next(iter(security_req.items()))
91 security_scheme = self.security_schemes[scheme_name]
92
93 if security_scheme['type'] == 'oauth2':
94 required_scopes = scopes
95 token_info_func = get_tokeninfo_func(security_scheme)
96 scope_validate_func = get_scope_validate_func(security_scheme)
97 if not token_info_func:
98 logger.warning("... x-tokenInfoFunc missing", extra=vars(self))
99 continue
100
101 auth_funcs.append(verify_oauth(token_info_func, scope_validate_func))
102
103 # Swagger 2.0
104 elif security_scheme['type'] == 'basic':
105 basic_info_func = get_basicinfo_func(security_scheme)
106 if not basic_info_func:
107 logger.warning("... x-basicInfoFunc missing", extra=vars(self))
108 continue
109
110 auth_funcs.append(verify_basic(basic_info_func))
111
112 # OpenAPI 3.0.0
113 elif security_scheme['type'] == 'http':
114 scheme = security_scheme['scheme'].lower()
115 if scheme == 'basic':
116 basic_info_func = get_basicinfo_func(security_scheme)
117 if not basic_info_func:
118 logger.warning("... x-basicInfoFunc missing", extra=vars(self))
119 continue
120
121 auth_funcs.append(verify_basic(basic_info_func))
122 elif scheme == 'bearer':
123 bearer_info_func = get_bearerinfo_func(security_scheme)
124 if not bearer_info_func:
125 logger.warning("... x-bearerInfoFunc missing", extra=vars(self))
126 continue
127 auth_funcs.append(verify_bearer(bearer_info_func))
128 else:
129 logger.warning("... Unsupported http authorization scheme %s" % scheme, extra=vars(self))
130
131 elif security_scheme['type'] == 'apiKey':
132 scheme = security_scheme.get('x-authentication-scheme', '').lower()
133 if scheme == 'bearer':
134 bearer_info_func = get_bearerinfo_func(security_scheme)
135 if not bearer_info_func:
136 logger.warning("... x-bearerInfoFunc missing", extra=vars(self))
137 continue
138 auth_funcs.append(verify_bearer(bearer_info_func))
139 else:
140 apikey_info_func = get_apikeyinfo_func(security_scheme)
141 if not apikey_info_func:
142 logger.warning("... x-apikeyInfoFunc missing", extra=vars(self))
143 continue
144
145 auth_funcs.append(verify_apikey(apikey_info_func, security_scheme['in'], security_scheme['name']))
146
147 else:
148 logger.warning("... Unsupported security scheme type %s" % security_scheme['type'], extra=vars(self))
149
150 return functools.partial(verify_security, auth_funcs, required_scopes)
151
152 def get_mimetype(self):
153 return DEFAULT_MIMETYPE
154
155 @property
156 def _request_response_decorator(self):
157 """
158 Guarantees that a framework-specific object, rather than the
159 internal representation of the operation handler response
160 (connexion.lifecycle.ConnexionRequest), is returned by the
161 decorated handler.
162 :rtype: types.FunctionType
163 """
164 return RequestResponseDecorator(self.api, self.get_mimetype())
165
[end of airflow/_vendor/connexion/operations/secure.py]
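As a rough illustration of the structures described in the docstring above, here is a minimal sketch of the kind of `security` / `security_schemes` data this decorator walks. The scheme name and the `x-tokenInfoFunc` target are invented for the example; the extension key itself is taken from the warning messages in the code above.

```python
# Illustrative only: example data shaped like self.security / self.security_schemes.
security = [
    {},                           # empty requirement -> auth is optional (verify_none)
    {"oauth2_scheme": ["read"]},  # one scheme plus the scopes it requires
]

security_schemes = {
    "oauth2_scheme": {
        "type": "oauth2",
        # hypothetical module path; this is what get_tokeninfo_func() would look up
        "x-tokenInfoFunc": "myapp.auth.token_info",
    },
}

for requirement in security:
    if not requirement:
        print("anonymous access allowed for this requirement")
        continue
    scheme_name, scopes = next(iter(requirement.items()))
    scheme = security_schemes[scheme_name]
    print(scheme_name, scheme["type"], "requires scopes:", scopes)
```

Note that this vendored version denies all requests outright when a single requirement lists more than one scheme, as the warning near the top of the loop shows.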
[start of airflow/_vendor/connexion/options.py]
1 import logging
2 import pathlib
3 from typing import Optional # NOQA
4
5 try:
6 from swagger_ui_bundle import (swagger_ui_2_path,
7 swagger_ui_3_path)
8 except ImportError:
9 swagger_ui_2_path = swagger_ui_3_path = None
10
11 MODULE_PATH = pathlib.Path(__file__).absolute().parent
12 NO_UI_MSG = """The swagger_ui directory could not be found.
13 Please install connexion with extra install: pip install connexion[swagger-ui]
14 or provide the path to your local installation by passing swagger_path=<your path>
15 """
16
17 logger = logging.getLogger("connexion.options")
18
19
20 class ConnexionOptions(object):
21
22 def __init__(self, options=None, oas_version=(2,)):
23 self._options = {}
24 self.oas_version = oas_version
25 if self.oas_version >= (3, 0, 0):
26 self.openapi_spec_name = '/openapi.json'
27 self.swagger_ui_local_path = swagger_ui_3_path
28 else:
29 self.openapi_spec_name = '/swagger.json'
30 self.swagger_ui_local_path = swagger_ui_2_path
31
32 if options:
33 self._options.update(filter_values(options))
34
35 def extend(self, new_values=None):
36 # type: (Optional[dict]) -> ConnexionOptions
37 """
38 Return a new instance of `ConnexionOptions` using as default the currently
39 defined options.
40 """
41 if new_values is None:
42 new_values = {}
43
44 options = dict(self._options)
45 options.update(filter_values(new_values))
46 return ConnexionOptions(options, self.oas_version)
47
48 def as_dict(self):
49 return self._options
50
51 @property
52 def openapi_spec_available(self):
53 # type: () -> bool
54 """
55 Whether to make available the OpenAPI Specification under
56 `openapi_spec_path`.
57
58 Default: True
59 """
60 deprecated_option = self._options.get('swagger_json', True)
61 serve_spec = self._options.get('serve_spec', deprecated_option)
62 if 'swagger_json' in self._options:
63 deprecation_warning = ("The 'swagger_json' option is deprecated. "
64 "Please use 'serve_spec' instead")
65 logger.warning(deprecation_warning)
66 return serve_spec
67
68 @property
69 def openapi_console_ui_available(self):
70 # type: () -> bool
71 """
72 Whether to make the OpenAPI Console UI available under the path
73 defined in `openapi_console_ui_path` option.
74
75 Default: True
76 """
77 if (self._options.get('swagger_ui', True) and
78 self.openapi_console_ui_from_dir is None):
79 logger.warning(NO_UI_MSG)
80 return False
81 return self._options.get('swagger_ui', True)
82
83 @property
84 def openapi_spec_path(self):
85 # type: () -> str
86 """
87 Path under which the OpenAPI specification is served and made accessible via a browser.
88
89 Default: /openapi.json for openapi3, otherwise /swagger.json
90 """
91 return self._options.get('openapi_spec_path', self.openapi_spec_name)
92
93 @property
94 def openapi_console_ui_path(self):
95 # type: () -> str
96 """
97 Path to mount the OpenAPI Console UI and make it accessible via a browser.
98
99 Default: /ui
100 """
101 return self._options.get('swagger_url', '/ui')
102
103 @property
104 def openapi_console_ui_from_dir(self):
105 # type: () -> str
106 """
107 Custom OpenAPI Console UI directory from where Connexion will serve
108 the static files.
109
110 Default: Connexion's vendored version of the OpenAPI Console UI.
111 """
112 return self._options.get('swagger_path', self.swagger_ui_local_path)
113
114 @property
115 def openapi_console_ui_config(self):
116 # type: () -> dict
117 """
118 Custom OpenAPI Console UI config.
119
120 Default: None
121 """
122 return self._options.get('swagger_ui_config', None)
123
124 @property
125 def uri_parser_class(self):
126 # type: () -> AbstractURIParser
127 """
128 The class to use for parsing URIs into path and query parameters.
129 Default: None
130 """
131 return self._options.get('uri_parser_class', None)
132
133
134 def filter_values(dictionary):
135 # type: (dict) -> dict
136 """
137 Remove `None` value entries in the dictionary.
138
139 :param dictionary:
140 :return:
141 """
142 return dict([(key, value)
143 for key, value in dictionary.items()
144 if value is not None])
145
[end of airflow/_vendor/connexion/options.py]
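For orientation, here is a small usage sketch of the `ConnexionOptions` class above. The option values are invented, the sketch assumes the vendored module is importable at the path shown in the listing, and the expected results in the comments follow directly from the property implementations.

```python
from airflow._vendor.connexion.options import ConnexionOptions

# Invented option values, just to exercise the properties defined above.
base = ConnexionOptions({"swagger_ui": True, "swagger_url": "/docs"}, oas_version=(3, 0, 0))

print(base.openapi_spec_path)        # '/openapi.json', because oas_version >= (3, 0, 0)
print(base.openapi_console_ui_path)  # '/docs', taken from the 'swagger_url' option

# extend() layers new values on top of the current ones; None values are
# dropped by filter_values(), so 'swagger_path' stays unset here.
merged = base.extend({"serve_spec": False, "swagger_path": None})
print(merged.openapi_spec_available)  # False, from 'serve_spec'
```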
[start of airflow/www/extensions/init_views.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 import logging
19 import warnings
20 from os import path
21
22 from flask import Flask, request
23
24 from airflow._vendor import connexion
25 from airflow._vendor.connexion import ProblemException
26 from airflow.api_connexion.exceptions import common_error_handler
27 from airflow.configuration import conf
28 from airflow.security import permissions
29 from airflow.www.views import lazy_add_provider_discovered_options_to_connection_form
30
31 log = logging.getLogger(__name__)
32
33 # airflow/www/extensions/init_views.py => airflow/
34 ROOT_APP_DIR = path.abspath(path.join(path.dirname(__file__), path.pardir, path.pardir))
35
36
37 def init_flash_views(app):
38 """Init main app view - redirect to FAB"""
39 from airflow.www.blueprints import routes
40
41 app.register_blueprint(routes)
42
43
44 def init_appbuilder_views(app):
45 """Initialize Web UI views"""
46 appbuilder = app.appbuilder
47 from airflow.www import views
48
49 # Remove the session from scoped_session registry to avoid
50 # reusing a session with a disconnected connection
51 appbuilder.session.remove()
52 appbuilder.add_view_no_menu(views.AutocompleteView())
53 appbuilder.add_view_no_menu(views.Airflow())
54 appbuilder.add_view(
55 views.DagRunModelView,
56 permissions.RESOURCE_DAG_RUN,
57 category=permissions.RESOURCE_BROWSE_MENU,
58 category_icon="fa-globe",
59 )
60 appbuilder.add_view(
61 views.JobModelView, permissions.RESOURCE_JOB, category=permissions.RESOURCE_BROWSE_MENU
62 )
63 appbuilder.add_view(
64 views.LogModelView, permissions.RESOURCE_AUDIT_LOG, category=permissions.RESOURCE_BROWSE_MENU
65 )
66 appbuilder.add_view(
67 views.VariableModelView, permissions.RESOURCE_VARIABLE, category=permissions.RESOURCE_ADMIN_MENU
68 )
69 appbuilder.add_view(
70 views.TaskInstanceModelView,
71 permissions.RESOURCE_TASK_INSTANCE,
72 category=permissions.RESOURCE_BROWSE_MENU,
73 )
74 appbuilder.add_view(
75 views.TaskRescheduleModelView,
76 permissions.RESOURCE_TASK_RESCHEDULE,
77 category=permissions.RESOURCE_BROWSE_MENU,
78 )
79 appbuilder.add_view(
80 views.TriggerModelView,
81 permissions.RESOURCE_TRIGGER,
82 category=permissions.RESOURCE_BROWSE_MENU,
83 )
84 appbuilder.add_view(
85 views.ConfigurationView,
86 permissions.RESOURCE_CONFIG,
87 category=permissions.RESOURCE_ADMIN_MENU,
88 category_icon="fa-user",
89 )
90 appbuilder.add_view(
91 views.ConnectionModelView, permissions.RESOURCE_CONNECTION, category=permissions.RESOURCE_ADMIN_MENU
92 )
93 appbuilder.add_view(
94 views.SlaMissModelView, permissions.RESOURCE_SLA_MISS, category=permissions.RESOURCE_BROWSE_MENU
95 )
96 appbuilder.add_view(
97 views.PluginView, permissions.RESOURCE_PLUGIN, category=permissions.RESOURCE_ADMIN_MENU
98 )
99 appbuilder.add_view(
100 views.ProviderView, permissions.RESOURCE_PROVIDER, category=permissions.RESOURCE_ADMIN_MENU
101 )
102 appbuilder.add_view(
103 views.PoolModelView, permissions.RESOURCE_POOL, category=permissions.RESOURCE_ADMIN_MENU
104 )
105 appbuilder.add_view(
106 views.XComModelView, permissions.RESOURCE_XCOM, category=permissions.RESOURCE_ADMIN_MENU
107 )
108 appbuilder.add_view(
109 views.DagDependenciesView,
110 permissions.RESOURCE_DAG_DEPENDENCIES,
111 category=permissions.RESOURCE_BROWSE_MENU,
112 )
113 # add_view_no_menu to change item position.
114 # I added link in extensions.init_appbuilder_links.init_appbuilder_links
115 appbuilder.add_view_no_menu(views.RedocView)
116
117
118 def init_plugins(app):
119 """Integrate Flask and FAB with plugins"""
120 from airflow import plugins_manager
121
122 plugins_manager.initialize_web_ui_plugins()
123
124 appbuilder = app.appbuilder
125
126 for view in plugins_manager.flask_appbuilder_views:
127 name = view.get('name')
128 if name:
129 log.debug("Adding view %s with menu", name)
130 appbuilder.add_view(view["view"], name, category=view["category"])
131 else:
132 # if 'name' key is missing, intent is to add view without menu
133 log.debug("Adding view %s without menu", str(type(view["view"])))
134 appbuilder.add_view_no_menu(view["view"])
135
136 for menu_link in sorted(plugins_manager.flask_appbuilder_menu_links, key=lambda x: x["name"]):
137 log.debug("Adding menu link %s to %s", menu_link["name"], menu_link["href"])
138 appbuilder.add_link(**menu_link)
139
140 for blue_print in plugins_manager.flask_blueprints:
141 log.debug("Adding blueprint %s:%s", blue_print["name"], blue_print["blueprint"].import_name)
142 app.register_blueprint(blue_print["blueprint"])
143
144
145 def init_connection_form():
146 """Initializes connection form"""
147 lazy_add_provider_discovered_options_to_connection_form()
148
149
150 def init_error_handlers(app: Flask):
151 """Add custom errors handlers"""
152 from airflow.www import views
153
154 app.register_error_handler(500, views.show_traceback)
155 app.register_error_handler(404, views.not_found)
156
157
158 def set_cors_headers_on_response(response):
159 """Add response headers"""
160 allow_headers = conf.get('api', 'access_control_allow_headers')
161 allow_methods = conf.get('api', 'access_control_allow_methods')
162 allow_origins = conf.get('api', 'access_control_allow_origins')
163 if allow_headers is not None:
164 response.headers['Access-Control-Allow-Headers'] = allow_headers
165 if allow_methods is not None:
166 response.headers['Access-Control-Allow-Methods'] = allow_methods
167 if allow_origins is not None:
168 allowed_origins = allow_origins.split(' ')
169 origin = request.environ.get('HTTP_ORIGIN', allowed_origins[0])
170 if origin in allowed_origins:
171 response.headers['Access-Control-Allow-Origin'] = origin
172 return response
173
174
175 def init_api_connexion(app: Flask) -> None:
176 """Initialize Stable API"""
177 base_path = '/api/v1'
178
179 from airflow.www import views
180
181 @app.errorhandler(404)
182 @app.errorhandler(405)
183 def _handle_api_error(ex):
184 if request.path.startswith(base_path):
185 # 404 errors are never handled on the blueprint level
186 # unless raised from a view func so actual 404 errors,
187 # i.e. "no route for it" defined, need to be handled
188 # here on the application level
189 return common_error_handler(ex)
190 else:
191 return views.not_found(ex)
192
193 spec_dir = path.join(ROOT_APP_DIR, 'api_connexion', 'openapi')
194 connexion_app = connexion.App(__name__, specification_dir=spec_dir, skip_error_handlers=True)
195 connexion_app.app = app
196 api_bp = connexion_app.add_api(
197 specification='v1.yaml', base_path=base_path, validate_responses=True, strict_validation=True
198 ).blueprint
199 # Like "api_bp.after_request", but the BP is already registered, so we have
200 # to register it in the app directly.
201 app.after_request_funcs.setdefault(api_bp.name, []).append(set_cors_headers_on_response)
202 app.register_error_handler(ProblemException, common_error_handler)
203 app.extensions['csrf'].exempt(api_bp)
204
205
206 def init_api_experimental(app):
207 """Initialize Experimental API"""
208 if not conf.getboolean('api', 'enable_experimental_api', fallback=False):
209 return
210 from airflow.www.api.experimental import endpoints
211
212 warnings.warn(
213 "The experimental REST API is deprecated. Please migrate to the stable REST API. "
214 "Please note that the experimental API does not have access control. "
215 "The authenticated user has full access.",
216 DeprecationWarning,
217 )
218 app.register_blueprint(endpoints.api_experimental, url_prefix='/api/experimental')
219 app.extensions['csrf'].exempt(endpoints.api_experimental)
220
[end of airflow/www/extensions/init_views.py]
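Because `set_cors_headers_on_response()` above depends on Airflow's `conf` object and an active Flask request, here is the same header logic restated as a standalone sketch that runs in isolation. The function name and the sample origins are illustrative, not part of Airflow.

```python
def cors_headers(origin_header, allowed_origins, allow_headers, allow_methods):
    """Standalone restatement of the logic in set_cors_headers_on_response()."""
    headers = {}
    if allow_headers is not None:
        headers['Access-Control-Allow-Headers'] = allow_headers
    if allow_methods is not None:
        headers['Access-Control-Allow-Methods'] = allow_methods
    if allowed_origins is not None:
        origins = allowed_origins.split(' ')
        # fall back to the first configured origin when the request sent none
        origin = origin_header or origins[0]
        if origin in origins:
            headers['Access-Control-Allow-Origin'] = origin
    return headers


print(cors_headers('https://ui.example.com',
                   'https://ui.example.com https://admin.example.com',
                   'content-type', 'GET,POST'))
```

The only behavioural differences are that the real function reads its inputs from `conf.get('api', ...)` and sets the headers directly on the Flask response object.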
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
20b1664685027ef7e1b9a5c9c30d1fddb600ed0f
|
Remove some more redundant parentheses
Removed redundant parentheses in airflow/_vendor/connexion/apis/flask_api.py and in several other files
|
Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst)
Here are some useful points:
- Pay attention to the quality of your code (flake8, mypy and type annotations). Our [pre-commits]( https://github.com/apache/airflow/blob/main/STATIC_CODE_CHECKS.rst#prerequisites-for-pre-commit-hooks) will help you with that.
- In case of a new feature, add useful documentation (in docstrings or in the `docs/` directory). Adding a new operator? Check this short [guide](https://github.com/apache/airflow/blob/main/docs/apache-airflow/howto/custom-operator.rst), and consider adding an example DAG that shows how users should use it.
- Consider using the [Breeze environment](https://github.com/apache/airflow/blob/main/BREEZE.rst) for testing locally; it’s a heavy Docker setup, but it ships with a working Airflow and a lot of integrations.
- Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
- Please follow [ASF Code of Conduct](https://www.apache.org/foundation/policies/conduct) for all communication including (but not limited to) comments on Pull Requests, Mailing list and Slack.
- Be sure to read the [Airflow Coding style]( https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#coding-style-and-best-practices).
Apache Airflow is a community-driven project and together we are making it better 🚀.
In case of doubts contact the developers at:
Mailing List: [email protected]
Slack: https://s.apache.org/airflow-slack
|
2021-11-27T13:28:04Z
|
<patch>
diff --git a/airflow/_vendor/connexion/apis/flask_api.py b/airflow/_vendor/connexion/apis/flask_api.py
--- a/airflow/_vendor/connexion/apis/flask_api.py
+++ b/airflow/_vendor/connexion/apis/flask_api.py
@@ -186,7 +186,7 @@ def _build_response(cls, mimetype, content_type=None, headers=None, status_code=
def _serialize_data(cls, data, mimetype):
# TODO: harmonize flask and aiohttp serialization when mimetype=None or mimetype is not JSON
# (cases where it might not make sense to jsonify the data)
- if (isinstance(mimetype, str) and is_json_mimetype(mimetype)):
+ if isinstance(mimetype, str) and is_json_mimetype(mimetype):
body = cls.jsonifier.dumps(data)
elif not (isinstance(data, bytes) or isinstance(data, str)):
warnings.warn(
diff --git a/airflow/_vendor/connexion/decorators/uri_parsing.py b/airflow/_vendor/connexion/decorators/uri_parsing.py
--- a/airflow/_vendor/connexion/decorators/uri_parsing.py
+++ b/airflow/_vendor/connexion/decorators/uri_parsing.py
@@ -108,7 +108,7 @@ def resolve_params(self, params, _in):
# multiple values in a path is impossible
values = [values]
- if (param_schema is not None and param_schema['type'] == 'array'):
+ if param_schema is not None and param_schema['type'] == 'array':
# resolve variable re-assignment, handle explode
values = self._resolve_param_duplicates(values, param_defn, _in)
# handle array styles
@@ -186,7 +186,7 @@ def _make_deep_object(k, v):
"""
root_key = k.split("[", 1)[0]
if k == root_key:
- return (k, v, False)
+ return k, v, False
key_path = re.findall(r'\[([^\[\]]*)\]', k)
root = prev = node = {}
for k in key_path:
@@ -194,7 +194,7 @@ def _make_deep_object(k, v):
prev = node
node = node[k]
prev[k] = v[0]
- return (root_key, [root], True)
+ return root_key, [root], True
def _preprocess_deep_objects(self, query_data):
""" deep objects provide a way of rendering nested objects using query
diff --git a/airflow/_vendor/connexion/operations/openapi.py b/airflow/_vendor/connexion/operations/openapi.py
--- a/airflow/_vendor/connexion/operations/openapi.py
+++ b/airflow/_vendor/connexion/operations/openapi.py
@@ -210,7 +210,7 @@ def example_response(self, status_code=None, content_type=None):
return (self._nested_example(deep_get(self._responses, schema_path)),
status_code)
except KeyError:
- return (None, status_code)
+ return None, status_code
def _nested_example(self, schema):
try:
diff --git a/airflow/_vendor/connexion/operations/swagger2.py b/airflow/_vendor/connexion/operations/swagger2.py
--- a/airflow/_vendor/connexion/operations/swagger2.py
+++ b/airflow/_vendor/connexion/operations/swagger2.py
@@ -203,7 +203,7 @@ def example_response(self, status_code=None, *args, **kwargs):
return (self._nested_example(deep_get(self._responses, schema_path)),
status_code)
except KeyError:
- return (None, status_code)
+ return None, status_code
def _nested_example(self, schema):
try:
</patch>
|
[]
|
[]
|
wagtail__wagtail-990
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error serving documents with Django Cache Middleware enabled
I get an error each time I serve an uploaded document (a PDF file) while my Django cache middleware is enabled. I'm using Redis as the cache backend; it probably happens with any backend, but I'm not sure. Some of the details are:
```
Django Version: 1.7.1
Exception Type: TypeError
Exception Value:
can't pickle instancemethod objects
Exception Location: /var/www/bcito/venvs/test.com/lib/python2.7/copy_reg.py in _reduce_ex, line 70
Python Executable: /usr/bin/uwsgi-core
Python Version: 2.7.6
```
I've found a workaround by editing the way I include the wagtaildocs urls.py in my app, making use of the _never_cache_ decorator. Instead of having:
```
urlpatterns = patterns(
...
url(r'^documents/', include(wagtaildocs_urls)),
...
```
Have this:
```
...
from wagtail.wagtaildocs.views import serve
...
urlpatterns = patterns(
...
url(r'^documents/(\d+)/(.*)$', never_cache(serve.serve), name='wagtaildocs_serve'),
...
```
Cheers,
Jordi
</issue>
<code>
[start of README.rst]
1 .. image:: https://travis-ci.org/torchbox/wagtail.png?branch=master
2 :target: https://travis-ci.org/torchbox/wagtail
3
4 .. image:: https://coveralls.io/repos/torchbox/wagtail/badge.png?branch=master&zxcv1
5 :target: https://coveralls.io/r/torchbox/wagtail?branch=master
6
7 .. image:: https://pypip.in/v/wagtail/badge.png?zxcv
8 :target: https://crate.io/packages/wagtail/
9
10 Wagtail CMS
11 ===========
12
13 .. image:: http://i.imgur.com/1OJwg4m.png
14
15 Wagtail is a Django content management system built originally for the `Royal College of Art <http://www.rca.ac.uk/>`_ and focused on flexibility and user experience. Its features include:
16
17 * A fast, attractive editor interface
18 * Complete control over design with standard Django templates
19 * Configure content types through standard Django models
20 * Tightly integrated search (with an `Elasticsearch <http://www.elasticsearch.org/>`_ backend for production)
21 * Strong document and image management
22 * Wide support for embedded content
23 * Simple, configurable permissions
24 * Support for tree-based content organisation
25 * Optional preview->submit->approve workflow
26 * Fast out of the box. `Varnish <https://www.varnish-cache.org/>`_-friendly if you need it
27 * A simple `form builder <http://docs.wagtail.io/en/latest/core_components/form_builder.html>`_
28 * Optional `static site generation <http://docs.wagtail.io/en/latest/contrib_components/static_site_generation.html>`_
29 * Excellent `test coverage <https://coveralls.io/r/torchbox/wagtail?branch=master>`_
30
31 Find out more at `wagtail.io <http://wagtail.io/>`_.
32
33 Got a question? Ask it on our `Google Group <https://groups.google.com/forum/#!forum/wagtail>`_.
34
35 Who's using it?
36 ~~~~~~~~~~~~~~~
37 We've a list of public Wagtail sites here: https://github.com/torchbox/wagtail/wiki/Public-Wagtail-sites
38
39 Got one of your own? Feel free to add it!
40
41
42 Getting started
43 ~~~~~~~~~~~~~~~
44 * To get you up and running quickly, we've provided a demonstration site with all the configuration in place, at `github.com/torchbox/wagtaildemo <https://github.com/torchbox/wagtaildemo/>`_; see the `README <https://github.com/torchbox/wagtaildemo/blob/master/README.md>`_ for installation instructions.
45 * See the `Getting Started <http://wagtail.readthedocs.org/en/latest/getting_started/installation.html>`_ docs for installation (with the demo app) on a fresh Debian/Ubuntu box with production-ready dependencies, on OS X and on a Vagrant box.
46 * `Serafeim Papastefanos <https://github.com/spapas>`_ has written a `tutorial <http://spapas.github.io/2014/02/13/wagtail-tutorial/>`_ with all the steps to build a simple Wagtail site from scratch.
47 * We've also provided a skeletal django-template to get started on a blank site: https://github.com/torchbox/wagtail-template
48
49 Documentation
50 ~~~~~~~~~~~~~
51 Available at `wagtail.readthedocs.org <http://wagtail.readthedocs.org/>`_ and always being updated.
52
53 Compatibility
54 ~~~~~~~~~~~~~
55 Wagtail supports Django 1.7.0+ on Python 2.7, 3.3 and 3.4.
56
57 Wagtail's dependencies are summarised at `requirements.io <https://requires.io/github/torchbox/wagtail/requirements>`_.
58
59 Contributing
60 ~~~~~~~~~~~~
61 If you're a Python or Django developer, fork the repo and get stuck in!
62
63 We suggest you start by checking the `Help develop me! <https://github.com/torchbox/wagtail/labels/Help%20develop%20me%21>`_ label and the `coding guidelines <http://wagtail.readthedocs.org/en/latest/howto/contributing.html#coding-guidelines>`_.
64
65 Send us a useful pull request and we'll post you a `t-shirt <https://twitter.com/WagtailCMS/status/432166799464210432/photo/1>`_.
66
67 We also welcome `translations <http://wagtail.readthedocs.org/en/latest/howto/contributing.html#translations>`_ for Wagtail's interface.
68
[end of README.rst]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Wagtail documentation build configuration file, created by
4 # sphinx-quickstart on Tue Jan 14 17:38:55 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17
18
19 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
20 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
21
22 if not on_rtd: # only import and set the theme if we're building docs locally
23 import sphinx_rtd_theme
24 html_theme = 'sphinx_rtd_theme'
25 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
26
27 # If extensions (or modules to document with autodoc) are in another directory,
28 # add these directories to sys.path here. If the directory is relative to the
29 # documentation root, use os.path.abspath to make it absolute, like shown here.
30 sys.path.insert(0, os.path.abspath('..'))
31
32 # Get Wagtail version
33 from wagtail.wagtailcore import __version__
34
35 # Autodoc may need to import some models modules which require django settings
36 # be configured
37 os.environ['DJANGO_SETTINGS_MODULE'] = 'wagtail.tests.settings'
38
39 # Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD
40 os.environ['DATABASE_ENGINE'] = 'django.db.backends.sqlite3'
41
42
43 # -- General configuration ------------------------------------------------
44
45 # If your documentation needs a minimal Sphinx version, state it here.
46 #needs_sphinx = '1.0'
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
50 # ones.
51 extensions = [
52 'sphinx.ext.autodoc',
53 ]
54
55 # Add any paths that contain templates here, relative to this directory.
56 templates_path = ['_templates']
57
58 # The suffix of source filenames.
59 source_suffix = '.rst'
60
61 # The encoding of source files.
62 #source_encoding = 'utf-8-sig'
63
64 # The master toctree document.
65 master_doc = 'index'
66
67 # General information about the project.
68 project = u'Wagtail'
69 copyright = u'2014, Torchbox'
70
71 # The version info for the project you're documenting, acts as replacement for
72 # |version| and |release|, also used in various other places throughout the
73 # built documents.
74 #
75 # The short X.Y version.
76 version = __version__
77 # The full version, including alpha/beta/rc tags.
78 release = __version__
79
80 # The language for content autogenerated by Sphinx. Refer to documentation
81 # for a list of supported languages.
82 #language = None
83
84 # There are two options for replacing |today|: either, you set today to some
85 # non-false value, then it is used:
86 #today = ''
87 # Else, today_fmt is used as the format for a strftime call.
88 #today_fmt = '%B %d, %Y'
89
90 # List of patterns, relative to source directory, that match files and
91 # directories to ignore when looking for source files.
92 exclude_patterns = ['_build']
93
94 # The reST default role (used for this markup: `text`) to use for all
95 # documents.
96 #default_role = None
97
98 # If true, '()' will be appended to :func: etc. cross-reference text.
99 #add_function_parentheses = True
100
101 # If true, the current module name will be prepended to all description
102 # unit titles (such as .. function::).
103 #add_module_names = True
104
105 # If true, sectionauthor and moduleauthor directives will be shown in the
106 # output. They are ignored by default.
107 #show_authors = False
108
109 # The name of the Pygments (syntax highlighting) style to use.
110 pygments_style = 'sphinx'
111
112 # A list of ignored prefixes for module index sorting.
113 #modindex_common_prefix = []
114
115 # If true, keep warnings as "system message" paragraphs in the built documents.
116 #keep_warnings = False
117
118
119 # -- Options for HTML output ----------------------------------------------
120
121
122 # Theme options are theme-specific and customize the look and feel of a theme
123 # further. For a list of options available for each theme, see the
124 # documentation.
125 #html_theme_options = {}
126
127
128
129 # The name for this set of Sphinx documents. If None, it defaults to
130 # "<project> v<release> documentation".
131 #html_title = None
132
133 # A shorter title for the navigation bar. Default is the same as html_title.
134 #html_short_title = None
135
136 # The name of an image file (relative to this directory) to place at the top
137 # of the sidebar.
138 #html_logo = None
139
140 # The name of an image file (within the static path) to use as favicon of the
141 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
142 # pixels large.
143 #html_favicon = None
144
145 # Add any paths that contain custom static files (such as style sheets) here,
146 # relative to this directory. They are copied after the builtin static files,
147 # so a file named "default.css" will overwrite the builtin "default.css".
148 html_static_path = ['_static']
149
150 # Add any extra paths that contain custom files (such as robots.txt or
151 # .htaccess) here, relative to this directory. These files are copied
152 # directly to the root of the documentation.
153 #html_extra_path = []
154
155 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
156 # using the given strftime format.
157 #html_last_updated_fmt = '%b %d, %Y'
158
159 # If true, SmartyPants will be used to convert quotes and dashes to
160 # typographically correct entities.
161 #html_use_smartypants = True
162
163 # Custom sidebar templates, maps document names to template names.
164 #html_sidebars = {}
165
166 # Additional templates that should be rendered to pages, maps page names to
167 # template names.
168 #html_additional_pages = {}
169
170 # If false, no module index is generated.
171 #html_domain_indices = True
172
173 # If false, no index is generated.
174 #html_use_index = True
175
176 # If true, the index is split into individual pages for each letter.
177 #html_split_index = False
178
179 # If true, links to the reST sources are added to the pages.
180 #html_show_sourcelink = True
181
182 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
183 #html_show_sphinx = True
184
185 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
186 #html_show_copyright = True
187
188 # If true, an OpenSearch description file will be output, and all pages will
189 # contain a <link> tag referring to it. The value of this option must be the
190 # base URL from which the finished HTML is served.
191 #html_use_opensearch = ''
192
193 # This is the file name suffix for HTML files (e.g. ".xhtml").
194 #html_file_suffix = None
195
196 # Output file base name for HTML help builder.
197 htmlhelp_basename = 'Wagtaildoc'
198
199
200 # -- Options for LaTeX output ---------------------------------------------
201
202 latex_elements = {
203 # The paper size ('letterpaper' or 'a4paper').
204 #'papersize': 'letterpaper',
205
206 # The font size ('10pt', '11pt' or '12pt').
207 #'pointsize': '10pt',
208
209 # Additional stuff for the LaTeX preamble.
210 #'preamble': '',
211 }
212
213 # Grouping the document tree into LaTeX files. List of tuples
214 # (source start file, target name, title,
215 # author, documentclass [howto, manual, or own class]).
216 latex_documents = [
217 ('index', 'Wagtail.tex', u'Wagtail Documentation',
218 u'Torchbox', 'manual'),
219 ]
220
221 # The name of an image file (relative to this directory) to place at the top of
222 # the title page.
223 #latex_logo = None
224
225 # For "manual" documents, if this is true, then toplevel headings are parts,
226 # not chapters.
227 #latex_use_parts = False
228
229 # If true, show page references after internal links.
230 #latex_show_pagerefs = False
231
232 # If true, show URL addresses after external links.
233 #latex_show_urls = False
234
235 # Documents to append as an appendix to all manuals.
236 #latex_appendices = []
237
238 # If false, no module index is generated.
239 #latex_domain_indices = True
240
241
242 # -- Options for manual page output ---------------------------------------
243
244 # One entry per manual page. List of tuples
245 # (source start file, name, description, authors, manual section).
246 man_pages = [
247 ('index', 'wagtail', u'Wagtail Documentation',
248 [u'Torchbox'], 1)
249 ]
250
251 # If true, show URL addresses after external links.
252 #man_show_urls = False
253
254
255 # -- Options for Texinfo output -------------------------------------------
256
257 # Grouping the document tree into Texinfo files. List of tuples
258 # (source start file, target name, title, author,
259 # dir menu entry, description, category)
260 texinfo_documents = [
261 ('index', 'Wagtail', u'Wagtail Documentation',
262 u'Torchbox', 'Wagtail', 'One line description of project.',
263 'Miscellaneous'),
264 ]
265
266 # Documents to append as an appendix to all manuals.
267 #texinfo_appendices = []
268
269 # If false, no module index is generated.
270 #texinfo_domain_indices = True
271
272 # How to display URL addresses: 'footnote', 'no', or 'inline'.
273 #texinfo_show_urls = 'footnote'
274
275 # If true, do not generate a @detailmenu in the "Top" node's menu.
276 #texinfo_no_detailmenu = False
277
[end of docs/conf.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 import sys, os
4
5 from wagtail.wagtailcore import __version__
6
7
8 try:
9 from setuptools import setup, find_packages
10 except ImportError:
11 from distutils.core import setup
12
13
14 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
15 # in multiprocessing/util.py _exit_function when setup.py exits
16 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
17 try:
18 import multiprocessing
19 except ImportError:
20 pass
21
22
23 # Disable parallel builds, because Pillow 2.5.3 does some crazy monkeypatching of
24 # the build process on multicore systems, which breaks installation of libsass
25 os.environ['MAX_CONCURRENCY'] = '1'
26
27 PY3 = sys.version_info[0] == 3
28
29
30 install_requires = [
31 "Django>=1.7.0,<1.8",
32 "django-compressor>=1.4",
33 "django-libsass>=0.2",
34 "django-modelcluster>=0.5",
35 "django-taggit==0.12.2",
36 "django-treebeard==3.0",
37 "Pillow>=2.6.1",
38 "beautifulsoup4>=4.3.2",
39 "html5lib==0.999",
40 "Unidecode>=0.04.14",
41 "six>=1.7.0",
42 'requests>=2.0.0',
43 "Willow==0.1",
44 ]
45
46
47 if not PY3:
48 install_requires += [
49 "unicodecsv>=0.9.4"
50 ]
51
52
53 setup(
54 name='wagtail',
55 version=__version__,
56 description='A Django content management system focused on flexibility and user experience',
57 author='Matthew Westcott',
58 author_email='[email protected]',
59 url='http://wagtail.io/',
60 packages=find_packages(),
61 include_package_data=True,
62 license='BSD',
63 long_description=open('README.rst').read(),
64 classifiers=[
65 'Development Status :: 5 - Production/Stable',
66 'Environment :: Web Environment',
67 'Intended Audience :: Developers',
68 'License :: OSI Approved :: BSD License',
69 'Operating System :: OS Independent',
70 'Programming Language :: Python',
71 'Programming Language :: Python :: 2',
72 'Programming Language :: Python :: 2.7',
73 'Programming Language :: Python :: 3',
74 'Programming Language :: Python :: 3.3',
75 'Programming Language :: Python :: 3.4',
76 'Framework :: Django',
77 'Topic :: Internet :: WWW/HTTP :: Site Management',
78 ],
79 install_requires=install_requires,
80 entry_points="""
81 [console_scripts]
82 wagtail=wagtail.bin.wagtail:main
83 """,
84 zip_safe=False,
85 )
86
[end of setup.py]
[start of wagtail/project_template/project_name/settings/production.py]
1 from .base import *
2
3
4 # Disable debug mode
5
6 DEBUG = False
7 TEMPLATE_DEBUG = False
8
9
10 # Compress static files offline
11 # http://django-compressor.readthedocs.org/en/latest/settings/#django.conf.settings.COMPRESS_OFFLINE
12
13 COMPRESS_OFFLINE = True
14
15
16 # Send notification emails as a background task using Celery,
17 # to prevent this from blocking web server threads
18 # (requires the django-celery package):
19 # http://celery.readthedocs.org/en/latest/configuration.html
20
21 # import djcelery
22 #
23 # djcelery.setup_loader()
24 #
25 # CELERY_SEND_TASK_ERROR_EMAILS = True
26 # BROKER_URL = 'redis://'
27
28
29 # Use Redis as the cache backend for extra performance
30 # (requires the django-redis-cache package):
31 # http://wagtail.readthedocs.org/en/latest/howto/performance.html#cache
32
33 # CACHES = {
34 # 'default': {
35 # 'BACKEND': 'redis_cache.cache.RedisCache',
36 # 'LOCATION': '127.0.0.1:6379',
37 # 'KEY_PREFIX': '{{ project_name }}',
38 # 'OPTIONS': {
39 # 'CLIENT_CLASS': 'redis_cache.client.DefaultClient',
40 # }
41 # }
42 # }
43
44
45 try:
46 from .local import *
47 except ImportError:
48 pass
49
[end of wagtail/project_template/project_name/settings/production.py]
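The commented-out cache block above uses the same django-redis-cache backend mentioned in the issue for this instance. For context, a hypothetical Django 1.7-style settings fragment reproducing the reporter's setup (site-wide cache middleware plus a Redis cache) could look like the sketch below; every value is a placeholder.

```python
# Hypothetical settings fragment, not part of the template above.
MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',     # must come first
    'django.middleware.common.CommonMiddleware',
    # ... the project's other middleware ...
    'django.middleware.cache.FetchFromCacheMiddleware',  # must come last
)

CACHES = {
    'default': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': '127.0.0.1:6379',
    }
}

CACHE_MIDDLEWARE_SECONDS = 300  # how long whole responses are kept in the cache
```

With this enabled, cacheable responses, including the one returned by the document serve view, get pickled into the cache, which is presumably where the `can't pickle instancemethod objects` error in the issue originates.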
[start of wagtail/project_template/project_name/urls.py]
1 import os
2
3 from django.conf.urls import patterns, include, url
4 from django.conf.urls.static import static
5 from django.conf import settings
6 from django.contrib import admin
7
8 from wagtail.wagtailadmin import urls as wagtailadmin_urls
9 from wagtail.wagtailsearch import urls as wagtailsearch_urls
10 from wagtail.wagtaildocs import urls as wagtaildocs_urls
11 from wagtail.wagtailcore import urls as wagtail_urls
12
13
14 urlpatterns = patterns('',
15 url(r'^django-admin/', include(admin.site.urls)),
16
17 url(r'^admin/', include(wagtailadmin_urls)),
18 url(r'^search/', include(wagtailsearch_urls)),
19 url(r'^documents/', include(wagtaildocs_urls)),
20
21 url(r'', include(wagtail_urls)),
22 )
23
24
25 if settings.DEBUG:
26 from django.contrib.staticfiles.urls import staticfiles_urlpatterns
27
28 urlpatterns += staticfiles_urlpatterns()
29 urlpatterns += static(settings.MEDIA_URL + 'images/', document_root=os.path.join(settings.MEDIA_ROOT, 'images'))
30
[end of wagtail/project_template/project_name/urls.py]
[start of wagtail/wagtailadmin/urls.py]
1 from django.conf.urls import url
2 from django.contrib.auth.decorators import permission_required
3 from django.views.decorators.cache import cache_control
4
5 from wagtail.wagtailadmin.forms import PasswordResetForm
6 from wagtail.wagtailadmin.views import account, chooser, home, pages, tags, userbar, page_privacy
7 from wagtail.wagtailcore import hooks
8 from wagtail.utils.urlpatterns import decorate_urlpatterns
9
10
11 urlpatterns = [
12 url(r'^$', home.home, name='wagtailadmin_home'),
13
14 url(r'^failwhale/$', home.error_test, name='wagtailadmin_error_test'),
15
16 url(r'^explorer-nav/$', pages.explorer_nav, name='wagtailadmin_explorer_nav'),
17
18 url(r'^pages/$', pages.index, name='wagtailadmin_explore_root'),
19 url(r'^pages/(\d+)/$', pages.index, name='wagtailadmin_explore'),
20
21 url(r'^pages/new/(\w+)/(\w+)/(\d+)/$', pages.create, name='wagtailadmin_pages_create'),
22 url(r'^pages/new/(\w+)/(\w+)/(\d+)/preview/$', pages.preview_on_create, name='wagtailadmin_pages_preview_on_create'),
23 url(r'^pages/usage/(\w+)/(\w+)/$', pages.content_type_use, name='wagtailadmin_pages_type_use'),
24
25 url(r'^pages/(\d+)/edit/$', pages.edit, name='wagtailadmin_pages_edit'),
26 url(r'^pages/(\d+)/edit/preview/$', pages.preview_on_edit, name='wagtailadmin_pages_preview_on_edit'),
27
28 url(r'^pages/preview/$', pages.preview, name='wagtailadmin_pages_preview'),
29 url(r'^pages/preview_loading/$', pages.preview_loading, name='wagtailadmin_pages_preview_loading'),
30
31 url(r'^pages/(\d+)/view_draft/$', pages.view_draft, name='wagtailadmin_pages_view_draft'),
32 url(r'^pages/(\d+)/add_subpage/$', pages.add_subpage, name='wagtailadmin_pages_add_subpage'),
33 url(r'^pages/(\d+)/delete/$', pages.delete, name='wagtailadmin_pages_delete'),
34 url(r'^pages/(\d+)/unpublish/$', pages.unpublish, name='wagtailadmin_pages_unpublish'),
35
36 url(r'^pages/search/$', pages.search, name='wagtailadmin_pages_search'),
37
38 url(r'^pages/(\d+)/move/$', pages.move_choose_destination, name='wagtailadmin_pages_move'),
39 url(r'^pages/(\d+)/move/(\d+)/$', pages.move_choose_destination, name='wagtailadmin_pages_move_choose_destination'),
40 url(r'^pages/(\d+)/move/(\d+)/confirm/$', pages.move_confirm, name='wagtailadmin_pages_move_confirm'),
41 url(r'^pages/(\d+)/set_position/$', pages.set_page_position, name='wagtailadmin_pages_set_page_position'),
42
43 url(r'^pages/(\d+)/copy/$', pages.copy, name='wagtailadmin_pages_copy'),
44
45 url(r'^pages/moderation/(\d+)/approve/$', pages.approve_moderation, name='wagtailadmin_pages_approve_moderation'),
46 url(r'^pages/moderation/(\d+)/reject/$', pages.reject_moderation, name='wagtailadmin_pages_reject_moderation'),
47 url(r'^pages/moderation/(\d+)/preview/$', pages.preview_for_moderation, name='wagtailadmin_pages_preview_for_moderation'),
48
49 url(r'^pages/(\d+)/privacy/$', page_privacy.set_privacy, name='wagtailadmin_pages_set_privacy'),
50
51 url(r'^pages/(\d+)/lock/$', pages.lock, name='wagtailadmin_pages_lock'),
52 url(r'^pages/(\d+)/unlock/$', pages.unlock, name='wagtailadmin_pages_unlock'),
53
54 url(r'^choose-page/$', chooser.browse, name='wagtailadmin_choose_page'),
55 url(r'^choose-page/(\d+)/$', chooser.browse, name='wagtailadmin_choose_page_child'),
56 url(r'^choose-external-link/$', chooser.external_link, name='wagtailadmin_choose_page_external_link'),
57 url(r'^choose-email-link/$', chooser.email_link, name='wagtailadmin_choose_page_email_link'),
58
59 url(r'^tag-autocomplete/$', tags.autocomplete, name='wagtailadmin_tag_autocomplete'),
60
61 url(r'^account/$', account.account, name='wagtailadmin_account'),
62 url(r'^account/change_password/$', account.change_password, name='wagtailadmin_account_change_password'),
63 url(r'^account/notification_preferences/$', account.notification_preferences, name='wagtailadmin_account_notification_preferences'),
64 url(r'^logout/$', account.logout, name='wagtailadmin_logout'),
65 ]
66
67
68 # Import additional urlpatterns from any apps that define a register_admin_urls hook
69 for fn in hooks.get_hooks('register_admin_urls'):
70 urls = fn()
71 if urls:
72 urlpatterns += urls
73
74
75 # Add "wagtailadmin.access_admin" permission check
76 urlpatterns = decorate_urlpatterns(urlpatterns,
77 permission_required(
78 'wagtailadmin.access_admin',
79 login_url='wagtailadmin_login'
80 )
81 )
82
83
84 # These url patterns do not require an authenticated admin user
85 urlpatterns += [
86 url(r'^login/$', account.login, name='wagtailadmin_login'),
87
88 # These two URLs have the "permission_required" decorator applied directly
89 # as they need to fail with a 403 error rather than redirect to the login page
90 url(r'^userbar/(\d+)/$', userbar.for_frontend, name='wagtailadmin_userbar_frontend'),
91 url(r'^userbar/moderation/(\d+)/$', userbar.for_moderation, name='wagtailadmin_userbar_moderation'),
92
93 # Password reset
94 url(
95 r'^password_reset/$', 'django.contrib.auth.views.password_reset', {
96 'template_name': 'wagtailadmin/account/password_reset/form.html',
97 'email_template_name': 'wagtailadmin/account/password_reset/email.txt',
98 'subject_template_name': 'wagtailadmin/account/password_reset/email_subject.txt',
99 'password_reset_form': PasswordResetForm,
100 'post_reset_redirect': 'wagtailadmin_password_reset_done',
101 }, name='wagtailadmin_password_reset'
102 ),
103 url(
104 r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done', {
105 'template_name': 'wagtailadmin/account/password_reset/done.html'
106 }, name='wagtailadmin_password_reset_done'
107 ),
108 url(
109 r'^password_reset/confirm/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
110 'django.contrib.auth.views.password_reset_confirm', {
111 'template_name': 'wagtailadmin/account/password_reset/confirm.html',
112 'post_reset_redirect': 'wagtailadmin_password_reset_complete',
113 }, name='wagtailadmin_password_reset_confirm',
114 ),
115 url(
116 r'^password_reset/complete/$', 'django.contrib.auth.views.password_reset_complete',{
117 'template_name': 'wagtailadmin/account/password_reset/complete.html'
118 }, name='wagtailadmin_password_reset_complete'
119 ),
120 ]
121
122 # Decorate all views with cache settings to prevent caching
123 urlpatterns = decorate_urlpatterns(urlpatterns,
124 cache_control(private=True, no_cache=True, no_store=True, max_age=0)
125 )
126
[end of wagtail/wagtailadmin/urls.py]
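The `decorate_urlpatterns()` + `cache_control(...)` pattern at the end of this file hints at one way the same no-caching treatment could be given to the document-serving URLs. Whether this is how the issue is actually fixed upstream is not shown here, so treat the following as a sketch.

```python
# Sketch only: reuse decorate_urlpatterns() to wrap the document-serving views
# in never_cache, mirroring what wagtailadmin does above with cache_control.
from django.views.decorators.cache import never_cache

from wagtail.utils.urlpatterns import decorate_urlpatterns
from wagtail.wagtaildocs import urls as wagtaildocs_urls

docs_urlpatterns = decorate_urlpatterns(wagtaildocs_urls.urlpatterns, never_cache)
```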
[start of wagtail/wagtaildocs/admin_urls.py]
1 from django.conf.urls import url
2 from wagtail.wagtaildocs.views import documents, chooser
3
4
5 urlpatterns = [
6 url(r'^$', documents.index, name='wagtaildocs_index'),
7 url(r'^add/$', documents.add, name='wagtaildocs_add_document'),
8 url(r'^edit/(\d+)/$', documents.edit, name='wagtaildocs_edit_document'),
9 url(r'^delete/(\d+)/$', documents.delete, name='wagtaildocs_delete_document'),
10
11 url(r'^chooser/$', chooser.chooser, name='wagtaildocs_chooser'),
12 url(r'^chooser/(\d+)/$', chooser.document_chosen, name='wagtaildocs_document_chosen'),
13 url(r'^chooser/upload/$', chooser.chooser_upload, name='wagtaildocs_chooser_upload'),
14 url(r'^usage/(\d+)/$', documents.usage, name='wagtaildocs_document_usage'),
15 ]
16
[end of wagtail/wagtaildocs/admin_urls.py]
[start of wagtail/wagtaildocs/urls.py]
1 from django.conf.urls import url
2 from wagtail.wagtaildocs.views import serve
3
4 urlpatterns = [
5 url(r'^(\d+)/(.*)$', serve.serve, name='wagtaildocs_serve'),
6 ]
7
[end of wagtail/wagtaildocs/urls.py]
[start of wagtail/wagtaildocs/views/documents.py]
1 from django.shortcuts import render, redirect, get_object_or_404
2 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
3 from django.contrib.auth.decorators import permission_required
4 from django.core.exceptions import PermissionDenied
5 from django.utils.translation import ugettext as _
6 from django.views.decorators.vary import vary_on_headers
7 from django.core.urlresolvers import reverse
8
9 from wagtail.wagtailadmin.forms import SearchForm
10 from wagtail.wagtailsearch.backends import get_search_backends
11 from wagtail.wagtailadmin import messages
12
13 from wagtail.wagtaildocs.models import Document
14 from wagtail.wagtaildocs.forms import DocumentForm
15
16
17 @permission_required('wagtaildocs.add_document')
18 @vary_on_headers('X-Requested-With')
19 def index(request):
20 # Get documents
21 documents = Document.objects.all()
22
23 # Ordering
24 if 'ordering' in request.GET and request.GET['ordering'] in ['title', '-created_at']:
25 ordering = request.GET['ordering']
26 else:
27 ordering = '-created_at'
28 documents = documents.order_by(ordering)
29
30 # Permissions
31 if not request.user.has_perm('wagtaildocs.change_document'):
32 # restrict to the user's own documents
33 documents = documents.filter(uploaded_by_user=request.user)
34
35 # Search
36 query_string = None
37 if 'q' in request.GET:
38 form = SearchForm(request.GET, placeholder=_("Search documents"))
39 if form.is_valid():
40 query_string = form.cleaned_data['q']
41 if not request.user.has_perm('wagtaildocs.change_document'):
42 # restrict to the user's own documents
43 documents = Document.search(query_string, filters={'uploaded_by_user_id': request.user.id})
44 else:
45 documents = Document.search(query_string)
46 else:
47 form = SearchForm(placeholder=_("Search documents"))
48
49 # Pagination
50 p = request.GET.get('p', 1)
51 paginator = Paginator(documents, 20)
52
53 try:
54 documents = paginator.page(p)
55 except PageNotAnInteger:
56 documents = paginator.page(1)
57 except EmptyPage:
58 documents = paginator.page(paginator.num_pages)
59
60 # Create response
61 if request.is_ajax():
62 return render(request, 'wagtaildocs/documents/results.html', {
63 'ordering': ordering,
64 'documents': documents,
65 'query_string': query_string,
66 'is_searching': bool(query_string),
67 })
68 else:
69 return render(request, 'wagtaildocs/documents/index.html', {
70 'ordering': ordering,
71 'documents': documents,
72 'query_string': query_string,
73 'is_searching': bool(query_string),
74
75 'search_form': form,
76 'popular_tags': Document.popular_tags(),
77 })
78
79
80 @permission_required('wagtaildocs.add_document')
81 def add(request):
82 if request.POST:
83 doc = Document(uploaded_by_user=request.user)
84 form = DocumentForm(request.POST, request.FILES, instance=doc)
85 if form.is_valid():
86 form.save()
87
88 # Reindex the document to make sure all tags are indexed
89 for backend in get_search_backends():
90 backend.add(doc)
91
92 messages.success(request, _("Document '{0}' added.").format(doc.title), buttons=[
93 messages.button(reverse('wagtaildocs_edit_document', args=(doc.id,)), _('Edit'))
94 ])
95 return redirect('wagtaildocs_index')
96 else:
97 messages.error(request, _("The document could not be saved due to errors."))
98 else:
99 form = DocumentForm()
100
101 return render(request, "wagtaildocs/documents/add.html", {
102 'form': form,
103 })
104
105
106 def edit(request, document_id):
107 doc = get_object_or_404(Document, id=document_id)
108
109 if not doc.is_editable_by_user(request.user):
110 raise PermissionDenied
111
112 if request.POST:
113 original_file = doc.file
114 form = DocumentForm(request.POST, request.FILES, instance=doc)
115 if form.is_valid():
116 if 'file' in form.changed_data:
117 # if providing a new document file, delete the old one.
118 # NB Doing this via original_file.delete() clears the file field,
119 # which definitely isn't what we want...
120 original_file.storage.delete(original_file.name)
121 doc = form.save()
122
123 # Reindex the document to make sure all tags are indexed
124 for backend in get_search_backends():
125 backend.add(doc)
126
127 messages.success(request, _("Document '{0}' updated").format(doc.title), buttons=[
128 messages.button(reverse('wagtaildocs_edit_document', args=(doc.id,)), _('Edit'))
129 ])
130 return redirect('wagtaildocs_index')
131 else:
132 messages.error(request, _("The document could not be saved due to errors."))
133 else:
134 form = DocumentForm(instance=doc)
135
136 return render(request, "wagtaildocs/documents/edit.html", {
137 'document': doc,
138 'form': form
139 })
140
141
142 def delete(request, document_id):
143 doc = get_object_or_404(Document, id=document_id)
144
145 if not doc.is_editable_by_user(request.user):
146 raise PermissionDenied
147
148 if request.POST:
149 doc.delete()
150 messages.success(request, _("Document '{0}' deleted.").format(doc.title))
151 return redirect('wagtaildocs_index')
152
153 return render(request, "wagtaildocs/documents/confirm_delete.html", {
154 'document': doc,
155 })
156
157
158 def usage(request, document_id):
159 doc = get_object_or_404(Document, id=document_id)
160
161 # Pagination
162 p = request.GET.get('p', 1)
163 paginator = Paginator(doc.get_usage(), 20)
164
165 try:
166 used_by = paginator.page(p)
167 except PageNotAnInteger:
168 used_by = paginator.page(1)
169 except EmptyPage:
170 used_by = paginator.page(paginator.num_pages)
171
172 return render(request, "wagtaildocs/documents/usage.html", {
173 'document': doc,
174 'used_by': used_by
175 })
176
[end of wagtail/wagtaildocs/views/documents.py]
[start of wagtail/wagtailimages/views/images.py]
1 import json
2
3 from django.shortcuts import render, redirect, get_object_or_404
4 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
5 from django.contrib.auth.decorators import permission_required
6 from django.core.exceptions import PermissionDenied
7 from django.utils.translation import ugettext as _
8 from django.views.decorators.vary import vary_on_headers
9 from django.core.urlresolvers import reverse, NoReverseMatch
10 from django.http import HttpResponse
11
12 from wagtail.wagtailcore.models import Site
13 from wagtail.wagtailadmin.forms import SearchForm
14 from wagtail.wagtailadmin import messages
15 from wagtail.wagtailsearch.backends import get_search_backends
16
17 from wagtail.wagtailimages.models import get_image_model, Filter
18 from wagtail.wagtailimages.forms import get_image_form, URLGeneratorForm
19 from wagtail.wagtailimages.utils import generate_signature
20 from wagtail.wagtailimages.fields import MAX_UPLOAD_SIZE
21 from wagtail.wagtailimages.exceptions import InvalidFilterSpecError
22
23
24 @permission_required('wagtailimages.add_image')
25 @vary_on_headers('X-Requested-With')
26 def index(request):
27 Image = get_image_model()
28
29 # Get images
30 images = Image.objects.order_by('-created_at')
31
32 # Permissions
33 if not request.user.has_perm('wagtailimages.change_image'):
34 # restrict to the user's own images
35 images = images.filter(uploaded_by_user=request.user)
36
37 # Search
38 query_string = None
39 if 'q' in request.GET:
40 form = SearchForm(request.GET, placeholder=_("Search images"))
41 if form.is_valid():
42 query_string = form.cleaned_data['q']
43
44 if not request.user.has_perm('wagtailimages.change_image'):
45 # restrict to the user's own images
46 images = Image.search(query_string, filters={'uploaded_by_user_id': request.user.id})
47 else:
48 images = Image.search(query_string)
49 else:
50 form = SearchForm(placeholder=_("Search images"))
51
52 # Pagination
53 p = request.GET.get('p', 1)
54 paginator = Paginator(images, 20)
55
56 try:
57 images = paginator.page(p)
58 except PageNotAnInteger:
59 images = paginator.page(1)
60 except EmptyPage:
61 images = paginator.page(paginator.num_pages)
62
63 # Create response
64 if request.is_ajax():
65 return render(request, 'wagtailimages/images/results.html', {
66 'images': images,
67 'query_string': query_string,
68 'is_searching': bool(query_string),
69 })
70 else:
71 return render(request, 'wagtailimages/images/index.html', {
72 'images': images,
73 'query_string': query_string,
74 'is_searching': bool(query_string),
75
76 'search_form': form,
77 'popular_tags': Image.popular_tags(),
78 })
79
80
81 def edit(request, image_id):
82 Image = get_image_model()
83 ImageForm = get_image_form(Image)
84
85 image = get_object_or_404(Image, id=image_id)
86
87 if not image.is_editable_by_user(request.user):
88 raise PermissionDenied
89
90 if request.POST:
91 original_file = image.file
92 form = ImageForm(request.POST, request.FILES, instance=image)
93 if form.is_valid():
94 if 'file' in form.changed_data:
95 # if providing a new image file, delete the old one and all renditions.
96 # NB Doing this via original_file.delete() clears the file field,
97 # which definitely isn't what we want...
98 original_file.storage.delete(original_file.name)
99 image.renditions.all().delete()
100 form.save()
101
102 # Reindex the image to make sure all tags are indexed
103 for backend in get_search_backends():
104 backend.add(image)
105
106 messages.success(request, _("Image '{0}' updated.").format(image.title), buttons=[
107 messages.button(reverse('wagtailimages_edit_image', args=(image.id,)), _('Edit again'))
108 ])
109 return redirect('wagtailimages_index')
110 else:
111 messages.error(request, _("The image could not be saved due to errors."))
112 else:
113 form = ImageForm(instance=image)
114
115 # Check if we should enable the frontend url generator
116 try:
117 reverse('wagtailimages_serve', args=('foo', '1', 'bar'))
118 url_generator_enabled = True
119 except NoReverseMatch:
120 url_generator_enabled = False
121
122 return render(request, "wagtailimages/images/edit.html", {
123 'image': image,
124 'form': form,
125 'url_generator_enabled': url_generator_enabled,
126 })
127
128
129 def url_generator(request, image_id):
130 image = get_object_or_404(get_image_model(), id=image_id)
131
132 if not image.is_editable_by_user(request.user):
133 raise PermissionDenied
134
135 form = URLGeneratorForm(initial={
136 'filter_method': 'original',
137 'width': image.width,
138 'height': image.height,
139 })
140
141 return render(request, "wagtailimages/images/url_generator.html", {
142 'image': image,
143 'form': form,
144 })
145
146
147 def json_response(document, status=200):
148 return HttpResponse(json.dumps(document), content_type='application/json', status=status)
149
150
151 def generate_url(request, image_id, filter_spec):
152 # Get the image
153 Image = get_image_model()
154 try:
155 image = Image.objects.get(id=image_id)
156 except Image.DoesNotExist:
157 return json_response({
158 'error': "Cannot find image."
159 }, status=404)
160
161 # Check if this user has edit permission on this image
162 if not image.is_editable_by_user(request.user):
163 return json_response({
164 'error': "You do not have permission to generate a URL for this image."
165 }, status=403)
166
167 # Parse the filter spec to make sure its valid
168 try:
169 Filter(spec=filter_spec).operations
170 except InvalidFilterSpecError:
171 return json_response({
172 'error': "Invalid filter spec."
173 }, status=400)
174
175 # Generate url
176 signature = generate_signature(image_id, filter_spec)
177 url = reverse('wagtailimages_serve', args=(signature, image_id, filter_spec))
178
179 # Get site root url
180 try:
181 site_root_url = Site.objects.get(is_default_site=True).root_url
182 except Site.DoesNotExist:
183 site_root_url = Site.objects.first().root_url
184
185 # Generate preview url
186 preview_url = reverse('wagtailimages_preview', args=(image_id, filter_spec))
187
188 return json_response({'url': site_root_url + url, 'preview_url': preview_url}, status=200)
189
190
191 def preview(request, image_id, filter_spec):
192 image = get_object_or_404(get_image_model(), id=image_id)
193
194 try:
195 return Filter(spec=filter_spec).run(image, HttpResponse(content_type='image/jpeg'))
196 except InvalidFilterSpecError:
197 return HttpResponse("Invalid filter spec: " + filter_spec, content_type='text/plain', status=400)
198
199
200 def delete(request, image_id):
201 image = get_object_or_404(get_image_model(), id=image_id)
202
203 if not image.is_editable_by_user(request.user):
204 raise PermissionDenied
205
206 if request.POST:
207 image.delete()
208 messages.success(request, _("Image '{0}' deleted.").format(image.title))
209 return redirect('wagtailimages_index')
210
211 return render(request, "wagtailimages/images/confirm_delete.html", {
212 'image': image,
213 })
214
215
216 @permission_required('wagtailimages.add_image')
217 def add(request):
218 ImageModel = get_image_model()
219 ImageForm = get_image_form(ImageModel)
220
221 if request.POST:
222 image = ImageModel(uploaded_by_user=request.user)
223 form = ImageForm(request.POST, request.FILES, instance=image)
224 if form.is_valid():
225 form.save()
226
227 # Reindex the image to make sure all tags are indexed
228 for backend in get_search_backends():
229 backend.add(image)
230
231 messages.success(request, _("Image '{0}' added.").format(image.title), buttons=[
232 messages.button(reverse('wagtailimages_edit_image', args=(image.id,)), _('Edit'))
233 ])
234 return redirect('wagtailimages_index')
235 else:
236 messages.error(request, _("The image could not be created due to errors."))
237 else:
238 form = ImageForm()
239
240 return render(request, "wagtailimages/images/add.html", {
241 'form': form,
242 'max_filesize': MAX_UPLOAD_SIZE,
243 })
244
245
246 def usage(request, image_id):
247 image = get_object_or_404(get_image_model(), id=image_id)
248
249 # Pagination
250 p = request.GET.get('p', 1)
251 paginator = Paginator(image.get_usage(), 20)
252
253 try:
254 used_by = paginator.page(p)
255 except PageNotAnInteger:
256 used_by = paginator.page(1)
257 except EmptyPage:
258 used_by = paginator.page(paginator.num_pages)
259
260 return render(request, "wagtailimages/images/usage.html", {
261 'image': image,
262 'used_by': used_by
263 })
264
[end of wagtail/wagtailimages/views/images.py]
[start of wagtail/wagtailimages/views/multiple.py]
1 import json
2
3 from django.shortcuts import render, get_object_or_404
4 from django.contrib.auth.decorators import permission_required
5 from django.views.decorators.http import require_POST
6 from django.core.exceptions import PermissionDenied, ValidationError
7 from django.views.decorators.vary import vary_on_headers
8 from django.http import HttpResponse, HttpResponseBadRequest
9 from django.template import RequestContext
10 from django.template.loader import render_to_string
11 from django.utils.translation import ugettext as _
12 from django.utils.encoding import force_text
13
14 from wagtail.wagtailsearch.backends import get_search_backends
15
16 from wagtail.wagtailimages.models import get_image_model
17 from wagtail.wagtailimages.forms import get_image_form
18 from wagtail.wagtailimages.fields import (
19 MAX_UPLOAD_SIZE,
20 IMAGE_FIELD_HELP_TEXT,
21 INVALID_IMAGE_ERROR,
22 ALLOWED_EXTENSIONS,
23 SUPPORTED_FORMATS_TEXT,
24 FILE_TOO_LARGE_ERROR,
25 )
26
27
28 def json_response(document):
29 return HttpResponse(json.dumps(document), content_type='application/json')
30
31
32 def get_image_edit_form(ImageModel):
33 ImageForm = get_image_form(ImageModel)
34
35 # Make a new form with the file and focal point fields excluded
36 class ImageEditForm(ImageForm):
37 class Meta(ImageForm.Meta):
38 model = ImageModel
39 exclude = (
40 'file',
41 'focal_point_x',
42 'focal_point_y',
43 'focal_point_width',
44 'focal_point_height',
45 )
46
47 return ImageEditForm
48
49
50 @permission_required('wagtailimages.add_image')
51 @vary_on_headers('X-Requested-With')
52 def add(request):
53 Image = get_image_model()
54 ImageForm = get_image_form(Image)
55
56 if request.method == 'POST':
57 if not request.is_ajax():
58 return HttpResponseBadRequest("Cannot POST to this view without AJAX")
59
60 if not request.FILES:
61 return HttpResponseBadRequest("Must upload a file")
62
63 # Build a form for validation
64 form = ImageForm({
65 'title': request.FILES['files[]'].name,
66 }, {
67 'file': request.FILES['files[]'],
68 })
69
70 if form.is_valid():
71 # Save it
72 image = form.save(commit=False)
73 image.uploaded_by_user = request.user
74 image.save()
75
76 # Success! Send back an edit form for this image to the user
77 return json_response({
78 'success': True,
79 'image_id': int(image.id),
80 'form': render_to_string('wagtailimages/multiple/edit_form.html', {
81 'image': image,
82 'form': get_image_edit_form(Image)(instance=image, prefix='image-%d' % image.id),
83 }, context_instance=RequestContext(request)),
84 })
85 else:
86 # Validation error
87 return json_response({
88 'success': False,
89
90 # https://github.com/django/django/blob/stable/1.6.x/django/forms/util.py#L45
91 'error_message': '\n'.join(['\n'.join([force_text(i) for i in v]) for k, v in form.errors.items()]),
92 })
93
94 return render(request, 'wagtailimages/multiple/add.html', {
95 'max_filesize': MAX_UPLOAD_SIZE,
96 'help_text': IMAGE_FIELD_HELP_TEXT,
97 'allowed_extensions': ALLOWED_EXTENSIONS,
98 'error_max_file_size': FILE_TOO_LARGE_ERROR,
99 'error_accepted_file_types': INVALID_IMAGE_ERROR,
100 })
101
102
103 @require_POST
104 def edit(request, image_id, callback=None):
105 Image = get_image_model()
106 ImageForm = get_image_edit_form(Image)
107
108 image = get_object_or_404(Image, id=image_id)
109
110 if not request.is_ajax():
111 return HttpResponseBadRequest("Cannot POST to this view without AJAX")
112
113 if not image.is_editable_by_user(request.user):
114 raise PermissionDenied
115
116 form = ImageForm(request.POST, request.FILES, instance=image, prefix='image-'+image_id)
117
118 if form.is_valid():
119 form.save()
120
121 # Reindex the image to make sure all tags are indexed
122 for backend in get_search_backends():
123 backend.add(image)
124
125 return json_response({
126 'success': True,
127 'image_id': int(image_id),
128 })
129 else:
130 return json_response({
131 'success': False,
132 'image_id': int(image_id),
133 'form': render_to_string('wagtailimages/multiple/edit_form.html', {
134 'image': image,
135 'form': form,
136 }, context_instance=RequestContext(request)),
137 })
138
139
140 @require_POST
141 def delete(request, image_id):
142 image = get_object_or_404(get_image_model(), id=image_id)
143
144 if not request.is_ajax():
145 return HttpResponseBadRequest("Cannot POST to this view without AJAX")
146
147 if not image.is_editable_by_user(request.user):
148 raise PermissionDenied
149
150 image.delete()
151
152 return json_response({
153 'success': True,
154 'image_id': int(image_id),
155 })
156
[end of wagtail/wagtailimages/views/multiple.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
56e398fd6464d9af66c16b74fa34f7199b91e29d
|
Error serving documents with Django Cache Middleware enabled
I get an error each time I serve an uploaded document (pdf file) and my Django cache middleware is enabled. I'm using Redis as a cache backend, probably happens with any backend but I'm not sure. Some of the details are:
```
Django Version: 1.7.1
Exception Type: TypeError
Exception Value:
can't pickle instancemethod objects
Exception Location: /var/www/bcito/venvs/test.com/lib/python2.7/copy_reg.py in _reduce_ex, line 70
Python Executable: /usr/bin/uwsgi-core
Python Version: 2.7.6
```
I've found a workaround by editing the way I include the wagtaildocs urls.py in my app, making use of the _never_cache_ decorator. Instead of having:
```
urlpatterns = patterns(
...
url(r'^documents/', include(wagtaildocs_urls)),
...
```
Have this:
```
...
from django.views.decorators.cache import never_cache
from wagtail.wagtaildocs.views import serve
...
urlpatterns = patterns(
...
url(r'^documents/(\d+)/(.*)$', never_cache(serve.serve), name='wagtaildocs_serve'),
...
```
Cheers,
Jordi
|
Thank you Jordi,
@kaedroho are you happy with the proposed solution?
Does this relate to what you are thinking on #911?
@jordij Can you paste the full stack trace, please?
TypeError at /documents/51/testar2013spreads.pdf
can't pickle instancemethod objects
Request Method: GET
Request URL: http://test.co.nz/documents/xxx.pdf
Django Version: 1.7.1
Exception Type: TypeError
Exception Value:
can't pickle instancemethod objects
Exception Location: /var/www/test/venvs/test.org.nz/lib/python2.7/copy_reg.py in _reduce_ex, line 70
Python Executable: /usr/bin/uwsgi-core
Python Version: 2.7.6
Python Path:
['.',
'',
'/var/www/test/venvs/test.org.nz/src/wagtailembedder',
'/var/www/test/venvs/test.org.nz/src/wagtailgmaps',
'/var/www/test/venvs/test.org.nz/lib/python2.7',
'/var/www/test/venvs/test.org.nz/lib/python2.7/plat-x86_64-linux-gnu',
'/var/www/test/venvs/test.org.nz/lib/python2.7/lib-tk',
'/var/www/test/venvs/test.org.nz/lib/python2.7/lib-old',
'/var/www/test/venvs/test.org.nz/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/var/www/test/venvs/test.org.nz/local/lib/python2.7/site-packages',
'/usr/local/lib/python2.7/site-packages',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/var/www/test/test.org.nz/test',
'/var/www/test/test.org.nz/test']
Server time: Thu, 12 Feb 2015 17:31:58 +1300
Environment:
Request Method: GET
Request URL: http://test.co.nz/documents/51/testar2013spreads.pdf
Django Version: 1.7.1
Python Version: 2.7.6
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'compressor',
'import_export',
'taggit',
'modelcluster',
'djcelery',
'core',
'lookingforwork',
'floppyforms',
'widget_tweaks',
'wagtail.contrib.wagtailsitemaps',
'wagtail.contrib.wagtailroutablepage',
'wagtail.wagtailcore',
'wagtail.wagtailadmin',
'wagtail.wagtaildocs',
'wagtail.wagtailsnippets',
'wagtail.wagtailusers',
'wagtail.wagtailsites',
'wagtail.wagtailimages',
'wagtail.wagtailembeds',
'wagtail.wagtailsearch',
'wagtail.wagtailredirects',
'wagtail.wagtailforms',
'wagtailembedder',
'utils',
'debug_toolbar')
Installed Middleware:
(u'debug_toolbar.middleware.DebugToolbarMiddleware',
'django.middleware.common.BrokenLinkEmailsMiddleware',
'django.middleware.cache.UpdateCacheMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware',
'wagtail.wagtailcore.middleware.SiteMiddleware',
'wagtail.wagtailredirects.middleware.RedirectMiddleware')
Traceback:
File "/var/www/test/venvs/test.org.nz/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
    response = middleware_method(request, response)
File "/var/www/test/venvs/test.org.nz/local/lib/python2.7/site-packages/django/middleware/cache.py" in process_response
    self.cache.set(cache_key, response, timeout)
File "/var/www/test/venvs/test.org.nz/local/lib/python2.7/site-packages/redis_cache/cache.py" in set
    result = self._set(key, pickle.dumps(value), timeout, client, _add_only)
File "/var/www/test/venvs/test.org.nz/lib/python2.7/copy_reg.py" in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
Exception Type: TypeError at /documents/xxx.pdf
Exception Value: can't pickle instancemethod objects
OK - it looks like the problem is in the `wagtail.wagtaildocs.views.serve` view, which uses `FileWrapper` to generate a streamed response that avoids loading the file into memory. (It turns out that FileWrapper is not a documented Django feature - it's provided by the `wsgiref` package, and the relevant alias has been [quietly dropped from Django as of several days ago](https://github.com/django/django/commit/bbe28496d32f76ca161f5c33787d6ad62267fcc6), so it's a good thing we're spotting this now! I suspect I borrowed the code from [this StackOverflow question](http://stackoverflow.com/questions/8600843/serving-large-files-with-high-loads-in-django).)
The cache middleware can't cache streamed responses, and usually it will identify and ignore those because they're using `StreamingHttpResponse` instead of `HttpResponse`... however, our code is currently using a plain HttpResponse. (Sure enough, that StackOverflow question [was also using HttpResponse until someone fixed it a few days ago](http://stackoverflow.com/posts/8601118/revisions)...)
So, I think what we need to do here is:
- ~~check that we can rely on the `wsgiref` package always being available. (I think it's part of the Python standard library, but would be good to make sure)~~ ([it is](https://docs.python.org/2/library/wsgiref.html))
- change the import to `from wsgiref.util import FileWrapper`
- update the view to return a `StreamingHttpResponse` (roughly as sketched below)
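Put together, the fixed view would look roughly like this — a condensed sketch only, leaving out the download-filename handling and the `document_served` signal that the real view also deals with; the actual patch below is authoritative:
```python
from wsgiref.util import FileWrapper

from django.http import StreamingHttpResponse
from django.shortcuts import get_object_or_404

from wagtail.wagtaildocs.models import Document


def serve(request, document_id, document_filename):
    doc = get_object_or_404(Document, id=document_id)

    # StreamingHttpResponse is recognised and skipped by Django's cache
    # middleware, so the response is never pickled into the cache backend
    # and the "can't pickle instancemethod objects" error goes away.
    wrapper = FileWrapper(doc.file)
    return StreamingHttpResponse(wrapper, content_type='application/octet-stream')
```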
|
2015-02-12T11:44:28Z
|
<patch>
diff --git a/wagtail/wagtaildocs/views/serve.py b/wagtail/wagtaildocs/views/serve.py
--- a/wagtail/wagtaildocs/views/serve.py
+++ b/wagtail/wagtaildocs/views/serve.py
@@ -1,6 +1,6 @@
from django.shortcuts import get_object_or_404
-from django.core.servers.basehttp import FileWrapper
-from django.http import HttpResponse
+from wsgiref.util import FileWrapper
+from django.http import StreamingHttpResponse
from wagtail.wagtaildocs.models import Document, document_served
@@ -8,7 +8,7 @@
def serve(request, document_id, document_filename):
doc = get_object_or_404(Document, id=document_id)
wrapper = FileWrapper(doc.file)
- response = HttpResponse(wrapper, content_type='application/octet-stream')
+ response = StreamingHttpResponse(wrapper, content_type='application/octet-stream')
# TODO: strip out weird characters like semicolons from the filename
# (there doesn't seem to be an official way of escaping them)
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-4789
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CMake for dependencies can't use modules
CMake for finding dependencies could easily be further extended with off-the-shelf `Find<foo>.cmake` scripts. The only thing that needs to happen for that is that the invocation of `cmake` needs to point `-DCMAKE_MODULE_PATH=<dir>` at the directory containing these scripts.
A good place for this might be here:
https://github.com/mesonbuild/meson/blob/90c9b868b20b11bb089fc5e0c634d5ed76fea0cb/mesonbuild/dependencies/base.py#L1422
As a simple suggestion, just redefine all `CMAKE_*` environment variables as CMake variables:
```python
for key in env.keys():
if key.startswith('CMAKE_'):
args.append('-D%s=%s' % (key, env[key]))
```
That should do the trick (some testing required).
As an added benefit, you can influence CMake further with other `CMAKE_*` variables in the environment.
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
14 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
15
16 #### Dependencies
17
18 - [Python](http://python.org) (version 3.5 or newer)
19 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
20
21 #### Installing from source
22
23 You can run Meson directly from a revision control checkout or an
24 extracted tarball. If you wish you can install it locally with the
25 standard Python distutils command `python3 setup.py install <your
26 options here>`.
27
28 Meson is also available from
29 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
30 with `pip3 install meson` (this does not require a source checkout,
31 pip will download the package automatically). The exact command to
32 type to install with pip can vary between systems, be sure to use the
33 Python 3 version of pip.
34
35 #### Running
36
37 Meson requires that you have a source directory and a build directory
38 and that these two are different. In your source root must exist a file
39 called 'meson.build'. To generate the build system run this command:
40
41 `meson <source directory> <build directory>`
42
43 Depending on how you obtained Meson the command might also be called
44 `meson.py` instead of plain `meson`. In the rest of this document we
45 are going to use the latter form.
46
47 You can omit either of the two directories, and Meson will substitute
48 the current directory and autodetect what you mean. This allows you to
49 do things like this:
50
51 `cd source_root; mkdir builddir; cd builddir; meson ..`
52
53 or
54
55 `cd source_root; mkdir builddir; meson builddir`
56
57 To compile, cd into your build directory and type `ninja`. To run unit
58 tests, type `ninja test`.
59
60 Install is the same but it can take an extra argument:
61
62 `DESTDIR=/destdir/path ninja install`
63
64 `DESTDIR` can be omitted. If you are installing to system directories,
65 you may need to run this command with sudo.
66
67
68 #### Contributing
69
70 We love code contributions. See the [contributing.md](contributing.md) file for
71 details.
72
73
74 #### IRC
75
76 The irc channel for Meson is `#mesonbuild` over at Freenode.
77
78 You can use [FreeNode's official webchat][meson_irc]
79 to connect to this channel.
80
81 [meson_irc]: https://webchat.freenode.net/?channels=%23mesonbuild
82
83 #### Further info
84
85 More information about the Meson build system can be found at the
86 [project's home page](http://mesonbuild.com).
87
88 Meson is a registered trademark of Jussi Pakkanen.
89
[end of README.md]
[start of mesonbuild/coredata.py]
1 # Copyright 2012-2018 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from . import mlog
16 import pickle, os, uuid, shlex
17 import sys
18 from itertools import chain
19 from pathlib import PurePath
20 from collections import OrderedDict
21 from .mesonlib import (
22 MesonException, default_libdir, default_libexecdir, default_prefix
23 )
24 from .wrap import WrapMode
25 import ast
26 import argparse
27 import configparser
28
29 version = '0.49.999'
30 backendlist = ['ninja', 'vs', 'vs2010', 'vs2015', 'vs2017', 'xcode']
31
32 default_yielding = False
33
34 class UserOption:
35 def __init__(self, name, description, choices, yielding):
36 super().__init__()
37 self.name = name
38 self.choices = choices
39 self.description = description
40 if yielding is None:
41 yielding = default_yielding
42 if not isinstance(yielding, bool):
43 raise MesonException('Value of "yielding" must be a boolean.')
44 self.yielding = yielding
45
46 def printable_value(self):
47 return self.value
48
49 # Check that the input is a valid value and return the
50 # "cleaned" or "native" version. For example the Boolean
51 # option could take the string "true" and return True.
52 def validate_value(self, value):
53 raise RuntimeError('Derived option class did not override validate_value.')
54
55 def set_value(self, newvalue):
56 self.value = self.validate_value(newvalue)
57
58 class UserStringOption(UserOption):
59 def __init__(self, name, description, value, choices=None, yielding=None):
60 super().__init__(name, description, choices, yielding)
61 self.set_value(value)
62
63 def validate_value(self, value):
64 if not isinstance(value, str):
65 raise MesonException('Value "%s" for string option "%s" is not a string.' % (str(value), self.name))
66 return value
67
68 class UserBooleanOption(UserOption):
69 def __init__(self, name, description, value, yielding=None):
70 super().__init__(name, description, [True, False], yielding)
71 self.set_value(value)
72
73 def __bool__(self):
74 return self.value
75
76 def validate_value(self, value):
77 if isinstance(value, bool):
78 return value
79 if value.lower() == 'true':
80 return True
81 if value.lower() == 'false':
82 return False
83 raise MesonException('Value %s is not boolean (true or false).' % value)
84
85 class UserIntegerOption(UserOption):
86 def __init__(self, name, description, min_value, max_value, value, yielding=None):
87 super().__init__(name, description, [True, False], yielding)
88 self.min_value = min_value
89 self.max_value = max_value
90 self.set_value(value)
91 c = []
92 if min_value is not None:
93 c.append('>=' + str(min_value))
94 if max_value is not None:
95 c.append('<=' + str(max_value))
96 self.choices = ', '.join(c)
97
98 def validate_value(self, value):
99 if isinstance(value, str):
100 value = self.toint(value)
101 if not isinstance(value, int):
102 raise MesonException('New value for integer option is not an integer.')
103 if self.min_value is not None and value < self.min_value:
104 raise MesonException('New value %d is less than minimum value %d.' % (value, self.min_value))
105 if self.max_value is not None and value > self.max_value:
106 raise MesonException('New value %d is more than maximum value %d.' % (value, self.max_value))
107 return value
108
109 def toint(self, valuestring):
110 try:
111 return int(valuestring)
112 except ValueError:
113 raise MesonException('Value string "%s" is not convertable to an integer.' % valuestring)
114
115 class UserUmaskOption(UserIntegerOption):
116 def __init__(self, name, description, value, yielding=None):
117 super().__init__(name, description, 0, 0o777, value, yielding)
118 self.choices = ['preserve', '0000-0777']
119
120 def printable_value(self):
121 if self.value == 'preserve':
122 return self.value
123 return format(self.value, '04o')
124
125 def validate_value(self, value):
126 if value is None or value == 'preserve':
127 return 'preserve'
128 return super().validate_value(value)
129
130 def toint(self, valuestring):
131 try:
132 return int(valuestring, 8)
133 except ValueError as e:
134 raise MesonException('Invalid mode: {}'.format(e))
135
136 class UserComboOption(UserOption):
137 def __init__(self, name, description, choices, value, yielding=None):
138 super().__init__(name, description, choices, yielding)
139 if not isinstance(self.choices, list):
140 raise MesonException('Combo choices must be an array.')
141 for i in self.choices:
142 if not isinstance(i, str):
143 raise MesonException('Combo choice elements must be strings.')
144 self.set_value(value)
145
146 def validate_value(self, value):
147 if value not in self.choices:
148 optionsstring = ', '.join(['"%s"' % (item,) for item in self.choices])
149 raise MesonException('Value "%s" for combo option "%s" is not one of the choices. Possible choices are: %s.' % (value, self.name, optionsstring))
150 return value
151
152 class UserArrayOption(UserOption):
153 def __init__(self, name, description, value, shlex_split=False, user_input=False, allow_dups=False, **kwargs):
154 super().__init__(name, description, kwargs.get('choices', []), yielding=kwargs.get('yielding', None))
155 self.shlex_split = shlex_split
156 self.allow_dups = allow_dups
157 self.value = self.validate_value(value, user_input=user_input)
158
159 def validate_value(self, value, user_input=True):
160 # User input is for options defined on the command line (via -D
161 # options). Users can put their input in as a comma separated
162 # string, but for defining options in meson_options.txt the format
163 # should match that of a combo
164 if not user_input and isinstance(value, str) and not value.startswith('['):
165 raise MesonException('Value does not define an array: ' + value)
166
167 if isinstance(value, str):
168 if value.startswith('['):
169 newvalue = ast.literal_eval(value)
170 elif value == '':
171 newvalue = []
172 else:
173 if self.shlex_split:
174 newvalue = shlex.split(value)
175 else:
176 newvalue = [v.strip() for v in value.split(',')]
177 elif isinstance(value, list):
178 newvalue = value
179 else:
180 raise MesonException('"{0}" should be a string array, but it is not'.format(str(newvalue)))
181
182 if not self.allow_dups and len(set(newvalue)) != len(newvalue):
183 msg = 'Duplicated values in array option "%s" is deprecated. ' \
184 'This will become a hard error in the future.' % (self.name)
185 mlog.deprecation(msg)
186 for i in newvalue:
187 if not isinstance(i, str):
188 raise MesonException('String array element "{0}" is not a string.'.format(str(newvalue)))
189 if self.choices:
190 bad = [x for x in newvalue if x not in self.choices]
191 if bad:
192 raise MesonException('Options "{}" are not in allowed choices: "{}"'.format(
193 ', '.join(bad), ', '.join(self.choices)))
194 return newvalue
195
196
197 class UserFeatureOption(UserComboOption):
198 static_choices = ['enabled', 'disabled', 'auto']
199
200 def __init__(self, name, description, value, yielding=None):
201 super().__init__(name, description, self.static_choices, value, yielding)
202
203 def is_enabled(self):
204 return self.value == 'enabled'
205
206 def is_disabled(self):
207 return self.value == 'disabled'
208
209 def is_auto(self):
210 return self.value == 'auto'
211
212
213 def load_configs(filenames):
214 """Load native files."""
215 def gen():
216 for f in filenames:
217 f = os.path.expanduser(os.path.expandvars(f))
218 if os.path.exists(f):
219 yield f
220 continue
221 elif sys.platform != 'win32':
222 f = os.path.basename(f)
223 paths = [
224 os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share')),
225 ] + os.environ.get('XDG_DATA_DIRS', '/usr/local/share:/usr/share').split(':')
226 for path in paths:
227 path_to_try = os.path.join(path, 'meson', 'native', f)
228 if os.path.isfile(path_to_try):
229 yield path_to_try
230 break
231 else:
232 raise MesonException('Cannot find specified native file: ' + f)
233 continue
234
235 raise MesonException('Cannot find specified native file: ' + f)
236
237 config = configparser.SafeConfigParser()
238 config.read(gen())
239 return config
240
241
242 # This class contains all data that must persist over multiple
243 # invocations of Meson. It is roughly the same thing as
244 # cmakecache.
245
246 class CoreData:
247
248 def __init__(self, options):
249 self.lang_guids = {
250 'default': '8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942',
251 'c': '8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942',
252 'cpp': '8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942',
253 'test': '3AC096D0-A1C2-E12C-1390-A8335801FDAB',
254 'directory': '2150E333-8FDC-42A3-9474-1A3956D46DE8',
255 }
256 self.test_guid = str(uuid.uuid4()).upper()
257 self.regen_guid = str(uuid.uuid4()).upper()
258 self.install_guid = str(uuid.uuid4()).upper()
259 self.target_guids = {}
260 self.version = version
261 self.init_builtins()
262 self.backend_options = {}
263 self.user_options = {}
264 self.compiler_options = {}
265 self.base_options = {}
266 self.external_preprocess_args = {} # CPPFLAGS only
267 self.cross_file = self.__load_cross_file(options.cross_file)
268 self.compilers = OrderedDict()
269 self.cross_compilers = OrderedDict()
270 self.deps = OrderedDict()
271 # Only to print a warning if it changes between Meson invocations.
272 self.pkgconf_envvar = os.environ.get('PKG_CONFIG_PATH', '')
273 self.config_files = self.__load_config_files(options.native_file)
274 self.libdir_cross_fixup()
275
276 @staticmethod
277 def __load_config_files(filenames):
278 if not filenames:
279 return []
280 filenames = [os.path.abspath(os.path.expanduser(os.path.expanduser(f)))
281 for f in filenames]
282 return filenames
283
284 @staticmethod
285 def __load_cross_file(filename):
286 """Try to load the cross file.
287
288 If the filename is None return None. If the filename is an absolute
289 (after resolving variables and ~), return that absolute path. Next,
290 check if the file is relative to the current source dir. If the path
291 still isn't resolved do the following:
292 Windows:
293 - Error
294 *:
295 - $XDG_DATA_HOME/meson/cross (or ~/.local/share/meson/cross if
296 undefined)
297 - $XDG_DATA_DIRS/meson/cross (or
298 /usr/local/share/meson/cross:/usr/share/meson/cross if undefined)
299 - Error
300
301 Non-Windows follows the Linux path and will honor XDG_* if set. This
302 simplifies the implementation somewhat.
303 """
304 if filename is None:
305 return None
306 filename = os.path.expanduser(os.path.expandvars(filename))
307 if os.path.isabs(filename):
308 return filename
309 path_to_try = os.path.abspath(filename)
310 if os.path.isfile(path_to_try):
311 return path_to_try
312 if sys.platform != 'win32':
313 paths = [
314 os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share')),
315 ] + os.environ.get('XDG_DATA_DIRS', '/usr/local/share:/usr/share').split(':')
316 for path in paths:
317 path_to_try = os.path.join(path, 'meson', 'cross', filename)
318 if os.path.isfile(path_to_try):
319 return path_to_try
320 raise MesonException('Cannot find specified cross file: ' + filename)
321
322 raise MesonException('Cannot find specified cross file: ' + filename)
323
324 def libdir_cross_fixup(self):
325 # By default set libdir to "lib" when cross compiling since
326 # getting the "system default" is always wrong on multiarch
327 # platforms as it gets a value like lib/x86_64-linux-gnu.
328 if self.cross_file is not None:
329 self.builtins['libdir'].value = 'lib'
330
331 def sanitize_prefix(self, prefix):
332 if not os.path.isabs(prefix):
333 raise MesonException('prefix value {!r} must be an absolute path'
334 ''.format(prefix))
335 if prefix.endswith('/') or prefix.endswith('\\'):
336 # On Windows we need to preserve the trailing slash if the
337 # string is of type 'C:\' because 'C:' is not an absolute path.
338 if len(prefix) == 3 and prefix[1] == ':':
339 pass
340 # If prefix is a single character, preserve it since it is
341 # the root directory.
342 elif len(prefix) == 1:
343 pass
344 else:
345 prefix = prefix[:-1]
346 return prefix
347
348 def sanitize_dir_option_value(self, prefix, option, value):
349 '''
350 If the option is an installation directory option and the value is an
351 absolute path, check that it resides within prefix and return the value
352 as a path relative to the prefix.
353
354 This way everyone can do f.ex, get_option('libdir') and be sure to get
355 the library directory relative to prefix.
356 '''
357 if option.endswith('dir') and os.path.isabs(value) and \
358 option not in builtin_dir_noprefix_options:
359 # Value must be a subdir of the prefix
360 # commonpath will always return a path in the native format, so we
361 # must use pathlib.PurePath to do the same conversion before
362 # comparing.
363 if os.path.commonpath([value, prefix]) != str(PurePath(prefix)):
364 m = 'The value of the {!r} option is {!r} which must be a ' \
365 'subdir of the prefix {!r}.\nNote that if you pass a ' \
366 'relative path, it is assumed to be a subdir of prefix.'
367 raise MesonException(m.format(option, value, prefix))
368 # Convert path to be relative to prefix
369 skip = len(prefix) + 1
370 value = value[skip:]
371 return value
372
373 def init_builtins(self):
374 # Create builtin options with default values
375 self.builtins = {}
376 prefix = get_builtin_option_default('prefix')
377 for key in get_builtin_options():
378 value = get_builtin_option_default(key, prefix)
379 args = [key] + builtin_options[key][1:-1] + [value]
380 self.builtins[key] = builtin_options[key][0](*args)
381
382 def init_backend_options(self, backend_name):
383 if backend_name == 'ninja':
384 self.backend_options['backend_max_links'] = \
385 UserIntegerOption(
386 'backend_max_links',
387 'Maximum number of linker processes to run or 0 for no '
388 'limit',
389 0, None, 0)
390 elif backend_name.startswith('vs'):
391 self.backend_options['backend_startup_project'] = \
392 UserStringOption(
393 'backend_startup_project',
394 'Default project to execute in Visual Studio',
395 '')
396
397 def get_builtin_option(self, optname):
398 if optname in self.builtins:
399 v = self.builtins[optname]
400 if optname == 'wrap_mode':
401 return WrapMode.from_string(v.value)
402 return v.value
403 raise RuntimeError('Tried to get unknown builtin option %s.' % optname)
404
405 def set_builtin_option(self, optname, value):
406 if optname == 'prefix':
407 value = self.sanitize_prefix(value)
408 elif optname in self.builtins:
409 prefix = self.builtins['prefix'].value
410 value = self.sanitize_dir_option_value(prefix, optname, value)
411 else:
412 raise RuntimeError('Tried to set unknown builtin option %s.' % optname)
413 self.builtins[optname].set_value(value)
414
415 # Make sure that buildtype matches other settings.
416 if optname == 'buildtype':
417 self.set_others_from_buildtype(value)
418 else:
419 self.set_buildtype_from_others()
420
421 def set_others_from_buildtype(self, value):
422 if value == 'plain':
423 opt = '0'
424 debug = False
425 elif value == 'debug':
426 opt = '0'
427 debug = True
428 elif value == 'debugoptimized':
429 opt = '2'
430 debug = True
431 elif value == 'release':
432 opt = '3'
433 debug = False
434 elif value == 'minsize':
435 opt = 's'
436 debug = True
437 else:
438 assert(value == 'custom')
439 return
440 self.builtins['optimization'].set_value(opt)
441 self.builtins['debug'].set_value(debug)
442
443 def set_buildtype_from_others(self):
444 opt = self.builtins['optimization'].value
445 debug = self.builtins['debug'].value
446 if opt == '0' and not debug:
447 mode = 'plain'
448 elif opt == '0' and debug:
449 mode = 'debug'
450 elif opt == '2' and debug:
451 mode = 'debugoptimized'
452 elif opt == '3' and not debug:
453 mode = 'release'
454 elif opt == 's' and debug:
455 mode = 'minsize'
456 else:
457 mode = 'custom'
458 self.builtins['buildtype'].set_value(mode)
459
460 def _get_all_nonbuiltin_options(self):
461 yield self.backend_options
462 yield self.user_options
463 yield self.compiler_options
464 yield self.base_options
465
466 def get_all_options(self):
467 return chain(
468 iter([self.builtins]),
469 self._get_all_nonbuiltin_options())
470
471 def validate_option_value(self, option_name, override_value):
472 for opts in self.get_all_options():
473 if option_name in opts:
474 opt = opts[option_name]
475 return opt.validate_value(override_value)
476 raise MesonException('Tried to validate unknown option %s.' % option_name)
477
478 def get_external_args(self, lang):
479 return self.compiler_options[lang + '_args'].value
480
481 def get_external_link_args(self, lang):
482 return self.compiler_options[lang + '_link_args'].value
483
484 def get_external_preprocess_args(self, lang):
485 return self.external_preprocess_args[lang]
486
487 def merge_user_options(self, options):
488 for (name, value) in options.items():
489 if name not in self.user_options:
490 self.user_options[name] = value
491 else:
492 oldval = self.user_options[name]
493 if type(oldval) != type(value):
494 self.user_options[name] = value
495
496 def set_options(self, options, subproject=''):
497 # Set prefix first because it's needed to sanitize other options
498 prefix = self.builtins['prefix'].value
499 if 'prefix' in options:
500 prefix = self.sanitize_prefix(options['prefix'])
501 self.builtins['prefix'].set_value(prefix)
502 for key in builtin_dir_noprefix_options:
503 if key not in options:
504 self.builtins[key].set_value(get_builtin_option_default(key, prefix))
505
506 unknown_options = []
507 for k, v in options.items():
508 if k == 'prefix':
509 pass
510 elif k in self.builtins:
511 self.set_builtin_option(k, v)
512 else:
513 for opts in self._get_all_nonbuiltin_options():
514 if k in opts:
515 tgt = opts[k]
516 tgt.set_value(v)
517 break
518 else:
519 unknown_options.append(k)
520
521 if unknown_options:
522 unknown_options = ', '.join(sorted(unknown_options))
523 sub = 'In subproject {}: '.format(subproject) if subproject else ''
524 mlog.warning('{}Unknown options: "{}"'.format(sub, unknown_options))
525
526 def set_default_options(self, default_options, subproject, cmd_line_options):
527 # Set default options as if they were passed to the command line.
528 # Subprojects can only define default for user options.
529 from . import optinterpreter
530 for k, v in default_options.items():
531 if subproject:
532 if optinterpreter.is_invalid_name(k):
533 continue
534 k = subproject + ':' + k
535 cmd_line_options.setdefault(k, v)
536
537 # Create a subset of cmd_line_options, keeping only options for this
538 # subproject. Also take builtin options if it's the main project.
539 # Language and backend specific options will be set later when adding
540 # languages and setting the backend (builtin options must be set first
541 # to know which backend we'll use).
542 options = {}
543 for k, v in cmd_line_options.items():
544 if subproject:
545 if not k.startswith(subproject + ':'):
546 continue
547 elif k not in get_builtin_options():
548 if ':' in k:
549 continue
550 if optinterpreter.is_invalid_name(k):
551 continue
552 options[k] = v
553
554 self.set_options(options, subproject)
555
556 class CmdLineFileParser(configparser.ConfigParser):
557 def __init__(self):
558 # We don't want ':' as key delimiter, otherwise it would break when
559 # storing subproject options like "subproject:option=value"
560 super().__init__(delimiters=['='])
561
562 def get_cmd_line_file(build_dir):
563 return os.path.join(build_dir, 'meson-private', 'cmd_line.txt')
564
565 def read_cmd_line_file(build_dir, options):
566 filename = get_cmd_line_file(build_dir)
567 config = CmdLineFileParser()
568 config.read(filename)
569
570 # Do a copy because config is not really a dict. options.cmd_line_options
571 # overrides values from the file.
572 d = dict(config['options'])
573 d.update(options.cmd_line_options)
574 options.cmd_line_options = d
575
576 properties = config['properties']
577 if options.cross_file is None:
578 options.cross_file = properties.get('cross_file', None)
579
580 def write_cmd_line_file(build_dir, options):
581 filename = get_cmd_line_file(build_dir)
582 config = CmdLineFileParser()
583
584 properties = {}
585 if options.cross_file is not None:
586 properties['cross_file'] = options.cross_file
587
588 config['options'] = options.cmd_line_options
589 config['properties'] = properties
590 with open(filename, 'w') as f:
591 config.write(f)
592
593 def update_cmd_line_file(build_dir, options):
594 filename = get_cmd_line_file(build_dir)
595 config = CmdLineFileParser()
596 config.read(filename)
597 config['options'].update(options.cmd_line_options)
598 with open(filename, 'w') as f:
599 config.write(f)
600
601 def load(build_dir):
602 filename = os.path.join(build_dir, 'meson-private', 'coredata.dat')
603 load_fail_msg = 'Coredata file {!r} is corrupted. Try with a fresh build tree.'.format(filename)
604 try:
605 with open(filename, 'rb') as f:
606 obj = pickle.load(f)
607 except pickle.UnpicklingError:
608 raise MesonException(load_fail_msg)
609 if not isinstance(obj, CoreData):
610 raise MesonException(load_fail_msg)
611 if obj.version != version:
612 raise MesonException('Build directory has been generated with Meson version %s, '
613 'which is incompatible with current version %s.\n' %
614 (obj.version, version))
615 return obj
616
617 def save(obj, build_dir):
618 filename = os.path.join(build_dir, 'meson-private', 'coredata.dat')
619 prev_filename = filename + '.prev'
620 tempfilename = filename + '~'
621 if obj.version != version:
622 raise MesonException('Fatal version mismatch corruption.')
623 if os.path.exists(filename):
624 import shutil
625 shutil.copyfile(filename, prev_filename)
626 with open(tempfilename, 'wb') as f:
627 pickle.dump(obj, f)
628 f.flush()
629 os.fsync(f.fileno())
630 os.replace(tempfilename, filename)
631 return filename
632
633 def get_builtin_options():
634 return list(builtin_options.keys())
635
636 def is_builtin_option(optname):
637 return optname in get_builtin_options()
638
639 def get_builtin_option_choices(optname):
640 if is_builtin_option(optname):
641 if builtin_options[optname][0] == UserComboOption:
642 return builtin_options[optname][2]
643 elif builtin_options[optname][0] == UserBooleanOption:
644 return [True, False]
645 elif builtin_options[optname][0] == UserFeatureOption:
646 return UserFeatureOption.static_choices
647 else:
648 return None
649 else:
650 raise RuntimeError('Tried to get the supported values for an unknown builtin option \'%s\'.' % optname)
651
652 def get_builtin_option_description(optname):
653 if is_builtin_option(optname):
654 return builtin_options[optname][1]
655 else:
656 raise RuntimeError('Tried to get the description for an unknown builtin option \'%s\'.' % optname)
657
658 def get_builtin_option_action(optname):
659 default = builtin_options[optname][2]
660 if default is True:
661 return 'store_false'
662 elif default is False:
663 return 'store_true'
664 return None
665
666 def get_builtin_option_default(optname, prefix=''):
667 if is_builtin_option(optname):
668 o = builtin_options[optname]
669 if o[0] == UserComboOption:
670 return o[3]
671 if o[0] == UserIntegerOption:
672 return o[4]
673 try:
674 return builtin_dir_noprefix_options[optname][prefix]
675 except KeyError:
676 pass
677 return o[2]
678 else:
679 raise RuntimeError('Tried to get the default value for an unknown builtin option \'%s\'.' % optname)
680
681 def get_builtin_option_cmdline_name(name):
682 if name == 'warning_level':
683 return '--warnlevel'
684 else:
685 return '--' + name.replace('_', '-')
686
687 def add_builtin_argument(p, name):
688 kwargs = {}
689 c = get_builtin_option_choices(name)
690 b = get_builtin_option_action(name)
691 h = get_builtin_option_description(name)
692 if not b:
693 h = h.rstrip('.') + ' (default: %s).' % get_builtin_option_default(name)
694 else:
695 kwargs['action'] = b
696 if c and not b:
697 kwargs['choices'] = c
698 kwargs['default'] = argparse.SUPPRESS
699 kwargs['dest'] = name
700
701 cmdline_name = get_builtin_option_cmdline_name(name)
702 p.add_argument(cmdline_name, help=h, **kwargs)
703
704 def register_builtin_arguments(parser):
705 for n in builtin_options:
706 add_builtin_argument(parser, n)
707 parser.add_argument('-D', action='append', dest='projectoptions', default=[], metavar="option",
708 help='Set the value of an option, can be used several times to set multiple options.')
709
710 def create_options_dict(options):
711 result = {}
712 for o in options:
713 try:
714 (key, value) = o.split('=', 1)
715 except ValueError:
716 raise MesonException('Option {!r} must have a value separated by equals sign.'.format(o))
717 result[key] = value
718 return result
719
720 def parse_cmd_line_options(args):
721 args.cmd_line_options = create_options_dict(args.projectoptions)
722
723 # Merge builtin options set with --option into the dict.
724 for name in builtin_options:
725 value = getattr(args, name, None)
726 if value is not None:
727 if name in args.cmd_line_options:
728 cmdline_name = get_builtin_option_cmdline_name(name)
729 raise MesonException(
730 'Got argument {0} as both -D{0} and {1}. Pick one.'.format(name, cmdline_name))
731 args.cmd_line_options[name] = value
732 delattr(args, name)
733
734 builtin_options = {
735 'buildtype': [UserComboOption, 'Build type to use', ['plain', 'debug', 'debugoptimized', 'release', 'minsize', 'custom'], 'debug'],
736 'strip': [UserBooleanOption, 'Strip targets on install', False],
737 'unity': [UserComboOption, 'Unity build', ['on', 'off', 'subprojects'], 'off'],
738 'prefix': [UserStringOption, 'Installation prefix', default_prefix()],
739 'libdir': [UserStringOption, 'Library directory', default_libdir()],
740 'libexecdir': [UserStringOption, 'Library executable directory', default_libexecdir()],
741 'bindir': [UserStringOption, 'Executable directory', 'bin'],
742 'sbindir': [UserStringOption, 'System executable directory', 'sbin'],
743 'includedir': [UserStringOption, 'Header file directory', 'include'],
744 'datadir': [UserStringOption, 'Data file directory', 'share'],
745 'mandir': [UserStringOption, 'Manual page directory', 'share/man'],
746 'infodir': [UserStringOption, 'Info page directory', 'share/info'],
747 'localedir': [UserStringOption, 'Locale data directory', 'share/locale'],
748 'sysconfdir': [UserStringOption, 'Sysconf data directory', 'etc'],
749 'localstatedir': [UserStringOption, 'Localstate data directory', 'var'],
750 'sharedstatedir': [UserStringOption, 'Architecture-independent data directory', 'com'],
751 'werror': [UserBooleanOption, 'Treat warnings as errors', False],
752 'warning_level': [UserComboOption, 'Compiler warning level to use', ['1', '2', '3'], '1'],
753 'layout': [UserComboOption, 'Build directory layout', ['mirror', 'flat'], 'mirror'],
754 'default_library': [UserComboOption, 'Default library type', ['shared', 'static', 'both'], 'shared'],
755 'backend': [UserComboOption, 'Backend to use', backendlist, 'ninja'],
756 'stdsplit': [UserBooleanOption, 'Split stdout and stderr in test logs', True],
757 'errorlogs': [UserBooleanOption, "Whether to print the logs from failing tests", True],
758 'install_umask': [UserUmaskOption, 'Default umask to apply on permissions of installed files', '022'],
759 'auto_features': [UserFeatureOption, "Override value of all 'auto' features", 'auto'],
760 'optimization': [UserComboOption, 'Optimization level', ['0', 'g', '1', '2', '3', 's'], '0'],
761 'debug': [UserBooleanOption, 'Debug', True],
762 'wrap_mode': [UserComboOption, 'Wrap mode', ['default',
763 'nofallback',
764 'nodownload',
765 'forcefallback'], 'default'],
766 }
767
768 # Special prefix-dependent defaults for installation directories that reside in
769 # a path outside of the prefix in FHS and common usage.
770 builtin_dir_noprefix_options = {
771 'sysconfdir': {'/usr': '/etc'},
772 'localstatedir': {'/usr': '/var', '/usr/local': '/var/local'},
773 'sharedstatedir': {'/usr': '/var/lib', '/usr/local': '/var/local/lib'},
774 }
775
776 forbidden_target_names = {'clean': None,
777 'clean-ctlist': None,
778 'clean-gcno': None,
779 'clean-gcda': None,
780 'coverage': None,
781 'coverage-text': None,
782 'coverage-xml': None,
783 'coverage-html': None,
784 'phony': None,
785 'PHONY': None,
786 'all': None,
787 'test': None,
788 'benchmark': None,
789 'install': None,
790 'uninstall': None,
791 'build.ninja': None,
792 'scan-build': None,
793 'reconfigure': None,
794 'dist': None,
795 'distcheck': None,
796 }
797
[end of mesonbuild/coredata.py]
[start of tools/boost_names.py]
1 #!/usr/bin/env python3
2
3 # Copyright 2017 Niklas Claesson
4
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8
9 # http://www.apache.org/licenses/LICENSE-2.0
10
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """This is two implementations for how to get module names from the boost
18 sources. One relies on json metadata files in the sources, the other relies on
19 the folder names.
20
21 Run the tool in the boost directory and append the stdout to the misc.py:
22
23 boost/$ path/to/meson/tools/boost_names.py >> path/to/meson/dependencies/misc.py
24 """
25
26 import sys
27 import os
28 import collections
29 import pprint
30 import json
31 import re
32
33 Module = collections.namedtuple('Module', ['dirname', 'name', 'libnames'])
34 Module.__repr__ = lambda self: str((self.dirname, self.name, self.libnames))
35
36 LIBS = 'libs'
37
38 manual_map = {
39 'callable_traits': 'Call Traits',
40 'crc': 'CRC',
41 'dll': 'DLL',
42 'gil': 'GIL',
43 'graph_parallel': 'GraphParallel',
44 'icl': 'ICL',
45 'io': 'IO State Savers',
46 'msm': 'Meta State Machine',
47 'mpi': 'MPI',
48 'mpl': 'MPL',
49 'multi_array': 'Multi-Array',
50 'multi_index': 'Multi-Index',
51 'numeric': 'Numeric Conversion',
52 'ptr_container': 'Pointer Container',
53 'poly_collection': 'PolyCollection',
54 'qvm': 'QVM',
55 'throw_exception': 'ThrowException',
56 'tti': 'TTI',
57 'vmd': 'VMD',
58 }
59
60 extra = [
61 Module('utility', 'Compressed Pair', []),
62 Module('core', 'Enable If', []),
63 Module('functional', 'Functional/Factory', []),
64 Module('functional', 'Functional/Forward', []),
65 Module('functional', 'Functional/Hash', []),
66 Module('functional', 'Functional/Overloaded Function', []),
67 Module('utility', 'Identity Type', []),
68 Module('utility', 'In Place Factory, Typed In Place Factory', []),
69 Module('numeric', 'Interval', []),
70 Module('math', 'Math Common Factor', []),
71 Module('math', 'Math Octonion', []),
72 Module('math', 'Math Quaternion', []),
73 Module('math', 'Math/Special Functions', []),
74 Module('math', 'Math/Statistical Distributions', []),
75 Module('bind', 'Member Function', []),
76 Module('algorithm', 'Min-Max', []),
77 Module('numeric', 'Odeint', []),
78 Module('utility', 'Operators', []),
79 Module('core', 'Ref', []),
80 Module('utility', 'Result Of', []),
81 Module('algorithm', 'String Algo', []),
82 Module('core', 'Swap', []),
83 Module('', 'Tribool', []),
84 Module('numeric', 'uBLAS', []),
85 Module('utility', 'Value Initialized', []),
86 ]
87
88 # Cannot find the following modules in the documentation of boost
89 not_modules = ['beast', 'logic', 'mp11', 'winapi']
90
91 def eprint(message):
92 print(message, file=sys.stderr)
93
94 def get_library_names(jamfile):
95 libs = []
96 with open(jamfile) as jamfh:
97 jam = jamfh.read()
98 res = re.finditer(r'^lib[\s]+([A-Za-z0-9_]+)([^;]*);', jam, re.MULTILINE | re.DOTALL)
99 for matches in res:
100 if ':' in matches.group(2):
101 libs.append(matches.group(1))
102 res = re.finditer(r'^boost-lib[\s]+([A-Za-z0-9_]+)([^;]*);', jam, re.MULTILINE | re.DOTALL)
103 for matches in res:
104 if ':' in matches.group(2):
105 libs.append('boost_{}'.format(matches.group(1)))
106 return libs
107
108 def exists(modules, module):
109 return len([x for x in modules if x.dirname == module.dirname]) != 0
110
111 def get_modules(init=extra):
112 modules = init
113 for directory in os.listdir(LIBS):
114 if not os.path.isdir(os.path.join(LIBS, directory)):
115 continue
116 if directory in not_modules:
117 continue
118 jamfile = os.path.join(LIBS, directory, 'build', 'Jamfile.v2')
119 if os.path.isfile(jamfile):
120 libs = get_library_names(jamfile)
121 else:
122 libs = []
123 if directory in manual_map.keys():
124 modname = manual_map[directory]
125 else:
126 modname = directory.replace('_', ' ').title()
127 modules.append(Module(directory, modname, libs))
128 return modules
129
130 def get_modules_2():
131 modules = []
132 # The python module uses an older build system format and is not easily parseable.
133 # We add the python module libraries manually.
134 modules.append(Module('python', 'Python', ['boost_python', 'boost_python3', 'boost_numpy', 'boost_numpy3']))
135 for (root, dirs, files) in os.walk(LIBS):
136 for f in files:
137 if f == "libraries.json":
138 projectdir = os.path.dirname(root)
139
140 jamfile = os.path.join(projectdir, 'build', 'Jamfile.v2')
141 if os.path.isfile(jamfile):
142 libs = get_library_names(jamfile)
143 else:
144 libs = []
145
146 # Get metadata for module
147 jsonfile = os.path.join(root, f)
148 with open(jsonfile) as jsonfh:
149 boost_modules = json.loads(jsonfh.read())
150 if(isinstance(boost_modules, dict)):
151 boost_modules = [boost_modules]
152 for boost_module in boost_modules:
153 modules.append(Module(boost_module['key'], boost_module['name'], libs))
154
155 # Some subprojects do not have meta directory with json file. Find those
156 jsonless_modules = [x for x in get_modules([]) if not exists(modules, x)]
157 for module in jsonless_modules:
158 eprint("WARNING: {} does not have meta/libraries.json. Will guess pretty name '{}'".format(module.dirname, module.name))
159 modules.extend(jsonless_modules)
160
161 return modules
162
163 def main(args):
164 if not os.path.isdir(LIBS):
165 eprint("ERROR: script must be run in boost source directory")
166
167 # It will pick jsonless algorithm if 1 is given as argument
168 impl = 0
169 if len(args) > 1:
170 if args[1] == '1':
171 impl = 1
172
173 if impl == 1:
174 modules = get_modules()
175 else:
176 modules = get_modules_2()
177
178 sorted_modules = sorted(modules, key=lambda module: module.name.lower())
179 sorted_modules = [x[2] for x in sorted_modules if x[2]]
180 sorted_modules = sum(sorted_modules, [])
181 sorted_modules = [x for x in sorted_modules if x.startswith('boost')]
182
183 pp = pprint.PrettyPrinter()
184 pp.pprint(sorted_modules)
185
186 if __name__ == '__main__':
187 main(sys.argv)
188
[end of tools/boost_names.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
2f72d4db0921ec3ce7c4cd9803c7af9f4ac776cf
|
CMake for dependencies can't use modules
CMake for finding dependencies could easily be further extended with off-the-shelf `Find<foo>.cmake` scripts. The only thing that needs to happen for that is that the invocation of `cmake` needs to point `-DCMAKE_MODULE_PATH=<dir>` at the directory containing these scripts.
A good place for this might be here:
https://github.com/mesonbuild/meson/blob/90c9b868b20b11bb089fc5e0c634d5ed76fea0cb/mesonbuild/dependencies/base.py#L1422
As a simple suggestion, just redefine all `CMAKE_*` environment variables as CMake variables:
```python
for key in env.keys():
if key.startswith('CMAKE_'):
args.append('-D%s=%s' % (key, env[key]))
```
That should do the trick (some testing required).
As an added benefit, you can influence CMake further with other `CMAKE_*` variables in the environment.
|
I don't know if depending on environment variables is a good idea for this use case. I would rather add a `cmake_options` or `module_path` key to the dependency function than reading environment variables. This way everything stays in the meson file and the CMake dependency backend doesn't depend on the user's environment.
As far as I can tell, this should be sufficient for off-the-shelf CMake scripts. Am I missing a use case?
Not that I can think of. And I'm fine with solving things differently. This is how I hacked it for me locally, is all.
Since I don't know the meson code base very well (yet), and I don't particularly want to get side tracked right now, I figured I'd post my hack and let you figure out how to do it well - otherwise this would've been a PR :)
Err, that closing was an accident - sorry.
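For reference, the keyword-based approach discussed above boils down to something like this minimal sketch (hypothetical helper, not meson's API; the actual implementation is in the patch below):
```python
import os

def extra_cmake_args(kwargs, source_dir):
    """Normalize hypothetical 'cmake_module_path'/'cmake_args' keywords
    into extra cmake command-line arguments."""
    cm_path = kwargs.get('cmake_module_path', [])
    cm_args = kwargs.get('cmake_args', [])
    cm_path = cm_path if isinstance(cm_path, list) else [cm_path]
    cm_args = list(cm_args) if isinstance(cm_args, list) else [cm_args]
    # resolve relative module paths against the project source directory
    cm_path = [p if os.path.isabs(p) else os.path.join(source_dir, p)
               for p in cm_path]
    if cm_path:
        cm_args.append('-DCMAKE_MODULE_PATH=' + ';'.join(cm_path))
    return cm_args

print(extra_cmake_args({'cmake_module_path': 'cmake/modules'}, '/src/project'))
# -> ['-DCMAKE_MODULE_PATH=/src/project/cmake/modules']
```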
|
2019-01-16T11:33:44Z
|
<patch>
diff --git a/mesonbuild/dependencies/base.py b/mesonbuild/dependencies/base.py
--- a/mesonbuild/dependencies/base.py
+++ b/mesonbuild/dependencies/base.py
@@ -27,13 +27,14 @@
import platform
import itertools
import ctypes
+from typing import List
from enum import Enum
from pathlib import PurePath
from .. import mlog
from .. import mesonlib
from ..compilers import clib_langs
-from ..environment import BinaryTable
+from ..environment import BinaryTable, Environment
from ..mesonlib import MachineChoice, MesonException, OrderedSet, PerMachine
from ..mesonlib import Popen_safe, version_compare_many, version_compare, listify
@@ -908,7 +909,7 @@ class CMakeDependency(ExternalDependency):
def _gen_exception(self, msg):
return DependencyException('Dependency {} not found: {}'.format(self.name, msg))
- def __init__(self, name, environment, kwargs, language=None):
+ def __init__(self, name: str, environment: Environment, kwargs, language=None):
super().__init__('cmake', environment, language, kwargs)
self.name = name
self.is_libtool = False
@@ -956,16 +957,25 @@ def __init__(self, name, environment, kwargs, language=None):
return
modules = kwargs.get('modules', [])
+ cm_path = kwargs.get('cmake_module_path', [])
+ cm_args = kwargs.get('cmake_args', [])
if not isinstance(modules, list):
modules = [modules]
- self._detect_dep(name, modules)
+ if not isinstance(cm_path, list):
+ cm_path = [cm_path]
+ if not isinstance(cm_args, list):
+ cm_args = [cm_args]
+ cm_path = [x if os.path.isabs(x) else os.path.join(environment.get_source_dir(), x) for x in cm_path]
+ if cm_path:
+ cm_args += ['-DCMAKE_MODULE_PATH={}'.format(';'.join(cm_path))]
+ self._detect_dep(name, modules, cm_args)
def __repr__(self):
s = '<{0} {1}: {2} {3}>'
return s.format(self.__class__.__name__, self.name, self.is_found,
self.version_reqs)
- def _detect_dep(self, name, modules):
+ def _detect_dep(self, name: str, modules: List[str], args: List[str]):
# Detect a dependency with CMake using the '--find-package' mode
# and the trace output (stderr)
#
@@ -981,7 +991,7 @@ def _detect_dep(self, name, modules):
mlog.debug('Try CMake generator: {}'.format(i if len(i) > 0 else 'auto'))
# Prepare options
- cmake_opts = ['--trace-expand', '-DNAME={}'.format(name), '.']
+ cmake_opts = ['--trace-expand', '-DNAME={}'.format(name)] + args + ['.']
if len(i) > 0:
cmake_opts = ['-G', i] + cmake_opts
diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py
--- a/mesonbuild/interpreter.py
+++ b/mesonbuild/interpreter.py
@@ -1926,6 +1926,7 @@ def get_cross_property_method(self, args, kwargs):
'main',
'method',
'modules',
+ 'cmake_module_path',
'optional_modules',
'native',
'not_found_message',
@@ -1933,6 +1934,7 @@ def get_cross_property_method(self, args, kwargs):
'static',
'version',
'private_headers',
+ 'cmake_args',
},
'declare_dependency': {'include_directories',
'link_with',
@@ -2944,10 +2946,10 @@ def _handle_featurenew_dependencies(self, name):
elif name == 'openmp':
FeatureNew('OpenMP Dependency', '0.46.0').use(self.subproject)
+ @FeatureNewKwargs('dependency', '0.50.0', ['not_found_message', 'cmake_module_path', 'cmake_args'])
@FeatureNewKwargs('dependency', '0.49.0', ['disabler'])
@FeatureNewKwargs('dependency', '0.40.0', ['method'])
@FeatureNewKwargs('dependency', '0.38.0', ['default_options'])
- @FeatureNewKwargs('dependency', '0.50.0', ['not_found_message'])
@disablerIfNotFound
@permittedKwargs(permitted_kwargs['dependency'])
def func_dependency(self, node, args, kwargs):
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-8680
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH: indexing support for reversed is_monotonic
conceptually not hard (and you can look at the slices to figure this out, e.g. if start >= end or start > last_endpoint, you can just do a reversed is_monotonic); to avoid a perf hit, I think you'd then just reverse the searching operations for slices, which would need an `is_monotonic_decreasing` here: https://github.com/pydata/pandas/blob/master/pandas/core/index.py#L1764
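To illustrate the reversed-search idea (a minimal NumPy sketch; `descending_slice_locs` is a hypothetical helper, not pandas internals):
```python
import numpy as np

def descending_slice_locs(values, start, stop):
    """Locate label-based slice bounds on a monotonically decreasing array
    by binary-searching its reversed (ascending) view."""
    rev = values[::-1]      # ascending view, no data copy
    n = len(values)
    # swap the search sides and mirror positions back to the original order
    begin = 0 if start is None else n - rev.searchsorted(start, side='right')
    end = n if stop is None else n - rev.searchsorted(stop, side='left')
    return begin, end

idx = np.array([500.0, 499.0, 30.5, 10.5, 1.0])    # decreasing float index
b, e = descending_slice_locs(idx, 30.0, 10.0)
print(idx[b:e])                                    # -> [10.5]

b, e = descending_slice_locs(np.array([4.0, 3.0, 2.0, 1.0]), 3.0, 2.0)
print(np.array([4.0, 3.0, 2.0, 1.0])[b:e])         # -> [3. 2.] (endpoints kept)
```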
Hello,
I am working with spectral data, which for various spectral units such as
wavenumber, is often presented with decreasing spectral values along the
index. For example:
http://www.chemguide.co.uk/analysis/ir/irpropanone.GIF
In my dataframe, the index is stored in descending order (e.g. 500, 499,
498... 2, 1); however, when I try to slice using .ix[], it becomes
impossible, giving me a long key error.
Likewise, df.plot() is sorting the xvalues from low to high, so I need to
reverse the plot axis after the fact. Not really a big deal, but wondered
if there's a better workaround.
Any suggestions?
**Note: This behavior works fine for int64 index:**
```
#Create dataframe and reverse index
x = DataFrame(np.random.randn(50,50))
x.index = x.index[::-1]
#Slice 30-10
x.ix[30:10, ::]
```
But fails for float index
```
x = DataFrame(np.random.randn(50,50),
index=np.linspace(0,50))
x.index = x.index[::-1]
x.ix[30.0:10.0, ::]
```
With error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-68-1af3b9a79d3d> in <module>()
11 x.index = x.index[::-1]
12
---> 13 x.ix[30.0:10.0, ::]
14
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
67 pass
68
---> 69 return self._getitem_tuple(key)
70 else:
71 return self._getitem_axis(key, axis=0)
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
673 continue
674
--> 675 retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
676
677 return retval
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis, validate_iterable)
859 labels = self.obj._get_axis(axis)
860 if isinstance(key, slice):
--> 861 return self._get_slice_axis(key, axis=axis)
862 elif _is_list_like(key) and not (isinstance(key, tuple) and
863 isinstance(labels, MultiIndex)):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_slice_axis(self, slice_obj, axis)
1106 if not _need_slice(slice_obj):
1107 return obj
-> 1108 indexer = self._convert_slice_indexer(slice_obj, axis)
1109
1110 if isinstance(indexer, slice):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _convert_slice_indexer(self, key, axis)
161 # if we are accessing via lowered dim, use the last dim
162 ax = self.obj._get_axis(min(axis, self.ndim - 1))
--> 163 return ax._convert_slice_indexer(key, typ=self.name)
164
165 def _has_valid_setitem_indexer(self, indexer):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in _convert_slice_indexer(self, key, typ)
2027
2028 # translate to locations
-> 2029 return self.slice_indexer(key.start, key.stop, key.step)
2030
2031 def get_value(self, series, key):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in slice_indexer(self, start, end, step)
1704 This function assumes that the data is sorted, so use at your own peril
1705 """
-> 1706 start_slice, end_slice = self.slice_locs(start, end)
1707
1708 # return a slice
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in slice_locs(self, start, end)
1777
1778 start_slice = _get_slice(0, offset=0, search_side='left',
-> 1779 slice_property='start', search_value=start)
1780 end_slice = _get_slice(len(self), offset=1, search_side='right',
1781 slice_property='stop', search_value=end)
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in _get_slice(starting_value, offset, search_side, slice_property, search_value)
1746
1747 try:
-> 1748 slc = self.get_loc(search_value)
1749
1750 if not is_unique:
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in get_loc(self, key)
2091 except (TypeError, NotImplementedError):
2092 pass
-> 2093 return super(Float64Index, self).get_loc(key)
2094
2095 @property
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in get_loc(self, key)
1179 loc : int if unique index, possibly slice or mask if not
1180 """
-> 1181 return self._engine.get_loc(_values_from_object(key))
1182
1183 def get_value(self, series, key):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/index.so in pandas.index.IndexEngine.get_loc (pandas/index.c:3354)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/index.so in pandas.index.IndexEngine.get_loc (pandas/index.c:3234)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/hashtable.so in pandas.hashtable.Float64HashTable.get_item (pandas/hashtable.c:9018)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/hashtable.so in pandas.hashtable.Float64HashTable.get_item (pandas/hashtable.c:8962)()
KeyError: 30.0
```
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.7.0 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](https://github.com/pydata/numexpr)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the PyData mailing list / Google group:
222
223 https://groups.google.com/forum/#!forum/pydata
224
[end of README.md]
[start of setup.py]
1 #!/usr/bin/env python
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import os
10 import sys
11 import shutil
12 import warnings
13 import re
14
15 # may need to work around setuptools bug by providing a fake Pyrex
16 try:
17 import Cython
18 sys.path.insert(0, os.path.join(os.path.dirname(__file__), "fake_pyrex"))
19 except ImportError:
20 pass
21
22 # try bootstrapping setuptools if it doesn't exist
23 try:
24 import pkg_resources
25 try:
26 pkg_resources.require("setuptools>=0.6c5")
27 except pkg_resources.VersionConflict:
28 from ez_setup import use_setuptools
29 use_setuptools(version="0.6c5")
30 from setuptools import setup, Command
31 _have_setuptools = True
32 except ImportError:
33 # no setuptools installed
34 from distutils.core import setup, Command
35 _have_setuptools = False
36
37 setuptools_kwargs = {}
38 min_numpy_ver = '1.7.0'
39 if sys.version_info[0] >= 3:
40
41 setuptools_kwargs = {
42 'zip_safe': False,
43 'install_requires': ['python-dateutil >= 2',
44 'pytz >= 2011k',
45 'numpy >= %s' % min_numpy_ver],
46 'setup_requires': ['numpy >= %s' % min_numpy_ver],
47 }
48 if not _have_setuptools:
49 sys.exit("need setuptools/distribute for Py3k"
50 "\n$ pip install distribute")
51
52 else:
53 setuptools_kwargs = {
54 'install_requires': ['python-dateutil',
55 'pytz >= 2011k',
56 'numpy >= %s' % min_numpy_ver],
57 'setup_requires': ['numpy >= %s' % min_numpy_ver],
58 'zip_safe': False,
59 }
60
61 if not _have_setuptools:
62 try:
63 import numpy
64 import dateutil
65 setuptools_kwargs = {}
66 except ImportError:
67 sys.exit("install requires: 'python-dateutil < 2','numpy'."
68 " use pip or easy_install."
69 "\n $ pip install 'python-dateutil < 2' 'numpy'")
70
71 from distutils.extension import Extension
72 from distutils.command.build import build
73 from distutils.command.sdist import sdist
74 from distutils.command.build_ext import build_ext as _build_ext
75
76 try:
77 from Cython.Distutils import build_ext as _build_ext
78 # from Cython.Distutils import Extension # to get pyrex debugging symbols
79 cython = True
80 except ImportError:
81 cython = False
82
83 from os.path import join as pjoin
84
85
86 class build_ext(_build_ext):
87 def build_extensions(self):
88 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include')
89
90 for ext in self.extensions:
91 if hasattr(ext, 'include_dirs') and not numpy_incl in ext.include_dirs:
92 ext.include_dirs.append(numpy_incl)
93 _build_ext.build_extensions(self)
94
95
96 DESCRIPTION = ("Powerful data structures for data analysis, time series,"
97 "and statistics")
98 LONG_DESCRIPTION = """
99 **pandas** is a Python package providing fast, flexible, and expressive data
100 structures designed to make working with structured (tabular, multidimensional,
101 potentially heterogeneous) and time series data both easy and intuitive. It
102 aims to be the fundamental high-level building block for doing practical,
103 **real world** data analysis in Python. Additionally, it has the broader goal
104 of becoming **the most powerful and flexible open source data analysis /
105 manipulation tool available in any language**. It is already well on its way
106 toward this goal.
107
108 pandas is well suited for many different kinds of data:
109
110 - Tabular data with heterogeneously-typed columns, as in an SQL table or
111 Excel spreadsheet
112 - Ordered and unordered (not necessarily fixed-frequency) time series data.
113 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
114 column labels
115 - Any other form of observational / statistical data sets. The data actually
116 need not be labeled at all to be placed into a pandas data structure
117
118 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
119 (2-dimensional), handle the vast majority of typical use cases in finance,
120 statistics, social science, and many areas of engineering. For R users,
121 DataFrame provides everything that R's ``data.frame`` provides and much
122 more. pandas is built on top of `NumPy <http://www.numpy.org>`__ and is
123 intended to integrate well within a scientific computing environment with many
124 other 3rd party libraries.
125
126 Here are just a few of the things that pandas does well:
127
128 - Easy handling of **missing data** (represented as NaN) in floating point as
129 well as non-floating point data
130 - Size mutability: columns can be **inserted and deleted** from DataFrame and
131 higher dimensional objects
132 - Automatic and explicit **data alignment**: objects can be explicitly
133 aligned to a set of labels, or the user can simply ignore the labels and
134 let `Series`, `DataFrame`, etc. automatically align the data for you in
135 computations
136 - Powerful, flexible **group by** functionality to perform
137 split-apply-combine operations on data sets, for both aggregating and
138 transforming data
139 - Make it **easy to convert** ragged, differently-indexed data in other
140 Python and NumPy data structures into DataFrame objects
141 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
142 of large data sets
143 - Intuitive **merging** and **joining** data sets
144 - Flexible **reshaping** and pivoting of data sets
145 - **Hierarchical** labeling of axes (possible to have multiple labels per
146 tick)
147 - Robust IO tools for loading data from **flat files** (CSV and delimited),
148 Excel files, databases, and saving / loading data from the ultrafast **HDF5
149 format**
150 - **Time series**-specific functionality: date range generation and frequency
151 conversion, moving window statistics, moving window linear regressions,
152 date shifting and lagging, etc.
153
154 Many of these principles are here to address the shortcomings frequently
155 experienced using other languages / scientific research environments. For data
156 scientists, working with data is typically divided into multiple stages:
157 munging and cleaning data, analyzing / modeling it, then organizing the results
158 of the analysis into a form suitable for plotting or tabular display. pandas is
159 the ideal tool for all of these tasks.
160
161 Note
162 ----
163 Windows binaries built against NumPy 1.8.1
164 """
165
166 DISTNAME = 'pandas'
167 LICENSE = 'BSD'
168 AUTHOR = "The PyData Development Team"
169 EMAIL = "[email protected]"
170 URL = "http://pandas.pydata.org"
171 DOWNLOAD_URL = ''
172 CLASSIFIERS = [
173 'Development Status :: 4 - Beta',
174 'Environment :: Console',
175 'Operating System :: OS Independent',
176 'Intended Audience :: Science/Research',
177 'Programming Language :: Python',
178 'Programming Language :: Python :: 2',
179 'Programming Language :: Python :: 3',
180 'Programming Language :: Python :: 2.6',
181 'Programming Language :: Python :: 2.7',
182 'Programming Language :: Python :: 3.2',
183 'Programming Language :: Python :: 3.3',
184 'Programming Language :: Python :: 3.4',
185 'Programming Language :: Cython',
186 'Topic :: Scientific/Engineering',
187 ]
188
189 MAJOR = 0
190 MINOR = 15
191 MICRO = 0
192 ISRELEASED = False
193 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
194 QUALIFIER = ''
195
196 FULLVERSION = VERSION
197 write_version = True
198
199 if not ISRELEASED:
200 import subprocess
201 FULLVERSION += '.dev'
202
203 pipe = None
204 for cmd in ['git','git.cmd']:
205 try:
206 pipe = subprocess.Popen([cmd, "describe", "--always", "--match", "v[0-9]*"],
207 stdout=subprocess.PIPE)
208 (so,serr) = pipe.communicate()
209 if pipe.returncode == 0:
210 break
211 except:
212 pass
213
214 if pipe is None or pipe.returncode != 0:
215 # no git, or not in git dir
216 if os.path.exists('pandas/version.py'):
217 warnings.warn("WARNING: Couldn't get git revision, using existing pandas/version.py")
218 write_version = False
219 else:
220 warnings.warn("WARNING: Couldn't get git revision, using generic version string")
221 else:
222 # have git, in git dir, but may have used a shallow clone (travis does this)
223 rev = so.strip()
224 # makes distutils blow up on Python 2.7
225 if sys.version_info[0] >= 3:
226 rev = rev.decode('ascii')
227
228 if not rev.startswith('v') and re.match("[a-zA-Z0-9]{7,9}",rev):
229 # partial clone, manually construct version string
230 # this is the format before we started using git-describe
231 # to get an ordering on dev version strings.
232 rev ="v%s.dev-%s" % (VERSION, rev)
233
234 # Strip leading v from tags format "vx.y.z" to get th version string
235 FULLVERSION = rev.lstrip('v')
236
237 else:
238 FULLVERSION += QUALIFIER
239
240
241 def write_version_py(filename=None):
242 cnt = """\
243 version = '%s'
244 short_version = '%s'
245 """
246 if not filename:
247 filename = os.path.join(
248 os.path.dirname(__file__), 'pandas', 'version.py')
249
250 a = open(filename, 'w')
251 try:
252 a.write(cnt % (FULLVERSION, VERSION))
253 finally:
254 a.close()
255
256 if write_version:
257 write_version_py()
258
259 class CleanCommand(Command):
260 """Custom distutils command to clean the .so and .pyc files."""
261
262 user_options = [("all", "a", "")]
263
264 def initialize_options(self):
265 self.all = True
266 self._clean_me = []
267 self._clean_trees = []
268 self._clean_exclude = ['np_datetime.c',
269 'np_datetime_strings.c',
270 'period.c',
271 'tokenizer.c',
272 'io.c',
273 'ujson.c',
274 'objToJSON.c',
275 'JSONtoObj.c',
276 'ultrajsonenc.c',
277 'ultrajsondec.c',
278 ]
279
280 for root, dirs, files in os.walk('pandas'):
281 for f in files:
282 if f in self._clean_exclude:
283 continue
284
285 # XXX
286 if 'ujson' in f:
287 continue
288
289 if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o',
290 '.pyo',
291 '.pyd', '.c', '.orig'):
292 self._clean_me.append(pjoin(root, f))
293 for d in dirs:
294 if d == '__pycache__':
295 self._clean_trees.append(pjoin(root, d))
296
297 for d in ('build', 'dist'):
298 if os.path.exists(d):
299 self._clean_trees.append(d)
300
301 def finalize_options(self):
302 pass
303
304 def run(self):
305 for clean_me in self._clean_me:
306 try:
307 os.unlink(clean_me)
308 except Exception:
309 pass
310 for clean_tree in self._clean_trees:
311 try:
312 shutil.rmtree(clean_tree)
313 except Exception:
314 pass
315
316
317 class CheckSDist(sdist):
318 """Custom sdist that ensures Cython has compiled all pyx files to c."""
319
320 _pyxfiles = ['pandas/lib.pyx',
321 'pandas/hashtable.pyx',
322 'pandas/tslib.pyx',
323 'pandas/index.pyx',
324 'pandas/algos.pyx',
325 'pandas/parser.pyx',
326 'pandas/src/sparse.pyx',
327 'pandas/src/testing.pyx']
328
329 def initialize_options(self):
330 sdist.initialize_options(self)
331
332 '''
333 self._pyxfiles = []
334 for root, dirs, files in os.walk('pandas'):
335 for f in files:
336 if f.endswith('.pyx'):
337 self._pyxfiles.append(pjoin(root, f))
338 '''
339
340 def run(self):
341 if 'cython' in cmdclass:
342 self.run_command('cython')
343 else:
344 for pyxfile in self._pyxfiles:
345 cfile = pyxfile[:-3] + 'c'
346 msg = "C-source file '%s' not found." % (cfile) +\
347 " Run 'setup.py cython' before sdist."
348 assert os.path.isfile(cfile), msg
349 sdist.run(self)
350
351
352 class CheckingBuildExt(build_ext):
353 """Subclass build_ext to get clearer report if Cython is necessary."""
354
355 def check_cython_extensions(self, extensions):
356 for ext in extensions:
357 for src in ext.sources:
358 if not os.path.exists(src):
359 raise Exception("""Cython-generated file '%s' not found.
360 Cython is required to compile pandas from a development branch.
361 Please install Cython or download a release package of pandas.
362 """ % src)
363
364 def build_extensions(self):
365 self.check_cython_extensions(self.extensions)
366 build_ext.build_extensions(self)
367
368
369 class CythonCommand(build_ext):
370 """Custom distutils command subclassed from Cython.Distutils.build_ext
371 to compile pyx->c, and stop there. All this does is override the
372 C-compile method build_extension() with a no-op."""
373 def build_extension(self, ext):
374 pass
375
376
377 class DummyBuildSrc(Command):
378 """ numpy's build_src command interferes with Cython's build_ext.
379 """
380 user_options = []
381
382 def initialize_options(self):
383 self.py_modules_dict = {}
384
385 def finalize_options(self):
386 pass
387
388 def run(self):
389 pass
390
391 cmdclass = {'clean': CleanCommand,
392 'build': build,
393 'sdist': CheckSDist}
394
395 try:
396 from wheel.bdist_wheel import bdist_wheel
397
398 class BdistWheel(bdist_wheel):
399 def get_tag(self):
400 tag = bdist_wheel.get_tag(self)
401 repl = 'macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64'
402 if tag[2] == 'macosx_10_6_intel':
403 tag = (tag[0], tag[1], repl)
404 return tag
405 cmdclass['bdist_wheel'] = BdistWheel
406 except ImportError:
407 pass
408
409 if cython:
410 suffix = '.pyx'
411 cmdclass['build_ext'] = CheckingBuildExt
412 cmdclass['cython'] = CythonCommand
413 else:
414 suffix = '.c'
415 cmdclass['build_src'] = DummyBuildSrc
416 cmdclass['build_ext'] = CheckingBuildExt
417
418 lib_depends = ['reduce', 'inference', 'properties']
419
420
421 def srcpath(name=None, suffix='.pyx', subdir='src'):
422 return pjoin('pandas', subdir, name + suffix)
423
424 if suffix == '.pyx':
425 lib_depends = [srcpath(f, suffix='.pyx') for f in lib_depends]
426 lib_depends.append('pandas/src/util.pxd')
427 else:
428 lib_depends = []
429 plib_depends = []
430
431 common_include = ['pandas/src/klib', 'pandas/src']
432
433
434 def pxd(name):
435 return os.path.abspath(pjoin('pandas', name + '.pxd'))
436
437
438 lib_depends = lib_depends + ['pandas/src/numpy_helper.h',
439 'pandas/src/parse_helper.h']
440
441
442 tseries_depends = ['pandas/src/datetime/np_datetime.h',
443 'pandas/src/datetime/np_datetime_strings.h',
444 'pandas/src/period.h']
445
446
447 # some linux distros require it
448 libraries = ['m'] if 'win32' not in sys.platform else []
449
450 ext_data = dict(
451 lib={'pyxfile': 'lib',
452 'pxdfiles': [],
453 'depends': lib_depends},
454 hashtable={'pyxfile': 'hashtable',
455 'pxdfiles': ['hashtable']},
456 tslib={'pyxfile': 'tslib',
457 'depends': tseries_depends,
458 'sources': ['pandas/src/datetime/np_datetime.c',
459 'pandas/src/datetime/np_datetime_strings.c',
460 'pandas/src/period.c']},
461 index={'pyxfile': 'index',
462 'sources': ['pandas/src/datetime/np_datetime.c',
463 'pandas/src/datetime/np_datetime_strings.c']},
464 algos={'pyxfile': 'algos',
465 'depends': [srcpath('generated', suffix='.pyx'),
466 srcpath('join', suffix='.pyx')]},
467 parser=dict(pyxfile='parser',
468 depends=['pandas/src/parser/tokenizer.h',
469 'pandas/src/parser/io.h',
470 'pandas/src/numpy_helper.h'],
471 sources=['pandas/src/parser/tokenizer.c',
472 'pandas/src/parser/io.c'])
473 )
474
475 extensions = []
476
477 for name, data in ext_data.items():
478 sources = [srcpath(data['pyxfile'], suffix=suffix, subdir='')]
479 pxds = [pxd(x) for x in data.get('pxdfiles', [])]
480 if suffix == '.pyx' and pxds:
481 sources.extend(pxds)
482
483 sources.extend(data.get('sources', []))
484
485 include = data.get('include', common_include)
486
487 obj = Extension('pandas.%s' % name,
488 sources=sources,
489 depends=data.get('depends', []),
490 include_dirs=include)
491
492 extensions.append(obj)
493
494
495 sparse_ext = Extension('pandas._sparse',
496 sources=[srcpath('sparse', suffix=suffix)],
497 include_dirs=[],
498 libraries=libraries)
499
500 extensions.extend([sparse_ext])
501
502 testing_ext = Extension('pandas._testing',
503 sources=[srcpath('testing', suffix=suffix)],
504 include_dirs=[],
505 libraries=libraries)
506
507 extensions.extend([testing_ext])
508
509 #----------------------------------------------------------------------
510 # msgpack stuff here
511
512 if sys.byteorder == 'big':
513 macros = [('__BIG_ENDIAN__', '1')]
514 else:
515 macros = [('__LITTLE_ENDIAN__', '1')]
516
517 msgpack_ext = Extension('pandas.msgpack',
518 sources = [srcpath('msgpack',
519 suffix=suffix if suffix == '.pyx' else '.cpp',
520 subdir='')],
521 language='c++',
522 include_dirs=common_include,
523 define_macros=macros)
524
525 extensions.append(msgpack_ext)
526
527 # if not ISRELEASED:
528 # extensions.extend([sandbox_ext])
529
530 if suffix == '.pyx' and 'setuptools' in sys.modules:
531 # undo dumb setuptools bug clobbering .pyx sources back to .c
532 for ext in extensions:
533 if ext.sources[0].endswith(('.c','.cpp')):
534 root, _ = os.path.splitext(ext.sources[0])
535 ext.sources[0] = root + suffix
536
537 ujson_ext = Extension('pandas.json',
538 depends=['pandas/src/ujson/lib/ultrajson.h',
539 'pandas/src/numpy_helper.h'],
540 sources=['pandas/src/ujson/python/ujson.c',
541 'pandas/src/ujson/python/objToJSON.c',
542 'pandas/src/ujson/python/JSONtoObj.c',
543 'pandas/src/ujson/lib/ultrajsonenc.c',
544 'pandas/src/ujson/lib/ultrajsondec.c',
545 'pandas/src/datetime/np_datetime.c',
546 'pandas/src/datetime/np_datetime_strings.c'],
547 include_dirs=['pandas/src/ujson/python',
548 'pandas/src/ujson/lib',
549 'pandas/src/datetime'] + common_include,
550 extra_compile_args=['-D_GNU_SOURCE'])
551
552
553 extensions.append(ujson_ext)
554
555
556 if _have_setuptools:
557 setuptools_kwargs["test_suite"] = "nose.collector"
558
559 # The build cache system does string matching below this point.
560 # if you change something, be careful.
561
562 setup(name=DISTNAME,
563 version=FULLVERSION,
564 maintainer=AUTHOR,
565 packages=['pandas',
566 'pandas.compat',
567 'pandas.computation',
568 'pandas.computation.tests',
569 'pandas.core',
570 'pandas.io',
571 'pandas.rpy',
572 'pandas.sandbox',
573 'pandas.sparse',
574 'pandas.sparse.tests',
575 'pandas.stats',
576 'pandas.util',
577 'pandas.tests',
578 'pandas.tests.test_msgpack',
579 'pandas.tools',
580 'pandas.tools.tests',
581 'pandas.tseries',
582 'pandas.tseries.tests',
583 'pandas.io.tests',
584 'pandas.io.tests.test_json',
585 'pandas.stats.tests',
586 ],
587 package_data={'pandas.io': ['tests/data/legacy_hdf/*.h5',
588 'tests/data/legacy_pickle/0.10.1/*.pickle',
589 'tests/data/legacy_pickle/0.11.0/*.pickle',
590 'tests/data/legacy_pickle/0.12.0/*.pickle',
591 'tests/data/legacy_pickle/0.13.0/*.pickle',
592 'tests/data/legacy_pickle/0.14.0/*.pickle',
593 'tests/data/*.csv',
594 'tests/data/*.dta',
595 'tests/data/*.txt',
596 'tests/data/*.xls',
597 'tests/data/*.xlsx',
598 'tests/data/*.xlsm',
599 'tests/data/*.table',
600 'tests/data/*.html',
601 'tests/data/html_encoding/*.html',
602 'tests/test_json/data/*.json'],
603 'pandas.tools': ['tests/*.csv'],
604 'pandas.tests': ['data/*.pickle',
605 'data/*.csv'],
606 'pandas.tseries.tests': ['data/*.pickle',
607 'data/*.csv']
608 },
609 ext_modules=extensions,
610 maintainer_email=EMAIL,
611 description=DESCRIPTION,
612 license=LICENSE,
613 cmdclass=cmdclass,
614 url=URL,
615 download_url=DOWNLOAD_URL,
616 long_description=LONG_DESCRIPTION,
617 classifiers=CLASSIFIERS,
618 platforms='any',
619 **setuptools_kwargs)
620
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
02de8530419e253eb51ca8ec17e195deb455cd75
|
ENH: indexing support for reversed is_monotonic
conceptually not hard (and you can look at the slices to figure this out, e.g. if start >= end or start > last_endpoint, you can just do a reversed is_monotonic); to avoid a perf hit, I think you'd then just reverse the searching operations for slices, which would need an `is_monotonic_decreasing` here: https://github.com/pydata/pandas/blob/master/pandas/core/index.py#L1764
Hello,
I am working with spectral data, which for various spectral units such as
wavenumber, is often presented with decreasing spectral values along the
index. For example:
http://www.chemguide.co.uk/analysis/ir/irpropanone.GIF
In my dataframe, the index is stored in descending order (e.g. 500, 499,
498... 2, 1); however, when I try to slice using .ix[], it becomes
impossible, giving me a long key error.
Likewise, df.plot() is sorting the xvalues from low to high, so I need to
reverse the plot axis after the fact. Not really a big deal, but wondered
if there's a better workaround.
Any suggestions?
**Note: This behavior works fine for int64 index:**
```
#Create dataframe and reverse index
x = DataFrame(np.random.randn(50,50))
x.index = x.index[::-1]
#Slice 30-10
x.ix[30:10, ::]
```
But fails for float index
```
x = DataFrame(np.random.randn(50,50),
index=np.linspace(0,50))
x.index = x.index[::-1]
x.ix[30.0:10.0, ::]
```
With error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-68-1af3b9a79d3d> in <module>()
11 x.index = x.index[::-1]
12
---> 13 x.ix[30.0:10.0, ::]
14
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
67 pass
68
---> 69 return self._getitem_tuple(key)
70 else:
71 return self._getitem_axis(key, axis=0)
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
673 continue
674
--> 675 retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
676
677 return retval
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis, validate_iterable)
859 labels = self.obj._get_axis(axis)
860 if isinstance(key, slice):
--> 861 return self._get_slice_axis(key, axis=axis)
862 elif _is_list_like(key) and not (isinstance(key, tuple) and
863 isinstance(labels, MultiIndex)):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_slice_axis(self, slice_obj, axis)
1106 if not _need_slice(slice_obj):
1107 return obj
-> 1108 indexer = self._convert_slice_indexer(slice_obj, axis)
1109
1110 if isinstance(indexer, slice):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.pyc in _convert_slice_indexer(self, key, axis)
161 # if we are accessing via lowered dim, use the last dim
162 ax = self.obj._get_axis(min(axis, self.ndim - 1))
--> 163 return ax._convert_slice_indexer(key, typ=self.name)
164
165 def _has_valid_setitem_indexer(self, indexer):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in _convert_slice_indexer(self, key, typ)
2027
2028 # translate to locations
-> 2029 return self.slice_indexer(key.start, key.stop, key.step)
2030
2031 def get_value(self, series, key):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in slice_indexer(self, start, end, step)
1704 This function assumes that the data is sorted, so use at your own peril
1705 """
-> 1706 start_slice, end_slice = self.slice_locs(start, end)
1707
1708 # return a slice
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in slice_locs(self, start, end)
1777
1778 start_slice = _get_slice(0, offset=0, search_side='left',
-> 1779 slice_property='start', search_value=start)
1780 end_slice = _get_slice(len(self), offset=1, search_side='right',
1781 slice_property='stop', search_value=end)
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in _get_slice(starting_value, offset, search_side, slice_property, search_value)
1746
1747 try:
-> 1748 slc = self.get_loc(search_value)
1749
1750 if not is_unique:
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in get_loc(self, key)
2091 except (TypeError, NotImplementedError):
2092 pass
-> 2093 return super(Float64Index, self).get_loc(key)
2094
2095 @property
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/index.pyc in get_loc(self, key)
1179 loc : int if unique index, possibly slice or mask if not
1180 """
-> 1181 return self._engine.get_loc(_values_from_object(key))
1182
1183 def get_value(self, series, key):
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/index.so in pandas.index.IndexEngine.get_loc (pandas/index.c:3354)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/index.so in pandas.index.IndexEngine.get_loc (pandas/index.c:3234)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/hashtable.so in pandas.hashtable.Float64HashTable.get_item (pandas/hashtable.c:9018)()
/home/glue/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/hashtable.so in pandas.hashtable.Float64HashTable.get_item (pandas/hashtable.c:8962)()
KeyError: 30.0
```
|
always `pd.show_versions()`
you can use:
`x[(x.index>10.0)&(x.index<30.0)]`
it's not clear what:
`x.ix[30.0:10.0,:]` actually would mean for a reversed index, as neither point is in the index. I suppose it _could_ mean the above, but I would have to think about that.
For an integer index, it's clear, because the end-points are included.
@cpcloud
@jorisvandenbossche
Sorry, here's `show_versions()`:
In [4]: pd.show_versions()
## INSTALLED VERSIONS
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.32-62-generic
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.utf8
pandas: 0.14.1
nose: 1.3.0
Cython: None
numpy: 1.8.0
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 3.0.0-dev
sphinx: None
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.4
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.3.1
openpyxl: 2.0.3
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
Thanks for the solution. I'll use it for sure, but the ix behavior should work, right? This is such a common index type in spectral data, I'd hate to require a separate slice call for this use case. Although, if it's not likely to be changed in the future, I could probably just add my own slice functions that bury this under the hood unbeknownst to users. What do you recommend?
pls review docs here as well: http://pandas.pydata.org/pandas-docs/stable/indexing.html#float64index
I think this was not implemented because it's not 'cheap'. In the sense that it _would_ work if you _knew_ that the index was monotonic but reversed (iow we would need an `is_monotonic_increasing` and `is_monotonic_decreasing`, and could then just reverse the searching operators).
Well, that makes sense, thanks. I'll either make my own slice wrapper, or
raise a warning to users if they try to slice reversed index data.
we'll put it on the enhancement list. if you are interested in implementing it, step up!
Alright, thanks. I would take a crack, but really feel like I don't know the pandas code base well enough to guarantee my solution will do more good than harm.
I'm considering taking a crack at this, but there's one edge case I would like to clarify first. In particular: how do we want to handle slices with mis-matched ordering, e.g., `x.loc[10:30]` for a descending index or `x.loc[30:10]` for an ascending index.
Keeping track of whether an index is descending or ascending is one of those details that's nice to handle for the user, so it would be nice if these "just work" by switching `start`/`stop` in these cases. It seems like this would be handy when the index is generally monotonic but can go in either direction, e.g., as is the case for a number of physical variables.
Can anyone think of unfortunate consequences to this sort of interchanging?
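One way to picture the interchanging being proposed (a toy sketch only, not pandas code; whether pandas should actually do this is exactly the open question above):
```python
def normalize_slice_bounds(start, stop, index_increasing):
    """Swap start/stop when the requested order disagrees with the
    direction of a monotonic index (toy helper, not pandas API)."""
    if start is None or stop is None:
        return start, stop
    if (start <= stop) != index_increasing:
        return stop, start
    return start, stop

# descending index, user writes .loc[10:30] -> treat it as .loc[30:10]
print(normalize_slice_bounds(10, 30, index_increasing=False))   # (30, 10)
# ascending index, user writes .loc[30:10] -> treat it as .loc[10:30]
print(normalize_slice_bounds(30, 10, index_increasing=True))    # (10, 30)
```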
@shoyer you can add to `is_monotonic_float64` et al. in `generated.pyx` and just return, say, -1 if it is negative monotonic; then it would 'keep track' internally (just as is_monotonic does now, but for increasing).
Then I think you could easily just swap the start/stop in those cases.
@jreback Excellent, I'll take a look. I'd like this to work for `Int64Index`, too, for the sake of consistency, although the typical case is floating point data.
you can do for all types - just change the template
I just wanted to point out that I did use @jreback's suggestion for the boolean expression and just put that into my `__getitem__()` indexer calls somewhere, and haven't encountered any problems since. This is probably a hacky solution, but for my use case it works fine.
Can I ask how monotonicity is determined? Are all values inspected, or just the first and last? And does `is_monotonic_float64` already exist, or is this what is being proposed? It would help me if I had access to this attribute as well for when we do plotting. In fact, that might be an issue to consider. Matplotlib will try to plot from low to high values, I believe, and I had to actually reverse the xlimits on calls to df.plot(). Unless my memory is mixed up...
Here is where `is_monotonic` is defined: https://github.com/pydata/pandas/blob/c7bfb4e16411516ca9108af95013bc3400ba38ad/pandas/src/generate_code.py#L542
This should be an easy fix to extend to identify descending indexes. It does indeed check all values (when necessary).
The advantage of using slice syntax is that it uses numpy.ndarray views instead of making copies, so it's much faster. Also, various scientific file formats (e.g., netCDF, HDF5, OpenDAP) support reading slices directly but are much slower or have more limited support for array indexing. The latter will be handy for [xray](https://github.com/xray/xray), and it will get that for free when I add this to pandas.
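For readers who don't want to dig into the Cython template, a rough pure-Python equivalent of the extended check being discussed might look like this (illustrative only; the real generated code also handles NaN/NaT and early-exits with a `None` uniqueness flag):
```python
def monotonic_info(values):
    """Return (is_increasing, is_decreasing, is_unique) in a single pass."""
    is_inc = is_dec = is_unique = True
    for prev, cur in zip(values, values[1:]):
        if cur < prev:
            is_inc = False
        elif cur > prev:
            is_dec = False
        else:                       # cur == prev
            is_unique = False
        if not is_inc and not is_dec:
            return False, False, False   # not monotonic in either direction
    return is_inc, is_dec, is_unique

print(monotonic_info([4, 3, 2, 1]))   # (False, True, True)
print(monotonic_info([1, 2, 2, 3]))   # (True, False, False)
```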
|
2014-10-30T06:15:51Z
|
<patch>
diff --git a/doc/source/api.rst b/doc/source/api.rst
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1166,6 +1166,8 @@ Attributes
Index.values
Index.is_monotonic
+ Index.is_monotonic_increasing
+ Index.is_monotonic_decreasing
Index.is_unique
Index.dtype
Index.inferred_type
diff --git a/doc/source/whatsnew/v0.15.1.txt b/doc/source/whatsnew/v0.15.1.txt
--- a/doc/source/whatsnew/v0.15.1.txt
+++ b/doc/source/whatsnew/v0.15.1.txt
@@ -146,6 +146,29 @@ API changes
s.dt.hour
+- support for slicing with monotonic decreasing indexes, even if ``start`` or ``stop`` is
+ not found in the index (:issue:`7860`):
+
+ .. ipython:: python
+
+ s = pd.Series(['a', 'b', 'c', 'd'], [4, 3, 2, 1])
+ s
+
+ previous behavior:
+
+ .. code-block:: python
+
+ In [8]: s.loc[3.5:1.5]
+ KeyError: 3.5
+
+ current behavior:
+
+ .. ipython:: python
+
+ s.loc[3.5:1.5]
+
+- added Index properties `is_monotonic_increasing` and `is_monotonic_decreasing` (:issue:`8680`).
+
.. _whatsnew_0151.enhancements:
Enhancements
@@ -208,8 +231,9 @@ Bug Fixes
- Bug in ix/loc block splitting on setitem (manifests with integer-like dtypes, e.g. datetime64) (:issue:`8607`)
-
-
+- Bug when doing label based indexing with integers not found in the index for
+ non-unique but monotonic indexes (:issue:`8680`).
+- Bug when indexing a Float64Index with ``np.nan`` on numpy 1.7 (:issue:`8980`).
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1461,7 +1461,7 @@ def xs(self, key, axis=0, level=None, copy=None, drop_level=True):
name=self.index[loc])
else:
- result = self[loc]
+ result = self.iloc[loc]
result.index = new_index
# this could be a view
diff --git a/pandas/core/index.py b/pandas/core/index.py
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -573,8 +573,22 @@ def _mpl_repr(self):
@property
def is_monotonic(self):
- """ return if the index has monotonic (only equaly or increasing) values """
- return self._engine.is_monotonic
+ """ alias for is_monotonic_increasing (deprecated) """
+ return self._engine.is_monotonic_increasing
+
+ @property
+ def is_monotonic_increasing(self):
+ """ return if the index is monotonic increasing (only equal or
+ increasing) values
+ """
+ return self._engine.is_monotonic_increasing
+
+ @property
+ def is_monotonic_decreasing(self):
+ """ return if the index is monotonic decreasing (only equal or
+ decreasing values
+ """
+ return self._engine.is_monotonic_decreasing
def is_lexsorted_for_tuple(self, tup):
return True
@@ -1988,16 +2002,12 @@ def _get_slice(starting_value, offset, search_side, slice_property,
slc += offset
except KeyError:
- if self.is_monotonic:
-
- # we are duplicated but non-unique
- # so if we have an indexer then we are done
- # else search for it (GH 7523)
- if not is_unique and is_integer(search_value):
- slc = search_value
- else:
- slc = self.searchsorted(search_value,
- side=search_side)
+ if self.is_monotonic_increasing:
+ slc = self.searchsorted(search_value, side=search_side)
+ elif self.is_monotonic_decreasing:
+ search_side = 'right' if search_side == 'left' else 'left'
+ slc = len(self) - self[::-1].searchsorted(search_value,
+ side=search_side)
else:
raise
return slc
@@ -2431,10 +2441,13 @@ def __contains__(self, other):
def get_loc(self, key):
try:
if np.all(np.isnan(key)):
+ nan_idxs = self._nan_idxs
try:
- return self._nan_idxs.item()
- except ValueError:
- return self._nan_idxs
+ return nan_idxs.item()
+ except (ValueError, IndexError):
+ # should only need to catch ValueError here but on numpy
+ # 1.7 .item() can raise IndexError when NaNs are present
+ return nan_idxs
except (TypeError, NotImplementedError):
pass
return super(Float64Index, self).get_loc(key)
diff --git a/pandas/index.pyx b/pandas/index.pyx
--- a/pandas/index.pyx
+++ b/pandas/index.pyx
@@ -77,7 +77,7 @@ cdef class IndexEngine:
bint over_size_threshold
cdef:
- bint unique, monotonic
+ bint unique, monotonic_inc, monotonic_dec
bint initialized, monotonic_check, unique_check
def __init__(self, vgetter, n):
@@ -89,7 +89,8 @@ cdef class IndexEngine:
self.monotonic_check = 0
self.unique = 0
- self.monotonic = 0
+ self.monotonic_inc = 0
+ self.monotonic_dec = 0
def __contains__(self, object val):
self._ensure_mapping_populated()
@@ -134,7 +135,7 @@ cdef class IndexEngine:
if is_definitely_invalid_key(val):
raise TypeError
- if self.over_size_threshold and self.is_monotonic:
+ if self.over_size_threshold and self.is_monotonic_increasing:
if not self.is_unique:
return self._get_loc_duplicates(val)
values = self._get_index_values()
@@ -158,7 +159,7 @@ cdef class IndexEngine:
cdef:
Py_ssize_t diff
- if self.is_monotonic:
+ if self.is_monotonic_increasing:
values = self._get_index_values()
left = values.searchsorted(val, side='left')
right = values.searchsorted(val, side='right')
@@ -210,25 +211,35 @@ cdef class IndexEngine:
return self.unique == 1
- property is_monotonic:
+ property is_monotonic_increasing:
def __get__(self):
if not self.monotonic_check:
self._do_monotonic_check()
- return self.monotonic == 1
+ return self.monotonic_inc == 1
+
+ property is_monotonic_decreasing:
+
+ def __get__(self):
+ if not self.monotonic_check:
+ self._do_monotonic_check()
+
+ return self.monotonic_dec == 1
cdef inline _do_monotonic_check(self):
try:
values = self._get_index_values()
- self.monotonic, unique = self._call_monotonic(values)
+ self.monotonic_inc, self.monotonic_dec, unique = \
+ self._call_monotonic(values)
if unique is not None:
self.unique = unique
self.unique_check = 1
except TypeError:
- self.monotonic = 0
+ self.monotonic_inc = 0
+ self.monotonic_dec = 0
self.monotonic_check = 1
cdef _get_index_values(self):
@@ -345,7 +356,7 @@ cdef class Int64Engine(IndexEngine):
return _hash.Int64HashTable(n)
def _call_monotonic(self, values):
- return algos.is_monotonic_int64(values)
+ return algos.is_monotonic_int64(values, timelike=False)
def get_pad_indexer(self, other, limit=None):
return algos.pad_int64(self._get_index_values(), other,
@@ -435,7 +446,7 @@ cdef class Float64Engine(IndexEngine):
return result
def _call_monotonic(self, values):
- return algos.is_monotonic_float64(values)
+ return algos.is_monotonic_float64(values, timelike=False)
def get_pad_indexer(self, other, limit=None):
return algos.pad_float64(self._get_index_values(), other,
@@ -489,7 +500,7 @@ cdef class ObjectEngine(IndexEngine):
return _hash.PyObjectHashTable(n)
def _call_monotonic(self, values):
- return algos.is_monotonic_object(values)
+ return algos.is_monotonic_object(values, timelike=False)
def get_pad_indexer(self, other, limit=None):
return algos.pad_object(self._get_index_values(), other,
@@ -506,7 +517,7 @@ cdef class DatetimeEngine(Int64Engine):
return 'M8[ns]'
def __contains__(self, object val):
- if self.over_size_threshold and self.is_monotonic:
+ if self.over_size_threshold and self.is_monotonic_increasing:
if not self.is_unique:
return self._get_loc_duplicates(val)
values = self._get_index_values()
@@ -521,7 +532,7 @@ cdef class DatetimeEngine(Int64Engine):
return self.vgetter().view('i8')
def _call_monotonic(self, values):
- return algos.is_monotonic_int64(values)
+ return algos.is_monotonic_int64(values, timelike=True)
cpdef get_loc(self, object val):
if is_definitely_invalid_key(val):
@@ -529,7 +540,7 @@ cdef class DatetimeEngine(Int64Engine):
# Welcome to the spaghetti factory
- if self.over_size_threshold and self.is_monotonic:
+ if self.over_size_threshold and self.is_monotonic_increasing:
if not self.is_unique:
val = _to_i8(val)
return self._get_loc_duplicates(val)
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -539,31 +539,51 @@ def diff_2d_%(name)s(ndarray[%(c_type)s, ndim=2] arr,
is_monotonic_template = """@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_%(name)s(ndarray[%(c_type)s] arr):
+def is_monotonic_%(name)s(ndarray[%(c_type)s] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
%(c_type)s prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
"""
map_indices_template = """@cython.wraparound(False)
diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx
--- a/pandas/src/generated.pyx
+++ b/pandas/src/generated.pyx
@@ -1799,166 +1799,286 @@ def backfill_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_float64(ndarray[float64_t] arr):
+def is_monotonic_float64(ndarray[float64_t] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
float64_t prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_float32(ndarray[float32_t] arr):
+def is_monotonic_float32(ndarray[float32_t] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
float32_t prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_object(ndarray[object] arr):
+def is_monotonic_object(ndarray[object] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
object prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_int32(ndarray[int32_t] arr):
+def is_monotonic_int32(ndarray[int32_t] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
int32_t prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_int64(ndarray[int64_t] arr):
+def is_monotonic_int64(ndarray[int64_t] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
int64_t prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_bool(ndarray[uint8_t] arr):
+def is_monotonic_bool(ndarray[uint8_t] arr, bint timelike):
'''
Returns
-------
- is_monotonic, is_unique
+ is_monotonic_inc, is_monotonic_dec, is_unique
'''
cdef:
Py_ssize_t i, n
uint8_t prev, cur
bint is_unique = 1
+ bint is_monotonic_inc = 1
+ bint is_monotonic_dec = 1
n = len(arr)
- if n < 2:
- return True, True
+ if n == 1:
+ if arr[0] != arr[0] or (timelike and arr[0] == iNaT):
+ # single value is NaN
+ return False, False, True
+ else:
+ return True, True, True
+ elif n < 2:
+ return True, True, True
+
+ if timelike and arr[0] == iNaT:
+ return False, False, None
prev = arr[0]
for i in range(1, n):
cur = arr[i]
+ if timelike and cur == iNaT:
+ return False, False, None
if cur < prev:
- return False, None
+ is_monotonic_inc = 0
+ elif cur > prev:
+ is_monotonic_dec = 0
elif cur == prev:
is_unique = 0
+ else:
+ # cur or prev is NaN
+ return False, False, None
+ if not is_monotonic_inc and not is_monotonic_dec:
+ return False, False, None
prev = cur
- return True, is_unique
+ return is_monotonic_inc, is_monotonic_dec, is_unique
@cython.wraparound(False)
@cython.boundscheck(False)
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-29346
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Styler.applymap(subset=...) breaks promise that "any valid indexer to .loc will work." for multi-index
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
import numpy as np
labels = np.array([[0, 0,1,1],
[0, 1, 0, 1]])
columns = pd.MultiIndex(
levels=[["a", "b"], ['%', '#']], labels=labels, names=['', ''])
df = pd.DataFrame(
[[1,-1,1,1],[-1,1,1,1]],
index=["hello", "world"],
columns=columns)
pct_subset = pd.IndexSlice[:, pd.IndexSlice[:, '%':'%']]
def color_negative_red(val):
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
# Works on both 0.22 and 0.24.
df.loc[pct_subset]
# This works on 0.22, but `TypeError: unhashable type` on 0.24!
df.style.applymap(color_negative_red, subset=pct_subset)
```
#### Problem description
To quote from the docs on the `subset` keyword argument for `Styler.applymap` (https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html) :
> For row and column slicing, any valid indexer to .loc will work.
The code sample demonstrates an indexer for a dataframe with multi-index columns that works with .loc, but doesn't work as the `subset` argument to `applymap`. Note that this indexer worked in pandas 0.22, but a regression was introduced in 0.24.2.
#### Expected Output
Expected the indexer to apply the styling to the "%" columns, and not throw an error.
#### Full Backtrace
<details>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-994feb9d52cf> in <module>()
1 df.style.format(
----> 2 '{:.1f}%', subset=pct_subset)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/io/formats/style.pyc in format(self, formatter, subset)
399 subset = subset, self.data.columns
400
--> 401 sub_df = self.data.loc[subset]
402 row_locs = self.data.index.get_indexer_for(sub_df.index)
403 col_locs = self.data.columns.get_indexer_for(sub_df.columns)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
1492 except (KeyError, IndexError, AttributeError):
1493 pass
-> 1494 return self._getitem_tuple(key)
1495 else:
1496 # we by definition only have the 0th axis
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
866 def _getitem_tuple(self, tup):
867 try:
--> 868 return self._getitem_lowerdim(tup)
869 except IndexingError:
870 pass
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_lowerdim(self, tup)
967 # we may have a nested tuples indexer here
968 if self._is_nested_tuple_indexer(tup):
--> 969 return self._getitem_nested_tuple(tup)
970
971 # we maybe be using a tuple to represent multiple dimensions here
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_nested_tuple(self, tup)
1046
1047 current_ndim = obj.ndim
-> 1048 obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)
1049 axis += 1
1050
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis)
1900 raise ValueError('Cannot index with multidimensional key')
1901
-> 1902 return self._getitem_iterable(key, axis=axis)
1903
1904 # nested tuple slicing
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_iterable(self, key, axis)
1203 # A collection of keys
1204 keyarr, indexer = self._get_listlike_indexer(key, axis,
-> 1205 raise_missing=False)
1206 return self.obj._reindex_with_indexers({axis: [keyarr, indexer]},
1207 copy=True, allow_dups=True)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_listlike_indexer(self, key, axis, raise_missing)
1152 if len(ax) or not len(key):
1153 key = self._convert_for_reindex(key, axis)
-> 1154 indexer = ax.get_indexer_for(key)
1155 keyarr = ax.reindex(keyarr)[0]
1156 else:
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/base.pyc in get_indexer_for(self, target, **kwargs)
4453 """
4454 if self.is_unique:
-> 4455 return self.get_indexer(target, **kwargs)
4456 indexer, _ = self.get_indexer_non_unique(target, **kwargs)
4457 return indexer
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/multi.pyc in get_indexer(self, target, method, limit, tolerance)
2157 method=method,
2158 limit=limit,
-> 2159 tolerance=tolerance)
2160
2161 if not self.is_unique:
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/base.pyc in get_indexer(self, target, method, limit, tolerance)
2753 'backfill or nearest reindexing')
2754
-> 2755 indexer = self._engine.get_indexer(target._ndarray_values)
2756
2757 return ensure_platform_int(indexer)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_indexer()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.lookup()
TypeError: unhashable type
```
</details>
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.14.final.0
python-bits: 64
OS: Linux
OS-release: 4.19.20-1rodete1-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.24.2
pytest: None
pip: 19.0.3
setuptools: 36.7.1
Cython: None
numpy: 1.16.2
scipy: None
pyarrow: None
xarray: None
IPython: 5.8.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.2.5
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191
192 ```sh
193 python -m pip install --no-build-isolation -e .
194 ```
195
196 If you have `make`, you can also use `make develop` to run the same command.
197
198 or alternatively
199
200 ```sh
201 python setup.py develop
202 ```
203
204 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
205
206 ## License
207 [BSD 3](LICENSE)
208
209 ## Documentation
210 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Getting Help
217
218 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
219 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
220
221 ## Discussion and Development
222 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
223
224 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
225
226 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
227
228 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
229
230 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
231
232 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
233
234 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
235
236 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
237
238 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
239
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
1effb56d0f91623641122fd8555bd9037a530443
|
Styler.applymap(subset=...) breaks promise that "any valid indexer to .loc will work." for multi-index
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
import numpy as np
labels = np.array([[0, 0,1,1],
[0, 1, 0, 1]])
columns = pd.MultiIndex(
levels=[["a", "b"], ['%', '#']], labels=labels, names=['', ''])
df = pd.DataFrame(
[[1,-1,1,1],[-1,1,1,1]],
index=["hello", "world"],
columns=columns)
pct_subset = pd.IndexSlice[:, pd.IndexSlice[:, '%':'%']]
def color_negative_red(val):
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
# Works on both 0.22 and 0.24.
df.loc[pct_subset]
# This works on 0.22, but `TypeError: unhashable type` on 0.24!
df.style.applymap(color_negative_red, subset=pct_subset)
```
#### Problem description
To quote from the docs on the `subset` keyword argument for `Styler.applymap` (https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html) :
> For row and column slicing, any valid indexer to .loc will work.
The code sample demonstrates an indexer for a dataframe with multi-index columns that works with .loc, but doesn't work as the `subset` argument to `applymap`. Note that this indexer worked in pandas 0.22, but a regression was introduced in 0.24.2.
#### Expected Output
Expected the indexer to apply the styling to the "%" columns, and not throw an error.
#### Full Backtrace
<details>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-994feb9d52cf> in <module>()
1 df.style.format(
----> 2 '{:.1f}%', subset=pct_subset)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/io/formats/style.pyc in format(self, formatter, subset)
399 subset = subset, self.data.columns
400
--> 401 sub_df = self.data.loc[subset]
402 row_locs = self.data.index.get_indexer_for(sub_df.index)
403 col_locs = self.data.columns.get_indexer_for(sub_df.columns)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
1492 except (KeyError, IndexError, AttributeError):
1493 pass
-> 1494 return self._getitem_tuple(key)
1495 else:
1496 # we by definition only have the 0th axis
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
866 def _getitem_tuple(self, tup):
867 try:
--> 868 return self._getitem_lowerdim(tup)
869 except IndexingError:
870 pass
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_lowerdim(self, tup)
967 # we may have a nested tuples indexer here
968 if self._is_nested_tuple_indexer(tup):
--> 969 return self._getitem_nested_tuple(tup)
970
971 # we maybe be using a tuple to represent multiple dimensions here
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_nested_tuple(self, tup)
1046
1047 current_ndim = obj.ndim
-> 1048 obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)
1049 axis += 1
1050
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis)
1900 raise ValueError('Cannot index with multidimensional key')
1901
-> 1902 return self._getitem_iterable(key, axis=axis)
1903
1904 # nested tuple slicing
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_iterable(self, key, axis)
1203 # A collection of keys
1204 keyarr, indexer = self._get_listlike_indexer(key, axis,
-> 1205 raise_missing=False)
1206 return self.obj._reindex_with_indexers({axis: [keyarr, indexer]},
1207 copy=True, allow_dups=True)
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_listlike_indexer(self, key, axis, raise_missing)
1152 if len(ax) or not len(key):
1153 key = self._convert_for_reindex(key, axis)
-> 1154 indexer = ax.get_indexer_for(key)
1155 keyarr = ax.reindex(keyarr)[0]
1156 else:
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/base.pyc in get_indexer_for(self, target, **kwargs)
4453 """
4454 if self.is_unique:
-> 4455 return self.get_indexer(target, **kwargs)
4456 indexer, _ = self.get_indexer_non_unique(target, **kwargs)
4457 return indexer
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/multi.pyc in get_indexer(self, target, method, limit, tolerance)
2157 method=method,
2158 limit=limit,
-> 2159 tolerance=tolerance)
2160
2161 if not self.is_unique:
/home/jeremysalwen/.local/lib/python2.7/site-packages/pandas/core/indexes/base.pyc in get_indexer(self, target, method, limit, tolerance)
2753 'backfill or nearest reindexing')
2754
-> 2755 indexer = self._engine.get_indexer(target._ndarray_values)
2756
2757 return ensure_platform_int(indexer)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_indexer()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.lookup()
TypeError: unhashable type
```
</details>
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.14.final.0
python-bits: 64
OS: Linux
OS-release: 4.19.20-1rodete1-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.24.2
pytest: None
pip: 19.0.3
setuptools: 36.7.1
Cython: None
numpy: 1.16.2
scipy: None
pyarrow: None
xarray: None
IPython: 5.8.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.2.5
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
|
This looks to work on master now. Could use a test.
```
In [49]: import pandas as pd
...: import numpy as np
...: labels = np.array([[0, 0,1,1],
...: [0, 1, 0, 1]])
...: columns = pd.MultiIndex(
...: levels=[["a", "b"], ['%', '#']], labels=labels, names=['', ''])
...: df = pd.DataFrame(
...: [[1,-1,1,1],[-1,1,1,1]],
...: index=["hello", "world"],
...: columns=columns)
...: pct_subset = pd.IndexSlice[:, pd.IndexSlice[:, '%':'%']]
...:
...: def color_negative_red(val):
...: color = 'red' if val < 0 else 'black'
...: return 'color: %s' % color
...:
...: # Works on both 0.22 and 0.24.
...: df.loc[pct_subset]
/anaconda3/envs/pandas-dev/bin/ipython:6: FutureWarning: the 'labels' keyword is deprecated, use 'codes' instead
Out[49]:
a b
% %
hello 1 1
world -1 1
In [50]: df.style.applymap(color_negative_red, subset=pct_subset)
...:
Out[50]: <pandas.io.formats.style.Styler at 0x1a24fb1b90>
In [51]: pd.__version__
Out[51]: '0.26.0.dev0+734.g0de99558b'
```
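Since the comment asks for a test, a minimal regression test along these lines could be added; this is only a sketch — the test name, file placement, and the use of `render()` to force the queued styles to be computed are assumptions, not taken from the thread:
```python
import pandas as pd


def test_applymap_subset_multiindex():
    # Sketch of a regression test: a nested IndexSlice over MultiIndex
    # columns should be accepted by `subset`, just as df.loc accepts it.
    columns = pd.MultiIndex.from_product([["a", "b"], ["%", "#"]])
    df = pd.DataFrame(
        [[1, -1, 1, 1], [-1, 1, 1, 1]],
        index=["hello", "world"],
        columns=columns,
    )
    pct_subset = pd.IndexSlice[:, pd.IndexSlice[:, "%":"%"]]

    def color_negative_red(val):
        return "color: red" if val < 0 else "color: black"

    # Rendering (requires jinja2) forces the pending styles to be applied;
    # on 0.24 this raised "TypeError: unhashable type".
    df.style.applymap(color_negative_red, subset=pct_subset).render()
```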
|
2019-11-02T12:44:13Z
|
<patch>
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-1966
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gtkdoc dependencies produce invalid search paths
If I add a `print` at the beginning of [`gtkdoc_run_check`](https://github.com/mesonbuild/meson/blob/master/mesonbuild/scripts/gtkdochelper.py#L48):
```
['gtkdoc-scangobj',
'--types=finch.types',
'--module=finch',
'--cflags=-I@BUILD_ROOT@/. -I@SOURCE_ROOT@/. -I@BUILD_ROOT@/finch/. -I@SOURCE_ROOT@/finch/. -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/gplugin-1.0/ -I@BUILD_ROOT@/libpurple/. -I@SOURCE_ROOT@/libpurple/. -I/usr/include/dbus-1.0 -I/usr/lib64/dbus-1.0/include -I/usr/include/libxml2 -I/usr/include/farstream-0.2 -I/usr/include/json-glib-1.0 -I/usr/include/p11-kit-1 -I/usr/include/nss3 -I/usr/include/nspr4 -I@BUILD_ROOT@/finch/libgnt/. -I@SOURCE_ROOT@/finch/libgnt/. -I/usr/include/ncursesw', '--ldflags=-lfinch -Lfinch -L/home/elliott/code/pidgin-hg/build/finch -L/home/elliott/code/pidgin-hg/build/libpurple -L/home/elliott/code/pidgin-hg/build/finch/libgnt -Wl,-rpath,finch -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 -lgplugin -lgmodule-2.0 -pthread -lgio-2.0 -lncursesw -lpanelw -lpurple -Llibpurple -Wl,-rpath,libpurple -ldbus-1 -ldbus-glib-1 -lxml2 -lfarstream-0.2 -lgstbase-1.0 -lgstvideo-1.0 -lgstapp-1.0 -lidn -ljson-glib-1.0 -lgnutls -lssl3 -lsmime3 -lnss3 -lnssutil3 -lplds4 -lplc4 -lnspr4 -lpthread -ldl -lz -lm -lgnt -Lfinch/libgnt -Wl,-rpath,finch/libgnt']
```
where `gtkdoc` is called approximately like:
```meson
libfinch_inc = include_directories('somewhere')
libfinch_dep = declare_dependency(
link_with : library('finch', ...),
include_directories : [libfinch_inc, other stuff],
dependencies : [gstreamer, glib, etc.])
DOC_MODULE = 'finch'
gnome.gtkdoc(DOC_MODULE,
main_xml : DOC_MODULE + '-docs.xml',
src_dir : libfinch_inc,
dependencies : libfinch_dep,
gobject_typesfile : 'finch.types',
scan_args : scan_args)
```
You can see that `@BUILD_ROOT@` and `@SOURCE_ROOT@` are not replaced, and the linker paths are relative to the source instead of build+source. I think the problem is that it creates a `RunTarget`, which does no substitutions and runs out of an unspecified directory.
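The missing step amounts to expanding those placeholder tokens in every cflag/ldflag before `gtkdoc-scangobj` is invoked. A minimal sketch, assuming the placeholders are plain string tokens (the helper name and its placement are hypothetical, not Meson's actual code):
```python
def expand_placeholders(args, source_root, build_root):
    # Replace the @SOURCE_ROOT@/@BUILD_ROOT@ tokens in each argument string,
    # e.g. '--cflags=-I@BUILD_ROOT@/. ...' -> '--cflags=-I/path/to/build/. ...'
    return [
        a.replace('@SOURCE_ROOT@', source_root).replace('@BUILD_ROOT@', build_root)
        for a in args
    ]
```
Relative `-L`/`-rpath` entries would additionally need to be made absolute against the build directory, since the run target's working directory is unspecified.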
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://ci.appveyor.com/project/jpakkane/meson)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13
14 #### Dependencies
15
16 - [Python](http://python.org) (version 3.4 or newer)
17 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
18
19 #### Installing from source
20
21 You can run Meson directly from a revision control checkout or an
22 extracted tarball. If you wish you can install it locally with the
23 standard Python distutils command `python3 setup.py install <your
24 options here>`.
25
26 Meson is also available from
27 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
28 with `pip3 install meson` (this does not require a source checkout,
29 pip will download the package automatically). The exact command to
30 type to install with pip can very between systems, be sure to use the
31 Python 3 version of pip.
32
33 #### Running
34
35 Meson requires that you have a source directory and a build directory
36 and that these two are different. In your source root must exist a file
37 called 'meson.build'. To generate the build system run this command:
38
39 `meson <source directory> <build directory>`
40
41 Depending on how you obtained Meson the command might also be called
42 `meson.py` instead of plain `meson`. In the rest of this document we
43 are going to use the latter form.
44
45 You can omit either of the two directories, and Meson will substitute
46 the current directory and autodetect what you mean. This allows you to
47 do things like this:
48
49 `cd source_root; mkdir builddir; cd builddir; meson ..`
50
51 or
52
53 `cd source_root; mkdir builddir; meson builddir`
54
55 To compile, cd into your build directory and type `ninja`. To run unit
56 tests, type `ninja test`.
57
58 Install is the same but it can take an extra argument:
59
60 `DESTDIR=/destdir/path ninja install`
61
62 `DESTDIR` can be omitted. If you are installing to system directories,
63 you may need to run this command with sudo.
64
65
66 #### Contributing
67
68 We love code contributions. See the contributing.txt file for
69 details.
70
71
72 #### IRC
73
74 The irc channel for Meson is `#mesonbuild` over at Freenode.
75
76
77 #### Further info
78
79 More information about the Meson build system can be found at the
80 [project's home page](http://mesonbuild.com).
81
82 Meson is a registered trademark of Jussi Pakkanen
83
[end of README.md]
[start of mesonbuild/backend/backends.py]
1 # Copyright 2012-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, pickle, re
16 from .. import build
17 from .. import dependencies
18 from .. import mesonlib
19 from .. import mlog
20 from .. import compilers
21 import json
22 import subprocess
23 from ..mesonlib import MesonException, get_meson_script
24 from ..mesonlib import get_compiler_for_source, classify_unity_sources
25 from ..compilers import CompilerArgs
26 from collections import OrderedDict
27
28 class CleanTrees:
29 '''
30 Directories outputted by custom targets that have to be manually cleaned
31 because on Linux `ninja clean` only deletes empty directories.
32 '''
33 def __init__(self, build_dir, trees):
34 self.build_dir = build_dir
35 self.trees = trees
36
37 class InstallData:
38 def __init__(self, source_dir, build_dir, prefix, strip_bin, mesonintrospect):
39 self.source_dir = source_dir
40 self.build_dir = build_dir
41 self.prefix = prefix
42 self.strip_bin = strip_bin
43 self.targets = []
44 self.headers = []
45 self.man = []
46 self.data = []
47 self.po_package_name = ''
48 self.po = []
49 self.install_scripts = []
50 self.install_subdirs = []
51 self.mesonintrospect = mesonintrospect
52
53 class ExecutableSerialisation:
54 def __init__(self, name, fname, cmd_args, env, is_cross, exe_wrapper,
55 workdir, extra_paths, capture):
56 self.name = name
57 self.fname = fname
58 self.cmd_args = cmd_args
59 self.env = env
60 self.is_cross = is_cross
61 self.exe_runner = exe_wrapper
62 self.workdir = workdir
63 self.extra_paths = extra_paths
64 self.capture = capture
65
66 class TestSerialisation:
67 def __init__(self, name, suite, fname, is_cross, exe_wrapper, is_parallel, cmd_args, env,
68 should_fail, timeout, workdir, extra_paths):
69 self.name = name
70 self.suite = suite
71 self.fname = fname
72 self.is_cross = is_cross
73 self.exe_runner = exe_wrapper
74 self.is_parallel = is_parallel
75 self.cmd_args = cmd_args
76 self.env = env
77 self.should_fail = should_fail
78 self.timeout = timeout
79 self.workdir = workdir
80 self.extra_paths = extra_paths
81
82 class OptionProxy:
83 def __init__(self, name, value):
84 self.name = name
85 self.value = value
86
87 class OptionOverrideProxy:
88 '''Mimic an option list but transparently override
89 selected option values.'''
90 def __init__(self, overrides, options):
91 self.overrides = overrides
92 self.options = options
93
94 def __getitem__(self, option_name):
95 base_opt = self.options[option_name]
96 if option_name in self.overrides:
97 return OptionProxy(base_opt.name, base_opt.validate_value(self.overrides[option_name]))
98 return base_opt
99
100 # This class contains the basic functionality that is needed by all backends.
101 # Feel free to move stuff in and out of it as you see fit.
102 class Backend:
103 def __init__(self, build):
104 self.build = build
105 self.environment = build.environment
106 self.processed_targets = {}
107 self.build_to_src = os.path.relpath(self.environment.get_source_dir(),
108 self.environment.get_build_dir())
109 for t in self.build.targets:
110 priv_dirname = self.get_target_private_dir_abs(t)
111 os.makedirs(priv_dirname, exist_ok=True)
112
113 def get_target_filename(self, t):
114 if isinstance(t, build.CustomTarget):
115 if len(t.get_outputs()) != 1:
116 mlog.warning('custom_target {!r} has more than one output! '
117 'Using the first one.'.format(t.name))
118 filename = t.get_outputs()[0]
119 else:
120 assert(isinstance(t, build.BuildTarget))
121 filename = t.get_filename()
122 return os.path.join(self.get_target_dir(t), filename)
123
124 def get_target_filename_abs(self, target):
125 return os.path.join(self.environment.get_build_dir(), self.get_target_filename(target))
126
127 def get_option_for_target(self, option_name, target):
128 if option_name in target.option_overrides:
129 override = target.option_overrides[option_name]
130 return self.environment.coredata.validate_option_value(option_name, override)
131 return self.environment.coredata.get_builtin_option(option_name)
132
133 def get_target_filename_for_linking(self, target):
134 # On some platforms (msvc for instance), the file that is used for
135 # dynamic linking is not the same as the dynamic library itself. This
136 # file is called an import library, and we want to link against that.
137 # On all other platforms, we link to the library directly.
138 if isinstance(target, build.SharedLibrary):
139 link_lib = target.get_import_filename() or target.get_filename()
140 return os.path.join(self.get_target_dir(target), link_lib)
141 elif isinstance(target, build.StaticLibrary):
142 return os.path.join(self.get_target_dir(target), target.get_filename())
143 raise AssertionError('BUG: Tried to link to something that\'s not a library')
144
145 def get_target_dir(self, target):
146 if self.environment.coredata.get_builtin_option('layout') == 'mirror':
147 dirname = target.get_subdir()
148 else:
149 dirname = 'meson-out'
150 return dirname
151
152 def get_target_source_dir(self, target):
153 dirname = os.path.join(self.build_to_src, self.get_target_dir(target))
154 return dirname
155
156 def get_target_private_dir(self, target):
157 dirname = os.path.join(self.get_target_dir(target), target.get_basename() + target.type_suffix())
158 return dirname
159
160 def get_target_private_dir_abs(self, target):
161 dirname = os.path.join(self.environment.get_build_dir(), self.get_target_private_dir(target))
162 return dirname
163
164 def get_target_generated_dir(self, target, gensrc, src):
165 """
166 Takes a BuildTarget, a generator source (CustomTarget or GeneratedList),
167 and a generated source filename.
168 Returns the full path of the generated source relative to the build root
169 """
170 # CustomTarget generators output to the build dir of the CustomTarget
171 if isinstance(gensrc, build.CustomTarget):
172 return os.path.join(self.get_target_dir(gensrc), src)
173 # GeneratedList generators output to the private build directory of the
174 # target that the GeneratedList is used in
175 return os.path.join(self.get_target_private_dir(target), src)
176
177 def get_unity_source_filename(self, target, suffix):
178 return target.name + '-unity.' + suffix
179
180 def generate_unity_files(self, target, unity_src):
181 abs_files = []
182 result = []
183 compsrcs = classify_unity_sources(target.compilers.values(), unity_src)
184
185 def init_language_file(suffix):
186 unity_src_name = self.get_unity_source_filename(target, suffix)
187 unity_src_subdir = self.get_target_private_dir_abs(target)
188 outfilename = os.path.join(unity_src_subdir,
189 unity_src_name)
190 outfileabs = os.path.join(self.environment.get_build_dir(),
191 outfilename)
192 outfileabs_tmp = outfileabs + '.tmp'
193 abs_files.append(outfileabs)
194 outfileabs_tmp_dir = os.path.dirname(outfileabs_tmp)
195 if not os.path.exists(outfileabs_tmp_dir):
196 os.makedirs(outfileabs_tmp_dir)
197 result.append(mesonlib.File(True, unity_src_subdir, unity_src_name))
198 return open(outfileabs_tmp, 'w')
199
200 # For each language, generate a unity source file and return the list
201 for comp, srcs in compsrcs.items():
202 with init_language_file(comp.get_default_suffix()) as ofile:
203 for src in srcs:
204 ofile.write('#include<%s>\n' % src)
205 [mesonlib.replace_if_different(x, x + '.tmp') for x in abs_files]
206 return result
207
208 def relpath(self, todir, fromdir):
209 return os.path.relpath(os.path.join('dummyprefixdir', todir),
210 os.path.join('dummyprefixdir', fromdir))
211
212 def flatten_object_list(self, target, proj_dir_to_build_root=''):
213 obj_list = []
214 for obj in target.get_objects():
215 if isinstance(obj, str):
216 o = os.path.join(proj_dir_to_build_root,
217 self.build_to_src, target.get_subdir(), obj)
218 obj_list.append(o)
219 elif isinstance(obj, mesonlib.File):
220 obj_list.append(obj.rel_to_builddir(self.build_to_src))
221 elif isinstance(obj, build.ExtractedObjects):
222 obj_list += self.determine_ext_objs(target, obj, proj_dir_to_build_root)
223 else:
224 raise MesonException('Unknown data type in object list.')
225 return obj_list
226
227 def serialize_executable(self, exe, cmd_args, workdir, env={},
228 capture=None):
229 import hashlib
230 # Can't just use exe.name here; it will likely be run more than once
231 if isinstance(exe, (dependencies.ExternalProgram,
232 build.BuildTarget, build.CustomTarget)):
233 basename = exe.name
234 else:
235 basename = os.path.basename(exe)
236 # Take a digest of the cmd args, env, workdir, and capture. This avoids
237 # collisions and also makes the name deterministic over regenerations
238 # which avoids a rebuild by Ninja because the cmdline stays the same.
239 data = bytes(str(sorted(env.items())) + str(cmd_args) + str(workdir) + str(capture),
240 encoding='utf-8')
241 digest = hashlib.sha1(data).hexdigest()
242 scratch_file = 'meson_exe_{0}_{1}.dat'.format(basename, digest)
243 exe_data = os.path.join(self.environment.get_scratch_dir(), scratch_file)
244 with open(exe_data, 'wb') as f:
245 if isinstance(exe, dependencies.ExternalProgram):
246 exe_cmd = exe.get_command()
247 exe_needs_wrapper = False
248 elif isinstance(exe, (build.BuildTarget, build.CustomTarget)):
249 exe_cmd = [self.get_target_filename_abs(exe)]
250 exe_needs_wrapper = exe.is_cross
251 else:
252 exe_cmd = [exe]
253 exe_needs_wrapper = False
254 is_cross = exe_needs_wrapper and \
255 self.environment.is_cross_build() and \
256 self.environment.cross_info.need_cross_compiler() and \
257 self.environment.cross_info.need_exe_wrapper()
258 if is_cross:
259 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
260 else:
261 exe_wrapper = None
262 if mesonlib.is_windows() or mesonlib.is_cygwin():
263 extra_paths = self.determine_windows_extra_paths(exe)
264 else:
265 extra_paths = []
266 es = ExecutableSerialisation(basename, exe_cmd, cmd_args, env,
267 is_cross, exe_wrapper, workdir,
268 extra_paths, capture)
269 pickle.dump(es, f)
270 return exe_data
271
272 def serialize_tests(self):
273 test_data = os.path.join(self.environment.get_scratch_dir(), 'meson_test_setup.dat')
274 with open(test_data, 'wb') as datafile:
275 self.write_test_file(datafile)
276 benchmark_data = os.path.join(self.environment.get_scratch_dir(), 'meson_benchmark_setup.dat')
277 with open(benchmark_data, 'wb') as datafile:
278 self.write_benchmark_file(datafile)
279 return test_data, benchmark_data
280
281 def determine_linker(self, target):
282 '''
283 If we're building a static library, there is only one static linker.
284 Otherwise, we query the target for the dynamic linker.
285 '''
286 if isinstance(target, build.StaticLibrary):
287 if target.is_cross:
288 return self.build.static_cross_linker
289 else:
290 return self.build.static_linker
291 l = target.get_clike_dynamic_linker()
292 if not l:
293 m = "Couldn't determine linker for target {!r}"
294 raise MesonException(m.format(target.name))
295 return l
296
297 def determine_rpath_dirs(self, target):
298 link_deps = target.get_all_link_deps()
299 result = []
300 for ld in link_deps:
301 prospective = self.get_target_dir(ld)
302 if prospective not in result:
303 result.append(prospective)
304 return result
305
306 def object_filename_from_source(self, target, source, is_unity):
307 if isinstance(source, mesonlib.File):
308 source = source.fname
309 # foo.vala files compile down to foo.c and then foo.c.o, not foo.vala.o
310 if source.endswith('.vala'):
311 if is_unity:
312 return source[:-5] + '.c.' + self.environment.get_object_suffix()
313 source = os.path.join(self.get_target_private_dir(target), source[:-5] + '.c')
314 return source.replace('/', '_').replace('\\', '_') + '.' + self.environment.get_object_suffix()
315
316 def determine_ext_objs(self, target, extobj, proj_dir_to_build_root):
317 result = []
318 targetdir = self.get_target_private_dir(extobj.target)
319 # With unity builds, there's just one object that contains all the
320 # sources, and we only support extracting all the objects in this mode,
321 # so just return that.
322 if self.is_unity(target):
323 comp = get_compiler_for_source(extobj.target.compilers.values(),
324 extobj.srclist[0])
325 # There is a potential conflict here, but it is unlikely that
326 # anyone both enables unity builds and has a file called foo-unity.cpp.
327 osrc = self.get_unity_source_filename(extobj.target,
328 comp.get_default_suffix())
329 osrc = os.path.join(self.get_target_private_dir(extobj.target), osrc)
330 objname = self.object_filename_from_source(extobj.target, osrc, True)
331 objname = objname.replace('/', '_').replace('\\', '_')
332 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
333 return [objpath]
334 for osrc in extobj.srclist:
335 objname = self.object_filename_from_source(extobj.target, osrc, False)
336 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
337 result.append(objpath)
338 return result
339
340 def get_pch_include_args(self, compiler, target):
341 args = []
342 pchpath = self.get_target_private_dir(target)
343 includeargs = compiler.get_include_args(pchpath, False)
344 for lang in ['c', 'cpp']:
345 p = target.get_pch(lang)
346 if not p:
347 continue
348 if compiler.can_compile(p[-1]):
349 header = p[0]
350 args += compiler.get_pch_use_args(pchpath, header)
351 if len(args) > 0:
352 args = includeargs + args
353 return args
354
355 @staticmethod
356 def escape_extra_args(compiler, args):
357 # No extra escaping/quoting needed when not running on Windows
358 if not mesonlib.is_windows():
359 return args
360 extra_args = []
361 # Compiler-specific escaping is needed for -D args but not for any others
362 if compiler.get_id() == 'msvc':
363 # MSVC needs escaping when a -D argument ends in \ or \"
364 for arg in args:
365 if arg.startswith('-D') or arg.startswith('/D'):
366 # Without extra escaping for these two, the next character
367 # gets eaten
368 if arg.endswith('\\'):
369 arg += '\\'
370 elif arg.endswith('\\"'):
371 arg = arg[:-2] + '\\\\"'
372 extra_args.append(arg)
373 else:
374 # MinGW GCC needs all backslashes in defines to be doubly-escaped
375 # FIXME: Not sure about Cygwin or Clang
376 for arg in args:
377 if arg.startswith('-D') or arg.startswith('/D'):
378 arg = arg.replace('\\', '\\\\')
379 extra_args.append(arg)
380 return extra_args
381
382 def generate_basic_compiler_args(self, target, compiler, no_warn_args=False):
383 # Create an empty commands list, and start adding arguments from
384 # various sources in the order in which they must override each other
385 # starting from hard-coded defaults followed by build options and so on.
386 commands = CompilerArgs(compiler)
387
388 copt_proxy = OptionOverrideProxy(target.option_overrides, self.environment.coredata.compiler_options)
389 # First, the trivial ones that are impossible to override.
390 #
391 # Add -nostdinc/-nostdinc++ if needed; can't be overriden
392 commands += self.get_cross_stdlib_args(target, compiler)
393 # Add things like /NOLOGO or -pipe; usually can't be overriden
394 commands += compiler.get_always_args()
395 # Only add warning-flags by default if the buildtype enables it, and if
396 # we weren't explicitly asked to not emit warnings (for Vala, f.ex)
397 if no_warn_args:
398 commands += compiler.get_no_warn_args()
399 elif self.get_option_for_target('buildtype', target) != 'plain':
400 commands += compiler.get_warn_args(self.get_option_for_target('warning_level', target))
401 # Add -Werror if werror=true is set in the build options set on the
402 # command-line or default_options inside project(). This only sets the
403 # action to be done for warnings if/when they are emitted, so it's ok
404 # to set it after get_no_warn_args() or get_warn_args().
405 if self.get_option_for_target('werror', target):
406 commands += compiler.get_werror_args()
407 # Add compile args for c_* or cpp_* build options set on the
408 # command-line or default_options inside project().
409 commands += compiler.get_option_compile_args(copt_proxy)
410 # Add buildtype args: optimization level, debugging, etc.
411 commands += compiler.get_buildtype_args(self.get_option_for_target('buildtype', target))
412 # Add compile args added using add_project_arguments()
413 commands += self.build.get_project_args(compiler, target.subproject)
414 # Add compile args added using add_global_arguments()
415 # These override per-project arguments
416 commands += self.build.get_global_args(compiler)
417 if not target.is_cross:
418 # Compile args added from the env: CFLAGS/CXXFLAGS, etc. We want these
419 # to override all the defaults, but not the per-target compile args.
420 commands += self.environment.coredata.external_args[compiler.get_language()]
421 # Always set -fPIC for shared libraries
422 if isinstance(target, build.SharedLibrary):
423 commands += compiler.get_pic_args()
424 # Set -fPIC for static libraries by default unless explicitly disabled
425 if isinstance(target, build.StaticLibrary) and target.pic:
426 commands += compiler.get_pic_args()
427 # Add compile args needed to find external dependencies. Link args are
428 # added while generating the link command.
429 # NOTE: We must preserve the order in which external deps are
430 # specified, so we reverse the list before iterating over it.
431 for dep in reversed(target.get_external_deps()):
432 if compiler.language == 'vala':
433 if isinstance(dep, dependencies.PkgConfigDependency):
434 if dep.name == 'glib-2.0' and dep.version_reqs is not None:
435 for req in dep.version_reqs:
436 if req.startswith(('>=', '==')):
437 commands += ['--target-glib', req[2:]]
438 break
439 commands += ['--pkg', dep.name]
440 elif isinstance(dep, dependencies.ExternalLibrary):
441 commands += dep.get_link_args('vala')
442 else:
443 commands += dep.get_compile_args()
444 # Qt needs -fPIC for executables
445 # XXX: We should move to -fPIC for all executables
446 if isinstance(target, build.Executable):
447 commands += dep.get_exe_args(compiler)
448 # For 'automagic' deps: Boost and GTest. Also dependency('threads').
449 # pkg-config puts the thread flags itself via `Cflags:`
450 if dep.need_threads():
451 commands += compiler.thread_flags()
452 # Fortran requires extra include directives.
453 if compiler.language == 'fortran':
454 for lt in target.link_targets:
455 priv_dir = os.path.join(self.get_target_dir(lt), lt.get_basename() + lt.type_suffix())
456 incflag = compiler.get_include_args(priv_dir, False)
457 commands += incflag
458 return commands
459
460 def build_target_link_arguments(self, compiler, deps):
461 args = []
462 for d in deps:
463 if not isinstance(d, (build.StaticLibrary, build.SharedLibrary)):
464 raise RuntimeError('Tried to link with a non-library target "%s".' % d.get_basename())
465 if isinstance(compiler, (compilers.LLVMDCompiler, compilers.DmdDCompiler)):
466 d_arg = '-L' + self.get_target_filename_for_linking(d)
467 else:
468 d_arg = self.get_target_filename_for_linking(d)
469 args.append(d_arg)
470 return args
471
472 def determine_windows_extra_paths(self, target):
473 '''On Windows there is no such thing as an rpath.
474 We must determine all locations of DLLs that this exe
475 links to and return them so they can be used in unit
476 tests.'''
477 if not isinstance(target, build.Executable):
478 return []
479 prospectives = target.get_transitive_link_deps()
480 result = []
481 for ld in prospectives:
482 if ld == '' or ld == '.':
483 continue
484 dirseg = os.path.join(self.environment.get_build_dir(), self.get_target_dir(ld))
485 if dirseg not in result:
486 result.append(dirseg)
487 return result
488
489 def write_benchmark_file(self, datafile):
490 self.write_test_serialisation(self.build.get_benchmarks(), datafile)
491
492 def write_test_file(self, datafile):
493 self.write_test_serialisation(self.build.get_tests(), datafile)
494
495 def write_test_serialisation(self, tests, datafile):
496 arr = []
497 for t in tests:
498 exe = t.get_exe()
499 if isinstance(exe, dependencies.ExternalProgram):
500 cmd = exe.get_command()
501 else:
502 cmd = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(t.get_exe()))]
503 is_cross = self.environment.is_cross_build() and \
504 self.environment.cross_info.need_cross_compiler() and \
505 self.environment.cross_info.need_exe_wrapper()
506 if is_cross:
507 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
508 else:
509 exe_wrapper = None
510 if mesonlib.is_windows() or mesonlib.is_cygwin():
511 extra_paths = self.determine_windows_extra_paths(exe)
512 else:
513 extra_paths = []
514 cmd_args = []
515 for a in t.cmd_args:
516 if hasattr(a, 'held_object'):
517 a = a.held_object
518 if isinstance(a, mesonlib.File):
519 a = os.path.join(self.environment.get_build_dir(), a.rel_to_builddir(self.build_to_src))
520 cmd_args.append(a)
521 elif isinstance(a, str):
522 cmd_args.append(a)
523 elif isinstance(a, build.Target):
524 cmd_args.append(self.get_target_filename(a))
525 else:
526 raise MesonException('Bad object in test command.')
527 ts = TestSerialisation(t.get_name(), t.suite, cmd, is_cross, exe_wrapper,
528 t.is_parallel, cmd_args, t.env, t.should_fail,
529 t.timeout, t.workdir, extra_paths)
530 arr.append(ts)
531 pickle.dump(arr, datafile)
532
533 def generate_depmf_install(self, d):
534 if self.build.dep_manifest_name is None:
535 return
536 ifilename = os.path.join(self.environment.get_build_dir(), 'depmf.json')
537 ofilename = os.path.join(self.environment.get_prefix(), self.build.dep_manifest_name)
538 mfobj = {'type': 'dependency manifest', 'version': '1.0', 'projects': self.build.dep_manifest}
539 with open(ifilename, 'w') as f:
540 f.write(json.dumps(mfobj))
541 # Copy file from, to, and with mode unchanged
542 d.data.append([ifilename, ofilename, None])
543
544 def get_regen_filelist(self):
545 '''List of all files whose alteration means that the build
546 definition needs to be regenerated.'''
547 deps = [os.path.join(self.build_to_src, df)
548 for df in self.interpreter.get_build_def_files()]
549 if self.environment.is_cross_build():
550 deps.append(os.path.join(self.build_to_src,
551 self.environment.coredata.cross_file))
552 deps.append('meson-private/coredata.dat')
553 if os.path.exists(os.path.join(self.environment.get_source_dir(), 'meson_options.txt')):
554 deps.append(os.path.join(self.build_to_src, 'meson_options.txt'))
555 for sp in self.build.subprojects.keys():
556 fname = os.path.join(self.environment.get_source_dir(), sp, 'meson_options.txt')
557 if os.path.isfile(fname):
558 deps.append(os.path.join(self.build_to_src, sp, 'meson_options.txt'))
559 return deps
560
561 def exe_object_to_cmd_array(self, exe):
562 if self.environment.is_cross_build() and \
563 self.environment.cross_info.need_exe_wrapper() and \
564 isinstance(exe, build.BuildTarget) and exe.is_cross:
565 if 'exe_wrapper' not in self.environment.cross_info.config['binaries']:
566 s = 'Can not use target %s as a generator because it is cross-built\n'
567 s += 'and no exe wrapper is defined. You might want to set it to native instead.'
568 s = s % exe.name
569 raise MesonException(s)
570 if isinstance(exe, build.BuildTarget):
571 exe_arr = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(exe))]
572 else:
573 exe_arr = exe.get_command()
574 return exe_arr
575
576 def replace_extra_args(self, args, genlist):
577 final_args = []
578 for a in args:
579 if a == '@EXTRA_ARGS@':
580 final_args += genlist.get_extra_args()
581 else:
582 final_args.append(a)
583 return final_args
584
585 def replace_outputs(self, args, private_dir, output_list):
586 newargs = []
587 regex = re.compile('@OUTPUT(\d+)@')
588 for arg in args:
589 m = regex.search(arg)
590 while m is not None:
591 index = int(m.group(1))
592 src = '@OUTPUT%d@' % index
593 arg = arg.replace(src, os.path.join(private_dir, output_list[index]))
594 m = regex.search(arg)
595 newargs.append(arg)
596 return newargs
597
598 def get_build_by_default_targets(self):
599 result = OrderedDict()
600 # Get all build and custom targets that must be built by default
601 for name, t in self.build.get_targets().items():
602 if t.build_by_default or t.install or t.build_always:
603 result[name] = t
604 # Get all targets used as test executables and arguments. These must
605 # also be built by default. XXX: Sometime in the future these should be
606 # built only before running tests.
607 for t in self.build.get_tests():
608 exe = t.exe
609 if hasattr(exe, 'held_object'):
610 exe = exe.held_object
611 if isinstance(exe, (build.CustomTarget, build.BuildTarget)):
612 result[exe.get_id()] = exe
613 for arg in t.cmd_args:
614 if hasattr(arg, 'held_object'):
615 arg = arg.held_object
616 if not isinstance(arg, (build.CustomTarget, build.BuildTarget)):
617 continue
618 result[arg.get_id()] = arg
619 return result
620
621 def get_custom_target_provided_libraries(self, target):
622 libs = []
623 for t in target.get_generated_sources():
624 if not isinstance(t, build.CustomTarget):
625 continue
626 for f in t.get_outputs():
627 if self.environment.is_library(f):
628 libs.append(os.path.join(self.get_target_dir(t), f))
629 return libs
630
631 def is_unity(self, target):
632 optval = self.get_option_for_target('unity', target)
633 if optval == 'on' or (optval == 'subprojects' and target.subproject != ''):
634 return True
635 return False
636
637 def get_custom_target_sources(self, target):
638 '''
639 Custom target sources can be of various object types; strings, File,
640 BuildTarget, even other CustomTargets.
641 Returns the path to them relative to the build root directory.
642 '''
643 srcs = []
644 for i in target.get_sources():
645 if hasattr(i, 'held_object'):
646 i = i.held_object
647 if isinstance(i, str):
648 fname = [os.path.join(self.build_to_src, target.subdir, i)]
649 elif isinstance(i, build.BuildTarget):
650 fname = [self.get_target_filename(i)]
651 elif isinstance(i, build.CustomTarget):
652 fname = [os.path.join(self.get_target_dir(i), p) for p in i.get_outputs()]
653 elif isinstance(i, build.GeneratedList):
654 fname = [os.path.join(self.get_target_private_dir(target), p) for p in i.get_outputs()]
655 else:
656 fname = [i.rel_to_builddir(self.build_to_src)]
657 if target.absolute_paths:
658 fname = [os.path.join(self.environment.get_build_dir(), f) for f in fname]
659 srcs += fname
660 return srcs
661
662 def get_custom_target_depend_files(self, target, absolute_paths=False):
663 deps = []
664 for i in target.depend_files:
665 if isinstance(i, mesonlib.File):
666 if absolute_paths:
667 deps.append(i.absolute_path(self.environment.get_source_dir(),
668 self.environment.get_build_dir()))
669 else:
670 deps.append(i.rel_to_builddir(self.build_to_src))
671 else:
672 if absolute_paths:
673 deps.append(os.path.join(self.environment.get_build_dir(), i))
674 else:
675 deps.append(os.path.join(self.build_to_src, i))
676 return deps
677
678 def eval_custom_target_command(self, target, absolute_outputs=False):
679 # We want the outputs to be absolute only when using the VS backend
680 # XXX: Maybe allow the vs backend to use relative paths too?
681 source_root = self.build_to_src
682 build_root = '.'
683 outdir = self.get_target_dir(target)
684 if absolute_outputs:
685 source_root = self.environment.get_source_dir()
686 build_root = self.environment.get_source_dir()
687 outdir = os.path.join(self.environment.get_build_dir(), outdir)
688 outputs = []
689 for i in target.get_outputs():
690 outputs.append(os.path.join(outdir, i))
691 inputs = self.get_custom_target_sources(target)
692 # Evaluate the command list
693 cmd = []
694 for i in target.command:
695 if isinstance(i, build.Executable):
696 cmd += self.exe_object_to_cmd_array(i)
697 continue
698 elif isinstance(i, build.CustomTarget):
699 # GIR scanner will attempt to execute this binary but
700 # it assumes that it is in path, so always give it a full path.
701 tmp = i.get_outputs()[0]
702 i = os.path.join(self.get_target_dir(i), tmp)
703 elif isinstance(i, mesonlib.File):
704 i = i.rel_to_builddir(self.build_to_src)
705 if target.absolute_paths:
706 i = os.path.join(self.environment.get_build_dir(), i)
707 # FIXME: str types are blindly added ignoring 'target.absolute_paths'
708 # because we can't know if they refer to a file or just a string
709 elif not isinstance(i, str):
710 err_msg = 'Argument {0} is of unknown type {1}'
711 raise RuntimeError(err_msg.format(str(i), str(type(i))))
712 elif '@SOURCE_ROOT@' in i:
713 i = i.replace('@SOURCE_ROOT@', source_root)
714 elif '@BUILD_ROOT@' in i:
715 i = i.replace('@BUILD_ROOT@', build_root)
716 elif '@DEPFILE@' in i:
717 if target.depfile is None:
718 msg = 'Custom target {!r} has @DEPFILE@ but no depfile ' \
719 'keyword argument.'.format(target.name)
720 raise MesonException(msg)
721 dfilename = os.path.join(outdir, target.depfile)
722 i = i.replace('@DEPFILE@', dfilename)
723 elif '@PRIVATE_OUTDIR_' in i:
724 match = re.search('@PRIVATE_OUTDIR_(ABS_)?([^/\s*]*)@', i)
725 if not match:
726 msg = 'Custom target {!r} has an invalid argument {!r}' \
727 ''.format(target.name, i)
728 raise MesonException(msg)
729 source = match.group(0)
730 if match.group(1) is None and not target.absolute_paths:
731 lead_dir = ''
732 else:
733 lead_dir = self.environment.get_build_dir()
734 i = i.replace(source, os.path.join(lead_dir, outdir))
735 cmd.append(i)
736 # Substitute the rest of the template strings
737 values = mesonlib.get_filenames_templates_dict(inputs, outputs)
738 cmd = mesonlib.substitute_values(cmd, values)
739 # This should not be necessary but removing it breaks
740 # building GStreamer on Windows. The underlying issue
741 # is problems with quoting backslashes on Windows
742 # which is the seventh circle of hell. The downside is
743 # that this breaks custom targets whose command lines
744 # have backslashes. If you try to fix this be sure to
745 # check that it does not break GST.
746 #
747 # The bug causes file paths such as c:\foo to get escaped
748 # into c:\\foo.
749 #
750 # Unfortunately we have not been able to come up with an
751 # isolated test case for this so unless you manage to come up
752 # with one, the only way is to test the building with Gst's
753 # setup. Note this in your MR or ping us and we will get it
754 # fixed.
755 #
756 # https://github.com/mesonbuild/meson/pull/737
757 cmd = [i.replace('\\', '/') for i in cmd]
758 return inputs, outputs, cmd
759
760 def run_postconf_scripts(self):
761 env = {'MESON_SOURCE_ROOT': self.environment.get_source_dir(),
762 'MESON_BUILD_ROOT': self.environment.get_build_dir(),
763 'MESONINTROSPECT': get_meson_script(self.environment, 'mesonintrospect')}
764 child_env = os.environ.copy()
765 child_env.update(env)
766
767 for s in self.build.postconf_scripts:
768 cmd = s['exe'] + s['args']
769 subprocess.check_call(cmd, env=child_env)
770
[end of mesonbuild/backend/backends.py]
[start of mesonbuild/scripts/gtkdochelper.py]
1 # Copyright 2015-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sys, os
16 import subprocess
17 import shutil
18 import argparse
19 from ..mesonlib import MesonException, Popen_safe
20 from . import destdir_join
21
22 parser = argparse.ArgumentParser()
23
24 parser.add_argument('--sourcedir', dest='sourcedir')
25 parser.add_argument('--builddir', dest='builddir')
26 parser.add_argument('--subdir', dest='subdir')
27 parser.add_argument('--headerdirs', dest='headerdirs')
28 parser.add_argument('--mainfile', dest='mainfile')
29 parser.add_argument('--modulename', dest='modulename')
30 parser.add_argument('--htmlargs', dest='htmlargs', default='')
31 parser.add_argument('--scanargs', dest='scanargs', default='')
32 parser.add_argument('--scanobjsargs', dest='scanobjsargs', default='')
33 parser.add_argument('--gobjects-types-file', dest='gobject_typesfile', default='')
34 parser.add_argument('--fixxrefargs', dest='fixxrefargs', default='')
35 parser.add_argument('--mkdbargs', dest='mkdbargs', default='')
36 parser.add_argument('--ld', dest='ld', default='')
37 parser.add_argument('--cc', dest='cc', default='')
38 parser.add_argument('--ldflags', dest='ldflags', default='')
39 parser.add_argument('--cflags', dest='cflags', default='')
40 parser.add_argument('--content-files', dest='content_files', default='')
41 parser.add_argument('--expand-content-files', dest='expand_content_files', default='')
42 parser.add_argument('--html-assets', dest='html_assets', default='')
43 parser.add_argument('--ignore-headers', dest='ignore_headers', default='')
44 parser.add_argument('--namespace', dest='namespace', default='')
45 parser.add_argument('--mode', dest='mode', default='')
46 parser.add_argument('--installdir', dest='install_dir')
47
48 def gtkdoc_run_check(cmd, cwd):
49 # Put stderr into stdout since we want to print it out anyway.
50 # This preserves the order of messages.
51 p, out = Popen_safe(cmd, cwd=cwd, stderr=subprocess.STDOUT)[0:2]
52 if p.returncode != 0:
53 err_msg = ["{!r} failed with status {:d}".format(cmd[0], p.returncode)]
54 if out:
55 err_msg.append(out)
56 raise MesonException('\n'.join(err_msg))
57
58 def build_gtkdoc(source_root, build_root, doc_subdir, src_subdirs,
59 main_file, module,
60 html_args, scan_args, fixxref_args, mkdb_args,
61 gobject_typesfile, scanobjs_args, ld, cc, ldflags, cflags,
62 html_assets, content_files, ignore_headers, namespace,
63 expand_content_files, mode):
64 print("Building documentation for %s" % module)
65
66 src_dir_args = []
67 for src_dir in src_subdirs:
68 if not os.path.isabs(src_dir):
69 dirs = [os.path.join(source_root, src_dir),
70 os.path.join(build_root, src_dir)]
71 else:
72 dirs = [src_dir]
73 src_dir_args += ['--source-dir=' + d for d in dirs]
74
75 doc_src = os.path.join(source_root, doc_subdir)
76 abs_out = os.path.join(build_root, doc_subdir)
77 htmldir = os.path.join(abs_out, 'html')
78
79 content_files += [main_file]
80 sections = os.path.join(doc_src, module + "-sections.txt")
81 if os.path.exists(sections):
82 content_files.append(sections)
83
84 overrides = os.path.join(doc_src, module + "-overrides.txt")
85 if os.path.exists(overrides):
86 content_files.append(overrides)
87
88 # Copy files to build directory
89 for f in content_files:
90 f_abs = os.path.join(doc_src, f)
91 shutil.copyfile(f_abs, os.path.join(
92 abs_out, os.path.basename(f_abs)))
93
94 shutil.rmtree(htmldir, ignore_errors=True)
95 try:
96 os.mkdir(htmldir)
97 except Exception:
98 pass
99
100 for f in html_assets:
101 f_abs = os.path.join(doc_src, f)
102 shutil.copyfile(f_abs, os.path.join(htmldir, os.path.basename(f_abs)))
103
104 scan_cmd = ['gtkdoc-scan', '--module=' + module] + src_dir_args
105 if ignore_headers:
106 scan_cmd.append('--ignore-headers=' + ' '.join(ignore_headers))
107 # Add user-specified arguments
108 scan_cmd += scan_args
109 gtkdoc_run_check(scan_cmd, abs_out)
110
111 if gobject_typesfile:
112 scanobjs_cmd = ['gtkdoc-scangobj'] + scanobjs_args + ['--types=' + gobject_typesfile,
113 '--module=' + module,
114 '--cflags=' + cflags,
115 '--ldflags=' + ldflags]
116
117 gtkdoc_run_check(scanobjs_cmd, abs_out)
118
119 # Make docbook files
120 if mode == 'auto':
121         # Guessing is probably a poor idea but this keeps compat
122 # with previous behavior
123 if main_file.endswith('sgml'):
124 modeflag = '--sgml-mode'
125 else:
126 modeflag = '--xml-mode'
127 elif mode == 'xml':
128 modeflag = '--xml-mode'
129 elif mode == 'sgml':
130 modeflag = '--sgml-mode'
131 else: # none
132 modeflag = None
133
134 mkdb_cmd = ['gtkdoc-mkdb',
135 '--module=' + module,
136 '--output-format=xml',
137 '--expand-content-files=' + ' '.join(expand_content_files),
138 ] + src_dir_args
139 if namespace:
140 mkdb_cmd.append('--name-space=' + namespace)
141 if modeflag:
142 mkdb_cmd.append(modeflag)
143 if len(main_file) > 0:
144 # Yes, this is the flag even if the file is in xml.
145 mkdb_cmd.append('--main-sgml-file=' + main_file)
146 # Add user-specified arguments
147 mkdb_cmd += mkdb_args
148 gtkdoc_run_check(mkdb_cmd, abs_out)
149
150 # Make HTML documentation
151 mkhtml_cmd = ['gtkdoc-mkhtml',
152 '--path=' + ':'.join((doc_src, abs_out)),
153 module,
154 ] + html_args
155 if len(main_file) > 0:
156 mkhtml_cmd.append('../' + main_file)
157 else:
158 mkhtml_cmd.append('%s-docs.xml' % module)
159 # html gen must be run in the HTML dir
160 gtkdoc_run_check(mkhtml_cmd, os.path.join(abs_out, 'html'))
161
162 # Fix cross-references in HTML files
163 fixref_cmd = ['gtkdoc-fixxref',
164 '--module=' + module,
165 '--module-dir=html'] + fixxref_args
166 gtkdoc_run_check(fixref_cmd, abs_out)
167
168 def install_gtkdoc(build_root, doc_subdir, install_prefix, datadir, module):
169 source = os.path.join(build_root, doc_subdir, 'html')
170 final_destination = os.path.join(install_prefix, datadir, module)
171 shutil.rmtree(final_destination, ignore_errors=True)
172 shutil.copytree(source, final_destination)
173
174 def run(args):
175 options = parser.parse_args(args)
176 if len(options.htmlargs) > 0:
177 htmlargs = options.htmlargs.split('@@')
178 else:
179 htmlargs = []
180 if len(options.scanargs) > 0:
181 scanargs = options.scanargs.split('@@')
182 else:
183 scanargs = []
184 if len(options.scanobjsargs) > 0:
185 scanobjsargs = options.scanobjsargs.split('@@')
186 else:
187 scanobjsargs = []
188 if len(options.fixxrefargs) > 0:
189 fixxrefargs = options.fixxrefargs.split('@@')
190 else:
191 fixxrefargs = []
192 if len(options.mkdbargs) > 0:
193 mkdbargs = options.mkdbargs.split('@@')
194 else:
195 mkdbargs = []
196 build_gtkdoc(
197 options.sourcedir,
198 options.builddir,
199 options.subdir,
200 options.headerdirs.split('@@'),
201 options.mainfile,
202 options.modulename,
203 htmlargs,
204 scanargs,
205 fixxrefargs,
206 mkdbargs,
207 options.gobject_typesfile,
208 scanobjsargs,
209 options.ld,
210 options.cc,
211 options.ldflags,
212 options.cflags,
213 options.html_assets.split('@@') if options.html_assets else [],
214 options.content_files.split('@@') if options.content_files else [],
215 options.ignore_headers.split('@@') if options.ignore_headers else [],
216 options.namespace,
217 options.expand_content_files.split('@@') if options.expand_content_files else [],
218 options.mode)
219
220 if 'MESON_INSTALL_PREFIX' in os.environ:
221 install_dir = options.install_dir if options.install_dir else options.modulename
222 destdir = os.environ.get('DESTDIR', '')
223 installdir = destdir_join(destdir, os.environ['MESON_INSTALL_PREFIX'])
224 install_gtkdoc(options.builddir,
225 options.subdir,
226 installdir,
227 'share/gtk-doc/html',
228 install_dir)
229 return 0
230
231 if __name__ == '__main__':
232 sys.exit(run(sys.argv[1:]))
233
[end of mesonbuild/scripts/gtkdochelper.py]
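Note how the helper receives every list-valued option as a single '@@'-joined string and splits it in `run()`, and how the `--cflags`/`--ldflags` values are handed to `gtkdoc-scangobj` unchanged, so whatever the generating code puts there (including unsubstituted placeholders or relative `-L` paths) reaches the scanner command line verbatim. Below is a minimal sketch of a direct invocation; the paths, header dirs and flag values are made up purely for illustration:

```python
from mesonbuild.scripts import gtkdochelper

# Hypothetical argument list; in a real build the gnome module assembles this.
gtkdochelper.run([
    '--sourcedir=/path/to/source',
    '--builddir=/path/to/build',
    '--subdir=doc/reference',
    '--headerdirs=finch@@finch/libgnt',
    '--mainfile=finch-docs.xml',
    '--modulename=finch',
    '--cflags=-I/path/to/build/finch',
    '--ldflags=-L/path/to/build/finch -lfinch',
])
```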
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
 def euclidean(a, b):
-    while b:
-        a, b = b, a % b
-    return a
+    if b == 0:
+        return a
+    return euclidean(b, a % b)
 
 
 def bresenham(x0, y0, x1, y1):
     points = []
     dx = abs(x1 - x0)
     dy = abs(y1 - y0)
-    sx = 1 if x0 < x1 else -1
-    sy = 1 if y0 < y1 else -1
-    err = dx - dy
+    x, y = x0, y0
+    sx = -1 if x0 > x1 else 1
+    sy = -1 if y0 > y1 else 1
 
-    while True:
-        points.append((x0, y0))
-        if x0 == x1 and y0 == y1:
-            break
-        e2 = 2 * err
-        if e2 > -dy:
+    if dx > dy:
+        err = dx / 2.0
+        while x != x1:
+            points.append((x, y))
             err -= dy
-            x0 += sx
-        if e2 < dx:
-            err += dx
-            y0 += sy
+            if err < 0:
+                y += sy
+                err += dx
+            x += sx
+    else:
+        err = dy / 2.0
+        while y != y1:
+            points.append((x, y))
+            err -= dx
+            if err < 0:
+                x += sx
+                err += dy
+            y += sy
+    points.append((x, y))
     return points
 
</patch>
|
mesonbuild/meson
|
797bca22a5803cf37c6d44db2e7028db221fbfd8
|
gtkdoc dependencies produce invalid search paths
If I add a `print` at the beginning of [`gtkdoc_run_check`](https://github.com/mesonbuild/meson/blob/master/mesonbuild/scripts/gtkdochelper.py#L48):
```
['gtkdoc-scangobj',
'--types=finch.types',
'--module=finch',
'--cflags=-I@BUILD_ROOT@/. -I@SOURCE_ROOT@/. -I@BUILD_ROOT@/finch/. -I@SOURCE_ROOT@/finch/. -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/gplugin-1.0/ -I@BUILD_ROOT@/libpurple/. -I@SOURCE_ROOT@/libpurple/. -I/usr/include/dbus-1.0 -I/usr/lib64/dbus-1.0/include -I/usr/include/libxml2 -I/usr/include/farstream-0.2 -I/usr/include/json-glib-1.0 -I/usr/include/p11-kit-1 -I/usr/include/nss3 -I/usr/include/nspr4 -I@BUILD_ROOT@/finch/libgnt/. -I@SOURCE_ROOT@/finch/libgnt/. -I/usr/include/ncursesw', '--ldflags=-lfinch -Lfinch -L/home/elliott/code/pidgin-hg/build/finch -L/home/elliott/code/pidgin-hg/build/libpurple -L/home/elliott/code/pidgin-hg/build/finch/libgnt -Wl,-rpath,finch -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 -lgplugin -lgmodule-2.0 -pthread -lgio-2.0 -lncursesw -lpanelw -lpurple -Llibpurple -Wl,-rpath,libpurple -ldbus-1 -ldbus-glib-1 -lxml2 -lfarstream-0.2 -lgstbase-1.0 -lgstvideo-1.0 -lgstapp-1.0 -lidn -ljson-glib-1.0 -lgnutls -lssl3 -lsmime3 -lnss3 -lnssutil3 -lplds4 -lplc4 -lnspr4 -lpthread -ldl -lz -lm -lgnt -Lfinch/libgnt -Wl,-rpath,finch/libgnt']
```
where `gtkdoc` is called approximately like:
```meson
libfinch_inc = include_directories('somewhere')
libfinch_dep = declare_dependency(
    link_with : library('finch', ...),
    include_directories : [libfinch_inc, other stuff],
    dependencies : [gstreamer, glib, etc.])
DOC_MODULE = 'finch'
gnome.gtkdoc(DOC_MODULE,
    main_xml : DOC_MODULE + '-docs.xml',
    src_dir : libfinch_inc,
    dependencies : libfinch_dep,
    gobject_typesfile : 'finch.types',
    scan_args : scan_args)
```
You can see that `@BUILD_ROOT@` and `@SOURCE_ROOT@` are not replaced, and the linker paths are relative to the source instead of build+source. I think the problem is that it creates a `RunTarget`, which does no substitutions and runs from an unspecified directory.
|
The first part of this, the substitutions, is a dupe of #1681. The second part, the linker paths, is a separate issue and is yet to be fixed.
Both linker search path and rpath are relative (to top of build or source) and so built libraries are not found.
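To make the second part concrete, here is a minimal sketch of how the link arguments could be anchored to the absolute build root, so that `gtkdoc-scangobj` (which runs from the documentation subdirectory) can still locate the freshly built libraries. The helper name and its `state`/`lib` parameters are illustrative, not the module's actual API:
```python
import os

def absolute_link_args(state, lib, include_rpath=False):
    # Anchor the target's output directory at the absolute build root
    # instead of leaving it relative to the top of the build tree.
    libdir = os.path.join(state.environment.get_build_dir(),
                          state.backend.get_target_dir(lib))
    args = ['-l' + lib.name, '-L' + libdir]
    if include_rpath:
        # The rpath has to be absolute too, or the scanner binary will not
        # find the shared library at run time.
        args.append('-Wl,-rpath,' + libdir)
    return args
```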
|
2017-06-20T01:06:18Z
|
<patch>
diff --git a/mesonbuild/modules/gnome.py b/mesonbuild/modules/gnome.py
--- a/mesonbuild/modules/gnome.py
+++ b/mesonbuild/modules/gnome.py
@@ -294,7 +294,7 @@ def _get_link_args(self, state, lib, depends=None, include_rpath=False,
else:
link_command = ['-l' + lib.name]
if isinstance(lib, build.SharedLibrary):
- libdir = state.backend.get_target_dir(lib)
+ libdir = os.path.join(state.environment.get_build_dir(), state.backend.get_target_dir(lib))
link_command.append('-L' + libdir)
# Needed for the following binutils bug:
# https://github.com/mesonbuild/meson/issues/1911
@@ -303,6 +303,8 @@ def _get_link_args(self, state, lib, depends=None, include_rpath=False,
for d in state.backend.determine_rpath_dirs(lib):
d = os.path.join(state.environment.get_build_dir(), d)
link_command.append('-L' + d)
+ if include_rpath:
+ link_command.append('-Wl,-rpath,' + d)
if include_rpath:
link_command.append('-Wl,-rpath,' + libdir)
if depends:
@@ -700,6 +702,8 @@ def gtkdoc(self, state, args, kwargs):
for inc_dir in src_dir.get_incdirs():
header_dirs.append(os.path.join(state.environment.get_source_dir(),
src_dir.get_curdir(), inc_dir))
+ header_dirs.append(os.path.join(state.environment.get_build_dir(),
+ src_dir.get_curdir(), inc_dir))
else:
header_dirs.append(src_dir)
</patch>
|
[]
|
[]
| |||
googleapis__google-cloud-python-4439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery Python Client v0.28.0: num_rows, schema property of a Table returning None and empty list
Hi,
I am using the BigQuery Python client v0.28.0
Using the BigQuery client to list all tables in a dataset with the method `list_dataset_tables`, I get an iterator over the tables in the dataset. When I try to access the `num_rows` and `schema` properties of each returned Table object, they return `None` and `[]`.
But when I access the table using `client.get_table()`, I am able to access both the `num_rows` and `schema` properties.
I am confused, since both methods return a `google.cloud.bigquery.table.Table` object but behave inconsistently.
```
for table in bqclient.list_dataset_tables(dataset_ref):
    print(table.num_rows)
    print(table.schema)
```
Returns None and []
```
table = bqclient.get_table(table)
print(table.num_rows)
print(table.schema)
```
Returns the number of rows and list of schemaFields
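A workaround for now is to re-fetch each listed table, at the cost of one extra API call per table (a minimal sketch, assuming `bqclient` and `dataset_ref` are set up as above):
```
for table_item in bqclient.list_dataset_tables(dataset_ref):
    # The listing only returns a partial resource, so fetch the full table.
    table = bqclient.get_table(table_item.reference)
    print(table.num_rows)
    print(table.schema)
```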
Thanks
</issue>
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic client for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 |pypi| |circleci| |appveyor| |coverage| |versions|
9
10 - `Homepage`_
11 - `API Documentation`_
12 - `Read The Docs Documentation`_
13
14 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/
15 .. _API Documentation: https://googlecloudplatform.github.io/google-cloud-python/latest/
16 .. _Read The Docs Documentation: https://google-cloud-python.readthedocs.io/en/latest/
17
18 .. note::
19
20 These libraries currently do not run on Google App Engine Standard.
21 We are actively working on adding this support.
22
23 The following client libraries have **GA** support:
24
25 - `Google Cloud Datastore`_ (`Datastore README`_)
26 - `Google Cloud Natural Language`_ (`Natural Language README`_)
27 - `Google Cloud Storage`_ (`Storage README`_)
28 - `Google Cloud Translation`_ (`Translation README`_)
29 - `Stackdriver Logging`_ (`Logging README`_)
30
31 **GA** (general availability) indicates that the client library for a
32 particular service is stable, and that the code surface will not change in
33 backwards-incompatible ways unless either absolutely necessary (e.g. because
34 of critical security issues) or with an extensive deprecation period.
35 Issues and requests against GA libraries are addressed with the highest
36 priority.
37
38 .. note::
39
40 Sub-components of GA libraries explicitly marked as beta in the
41 import path (e.g. ``google.cloud.language_v1beta2``) should be considered
42 to be beta.
43
44 The following client libraries have **beta** support:
45
46 - `Google BigQuery`_ (`BigQuery README`_)
47 - `Google Cloud Firestore`_ (`Firestore README`_)
48 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_)
49 - `Google Cloud Spanner`_ (`Spanner README`_)
50 - `Google Cloud Speech`_ (`Speech README`_)
51 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_)
52 - `Google Cloud Vision`_ (`Vision README`_)
53
54 **Beta** indicates that the client library for a particular service is
55 mostly stable and is being prepared for release. Issues and requests
56 against beta libraries are addressed with a higher priority.
57
58 This client library has **alpha** support for the following Google
59 Cloud Platform services:
60
61 - `Google Cloud Bigtable`_ (`Bigtable README`_)
62 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_)
63 - `Google Cloud DNS`_ (`DNS README`_)
64 - `Google Cloud Resource Manager`_ (`Resource Manager README`_)
65 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_)
66 - `Stackdriver Error Reporting`_ (`Error Reporting README`_)
67 - `Stackdriver Monitoring`_ (`Monitoring README`_)
68
69 **Alpha** indicates that the client library for a particular service is
70 still a work-in-progress and is more likely to get backwards-incompatible
71 updates. See `versioning`_ for more details.
72
73 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/
74 .. _Datastore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/datastore
75 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/
76 .. _Storage README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/storage
77 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/
78 .. _Pub/Sub README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/pubsub
79 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/
80 .. _BigQuery README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery
81 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/
82 .. _Resource Manager README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/resource_manager
83 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/
84 .. _Logging README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/logging
85 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/
86 .. _Monitoring README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
87 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/
88 .. _Bigtable README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigtable
89 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/
90 .. _DNS README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/dns
91 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/
92 .. _Error Reporting README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/error_reporting
93 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/
94 .. _Natural Language README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/language
95 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/
96 .. _Translation README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/translate
97 .. _Google Cloud Speech: https://pypi.org/project/google-cloud-speech/
98 .. _Speech README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/speech
99 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/
100 .. _Vision README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/vision
101 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/
102 .. _HappyBase README: https://github.com/GoogleCloudPlatform/google-cloud-python-happybase
103 .. _Google Cloud Runtime Configuration: https://cloud.google.com/deployment-manager/runtime-configurator/
104 .. _Runtime Config README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/runtimeconfig
105 .. _Google Cloud Spanner: https://pypi.python.org/pypi/google-cloud-spanner
106 .. _Spanner README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/spanner
107 .. _Google Cloud Video Intelligence: https://pypi.python.org/pypi/google-cloud-videointelligence
108 .. _Video Intelligence README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/videointelligence
109 .. _versioning: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
110 .. _Google Cloud Firestore: https://pypi.org/project/google-cloud-firestore/
111 .. _Firestore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/firestore
112
113 If you need support for other Google APIs, check out the
114 `Google APIs Python Client library`_.
115
116 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
117
118 Quick Start
119 -----------
120
121 .. code-block:: console
122
123 $ pip install --upgrade google-cloud
124
125 For more information on setting up your Python development environment,
126 such as installing ``pip`` and ``virtualenv`` on your system, please refer
127 to `Python Development Environment Setup Guide`_ for Google Cloud Platform.
128
129 .. _Python Development Environment Setup Guide: https://cloud.google.com/python/setup
130
131 Example Applications
132 --------------------
133
134 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
135 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
136
137 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
138 .. _tutorial: https://cloud.google.com/python
139 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
140
141 Authentication
142 --------------
143
144 With ``google-cloud-python`` we try to make authentication as painless as possible.
145 Check out the `Authentication section`_ in our documentation to learn more.
146 You may also find the `authentication document`_ shared by all the
147 ``google-cloud-*`` libraries to be helpful.
148
149 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/core/auth.html
150 .. _authentication document: https://github.com/GoogleCloudPlatform/google-cloud-common/tree/master/authentication
151
152 Contributing
153 ------------
154
155 Contributions to this library are always welcome and highly encouraged.
156
157 See the `CONTRIBUTING doc`_ for more information on how to get started.
158
159 .. _CONTRIBUTING doc: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst
160
161 Community
162 ---------
163
164 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
165 channel, click here to `get an invitation`_.
166
167
168 .. _Slack: https://googlecloud-community.slack.com
169 .. _get an invitation: https://gcp-slack.appspot.com/
170
171 License
172 -------
173
174 Apache 2.0 - See `the LICENSE`_ for more information.
175
176 .. _the LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE
177
178 .. |circleci| image:: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python.svg?style=shield
179 :target: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python
180 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true
181 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python
182 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.svg?branch=master
183 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master
184 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg
185 :target: https://pypi.org/project/google-cloud/
186 .. |versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg
187 :target: https://pypi.org/project/google-cloud/
188
[end of README.rst]
[start of bigquery/google/cloud/bigquery/table.py]
1 # Copyright 2015 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Define API Tables."""
16
17 from __future__ import absolute_import
18
19 import copy
20 import datetime
21 import operator
22
23 import six
24 try:
25 import pandas
26 except ImportError: # pragma: NO COVER
27 pandas = None
28
29 from google.api_core.page_iterator import HTTPIterator
30
31 from google.cloud._helpers import _datetime_from_microseconds
32 from google.cloud._helpers import _millis_from_datetime
33 from google.cloud.bigquery._helpers import _item_to_row
34 from google.cloud.bigquery._helpers import _rows_page_start
35 from google.cloud.bigquery._helpers import _snake_to_camel_case
36 from google.cloud.bigquery._helpers import _field_to_index_mapping
37 from google.cloud.bigquery.schema import SchemaField
38 from google.cloud.bigquery.schema import _build_schema_resource
39 from google.cloud.bigquery.schema import _parse_schema_resource
40 from google.cloud.bigquery.external_config import ExternalConfig
41
42
43 _TABLE_HAS_NO_SCHEMA = "Table has no schema: call 'client.get_table()'"
44 _MARKER = object()
45
46
47 def _reference_getter(table):
48 """A :class:`~google.cloud.bigquery.table.TableReference` pointing to
49 this table.
50
51 Returns:
52 google.cloud.bigquery.table.TableReference: pointer to this table
53 """
54 from google.cloud.bigquery import dataset
55
56 dataset_ref = dataset.DatasetReference(table.project, table.dataset_id)
57 return TableReference(dataset_ref, table.table_id)
58
59
60 def _view_use_legacy_sql_getter(table):
61 """Specifies whether to execute the view with Legacy or Standard SQL.
62
63 If this table is not a view, None is returned.
64
65 Returns:
66 bool: True if the view is using legacy SQL, or None if not a view
67 """
68 view = table._properties.get('view')
69 if view is not None:
70 # The server-side default for useLegacySql is True.
71 return view.get('useLegacySql', True)
72 # In some cases, such as in a table list no view object is present, but the
73 # resource still represents a view. Use the type as a fallback.
74 if table.table_type == 'VIEW':
75 # The server-side default for useLegacySql is True.
76 return True
77
78
79 class TableReference(object):
80 """TableReferences are pointers to tables.
81
82 See
83 https://cloud.google.com/bigquery/docs/reference/rest/v2/tables
84
85 :type dataset_ref: :class:`google.cloud.bigquery.dataset.DatasetReference`
86 :param dataset_ref: a pointer to the dataset
87
88 :type table_id: str
89 :param table_id: the ID of the table
90 """
91
92 def __init__(self, dataset_ref, table_id):
93 self._project = dataset_ref.project
94 self._dataset_id = dataset_ref.dataset_id
95 self._table_id = table_id
96
97 @property
98 def project(self):
99 """Project bound to the table.
100
101 :rtype: str
102 :returns: the project (derived from the dataset reference).
103 """
104 return self._project
105
106 @property
107 def dataset_id(self):
108 """ID of dataset containing the table.
109
110 :rtype: str
111 :returns: the ID (derived from the dataset reference).
112 """
113 return self._dataset_id
114
115 @property
116 def table_id(self):
117 """Table ID.
118
119 :rtype: str
120 :returns: the table ID.
121 """
122 return self._table_id
123
124 @property
125 def path(self):
126 """URL path for the table's APIs.
127
128 :rtype: str
129 :returns: the path based on project, dataset and table IDs.
130 """
131 return '/projects/%s/datasets/%s/tables/%s' % (
132 self._project, self._dataset_id, self._table_id)
133
134 @classmethod
135 def from_api_repr(cls, resource):
136 """Factory: construct a table reference given its API representation
137
138 :type resource: dict
139 :param resource: table reference representation returned from the API
140
141 :rtype: :class:`google.cloud.bigquery.table.TableReference`
142 :returns: Table reference parsed from ``resource``.
143 """
144 from google.cloud.bigquery.dataset import DatasetReference
145
146 project = resource['projectId']
147 dataset_id = resource['datasetId']
148 table_id = resource['tableId']
149 return cls(DatasetReference(project, dataset_id), table_id)
150
151 def to_api_repr(self):
152 """Construct the API resource representation of this table reference.
153
154 :rtype: dict
155 :returns: Table reference as represented as an API resource
156 """
157 return {
158 'projectId': self._project,
159 'datasetId': self._dataset_id,
160 'tableId': self._table_id,
161 }
162
163 def _key(self):
164 """A tuple key that uniquely describes this field.
165
166 Used to compute this instance's hashcode and evaluate equality.
167
168 Returns:
169 tuple: The contents of this :class:`DatasetReference`.
170 """
171 return (
172 self._project,
173 self._dataset_id,
174 self._table_id,
175 )
176
177 def __eq__(self, other):
178 if not isinstance(other, TableReference):
179 return NotImplemented
180 return self._key() == other._key()
181
182 def __ne__(self, other):
183 return not self == other
184
185 def __hash__(self):
186 return hash(self._key())
187
188 def __repr__(self):
189 return 'TableReference{}'.format(self._key())
190
191
192 class Table(object):
193 """Tables represent a set of rows whose values correspond to a schema.
194
195 See
196 https://cloud.google.com/bigquery/docs/reference/rest/v2/tables
197
198 :type table_ref: :class:`google.cloud.bigquery.table.TableReference`
199 :param table_ref: a pointer to a table
200
201 :type schema: list of :class:`~google.cloud.bigquery.schema.SchemaField`
202 :param schema: The table's schema
203 """
204
205 _schema = None
206
207 all_fields = [
208 'description', 'friendly_name', 'expires', 'location',
209 'partitioning_type', 'view_use_legacy_sql', 'view_query', 'schema',
210 'external_data_configuration', 'labels',
211 ]
212
213 def __init__(self, table_ref, schema=()):
214 self._project = table_ref.project
215 self._table_id = table_ref.table_id
216 self._dataset_id = table_ref.dataset_id
217 self._external_config = None
218 self._properties = {'labels': {}}
219 # Let the @property do validation.
220 self.schema = schema
221
222 @property
223 def project(self):
224 """Project bound to the table.
225
226 :rtype: str
227 :returns: the project (derived from the dataset).
228 """
229 return self._project
230
231 @property
232 def dataset_id(self):
233 """ID of dataset containing the table.
234
235 :rtype: str
236 :returns: the ID (derived from the dataset).
237 """
238 return self._dataset_id
239
240 @property
241 def table_id(self):
242 """ID of the table.
243
244 :rtype: str
245 :returns: the table ID.
246 """
247 return self._table_id
248
249 reference = property(_reference_getter)
250
251 @property
252 def path(self):
253 """URL path for the table's APIs.
254
255 :rtype: str
256 :returns: the path based on project, dataset and table IDs.
257 """
258 return '/projects/%s/datasets/%s/tables/%s' % (
259 self._project, self._dataset_id, self._table_id)
260
261 @property
262 def schema(self):
263 """Table's schema.
264
265 :rtype: list of :class:`~google.cloud.bigquery.schema.SchemaField`
266 :returns: fields describing the schema
267 """
268 return list(self._schema)
269
270 @schema.setter
271 def schema(self, value):
272 """Update table's schema
273
274 :type value: list of :class:`~google.cloud.bigquery.schema.SchemaField`
275 :param value: fields describing the schema
276
277 :raises: TypeError if 'value' is not a sequence, or ValueError if
278 any item in the sequence is not a SchemaField
279 """
280 if value is None:
281 self._schema = ()
282 elif not all(isinstance(field, SchemaField) for field in value):
283 raise ValueError('Schema items must be fields')
284 else:
285 self._schema = tuple(value)
286
287 @property
288 def labels(self):
289 """Labels for the table.
290
291 This method always returns a dict. To change a table's labels,
292 modify the dict, then call ``Client.update_table``. To delete a
293 label, set its value to ``None`` before updating.
294
295 :rtype: dict, {str -> str}
296         :returns: A dict of the table's labels.
297 """
298 return self._properties['labels']
299
300 @labels.setter
301 def labels(self, value):
302 """Update labels for the table.
303
304 :type value: dict, {str -> str}
305 :param value: new labels
306
307 :raises: ValueError for invalid value types.
308 """
309 if not isinstance(value, dict):
310 raise ValueError("Pass a dict")
311 self._properties['labels'] = value
312
313 @property
314 def created(self):
315 """Datetime at which the table was created.
316
317 :rtype: ``datetime.datetime``, or ``NoneType``
318 :returns: the creation time (None until set from the server).
319 """
320 creation_time = self._properties.get('creationTime')
321 if creation_time is not None:
322 # creation_time will be in milliseconds.
323 return _datetime_from_microseconds(1000.0 * creation_time)
324
325 @property
326 def etag(self):
327 """ETag for the table resource.
328
329 :rtype: str, or ``NoneType``
330 :returns: the ETag (None until set from the server).
331 """
332 return self._properties.get('etag')
333
334 @property
335 def modified(self):
336 """Datetime at which the table was last modified.
337
338 :rtype: ``datetime.datetime``, or ``NoneType``
339 :returns: the modification time (None until set from the server).
340 """
341 modified_time = self._properties.get('lastModifiedTime')
342 if modified_time is not None:
343 # modified_time will be in milliseconds.
344 return _datetime_from_microseconds(1000.0 * modified_time)
345
346 @property
347 def num_bytes(self):
348 """The size of the table in bytes.
349
350 :rtype: int, or ``NoneType``
351 :returns: the byte count (None until set from the server).
352 """
353 num_bytes_as_str = self._properties.get('numBytes')
354 if num_bytes_as_str is not None:
355 return int(num_bytes_as_str)
356
357 @property
358 def num_rows(self):
359 """The number of rows in the table.
360
361 :rtype: int, or ``NoneType``
362 :returns: the row count (None until set from the server).
363 """
364 num_rows_as_str = self._properties.get('numRows')
365 if num_rows_as_str is not None:
366 return int(num_rows_as_str)
367
368 @property
369 def self_link(self):
370 """URL for the table resource.
371
372 :rtype: str, or ``NoneType``
373 :returns: the URL (None until set from the server).
374 """
375 return self._properties.get('selfLink')
376
377 @property
378 def full_table_id(self):
379 """ID for the table, in the form ``project_id:dataset_id:table_id``.
380
381 :rtype: str, or ``NoneType``
382 :returns: the full ID (None until set from the server).
383 """
384 return self._properties.get('id')
385
386 @property
387 def table_type(self):
388 """The type of the table.
389
390 Possible values are "TABLE", "VIEW", or "EXTERNAL".
391
392 :rtype: str, or ``NoneType``
393         :returns: the table type (None until set from the server).
394 """
395 return self._properties.get('type')
396
397 @property
398 def partitioning_type(self):
399 """Time partitioning of the table.
400 :rtype: str, or ``NoneType``
401 :returns: Returns type if the table is partitioned, None otherwise.
402 """
403 return self._properties.get('timePartitioning', {}).get('type')
404
405 @partitioning_type.setter
406 def partitioning_type(self, value):
407 """Update the partitioning type of the table
408
409 :type value: str
410 :param value: partitioning type only "DAY" is currently supported
411 """
412 if value not in ('DAY', None):
413 raise ValueError("value must be one of ['DAY', None]")
414
415 if value is None:
416 self._properties.pop('timePartitioning', None)
417 else:
418 time_part = self._properties.setdefault('timePartitioning', {})
419 time_part['type'] = value.upper()
420
421 @property
422 def partition_expiration(self):
423 """Expiration time in ms for a partition
424 :rtype: int, or ``NoneType``
425 :returns: Returns the time in ms for partition expiration
426 """
427 return self._properties.get('timePartitioning', {}).get('expirationMs')
428
429 @partition_expiration.setter
430 def partition_expiration(self, value):
431         """Update the expiration time in ms for a partition
432
433 :type value: int
434         :param value: partition expiration time in milliseconds
435 """
436 if not isinstance(value, (int, type(None))):
437 raise ValueError(
438                 "must be an integer representing milliseconds or None")
439
440 if value is None:
441 if 'timePartitioning' in self._properties:
442 self._properties['timePartitioning'].pop('expirationMs')
443 else:
444 try:
445 self._properties['timePartitioning']['expirationMs'] = value
446 except KeyError:
447 self._properties['timePartitioning'] = {'type': 'DAY'}
448 self._properties['timePartitioning']['expirationMs'] = value
449
450 @property
451 def description(self):
452 """Description of the table.
453
454 :rtype: str, or ``NoneType``
455 :returns: The description as set by the user, or None (the default).
456 """
457 return self._properties.get('description')
458
459 @description.setter
460 def description(self, value):
461 """Update description of the table.
462
463 :type value: str
464 :param value: (Optional) new description
465
466 :raises: ValueError for invalid value types.
467 """
468 if not isinstance(value, six.string_types) and value is not None:
469 raise ValueError("Pass a string, or None")
470 self._properties['description'] = value
471
472 @property
473 def expires(self):
474 """Datetime at which the table will be removed.
475
476 :rtype: ``datetime.datetime``, or ``NoneType``
477 :returns: the expiration time, or None
478 """
479 expiration_time = self._properties.get('expirationTime')
480 if expiration_time is not None:
481 # expiration_time will be in milliseconds.
482 return _datetime_from_microseconds(1000.0 * expiration_time)
483
484 @expires.setter
485 def expires(self, value):
486 """Update datetime at which the table will be removed.
487
488 :type value: ``datetime.datetime``
489 :param value: (Optional) the new expiration time, or None
490 """
491 if not isinstance(value, datetime.datetime) and value is not None:
492 raise ValueError("Pass a datetime, or None")
493 self._properties['expirationTime'] = _millis_from_datetime(value)
494
495 @property
496 def friendly_name(self):
497 """Title of the table.
498
499 :rtype: str, or ``NoneType``
500 :returns: The name as set by the user, or None (the default).
501 """
502 return self._properties.get('friendlyName')
503
504 @friendly_name.setter
505 def friendly_name(self, value):
506 """Update title of the table.
507
508 :type value: str
509 :param value: (Optional) new title
510
511 :raises: ValueError for invalid value types.
512 """
513 if not isinstance(value, six.string_types) and value is not None:
514 raise ValueError("Pass a string, or None")
515 self._properties['friendlyName'] = value
516
517 @property
518 def location(self):
519 """Location in which the table is hosted.
520
521 :rtype: str, or ``NoneType``
522 :returns: The location as set by the user, or None (the default).
523 """
524 return self._properties.get('location')
525
526 @location.setter
527 def location(self, value):
528 """Update location in which the table is hosted.
529
530 :type value: str
531 :param value: (Optional) new location
532
533 :raises: ValueError for invalid value types.
534 """
535 if not isinstance(value, six.string_types) and value is not None:
536 raise ValueError("Pass a string, or None")
537 self._properties['location'] = value
538
539 @property
540 def view_query(self):
541 """SQL query defining the table as a view.
542
543 By default, the query is treated as Standard SQL. To use Legacy
544 SQL, set view_use_legacy_sql to True.
545
546 :rtype: str, or ``NoneType``
547 :returns: The query as set by the user, or None (the default).
548 """
549 view = self._properties.get('view')
550 if view is not None:
551 return view.get('query')
552
553 @view_query.setter
554 def view_query(self, value):
555 """Update SQL query defining the table as a view.
556
557 :type value: str
558 :param value: new query
559
560 :raises: ValueError for invalid value types.
561 """
562 if not isinstance(value, six.string_types):
563 raise ValueError("Pass a string")
564 view = self._properties.get('view')
565 if view is None:
566 view = self._properties['view'] = {}
567 view['query'] = value
568 # The service defaults useLegacySql to True, but this
569 # client uses Standard SQL by default.
570 if view.get('useLegacySql') is None:
571 view['useLegacySql'] = False
572
573 @view_query.deleter
574 def view_query(self):
575 """Delete SQL query defining the table as a view."""
576 self._properties.pop('view', None)
577
578 view_use_legacy_sql = property(_view_use_legacy_sql_getter)
579
580 @view_use_legacy_sql.setter
581 def view_use_legacy_sql(self, value):
582 """Update the view sub-property 'useLegacySql'.
583
584 This boolean specifies whether to execute the view with Legacy SQL
585 (True) or Standard SQL (False). The default, if not specified, is
586 'False'.
587
588 :type value: bool
589 :param value: The boolean for view.useLegacySql
590
591 :raises: ValueError for invalid value types.
592 """
593 if not isinstance(value, bool):
594 raise ValueError("Pass a boolean")
595 if self._properties.get('view') is None:
596 self._properties['view'] = {}
597 self._properties['view']['useLegacySql'] = value
598
599 @property
600 def streaming_buffer(self):
601 """Information about a table's streaming buffer.
602
603 :rtype: :class:`~google.cloud.bigquery.StreamingBuffer`
604 :returns: Streaming buffer information, returned from get_table.
605 """
606 sb = self._properties.get('streamingBuffer')
607 if sb is not None:
608 return StreamingBuffer(sb)
609
610 @property
611 def external_data_configuration(self):
612 """Configuration for an external data source.
613
614 If not set, None is returned.
615
616 :rtype: :class:`~google.cloud.bigquery.ExternalConfig`, or ``NoneType``
617 :returns: The external configuration, or None (the default).
618 """
619 return self._external_config
620
621 @external_data_configuration.setter
622 def external_data_configuration(self, value):
623 """Sets the configuration for an external data source.
624
625 :type value:
626 :class:`~google.cloud.bigquery.ExternalConfig`, or ``NoneType``
627 :param value: The ExternalConfig, or None to unset.
628 """
629 if not (value is None or isinstance(value, ExternalConfig)):
630 raise ValueError("Pass an ExternalConfig or None")
631 self._external_config = value
632
633 @classmethod
634 def from_api_repr(cls, resource):
635 """Factory: construct a table given its API representation
636
637 :type resource: dict
638 :param resource: table resource representation returned from the API
639
640 :type dataset: :class:`google.cloud.bigquery.dataset.Dataset`
641 :param dataset: The dataset containing the table.
642
643 :rtype: :class:`google.cloud.bigquery.table.Table`
644 :returns: Table parsed from ``resource``.
645 """
646 from google.cloud.bigquery import dataset
647
648 if ('tableReference' not in resource or
649 'tableId' not in resource['tableReference']):
650 raise KeyError('Resource lacks required identity information:'
651 '["tableReference"]["tableId"]')
652 project_id = resource['tableReference']['projectId']
653 table_id = resource['tableReference']['tableId']
654 dataset_id = resource['tableReference']['datasetId']
655 dataset_ref = dataset.DatasetReference(project_id, dataset_id)
656
657 table = cls(dataset_ref.table(table_id))
658 table._set_properties(resource)
659 return table
660
661 def _set_properties(self, api_response):
662 """Update properties from resource in body of ``api_response``
663
664 :type api_response: dict
665 :param api_response: response returned from an API call
666 """
667 self._properties.clear()
668 cleaned = api_response.copy()
669 schema = cleaned.pop('schema', {'fields': ()})
670 self.schema = _parse_schema_resource(schema)
671 ec = cleaned.pop('externalDataConfiguration', None)
672 if ec:
673 self.external_data_configuration = ExternalConfig.from_api_repr(ec)
674 if 'creationTime' in cleaned:
675 cleaned['creationTime'] = float(cleaned['creationTime'])
676 if 'lastModifiedTime' in cleaned:
677 cleaned['lastModifiedTime'] = float(cleaned['lastModifiedTime'])
678 if 'expirationTime' in cleaned:
679 cleaned['expirationTime'] = float(cleaned['expirationTime'])
680 if 'labels' not in cleaned:
681 cleaned['labels'] = {}
682 self._properties.update(cleaned)
683
684 def _populate_expires_resource(self, resource):
685 resource['expirationTime'] = _millis_from_datetime(self.expires)
686
687 def _populate_partitioning_type_resource(self, resource):
688 resource['timePartitioning'] = self._properties.get('timePartitioning')
689
690 def _populate_view_use_legacy_sql_resource(self, resource):
691 if 'view' not in resource:
692 resource['view'] = {}
693 resource['view']['useLegacySql'] = self.view_use_legacy_sql
694
695 def _populate_view_query_resource(self, resource):
696 if self.view_query is None:
697 resource['view'] = None
698 return
699 if 'view' not in resource:
700 resource['view'] = {}
701 resource['view']['query'] = self.view_query
702
703 def _populate_schema_resource(self, resource):
704 if not self._schema:
705 resource['schema'] = None
706 else:
707 resource['schema'] = {
708 'fields': _build_schema_resource(self._schema),
709 }
710
711 def _populate_external_config(self, resource):
712 if not self.external_data_configuration:
713 resource['externalDataConfiguration'] = None
714 else:
715 resource['externalDataConfiguration'] = ExternalConfig.to_api_repr(
716 self.external_data_configuration)
717
718 custom_resource_fields = {
719 'expires': _populate_expires_resource,
720 'partitioning_type': _populate_partitioning_type_resource,
721 'view_query': _populate_view_query_resource,
722 'view_use_legacy_sql': _populate_view_use_legacy_sql_resource,
723 'schema': _populate_schema_resource,
724 'external_data_configuration': _populate_external_config,
725 }
726
727 def _build_resource(self, filter_fields):
728 """Generate a resource for ``create`` or ``update``."""
729 resource = {
730 'tableReference': {
731 'projectId': self._project,
732 'datasetId': self._dataset_id,
733 'tableId': self.table_id},
734 }
735 for f in filter_fields:
736 if f in self.custom_resource_fields:
737 self.custom_resource_fields[f](self, resource)
738 else:
739 api_field = _snake_to_camel_case(f)
740 resource[api_field] = getattr(self, f)
741 return resource
742
743
744 class TableListItem(object):
745 """A read-only table resource from a list operation.
746
747 For performance reasons, the BigQuery API only includes some of the table
748 properties when listing tables. Notably,
749 :attr:`~google.cloud.bigquery.table.Table.schema` and
750 :attr:`~google.cloud.bigquery.table.Table.num_rows` are missing.
751
752 For a full list of the properties that the BigQuery API returns, see the
753 `REST documentation for tables.list
754 <https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/list>`_.
755
756
757 Args:
758 resource (dict):
759 A table-like resource object from a table list response.
760 """
761
762 def __init__(self, resource):
763 self._properties = resource
764
765 @property
766 def project(self):
767 """The project ID of the project this table belongs to.
768
769 Returns:
770 str: the project ID of the table.
771 """
772 return self._properties.get('tableReference', {}).get('projectId')
773
774 @property
775 def dataset_id(self):
776 """The dataset ID of the dataset this table belongs to.
777
778 Returns:
779 str: the dataset ID of the table.
780 """
781 return self._properties.get('tableReference', {}).get('datasetId')
782
783 @property
784 def table_id(self):
785 """The table ID.
786
787 Returns:
788 str: the table ID.
789 """
790 return self._properties.get('tableReference', {}).get('tableId')
791
792 reference = property(_reference_getter)
793
794 @property
795 def labels(self):
796 """Labels for the table.
797
798 This method always returns a dict. To change a table's labels,
799 modify the dict, then call ``Client.update_table``. To delete a
800 label, set its value to ``None`` before updating.
801
802 Returns:
803             Map[str, str]: A dictionary of the table's labels
804 """
805 return self._properties.get('labels', {})
806
807 @property
808 def full_table_id(self):
809 """ID for the table, in the form ``project_id:dataset_id:table_id``.
810
811 Returns:
812 str: The fully-qualified ID of the table
813 """
814 return self._properties.get('id')
815
816 @property
817 def table_type(self):
818 """The type of the table.
819
820 Possible values are "TABLE", "VIEW", or "EXTERNAL".
821
822 Returns:
823 str: The kind of table
824 """
825 return self._properties.get('type')
826
827 @property
828 def partitioning_type(self):
829 """Time partitioning of the table.
830
831 Returns:
832 str:
833 Type of partitioning if the table is partitioned, None
834 otherwise.
835 """
836 return self._properties.get('timePartitioning', {}).get('type')
837
838 @property
839 def partition_expiration(self):
840 """Expiration time in ms for a partition
841
842 Returns:
843 int: The time in ms for partition expiration
844 """
845 return int(
846 self._properties.get('timePartitioning', {}).get('expirationMs'))
847
848 @property
849 def friendly_name(self):
850 """Title of the table.
851
852 Returns:
853 str: The name as set by the user, or None (the default)
854 """
855 return self._properties.get('friendlyName')
856
857 view_use_legacy_sql = property(_view_use_legacy_sql_getter)
858
859
860 def _row_from_mapping(mapping, schema):
861 """Convert a mapping to a row tuple using the schema.
862
863 :type mapping: dict
864 :param mapping: Mapping of row data: must contain keys for all
865 required fields in the schema. Keys which do not correspond
866 to a field in the schema are ignored.
867
868 :type schema: list of :class:`~google.cloud.bigquery.schema.SchemaField`
869 :param schema: The schema of the table destination for the rows
870
871 :rtype: tuple
872 :returns: Tuple whose elements are ordered according to the schema.
873 :raises: ValueError if schema is empty
874 """
875 if len(schema) == 0:
876 raise ValueError(_TABLE_HAS_NO_SCHEMA)
877
878 row = []
879 for field in schema:
880 if field.mode == 'REQUIRED':
881 row.append(mapping[field.name])
882 elif field.mode == 'REPEATED':
883 row.append(mapping.get(field.name, ()))
884 elif field.mode == 'NULLABLE':
885 row.append(mapping.get(field.name))
886 else:
887 raise ValueError(
888 "Unknown field mode: {}".format(field.mode))
889 return tuple(row)
890
891
892 class StreamingBuffer(object):
893 """Information about a table's streaming buffer.
894
895 See https://cloud.google.com/bigquery/streaming-data-into-bigquery.
896
897 :type resource: dict
898 :param resource: streaming buffer representation returned from the API
899 """
900
901 def __init__(self, resource):
902 self.estimated_bytes = int(resource['estimatedBytes'])
903 self.estimated_rows = int(resource['estimatedRows'])
904 # time is in milliseconds since the epoch.
905 self.oldest_entry_time = _datetime_from_microseconds(
906 1000.0 * int(resource['oldestEntryTime']))
907
908
909 class Row(object):
910 """A BigQuery row.
911
912 Values can be accessed by position (index), by key like a dict,
913 or as properties.
914
915 :type values: tuple
916 :param values: the row values
917
918 :type field_to_index: dict
919 :param field_to_index: a mapping from schema field names to indexes
920 """
921
922 # Choose unusual field names to try to avoid conflict with schema fields.
923 __slots__ = ('_xxx_values', '_xxx_field_to_index')
924
925 def __init__(self, values, field_to_index):
926 self._xxx_values = values
927 self._xxx_field_to_index = field_to_index
928
929 def values(self):
930 """Return the values included in this row.
931
932 Returns:
933 Sequence[object]: A sequence of length ``len(row)``.
934 """
935 return copy.deepcopy(self._xxx_values)
936
937 def keys(self):
938 """Return the keys for using a row as a dict.
939
940 Returns:
941 Sequence[str]: The keys corresponding to the columns of a row
942
943 Examples:
944
945 >>> list(Row(('a', 'b'), {'x': 0, 'y': 1}).keys())
946 ['x', 'y']
947 """
948 return six.iterkeys(self._xxx_field_to_index)
949
950 def items(self):
951 """Return items as ``(key, value)`` pairs.
952
953 Returns:
954 Sequence[Tuple[str, object]]:
955 The ``(key, value)`` pairs representing this row.
956
957 Examples:
958
959 >>> list(Row(('a', 'b'), {'x': 0, 'y': 1}).items())
960 [('x', 'a'), ('y', 'b')]
961 """
962 for key, index in six.iteritems(self._xxx_field_to_index):
963 yield (key, copy.deepcopy(self._xxx_values[index]))
964
965 def get(self, key, default=None):
966 """Return a value for key, with a default value if it does not exist.
967
968 Args:
969 key (str): The key of the column to access
970 default (object):
971 The default value to use if the key does not exist. (Defaults
972 to :data:`None`.)
973
974 Returns:
975 object:
976 The value associated with the provided key, or a default value.
977
978 Examples:
979 When the key exists, the value associated with it is returned.
980
981 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('x')
982 'a'
983
984 The default value is ``None`` when the key does not exist.
985
986             >>> print(Row(('a', 'b'), {'x': 0, 'y': 1}).get('z'))
987 None
988
989             The default value can be overridden with the ``default`` parameter.
990
991 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('z', '')
992 ''
993
994 >>> Row(('a', 'b'), {'x': 0, 'y': 1}).get('z', default = '')
995 ''
996 """
997 index = self._xxx_field_to_index.get(key)
998 if index is None:
999 return default
1000 return self._xxx_values[index]
1001
1002 def __getattr__(self, name):
1003 value = self._xxx_field_to_index.get(name)
1004 if value is None:
1005 raise AttributeError('no row field {!r}'.format(name))
1006 return self._xxx_values[value]
1007
1008 def __len__(self):
1009 return len(self._xxx_values)
1010
1011 def __getitem__(self, key):
1012 if isinstance(key, six.string_types):
1013 value = self._xxx_field_to_index.get(key)
1014 if value is None:
1015 raise KeyError('no row field {!r}'.format(key))
1016 key = value
1017 return self._xxx_values[key]
1018
1019 def __eq__(self, other):
1020 if not isinstance(other, Row):
1021 return NotImplemented
1022 return(
1023 self._xxx_values == other._xxx_values and
1024 self._xxx_field_to_index == other._xxx_field_to_index)
1025
1026 def __ne__(self, other):
1027 return not self == other
1028
1029 def __repr__(self):
1030 # sort field dict by value, for determinism
1031 items = sorted(self._xxx_field_to_index.items(),
1032 key=operator.itemgetter(1))
1033 f2i = '{' + ', '.join('%r: %d' % item for item in items) + '}'
1034 return 'Row({}, {})'.format(self._xxx_values, f2i)
1035
1036
1037 class RowIterator(HTTPIterator):
1038 """A class for iterating through HTTP/JSON API row list responses.
1039
1040 Args:
1041 client (google.cloud.bigquery.Client): The API client.
1042 api_request (Callable[google.cloud._http.JSONConnection.api_request]):
1043 The function to use to make API requests.
1044 path (str): The method path to query for the list of items.
1045 page_token (str): A token identifying a page in a result set to start
1046 fetching results from.
1047 max_results (int): The maximum number of results to fetch.
1048 extra_params (dict): Extra query string parameters for the API call.
1049
1050 .. autoattribute:: pages
1051 """
1052
1053 def __init__(self, client, api_request, path, schema, page_token=None,
1054 max_results=None, extra_params=None):
1055 super(RowIterator, self).__init__(
1056 client, api_request, path, item_to_value=_item_to_row,
1057 items_key='rows', page_token=page_token, max_results=max_results,
1058 extra_params=extra_params, page_start=_rows_page_start,
1059 next_token='pageToken')
1060 self._schema = schema
1061 self._field_to_index = _field_to_index_mapping(schema)
1062 self._total_rows = None
1063
1064 @property
1065 def schema(self):
1066 """Schema for the table containing the rows
1067
1068 Returns:
1069 list of :class:`~google.cloud.bigquery.schema.SchemaField`:
1070 fields describing the schema
1071 """
1072 return list(self._schema)
1073
1074 @property
1075 def total_rows(self):
1076 """The total number of rows in the table.
1077
1078 Returns:
1079 int: the row count.
1080 """
1081 return self._total_rows
1082
1083 def to_dataframe(self):
1084 """Create a pandas DataFrame from the query results.
1085
1086 Returns:
1087 A :class:`~pandas.DataFrame` populated with row data and column
1088 headers from the query results. The column headers are derived
1089 from the destination table's schema.
1090
1091 Raises:
1092 ValueError: If the `pandas` library cannot be imported.
1093
1094 """
1095 if pandas is None:
1096 raise ValueError('The pandas library is not installed, please '
1097 'install pandas to use the to_dataframe() '
1098 'function.')
1099
1100 column_headers = [field.name for field in self.schema]
1101 rows = [row.values() for row in iter(self)]
1102
1103 return pandas.DataFrame(rows, columns=column_headers)
1104
[end of bigquery/google/cloud/bigquery/table.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
googleapis/google-cloud-python
|
38d46401c04e7887c1fd9cbef80c9bf6798c2a83
|
BigQuery Python Client v0.28.0: num_rows, schema property of a Table returning None and empty list
Hi,
I am using the BigQuery Python client v0.28.0
Using the bigquery client to list all tables in a dataset with `list_dataset_tables`, I get an iterator over the tables in the dataset. When I try to access the `num_rows` and `schema` properties of each returned Table object, they return `None` and `[]`.
But when I fetch the table with `client.get_table()`, I am able to access both the `num_rows` and `schema` properties.
I am confused, since both methods return a `google.cloud.bigquery.table.Table` object yet behave inconsistently.
```
for table in bqclient.list_dataset_tables(dataset_ref):
print(table.num_rows)
print(table.schema)
```
Returns None and []
```
table = bqclient.get_table(table)
print(table.num_rows)
print(table.schema)
```
Returns the number of rows and list of schemaFields
Thanks
|
Thanks for the report.
It appears as though the [BigQuery tables.list](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/list) method only returns a subset of the fields of a table.
I see a few potentially sane fixes:
1. (most desirable) See if we can get the backend team to expose a full Table resource in list operations (maybe with an optional parameter).
2. (breaking, but maybe our best option) Make `list_dataset_tables` return a list of table references and discard the few extra table fields that the list call returns.
3. (inefficient) Make a call to get the table resource for each item in the returned list.
I'll send a note to my contacts on the BigQuery backend team about doing (1), but the most likely fix will be (2).
I guess there's another option: (4) document that only a subset of fields is exposed in the table response objects from the list request.
I dislike that option the most, as it leaves a lot of room for confusion.
I got confirmation from the backend team that (1) is infeasible. Properties like the schema and total rows of the table take much longer to fetch in the backend. The list API call includes only properties which are fast for the backend to fetch.
I think I'd like to propose another option: (5) introduce a new type containing this subset of properties, maybe `PartialTable` or `TableListItem`, which would include only those properties present in the list API response. That way it will be much clearer from the documentation which properties are included.
I had (naively, I guess) assumed that this issue was the exact reason `TableReference` even exists.
No, I unfortunately hadn't thought that listing would return some but not all properties. `TableReference` was added for the very similar problem where properties such as the destination table on a query job include only the table ID.
I'm thinking we really do need to introduce a third type.
Just sent https://github.com/GoogleCloudPlatform/google-cloud-python/pull/4427 to add `TableListItem`. This issue is also present on datasets and will need a similar change for them.
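To make the proposed split concrete, here is a minimal usage sketch (mine, not from the thread), written against the v0.28-era client surface discussed above and assuming `client.get_table()` accepts the `reference` exposed by the new `TableListItem`; `bqclient` and `dataset_ref` are the placeholder names from the original report:
```python
# Sketch only: list operations yield lightweight items; full metadata
# still costs one tables.get call per table (option 3 above).
for item in bqclient.list_dataset_tables(dataset_ref):
    # Cheap fields returned by tables.list are available directly.
    print(item.table_id, item.table_type, item.friendly_name)

    # Expensive fields (schema, num_rows) require an explicit fetch.
    full_table = bqclient.get_table(item.reference)
    print(full_table.num_rows, len(full_table.schema))
```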
|
2017-11-22T21:00:22Z
|
<patch>
diff --git a/bigquery/google/cloud/bigquery/client.py b/bigquery/google/cloud/bigquery/client.py
--- a/bigquery/google/cloud/bigquery/client.py
+++ b/bigquery/google/cloud/bigquery/client.py
@@ -36,6 +36,7 @@
from google.cloud.bigquery._helpers import _snake_to_camel_case
from google.cloud.bigquery._http import Connection
from google.cloud.bigquery.dataset import Dataset
+from google.cloud.bigquery.dataset import DatasetListItem
from google.cloud.bigquery.dataset import DatasetReference
from google.cloud.bigquery.job import CopyJob
from google.cloud.bigquery.job import ExtractJob
@@ -181,8 +182,10 @@ def list_datasets(self, include_all=False, filter=None, max_results=None,
:param retry: (Optional) How to retry the RPC.
:rtype: :class:`~google.api_core.page_iterator.Iterator`
- :returns: Iterator of :class:`~google.cloud.bigquery.dataset.Dataset`.
- accessible to the current client.
+ :returns:
+ Iterator of
+ :class:`~google.cloud.bigquery.dataset.DatasetListItem`.
+ associated with the client's project.
"""
extra_params = {}
if include_all:
@@ -1275,10 +1278,10 @@ def _item_to_dataset(iterator, resource):
:type resource: dict
:param resource: An item to be converted to a dataset.
- :rtype: :class:`.Dataset`
+ :rtype: :class:`.DatasetListItem`
:returns: The next dataset in the page.
"""
- return Dataset.from_api_repr(resource)
+ return DatasetListItem(resource)
def _item_to_job(iterator, resource):
diff --git a/bigquery/google/cloud/bigquery/dataset.py b/bigquery/google/cloud/bigquery/dataset.py
--- a/bigquery/google/cloud/bigquery/dataset.py
+++ b/bigquery/google/cloud/bigquery/dataset.py
@@ -281,8 +281,7 @@ def full_dataset_id(self):
@property
def reference(self):
- """A :class:`~google.cloud.bigquery.dataset.DatasetReference` pointing to
- this dataset.
+ """A reference to this dataset.
Returns:
google.cloud.bigquery.dataset.DatasetReference:
@@ -420,7 +419,7 @@ def labels(self):
:rtype: dict, {str -> str}
:returns: A dict of the the dataset's labels.
"""
- return self._properties['labels']
+ return self._properties.get('labels', {})
@labels.setter
def labels(self, value):
@@ -546,4 +545,105 @@ def table(self, table_id):
:rtype: :class:`~google.cloud.bigquery.table.TableReference`
:returns: a TableReference for a table in this dataset.
"""
- return TableReference(self, table_id)
+ return TableReference(self.reference, table_id)
+
+
+class DatasetListItem(object):
+ """A read-only dataset resource from a list operation.
+
+ For performance reasons, the BigQuery API only includes some of the
+ dataset properties when listing datasets. Notably,
+ :attr:`~google.cloud.bigquery.dataset.Dataset.access_entries` is missing.
+
+ For a full list of the properties that the BigQuery API returns, see the
+ `REST documentation for datasets.list
+ <https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/list>`_.
+
+
+ Args:
+ resource (dict):
+ A dataset-like resource object from a dataset list response. A
+ ``datasetReference`` property is required.
+
+ Raises:
+ ValueError:
+ If ``datasetReference`` or one of its required members is missing
+ from ``resource``.
+ """
+
+ def __init__(self, resource):
+ if 'datasetReference' not in resource:
+ raise ValueError('resource must contain a datasetReference value')
+ if 'projectId' not in resource['datasetReference']:
+ raise ValueError(
+ "resource['datasetReference'] must contain a projectId value")
+ if 'datasetId' not in resource['datasetReference']:
+ raise ValueError(
+ "resource['datasetReference'] must contain a datasetId value")
+ self._properties = resource
+
+ @property
+ def project(self):
+ """Project bound to the dataset.
+
+ :rtype: str
+ :returns: the project.
+ """
+ return self._properties['datasetReference']['projectId']
+
+ @property
+ def dataset_id(self):
+ """Dataset ID.
+
+ :rtype: str
+ :returns: the dataset ID.
+ """
+ return self._properties['datasetReference']['datasetId']
+
+ @property
+ def full_dataset_id(self):
+ """ID for the dataset resource, in the form "project_id:dataset_id".
+
+ :rtype: str, or ``NoneType``
+ :returns: the ID (None until set from the server).
+ """
+ return self._properties.get('id')
+
+ @property
+ def friendly_name(self):
+ """Title of the dataset.
+
+ :rtype: str, or ``NoneType``
+ :returns: The name as set by the user, or None (the default).
+ """
+ return self._properties.get('friendlyName')
+
+ @property
+ def labels(self):
+ """Labels for the dataset.
+
+ :rtype: dict, {str -> str}
+ :returns: A dict of the the dataset's labels.
+ """
+ return self._properties.get('labels', {})
+
+ @property
+ def reference(self):
+ """A reference to this dataset.
+
+ Returns:
+ google.cloud.bigquery.dataset.DatasetReference:
+ A pointer to this dataset
+ """
+ return DatasetReference(self.project, self.dataset_id)
+
+ def table(self, table_id):
+ """Constructs a TableReference.
+
+ :type table_id: str
+ :param table_id: the ID of the table.
+
+ :rtype: :class:`~google.cloud.bigquery.table.TableReference`
+ :returns: a TableReference for a table in this dataset.
+ """
+ return TableReference(self.reference, table_id)
diff --git a/bigquery/google/cloud/bigquery/table.py b/bigquery/google/cloud/bigquery/table.py
--- a/bigquery/google/cloud/bigquery/table.py
+++ b/bigquery/google/cloud/bigquery/table.py
@@ -49,7 +49,7 @@ def _reference_getter(table):
this table.
Returns:
- google.cloud.bigquery.table.TableReference: pointer to this table
+ google.cloud.bigquery.table.TableReference: pointer to this table.
"""
from google.cloud.bigquery import dataset
@@ -295,7 +295,7 @@ def labels(self):
:rtype: dict, {str -> str}
:returns: A dict of the the table's labels.
"""
- return self._properties['labels']
+ return self._properties.get('labels', {})
@labels.setter
def labels(self, value):
@@ -756,10 +756,28 @@ class TableListItem(object):
Args:
resource (dict):
- A table-like resource object from a table list response.
+ A table-like resource object from a table list response. A
+ ``tableReference`` property is required.
+
+ Raises:
+ ValueError:
+ If ``tableReference`` or one of its required members is missing
+ from ``resource``.
"""
def __init__(self, resource):
+ if 'tableReference' not in resource:
+ raise ValueError('resource must contain a tableReference value')
+ if 'projectId' not in resource['tableReference']:
+ raise ValueError(
+ "resource['tableReference'] must contain a projectId value")
+ if 'datasetId' not in resource['tableReference']:
+ raise ValueError(
+ "resource['tableReference'] must contain a datasetId value")
+ if 'tableId' not in resource['tableReference']:
+ raise ValueError(
+ "resource['tableReference'] must contain a tableId value")
+
self._properties = resource
@property
@@ -769,7 +787,7 @@ def project(self):
Returns:
str: the project ID of the table.
"""
- return self._properties.get('tableReference', {}).get('projectId')
+ return self._properties['tableReference']['projectId']
@property
def dataset_id(self):
@@ -778,7 +796,7 @@ def dataset_id(self):
Returns:
str: the dataset ID of the table.
"""
- return self._properties.get('tableReference', {}).get('datasetId')
+ return self._properties['tableReference']['datasetId']
@property
def table_id(self):
@@ -787,7 +805,7 @@ def table_id(self):
Returns:
str: the table ID.
"""
- return self._properties.get('tableReference', {}).get('tableId')
+ return self._properties['tableReference']['tableId']
reference = property(_reference_getter)
@@ -842,8 +860,10 @@ def partition_expiration(self):
Returns:
int: The time in ms for partition expiration
"""
- return int(
- self._properties.get('timePartitioning', {}).get('expirationMs'))
+ expiration = self._properties.get(
+ 'timePartitioning', {}).get('expirationMs')
+ if expiration is not None:
+ return int(expiration)
@property
def friendly_name(self):
</patch>
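As a quick, hedged illustration of the behaviour the patch introduces (this is a sketch, not part of the patch): the new list-item constructors validate the reference sub-resource up front instead of failing later on attribute access, and missing optional fields fall back to safe defaults. The IDs below are made up for illustration.
```python
from google.cloud.bigquery.table import TableListItem

resource = {
    'tableReference': {
        'projectId': 'my-project',
        'datasetId': 'my_dataset',
        'tableId': 'my_table',
    },
}
item = TableListItem(resource)
print(item.project, item.dataset_id, item.table_id)
print(item.labels)                 # {} -- absent labels default to an empty dict
print(item.partition_expiration)   # None -- no longer raises when unset

try:
    TableListItem({})              # a missing tableReference is rejected early
except ValueError as exc:
    print('rejected:', exc)
```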
|
[]
|
[]
| |||
pandas-dev__pandas-8488
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: update DataFrame.append / concat section to have a warning about the copying
http://stackoverflow.com/questions/25210819/speeding-up-data-import-function-pandas-and-appending-to-dataframe
</issue>
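To make the requested warning concrete, here is a small illustration (mine, not from the issue) of why repeated `DataFrame.append` in a loop is slow: each call returns a brand-new, fully copied DataFrame, so building a frame block by block is quadratic, whereas collecting the pieces and calling `pd.concat` once copies the data a single time. This is written against the pandas API of this era; `DataFrame.append` has since been removed in pandas 2.0.
```python
import pandas as pd

chunks = [pd.DataFrame({'a': range(1000)}) for _ in range(100)]

# Anti-pattern: every append copies all previously accumulated rows.
result = pd.DataFrame()
for chunk in chunks:
    result = result.append(chunk, ignore_index=True)

# Preferred: accumulate the pieces, then concatenate once.
result = pd.concat(chunks, ignore_index=True)
```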
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.7.0 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](https://github.com/pydata/numexpr)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the PyData mailing list / Google group:
222
223 https://groups.google.com/forum/#!forum/pydata
224
[end of README.md]
[start of doc/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # pandas documentation build configuration file, created by
4 #
5 # This file is execfile()d with the current directory set to its containing dir.
6 #
7 # Note that not all possible configuration values are present in this
8 # autogenerated file.
9 #
10 # All configuration values have a default; values that are commented out
11 # serve to show the default.
12
13 import sys
14 import os
15 import re
16 from pandas.compat import u, PY3
17
18 # If extensions (or modules to document with autodoc) are in another directory,
19 # add these directories to sys.path here. If the directory is relative to the
20 # documentation root, use os.path.abspath to make it absolute, like shown here.
21 # sys.path.append(os.path.abspath('.'))
22 sys.path.insert(0, os.path.abspath('../sphinxext'))
23
24 sys.path.extend([
25
26 # numpy standard doc extensions
27 os.path.join(os.path.dirname(__file__),
28 '..', '../..',
29 'sphinxext')
30
31 ])
32
33 # -- General configuration -----------------------------------------------
34
35 # Add any Sphinx extension module names here, as strings. They can be extensions
36 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext.
37
38 extensions = ['sphinx.ext.autodoc',
39 'sphinx.ext.autosummary',
40 'sphinx.ext.doctest',
41 'sphinx.ext.extlinks',
42 'sphinx.ext.todo',
43 'numpydoc', # used to parse numpy-style docstrings for autodoc
44 'ipython_sphinxext.ipython_directive',
45 'ipython_sphinxext.ipython_console_highlighting',
46 'sphinx.ext.intersphinx',
47 'sphinx.ext.todo',
48 'sphinx.ext.coverage',
49 'sphinx.ext.pngmath',
50 'sphinx.ext.ifconfig',
51 'matplotlib.sphinxext.only_directives',
52 'matplotlib.sphinxext.plot_directive',
53 ]
54
55
56
57 with open("index.rst") as f:
58 lines = f.readlines()
59
60 # only include the slow autosummary feature if we're building the API section
61 # of the docs
62
63 # JP: added from sphinxdocs
64 autosummary_generate = False
65
66 if any([re.match("\s*api\s*",l) for l in lines]):
67 autosummary_generate = True
68
69 ds = []
70 for f in os.listdir(os.path.dirname(__file__)):
71 if (not f.endswith(('.rst'))) or (f.startswith('.')) or os.path.basename(f) == 'index.rst':
72 continue
73
74 _f = f.split('.rst')[0]
75 if not any([re.match("\s*%s\s*$" % _f,l) for l in lines]):
76 ds.append(f)
77
78 if ds:
79 print("I'm about to DELETE the following:\n%s\n" % list(sorted(ds)))
80 sys.stdout.write("WARNING: I'd like to delete those to speed up processing (yes/no)? ")
81 if PY3:
82 answer = input()
83 else:
84 answer = raw_input()
85
86 if answer.lower().strip() in ('y','yes'):
87 for f in ds:
88 f = os.path.join(os.path.join(os.path.dirname(__file__),f))
89 f= os.path.abspath(f)
90 try:
91 print("Deleting %s" % f)
92 os.unlink(f)
93 except:
94 print("Error deleting %s" % f)
95 pass
96
97 # Add any paths that contain templates here, relative to this directory.
98 templates_path = ['../_templates']
99
100 # The suffix of source filenames.
101 source_suffix = '.rst'
102
103 # The encoding of source files.
104 source_encoding = 'utf-8'
105
106 # The master toctree document.
107 master_doc = 'index'
108
109 # General information about the project.
110 project = u('pandas')
111 copyright = u('2008-2014, the pandas development team')
112
113 # The version info for the project you're documenting, acts as replacement for
114 # |version| and |release|, also used in various other places throughout the
115 # built documents.
116 #
117 # The short X.Y version.
118 import pandas
119
120 # version = '%s r%s' % (pandas.__version__, svn_version())
121 version = '%s' % (pandas.__version__)
122
123 # The full version, including alpha/beta/rc tags.
124 release = version
125
126 # The language for content autogenerated by Sphinx. Refer to documentation
127 # for a list of supported languages.
128 # language = None
129
130 # There are two options for replacing |today|: either, you set today to some
131 # non-false value, then it is used:
132 # today = ''
133 # Else, today_fmt is used as the format for a strftime call.
134 # today_fmt = '%B %d, %Y'
135
136 # List of documents that shouldn't be included in the build.
137 # unused_docs = []
138
139 # List of directories, relative to source directory, that shouldn't be searched
140 # for source files.
141 exclude_trees = []
142
143 # The reST default role (used for this markup: `text`) to use for all documents.
144 # default_role = None
145
146 # If true, '()' will be appended to :func: etc. cross-reference text.
147 # add_function_parentheses = True
148
149 # If true, the current module name will be prepended to all description
150 # unit titles (such as .. function::).
151 # add_module_names = True
152
153 # If true, sectionauthor and moduleauthor directives will be shown in the
154 # output. They are ignored by default.
155 # show_authors = False
156
157 # The name of the Pygments (syntax highlighting) style to use.
158 pygments_style = 'sphinx'
159
160 # A list of ignored prefixes for module index sorting.
161 # modindex_common_prefix = []
162
163
164 # -- Options for HTML output ---------------------------------------------
165
166 # The theme to use for HTML and HTML Help pages. Major themes that come with
167 # Sphinx are currently 'default' and 'sphinxdoc'.
168 html_theme = 'nature_with_gtoc'
169
170 # The style sheet to use for HTML and HTML Help pages. A file of that name
171 # must exist either in Sphinx' static/ path, or in one of the custom paths
172 # given in html_static_path.
173 # html_style = 'statsmodels.css'
174
175 # Theme options are theme-specific and customize the look and feel of a theme
176 # further. For a list of options available for each theme, see the
177 # documentation.
178 # html_theme_options = {}
179
180 # Add any paths that contain custom themes here, relative to this directory.
181 html_theme_path = ['themes']
182
183 # The name for this set of Sphinx documents. If None, it defaults to
184 # "<project> v<release> documentation".
185 # html_title = None
186
187 # A shorter title for the navigation bar. Default is the same as html_title.
188 # html_short_title = None
189
190 # The name of an image file (relative to this directory) to place at the top
191 # of the sidebar.
192 # html_logo = None
193
194 # The name of an image file (within the static path) to use as favicon of the
195 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
196 # pixels large.
197 # html_favicon = None
198
199 # Add any paths that contain custom static files (such as style sheets) here,
200 # relative to this directory. They are copied after the builtin static files,
201 # so a file named "default.css" will overwrite the builtin "default.css".
202 html_static_path = ['_static']
203
204 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
205 # using the given strftime format.
206 # html_last_updated_fmt = '%b %d, %Y'
207
208 # If true, SmartyPants will be used to convert quotes and dashes to
209 # typographically correct entities.
210 # html_use_smartypants = True
211
212 # Custom sidebar templates, maps document names to template names.
213 # html_sidebars = {}
214
215 # Additional templates that should be rendered to pages, maps page names to
216 # template names.
217 # html_additional_pages = {}
218
219 # If false, no module index is generated.
220 html_use_modindex = True
221
222 # If false, no index is generated.
223 # html_use_index = True
224
225 # If true, the index is split into individual pages for each letter.
226 # html_split_index = False
227
228 # If true, links to the reST sources are added to the pages.
229 # html_show_sourcelink = True
230
231 # If true, an OpenSearch description file will be output, and all pages will
232 # contain a <link> tag referring to it. The value of this option must be the
233 # base URL from which the finished HTML is served.
234 # html_use_opensearch = ''
235
236 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
237 # html_file_suffix = ''
238
239 # Output file base name for HTML help builder.
240 htmlhelp_basename = 'pandas'
241
242
243 # -- Options for LaTeX output --------------------------------------------
244
245 # The paper size ('letter' or 'a4').
246 # latex_paper_size = 'letter'
247
248 # The font size ('10pt', '11pt' or '12pt').
249 # latex_font_size = '10pt'
250
251 # Grouping the document tree into LaTeX files. List of tuples
252 # (source start file, target name, title, author, documentclass [howto/manual]).
253 latex_documents = [
254 ('index', 'pandas.tex',
255 u('pandas: powerful Python data analysis toolkit'),
256 u('Wes McKinney\n\& PyData Development Team'), 'manual'),
257 ]
258
259 # The name of an image file (relative to this directory) to place at the top of
260 # the title page.
261 # latex_logo = None
262
263 # For "manual" documents, if this is true, then toplevel headings are parts,
264 # not chapters.
265 # latex_use_parts = False
266
267 # Additional stuff for the LaTeX preamble.
268 # latex_preamble = ''
269
270 # Documents to append as an appendix to all manuals.
271 # latex_appendices = []
272
273 # If false, no module index is generated.
274 # latex_use_modindex = True
275
276
277 # Example configuration for intersphinx: refer to the Python standard library.
278 intersphinx_mapping = {
279 'statsmodels': ('http://statsmodels.sourceforge.net/devel/', None),
280 'matplotlib': ('http://matplotlib.org/', None),
281 'python': ('http://docs.python.org/', None),
282 'numpy': ('http://docs.scipy.org/doc/numpy', None)
283 }
284 import glob
285 autosummary_generate = glob.glob("*.rst")
286
287 # extlinks alias
288 extlinks = {'issue': ('https://github.com/pydata/pandas/issues/%s',
289 'GH'),
290 'wiki': ('https://github.com/pydata/pandas/wiki/%s',
291 'wiki ')}
292
293 ipython_exec_lines = [
294 'import numpy as np',
295 'import pandas as pd',
296 # This ensures correct rendering on system with console encoding != utf8
297 # (windows). It forces pandas to encode its output reprs using utf8
298 # whereever the docs are built. The docs' target is the browser, not
299 # the console, so this is fine.
300 'pd.options.display.encoding="utf8"'
301 ]
302
303 # remove the docstring of the flags attribute (inherited from numpy ndarray)
304 # because these give doc build errors (see GH issue 5331)
305 def remove_flags_docstring(app, what, name, obj, options, lines):
306 if what == "attribute" and name.endswith(".flags"):
307 del lines[:]
308
309 def setup(app):
310 app.connect("autodoc-process-docstring", remove_flags_docstring)
311
[end of doc/source/conf.py]
[start of pandas/compat/__init__.py]
1 """
2 compat
3 ======
4
5 Cross-compatible functions for Python 2 and 3.
6
7 Key items to import for 2/3 compatible code:
8 * iterators: range(), map(), zip(), filter(), reduce()
9 * lists: lrange(), lmap(), lzip(), lfilter()
10 * unicode: u() [u"" is a syntax error in Python 3.0-3.2]
11 * longs: long (int in Python 3)
12 * callable
13 * iterable method compatibility: iteritems, iterkeys, itervalues
14 * Uses the original method if available, otherwise uses items, keys, values.
15 * types:
16 * text_type: unicode in Python 2, str in Python 3
17  * binary_type: str in Python 2, bytes in Python 3
18 * string_types: basestring in Python 2, str in Python 3
19 * bind_method: binds functions to classes
20 * add_metaclass(metaclass) - class decorator that recreates class with with the
21 given metaclass instead (and avoids intermediary class creation)
22
23 Python 2.6 compatibility:
24 * OrderedDict
25 * Counter
26
27 Other items:
28 * OrderedDefaultDict
29 """
30 # pylint disable=W0611
31 import functools
32 import itertools
33 from distutils.version import LooseVersion
34 from itertools import product
35 import sys
36 import types
37
38 PY3 = (sys.version_info[0] >= 3)
39 PY3_2 = sys.version_info[:2] == (3, 2)
40
41 try:
42 import __builtin__ as builtins
43 # not writeable when instantiated with string, doesn't handle unicode well
44 from cStringIO import StringIO as cStringIO
45 # always writeable
46 from StringIO import StringIO
47 BytesIO = StringIO
48 import cPickle
49 import httplib
50 except ImportError:
51 import builtins
52 from io import StringIO, BytesIO
53 cStringIO = StringIO
54 import pickle as cPickle
55 import http.client as httplib
56
57 from pandas.compat.chainmap import DeepChainMap
58
59
60 if PY3:
61 def isidentifier(s):
62 return s.isidentifier()
63
64 def str_to_bytes(s, encoding=None):
65 return s.encode(encoding or 'ascii')
66
67 def bytes_to_str(b, encoding=None):
68 return b.decode(encoding or 'utf-8')
69
70 # have to explicitly put builtins into the namespace
71 range = range
72 map = map
73 zip = zip
74 filter = filter
75 reduce = functools.reduce
76 long = int
77 unichr = chr
78
79 # list-producing versions of the major Python iterating functions
80 def lrange(*args, **kwargs):
81 return list(range(*args, **kwargs))
82
83 def lzip(*args, **kwargs):
84 return list(zip(*args, **kwargs))
85
86 def lmap(*args, **kwargs):
87 return list(map(*args, **kwargs))
88
89 def lfilter(*args, **kwargs):
90 return list(filter(*args, **kwargs))
91 else:
92 # Python 2
93 import re
94 _name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
95
96 def isidentifier(s, dotted=False):
97 return bool(_name_re.match(s))
98
99 def str_to_bytes(s, encoding='ascii'):
100 return s
101
102 def bytes_to_str(b, encoding='ascii'):
103 return b
104
105 # import iterator versions of these functions
106 range = xrange
107 zip = itertools.izip
108 filter = itertools.ifilter
109 map = itertools.imap
110 reduce = reduce
111 long = long
112 unichr = unichr
113
114 # Python 2-builtin ranges produce lists
115 lrange = builtins.range
116 lzip = builtins.zip
117 lmap = builtins.map
118 lfilter = builtins.filter
119
120
121 def iteritems(obj, **kwargs):
122 """replacement for six's iteritems for Python2/3 compat
123 uses 'iteritems' if available and otherwise uses 'items'.
124
125 Passes kwargs to method.
126 """
127 func = getattr(obj, "iteritems", None)
128 if not func:
129 func = obj.items
130 return func(**kwargs)
131
132
133 def iterkeys(obj, **kwargs):
134 func = getattr(obj, "iterkeys", None)
135 if not func:
136 func = obj.keys
137 return func(**kwargs)
138
139
140 def itervalues(obj, **kwargs):
141 func = getattr(obj, "itervalues", None)
142 if not func:
143 func = obj.values
144 return func(**kwargs)
145
146
147 def bind_method(cls, name, func):
148 """Bind a method to class, python 2 and python 3 compatible.
149
150 Parameters
151 ----------
152
153 cls : type
154 class to receive bound method
155 name : basestring
156 name of method on class instance
157 func : function
158 function to be bound as method
159
160
161 Returns
162 -------
163 None
164 """
165 # only python 2 has bound/unbound method issue
166 if not PY3:
167 setattr(cls, name, types.MethodType(func, None, cls))
168 else:
169 setattr(cls, name, func)
170 # ----------------------------------------------------------------------------
171 # functions largely based / taken from the six module
172
173 # Much of the code in this module comes from Benjamin Peterson's six library.
174 # The license for this library can be found in LICENSES/SIX and the code can be
175 # found at https://bitbucket.org/gutworth/six
176
177 if PY3:
178 string_types = str,
179 integer_types = int,
180 class_types = type,
181 text_type = str
182 binary_type = bytes
183
184 def u(s):
185 return s
186
187 def u_safe(s):
188 return s
189 else:
190 string_types = basestring,
191 integer_types = (int, long)
192 class_types = (type, types.ClassType)
193 text_type = unicode
194 binary_type = str
195
196 def u(s):
197 return unicode(s, "unicode_escape")
198
199 def u_safe(s):
200 try:
201 return unicode(s, "unicode_escape")
202 except:
203 return s
204
205
206 string_and_binary_types = string_types + (binary_type,)
207
208
209 try:
210 # callable reintroduced in later versions of Python
211 callable = callable
212 except NameError:
213 def callable(obj):
214 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
215
216
217 def add_metaclass(metaclass):
218 """Class decorator for creating a class with a metaclass."""
219 def wrapper(cls):
220 orig_vars = cls.__dict__.copy()
221 orig_vars.pop('__dict__', None)
222 orig_vars.pop('__weakref__', None)
223 for slots_var in orig_vars.get('__slots__', ()):
224 orig_vars.pop(slots_var)
225 return metaclass(cls.__name__, cls.__bases__, orig_vars)
226 return wrapper
227
228
229 # ----------------------------------------------------------------------------
230 # Python 2.6 compatibility shims
231 #
232
233 # OrderedDict Shim from Raymond Hettinger, python core dev
234 # http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/
235 # here to support versions before 2.6
236 if not PY3:
237 # don't need this except in 2.6
238 try:
239 from thread import get_ident as _get_ident
240 except ImportError:
241 from dummy_thread import get_ident as _get_ident
242
243 try:
244 from _abcoll import KeysView, ValuesView, ItemsView
245 except ImportError:
246 pass
247
248
249 class _OrderedDict(dict):
250
251 """Dictionary that remembers insertion order"""
252 # An inherited dict maps keys to values.
253 # The inherited dict provides __getitem__, __len__, __contains__, and get.
254 # The remaining methods are order-aware.
255 # Big-O running times for all methods are the same as for regular
256 # dictionaries.
257
258 # The internal self.__map dictionary maps keys to links in a doubly linked
259 # list. The circular doubly linked list starts and ends with a sentinel
260 # element. The sentinel element never gets deleted (this simplifies the
261 # algorithm). Each link is stored as a list of length three: [PREV, NEXT,
262 # KEY].
263
264 def __init__(self, *args, **kwds):
265 """Initialize an ordered dictionary. Signature is the same as for
266 regular dictionaries, but keyword arguments are not recommended
267 because their insertion order is arbitrary.
268 """
269 if len(args) > 1:
270 raise TypeError('expected at most 1 arguments, got %d' % len(args))
271 try:
272 self.__root
273 except AttributeError:
274 self.__root = root = [] # sentinel node
275 root[:] = [root, root, None]
276 self.__map = {}
277 self.__update(*args, **kwds)
278
279 def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
280 """od.__setitem__(i, y) <==> od[i]=y"""
281 # Setting a new item creates a new link which goes at the end of the
282 # linked list, and the inherited dictionary is updated with the new
283 # key/value pair.
284 if key not in self:
285 root = self.__root
286 last = root[0]
287 last[1] = root[0] = self.__map[key] = [last, root, key]
288 dict_setitem(self, key, value)
289
290 def __delitem__(self, key, dict_delitem=dict.__delitem__):
291 """od.__delitem__(y) <==> del od[y]"""
292 # Deleting an existing item uses self.__map to find the link which is
293 # then removed by updating the links in the predecessor and successor
294 # nodes.
295 dict_delitem(self, key)
296 link_prev, link_next, key = self.__map.pop(key)
297 link_prev[1] = link_next
298 link_next[0] = link_prev
299
300 def __iter__(self):
301 """od.__iter__() <==> iter(od)"""
302 root = self.__root
303 curr = root[1]
304 while curr is not root:
305 yield curr[2]
306 curr = curr[1]
307
308 def __reversed__(self):
309 """od.__reversed__() <==> reversed(od)"""
310 root = self.__root
311 curr = root[0]
312 while curr is not root:
313 yield curr[2]
314 curr = curr[0]
315
316 def clear(self):
317 """od.clear() -> None. Remove all items from od."""
318 try:
319 for node in itervalues(self.__map):
320 del node[:]
321 root = self.__root
322 root[:] = [root, root, None]
323 self.__map.clear()
324 except AttributeError:
325 pass
326 dict.clear(self)
327
328 def popitem(self, last=True):
329 """od.popitem() -> (k, v), return and remove a (key, value) pair.
330
331 Pairs are returned in LIFO order if last is true or FIFO order if
332 false.
333 """
334 if not self:
335 raise KeyError('dictionary is empty')
336 root = self.__root
337 if last:
338 link = root[0]
339 link_prev = link[0]
340 link_prev[1] = root
341 root[0] = link_prev
342 else:
343 link = root[1]
344 link_next = link[1]
345 root[1] = link_next
346 link_next[0] = root
347 key = link[2]
348 del self.__map[key]
349 value = dict.pop(self, key)
350 return key, value
351
352 # -- the following methods do not depend on the internal structure --
353
354 def keys(self):
355 """od.keys() -> list of keys in od"""
356 return list(self)
357
358 def values(self):
359 """od.values() -> list of values in od"""
360 return [self[key] for key in self]
361
362 def items(self):
363 """od.items() -> list of (key, value) pairs in od"""
364 return [(key, self[key]) for key in self]
365
366 def iterkeys(self):
367 """od.iterkeys() -> an iterator over the keys in od"""
368 return iter(self)
369
370 def itervalues(self):
371 """od.itervalues -> an iterator over the values in od"""
372 for k in self:
373 yield self[k]
374
375 def iteritems(self):
376 """od.iteritems -> an iterator over the (key, value) items in od"""
377 for k in self:
378 yield (k, self[k])
379
380 def update(*args, **kwds):
381 """od.update(E, **F) -> None. Update od from dict/iterable E and F.
382
383 If E is a dict instance, does: for k in E: od[k] = E[k]
384 If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
385 Or if E is an iterable of items, does:for k, v in E: od[k] = v
386 In either case, this is followed by: for k, v in F.items(): od[k] = v
387 """
388 if len(args) > 2:
389 raise TypeError('update() takes at most 2 positional '
390 'arguments (%d given)' % (len(args),))
391 elif not args:
392 raise TypeError('update() takes at least 1 argument (0 given)')
393 self = args[0]
394 # Make progressively weaker assumptions about "other"
395 other = ()
396 if len(args) == 2:
397 other = args[1]
398 if isinstance(other, dict):
399 for key in other:
400 self[key] = other[key]
401 elif hasattr(other, 'keys'):
402 for key in other.keys():
403 self[key] = other[key]
404 else:
405 for key, value in other:
406 self[key] = value
407 for key, value in kwds.items():
408 self[key] = value
409 # let subclasses override update without breaking __init__
410 __update = update
411
412 __marker = object()
413
414 def pop(self, key, default=__marker):
415 """od.pop(k[,d]) -> v, remove specified key and return the
416 corresponding value. If key is not found, d is returned if given,
417 otherwise KeyError is raised.
418 """
419 if key in self:
420 result = self[key]
421 del self[key]
422 return result
423 if default is self.__marker:
424 raise KeyError(key)
425 return default
426
427 def setdefault(self, key, default=None):
428 """od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od
429 """
430 if key in self:
431 return self[key]
432 self[key] = default
433 return default
434
435 def __repr__(self, _repr_running={}):
436 """od.__repr__() <==> repr(od)"""
437 call_key = id(self), _get_ident()
438 if call_key in _repr_running:
439 return '...'
440 _repr_running[call_key] = 1
441 try:
442 if not self:
443 return '%s()' % (self.__class__.__name__,)
444 return '%s(%r)' % (self.__class__.__name__, list(self.items()))
445 finally:
446 del _repr_running[call_key]
447
448 def __reduce__(self):
449 """Return state information for pickling"""
450 items = [[k, self[k]] for k in self]
451 inst_dict = vars(self).copy()
452 for k in vars(OrderedDict()):
453 inst_dict.pop(k, None)
454 if inst_dict:
455 return (self.__class__, (items,), inst_dict)
456 return self.__class__, (items,)
457
458 def copy(self):
459 """od.copy() -> a shallow copy of od"""
460 return self.__class__(self)
461
462 @classmethod
463 def fromkeys(cls, iterable, value=None):
464 """OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and
465 values equal to v (which defaults to None).
466 """
467 d = cls()
468 for key in iterable:
469 d[key] = value
470 return d
471
472 def __eq__(self, other):
473 """od.__eq__(y) <==> od==y. Comparison to another OD is
474 order-sensitive while comparison to a regular mapping is
475 order-insensitive.
476 """
477 if isinstance(other, OrderedDict):
478 return (len(self) == len(other) and
479 list(self.items()) == list(other.items()))
480 return dict.__eq__(self, other)
481
482 def __ne__(self, other):
483 return not self == other
484
485 # -- the following methods are only used in Python 2.7 --
486
487 def viewkeys(self):
488 """od.viewkeys() -> a set-like object providing a view on od's keys"""
489 return KeysView(self)
490
491 def viewvalues(self):
492 """od.viewvalues() -> an object providing a view on od's values"""
493 return ValuesView(self)
494
495 def viewitems(self):
496 """od.viewitems() -> a set-like object providing a view on od's items
497 """
498 return ItemsView(self)
499
500
501 # {{{ http://code.activestate.com/recipes/576611/ (r11)
502
503 try:
504 from operator import itemgetter
505 from heapq import nlargest
506 except ImportError:
507 pass
508
509
510 class _Counter(dict):
511
512 """Dict subclass for counting hashable objects. Sometimes called a bag
513 or multiset. Elements are stored as dictionary keys and their counts
514 are stored as dictionary values.
515
516 >>> Counter('zyzygy')
517 Counter({'y': 3, 'z': 2, 'g': 1})
518
519 """
520
521 def __init__(self, iterable=None, **kwds):
522 """Create a new, empty Counter object. And if given, count elements
523 from an input iterable. Or, initialize the count from another mapping
524 of elements to their counts.
525
526 >>> c = Counter() # a new, empty counter
527 >>> c = Counter('gallahad') # a new counter from an iterable
528 >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
529 >>> c = Counter(a=4, b=2) # a new counter from keyword args
530
531 """
532 self.update(iterable, **kwds)
533
534 def __missing__(self, key):
535 return 0
536
537 def most_common(self, n=None):
538 """List the n most common elements and their counts from the most
539 common to the least. If n is None, then list all element counts.
540
541 >>> Counter('abracadabra').most_common(3)
542 [('a', 5), ('r', 2), ('b', 2)]
543
544 """
545 if n is None:
546 return sorted(iteritems(self), key=itemgetter(1), reverse=True)
547 return nlargest(n, iteritems(self), key=itemgetter(1))
548
549 def elements(self):
550 """Iterator over elements repeating each as many times as its count.
551
552 >>> c = Counter('ABCABC')
553 >>> sorted(c.elements())
554 ['A', 'A', 'B', 'B', 'C', 'C']
555
556 If an element's count has been set to zero or is a negative number,
557 elements() will ignore it.
558
559 """
560 for elem, count in iteritems(self):
561 for _ in range(count):
562 yield elem
563
564 # Override dict methods where the meaning changes for Counter objects.
565
566 @classmethod
567 def fromkeys(cls, iterable, v=None):
568 raise NotImplementedError(
569 'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
570
571 def update(self, iterable=None, **kwds):
572 """Like dict.update() but add counts instead of replacing them.
573
574 Source can be an iterable, a dictionary, or another Counter instance.
575
576 >>> c = Counter('which')
577 >>> c.update('witch') # add elements from another iterable
578 >>> d = Counter('watch')
579 >>> c.update(d) # add elements from another counter
580 >>> c['h'] # four 'h' in which, witch, and watch
581 4
582
583 """
584 if iterable is not None:
585 if hasattr(iterable, 'iteritems'):
586 if self:
587 self_get = self.get
588 for elem, count in iteritems(iterable):
589 self[elem] = self_get(elem, 0) + count
590 else:
591 dict.update(
592 self, iterable) # fast path when counter is empty
593 else:
594 self_get = self.get
595 for elem in iterable:
596 self[elem] = self_get(elem, 0) + 1
597 if kwds:
598 self.update(kwds)
599
600 def copy(self):
601 """Like dict.copy() but returns a Counter instance instead of a dict.
602 """
603 return Counter(self)
604
605 def __delitem__(self, elem):
606 """Like dict.__delitem__() but does not raise KeyError for missing
607 values.
608 """
609 if elem in self:
610 dict.__delitem__(self, elem)
611
612 def __repr__(self):
613 if not self:
614 return '%s()' % self.__class__.__name__
615 items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
616 return '%s({%s})' % (self.__class__.__name__, items)
617
618 # Multiset-style mathematical operations discussed in:
619 # Knuth TAOCP Volume II section 4.6.3 exercise 19
620 # and at http://en.wikipedia.org/wiki/Multiset
621 #
622 # Outputs guaranteed to only include positive counts.
623 #
624 # To strip negative and zero counts, add-in an empty counter:
625 # c += Counter()
626
627 def __add__(self, other):
628 """Add counts from two counters.
629
630 >>> Counter('abbb') + Counter('bcc')
631 Counter({'b': 4, 'c': 2, 'a': 1})
632
633 """
634 if not isinstance(other, Counter):
635 return NotImplemented
636 result = Counter()
637 for elem in set(self) | set(other):
638 newcount = self[elem] + other[elem]
639 if newcount > 0:
640 result[elem] = newcount
641 return result
642
643 def __sub__(self, other):
644 """Subtract count, but keep only results with positive counts.
645
646 >>> Counter('abbbc') - Counter('bccd')
647 Counter({'b': 2, 'a': 1})
648
649 """
650 if not isinstance(other, Counter):
651 return NotImplemented
652 result = Counter()
653 for elem in set(self) | set(other):
654 newcount = self[elem] - other[elem]
655 if newcount > 0:
656 result[elem] = newcount
657 return result
658
659 def __or__(self, other):
660 """Union is the maximum of value in either of the input counters.
661
662 >>> Counter('abbb') | Counter('bcc')
663 Counter({'b': 3, 'c': 2, 'a': 1})
664
665 """
666 if not isinstance(other, Counter):
667 return NotImplemented
668 _max = max
669 result = Counter()
670 for elem in set(self) | set(other):
671 newcount = _max(self[elem], other[elem])
672 if newcount > 0:
673 result[elem] = newcount
674 return result
675
676 def __and__(self, other):
677 """Intersection is the minimum of corresponding counts.
678
679 >>> Counter('abbb') & Counter('bcc')
680 Counter({'b': 1})
681
682 """
683 if not isinstance(other, Counter):
684 return NotImplemented
685 _min = min
686 result = Counter()
687 if len(self) < len(other):
688 self, other = other, self
689 for elem in filter(self.__contains__, other):
690 newcount = _min(self[elem], other[elem])
691 if newcount > 0:
692 result[elem] = newcount
693 return result
694
695 if sys.version_info[:2] < (2, 7):
696 OrderedDict = _OrderedDict
697 Counter = _Counter
698 else:
699 from collections import OrderedDict, Counter
700
701 if PY3:
702 def raise_with_traceback(exc, traceback=Ellipsis):
703 if traceback == Ellipsis:
704 _, _, traceback = sys.exc_info()
705 raise exc.with_traceback(traceback)
706 else:
707 # this version of raise is a syntax error in Python 3
708 exec("""
709 def raise_with_traceback(exc, traceback=Ellipsis):
710 if traceback == Ellipsis:
711 _, _, traceback = sys.exc_info()
712 raise exc, None, traceback
713 """)
714
715 raise_with_traceback.__doc__ = """Raise exception with existing traceback.
716 If traceback is not passed, uses sys.exc_info() to get traceback."""
717
718
719 # http://stackoverflow.com/questions/4126348
720 # Thanks to @martineau at SO
721
722 from dateutil import parser as _date_parser
723 import dateutil
724 if LooseVersion(dateutil.__version__) < '2.0':
725 @functools.wraps(_date_parser.parse)
726 def parse_date(timestr, *args, **kwargs):
727 timestr = bytes(timestr)
728 return _date_parser.parse(timestr, *args, **kwargs)
729 else:
730 parse_date = _date_parser.parse
731
732
733 class OrderedDefaultdict(OrderedDict):
734
735 def __init__(self, *args, **kwargs):
736 newdefault = None
737 newargs = ()
738 if args:
739 newdefault = args[0]
740 if not (newdefault is None or callable(newdefault)):
741 raise TypeError('first argument must be callable or None')
742 newargs = args[1:]
743 self.default_factory = newdefault
744 super(self.__class__, self).__init__(*newargs, **kwargs)
745
746 def __missing__(self, key):
747 if self.default_factory is None:
748 raise KeyError(key)
749 self[key] = value = self.default_factory()
750 return value
751
752 def __reduce__(self): # optional, for pickle support
753 args = self.default_factory if self.default_factory else tuple()
754 return type(self), args, None, None, list(self.items())
755
[end of pandas/compat/__init__.py]
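A minimal usage sketch for two of the helpers defined above, assuming the module is importable as `pandas.compat`; the `Tagging` metaclass and the counting loop are made up for illustration:

```python
from pandas.compat import add_metaclass, OrderedDefaultdict

class Tagging(type):
    """Toy metaclass that stamps a lowercase 'tag' attribute onto each class."""
    def __new__(mcls, name, bases, namespace):
        namespace.setdefault('tag', name.lower())
        return super(Tagging, mcls).__new__(mcls, name, bases, namespace)

@add_metaclass(Tagging)      # same spelling works on Python 2 and Python 3
class Record(object):
    pass

print(Record.tag)            # -> 'record'

# OrderedDefaultdict combines defaultdict behaviour with insertion order.
counts = OrderedDefaultdict(int)
for ch in "aabbc":
    counts[ch] += 1
print(list(counts.items()))  # -> [('a', 2), ('b', 2), ('c', 1)]
```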
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
795e05944c80e9dcdb49d7aa4550cf85a1f2a36d
|
DOC: update DataFrame.append / concat section to have a warning about the copying
http://stackoverflow.com/questions/25210819/speeding-up-data-import-function-pandas-and-appending-to-dataframe
DOC: update DataFrame.append / concat section to have a warning about the copying
http://stackoverflow.com/questions/25210819/speeding-up-data-import-function-pandas-and-appending-to-dataframe
|
2014-10-06T16:00:32Z
|
<patch>
diff --git a/doc/source/merging.rst b/doc/source/merging.rst
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -51,7 +51,6 @@ takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of "what to do with the other axes":
::
-
concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False)
@@ -100,6 +99,18 @@ means that we can now do stuff like select out each chunk by key:
It's not a stretch to see how this can be very useful. More detail on this
functionality below.
+.. note::
+ It is worth noting however, that ``concat`` (and therefore ``append``) makes
+ a full copy of the data, and that constantly reusing this function can
+ create a significant performance hit. If you need to use the operation over
+ several datasets, use a list comprehension.
+
+::
+
+ frames = [ process_your_file(f) for f in files ]
+ result = pd.concat(frames)
+
+
Set logic on the other axes
~~~~~~~~~~~~~~~~~~~~~~~~~~~
</patch>
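A runnable sketch of the pattern the doc patch above recommends; `process_chunk` is a made-up stand-in for reading and processing one file:

```python
import pandas as pd

def process_chunk(i):
    # stand-in for "process_your_file" in the patch above; purely illustrative
    return pd.DataFrame({"chunk": [i] * 3, "value": range(3)})

# Discouraged: every concat/append copies all rows gathered so far, so the
# total work grows quadratically with the number of chunks.
result = process_chunk(0)
for i in range(1, 4):
    result = pd.concat([result, process_chunk(i)], ignore_index=True)

# Recommended by the patch: build the pieces first, concatenate once.
frames = [process_chunk(i) for i in range(4)]
result = pd.concat(frames, ignore_index=True)
print(len(result))  # 12 rows either way; only the amount of copying differs
```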
|
[]
|
[]
| ||||
numpy__numpy-12251
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
__array_function__ errors should more clearly identify the non-implemented function
xref #12028
Here's what you currently see if `__array_function__` returns `NotImplemented`:
```
In [1]: import numpy as np
In [2]: class MyArray:
...: def __array_function__(*args, **kwargs):
...: return NotImplemented
...:
In [3]: np.sum(MyArray())
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-c8a80bb1d37e> in <module>()
----> 1 np.sum(MyArray())
~/dev/numpy/numpy/core/overrides.py in public_api(*args, **kwargs)
149 relevant_args = dispatcher(*args, **kwargs)
150 return array_function_implementation_or_override(
--> 151 implementation, public_api, relevant_args, args, kwargs)
152
153 # TODO: remove this when we drop Python 2 support (functools.wraps
~/dev/numpy/numpy/core/overrides.py in array_function_implementation_or_override(implementation, public_api, relevant_args, args, kwargs)
108 raise TypeError('no implementation found for {} on types that implement '
109 '__array_function__: {}'
--> 110 .format(public_api, list(map(type, overloaded_args))))
111
112
TypeError: no implementation found for <function sum at 0x10e070bf8> on types that implement __array_function__: [<class '__main__.MyArray'>]
```
This error message should look something like this instead: `TypeError: no implementation found for 'numpy.sum' on types that implement __array_function__: [<class '__main__.MyArray'>]`
I think we will need to add a `name` parameter to `array_function_override` to do this properly. The best we could hope for with introspection is to use `__module__` and `__name__` to come up with something like `numpy.core.fromnumeric.sum`. This would be better than what we currently have and could be a reasonable default, but we really don't want people reaching directly into internal modules like `fromnumeric`.
</issue>
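A minimal sketch of the `__module__`/`__name__` fallback suggested in the issue above; `_public_api_name` is a hypothetical helper, not the actual fix:

```python
def _public_api_name(public_api):
    # Prefer a dotted name over the default "<function sum at 0x...>" repr.
    try:
        return "{}.{}".format(public_api.__module__, public_api.__name__)
    except AttributeError:
        return repr(public_api)

# The TypeError in overrides.py could then be built along these lines:
# raise TypeError(
#     "no implementation found for '{}' on types that implement "
#     "__array_function__: {}".format(
#         _public_api_name(public_api), list(map(type, overloaded_args))))
```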
<code>
[start of README.md]
1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60">
2
3 [](
4 https://travis-ci.org/numpy/numpy)
5 [](
6 https://ci.appveyor.com/project/charris/numpy)
7 [](
8 https://dev.azure.com/numpy/numpy/_apis/build/status/azure-pipeline%20numpy.numpy?branchName=master)
9 [](
10 https://codecov.io/gh/numpy/numpy)
11
12 NumPy is the fundamental package needed for scientific computing with Python.
13
14 - **Website (including documentation):** https://www.numpy.org
15 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion
16 - **Source:** https://github.com/numpy/numpy
17 - **Bug reports:** https://github.com/numpy/numpy/issues
18
19 It provides:
20
21 - a powerful N-dimensional array object
22 - sophisticated (broadcasting) functions
23 - tools for integrating C/C++ and Fortran code
24 - useful linear algebra, Fourier transform, and random number capabilities
25
26 Testing:
27
28 - NumPy versions ≥ 1.15 require `pytest`
29 - NumPy versions < 1.15 require `nose`
30
31 Tests can then be run after installation with:
32
33 python -c 'import numpy; numpy.test()'
34
35 [](https://numfocus.org)
36
[end of README.md]
[start of numpy/core/overrides.py]
1 """Preliminary implementation of NEP-18
2
3 TODO: rewrite this in C for performance.
4 """
5 import collections
6 import functools
7
8 from numpy.core._multiarray_umath import ndarray
9 from numpy.compat._inspect import getargspec
10
11
12 _NDARRAY_ARRAY_FUNCTION = ndarray.__array_function__
13
14
15 def get_overloaded_types_and_args(relevant_args):
16 """Returns a list of arguments on which to call __array_function__.
17
18 Parameters
19 ----------
20 relevant_args : iterable of array-like
21 Iterable of array-like arguments to check for __array_function__
22 methods.
23
24 Returns
25 -------
26 overloaded_types : collection of types
27 Types of arguments from relevant_args with __array_function__ methods.
28 overloaded_args : list
29 Arguments from relevant_args on which to call __array_function__
30 methods, in the order in which they should be called.
31 """
32 # Runtime is O(num_arguments * num_unique_types)
33 overloaded_types = []
34 overloaded_args = []
35 for arg in relevant_args:
36 arg_type = type(arg)
37 # We only collect arguments if they have a unique type, which ensures
38 # reasonable performance even with a long list of possibly overloaded
39 # arguments.
40 if (arg_type not in overloaded_types and
41 hasattr(arg_type, '__array_function__')):
42
43 overloaded_types.append(arg_type)
44
45 # By default, insert this argument at the end, but if it is
46 # subclass of another argument, insert it before that argument.
47 # This ensures "subclasses before superclasses".
48 index = len(overloaded_args)
49 for i, old_arg in enumerate(overloaded_args):
50 if issubclass(arg_type, type(old_arg)):
51 index = i
52 break
53 overloaded_args.insert(index, arg)
54
55 # Special handling for ndarray.__array_function__
56 overloaded_args = [
57 arg for arg in overloaded_args
58 if type(arg).__array_function__ is not _NDARRAY_ARRAY_FUNCTION
59 ]
60
61 return overloaded_types, overloaded_args
62
63
64 def array_function_implementation_or_override(
65 implementation, public_api, relevant_args, args, kwargs):
66 """Implement a function with checks for __array_function__ overrides.
67
68 Arguments
69 ---------
70 implementation : function
71 Function that implements the operation on NumPy array without
72 overrides when called like ``implementation(*args, **kwargs)``.
73 public_api : function
74 Function exposed by NumPy's public API originally called like
75 ``public_api(*args, **kwargs)`` on which arguments are now being
76 checked.
77 relevant_args : iterable
78 Iterable of arguments to check for __array_function__ methods.
79 args : tuple
80 Arbitrary positional arguments originally passed into ``public_api``.
81 kwargs : tuple
82 Arbitrary keyword arguments originally passed into ``public_api``.
83
84 Returns
85 -------
86 Result from calling `implementation()` or an `__array_function__`
87 method, as appropriate.
88
89 Raises
90 ------
91 TypeError : if no implementation is found.
92 """
93 # Check for __array_function__ methods.
94 types, overloaded_args = get_overloaded_types_and_args(relevant_args)
95 if not overloaded_args:
96 return implementation(*args, **kwargs)
97
98 # Call overrides
99 for overloaded_arg in overloaded_args:
100 # Use `public_api` instead of `implementation` so __array_function__
101 # implementations can do equality/identity comparisons.
102 result = overloaded_arg.__array_function__(
103 public_api, types, args, kwargs)
104
105 if result is not NotImplemented:
106 return result
107
108 raise TypeError('no implementation found for {} on types that implement '
109 '__array_function__: {}'
110 .format(public_api, list(map(type, overloaded_args))))
111
112
113 ArgSpec = collections.namedtuple('ArgSpec', 'args varargs keywords defaults')
114
115
116 def verify_matching_signatures(implementation, dispatcher):
117 """Verify that a dispatcher function has the right signature."""
118 implementation_spec = ArgSpec(*getargspec(implementation))
119 dispatcher_spec = ArgSpec(*getargspec(dispatcher))
120
121 if (implementation_spec.args != dispatcher_spec.args or
122 implementation_spec.varargs != dispatcher_spec.varargs or
123 implementation_spec.keywords != dispatcher_spec.keywords or
124 (bool(implementation_spec.defaults) !=
125 bool(dispatcher_spec.defaults)) or
126 (implementation_spec.defaults is not None and
127 len(implementation_spec.defaults) !=
128 len(dispatcher_spec.defaults))):
129 raise RuntimeError('implementation and dispatcher for %s have '
130 'different function signatures' % implementation)
131
132 if implementation_spec.defaults is not None:
133 if dispatcher_spec.defaults != (None,) * len(dispatcher_spec.defaults):
134 raise RuntimeError('dispatcher functions can only use None for '
135 'default argument values')
136
137
138 def array_function_dispatch(dispatcher, verify=True):
139 """Decorator for adding dispatch with the __array_function__ protocol."""
140 def decorator(implementation):
141 # TODO: only do this check when the appropriate flag is enabled or for
142 # a dev install. We want this check for testing but don't want to
143 # slow down all numpy imports.
144 if verify:
145 verify_matching_signatures(implementation, dispatcher)
146
147 @functools.wraps(implementation)
148 def public_api(*args, **kwargs):
149 relevant_args = dispatcher(*args, **kwargs)
150 return array_function_implementation_or_override(
151 implementation, public_api, relevant_args, args, kwargs)
152 return public_api
153
154 return decorator
155
[end of numpy/core/overrides.py]
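For orientation, a small sketch of how the decorator defined in the listing above is applied and how a duck array opts into the protocol; `_sum_dispatcher`, `my_sum`, and `MyDuckArray` are illustrative names, the import path follows the internal module shown here, and the behaviour assumes the protocol is enabled as in this development version:

```python
import numpy as np
from numpy.core.overrides import array_function_dispatch

def _sum_dispatcher(a, axis=None, dtype=None, out=None):
    # Only the arguments that may carry __array_function__ are returned.
    return (a, out)

@array_function_dispatch(_sum_dispatcher)
def my_sum(a, axis=None, dtype=None, out=None):
    # Plain implementation, used when no argument overrides the protocol.
    return np.add.reduce(np.asarray(a), axis=axis, dtype=dtype, out=out)

class MyDuckArray(object):
    def __array_function__(self, func, types, args, kwargs):
        if func is my_sum:       # identity comparison against the public API
            return "duck sum"
        return NotImplemented    # would trigger the TypeError from the issue

print(my_sum([1, 2, 3]))         # -> 6, via the plain implementation
print(my_sum(MyDuckArray()))     # -> 'duck sum', via __array_function__
```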
[start of numpy/doc/subclassing.py]
1 """=============================
2 Subclassing ndarray in python
3 =============================
4
5 Introduction
6 ------------
7
8 Subclassing ndarray is relatively simple, but it has some complications
9 compared to other Python objects. On this page we explain the machinery
10 that allows you to subclass ndarray, and the implications for
11 implementing a subclass.
12
13 ndarrays and object creation
14 ============================
15
16 Subclassing ndarray is complicated by the fact that new instances of
17 ndarray classes can come about in three different ways. These are:
18
19 #. Explicit constructor call - as in ``MySubClass(params)``. This is
20 the usual route to Python instance creation.
21 #. View casting - casting an existing ndarray as a given subclass
22 #. New from template - creating a new instance from a template
23 instance. Examples include returning slices from a subclassed array,
24 creating return types from ufuncs, and copying arrays. See
25 :ref:`new-from-template` for more details
26
27 The last two are characteristics of ndarrays - in order to support
28 things like array slicing. The complications of subclassing ndarray are
29 due to the mechanisms numpy has to support these latter two routes of
30 instance creation.
31
32 .. _view-casting:
33
34 View casting
35 ------------
36
37 *View casting* is the standard ndarray mechanism by which you take an
38 ndarray of any subclass, and return a view of the array as another
39 (specified) subclass:
40
41 >>> import numpy as np
42 >>> # create a completely useless ndarray subclass
43 >>> class C(np.ndarray): pass
44 >>> # create a standard ndarray
45 >>> arr = np.zeros((3,))
46 >>> # take a view of it, as our useless subclass
47 >>> c_arr = arr.view(C)
48 >>> type(c_arr)
49 <class 'C'>
50
51 .. _new-from-template:
52
53 Creating new from template
54 --------------------------
55
56 New instances of an ndarray subclass can also come about by a very
57 similar mechanism to :ref:`view-casting`, when numpy finds it needs to
58 create a new instance from a template instance. The most obvious place
59 this has to happen is when you are taking slices of subclassed arrays.
60 For example:
61
62 >>> v = c_arr[1:]
63 >>> type(v) # the view is of type 'C'
64 <class 'C'>
65 >>> v is c_arr # but it's a new instance
66 False
67
68 The slice is a *view* onto the original ``c_arr`` data. So, when we
69 take a view from the ndarray, we return a new ndarray, of the same
70 class, that points to the data in the original.
71
72 There are other points in the use of ndarrays where we need such views,
73 such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
74 (see also :ref:`array-wrap`), and reducing methods (like
75 ``c_arr.mean()``).
76
77 Relationship of view casting and new-from-template
78 --------------------------------------------------
79
80 These paths both use the same machinery. We make the distinction here,
81 because they result in different input to your methods. Specifically,
82 :ref:`view-casting` means you have created a new instance of your array
83 type from any potential subclass of ndarray. :ref:`new-from-template`
84 means you have created a new instance of your class from a pre-existing
85 instance, allowing you - for example - to copy across attributes that
86 are particular to your subclass.
87
88 Implications for subclassing
89 ----------------------------
90
91 If we subclass ndarray, we need to deal not only with explicit
92 construction of our array type, but also :ref:`view-casting` or
93 :ref:`new-from-template`. NumPy has the machinery to do this, and it is this
94 machinery that makes subclassing slightly non-standard.
95
96 There are two aspects to the machinery that ndarray uses to support
97 views and new-from-template in subclasses.
98
99 The first is the use of the ``ndarray.__new__`` method for the main work
100 of object initialization, rather than the more usual ``__init__``
101 method. The second is the use of the ``__array_finalize__`` method to
102 allow subclasses to clean up after the creation of views and new
103 instances from templates.
104
105 A brief Python primer on ``__new__`` and ``__init__``
106 =====================================================
107
108 ``__new__`` is a standard Python method, and, if present, is called
109 before ``__init__`` when we create a class instance. See the `python
110 __new__ documentation
111 <https://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
112
113 For example, consider the following Python code:
114
115 .. testcode::
116
117 class C(object):
118 def __new__(cls, *args):
119 print('Cls in __new__:', cls)
120 print('Args in __new__:', args)
121 return object.__new__(cls, *args)
122
123 def __init__(self, *args):
124 print('type(self) in __init__:', type(self))
125 print('Args in __init__:', args)
126
127 meaning that we get:
128
129 >>> c = C('hello')
130 Cls in __new__: <class 'C'>
131 Args in __new__: ('hello',)
132 type(self) in __init__: <class 'C'>
133 Args in __init__: ('hello',)
134
135 When we call ``C('hello')``, the ``__new__`` method gets its own class
136 as first argument, and the passed argument, which is the string
137 ``'hello'``. After python calls ``__new__``, it usually (see below)
138 calls our ``__init__`` method, with the output of ``__new__`` as the
139 first argument (now a class instance), and the passed arguments
140 following.
141
142 As you can see, the object can be initialized in the ``__new__``
143 method or the ``__init__`` method, or both, and in fact ndarray does
144 not have an ``__init__`` method, because all the initialization is
145 done in the ``__new__`` method.
146
147 Why use ``__new__`` rather than just the usual ``__init__``? Because
148 in some cases, as for ndarray, we want to be able to return an object
149 of some other class. Consider the following:
150
151 .. testcode::
152
153 class D(C):
154 def __new__(cls, *args):
155 print('D cls is:', cls)
156 print('D args in __new__:', args)
157 return C.__new__(C, *args)
158
159 def __init__(self, *args):
160 # we never get here
161 print('In D __init__')
162
163 meaning that:
164
165 >>> obj = D('hello')
166 D cls is: <class 'D'>
167 D args in __new__: ('hello',)
168 Cls in __new__: <class 'C'>
169 Args in __new__: ('hello',)
170 >>> type(obj)
171 <class 'C'>
172
173 The definition of ``C`` is the same as before, but for ``D``, the
174 ``__new__`` method returns an instance of class ``C`` rather than
175 ``D``. Note that the ``__init__`` method of ``D`` does not get
176 called. In general, when the ``__new__`` method returns an object of
177 class other than the class in which it is defined, the ``__init__``
178 method of that class is not called.
179
180 This is how subclasses of the ndarray class are able to return views
181 that preserve the class type. When taking a view, the standard
182 ndarray machinery creates the new ndarray object with something
183 like::
184
185 obj = ndarray.__new__(subtype, shape, ...
186
187 where ``subtype`` is the subclass. Thus the returned view is of the
188 same class as the subclass, rather than being of class ``ndarray``.
189
190 That solves the problem of returning views of the same type, but now
191 we have a new problem. The machinery of ndarray can set the class
192 this way, in its standard methods for taking views, but the ndarray
193 ``__new__`` method knows nothing of what we have done in our own
194 ``__new__`` method in order to set attributes, and so on. (Aside -
195 why not call ``obj = subtype.__new__(...`` then? Because we may not
196 have a ``__new__`` method with the same call signature).
197
198 The role of ``__array_finalize__``
199 ==================================
200
201 ``__array_finalize__`` is the mechanism that numpy provides to allow
202 subclasses to handle the various ways that new instances get created.
203
204 Remember that subclass instances can come about in these three ways:
205
206 #. explicit constructor call (``obj = MySubClass(params)``). This will
207 call the usual sequence of ``MySubClass.__new__`` then (if it exists)
208 ``MySubClass.__init__``.
209 #. :ref:`view-casting`
210 #. :ref:`new-from-template`
211
212 Our ``MySubClass.__new__`` method only gets called in the case of the
213 explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
214 ``MySubClass.__init__`` to deal with the view casting and
215 new-from-template. It turns out that ``MySubClass.__array_finalize__``
216 *does* get called for all three methods of object creation, so this is
217 where our object creation housekeeping usually goes.
218
219 * For the explicit constructor call, our subclass will need to create a
220 new ndarray instance of its own class. In practice this means that
221 we, the authors of the code, will need to make a call to
222 ``ndarray.__new__(MySubClass,...)``, a class-hierarchy prepared call to
223 ``super(MySubClass, cls).__new__(cls, ...)``, or do view casting of an
224 existing array (see below)
225 * For view casting and new-from-template, the equivalent of
226 ``ndarray.__new__(MySubClass,...`` is called, at the C level.
227
228 The arguments that ``__array_finalize__`` receives differ for the three
229 methods of instance creation above.
230
231 The following code allows us to look at the call sequences and arguments:
232
233 .. testcode::
234
235 import numpy as np
236
237 class C(np.ndarray):
238 def __new__(cls, *args, **kwargs):
239 print('In __new__ with class %s' % cls)
240 return super(C, cls).__new__(cls, *args, **kwargs)
241
242 def __init__(self, *args, **kwargs):
243 # in practice you probably will not need or want an __init__
244 # method for your subclass
245 print('In __init__ with class %s' % self.__class__)
246
247 def __array_finalize__(self, obj):
248 print('In array_finalize:')
249 print(' self type is %s' % type(self))
250 print(' obj type is %s' % type(obj))
251
252
253 Now:
254
255 >>> # Explicit constructor
256 >>> c = C((10,))
257 In __new__ with class <class 'C'>
258 In array_finalize:
259 self type is <class 'C'>
260 obj type is <type 'NoneType'>
261 In __init__ with class <class 'C'>
262 >>> # View casting
263 >>> a = np.arange(10)
264 >>> cast_a = a.view(C)
265 In array_finalize:
266 self type is <class 'C'>
267 obj type is <type 'numpy.ndarray'>
268 >>> # Slicing (example of new-from-template)
269 >>> cv = c[:1]
270 In array_finalize:
271 self type is <class 'C'>
272 obj type is <class 'C'>
273
274 The signature of ``__array_finalize__`` is::
275
276 def __array_finalize__(self, obj):
277
278 One sees that the ``super`` call, which goes to
279 ``ndarray.__new__``, passes ``__array_finalize__`` the new object, of our
280 own class (``self``) as well as the object from which the view has been
281 taken (``obj``). As you can see from the output above, the ``self`` is
282 always a newly created instance of our subclass, and the type of ``obj``
283 differs for the three instance creation methods:
284
285 * When called from the explicit constructor, ``obj`` is ``None``
286 * When called from view casting, ``obj`` can be an instance of any
287 subclass of ndarray, including our own.
288 * When called in new-from-template, ``obj`` is another instance of our
289 own subclass, that we might use to update the new ``self`` instance.
290
291 Because ``__array_finalize__`` is the only method that always sees new
292 instances being created, it is the sensible place to fill in instance
293 defaults for new object attributes, among other tasks.
294
295 This may be clearer with an example.
296
297 Simple example - adding an extra attribute to ndarray
298 -----------------------------------------------------
299
300 .. testcode::
301
302 import numpy as np
303
304 class InfoArray(np.ndarray):
305
306 def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
307 strides=None, order=None, info=None):
308 # Create the ndarray instance of our type, given the usual
309 # ndarray input arguments. This will call the standard
310 # ndarray constructor, but return an object of our type.
311 # It also triggers a call to InfoArray.__array_finalize__
312 obj = super(InfoArray, subtype).__new__(subtype, shape, dtype,
313 buffer, offset, strides,
314 order)
315 # set the new 'info' attribute to the value passed
316 obj.info = info
317 # Finally, we must return the newly created object:
318 return obj
319
320 def __array_finalize__(self, obj):
321 # ``self`` is a new object resulting from
322 # ndarray.__new__(InfoArray, ...), therefore it only has
323 # attributes that the ndarray.__new__ constructor gave it -
324 # i.e. those of a standard ndarray.
325 #
326 # We could have got to the ndarray.__new__ call in 3 ways:
327 # From an explicit constructor - e.g. InfoArray():
328 # obj is None
329 # (we're in the middle of the InfoArray.__new__
330 # constructor, and self.info will be set when we return to
331 # InfoArray.__new__)
332 if obj is None: return
333 # From view casting - e.g arr.view(InfoArray):
334 # obj is arr
335 # (type(obj) can be InfoArray)
336 # From new-from-template - e.g infoarr[:3]
337 # type(obj) is InfoArray
338 #
339 # Note that it is here, rather than in the __new__ method,
340 # that we set the default value for 'info', because this
341 # method sees all creation of default objects - with the
342 # InfoArray.__new__ constructor, but also with
343 # arr.view(InfoArray).
344 self.info = getattr(obj, 'info', None)
345 # We do not need to return anything
346
347
348 Using the object looks like this:
349
350 >>> obj = InfoArray(shape=(3,)) # explicit constructor
351 >>> type(obj)
352 <class 'InfoArray'>
353 >>> obj.info is None
354 True
355 >>> obj = InfoArray(shape=(3,), info='information')
356 >>> obj.info
357 'information'
358 >>> v = obj[1:] # new-from-template - here - slicing
359 >>> type(v)
360 <class 'InfoArray'>
361 >>> v.info
362 'information'
363 >>> arr = np.arange(10)
364 >>> cast_arr = arr.view(InfoArray) # view casting
365 >>> type(cast_arr)
366 <class 'InfoArray'>
367 >>> cast_arr.info is None
368 True
369
370 This class isn't very useful, because it has the same constructor as the
371 bare ndarray object, including passing in buffers and shapes and so on.
372 We would probably prefer the constructor to be able to take an already
373 formed ndarray from the usual numpy calls to ``np.array`` and return an
374 object.
375
376 Slightly more realistic example - attribute added to existing array
377 -------------------------------------------------------------------
378
379 Here is a class that takes a standard ndarray that already exists, casts
380 as our type, and adds an extra attribute.
381
382 .. testcode::
383
384 import numpy as np
385
386 class RealisticInfoArray(np.ndarray):
387
388 def __new__(cls, input_array, info=None):
389 # Input array is an already formed ndarray instance
390 # We first cast to be our class type
391 obj = np.asarray(input_array).view(cls)
392 # add the new attribute to the created instance
393 obj.info = info
394 # Finally, we must return the newly created object:
395 return obj
396
397 def __array_finalize__(self, obj):
398 # see InfoArray.__array_finalize__ for comments
399 if obj is None: return
400 self.info = getattr(obj, 'info', None)
401
402
403 So:
404
405 >>> arr = np.arange(5)
406 >>> obj = RealisticInfoArray(arr, info='information')
407 >>> type(obj)
408 <class 'RealisticInfoArray'>
409 >>> obj.info
410 'information'
411 >>> v = obj[1:]
412 >>> type(v)
413 <class 'RealisticInfoArray'>
414 >>> v.info
415 'information'
416
417 .. _array-ufunc:
418
419 ``__array_ufunc__`` for ufuncs
420 ------------------------------
421
422 .. versionadded:: 1.13
423
424 A subclass can override what happens when executing numpy ufuncs on it by
425 overriding the default ``ndarray.__array_ufunc__`` method. This method is
426 executed *instead* of the ufunc and should return either the result of the
427 operation, or :obj:`NotImplemented` if the operation requested is not
428 implemented.
429
430 The signature of ``__array_ufunc__`` is::
431
432 def __array_ufunc__(ufunc, method, *inputs, **kwargs):
433
434 - *ufunc* is the ufunc object that was called.
435 - *method* is a string indicating how the Ufunc was called, either
436 ``"__call__"`` to indicate it was called directly, or one of its
437 :ref:`methods<ufuncs.methods>`: ``"reduce"``, ``"accumulate"``,
438 ``"reduceat"``, ``"outer"``, or ``"at"``.
439 - *inputs* is a tuple of the input arguments to the ``ufunc``
440 - *kwargs* contains any optional or keyword arguments passed to the
441 function. This includes any ``out`` arguments, which are always
442 contained in a tuple.
443
444 A typical implementation would convert any inputs or outputs that are
445 instances of one's own class, pass everything on to a superclass using
446 ``super()``, and finally return the results after possible
447 back-conversion. An example, taken from the test case
448 ``test_ufunc_override_with_super`` in ``core/tests/test_umath.py``, is the
449 following.
450
451 .. testcode::
452
453 import numpy as np
454
455 class A(np.ndarray):
456 def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
457 args = []
458 in_no = []
459 for i, input_ in enumerate(inputs):
460 if isinstance(input_, A):
461 in_no.append(i)
462 args.append(input_.view(np.ndarray))
463 else:
464 args.append(input_)
465
466 outputs = kwargs.pop('out', None)
467 out_no = []
468 if outputs:
469 out_args = []
470 for j, output in enumerate(outputs):
471 if isinstance(output, A):
472 out_no.append(j)
473 out_args.append(output.view(np.ndarray))
474 else:
475 out_args.append(output)
476 kwargs['out'] = tuple(out_args)
477 else:
478 outputs = (None,) * ufunc.nout
479
480 info = {}
481 if in_no:
482 info['inputs'] = in_no
483 if out_no:
484 info['outputs'] = out_no
485
486 results = super(A, self).__array_ufunc__(ufunc, method,
487 *args, **kwargs)
488 if results is NotImplemented:
489 return NotImplemented
490
491 if method == 'at':
492 if isinstance(inputs[0], A):
493 inputs[0].info = info
494 return
495
496 if ufunc.nout == 1:
497 results = (results,)
498
499 results = tuple((np.asarray(result).view(A)
500 if output is None else output)
501 for result, output in zip(results, outputs))
502 if results and isinstance(results[0], A):
503 results[0].info = info
504
505 return results[0] if len(results) == 1 else results
506
507 So, this class does not actually do anything interesting: it just
508 converts any instances of its own to regular ndarray (otherwise, we'd
509 get infinite recursion!), and adds an ``info`` dictionary that tells
510 which inputs and outputs it converted. Hence, e.g.,
511
512 >>> a = np.arange(5.).view(A)
513 >>> b = np.sin(a)
514 >>> b.info
515 {'inputs': [0]}
516 >>> b = np.sin(np.arange(5.), out=(a,))
517 >>> b.info
518 {'outputs': [0]}
519 >>> a = np.arange(5.).view(A)
520 >>> b = np.ones(1).view(A)
521 >>> c = a + b
522 >>> c.info
523 {'inputs': [0, 1]}
524 >>> a += b
525 >>> a.info
526 {'inputs': [0, 1], 'outputs': [0]}
527
528 Note that another approach would be to use ``getattr(ufunc,
529 method)(*inputs, **kwargs)`` instead of the ``super`` call. For this example,
530 the result would be identical, but there is a difference if another operand
531 also defines ``__array_ufunc__``. E.g., let's assume that we evaluate
532 ``np.add(a, b)``, where ``b`` is an instance of another class ``B`` that has
533 an override. If you use ``super`` as in the example,
534 ``ndarray.__array_ufunc__`` will notice that ``b`` has an override, which
535 means it cannot evaluate the result itself. Thus, it will return
536 `NotImplemented` and so will our class ``A``. Then, control will be passed
537 over to ``b``, which either knows how to deal with us and produces a result,
538 or does not and returns `NotImplemented`, raising a ``TypeError``.
539
540 If instead, we replace our ``super`` call with ``getattr(ufunc, method)``, we
541 effectively do ``np.add(a.view(np.ndarray), b)``. Again, ``B.__array_ufunc__``
542 will be called, but now it sees an ``ndarray`` as the other argument. Likely,
543 it will know how to handle this, and return a new instance of the ``B`` class
544 to us. Our example class is not set up to handle this, but it might well be
545 the best approach if, e.g., one were to re-implement ``MaskedArray`` using
546 ``__array_ufunc__``.
547
548 As a final note: if the ``super`` route is suited to a given class, an
549 advantage of using it is that it helps in constructing class hierarchies.
550 E.g., suppose that our other class ``B`` also used the ``super`` in its
551 ``__array_ufunc__`` implementation, and we created a class ``C`` that depended
552 on both, i.e., ``class C(A, B)`` (with, for simplicity, not another
553 ``__array_ufunc__`` override). Then any ufunc on an instance of ``C`` would
554 pass on to ``A.__array_ufunc__``, the ``super`` call in ``A`` would go to
555 ``B.__array_ufunc__``, and the ``super`` call in ``B`` would go to
556 ``ndarray.__array_ufunc__``, thus allowing ``A`` and ``B`` to collaborate.
557
558 .. _array-wrap:
559
560 ``__array_wrap__`` for ufuncs and other functions
561 -------------------------------------------------
562
563 Prior to numpy 1.13, the behaviour of ufuncs could only be tuned using
564 ``__array_wrap__`` and ``__array_prepare__``. These two allowed one to
565 change the output type of a ufunc, but, in contrast to
566 ``__array_ufunc__``, did not allow one to make any changes to the inputs.
567 It is hoped to eventually deprecate these, but ``__array_wrap__`` is also
568 used by other numpy functions and methods, such as ``squeeze``, so at the
569 present time it is still needed for full functionality.
570
571 Conceptually, ``__array_wrap__`` "wraps up the action" in the sense of
572 allowing a subclass to set the type of the return value and update
573 attributes and metadata. Let's show how this works with an example. First
574 we return to the simpler example subclass, but with a different name and
575 some print statements:
576
577 .. testcode::
578
579 import numpy as np
580
581 class MySubClass(np.ndarray):
582
583 def __new__(cls, input_array, info=None):
584 obj = np.asarray(input_array).view(cls)
585 obj.info = info
586 return obj
587
588 def __array_finalize__(self, obj):
589 print('In __array_finalize__:')
590 print(' self is %s' % repr(self))
591 print(' obj is %s' % repr(obj))
592 if obj is None: return
593 self.info = getattr(obj, 'info', None)
594
595 def __array_wrap__(self, out_arr, context=None):
596 print('In __array_wrap__:')
597 print(' self is %s' % repr(self))
598 print(' arr is %s' % repr(out_arr))
599 # then just call the parent
600 return super(MySubClass, self).__array_wrap__(out_arr, context)
601
602 We run a ufunc on an instance of our new array:
603
604 >>> obj = MySubClass(np.arange(5), info='spam')
605 In __array_finalize__:
606 self is MySubClass([0, 1, 2, 3, 4])
607 obj is array([0, 1, 2, 3, 4])
608 >>> arr2 = np.arange(5)+1
609 >>> ret = np.add(arr2, obj)
610 In __array_wrap__:
611 self is MySubClass([0, 1, 2, 3, 4])
612 arr is array([1, 3, 5, 7, 9])
613 In __array_finalize__:
614 self is MySubClass([1, 3, 5, 7, 9])
615 obj is MySubClass([0, 1, 2, 3, 4])
616 >>> ret
617 MySubClass([1, 3, 5, 7, 9])
618 >>> ret.info
619 'spam'
620
621 Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method
622 with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result
623 of the addition. In turn, the default ``__array_wrap__``
624 (``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``,
625 and called ``__array_finalize__`` - hence the copying of the ``info``
626 attribute. This has all happened at the C level.
627
628 But, we could do anything we wanted:
629
630 .. testcode::
631
632 class SillySubClass(np.ndarray):
633
634 def __array_wrap__(self, arr, context=None):
635 return 'I lost your data'
636
637 >>> arr1 = np.arange(5)
638 >>> obj = arr1.view(SillySubClass)
639 >>> arr2 = np.arange(5)
640 >>> ret = np.multiply(obj, arr2)
641 >>> ret
642 'I lost your data'
643
644 So, by defining a specific ``__array_wrap__`` method for our subclass,
645 we can tweak the output from ufuncs. The ``__array_wrap__`` method
646 requires ``self``, then an argument - which is the result of the ufunc -
647 and an optional parameter *context*. This parameter is returned by
648 ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
649 domain of the ufunc), but is not set by other numpy functions. Though,
650 as seen above, it is possible to do otherwise, ``__array_wrap__`` should
651 return an instance of its containing class. See the masked array
652 subclass for an implementation.
653
654 In addition to ``__array_wrap__``, which is called on the way out of the
655 ufunc, there is also an ``__array_prepare__`` method which is called on
656 the way into the ufunc, after the output arrays are created but before any
657 computation has been performed. The default implementation does nothing
658 but pass through the array. ``__array_prepare__`` should not attempt to
659 access the array data or resize the array, it is intended for setting the
660 output array type, updating attributes and metadata, and performing any
661 checks based on the input that may be desired before computation begins.
662 Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
663 subclass thereof or raise an error.
664
665 Extra gotchas - custom ``__del__`` methods and ndarray.base
666 -----------------------------------------------------------
667
668 One of the problems that ndarray solves is keeping track of memory
669 ownership of ndarrays and their views. Consider the case where we have
670 created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
671 The two objects are looking at the same memory. NumPy keeps track of
672 where the data came from for a particular array or view, with the
673 ``base`` attribute:
674
675 >>> # A normal ndarray, that owns its own data
676 >>> arr = np.zeros((4,))
677 >>> # In this case, base is None
678 >>> arr.base is None
679 True
680 >>> # We take a view
681 >>> v1 = arr[1:]
682 >>> # base now points to the array that it derived from
683 >>> v1.base is arr
684 True
685 >>> # Take a view of a view
686 >>> v2 = v1[1:]
687 >>> # base points to the view it derived from
688 >>> v2.base is v1
689 True
690
691 In general, if the array owns its own memory, as for ``arr`` in this
692 case, then ``arr.base`` will be None - there are some exceptions to this
693 - see the numpy book for more details.
694
695 The ``base`` attribute is useful in being able to tell whether we have
696 a view or the original array. This in turn can be useful if we need
697 to know whether or not to do some specific cleanup when the subclassed
698 array is deleted. For example, we may only want to do the cleanup if
699 the original array is deleted, but not the views. For an example of
700 how this can work, have a look at the ``memmap`` class in
701 ``numpy.core``.
702
703 Subclassing and Downstream Compatibility
704 ----------------------------------------
705
706 When sub-classing ``ndarray`` or creating duck-types that mimic the ``ndarray``
707 interface, it is your responsibility to decide how aligned your APIs will be
708 with those of numpy. For convenience, many numpy functions that have a corresponding
709 ``ndarray`` method (e.g., ``sum``, ``mean``, ``take``, ``reshape``) work by checking
710 if the first argument to a function has a method of the same name. If it exists, the
711 method is called instead of coercing the arguments to a numpy array.
712
713 For example, if you want your sub-class or duck-type to be compatible with
714 numpy's ``sum`` function, the method signature for this object's ``sum`` method
715 should be the following:
716
717 .. testcode::
718
719 def sum(self, axis=None, dtype=None, out=None, keepdims=False):
720 ...
721
722 This is the exact same method signature for ``np.sum``, so now if a user calls
723 ``np.sum`` on this object, numpy will call the object's own ``sum`` method and
724 pass in these arguments enumerated above in the signature, and no errors will
725 be raised because the signatures are completely compatible with each other.
726
727 If, however, you decide to deviate from this signature and do something like this:
728
729 .. testcode::
730
731 def sum(self, axis=None, dtype=None):
732 ...
733
734 This object is no longer compatible with ``np.sum`` because if you call ``np.sum``,
735 it will pass in unexpected arguments ``out`` and ``keepdims``, causing a TypeError
736 to be raised.
737
738 If you wish to maintain compatibility with numpy and its subsequent versions (which
739 might add new keyword arguments) but do not want to surface all of numpy's arguments,
740 your function's signature should accept ``**kwargs``. For example:
741
742 .. testcode::
743
744 def sum(self, axis=None, dtype=None, **unused_kwargs):
745 ...
746
747 This object is now compatible with ``np.sum`` again because any extraneous arguments
748 (i.e. keywords that are not ``axis`` or ``dtype``) will be hidden away in the
749 ``**unused_kwargs`` parameter.
750
751 """
752 from __future__ import division, absolute_import, print_function
753
[end of numpy/doc/subclassing.py]
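A compact, runnable version of the `**kwargs` compatibility advice at the end of the listing above; the `DuckStats` class is made up for illustration:

```python
import numpy as np

class DuckStats(object):
    """Duck type whose sum() stays compatible with np.sum via **unused_kwargs."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def sum(self, axis=None, dtype=None, **unused_kwargs):
        # Extra keywords numpy may pass (out, keepdims, ...) are absorbed here
        # instead of raising a TypeError.
        return self._data.sum(axis=axis, dtype=dtype)

print(np.sum(DuckStats([1, 2, 3])))  # np.sum defers to DuckStats.sum -> 6
```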
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
7fbcc4eaf22a01ac3282179b49c6363485263fbf
|
__array_function__ errors should more clearly identify the non-implemented function
xref #12028
Here's what you currently see if `__array_function__` returns `NotImplemented`:
```
In [1]: import numpy as np
In [2]: class MyArray:
...: def __array_function__(*args, **kwargs):
...: return NotImplemented
...:
In [3]: np.sum(MyArray())
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-c8a80bb1d37e> in <module>()
----> 1 np.sum(MyArray())
~/dev/numpy/numpy/core/overrides.py in public_api(*args, **kwargs)
149 relevant_args = dispatcher(*args, **kwargs)
150 return array_function_implementation_or_override(
--> 151 implementation, public_api, relevant_args, args, kwargs)
152
153 # TODO: remove this when we drop Python 2 support (functools.wraps
~/dev/numpy/numpy/core/overrides.py in array_function_implementation_or_override(implementation, public_api, relevant_args, args, kwargs)
108 raise TypeError('no implementation found for {} on types that implement '
109 '__array_function__: {}'
--> 110 .format(public_api, list(map(type, overloaded_args))))
111
112
TypeError: no implementation found for <function sum at 0x10e070bf8> on types that implement __array_function__: [<class '__main__.MyArray'>]
```
This error message should look something like this instead: `TypeError: no implementation found for 'numpy.sum' on types that implement __array_function__: [<class '__main__.MyArray'>]`
I think we will need to add a `name` parameter to `array_function_override` to do this properly. The best we could hope for with introspection is to use `__module__` and `__name__` to come up with something like `numpy.core.fromnumeric.sum`. This would be better than what we currently have and could be a reasonable default, but we really don't want people reaching directly into internal modules like `fromnumeric`.
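For reference, a minimal sketch of the introspection-based fallback mentioned above (the `_qualified_name` helper is hypothetical, not part of NumPy):

```python
import numpy as np


def _qualified_name(func):
    # Hypothetical helper: combine __module__ and __name__ into a readable
    # identifier for the error message.
    return '{}.{}'.format(func.__module__, func.__name__)


# Depending on whether __module__ has been overridden on the public wrapper,
# this prints either 'numpy.sum' or 'numpy.core.fromnumeric.sum'.
print(_qualified_name(np.sum))
```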
|
Interestingly, it looks like IPython has special code for printing functions based on `__module__` rather than `repr()` or `str`:
```
>>> np.sum
<function numpy.core.fromnumeric.sum>
```
We could probably reproduce this inside `__array_function__`. Along with manually updating the `__module__` attribute this would probably give us what we want. Usage would look something like:
```python
@array_function_dispatch(_sum_dispatcher, module='numpy')
def sum(a, axis=None, dtype=None, out=None, keepdims=np._NoValue, initial=np._NoValue):
...
```
|
2018-10-23T15:00:12Z
|
<patch>
diff --git a/numpy/core/arrayprint.py b/numpy/core/arrayprint.py
--- a/numpy/core/arrayprint.py
+++ b/numpy/core/arrayprint.py
@@ -506,7 +506,7 @@ def _array2string_dispatcher(
return (a,)
-@array_function_dispatch(_array2string_dispatcher)
+@array_function_dispatch(_array2string_dispatcher, module='numpy')
def array2string(a, max_line_width=None, precision=None,
suppress_small=None, separator=' ', prefix="",
style=np._NoValue, formatter=None, threshold=None,
@@ -1386,7 +1386,7 @@ def _array_repr_dispatcher(
return (arr,)
-@array_function_dispatch(_array_repr_dispatcher)
+@array_function_dispatch(_array_repr_dispatcher, module='numpy')
def array_repr(arr, max_line_width=None, precision=None, suppress_small=None):
"""
Return the string representation of an array.
@@ -1480,7 +1480,7 @@ def _array_str_dispatcher(
return (a,)
-@array_function_dispatch(_array_str_dispatcher)
+@array_function_dispatch(_array_str_dispatcher, module='numpy')
def array_str(a, max_line_width=None, precision=None, suppress_small=None):
"""
Return a string representation of the data in an array.
diff --git a/numpy/core/defchararray.py b/numpy/core/defchararray.py
--- a/numpy/core/defchararray.py
+++ b/numpy/core/defchararray.py
@@ -17,12 +17,13 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
import sys
from .numerictypes import string_, unicode_, integer, object_, bool_, character
from .numeric import ndarray, compare_chararrays
from .numeric import array as narray
from numpy.core.multiarray import _vec_string
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.compat import asbytes, long
import numpy
@@ -48,6 +49,10 @@
_bytes = str
_len = len
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy.char')
+
+
def _use_unicode(*args):
"""
Helper function for determining the output type of some string
diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -3,16 +3,17 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
import types
import warnings
import numpy as np
from .. import VisibleDeprecationWarning
from . import multiarray as mu
+from . import overrides
from . import umath as um
from . import numerictypes as nt
from .numeric import asarray, array, asanyarray, concatenate
-from .overrides import array_function_dispatch
from . import _methods
_dt_ = nt.sctype2char
@@ -32,6 +33,9 @@
# save away Python sum
_sum_ = sum
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
# functions that are now methods
def _wrapit(obj, method, *args, **kwds):
diff --git a/numpy/core/multiarray.py b/numpy/core/multiarray.py
--- a/numpy/core/multiarray.py
+++ b/numpy/core/multiarray.py
@@ -6,8 +6,10 @@
"""
+import functools
+
+from . import overrides
from . import _multiarray_umath
-from .overrides import array_function_dispatch
import numpy as np
from numpy.core._multiarray_umath import *
from numpy.core._multiarray_umath import (
@@ -37,6 +39,9 @@
'tracemalloc_domain', 'typeinfo', 'unpackbits', 'unravel_index', 'vdot',
'where', 'zeros']
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
def _empty_like_dispatcher(prototype, dtype=None, order=None, subok=None):
return (prototype,)
diff --git a/numpy/core/numeric.py b/numpy/core/numeric.py
--- a/numpy/core/numeric.py
+++ b/numpy/core/numeric.py
@@ -6,6 +6,7 @@
import collections.abc as collections_abc
except ImportError:
import collections as collections_abc
+import functools
import itertools
import operator
import sys
@@ -27,8 +28,8 @@
if sys.version_info[0] < 3:
from .multiarray import newbuffer, getbuffer
+from . import overrides
from . import umath
-from .overrides import array_function_dispatch
from .umath import (multiply, invert, sin, UFUNC_BUFSIZE_DEFAULT,
ERR_IGNORE, ERR_WARN, ERR_RAISE, ERR_CALL, ERR_PRINT,
ERR_LOG, ERR_DEFAULT, PINF, NAN)
@@ -55,6 +56,10 @@
import __builtin__ as builtins
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
def loads(*args, **kwargs):
# NumPy 1.15.0, 2017-12-10
warnings.warn(
diff --git a/numpy/core/overrides.py b/numpy/core/overrides.py
--- a/numpy/core/overrides.py
+++ b/numpy/core/overrides.py
@@ -105,9 +105,10 @@ def array_function_implementation_or_override(
if result is not NotImplemented:
return result
- raise TypeError('no implementation found for {} on types that implement '
+ func_name = '{}.{}'.format(public_api.__module__, public_api.__name__)
+ raise TypeError("no implementation found for '{}' on types that implement "
'__array_function__: {}'
- .format(public_api, list(map(type, overloaded_args))))
+ .format(func_name, list(map(type, overloaded_args))))
ArgSpec = collections.namedtuple('ArgSpec', 'args varargs keywords defaults')
@@ -135,7 +136,7 @@ def verify_matching_signatures(implementation, dispatcher):
'default argument values')
-def array_function_dispatch(dispatcher, verify=True):
+def array_function_dispatch(dispatcher, module=None, verify=True):
"""Decorator for adding dispatch with the __array_function__ protocol."""
def decorator(implementation):
# TODO: only do this check when the appropriate flag is enabled or for
@@ -149,6 +150,10 @@ def public_api(*args, **kwargs):
relevant_args = dispatcher(*args, **kwargs)
return array_function_implementation_or_override(
implementation, public_api, relevant_args, args, kwargs)
+
+ if module is not None:
+ public_api.__module__ = module
+
return public_api
return decorator
diff --git a/numpy/fft/fftpack.py b/numpy/fft/fftpack.py
--- a/numpy/fft/fftpack.py
+++ b/numpy/fft/fftpack.py
@@ -35,10 +35,12 @@
__all__ = ['fft', 'ifft', 'rfft', 'irfft', 'hfft', 'ihfft', 'rfftn',
'irfftn', 'rfft2', 'irfft2', 'fft2', 'ifft2', 'fftn', 'ifftn']
+import functools
+
from numpy.core import (array, asarray, zeros, swapaxes, shape, conjugate,
take, sqrt)
from numpy.core.multiarray import normalize_axis_index
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from . import fftpack_lite as fftpack
from .helper import _FFTCache
@@ -46,6 +48,10 @@
_real_fft_cache = _FFTCache(max_size_in_mb=100, max_item_count=32)
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy.fft')
+
+
def _raw_fft(a, n=None, axis=-1, init_function=fftpack.cffti,
work_function=fftpack.cfftf, fft_cache=_fft_cache):
a = asarray(a)
diff --git a/numpy/fft/helper.py b/numpy/fft/helper.py
--- a/numpy/fft/helper.py
+++ b/numpy/fft/helper.py
@@ -24,7 +24,7 @@ def _fftshift_dispatcher(x, axes=None):
return (x,)
-@array_function_dispatch(_fftshift_dispatcher)
+@array_function_dispatch(_fftshift_dispatcher, module='numpy.fft')
def fftshift(x, axes=None):
"""
Shift the zero-frequency component to the center of the spectrum.
@@ -81,7 +81,7 @@ def fftshift(x, axes=None):
return roll(x, shift, axes)
-@array_function_dispatch(_fftshift_dispatcher)
+@array_function_dispatch(_fftshift_dispatcher, module='numpy.fft')
def ifftshift(x, axes=None):
"""
The inverse of `fftshift`. Although identical for even-length `x`, the
diff --git a/numpy/lib/arraypad.py b/numpy/lib/arraypad.py
--- a/numpy/lib/arraypad.py
+++ b/numpy/lib/arraypad.py
@@ -995,7 +995,7 @@ def _pad_dispatcher(array, pad_width, mode, **kwargs):
return (array,)
-@array_function_dispatch(_pad_dispatcher)
+@array_function_dispatch(_pad_dispatcher, module='numpy')
def pad(array, pad_width, mode, **kwargs):
"""
Pads an array.
diff --git a/numpy/lib/arraysetops.py b/numpy/lib/arraysetops.py
--- a/numpy/lib/arraysetops.py
+++ b/numpy/lib/arraysetops.py
@@ -27,8 +27,14 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
+
import numpy as np
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
+
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
__all__ = [
diff --git a/numpy/lib/financial.py b/numpy/lib/financial.py
--- a/numpy/lib/financial.py
+++ b/numpy/lib/financial.py
@@ -13,9 +13,14 @@
from __future__ import division, absolute_import, print_function
from decimal import Decimal
+import functools
import numpy as np
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
+
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
__all__ = ['fv', 'pmt', 'nper', 'ipmt', 'ppmt', 'pv', 'rate',
diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -6,6 +6,7 @@
import collections.abc as collections_abc
except ImportError:
import collections as collections_abc
+import functools
import re
import sys
import warnings
@@ -26,7 +27,7 @@
ravel, nonzero, partition, mean, any, sum
)
from numpy.core.numerictypes import typecodes
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.core.function_base import add_newdoc
from numpy.lib.twodim_base import diag
from .utils import deprecate
@@ -44,6 +45,11 @@
else:
import builtins
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
# needed in this module for compatibility
from numpy.lib.histograms import histogram, histogramdd
diff --git a/numpy/lib/index_tricks.py b/numpy/lib/index_tricks.py
--- a/numpy/lib/index_tricks.py
+++ b/numpy/lib/index_tricks.py
@@ -1,5 +1,6 @@
from __future__ import division, absolute_import, print_function
+import functools
import sys
import math
@@ -13,10 +14,14 @@
import numpy.matrixlib as matrixlib
from .function_base import diff
from numpy.core.multiarray import ravel_multi_index, unravel_index
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.lib.stride_tricks import as_strided
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
__all__ = [
'ravel_multi_index', 'unravel_index', 'mgrid', 'ogrid', 'r_', 'c_',
's_', 'index_exp', 'ix_', 'ndenumerate', 'ndindex', 'fill_diagonal',
diff --git a/numpy/lib/nanfunctions.py b/numpy/lib/nanfunctions.py
--- a/numpy/lib/nanfunctions.py
+++ b/numpy/lib/nanfunctions.py
@@ -22,10 +22,15 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
import warnings
import numpy as np
from numpy.lib import function_base
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
+
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
__all__ = [
diff --git a/numpy/lib/polynomial.py b/numpy/lib/polynomial.py
--- a/numpy/lib/polynomial.py
+++ b/numpy/lib/polynomial.py
@@ -8,19 +8,24 @@
'polysub', 'polymul', 'polydiv', 'polyval', 'poly1d',
'polyfit', 'RankWarning']
+import functools
import re
import warnings
import numpy.core.numeric as NX
from numpy.core import (isscalar, abs, finfo, atleast_1d, hstack, dot, array,
ones)
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.lib.twodim_base import diag, vander
from numpy.lib.function_base import trim_zeros
from numpy.lib.type_check import iscomplex, real, imag, mintypecode
from numpy.linalg import eigvals, lstsq, inv
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
class RankWarning(UserWarning):
"""
Issued by `polyfit` when the Vandermonde matrix is rank deficient.
diff --git a/numpy/lib/shape_base.py b/numpy/lib/shape_base.py
--- a/numpy/lib/shape_base.py
+++ b/numpy/lib/shape_base.py
@@ -1,5 +1,6 @@
from __future__ import division, absolute_import, print_function
+import functools
import warnings
import numpy.core.numeric as _nx
@@ -8,7 +9,7 @@
)
from numpy.core.fromnumeric import product, reshape, transpose
from numpy.core.multiarray import normalize_axis_index
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.core import vstack, atleast_3d
from numpy.lib.index_tricks import ndindex
from numpy.matrixlib.defmatrix import matrix # this raises all the right alarm bells
@@ -22,6 +23,10 @@
]
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
def _make_along_axis_idx(arr_shape, indices, axis):
# compute dimensions to iterate over
if not _nx.issubdtype(indices.dtype, _nx.integer):
diff --git a/numpy/lib/stride_tricks.py b/numpy/lib/stride_tricks.py
--- a/numpy/lib/stride_tricks.py
+++ b/numpy/lib/stride_tricks.py
@@ -140,7 +140,7 @@ def _broadcast_to_dispatcher(array, shape, subok=None):
return (array,)
-@array_function_dispatch(_broadcast_to_dispatcher)
+@array_function_dispatch(_broadcast_to_dispatcher, module='numpy')
def broadcast_to(array, shape, subok=False):
"""Broadcast an array to a new shape.
@@ -205,7 +205,7 @@ def _broadcast_arrays_dispatcher(*args, **kwargs):
return args
-@array_function_dispatch(_broadcast_arrays_dispatcher)
+@array_function_dispatch(_broadcast_arrays_dispatcher, module='numpy')
def broadcast_arrays(*args, **kwargs):
"""
Broadcast any number of arrays against each other.
diff --git a/numpy/lib/twodim_base.py b/numpy/lib/twodim_base.py
--- a/numpy/lib/twodim_base.py
+++ b/numpy/lib/twodim_base.py
@@ -3,12 +3,14 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
+
from numpy.core.numeric import (
absolute, asanyarray, arange, zeros, greater_equal, multiply, ones,
asarray, where, int8, int16, int32, int64, empty, promote_types, diagonal,
nonzero
)
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.core import iinfo, transpose
@@ -18,6 +20,10 @@
'tril_indices_from', 'triu_indices', 'triu_indices_from', ]
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
i1 = iinfo(int8)
i2 = iinfo(int16)
i4 = iinfo(int32)
diff --git a/numpy/lib/type_check.py b/numpy/lib/type_check.py
--- a/numpy/lib/type_check.py
+++ b/numpy/lib/type_check.py
@@ -2,6 +2,7 @@
"""
from __future__ import division, absolute_import, print_function
+import functools
import warnings
__all__ = ['iscomplexobj', 'isrealobj', 'imag', 'iscomplex',
@@ -11,11 +12,17 @@
import numpy.core.numeric as _nx
from numpy.core.numeric import asarray, asanyarray, array, isnan, zeros
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from .ufunclike import isneginf, isposinf
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy')
+
+
_typecodes_by_elsize = 'GDFgdfQqLlIiHhBb?'
+
def mintypecode(typechars,typeset='GDFgdf',default='d'):
"""
Return the character for the minimum-size type to which given types can
diff --git a/numpy/lib/ufunclike.py b/numpy/lib/ufunclike.py
--- a/numpy/lib/ufunclike.py
+++ b/numpy/lib/ufunclike.py
@@ -60,7 +60,7 @@ def _dispatcher(x, out=None):
return (x, out)
-@array_function_dispatch(_dispatcher, verify=False)
+@array_function_dispatch(_dispatcher, verify=False, module='numpy')
@_fix_out_named_y
def fix(x, out=None):
"""
@@ -107,7 +107,7 @@ def fix(x, out=None):
return res
-@array_function_dispatch(_dispatcher, verify=False)
+@array_function_dispatch(_dispatcher, verify=False, module='numpy')
@_fix_out_named_y
def isposinf(x, out=None):
"""
@@ -176,7 +176,7 @@ def isposinf(x, out=None):
return nx.logical_and(is_inf, signbit, out)
-@array_function_dispatch(_dispatcher, verify=False)
+@array_function_dispatch(_dispatcher, verify=False, module='numpy')
@_fix_out_named_y
def isneginf(x, out=None):
"""
diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -16,6 +16,7 @@
'svd', 'eig', 'eigh', 'lstsq', 'norm', 'qr', 'cond', 'matrix_rank',
'LinAlgError', 'multi_dot']
+import functools
import operator
import warnings
@@ -28,10 +29,15 @@
swapaxes, divide, count_nonzero, isnan
)
from numpy.core.multiarray import normalize_axis_index
-from numpy.core.overrides import array_function_dispatch
+from numpy.core import overrides
from numpy.lib.twodim_base import triu, eye
from numpy.linalg import lapack_lite, _umath_linalg
+
+array_function_dispatch = functools.partial(
+ overrides.array_function_dispatch, module='numpy.linalg')
+
+
# For Python2/3 compatibility
_N = b'N'
_V = b'V'
</patch>
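Assuming the `__array_function__` protocol is enabled, a small sketch of what the reworded error looks like after this change, reusing the `MyArray` example from the issue:

```python
import numpy as np


class MyArray:
    def __array_function__(*args, **kwargs):
        return NotImplemented


try:
    np.sum(MyArray())
except TypeError as exc:
    # Expected to read roughly:
    #   no implementation found for 'numpy.sum' on types that implement
    #   __array_function__: [<class '__main__.MyArray'>]
    print(exc)
```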
|
[]
|
[]
| |||
googleapis__google-cloud-python-3793
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Entities created in datastore snippets should use a namespace
This is useful for debugging build failures. This is [already done][1] for the rest of the entities created in the datastore system test.
[1]: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/e63ddc384e4a54a6a085595d779a399ed449c26c/datastore/tests/system/test_system.py#L52-L64
</issue>
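Before diving into the code below, here is a minimal sketch of what namespaced snippet entities could look like; the namespace value is purely illustrative and the real choice belongs to the snippets' test setup:

```python
from google.cloud import datastore

# Illustrative namespace; the system tests pick their own value.
client = datastore.Client(namespace='doctest-namespace')

# Keys created through the client inherit its namespace, so every entity
# built for the snippets ends up grouped under that namespace.
key = client.key('_Doctest')
entity = datastore.Entity(key=key)
entity['foo'] = 1337
client.put(entity)
```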
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic client for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 |pypi| |circleci| |appveyor| |coverage| |versions|
9
10 - `Homepage`_
11 - `API Documentation`_
12 - `Read The Docs Documentation`_
13
14 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/
15 .. _API Documentation: https://googlecloudplatform.github.io/google-cloud-python/latest/
16 .. _Read The Docs Documentation: https://google-cloud-python.readthedocs.io/en/latest/
17
18 The following client libraries have **GA** support:
19
20 - `Google Cloud Datastore`_ (`Datastore README`_)
21 - `Google Cloud Storage`_ (`Storage README`_)
22 - `Google Cloud Translation`_ (`Translation README`_)
23 - `Stackdriver Logging`_ (`Logging README`_)
24
25 **GA** (general availability) indicates that the client library for a
26 particular service is stable, and that the code surface will not change in
27 backwards-incompatible ways unless either absolutely necessary (e.g. because
28 of critical security issues) or with an extensive deprecation period.
29 Issues and requests against GA libraries are addressed with the highest
30 priority.
31
32 The following client libraries have **beta** support:
33
34 - `Google BigQuery`_ (`BigQuery README`_)
35 - `Google Cloud Natural Language`_ (`Natural Language README`_)
36 - `Google Cloud Speech`_ (`Speech README`_)
37 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_)
38 - `Google Cloud Vision`_ (`Vision README`_)
39
40 **Beta** indicates that the client library for a particular service is
41 mostly stable and is being prepared for release. Issues and requests
42 against beta libraries are addressed with a higher priority.
43
44 This client library has **alpha** support for the following Google
45 Cloud Platform services:
46
47 - `Google Cloud Bigtable`_ (`Bigtable README`_)
48 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_)
49 - `Google Cloud DNS`_ (`DNS README`_)
50 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_)
51 - `Google Cloud Resource Manager`_ (`Resource Manager README`_)
52 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_)
53 - `Google Cloud Spanner`_ (`Spanner README`_)
54 - `Stackdriver Error Reporting`_ (`Error Reporting README`_)
55 - `Stackdriver Monitoring`_ (`Monitoring README`_)
56
57 **Alpha** indicates that the client library for a particular service is
58 still a work-in-progress and is more likely to get backwards-incompatible
59 updates. See `versioning`_ for more details.
60
61 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/
62 .. _Datastore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/datastore
63 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/
64 .. _Storage README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/storage
65 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/
66 .. _Pub/Sub README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/pubsub
67 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/
68 .. _BigQuery README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery
69 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/
70 .. _Resource Manager README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/resource_manager
71 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/
72 .. _Logging README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/logging
73 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/
74 .. _Monitoring README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
75 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/
76 .. _Bigtable README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigtable
77 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/
78 .. _DNS README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/dns
79 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/
80 .. _Error Reporting README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/error_reporting
81 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/
82 .. _Natural Language README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/language
83 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/
84 .. _Translation README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/translate
85 .. _Google Cloud Speech: https://pypi.org/project/google-cloud-speech/
86 .. _Speech README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/speech
87 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/
88 .. _Vision README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/vision
89 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/
90 .. _HappyBase README: https://github.com/GoogleCloudPlatform/google-cloud-python-happybase
91 .. _Google Cloud Runtime Configuration: https://cloud.google.com/deployment-manager/runtime-configurator/
92 .. _Runtime Config README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/runtimeconfig
93 .. _Google Cloud Spanner: https://pypi.python.org/pypi/google-cloud-spanner
94 .. _Spanner README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/spanner
95 .. _Google Cloud Video Intelligence: https://pypi.python.org/pypi/google-cloud-videointelligence
96 .. _Video Intelligence README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/videointelligence
97 .. _versioning: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
98
99 If you need support for other Google APIs, check out the
100 `Google APIs Python Client library`_.
101
102 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
103
104 Quick Start
105 -----------
106
107 .. code-block:: console
108
109 $ pip install --upgrade google-cloud
110
111 Example Applications
112 --------------------
113
114 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
115 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
116
117 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
118 .. _tutorial: https://cloud.google.com/python
119 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
120
121 Authentication
122 --------------
123
124 With ``google-cloud-python`` we try to make authentication as painless as possible.
125 Check out the `Authentication section`_ in our documentation to learn more.
126 You may also find the `authentication document`_ shared by all the
127 ``google-cloud-*`` libraries to be helpful.
128
129 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/core/auth.html
130 .. _authentication document: https://github.com/GoogleCloudPlatform/gcloud-common/tree/master/authentication
131
132 Contributing
133 ------------
134
135 Contributions to this library are always welcome and highly encouraged.
136
137 See the `CONTRIBUTING doc`_ for more information on how to get started.
138
139 .. _CONTRIBUTING doc: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst
140
141 Community
142 ---------
143
144 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
145 channel, click here to `get an invitation`_.
146
147
148 .. _Slack: https://googlecloud-community.slack.com
149 .. _get an invitation: https://gcp-slack.appspot.com/
150
151 License
152 -------
153
154 Apache 2.0 - See `the LICENSE`_ for more information.
155
156 .. _the LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE
157
158 .. |circleci| image:: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python.svg?style=shield
159 :target: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python
160 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true
161 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python
162 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.svg?branch=master
163 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master
164 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg
165 :target: https://pypi.org/project/google-cloud/
166 .. |versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg
167 :target: https://pypi.org/project/google-cloud/
168
[end of README.rst]
[start of datastore/google/cloud/datastore/batch.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Create / interact with a batch of updates / deletes.
16
17 Batches provide the ability to execute multiple operations
18 in a single request to the Cloud Datastore API.
19
20 See
21 https://cloud.google.com/datastore/docs/concepts/entities#batch_operations
22 """
23
24 from google.cloud.datastore import helpers
25 from google.cloud.proto.datastore.v1 import datastore_pb2 as _datastore_pb2
26
27
28 class Batch(object):
29 """An abstraction representing a collected group of updates / deletes.
30
31 Used to build up a bulk mutation.
32
33 For example, the following snippet of code will put the two ``save``
34 operations and the ``delete`` operation into the same mutation, and send
35 them to the server in a single API request::
36
37 >>> from google.cloud import datastore
38 >>> client = datastore.Client()
39 >>> batch = client.batch()
40 >>> batch.put(entity1)
41 >>> batch.put(entity2)
42 >>> batch.delete(key3)
43 >>> batch.commit()
44
45 You can also use a batch as a context manager, in which case
46 :meth:`commit` will be called automatically if its block exits without
47 raising an exception::
48
49 >>> with batch:
50 ... batch.put(entity1)
51 ... batch.put(entity2)
52 ... batch.delete(key3)
53
54 By default, no updates will be sent if the block exits with an error::
55
56 >>> with batch:
57 ... do_some_work(batch)
58 ... raise Exception() # rolls back
59
60 :type client: :class:`google.cloud.datastore.client.Client`
61 :param client: The client used to connect to datastore.
62 """
63
64 _id = None # "protected" attribute, always None for non-transactions
65
66 _INITIAL = 0
67 """Enum value for _INITIAL status of batch/transaction."""
68
69 _IN_PROGRESS = 1
70 """Enum value for _IN_PROGRESS status of batch/transaction."""
71
72 _ABORTED = 2
73 """Enum value for _ABORTED status of batch/transaction."""
74
75 _FINISHED = 3
76 """Enum value for _FINISHED status of batch/transaction."""
77
78 def __init__(self, client):
79 self._client = client
80 self._mutations = []
81 self._partial_key_entities = []
82 self._status = self._INITIAL
83
84 def current(self):
85 """Return the topmost batch / transaction, or None."""
86 return self._client.current_batch
87
88 @property
89 def project(self):
90 """Getter for project in which the batch will run.
91
92 :rtype: :class:`str`
93 :returns: The project in which the batch will run.
94 """
95 return self._client.project
96
97 @property
98 def namespace(self):
99 """Getter for namespace in which the batch will run.
100
101 :rtype: :class:`str`
102 :returns: The namespace in which the batch will run.
103 """
104 return self._client.namespace
105
106 def _add_partial_key_entity_pb(self):
107 """Adds a new mutation for an entity with a partial key.
108
109 :rtype: :class:`.entity_pb2.Entity`
110 :returns: The newly created entity protobuf that will be
111 updated and sent with a commit.
112 """
113 new_mutation = _datastore_pb2.Mutation()
114 self._mutations.append(new_mutation)
115 return new_mutation.insert
116
117 def _add_complete_key_entity_pb(self):
118 """Adds a new mutation for an entity with a completed key.
119
120 :rtype: :class:`.entity_pb2.Entity`
121 :returns: The newly created entity protobuf that will be
122 updated and sent with a commit.
123 """
124 # We use ``upsert`` for entities with completed keys, rather than
125 # ``insert`` or ``update``, in order not to create race conditions
126 # based on prior existence / removal of the entity.
127 new_mutation = _datastore_pb2.Mutation()
128 self._mutations.append(new_mutation)
129 return new_mutation.upsert
130
131 def _add_delete_key_pb(self):
132 """Adds a new mutation for a key to be deleted.
133
134 :rtype: :class:`.entity_pb2.Key`
135 :returns: The newly created key protobuf that will be
136 deleted when sent with a commit.
137 """
138 new_mutation = _datastore_pb2.Mutation()
139 self._mutations.append(new_mutation)
140 return new_mutation.delete
141
142 @property
143 def mutations(self):
144 """Getter for the changes accumulated by this batch.
145
146 Every batch is committed with a single commit request containing all
147 the work to be done as mutations. Inside a batch, calling :meth:`put`
148 with an entity, or :meth:`delete` with a key, builds up the request by
149 adding a new mutation. This getter returns the protobuf that has been
150 built-up so far.
151
152 :rtype: iterable
153 :returns: The list of :class:`.datastore_pb2.Mutation`
154 protobufs to be sent in the commit request.
155 """
156 return self._mutations
157
158 def put(self, entity):
159 """Remember an entity's state to be saved during :meth:`commit`.
160
161 .. note::
162 Any existing properties for the entity will be replaced by those
163 currently set on this instance. Already-stored properties which do
164 not correspond to keys set on this instance will be removed from
165 the datastore.
166
167 .. note::
168 Property values which are "text" ('unicode' in Python2, 'str' in
169 Python3) map to 'string_value' in the datastore; values which are
170 "bytes" ('str' in Python2, 'bytes' in Python3) map to 'blob_value'.
171
172 When an entity has a partial key, calling :meth:`commit` sends it as
173 an ``insert`` mutation and the key is completed. On return,
174 the key for the ``entity`` passed in is updated to match the key ID
175 assigned by the server.
176
177 :type entity: :class:`google.cloud.datastore.entity.Entity`
178 :param entity: the entity to be saved.
179
180 :raises: :class:`~exceptions.ValueError` if the batch is not in
181 progress, if entity has no key assigned, or if the key's
182 ``project`` does not match ours.
183 """
184 if self._status != self._IN_PROGRESS:
185 raise ValueError('Batch must be in progress to put()')
186
187 if entity.key is None:
188 raise ValueError("Entity must have a key")
189
190 if self.project != entity.key.project:
191 raise ValueError("Key must be from same project as batch")
192
193 if entity.key.is_partial:
194 entity_pb = self._add_partial_key_entity_pb()
195 self._partial_key_entities.append(entity)
196 else:
197 entity_pb = self._add_complete_key_entity_pb()
198
199 _assign_entity_to_pb(entity_pb, entity)
200
201 def delete(self, key):
202 """Remember a key to be deleted during :meth:`commit`.
203
204 :type key: :class:`google.cloud.datastore.key.Key`
205 :param key: the key to be deleted.
206
207 :raises: :class:`~exceptions.ValueError` if the batch is not in
208 progress, if key is not complete, or if the key's
209 ``project`` does not match ours.
210 """
211 if self._status != self._IN_PROGRESS:
212 raise ValueError('Batch must be in progress to delete()')
213
214 if key.is_partial:
215 raise ValueError("Key must be complete")
216
217 if self.project != key.project:
218 raise ValueError("Key must be from same project as batch")
219
220 key_pb = key.to_protobuf()
221 self._add_delete_key_pb().CopyFrom(key_pb)
222
223 def begin(self):
224 """Begins a batch.
225
226 This method is called automatically when entering a with
227 statement, however it can be called explicitly if you don't want
228 to use a context manager.
229
230 Overridden by :class:`google.cloud.datastore.transaction.Transaction`.
231
232 :raises: :class:`ValueError` if the batch has already begun.
233 """
234 if self._status != self._INITIAL:
235 raise ValueError('Batch already started previously.')
236 self._status = self._IN_PROGRESS
237
238 def _commit(self):
239 """Commits the batch.
240
241 This is called by :meth:`commit`.
242 """
243 if self._id is None:
244 mode = _datastore_pb2.CommitRequest.NON_TRANSACTIONAL
245 else:
246 mode = _datastore_pb2.CommitRequest.TRANSACTIONAL
247
248 commit_response_pb = self._client._datastore_api.commit(
249 self.project, mode, self._mutations, transaction=self._id)
250 _, updated_keys = _parse_commit_response(commit_response_pb)
251 # If the back-end returns without error, we are guaranteed that
252 # ``commit`` will return keys that match (length and
253 # order) directly ``_partial_key_entities``.
254 for new_key_pb, entity in zip(updated_keys,
255 self._partial_key_entities):
256 new_id = new_key_pb.path[-1].id
257 entity.key = entity.key.completed_key(new_id)
258
259 def commit(self):
260 """Commits the batch.
261
262 This is called automatically upon exiting a with statement,
263 however it can be called explicitly if you don't want to use a
264 context manager.
265
266 :raises: :class:`~exceptions.ValueError` if the batch is not
267 in progress.
268 """
269 if self._status != self._IN_PROGRESS:
270 raise ValueError('Batch must be in progress to commit()')
271
272 try:
273 self._commit()
274 finally:
275 self._status = self._FINISHED
276
277 def rollback(self):
278 """Rolls back the current batch.
279
280 Marks the batch as aborted (can't be used again).
281
282 Overridden by :class:`google.cloud.datastore.transaction.Transaction`.
283
284 :raises: :class:`~exceptions.ValueError` if the batch is not
285 in progress.
286 """
287 if self._status != self._IN_PROGRESS:
288 raise ValueError('Batch must be in progress to rollback()')
289
290 self._status = self._ABORTED
291
292 def __enter__(self):
293 self.begin()
294 # NOTE: We make sure begin() succeeds before pushing onto the stack.
295 self._client._push_batch(self)
296 return self
297
298 def __exit__(self, exc_type, exc_val, exc_tb):
299 try:
300 if exc_type is None:
301 self.commit()
302 else:
303 self.rollback()
304 finally:
305 self._client._pop_batch()
306
307
308 def _assign_entity_to_pb(entity_pb, entity):
309 """Copy ``entity`` into ``entity_pb``.
310
311 Helper method for ``Batch.put``.
312
313 :type entity_pb: :class:`.entity_pb2.Entity`
314 :param entity_pb: The entity owned by a mutation.
315
316 :type entity: :class:`google.cloud.datastore.entity.Entity`
317 :param entity: The entity being updated within the batch / transaction.
318 """
319 bare_entity_pb = helpers.entity_to_protobuf(entity)
320 bare_entity_pb.key.CopyFrom(bare_entity_pb.key)
321 entity_pb.CopyFrom(bare_entity_pb)
322
323
324 def _parse_commit_response(commit_response_pb):
325 """Extract response data from a commit response.
326
327 :type commit_response_pb: :class:`.datastore_pb2.CommitResponse`
328 :param commit_response_pb: The protobuf response from a commit request.
329
330 :rtype: tuple
331 :returns: The pair of the number of index updates and a list of
332 :class:`.entity_pb2.Key` for each incomplete key
333 that was completed in the commit.
334 """
335 mut_results = commit_response_pb.mutation_results
336 index_updates = commit_response_pb.index_updates
337 completed_keys = [mut_result.key for mut_result in mut_results
338 if mut_result.HasField('key')] # Message field (Key)
339 return index_updates, completed_keys
340
[end of datastore/google/cloud/datastore/batch.py]
[start of datastore/google/cloud/datastore/client.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Convenience wrapper for invoking APIs/factories w/ a project."""
15
16 import os
17
18 from google.cloud.proto.datastore.v1 import datastore_pb2 as _datastore_pb2
19
20 from google.cloud._helpers import _LocalStack
21 from google.cloud._helpers import (
22 _determine_default_project as _base_default_project)
23 from google.cloud.client import ClientWithProject
24 from google.cloud.environment_vars import DISABLE_GRPC
25 from google.cloud.environment_vars import GCD_DATASET
26 from google.cloud.environment_vars import GCD_HOST
27
28 from google.cloud.datastore._http import HTTPDatastoreAPI
29 from google.cloud.datastore import helpers
30 from google.cloud.datastore.batch import Batch
31 from google.cloud.datastore.entity import Entity
32 from google.cloud.datastore.key import Key
33 from google.cloud.datastore.query import Query
34 from google.cloud.datastore.transaction import Transaction
35 try:
36 from google.cloud.datastore._gax import make_datastore_api
37 _HAVE_GRPC = True
38 except ImportError: # pragma: NO COVER
39 make_datastore_api = None
40 _HAVE_GRPC = False
41
42
43 _MAX_LOOPS = 128
44 """Maximum number of iterations to wait for deferred keys."""
45 _DATASTORE_BASE_URL = 'https://datastore.googleapis.com'
46 """Datastore API request URL base."""
47
48 _USE_GRPC = _HAVE_GRPC and not os.getenv(DISABLE_GRPC, False)
49
50
51 def _get_gcd_project():
52 """Gets the GCD application ID if it can be inferred."""
53 return os.getenv(GCD_DATASET)
54
55
56 def _determine_default_project(project=None):
57 """Determine default project explicitly or implicitly as fall-back.
58
59 In implicit case, supports four environments. In order of precedence, the
60 implicit environments are:
61
62 * DATASTORE_DATASET environment variable (for ``gcd`` / emulator testing)
63 * GOOGLE_CLOUD_PROJECT environment variable
64 * Google App Engine application ID
65 * Google Compute Engine project ID (from metadata server)
66
67 :type project: str
68 :param project: Optional. The project to use as default.
69
70 :rtype: str or ``NoneType``
71 :returns: Default project if it can be determined.
72 """
73 if project is None:
74 project = _get_gcd_project()
75
76 if project is None:
77 project = _base_default_project(project=project)
78
79 return project
80
81
82 def _extended_lookup(datastore_api, project, key_pbs,
83 missing=None, deferred=None,
84 eventual=False, transaction_id=None):
85 """Repeat lookup until all keys found (unless stop requested).
86
87 Helper function for :meth:`Client.get_multi`.
88
89 :type datastore_api:
90 :class:`google.cloud.datastore._http.HTTPDatastoreAPI`
91 or :class:`google.cloud.datastore._gax.GAPICDatastoreAPI`
92 :param datastore_api: The datastore API object used to connect
93 to datastore.
94
95 :type project: str
96 :param project: The project to make the request for.
97
98 :type key_pbs: list of :class:`.entity_pb2.Key`
99 :param key_pbs: The keys to retrieve from the datastore.
100
101 :type missing: list
102 :param missing: (Optional) If a list is passed, the key-only entity
103 protobufs returned by the backend as "missing" will be
104 copied into it.
105
106 :type deferred: list
107 :param deferred: (Optional) If a list is passed, the key protobufs returned
108 by the backend as "deferred" will be copied into it.
109
110 :type eventual: bool
111 :param eventual: If False (the default), request ``STRONG`` read
112 consistency. If True, request ``EVENTUAL`` read
113 consistency.
114
115 :type transaction_id: str
116 :param transaction_id: If passed, make the request in the scope of
117 the given transaction. Incompatible with
118 ``eventual==True``.
119
120 :rtype: list of :class:`.entity_pb2.Entity`
121 :returns: The requested entities.
122 :raises: :class:`ValueError` if missing / deferred are not null or
123 empty list.
124 """
125 if missing is not None and missing != []:
126 raise ValueError('missing must be None or an empty list')
127
128 if deferred is not None and deferred != []:
129 raise ValueError('deferred must be None or an empty list')
130
131 results = []
132
133 loop_num = 0
134 read_options = _get_read_options(eventual, transaction_id)
135 while loop_num < _MAX_LOOPS: # loop against possible deferred.
136 loop_num += 1
137 lookup_response = datastore_api.lookup(
138 project, read_options, key_pbs)
139
140 # Accumulate the new results.
141 results.extend(result.entity for result in lookup_response.found)
142
143 if missing is not None:
144 missing.extend(result.entity for result in lookup_response.missing)
145
146 if deferred is not None:
147 deferred.extend(lookup_response.deferred)
148 break
149
150 if len(lookup_response.deferred) == 0:
151 break
152
153 # We have deferred keys, and the user didn't ask to know about
154 # them, so retry (but only with the deferred ones).
155 key_pbs = lookup_response.deferred
156
157 return results
158
159
160 class Client(ClientWithProject):
161 """Convenience wrapper for invoking APIs/factories w/ a project.
162
163 .. doctest::
164
165 >>> from google.cloud import datastore
166 >>> client = datastore.Client()
167
168 :type project: str
169 :param project: (optional) The project to pass to proxied API methods.
170
171 :type namespace: str
172 :param namespace: (optional) namespace to pass to proxied API methods.
173
174 :type credentials: :class:`~google.auth.credentials.Credentials`
175 :param credentials: (Optional) The OAuth2 Credentials to use for this
176 client. If not passed (and if no ``_http`` object is
177 passed), falls back to the default inferred from the
178 environment.
179
180 :type _http: :class:`~requests.Session`
181 :param _http: (Optional) HTTP object to make requests. Can be any object
182 that defines ``request()`` with the same interface as
183 :meth:`requests.Session.request`. If not passed, an
184 ``_http`` object is created that is bound to the
185 ``credentials`` for the current object.
186 This parameter should be considered private, and could
187 change in the future.
188
189 :type _use_grpc: bool
190 :param _use_grpc: (Optional) Explicitly specifies whether
191 to use the gRPC transport (via GAX) or HTTP. If unset,
192 falls back to the ``GOOGLE_CLOUD_DISABLE_GRPC``
193 environment variable.
194 This parameter should be considered private, and could
195 change in the future.
196 """
197
198 SCOPE = ('https://www.googleapis.com/auth/datastore',)
199 """The scopes required for authenticating as a Cloud Datastore consumer."""
200
201 def __init__(self, project=None, namespace=None,
202 credentials=None, _http=None, _use_grpc=None):
203 super(Client, self).__init__(
204 project=project, credentials=credentials, _http=_http)
205 self.namespace = namespace
206 self._batch_stack = _LocalStack()
207 self._datastore_api_internal = None
208 if _use_grpc is None:
209 self._use_grpc = _USE_GRPC
210 else:
211 self._use_grpc = _use_grpc
212 try:
213 host = os.environ[GCD_HOST]
214 self._base_url = 'http://' + host
215 except KeyError:
216 self._base_url = _DATASTORE_BASE_URL
217
218 @staticmethod
219 def _determine_default(project):
220 """Helper: override default project detection."""
221 return _determine_default_project(project)
222
223 @property
224 def _datastore_api(self):
225 """Getter for a wrapped API object."""
226 if self._datastore_api_internal is None:
227 if self._use_grpc:
228 self._datastore_api_internal = make_datastore_api(self)
229 else:
230 self._datastore_api_internal = HTTPDatastoreAPI(self)
231 return self._datastore_api_internal
232
233 def _push_batch(self, batch):
234 """Push a batch/transaction onto our stack.
235
236 "Protected", intended for use by batch / transaction context mgrs.
237
238 :type batch: :class:`google.cloud.datastore.batch.Batch`, or an object
239 implementing its API.
240 :param batch: newly-active batch/transaction.
241 """
242 self._batch_stack.push(batch)
243
244 def _pop_batch(self):
245 """Pop a batch/transaction from our stack.
246
247 "Protected", intended for use by batch / transaction context mgrs.
248
249 :raises: IndexError if the stack is empty.
250 :rtype: :class:`google.cloud.datastore.batch.Batch`, or an object
251 implementing its API.
252 :returns: the top-most batch/transaction, after removing it.
253 """
254 return self._batch_stack.pop()
255
256 @property
257 def current_batch(self):
258 """Currently-active batch.
259
260 :rtype: :class:`google.cloud.datastore.batch.Batch`, or an object
261 implementing its API, or ``NoneType`` (if no batch is active).
262 :returns: The batch/transaction at the top of the batch stack.
263 """
264 return self._batch_stack.top
265
266 @property
267 def current_transaction(self):
268 """Currently-active transaction.
269
270 :rtype: :class:`google.cloud.datastore.transaction.Transaction`, or an
271 object implementing its API, or ``NoneType`` (if no transaction
272 is active).
273 :returns: The transaction at the top of the batch stack.
274 """
275 transaction = self.current_batch
276 if isinstance(transaction, Transaction):
277 return transaction
278
279 def get(self, key, missing=None, deferred=None, transaction=None):
280 """Retrieve an entity from a single key (if it exists).
281
282 .. note::
283
284 This is just a thin wrapper over :meth:`get_multi`.
285 The backend API does not make a distinction between a single key or
286 multiple keys in a lookup request.
287
288 :type key: :class:`google.cloud.datastore.key.Key`
289 :param key: The key to be retrieved from the datastore.
290
291 :type missing: list
292 :param missing: (Optional) If a list is passed, the key-only entities
293 returned by the backend as "missing" will be copied
294 into it.
295
296 :type deferred: list
297 :param deferred: (Optional) If a list is passed, the keys returned
298 by the backend as "deferred" will be copied into it.
299
300 :type transaction:
301 :class:`~google.cloud.datastore.transaction.Transaction`
302 :param transaction: (Optional) Transaction to use for read consistency.
303 If not passed, uses current transaction, if set.
304
305 :rtype: :class:`google.cloud.datastore.entity.Entity` or ``NoneType``
306 :returns: The requested entity if it exists.
307 """
308 entities = self.get_multi(keys=[key], missing=missing,
309 deferred=deferred, transaction=transaction)
310 if entities:
311 return entities[0]
312
313 def get_multi(self, keys, missing=None, deferred=None, transaction=None):
314 """Retrieve entities, along with their attributes.
315
316 :type keys: list of :class:`google.cloud.datastore.key.Key`
317 :param keys: The keys to be retrieved from the datastore.
318
319 :type missing: list
320 :param missing: (Optional) If a list is passed, the key-only entities
321 returned by the backend as "missing" will be copied
322 into it. If the list is not empty, an error will occur.
323
324 :type deferred: list
325 :param deferred: (Optional) If a list is passed, the keys returned
326 by the backend as "deferred" will be copied into it.
327 If the list is not empty, an error will occur.
328
329 :type transaction:
330 :class:`~google.cloud.datastore.transaction.Transaction`
331 :param transaction: (Optional) Transaction to use for read consistency.
332 If not passed, uses current transaction, if set.
333
334 :rtype: list of :class:`google.cloud.datastore.entity.Entity`
335 :returns: The requested entities.
336 :raises: :class:`ValueError` if one or more of ``keys`` has a project
337 which does not match our project.
338 """
339 if not keys:
340 return []
341
342 ids = set(key.project for key in keys)
343 for current_id in ids:
344 if current_id != self.project:
345 raise ValueError('Keys do not match project')
346
347 if transaction is None:
348 transaction = self.current_transaction
349
350 entity_pbs = _extended_lookup(
351 datastore_api=self._datastore_api,
352 project=self.project,
353 key_pbs=[k.to_protobuf() for k in keys],
354 missing=missing,
355 deferred=deferred,
356 transaction_id=transaction and transaction.id,
357 )
358
359 if missing is not None:
360 missing[:] = [
361 helpers.entity_from_protobuf(missed_pb)
362 for missed_pb in missing]
363
364 if deferred is not None:
365 deferred[:] = [
366 helpers.key_from_protobuf(deferred_pb)
367 for deferred_pb in deferred]
368
369 return [helpers.entity_from_protobuf(entity_pb)
370 for entity_pb in entity_pbs]
371
372 def put(self, entity):
373 """Save an entity in the Cloud Datastore.
374
375 .. note::
376
377 This is just a thin wrapper over :meth:`put_multi`.
378 The backend API does not make a distinction between a single
379 entity or multiple entities in a commit request.
380
381 :type entity: :class:`google.cloud.datastore.entity.Entity`
382 :param entity: The entity to be saved to the datastore.
383 """
384 self.put_multi(entities=[entity])
385
386 def put_multi(self, entities):
387 """Save entities in the Cloud Datastore.
388
389 :type entities: list of :class:`google.cloud.datastore.entity.Entity`
390 :param entities: The entities to be saved to the datastore.
391
392 :raises: :class:`ValueError` if ``entities`` is a single entity.
393 """
394 if isinstance(entities, Entity):
395 raise ValueError("Pass a sequence of entities")
396
397 if not entities:
398 return
399
400 current = self.current_batch
401 in_batch = current is not None
402
403 if not in_batch:
404 current = self.batch()
405 current.begin()
406
407 for entity in entities:
408 current.put(entity)
409
410 if not in_batch:
411 current.commit()
412
413 def delete(self, key):
414 """Delete the key in the Cloud Datastore.
415
416 .. note::
417
418 This is just a thin wrapper over :meth:`delete_multi`.
419 The backend API does not make a distinction between a single key or
420 multiple keys in a commit request.
421
422 :type key: :class:`google.cloud.datastore.key.Key`
423 :param key: The key to be deleted from the datastore.
424 """
425 self.delete_multi(keys=[key])
426
427 def delete_multi(self, keys):
428 """Delete keys from the Cloud Datastore.
429
430 :type keys: list of :class:`google.cloud.datastore.key.Key`
431 :param keys: The keys to be deleted from the Datastore.
432 """
433 if not keys:
434 return
435
436 # We allow partial keys to attempt a delete, the backend will fail.
437 current = self.current_batch
438 in_batch = current is not None
439
440 if not in_batch:
441 current = self.batch()
442 current.begin()
443
444 for key in keys:
445 current.delete(key)
446
447 if not in_batch:
448 current.commit()
449
450 def allocate_ids(self, incomplete_key, num_ids):
451 """Allocate a list of IDs from a partial key.
452
453 :type incomplete_key: :class:`google.cloud.datastore.key.Key`
454 :param incomplete_key: Partial key to use as base for allocated IDs.
455
456 :type num_ids: int
457 :param num_ids: The number of IDs to allocate.
458
459 :rtype: list of :class:`google.cloud.datastore.key.Key`
460 :returns: The (complete) keys allocated with ``incomplete_key`` as
461 root.
462 :raises: :class:`ValueError` if ``incomplete_key`` is not a
463 partial key.
464 """
465 if not incomplete_key.is_partial:
466 raise ValueError(('Key is not partial.', incomplete_key))
467
468 incomplete_key_pb = incomplete_key.to_protobuf()
469 incomplete_key_pbs = [incomplete_key_pb] * num_ids
470
471 response_pb = self._datastore_api.allocate_ids(
472 incomplete_key.project, incomplete_key_pbs)
473 allocated_ids = [allocated_key_pb.path[-1].id
474 for allocated_key_pb in response_pb.keys]
475 return [incomplete_key.completed_key(allocated_id)
476 for allocated_id in allocated_ids]
477
478 def key(self, *path_args, **kwargs):
479 """Proxy to :class:`google.cloud.datastore.key.Key`.
480
481 Passes our ``project``.
482 """
483 if 'project' in kwargs:
484 raise TypeError('Cannot pass project')
485 kwargs['project'] = self.project
486 if 'namespace' not in kwargs:
487 kwargs['namespace'] = self.namespace
488 return Key(*path_args, **kwargs)
489
490 def batch(self):
491 """Proxy to :class:`google.cloud.datastore.batch.Batch`."""
492 return Batch(self)
493
494 def transaction(self):
495 """Proxy to :class:`google.cloud.datastore.transaction.Transaction`."""
496 return Transaction(self)
497
498 def query(self, **kwargs):
499 """Proxy to :class:`google.cloud.datastore.query.Query`.
500
501 Passes our ``project``.
502
503 Using query to search a datastore:
504
505 .. testsetup:: query
506
507 from google.cloud import datastore
508
509 client = datastore.Client()
510 query = client.query(kind='_Doctest')
511
512 def do_something(entity):
513 pass
514
515 .. doctest:: query
516
517 >>> query = client.query(kind='MyKind')
518 >>> query.add_filter('property', '=', 'val')
519
520 Using the query iterator
521
522 .. doctest:: query
523
524 >>> query_iter = query.fetch()
525 >>> for entity in query_iter:
526 ... do_something(entity)
527
528 or manually page through results
529
530 .. testsetup:: query-page
531
532 from google.cloud import datastore
533 from tests.system.test_system import Config # system tests
534
535 client = datastore.Client()
536
537 key = client.key('_Doctest')
538 entity1 = datastore.Entity(key=key)
539 entity1['foo'] = 1337
540 entity2 = datastore.Entity(key=key)
541 entity2['foo'] = 42
542 Config.TO_DELETE.extend([entity1, entity2])
543 client.put_multi([entity1, entity2])
544
545 query = client.query(kind='_Doctest')
546 cursor = None
547
548 .. doctest:: query-page
549
550 >>> query_iter = query.fetch(start_cursor=cursor)
551 >>> pages = query_iter.pages
552 >>>
553 >>> first_page = next(pages)
554 >>> first_page_entities = list(first_page)
555 >>> query_iter.next_page_token
556 b'...'
557
558 :type kwargs: dict
559 :param kwargs: Parameters for initializing and instance of
560 :class:`~google.cloud.datastore.query.Query`.
561
562 :rtype: :class:`~google.cloud.datastore.query.Query`
563 :returns: A query object.
564 """
565 if 'client' in kwargs:
566 raise TypeError('Cannot pass client')
567 if 'project' in kwargs:
568 raise TypeError('Cannot pass project')
569 kwargs['project'] = self.project
570 if 'namespace' not in kwargs:
571 kwargs['namespace'] = self.namespace
572 return Query(self, **kwargs)
573
574
575 def _get_read_options(eventual, transaction_id):
576 """Validate rules for read options, and assign to the request.
577
578 Helper method for ``lookup()`` and ``run_query``.
579
580 :type eventual: bool
581 :param eventual: Flag indicating if ``EVENTUAL`` or ``STRONG``
582 consistency should be used.
583
584 :type transaction_id: bytes
585 :param transaction_id: A transaction identifier (may be null).
586
587 :rtype: :class:`.datastore_pb2.ReadOptions`
588 :returns: The read options corresponding to the inputs.
589 :raises: :class:`ValueError` if ``eventual`` is ``True`` and the
590 ``transaction_id`` is not ``None``.
591 """
592 if transaction_id is None:
593 if eventual:
594 return _datastore_pb2.ReadOptions(
595 read_consistency=_datastore_pb2.ReadOptions.EVENTUAL)
596 else:
597 return _datastore_pb2.ReadOptions()
598 else:
599 if eventual:
600 raise ValueError('eventual must be False when in a transaction')
601 else:
602 return _datastore_pb2.ReadOptions(
603 transaction=transaction_id)
604
[end of datastore/google/cloud/datastore/client.py]
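For orientation, here is a small usage sketch that ties the `Client` helpers above together (`allocate_ids`, `delete_multi`, `key` and `query` as defined in this file). The `'Task'` kind and the fetch limit are illustrative choices, not part of the library; the sketch assumes `google-cloud-datastore` is installed and the project/credentials are configured as in the testsetup blocks.

```python
from google.cloud import datastore

client = datastore.Client()

# allocate_ids() requires a partial key (no ID on the final path element)
# and returns completed keys.
incomplete_key = client.key('Task')
task_keys = client.allocate_ids(incomplete_key, 3)

# delete_multi() opens and commits its own batch when no batch or
# transaction is currently active on the client.
client.delete_multi(task_keys)

# query() proxies to Query, injecting the client's project and namespace.
query = client.query(kind='Task')
results = list(query.fetch(limit=10))
```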
[start of datastore/google/cloud/datastore/transaction.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Create / interact with Google Cloud Datastore transactions."""
16
17 from google.cloud.datastore.batch import Batch
18
19
20 class Transaction(Batch):
21 """An abstraction representing datastore Transactions.
22
23 Transactions can be used to build up a bulk mutation and ensure all
24 or none succeed (transactionally).
25
26 For example, the following snippet of code will put the two ``save``
27 operations (either ``insert`` or ``upsert``) into the same
28 mutation, and execute those within a transaction:
29
30 .. testsetup:: txn-put-multi, txn-api
31
32 from google.cloud import datastore
33 from tests.system.test_system import Config # system tests
34
35 client = datastore.Client()
36 key1 = client.key('_Doctest')
37 entity1 = datastore.Entity(key=key1)
38 entity1['foo'] = 1337
39
40 key2 = client.key('_Doctest', 'abcd1234')
41 entity2 = datastore.Entity(key=key2)
42 entity2['foo'] = 42
43
44 Config.TO_DELETE.extend([entity1, entity2])
45
46 .. doctest:: txn-put-multi
47
48 >>> with client.transaction():
49 ... client.put_multi([entity1, entity2])
50
51 Because it derives from :class:`~google.cloud.datastore.batch.Batch`,
52 :class:`Transaction` also provides :meth:`put` and :meth:`delete` methods:
53
54 .. doctest:: txn-api
55
56 >>> with client.transaction() as xact:
57 ... xact.put(entity1)
58 ... xact.delete(entity2.key)
59
60 By default, the transaction is rolled back if the transaction block
61 exits with an error:
62
63 .. testsetup:: txn-error
64
65 from google.cloud import datastore
66
67 client = datastore.Client()
68
69 def do_some_work():
70 return
71
72 class SomeException(Exception):
73 pass
74
75 .. doctest:: txn-error
76
77 >>> with client.transaction():
78 ... do_some_work()
79 ... raise SomeException # rolls back
80 Traceback (most recent call last):
81 ...
82 SomeException
83
84 If the transaction block exits without an exception, it will commit
85 by default.
86
87 .. warning::
88
89 Inside a transaction, automatically assigned IDs for
90 entities will not be available at save time! That means, if you
91 try:
92
93 .. testsetup:: txn-entity-key, txn-entity-key-after, txn-manual
94
95 from google.cloud import datastore
96 from tests.system.test_system import Config # system tests
97
98 client = datastore.Client()
99
100 def Entity(*args, **kwargs):
101 entity = datastore.Entity(*args, **kwargs)
102 Config.TO_DELETE.append(entity)
103 return entity
104
105 .. doctest:: txn-entity-key
106
107 >>> with client.transaction():
108 ... entity = Entity(key=client.key('Thing'))
109 ... client.put(entity)
110
111 ``entity`` won't have a complete key until the transaction is
112 committed.
113
114 Once you exit the transaction (or call :meth:`commit`), the
115 automatically generated ID will be assigned to the entity:
116
117 .. doctest:: txn-entity-key-after
118
119 >>> with client.transaction():
120 ... entity = Entity(key=client.key('Thing'))
121 ... client.put(entity)
122 ... print(entity.key.is_partial) # There is no ID on this key.
123 ...
124 True
125 >>> print(entity.key.is_partial) # There *is* an ID.
126 False
127
128 If you don't want to use the context manager you can initialize a
129 transaction manually:
130
131 .. doctest:: txn-manual
132
133 >>> transaction = client.transaction()
134 >>> transaction.begin()
135 >>>
136 >>> entity = Entity(key=client.key('Thing'))
137 >>> transaction.put(entity)
138 >>>
139 >>> transaction.commit()
140
141 :type client: :class:`google.cloud.datastore.client.Client`
142 :param client: the client used to connect to datastore.
143 """
144
145 _status = None
146
147 def __init__(self, client):
148 super(Transaction, self).__init__(client)
149 self._id = None
150
151 @property
152 def id(self):
153 """Getter for the transaction ID.
154
155 :rtype: str
156 :returns: The ID of the current transaction.
157 """
158 return self._id
159
160 def current(self):
161 """Return the topmost transaction.
162
163 .. note::
164
165 If the topmost element on the stack is not a transaction,
166 returns None.
167
168 :rtype: :class:`google.cloud.datastore.transaction.Transaction` or None
169 :returns: The current transaction (if any are active).
170 """
171 top = super(Transaction, self).current()
172 if isinstance(top, Transaction):
173 return top
174
175 def begin(self):
176 """Begins a transaction.
177
178 This method is called automatically when entering a with
179 statement, however it can be called explicitly if you don't want
180 to use a context manager.
181
182 :raises: :class:`~exceptions.ValueError` if the transaction has
183 already begun.
184 """
185 super(Transaction, self).begin()
186 try:
187 response_pb = self._client._datastore_api.begin_transaction(
188 self.project)
189 self._id = response_pb.transaction
190 except: # noqa: E722 do not use bare except, specify exception instead
191 self._status = self._ABORTED
192 raise
193
194 def rollback(self):
195 """Rolls back the current transaction.
196
197 This method has necessary side-effects:
198
199 - Sets the current transaction's ID to None.
200 """
201 try:
202             # No need to use the response; it contains nothing.
203 self._client._datastore_api.rollback(self.project, self._id)
204 finally:
205 super(Transaction, self).rollback()
206 # Clear our own ID in case this gets accidentally reused.
207 self._id = None
208
209 def commit(self):
210 """Commits the transaction.
211
212 This is called automatically upon exiting a with statement,
213 however it can be called explicitly if you don't want to use a
214 context manager.
215
216 This method has necessary side-effects:
217
218 - Sets the current transaction's ID to None.
219 """
220 try:
221 super(Transaction, self).commit()
222 finally:
223 # Clear our own ID in case this gets accidentally reused.
224 self._id = None
225
[end of datastore/google/cloud/datastore/transaction.py]
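The context-manager behaviour documented above reduces to an explicit begin/commit/rollback sequence. A minimal sketch of that manual flow (assuming a configured client; the `'Thing'` kind is illustrative):

```python
from google.cloud import datastore

client = datastore.Client()

xact = client.transaction()
xact.begin()
try:
    entity = datastore.Entity(key=client.key('Thing'))
    xact.put(entity)
except Exception:
    xact.rollback()  # also clears the transaction ID, as in rollback() above
    raise
else:
    xact.commit()    # clears the transaction ID after committing
```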
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
googleapis/google-cloud-python
|
d750a1394781dbe33679057c02233d2dfc2540e1
|
Entities created in datastore snippets should use a namespace
This is useful for debugging build failures. This is [already done][1] for the rest of the entities created in the datastore system test.
[1]: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/e63ddc384e4a54a6a085595d779a399ed449c26c/datastore/tests/system/test_system.py#L52-L64
|
2017-08-11T17:25:00Z
|
<patch>
diff --git a/datastore/google/cloud/datastore/client.py b/datastore/google/cloud/datastore/client.py
--- a/datastore/google/cloud/datastore/client.py
+++ b/datastore/google/cloud/datastore/client.py
@@ -504,56 +504,64 @@ def query(self, **kwargs):
.. testsetup:: query
- from google.cloud import datastore
+ import os
+ import uuid
- client = datastore.Client()
- query = client.query(kind='_Doctest')
+ from google.cloud import datastore
- def do_something(entity):
- pass
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
+ query = client.query(kind='_Doctest')
+
+ def do_something(entity):
+ pass
.. doctest:: query
- >>> query = client.query(kind='MyKind')
- >>> query.add_filter('property', '=', 'val')
+ >>> query = client.query(kind='MyKind')
+ >>> query.add_filter('property', '=', 'val')
Using the query iterator
.. doctest:: query
- >>> query_iter = query.fetch()
- >>> for entity in query_iter:
- ... do_something(entity)
+ >>> query_iter = query.fetch()
+ >>> for entity in query_iter:
+ ... do_something(entity)
or manually page through results
.. testsetup:: query-page
- from google.cloud import datastore
- from tests.system.test_system import Config # system tests
+ import os
+ import uuid
+
+ from google.cloud import datastore
+ from tests.system.test_system import Config # system tests
- client = datastore.Client()
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
- key = client.key('_Doctest')
- entity1 = datastore.Entity(key=key)
- entity1['foo'] = 1337
- entity2 = datastore.Entity(key=key)
- entity2['foo'] = 42
- Config.TO_DELETE.extend([entity1, entity2])
- client.put_multi([entity1, entity2])
+ key = client.key('_Doctest')
+ entity1 = datastore.Entity(key=key)
+ entity1['foo'] = 1337
+ entity2 = datastore.Entity(key=key)
+ entity2['foo'] = 42
+ Config.TO_DELETE.extend([entity1, entity2])
+ client.put_multi([entity1, entity2])
- query = client.query(kind='_Doctest')
- cursor = None
+ query = client.query(kind='_Doctest')
+ cursor = None
.. doctest:: query-page
- >>> query_iter = query.fetch(start_cursor=cursor)
- >>> pages = query_iter.pages
- >>>
- >>> first_page = next(pages)
- >>> first_page_entities = list(first_page)
- >>> query_iter.next_page_token
- b'...'
+ >>> query_iter = query.fetch(start_cursor=cursor)
+ >>> pages = query_iter.pages
+ >>>
+ >>> first_page = next(pages)
+ >>> first_page_entities = list(first_page)
+ >>> query_iter.next_page_token
+ b'...'
:type kwargs: dict
:param kwargs: Parameters for initializing and instance of
diff --git a/datastore/google/cloud/datastore/entity.py b/datastore/google/cloud/datastore/entity.py
--- a/datastore/google/cloud/datastore/entity.py
+++ b/datastore/google/cloud/datastore/entity.py
@@ -42,29 +42,33 @@ class Entity(dict):
.. testsetup:: entity-ctor
- from google.cloud import datastore
- from tests.system.test_system import Config # system tests
+ import os
+ import uuid
+
+ from google.cloud import datastore
+ from tests.system.test_system import Config # system tests
- client = datastore.Client()
- key = client.key('EntityKind', 1234, namespace='_Doctest')
- entity = datastore.Entity(key=key)
- entity['property'] = 'value'
- Config.TO_DELETE.append(entity)
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
+ key = client.key('EntityKind', 1234, namespace='_Doctest')
+ entity = datastore.Entity(key=key)
+ entity['property'] = 'value'
+ Config.TO_DELETE.append(entity)
- client.put(entity)
+ client.put(entity)
.. doctest:: entity-ctor
- >>> client.get(key)
- <Entity('EntityKind', 1234) {'property': 'value'}>
+ >>> client.get(key)
+ <Entity('EntityKind', 1234) {'property': 'value'}>
You can the set values on the entity just like you would on any
other dictionary.
.. doctest:: entity-ctor
- >>> entity['age'] = 20
- >>> entity['name'] = 'JJ'
+ >>> entity['age'] = 20
+ >>> entity['name'] = 'JJ'
However, not all types are allowed as a value for a Google Cloud Datastore
entity. The following basic types are supported by the API:
diff --git a/datastore/google/cloud/datastore/transaction.py b/datastore/google/cloud/datastore/transaction.py
--- a/datastore/google/cloud/datastore/transaction.py
+++ b/datastore/google/cloud/datastore/transaction.py
@@ -29,24 +29,28 @@ class Transaction(Batch):
.. testsetup:: txn-put-multi, txn-api
- from google.cloud import datastore
- from tests.system.test_system import Config # system tests
+ import os
+ import uuid
- client = datastore.Client()
- key1 = client.key('_Doctest')
- entity1 = datastore.Entity(key=key1)
- entity1['foo'] = 1337
+ from google.cloud import datastore
+ from tests.system.test_system import Config # system tests
- key2 = client.key('_Doctest', 'abcd1234')
- entity2 = datastore.Entity(key=key2)
- entity2['foo'] = 42
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
+ key1 = client.key('_Doctest')
+ entity1 = datastore.Entity(key=key1)
+ entity1['foo'] = 1337
- Config.TO_DELETE.extend([entity1, entity2])
+ key2 = client.key('_Doctest', 'abcd1234')
+ entity2 = datastore.Entity(key=key2)
+ entity2['foo'] = 42
+
+ Config.TO_DELETE.extend([entity1, entity2])
.. doctest:: txn-put-multi
- >>> with client.transaction():
- ... client.put_multi([entity1, entity2])
+ >>> with client.transaction():
+ ... client.put_multi([entity1, entity2])
Because it derives from :class:`~google.cloud.datastore.batch.Batch`,
:class:`Transaction` also provides :meth:`put` and :meth:`delete` methods:
@@ -62,51 +66,59 @@ class Transaction(Batch):
.. testsetup:: txn-error
- from google.cloud import datastore
+ import os
+ import uuid
+
+ from google.cloud import datastore
- client = datastore.Client()
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
- def do_some_work():
- return
+ def do_some_work():
+ return
- class SomeException(Exception):
- pass
+ class SomeException(Exception):
+ pass
.. doctest:: txn-error
- >>> with client.transaction():
- ... do_some_work()
- ... raise SomeException # rolls back
- Traceback (most recent call last):
- ...
- SomeException
+ >>> with client.transaction():
+ ... do_some_work()
+ ... raise SomeException # rolls back
+ Traceback (most recent call last):
+ ...
+ SomeException
If the transaction block exits without an exception, it will commit
by default.
.. warning::
- Inside a transaction, automatically assigned IDs for
- entities will not be available at save time! That means, if you
- try:
+ Inside a transaction, automatically assigned IDs for
+ entities will not be available at save time! That means, if you
+ try:
+
+ .. testsetup:: txn-entity-key, txn-entity-key-after, txn-manual
- .. testsetup:: txn-entity-key, txn-entity-key-after, txn-manual
+ import os
+ import uuid
- from google.cloud import datastore
- from tests.system.test_system import Config # system tests
+ from google.cloud import datastore
+ from tests.system.test_system import Config # system tests
- client = datastore.Client()
+ unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])
+ client = datastore.Client(namespace='ns{}'.format(unique))
- def Entity(*args, **kwargs):
- entity = datastore.Entity(*args, **kwargs)
- Config.TO_DELETE.append(entity)
- return entity
+ def Entity(*args, **kwargs):
+ entity = datastore.Entity(*args, **kwargs)
+ Config.TO_DELETE.append(entity)
+ return entity
- .. doctest:: txn-entity-key
+ .. doctest:: txn-entity-key
- >>> with client.transaction():
- ... entity = Entity(key=client.key('Thing'))
- ... client.put(entity)
+ >>> with client.transaction():
+ ... entity = Entity(key=client.key('Thing'))
+ ... client.put(entity)
``entity`` won't have a complete key until the transaction is
committed.
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-16860
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: DataFrame.select_dtypes should accept scalar
```
In [164]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
In [165]: df.select_dtypes(include='object')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-165-04044faa1a5a> in <module>()
----> 1 df.select_dtypes(include='object')
~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py in select_dtypes(self, include, exclude)
2355 include, exclude = include or (), exclude or ()
2356 if not (is_list_like(include) and is_list_like(exclude)):
-> 2357 raise TypeError('include and exclude must both be non-string'
2358 ' sequences')
2359 selection = tuple(map(frozenset, (include, exclude)))
TypeError: include and exclude must both be non-string sequences
In [166]: df.select_dtypes(include=['object'])
Out[166]:
b
0 a
1 b
2 c
```
#### Problem description
Only a convenience thing, but basically anywhere else we take list-likes, we accept a single string and I think should do the same here.
`pandas 0.20.2`
</issue>
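For context, the conventional way such APIs accept both a scalar and a sequence is to normalize the scalar up front; a minimal sketch of that pattern (illustrative only — the actual change to `DataFrame.select_dtypes` is shown in the patch further down):

```python
from pandas.api.types import is_list_like

def _normalize(arg):
    # Strings are not list-like to pandas, so a bare 'object' gets wrapped
    # into a tuple; None means "not supplied" and becomes an empty tuple.
    if not is_list_like(arg):
        return (arg,) if arg is not None else ()
    return arg

_normalize('object')    # -> ('object',)
_normalize(['object'])  # -> ['object']
_normalize(None)        # -> ()
```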
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="http://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="http://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 http://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](http://pypi.python.org/pypi/pandas/) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.7.0 or higher
164 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
165 - [pytz](http://pytz.sourceforge.net)
166 - Needed for time zone support with ``pandas.date_range``
167
168 See the [full installation instructions](http://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
169 for recommended and optional dependencies.
170
171 ## Installation from sources
172 To install pandas from source you need Cython in addition to the normal
173 dependencies above. Cython can be installed from pypi:
174
175 ```sh
176 pip install cython
177 ```
178
179 In the `pandas` directory (same one where you found this file after
180 cloning the git repo), execute:
181
182 ```sh
183 python setup.py install
184 ```
185
186 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
187
188 ```sh
189 python setup.py develop
190 ```
191
192 Alternatively, you can use `pip` if you want all the dependencies pulled
193 in automatically (the `-e` option is for installing it in [development
194 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
195
196 ```sh
197 pip install -e .
198 ```
199
200 See the full instructions for [installing from source](http://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
201
202 ## License
203 BSD
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: http://pandas.pydata.org/pandas-docs/stable/
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide.](http://pandas.pydata.org/pandas-docs/stable/contributing.html)**
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
229
230 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
231
232 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
233
[end of README.md]
[start of pandas/core/reshape/concat.py]
1 """
2 concat routines
3 """
4
5 import numpy as np
6 from pandas import compat, DataFrame, Series, Index, MultiIndex
7 from pandas.core.index import (_get_combined_index,
8 _ensure_index, _get_consensus_names,
9 _all_indexes_same)
10 from pandas.core.categorical import (_factorize_from_iterable,
11 _factorize_from_iterables)
12 from pandas.core.internals import concatenate_block_managers
13 from pandas.core import common as com
14 from pandas.core.generic import NDFrame
15 import pandas.core.dtypes.concat as _concat
16
17 # ---------------------------------------------------------------------
18 # Concatenate DataFrame objects
19
20
21 def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
22 keys=None, levels=None, names=None, verify_integrity=False,
23 copy=True):
24 """
25 Concatenate pandas objects along a particular axis with optional set logic
26 along the other axes.
27
28 Can also add a layer of hierarchical indexing on the concatenation axis,
29 which may be useful if the labels are the same (or overlapping) on
30 the passed axis number.
31
32 Parameters
33 ----------
34 objs : a sequence or mapping of Series, DataFrame, or Panel objects
35 If a dict is passed, the sorted keys will be used as the `keys`
36 argument, unless it is passed, in which case the values will be
37 selected (see below). Any None objects will be dropped silently unless
38 they are all None in which case a ValueError will be raised
39 axis : {0/'index', 1/'columns'}, default 0
40 The axis to concatenate along
41 join : {'inner', 'outer'}, default 'outer'
42 How to handle indexes on other axis(es)
43 join_axes : list of Index objects
44 Specific indexes to use for the other n - 1 axes instead of performing
45 inner/outer set logic
46 ignore_index : boolean, default False
47 If True, do not use the index values along the concatenation axis. The
48 resulting axis will be labeled 0, ..., n - 1. This is useful if you are
49 concatenating objects where the concatenation axis does not have
50 meaningful indexing information. Note the index values on the other
51 axes are still respected in the join.
52 keys : sequence, default None
53 If multiple levels passed, should contain tuples. Construct
54 hierarchical index using the passed keys as the outermost level
55 levels : list of sequences, default None
56 Specific levels (unique values) to use for constructing a
57 MultiIndex. Otherwise they will be inferred from the keys
58 names : list, default None
59 Names for the levels in the resulting hierarchical index
60 verify_integrity : boolean, default False
61 Check whether the new concatenated axis contains duplicates. This can
62 be very expensive relative to the actual data concatenation
63 copy : boolean, default True
64 If False, do not copy data unnecessarily
65
66 Returns
67 -------
68 concatenated : type of objects
69
70 Notes
71 -----
72 The keys, levels, and names arguments are all optional.
73
74 A walkthrough of how this method fits in with other tools for combining
75 panda objects can be found `here
76     pandas objects can be found `here
77
78 See Also
79 --------
80 Series.append
81 DataFrame.append
82 DataFrame.join
83 DataFrame.merge
84
85 Examples
86 --------
87 Combine two ``Series``.
88
89 >>> s1 = pd.Series(['a', 'b'])
90 >>> s2 = pd.Series(['c', 'd'])
91 >>> pd.concat([s1, s2])
92 0 a
93 1 b
94 0 c
95 1 d
96 dtype: object
97
98 Clear the existing index and reset it in the result
99 by setting the ``ignore_index`` option to ``True``.
100
101 >>> pd.concat([s1, s2], ignore_index=True)
102 0 a
103 1 b
104 2 c
105 3 d
106 dtype: object
107
108 Add a hierarchical index at the outermost level of
109 the data with the ``keys`` option.
110
111 >>> pd.concat([s1, s2], keys=['s1', 's2',])
112 s1 0 a
113 1 b
114 s2 0 c
115 1 d
116 dtype: object
117
118 Label the index keys you create with the ``names`` option.
119
120 >>> pd.concat([s1, s2], keys=['s1', 's2'],
121 ... names=['Series name', 'Row ID'])
122 Series name Row ID
123 s1 0 a
124 1 b
125 s2 0 c
126 1 d
127 dtype: object
128
129 Combine two ``DataFrame`` objects with identical columns.
130
131 >>> df1 = pd.DataFrame([['a', 1], ['b', 2]],
132 ... columns=['letter', 'number'])
133 >>> df1
134 letter number
135 0 a 1
136 1 b 2
137 >>> df2 = pd.DataFrame([['c', 3], ['d', 4]],
138 ... columns=['letter', 'number'])
139 >>> df2
140 letter number
141 0 c 3
142 1 d 4
143 >>> pd.concat([df1, df2])
144 letter number
145 0 a 1
146 1 b 2
147 0 c 3
148 1 d 4
149
150 Combine ``DataFrame`` objects with overlapping columns
151 and return everything. Columns outside the intersection will
152 be filled with ``NaN`` values.
153
154 >>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],
155 ... columns=['letter', 'number', 'animal'])
156 >>> df3
157 letter number animal
158 0 c 3 cat
159 1 d 4 dog
160 >>> pd.concat([df1, df3])
161 animal letter number
162 0 NaN a 1
163 1 NaN b 2
164 0 cat c 3
165 1 dog d 4
166
167 Combine ``DataFrame`` objects with overlapping columns
168 and return only those that are shared by passing ``inner`` to
169 the ``join`` keyword argument.
170
171 >>> pd.concat([df1, df3], join="inner")
172 letter number
173 0 a 1
174 1 b 2
175 0 c 3
176 1 d 4
177
178 Combine ``DataFrame`` objects horizontally along the x axis by
179 passing in ``axis=1``.
180
181 >>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']],
182 ... columns=['animal', 'name'])
183 >>> pd.concat([df1, df4], axis=1)
184 letter number animal name
185 0 a 1 bird polly
186 1 b 2 monkey george
187
188 Prevent the result from including duplicate index values with the
189 ``verify_integrity`` option.
190
191 >>> df5 = pd.DataFrame([1], index=['a'])
192 >>> df5
193 0
194 a 1
195 >>> df6 = pd.DataFrame([2], index=['a'])
196 >>> df6
197 0
198 a 2
199 >>> pd.concat([df5, df6], verify_integrity=True)
200 Traceback (most recent call last):
201 ...
202 ValueError: Indexes have overlapping values: ['a']
203 """
204 op = _Concatenator(objs, axis=axis, join_axes=join_axes,
205 ignore_index=ignore_index, join=join,
206 keys=keys, levels=levels, names=names,
207 verify_integrity=verify_integrity,
208 copy=copy)
209 return op.get_result()
210
211
212 class _Concatenator(object):
213 """
214 Orchestrates a concatenation operation for BlockManagers
215 """
216
217 def __init__(self, objs, axis=0, join='outer', join_axes=None,
218 keys=None, levels=None, names=None,
219 ignore_index=False, verify_integrity=False, copy=True):
220 if isinstance(objs, (NDFrame, compat.string_types)):
221 raise TypeError('first argument must be an iterable of pandas '
222 'objects, you passed an object of type '
223 '"{0}"'.format(type(objs).__name__))
224
225 if join == 'outer':
226 self.intersect = False
227 elif join == 'inner':
228 self.intersect = True
229 else: # pragma: no cover
230 raise ValueError('Only can inner (intersect) or outer (union) '
231 'join the other axis')
232
233 if isinstance(objs, dict):
234 if keys is None:
235 keys = sorted(objs)
236 objs = [objs[k] for k in keys]
237 else:
238 objs = list(objs)
239
240 if len(objs) == 0:
241 raise ValueError('No objects to concatenate')
242
243 if keys is None:
244 objs = [obj for obj in objs if obj is not None]
245 else:
246 # #1649
247 clean_keys = []
248 clean_objs = []
249 for k, v in zip(keys, objs):
250 if v is None:
251 continue
252 clean_keys.append(k)
253 clean_objs.append(v)
254 objs = clean_objs
255 name = getattr(keys, 'name', None)
256 keys = Index(clean_keys, name=name)
257
258 if len(objs) == 0:
259 raise ValueError('All objects passed were None')
260
261 # consolidate data & figure out what our result ndim is going to be
262 ndims = set()
263 for obj in objs:
264 if not isinstance(obj, NDFrame):
265 raise TypeError("cannot concatenate a non-NDFrame object")
266
267 # consolidate
268 obj._consolidate(inplace=True)
269 ndims.add(obj.ndim)
270
271 # get the sample
272         # want the highest ndim that we have, and must be non-empty
273 # unless all objs are empty
274 sample = None
275 if len(ndims) > 1:
276 max_ndim = max(ndims)
277 for obj in objs:
278 if obj.ndim == max_ndim and np.sum(obj.shape):
279 sample = obj
280 break
281
282 else:
283             # filter out the empties if we have no multi-index possibilities
284             # note to keep empty Series as it affects the result columns / name
285 non_empties = [obj for obj in objs
286 if sum(obj.shape) > 0 or isinstance(obj, Series)]
287
288 if (len(non_empties) and (keys is None and names is None and
289 levels is None and
290 join_axes is None and
291 not self.intersect)):
292 objs = non_empties
293 sample = objs[0]
294
295 if sample is None:
296 sample = objs[0]
297 self.objs = objs
298
299 # Standardize axis parameter to int
300 if isinstance(sample, Series):
301 axis = DataFrame()._get_axis_number(axis)
302 else:
303 axis = sample._get_axis_number(axis)
304
305 # Need to flip BlockManager axis in the DataFrame special case
306 self._is_frame = isinstance(sample, DataFrame)
307 if self._is_frame:
308 axis = 1 if axis == 0 else 0
309
310 self._is_series = isinstance(sample, Series)
311 if not 0 <= axis <= sample.ndim:
312 raise AssertionError("axis must be between 0 and {0}, "
313 "input was {1}".format(sample.ndim, axis))
314
315 # if we have mixed ndims, then convert to highest ndim
316 # creating column numbers as needed
317 if len(ndims) > 1:
318 current_column = 0
319 max_ndim = sample.ndim
320 self.objs, objs = [], self.objs
321 for obj in objs:
322
323 ndim = obj.ndim
324 if ndim == max_ndim:
325 pass
326
327 elif ndim != max_ndim - 1:
328 raise ValueError("cannot concatenate unaligned mixed "
329 "dimensional NDFrame objects")
330
331 else:
332 name = getattr(obj, 'name', None)
333 if ignore_index or name is None:
334 name = current_column
335 current_column += 1
336
337 # doing a row-wise concatenation so need everything
338 # to line up
339 if self._is_frame and axis == 1:
340 name = 0
341 obj = sample._constructor({name: obj})
342
343 self.objs.append(obj)
344
345 # note: this is the BlockManager axis (since DataFrame is transposed)
346 self.axis = axis
347 self.join_axes = join_axes
348 self.keys = keys
349 self.names = names or getattr(keys, 'names', None)
350 self.levels = levels
351
352 self.ignore_index = ignore_index
353 self.verify_integrity = verify_integrity
354 self.copy = copy
355
356 self.new_axes = self._get_new_axes()
357
358 def get_result(self):
359
360 # series only
361 if self._is_series:
362
363 # stack blocks
364 if self.axis == 0:
365 # concat Series with length to keep dtype as much
366 non_empties = [x for x in self.objs if len(x) > 0]
367 if len(non_empties) > 0:
368 values = [x._values for x in non_empties]
369 else:
370 values = [x._values for x in self.objs]
371 new_data = _concat._concat_compat(values)
372
373 name = com._consensus_name_attr(self.objs)
374 cons = _concat._get_series_result_type(new_data)
375
376 return (cons(new_data, index=self.new_axes[0],
377 name=name, dtype=new_data.dtype)
378 .__finalize__(self, method='concat'))
379
380 # combine as columns in a frame
381 else:
382 data = dict(zip(range(len(self.objs)), self.objs))
383 cons = _concat._get_series_result_type(data)
384
385 index, columns = self.new_axes
386 df = cons(data, index=index)
387 df.columns = columns
388 return df.__finalize__(self, method='concat')
389
390 # combine block managers
391 else:
392 mgrs_indexers = []
393 for obj in self.objs:
394 mgr = obj._data
395 indexers = {}
396 for ax, new_labels in enumerate(self.new_axes):
397 if ax == self.axis:
398 # Suppress reindexing on concat axis
399 continue
400
401 obj_labels = mgr.axes[ax]
402 if not new_labels.equals(obj_labels):
403 indexers[ax] = obj_labels.reindex(new_labels)[1]
404
405 mgrs_indexers.append((obj._data, indexers))
406
407 new_data = concatenate_block_managers(
408 mgrs_indexers, self.new_axes, concat_axis=self.axis,
409 copy=self.copy)
410 if not self.copy:
411 new_data._consolidate_inplace()
412
413 cons = _concat._get_frame_result_type(new_data, self.objs)
414 return (cons._from_axes(new_data, self.new_axes)
415 .__finalize__(self, method='concat'))
416
417 def _get_result_dim(self):
418 if self._is_series and self.axis == 1:
419 return 2
420 else:
421 return self.objs[0].ndim
422
423 def _get_new_axes(self):
424 ndim = self._get_result_dim()
425 new_axes = [None] * ndim
426
427 if self.join_axes is None:
428 for i in range(ndim):
429 if i == self.axis:
430 continue
431 new_axes[i] = self._get_comb_axis(i)
432 else:
433 if len(self.join_axes) != ndim - 1:
434                 raise AssertionError("length of join_axes must be "
435 "equal to {0}".format(ndim - 1))
436
437 # ufff...
438 indices = compat.lrange(ndim)
439 indices.remove(self.axis)
440
441 for i, ax in zip(indices, self.join_axes):
442 new_axes[i] = ax
443
444 new_axes[self.axis] = self._get_concat_axis()
445 return new_axes
446
447 def _get_comb_axis(self, i):
448 if self._is_series:
449 all_indexes = [x.index for x in self.objs]
450 else:
451 try:
452 all_indexes = [x._data.axes[i] for x in self.objs]
453 except IndexError:
454 types = [type(x).__name__ for x in self.objs]
455 raise TypeError("Cannot concatenate list of %s" % types)
456
457 return _get_combined_index(all_indexes, intersect=self.intersect)
458
459 def _get_concat_axis(self):
460 """
461 Return index to be used along concatenation axis.
462 """
463 if self._is_series:
464 if self.axis == 0:
465 indexes = [x.index for x in self.objs]
466 elif self.ignore_index:
467 idx = com._default_index(len(self.objs))
468 return idx
469 elif self.keys is None:
470 names = [None] * len(self.objs)
471 num = 0
472 has_names = False
473 for i, x in enumerate(self.objs):
474 if not isinstance(x, Series):
475 raise TypeError("Cannot concatenate type 'Series' "
476 "with object of type "
477 "%r" % type(x).__name__)
478 if x.name is not None:
479 names[i] = x.name
480 has_names = True
481 else:
482 names[i] = num
483 num += 1
484 if has_names:
485 return Index(names)
486 else:
487 return com._default_index(len(self.objs))
488 else:
489 return _ensure_index(self.keys)
490 else:
491 indexes = [x._data.axes[self.axis] for x in self.objs]
492
493 if self.ignore_index:
494 idx = com._default_index(sum(len(i) for i in indexes))
495 return idx
496
497 if self.keys is None:
498 concat_axis = _concat_indexes(indexes)
499 else:
500 concat_axis = _make_concat_multiindex(indexes, self.keys,
501 self.levels, self.names)
502
503 self._maybe_check_integrity(concat_axis)
504
505 return concat_axis
506
507 def _maybe_check_integrity(self, concat_index):
508 if self.verify_integrity:
509 if not concat_index.is_unique:
510 overlap = concat_index.get_duplicates()
511 raise ValueError('Indexes have overlapping values: %s'
512 % str(overlap))
513
514
515 def _concat_indexes(indexes):
516 return indexes[0].append(indexes[1:])
517
518
519 def _make_concat_multiindex(indexes, keys, levels=None, names=None):
520
521 if ((levels is None and isinstance(keys[0], tuple)) or
522 (levels is not None and len(levels) > 1)):
523 zipped = compat.lzip(*keys)
524 if names is None:
525 names = [None] * len(zipped)
526
527 if levels is None:
528 _, levels = _factorize_from_iterables(zipped)
529 else:
530 levels = [_ensure_index(x) for x in levels]
531 else:
532 zipped = [keys]
533 if names is None:
534 names = [None]
535
536 if levels is None:
537 levels = [_ensure_index(keys)]
538 else:
539 levels = [_ensure_index(x) for x in levels]
540
541 if not _all_indexes_same(indexes):
542 label_list = []
543
544 # things are potentially different sizes, so compute the exact labels
545 # for each level and pass those to MultiIndex.from_arrays
546
547 for hlevel, level in zip(zipped, levels):
548 to_concat = []
549 for key, index in zip(hlevel, indexes):
550 try:
551 i = level.get_loc(key)
552 except KeyError:
553 raise ValueError('Key %s not in level %s'
554 % (str(key), str(level)))
555
556 to_concat.append(np.repeat(i, len(index)))
557 label_list.append(np.concatenate(to_concat))
558
559 concat_index = _concat_indexes(indexes)
560
561 # these go at the end
562 if isinstance(concat_index, MultiIndex):
563 levels.extend(concat_index.levels)
564 label_list.extend(concat_index.labels)
565 else:
566 codes, categories = _factorize_from_iterable(concat_index)
567 levels.append(categories)
568 label_list.append(codes)
569
570 if len(names) == len(levels):
571 names = list(names)
572 else:
573 # make sure that all of the passed indices have the same nlevels
574 if not len(set([idx.nlevels for idx in indexes])) == 1:
575 raise AssertionError("Cannot concat indices that do"
576 " not have the same number of levels")
577
578 # also copies
579 names = names + _get_consensus_names(indexes)
580
581 return MultiIndex(levels=levels, labels=label_list, names=names,
582 verify_integrity=False)
583
584 new_index = indexes[0]
585 n = len(new_index)
586 kpieces = len(indexes)
587
588 # also copies
589 new_names = list(names)
590 new_levels = list(levels)
591
592 # construct labels
593 new_labels = []
594
595 # do something a bit more speedy
596
597 for hlevel, level in zip(zipped, levels):
598 hlevel = _ensure_index(hlevel)
599 mapped = level.get_indexer(hlevel)
600
601 mask = mapped == -1
602 if mask.any():
603 raise ValueError('Values not found in passed level: %s'
604 % str(hlevel[mask]))
605
606 new_labels.append(np.repeat(mapped, n))
607
608 if isinstance(new_index, MultiIndex):
609 new_levels.extend(new_index.levels)
610 new_labels.extend([np.tile(lab, kpieces) for lab in new_index.labels])
611 else:
612 new_levels.append(new_index)
613 new_labels.append(np.tile(np.arange(n), kpieces))
614
615 if len(new_names) < len(new_levels):
616 new_names.extend(new_index.names)
617
618 return MultiIndex(levels=new_levels, labels=new_labels, names=new_names,
619 verify_integrity=False)
620
[end of pandas/core/reshape/concat.py]
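One part of `_Concatenator` that the docstring examples above do not exercise is the mixed-ndim branch, which promotes a lower-dimensional object (a `Series`) to a one-column frame with a generated column label. A small illustration of the behaviour that branch enables (output shown as an indicative comment):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
s = pd.Series([3, 4])  # unnamed, so it receives a generated column label

pd.concat([df, s], axis=1)
#    a  0
# 0  1  3
# 1  2  4
```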
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
18f929fa83d2d1f335f8ccf325c05a6ce314b94d
|
API: DataFrame.select_dtypes should accept scalar
```
In [164]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
In [165]: df.select_dtypes(include='object')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-165-04044faa1a5a> in <module>()
----> 1 df.select_dtypes(include='object')
~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py in select_dtypes(self, include, exclude)
2355 include, exclude = include or (), exclude or ()
2356 if not (is_list_like(include) and is_list_like(exclude)):
-> 2357 raise TypeError('include and exclude must both be non-string'
2358 ' sequences')
2359 selection = tuple(map(frozenset, (include, exclude)))
TypeError: include and exclude must both be non-string sequences
In [166]: df.select_dtypes(include=['object'])
Out[166]:
b
0 a
1 b
2 c
```
#### Problem description
Only a convenience thing, but basically anywhere else we take list-likes, we accept a single string and I think should do the same here.
`pandas 0.20.2`
|
+100 :) We should do the same for `exclude`
|
2017-07-08T14:52:02Z
|
<patch>
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -2229,7 +2229,3 @@ All numpy dtypes are subclasses of ``numpy.generic``:
Pandas also defines the types ``category``, and ``datetime64[ns, tz]``, which are not integrated into the normal
numpy hierarchy and wont show up with the above function.
-
-.. note::
-
- The ``include`` and ``exclude`` parameters must be non-string sequences.
diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb
--- a/doc/source/style.ipynb
+++ b/doc/source/style.ipynb
@@ -935,7 +935,7 @@
"\n",
"<span style=\"color: red\">*Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.*</span>\n",
"\n",
- "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` engine. CSS2.2 properties handled include:\n",
+ "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` engine. CSS2.2 properties handled include:\n",
"\n",
"- `background-color`\n",
"- `border-style`, `border-width`, `border-color` and their {`top`, `right`, `bottom`, `left` variants}\n",
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -39,6 +39,7 @@ Other Enhancements
- :func:`read_feather` has gained the ``nthreads`` parameter for multi-threaded operations (:issue:`16359`)
- :func:`DataFrame.clip()` and :func:`Series.clip()` have gained an ``inplace`` argument. (:issue:`15388`)
- :func:`crosstab` has gained a ``margins_name`` parameter to define the name of the row / column that will contain the totals when ``margins=True``. (:issue:`15972`)
+- :func:`Dataframe.select_dtypes` now accepts scalar values for include/exclude as well as list-like. (:issue:`16855`)
.. _whatsnew_0210.api_breaking:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2285,9 +2285,9 @@ def select_dtypes(self, include=None, exclude=None):
Parameters
----------
- include, exclude : list-like
- A list of dtypes or strings to be included/excluded. You must pass
- in a non-empty sequence for at least one of these.
+ include, exclude : scalar or list-like
+ A selection of dtypes or strings to be included/excluded. At least
+ one of these parameters must be supplied.
Raises
------
@@ -2295,8 +2295,6 @@ def select_dtypes(self, include=None, exclude=None):
* If both of ``include`` and ``exclude`` are empty
* If ``include`` and ``exclude`` have overlapping elements
* If any kind of string dtype is passed in.
- TypeError
- * If either of ``include`` or ``exclude`` is not a sequence
Returns
-------
@@ -2331,6 +2329,14 @@ def select_dtypes(self, include=None, exclude=None):
3 0.0764 False 2
4 -0.9703 True 1
5 -1.2094 False 2
+ >>> df.select_dtypes(include='bool')
+ c
+ 0 True
+ 1 False
+ 2 True
+ 3 False
+ 4 True
+ 5 False
>>> df.select_dtypes(include=['float64'])
c
0 1
@@ -2348,10 +2354,12 @@ def select_dtypes(self, include=None, exclude=None):
4 True
5 False
"""
- include, exclude = include or (), exclude or ()
- if not (is_list_like(include) and is_list_like(exclude)):
- raise TypeError('include and exclude must both be non-string'
- ' sequences')
+
+ if not is_list_like(include):
+ include = (include,) if include is not None else ()
+ if not is_list_like(exclude):
+ exclude = (exclude,) if exclude is not None else ()
+
selection = tuple(map(frozenset, (include, exclude)))
if not any(selection):
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-36981
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: warning when using colors 'CN'
- [x] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(10, 3), columns=["a", "b", "c"])
df.plot(color='C0')
plt.show()
```
#### Problem description
On master branch when executing the code above the following warning is raised.
```
workspaces/pandas/pandas/plotting/_matplotlib/style.py:64: MatplotlibDeprecationWarning: Support for uppercase single-letter colors is deprecated since Matplotlib 3.1 and will be removed in 3.3; please use lowercase instead.
[conv.to_rgba(c) for c in colors]
```
#### Expected Output
Expected no warnings.
Related issue: https://github.com/pandas-dev/pandas/issues/15516. The feature with "CN"-like colors was introduced in https://github.com/pandas-dev/pandas/pull/15873.
I figured out that the problem lies in the line
```
maybe_color_cycle = _maybe_valid_colors(list(colors))
```
So, if colors is "C0", then it is split into ["C", "0"].
I will fix it.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.3.final.0
python-bits : 32
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1251
pandas : 1.0.5
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 19.2.3
setuptools : 41.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.2.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.5.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
</details>
</issue>
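The core of the bug is easy to reproduce in isolation: passing a `'CN'` style string through `list()` splits it into single characters, so the validation sees `'C'` and `'0'` instead of `'C0'`. A small demonstration, plus the kind of guard that avoids the split (illustrative — not necessarily the actual fix that landed):

```python
import matplotlib.colors as mcolors

colors = "C0"
print(list(colors))                   # ['C', '0'] -- the accidental split
print(mcolors.is_color_like(colors))  # True: 'C0' is itself a valid color spec

# Guard: treat a string that is already a valid color as a one-element list
# instead of iterating over its characters.
if isinstance(colors, str) and mcolors.is_color_like(colors):
    colors = [colors]
print(colors)                         # ['C0']
```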
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/util/_print_versions.py]
1 import codecs
2 import json
3 import locale
4 import os
5 import platform
6 import struct
7 import sys
8 from typing import Dict, Optional, Union
9
10 from pandas._typing import JSONSerializable
11 from pandas.compat._optional import VERSIONS, _get_version, import_optional_dependency
12
13
14 def _get_commit_hash() -> Optional[str]:
15 """
16 Use vendored versioneer code to get git hash, which handles
17 git worktree correctly.
18 """
19 from pandas._version import get_versions
20
21 versions = get_versions()
22 return versions["full-revisionid"]
23
24
25 def _get_sys_info() -> Dict[str, JSONSerializable]:
26 """
27 Returns system information as a JSON serializable dictionary.
28 """
29 uname_result = platform.uname()
30 language_code, encoding = locale.getlocale()
31 return {
32 "commit": _get_commit_hash(),
33 "python": ".".join(str(i) for i in sys.version_info),
34 "python-bits": struct.calcsize("P") * 8,
35 "OS": uname_result.system,
36 "OS-release": uname_result.release,
37 "Version": uname_result.version,
38 "machine": uname_result.machine,
39 "processor": uname_result.processor,
40 "byteorder": sys.byteorder,
41 "LC_ALL": os.environ.get("LC_ALL"),
42 "LANG": os.environ.get("LANG"),
43 "LOCALE": {"language-code": language_code, "encoding": encoding},
44 }
45
46
47 def _get_dependency_info() -> Dict[str, JSONSerializable]:
48 """
49 Returns dependency information as a JSON serializable dictionary.
50 """
51 deps = [
52 "pandas",
53 # required
54 "numpy",
55 "pytz",
56 "dateutil",
57 # install / build,
58 "pip",
59 "setuptools",
60 "Cython",
61 # test
62 "pytest",
63 "hypothesis",
64 # docs
65 "sphinx",
66 # Other, need a min version
67 "blosc",
68 "feather",
69 "xlsxwriter",
70 "lxml.etree",
71 "html5lib",
72 "pymysql",
73 "psycopg2",
74 "jinja2",
75 # Other, not imported.
76 "IPython",
77 "pandas_datareader",
78 ]
79 deps.extend(list(VERSIONS))
80
81 result: Dict[str, JSONSerializable] = {}
82 for modname in deps:
83 mod = import_optional_dependency(
84 modname, raise_on_missing=False, on_version="ignore"
85 )
86 result[modname] = _get_version(mod) if mod else None
87 return result
88
89
90 def show_versions(as_json: Union[str, bool] = False) -> None:
91 """
92 Provide useful information, important for bug reports.
93
94 It comprises info about the host operating system, the pandas version,
95 and the versions of other installed related packages.
96
97 Parameters
98 ----------
99 as_json : str or bool, default False
100 * If False, outputs info in a human readable form to the console.
101 * If str, it will be considered as a path to a file.
102 Info will be written to that file in JSON format.
103 * If True, outputs info in JSON format to the console.
104 """
105 sys_info = _get_sys_info()
106 deps = _get_dependency_info()
107
108 if as_json:
109 j = dict(system=sys_info, dependencies=deps)
110
111 if as_json is True:
112 print(j)
113 else:
114 assert isinstance(as_json, str) # needed for mypy
115 with codecs.open(as_json, "wb", encoding="utf8") as f:
116 json.dump(j, f, indent=2)
117
118 else:
119 assert isinstance(sys_info["LOCALE"], dict) # needed for mypy
120 language_code = sys_info["LOCALE"]["language-code"]
121 encoding = sys_info["LOCALE"]["encoding"]
122 sys_info["LOCALE"] = f"{language_code}.{encoding}"
123
124 maxlen = max(len(x) for x in deps)
125 print("\nINSTALLED VERSIONS")
126 print("------------------")
127 for k, v in sys_info.items():
128 print(f"{k:<{maxlen}}: {v}")
129 print("")
130 for k, v in deps.items():
131 print(f"{k:<{maxlen}}: {v}")
132
133
134 def main() -> int:
135 from optparse import OptionParser
136
137 parser = OptionParser()
138 parser.add_option(
139 "-j",
140 "--json",
141 metavar="FILE",
142 nargs=1,
143 help="Save output as JSON into file, pass in '-' to output to stdout",
144 )
145
146 (options, args) = parser.parse_args()
147
148 if options.json == "-":
149 options.json = True
150
151 show_versions(as_json=options.json)
152
153 return 0
154
155
156 if __name__ == "__main__":
157 sys.exit(main())
158
[end of pandas/util/_print_versions.py]
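For reference, the three `as_json` modes documented in `show_versions` above can be exercised as follows (a usage sketch; the output file name is only illustrative):

```python
import pandas as pd

# Human-readable report printed to the console (this is how the
# "INSTALLED VERSIONS" block in the issue above was produced).
pd.show_versions()

# The same information as JSON printed to the console.
pd.show_versions(as_json=True)

# JSON written to a file; the path is only an example.
pd.show_versions(as_json="pandas_versions.json")
```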
[start of setup.py]
1 #!/usr/bin/env python3
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import argparse
10 from distutils.sysconfig import get_config_vars
11 from distutils.version import LooseVersion
12 import multiprocessing
13 import os
14 from os.path import join as pjoin
15 import platform
16 import shutil
17 import sys
18
19 import pkg_resources
20 from setuptools import Command, find_packages, setup
21
22 # versioning
23 import versioneer
24
25 cmdclass = versioneer.get_cmdclass()
26
27
28 def is_platform_windows():
29 return sys.platform == "win32" or sys.platform == "cygwin"
30
31
32 def is_platform_mac():
33 return sys.platform == "darwin"
34
35
36 min_numpy_ver = "1.16.5"
37 min_cython_ver = "0.29.21" # note: sync with pyproject.toml
38
39 try:
40 import Cython
41
42 _CYTHON_VERSION = Cython.__version__
43 from Cython.Build import cythonize
44
45 _CYTHON_INSTALLED = _CYTHON_VERSION >= LooseVersion(min_cython_ver)
46 except ImportError:
47 _CYTHON_VERSION = None
48 _CYTHON_INSTALLED = False
49 cythonize = lambda x, *args, **kwargs: x # dummy func
50
51 # The import of Extension must be after the import of Cython, otherwise
52 # we do not get the appropriately patched class.
53 # See https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html # noqa
54 from distutils.extension import Extension # isort:skip
55 from distutils.command.build import build # isort:skip
56
57 if _CYTHON_INSTALLED:
58 from Cython.Distutils.old_build_ext import old_build_ext as _build_ext
59
60 cython = True
61 from Cython import Tempita as tempita
62 else:
63 from distutils.command.build_ext import build_ext as _build_ext
64
65 cython = False
66
67
68 _pxi_dep_template = {
69 "algos": ["_libs/algos_common_helper.pxi.in", "_libs/algos_take_helper.pxi.in"],
70 "hashtable": [
71 "_libs/hashtable_class_helper.pxi.in",
72 "_libs/hashtable_func_helper.pxi.in",
73 ],
74 "index": ["_libs/index_class_helper.pxi.in"],
75 "sparse": ["_libs/sparse_op_helper.pxi.in"],
76 "interval": ["_libs/intervaltree.pxi.in"],
77 }
78
79 _pxifiles = []
80 _pxi_dep = {}
81 for module, files in _pxi_dep_template.items():
82 pxi_files = [pjoin("pandas", x) for x in files]
83 _pxifiles.extend(pxi_files)
84 _pxi_dep[module] = pxi_files
85
86
87 class build_ext(_build_ext):
88 @classmethod
89 def render_templates(cls, pxifiles):
90 for pxifile in pxifiles:
91 # build pxifiles first, template extension must be .pxi.in
92 assert pxifile.endswith(".pxi.in")
93 outfile = pxifile[:-3]
94
95 if (
96 os.path.exists(outfile)
97 and os.stat(pxifile).st_mtime < os.stat(outfile).st_mtime
98 ):
99 # if .pxi.in is not updated, no need to output .pxi
100 continue
101
102 with open(pxifile) as f:
103 tmpl = f.read()
104 pyxcontent = tempita.sub(tmpl)
105
106 with open(outfile, "w") as f:
107 f.write(pyxcontent)
108
109 def build_extensions(self):
110 # if building from c files, don't need to
111 # generate template output
112 if cython:
113 self.render_templates(_pxifiles)
114
115 super().build_extensions()
116
117
118 DESCRIPTION = "Powerful data structures for data analysis, time series, and statistics"
119 LONG_DESCRIPTION = """
120 **pandas** is a Python package that provides fast, flexible, and expressive data
121 structures designed to make working with structured (tabular, multidimensional,
122 potentially heterogeneous) and time series data both easy and intuitive. It
123 aims to be the fundamental high-level building block for doing practical,
124 **real world** data analysis in Python. Additionally, it has the broader goal
125 of becoming **the most powerful and flexible open source data analysis /
126 manipulation tool available in any language**. It is already well on its way
127 toward this goal.
128
129 pandas is well suited for many different kinds of data:
130
131 - Tabular data with heterogeneously-typed columns, as in an SQL table or
132 Excel spreadsheet
133 - Ordered and unordered (not necessarily fixed-frequency) time series data.
134 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
135 column labels
136 - Any other form of observational / statistical data sets. The data actually
137 need not be labeled at all to be placed into a pandas data structure
138
139 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
140 (2-dimensional), handle the vast majority of typical use cases in finance,
141 statistics, social science, and many areas of engineering. For R users,
142 DataFrame provides everything that R's ``data.frame`` provides and much
143 more. pandas is built on top of `NumPy <https://www.numpy.org>`__ and is
144 intended to integrate well within a scientific computing environment with many
145 other 3rd party libraries.
146
147 Here are just a few of the things that pandas does well:
148
149 - Easy handling of **missing data** (represented as NaN) in floating point as
150 well as non-floating point data
151 - Size mutability: columns can be **inserted and deleted** from DataFrame and
152 higher dimensional objects
153 - Automatic and explicit **data alignment**: objects can be explicitly
154 aligned to a set of labels, or the user can simply ignore the labels and
155 let `Series`, `DataFrame`, etc. automatically align the data for you in
156 computations
157 - Powerful, flexible **group by** functionality to perform
158 split-apply-combine operations on data sets, for both aggregating and
159 transforming data
160 - Make it **easy to convert** ragged, differently-indexed data in other
161 Python and NumPy data structures into DataFrame objects
162 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
163 of large data sets
164 - Intuitive **merging** and **joining** data sets
165 - Flexible **reshaping** and pivoting of data sets
166 - **Hierarchical** labeling of axes (possible to have multiple labels per
167 tick)
168 - Robust IO tools for loading data from **flat files** (CSV and delimited),
169 Excel files, databases, and saving / loading data from the ultrafast **HDF5
170 format**
171 - **Time series**-specific functionality: date range generation and frequency
172 conversion, moving window statistics, date shifting and lagging.
173
174 Many of these principles are here to address the shortcomings frequently
175 experienced using other languages / scientific research environments. For data
176 scientists, working with data is typically divided into multiple stages:
177 munging and cleaning data, analyzing / modeling it, then organizing the results
178 of the analysis into a form suitable for plotting or tabular display. pandas is
179 the ideal tool for all of these tasks.
180 """
181
182 DISTNAME = "pandas"
183 LICENSE = "BSD"
184 AUTHOR = "The PyData Development Team"
185 EMAIL = "[email protected]"
186 URL = "https://pandas.pydata.org"
187 DOWNLOAD_URL = ""
188 PROJECT_URLS = {
189 "Bug Tracker": "https://github.com/pandas-dev/pandas/issues",
190 "Documentation": "https://pandas.pydata.org/pandas-docs/stable/",
191 "Source Code": "https://github.com/pandas-dev/pandas",
192 }
193 CLASSIFIERS = [
194 "Development Status :: 5 - Production/Stable",
195 "Environment :: Console",
196 "Operating System :: OS Independent",
197 "Intended Audience :: Science/Research",
198 "Programming Language :: Python",
199 "Programming Language :: Python :: 3",
200 "Programming Language :: Python :: 3.7",
201 "Programming Language :: Python :: 3.8",
202 "Programming Language :: Python :: 3.9",
203 "Programming Language :: Cython",
204 "Topic :: Scientific/Engineering",
205 ]
206
207
208 class CleanCommand(Command):
209 """Custom distutils command to clean the .so and .pyc files."""
210
211 user_options = [("all", "a", "")]
212
213 def initialize_options(self):
214 self.all = True
215 self._clean_me = []
216 self._clean_trees = []
217
218 base = pjoin("pandas", "_libs", "src")
219 tsbase = pjoin("pandas", "_libs", "tslibs", "src")
220 dt = pjoin(tsbase, "datetime")
221 util = pjoin("pandas", "util")
222 parser = pjoin(base, "parser")
223 ujson_python = pjoin(base, "ujson", "python")
224 ujson_lib = pjoin(base, "ujson", "lib")
225 self._clean_exclude = [
226 pjoin(dt, "np_datetime.c"),
227 pjoin(dt, "np_datetime_strings.c"),
228 pjoin(parser, "tokenizer.c"),
229 pjoin(parser, "io.c"),
230 pjoin(ujson_python, "ujson.c"),
231 pjoin(ujson_python, "objToJSON.c"),
232 pjoin(ujson_python, "JSONtoObj.c"),
233 pjoin(ujson_python, "date_conversions.c"),
234 pjoin(ujson_lib, "ultrajsonenc.c"),
235 pjoin(ujson_lib, "ultrajsondec.c"),
236 pjoin(util, "move.c"),
237 ]
238
239 for root, dirs, files in os.walk("pandas"):
240 for f in files:
241 filepath = pjoin(root, f)
242 if filepath in self._clean_exclude:
243 continue
244
245 if os.path.splitext(f)[-1] in (
246 ".pyc",
247 ".so",
248 ".o",
249 ".pyo",
250 ".pyd",
251 ".c",
252 ".cpp",
253 ".orig",
254 ):
255 self._clean_me.append(filepath)
256 for d in dirs:
257 if d == "__pycache__":
258 self._clean_trees.append(pjoin(root, d))
259
260 # clean the generated pxi files
261 for pxifile in _pxifiles:
262 pxifile = pxifile.replace(".pxi.in", ".pxi")
263 self._clean_me.append(pxifile)
264
265 for d in ("build", "dist"):
266 if os.path.exists(d):
267 self._clean_trees.append(d)
268
269 def finalize_options(self):
270 pass
271
272 def run(self):
273 for clean_me in self._clean_me:
274 try:
275 os.unlink(clean_me)
276 except OSError:
277 pass
278 for clean_tree in self._clean_trees:
279 try:
280 shutil.rmtree(clean_tree)
281 except OSError:
282 pass
283
284
285 # we need to inherit from the versioneer
286 # class as it encodes the version info
287 sdist_class = cmdclass["sdist"]
288
289
290 class CheckSDist(sdist_class):
291 """Custom sdist that ensures Cython has compiled all pyx files to c."""
292
293 _pyxfiles = [
294 "pandas/_libs/lib.pyx",
295 "pandas/_libs/hashtable.pyx",
296 "pandas/_libs/tslib.pyx",
297 "pandas/_libs/index.pyx",
298 "pandas/_libs/internals.pyx",
299 "pandas/_libs/algos.pyx",
300 "pandas/_libs/join.pyx",
301 "pandas/_libs/indexing.pyx",
302 "pandas/_libs/interval.pyx",
303 "pandas/_libs/hashing.pyx",
304 "pandas/_libs/missing.pyx",
305 "pandas/_libs/reduction.pyx",
306 "pandas/_libs/testing.pyx",
307 "pandas/_libs/sparse.pyx",
308 "pandas/_libs/ops.pyx",
309 "pandas/_libs/parsers.pyx",
310 "pandas/_libs/tslibs/base.pyx",
311 "pandas/_libs/tslibs/ccalendar.pyx",
312 "pandas/_libs/tslibs/dtypes.pyx",
313 "pandas/_libs/tslibs/period.pyx",
314 "pandas/_libs/tslibs/strptime.pyx",
315 "pandas/_libs/tslibs/np_datetime.pyx",
316 "pandas/_libs/tslibs/timedeltas.pyx",
317 "pandas/_libs/tslibs/timestamps.pyx",
318 "pandas/_libs/tslibs/timezones.pyx",
319 "pandas/_libs/tslibs/conversion.pyx",
320 "pandas/_libs/tslibs/fields.pyx",
321 "pandas/_libs/tslibs/offsets.pyx",
322 "pandas/_libs/tslibs/parsing.pyx",
323 "pandas/_libs/tslibs/tzconversion.pyx",
324 "pandas/_libs/tslibs/vectorized.pyx",
325 "pandas/_libs/window/indexers.pyx",
326 "pandas/_libs/writers.pyx",
327 "pandas/io/sas/sas.pyx",
328 ]
329
330 _cpp_pyxfiles = [
331 "pandas/_libs/window/aggregations.pyx",
332 ]
333
334 def initialize_options(self):
335 sdist_class.initialize_options(self)
336
337 def run(self):
338 if "cython" in cmdclass:
339 self.run_command("cython")
340 else:
341 # If we are not running cython then
342 # compile the extensions correctly
343 pyx_files = [(self._pyxfiles, "c"), (self._cpp_pyxfiles, "cpp")]
344
345 for pyxfiles, extension in pyx_files:
346 for pyxfile in pyxfiles:
347 sourcefile = pyxfile[:-3] + extension
348 msg = (
349 f"{extension}-source file '{sourcefile}' not found.\n"
350 "Run 'setup.py cython' before sdist."
351 )
352 assert os.path.isfile(sourcefile), msg
353 sdist_class.run(self)
354
355
356 class CheckingBuildExt(build_ext):
357 """
358 Subclass build_ext to get clearer report if Cython is necessary.
359 """
360
361 def check_cython_extensions(self, extensions):
362 for ext in extensions:
363 for src in ext.sources:
364 if not os.path.exists(src):
365 print(f"{ext.name}: -> [{ext.sources}]")
366 raise Exception(
367 f"""Cython-generated file '{src}' not found.
368 Cython is required to compile pandas from a development branch.
369 Please install Cython or download a release package of pandas.
370 """
371 )
372
373 def build_extensions(self):
374 self.check_cython_extensions(self.extensions)
375 build_ext.build_extensions(self)
376
377
378 class CythonCommand(build_ext):
379 """
380 Custom distutils command subclassed from Cython.Distutils.build_ext
381 to compile pyx->c, and stop there. All this does is override the
382 C-compile method build_extension() with a no-op.
383 """
384
385 def build_extension(self, ext):
386 pass
387
388
389 class DummyBuildSrc(Command):
390 """numpy's build_src command interferes with Cython's build_ext."""
391
392 user_options = []
393
394 def initialize_options(self):
395 self.py_modules_dict = {}
396
397 def finalize_options(self):
398 pass
399
400 def run(self):
401 pass
402
403
404 cmdclass.update({"clean": CleanCommand, "build": build})
405 cmdclass["build_ext"] = CheckingBuildExt
406
407 if cython:
408 suffix = ".pyx"
409 cmdclass["cython"] = CythonCommand
410 else:
411 suffix = ".c"
412 cmdclass["build_src"] = DummyBuildSrc
413
414 # ----------------------------------------------------------------------
415 # Preparation of compiler arguments
416
417 debugging_symbols_requested = "--with-debugging-symbols" in sys.argv
418 if debugging_symbols_requested:
419 sys.argv.remove("--with-debugging-symbols")
420
421
422 if sys.byteorder == "big":
423 endian_macro = [("__BIG_ENDIAN__", "1")]
424 else:
425 endian_macro = [("__LITTLE_ENDIAN__", "1")]
426
427
428 if is_platform_windows():
429 extra_compile_args = []
430 extra_link_args = []
431 if debugging_symbols_requested:
432 extra_compile_args.append("/Z7")
433 extra_link_args.append("/DEBUG")
434 else:
435 extra_compile_args = ["-Werror"]
436 extra_link_args = []
437 if debugging_symbols_requested:
438 extra_compile_args.append("-g")
439
440 # Build for at least macOS 10.9 when compiling on a 10.9 system or above,
441 # overriding CPython distutils behaviour which is to target the version that
442 # python was built for. This may be overridden by setting
443 # MACOSX_DEPLOYMENT_TARGET before calling setup.py
444 if is_platform_mac():
445 if "MACOSX_DEPLOYMENT_TARGET" not in os.environ:
446 current_system = platform.mac_ver()[0]
447 python_target = get_config_vars().get(
448 "MACOSX_DEPLOYMENT_TARGET", current_system
449 )
450 if (
451 LooseVersion(python_target) < "10.9"
452 and LooseVersion(current_system) >= "10.9"
453 ):
454 os.environ["MACOSX_DEPLOYMENT_TARGET"] = "10.9"
455
456 if sys.version_info[:2] == (3, 8): # GH 33239
457 extra_compile_args.append("-Wno-error=deprecated-declarations")
458
459 # https://github.com/pandas-dev/pandas/issues/35559
460 extra_compile_args.append("-Wno-error=unreachable-code")
461
462 # enable coverage by building cython files by setting the environment variable
463 # "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext
464 # with `--with-cython-coverage` enabled
465 linetrace = os.environ.get("PANDAS_CYTHON_COVERAGE", False)
466 if "--with-cython-coverage" in sys.argv:
467 linetrace = True
468 sys.argv.remove("--with-cython-coverage")
469
470 # Note: if not using `cythonize`, coverage can be enabled by
471 # pinning `ext.cython_directives = directives` to each ext in extensions.
472 # github.com/cython/cython/wiki/enhancements-compilerdirectives#in-setuppy
473 directives = {"linetrace": False, "language_level": 3}
474 macros = []
475 if linetrace:
476 # https://pypkg.com/pypi/pytest-cython/f/tests/example-project/setup.py
477 directives["linetrace"] = True
478 macros = [("CYTHON_TRACE", "1"), ("CYTHON_TRACE_NOGIL", "1")]
479
480 # in numpy>=1.16.0, silence build warnings about deprecated API usage
481 # we can't do anything about these warnings because they stem from
482 # cython+numpy version mismatches.
483 macros.append(("NPY_NO_DEPRECATED_API", "0"))
484 if "-Werror" in extra_compile_args:
485 try:
486 import numpy as np
487 except ImportError:
488 pass
489 else:
490 if np.__version__ < LooseVersion("1.16.0"):
491 extra_compile_args.remove("-Werror")
492
493
494 # ----------------------------------------------------------------------
495 # Specification of Dependencies
496
497 # TODO: Need to check to see if e.g. `linetrace` has changed and possibly
498 # re-compile.
499 def maybe_cythonize(extensions, *args, **kwargs):
500 """
501 Render tempita templates before calling cythonize. This is skipped for
502
503 * clean
504 * sdist
505 """
506 if "clean" in sys.argv or "sdist" in sys.argv:
507 # See https://github.com/cython/cython/issues/1495
508 return extensions
509
510 elif not cython:
511 # GH#28836 raise a helpful error message
512 if _CYTHON_VERSION:
513 raise RuntimeError(
514 f"Cannot cythonize with old Cython version ({_CYTHON_VERSION} "
515 f"installed, needs {min_cython_ver})"
516 )
517 raise RuntimeError("Cannot cythonize without Cython installed.")
518
519 numpy_incl = pkg_resources.resource_filename("numpy", "core/include")
520 # TODO: Is this really necessary here?
521 for ext in extensions:
522 if hasattr(ext, "include_dirs") and numpy_incl not in ext.include_dirs:
523 ext.include_dirs.append(numpy_incl)
524
525 # reuse any parallel arguments provided for compilation to cythonize
526 parser = argparse.ArgumentParser()
527 parser.add_argument("-j", type=int)
528 parser.add_argument("--parallel", type=int)
529 parsed, _ = parser.parse_known_args()
530
531 nthreads = 0
532 if parsed.parallel:
533 nthreads = parsed.parallel
534 elif parsed.j:
535 nthreads = parsed.j
536
537 kwargs["nthreads"] = nthreads
538 build_ext.render_templates(_pxifiles)
539 return cythonize(extensions, *args, **kwargs)
540
541
542 def srcpath(name=None, suffix=".pyx", subdir="src"):
543 return pjoin("pandas", subdir, name + suffix)
544
545
546 lib_depends = ["pandas/_libs/src/parse_helper.h"]
547
548 klib_include = ["pandas/_libs/src/klib"]
549
550 tseries_depends = [
551 "pandas/_libs/tslibs/src/datetime/np_datetime.h",
552 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.h",
553 ]
554
555 ext_data = {
556 "_libs.algos": {
557 "pyxfile": "_libs/algos",
558 "include": klib_include,
559 "depends": _pxi_dep["algos"],
560 },
561 "_libs.groupby": {"pyxfile": "_libs/groupby"},
562 "_libs.hashing": {"pyxfile": "_libs/hashing", "depends": []},
563 "_libs.hashtable": {
564 "pyxfile": "_libs/hashtable",
565 "include": klib_include,
566 "depends": (["pandas/_libs/src/klib/khash_python.h"] + _pxi_dep["hashtable"]),
567 },
568 "_libs.index": {
569 "pyxfile": "_libs/index",
570 "include": klib_include,
571 "depends": _pxi_dep["index"],
572 },
573 "_libs.indexing": {"pyxfile": "_libs/indexing"},
574 "_libs.internals": {"pyxfile": "_libs/internals"},
575 "_libs.interval": {
576 "pyxfile": "_libs/interval",
577 "include": klib_include,
578 "depends": _pxi_dep["interval"],
579 },
580 "_libs.join": {"pyxfile": "_libs/join", "include": klib_include},
581 "_libs.lib": {
582 "pyxfile": "_libs/lib",
583 "depends": lib_depends + tseries_depends,
584 "include": klib_include, # due to tokenizer import
585 "sources": ["pandas/_libs/src/parser/tokenizer.c"],
586 },
587 "_libs.missing": {"pyxfile": "_libs/missing", "depends": tseries_depends},
588 "_libs.parsers": {
589 "pyxfile": "_libs/parsers",
590 "include": klib_include + ["pandas/_libs/src"],
591 "depends": [
592 "pandas/_libs/src/parser/tokenizer.h",
593 "pandas/_libs/src/parser/io.h",
594 ],
595 "sources": [
596 "pandas/_libs/src/parser/tokenizer.c",
597 "pandas/_libs/src/parser/io.c",
598 ],
599 },
600 "_libs.reduction": {"pyxfile": "_libs/reduction"},
601 "_libs.ops": {"pyxfile": "_libs/ops"},
602 "_libs.ops_dispatch": {"pyxfile": "_libs/ops_dispatch"},
603 "_libs.properties": {"pyxfile": "_libs/properties"},
604 "_libs.reshape": {"pyxfile": "_libs/reshape", "depends": []},
605 "_libs.sparse": {"pyxfile": "_libs/sparse", "depends": _pxi_dep["sparse"]},
606 "_libs.tslib": {"pyxfile": "_libs/tslib", "depends": tseries_depends},
607 "_libs.tslibs.base": {"pyxfile": "_libs/tslibs/base"},
608 "_libs.tslibs.ccalendar": {"pyxfile": "_libs/tslibs/ccalendar"},
609 "_libs.tslibs.dtypes": {"pyxfile": "_libs/tslibs/dtypes"},
610 "_libs.tslibs.conversion": {
611 "pyxfile": "_libs/tslibs/conversion",
612 "depends": tseries_depends,
613 "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"],
614 },
615 "_libs.tslibs.fields": {
616 "pyxfile": "_libs/tslibs/fields",
617 "depends": tseries_depends,
618 },
619 "_libs.tslibs.nattype": {"pyxfile": "_libs/tslibs/nattype"},
620 "_libs.tslibs.np_datetime": {
621 "pyxfile": "_libs/tslibs/np_datetime",
622 "depends": tseries_depends,
623 "sources": [
624 "pandas/_libs/tslibs/src/datetime/np_datetime.c",
625 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.c",
626 ],
627 },
628 "_libs.tslibs.offsets": {
629 "pyxfile": "_libs/tslibs/offsets",
630 "depends": tseries_depends,
631 },
632 "_libs.tslibs.parsing": {
633 "pyxfile": "_libs/tslibs/parsing",
634 "include": klib_include,
635 "depends": ["pandas/_libs/src/parser/tokenizer.h"],
636 "sources": ["pandas/_libs/src/parser/tokenizer.c"],
637 },
638 "_libs.tslibs.period": {
639 "pyxfile": "_libs/tslibs/period",
640 "depends": tseries_depends,
641 "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"],
642 },
643 "_libs.tslibs.strptime": {
644 "pyxfile": "_libs/tslibs/strptime",
645 "depends": tseries_depends,
646 },
647 "_libs.tslibs.timedeltas": {
648 "pyxfile": "_libs/tslibs/timedeltas",
649 "depends": tseries_depends,
650 },
651 "_libs.tslibs.timestamps": {
652 "pyxfile": "_libs/tslibs/timestamps",
653 "depends": tseries_depends,
654 },
655 "_libs.tslibs.timezones": {"pyxfile": "_libs/tslibs/timezones"},
656 "_libs.tslibs.tzconversion": {
657 "pyxfile": "_libs/tslibs/tzconversion",
658 "depends": tseries_depends,
659 },
660 "_libs.tslibs.vectorized": {"pyxfile": "_libs/tslibs/vectorized"},
661 "_libs.testing": {"pyxfile": "_libs/testing"},
662 "_libs.window.aggregations": {
663 "pyxfile": "_libs/window/aggregations",
664 "language": "c++",
665 "suffix": ".cpp",
666 "depends": ["pandas/_libs/src/skiplist.h"],
667 },
668 "_libs.window.indexers": {"pyxfile": "_libs/window/indexers"},
669 "_libs.writers": {"pyxfile": "_libs/writers"},
670 "io.sas._sas": {"pyxfile": "io/sas/sas"},
671 }
672
673 extensions = []
674
675 for name, data in ext_data.items():
676 source_suffix = suffix if suffix == ".pyx" else data.get("suffix", ".c")
677
678 sources = [srcpath(data["pyxfile"], suffix=source_suffix, subdir="")]
679
680 sources.extend(data.get("sources", []))
681
682 include = data.get("include")
683
684 obj = Extension(
685 f"pandas.{name}",
686 sources=sources,
687 depends=data.get("depends", []),
688 include_dirs=include,
689 language=data.get("language", "c"),
690 define_macros=data.get("macros", macros),
691 extra_compile_args=extra_compile_args,
692 extra_link_args=extra_link_args,
693 )
694
695 extensions.append(obj)
696
697 # ----------------------------------------------------------------------
698 # ujson
699
700 if suffix == ".pyx":
701 # undo dumb setuptools bug clobbering .pyx sources back to .c
702 for ext in extensions:
703 if ext.sources[0].endswith((".c", ".cpp")):
704 root, _ = os.path.splitext(ext.sources[0])
705 ext.sources[0] = root + suffix
706
707 ujson_ext = Extension(
708 "pandas._libs.json",
709 depends=[
710 "pandas/_libs/src/ujson/lib/ultrajson.h",
711 "pandas/_libs/src/ujson/python/date_conversions.h",
712 ],
713 sources=(
714 [
715 "pandas/_libs/src/ujson/python/ujson.c",
716 "pandas/_libs/src/ujson/python/objToJSON.c",
717 "pandas/_libs/src/ujson/python/date_conversions.c",
718 "pandas/_libs/src/ujson/python/JSONtoObj.c",
719 "pandas/_libs/src/ujson/lib/ultrajsonenc.c",
720 "pandas/_libs/src/ujson/lib/ultrajsondec.c",
721 ]
722 + [
723 "pandas/_libs/tslibs/src/datetime/np_datetime.c",
724 "pandas/_libs/tslibs/src/datetime/np_datetime_strings.c",
725 ]
726 ),
727 include_dirs=[
728 "pandas/_libs/src/ujson/python",
729 "pandas/_libs/src/ujson/lib",
730 "pandas/_libs/src/datetime",
731 ],
732 extra_compile_args=(["-D_GNU_SOURCE"] + extra_compile_args),
733 extra_link_args=extra_link_args,
734 define_macros=macros,
735 )
736
737
738 extensions.append(ujson_ext)
739
740 # ----------------------------------------------------------------------
741
742
743 def setup_package():
744 setuptools_kwargs = {
745 "install_requires": [
746 "python-dateutil >= 2.7.3",
747 "pytz >= 2017.3",
748 f"numpy >= {min_numpy_ver}",
749 ],
750 "setup_requires": [f"numpy >= {min_numpy_ver}"],
751 "zip_safe": False,
752 }
753
754 setup(
755 name=DISTNAME,
756 maintainer=AUTHOR,
757 version=versioneer.get_version(),
758 packages=find_packages(include=["pandas", "pandas.*"]),
759 package_data={"": ["templates/*", "_libs/**/*.dll"]},
760 ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
761 maintainer_email=EMAIL,
762 description=DESCRIPTION,
763 license=LICENSE,
764 cmdclass=cmdclass,
765 url=URL,
766 download_url=DOWNLOAD_URL,
767 project_urls=PROJECT_URLS,
768 long_description=LONG_DESCRIPTION,
769 classifiers=CLASSIFIERS,
770 platforms="any",
771 python_requires=">=3.7.1",
772 extras_require={
773 "test": [
774 # sync with setup.cfg minversion & install.rst
775 "pytest>=5.0.1",
776 "pytest-xdist",
777 "hypothesis>=3.58",
778 ]
779 },
780 entry_points={
781 "pandas_plotting_backends": ["matplotlib = pandas:plotting._matplotlib"]
782 },
783 **setuptools_kwargs,
784 )
785
786
787 if __name__ == "__main__":
788 # Freeze to support parallel compilation when using spawn instead of fork
789 multiprocessing.freeze_support()
790 setup_package()
791
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
136353f0d86450422d1866e57b0b909d3ee14e3a
|
BUG: warning when using colors 'CN'
- [x] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(10, 3), columns=["a", "b", "c"])
df.plot(color='C0')
plt.show()
```
#### Problem description
On the master branch, executing the code above raises the following warning.
```
workspaces/pandas/pandas/plotting/_matplotlib/style.py:64: MatplotlibDeprecationWarning: Support for uppercase single-letter colors is deprecated since Matplotlib 3.1 and will be removed in 3.3; please use lowercase instead.
[conv.to_rgba(c) for c in colors]
```
#### Expected Output
Expected no warnings.
Related issue: https://github.com/pandas-dev/pandas/issues/15516. The feature with "CN"-like colors was introduced in https://github.com/pandas-dev/pandas/pull/15873.
I figured out that the problem lies in the line
```
maybe_color_cycle = _maybe_valid_colors(list(colors))
```
So, if `colors` is `"C0"`, it is split into `["C", "0"]`, and validating the uppercase `"C"` on its own is what triggers the deprecation warning.
I will fix it.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.3.final.0
python-bits : 32
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1251
pandas : 1.0.5
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 19.2.3
setuptools : 41.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.2.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.5.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
</details>
|
2020-10-08T12:42:03Z
|
<patch>
diff --git a/pandas/plotting/_matplotlib/style.py b/pandas/plotting/_matplotlib/style.py
--- a/pandas/plotting/_matplotlib/style.py
+++ b/pandas/plotting/_matplotlib/style.py
@@ -56,29 +56,9 @@ def random_color(column):
else:
raise ValueError("color_type must be either 'default' or 'random'")
- if isinstance(colors, str):
- conv = matplotlib.colors.ColorConverter()
-
- def _maybe_valid_colors(colors):
- try:
- [conv.to_rgba(c) for c in colors]
- return True
- except ValueError:
- return False
-
- # check whether the string can be convertible to single color
- maybe_single_color = _maybe_valid_colors([colors])
- # check whether each character can be convertible to colors
- maybe_color_cycle = _maybe_valid_colors(list(colors))
- if maybe_single_color and maybe_color_cycle and len(colors) > 1:
- hex_color = [c["color"] for c in list(plt.rcParams["axes.prop_cycle"])]
- colors = [hex_color[int(colors[1])]]
- elif maybe_single_color:
- colors = [colors]
- else:
- # ``colors`` is regarded as color cycle.
- # mpl will raise error any of them is invalid
- pass
+ if isinstance(colors, str) and _is_single_color(colors):
+ # GH #36972
+ colors = [colors]
# Append more colors by cycling if there is not enough color.
# Extra colors will be ignored by matplotlib if there are more colors
@@ -94,3 +74,33 @@ def _maybe_valid_colors(colors):
colors += colors[:mod]
return colors
+
+
+def _is_single_color(color: str) -> bool:
+ """Check if ``color`` is a single color.
+
+ Examples of single colors:
+ - 'r'
+ - 'g'
+ - 'red'
+ - 'green'
+ - 'C3'
+
+ Parameters
+ ----------
+ color : string
+ Color string.
+
+ Returns
+ -------
+ bool
+ True if ``color`` looks like a valid color.
+ False otherwise.
+ """
+ conv = matplotlib.colors.ColorConverter()
+ try:
+ conv.to_rgba(color)
+ except ValueError:
+ return False
+ else:
+ return True
</patch>
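The design choice in the patch is to ask matplotlib whether the whole string is one valid color spec, rather than validating it character by character. A rough sketch (not taken from the repository's test suite) of how the expected "no warnings" behaviour can be checked once the patch is applied:

```python
import warnings

import matplotlib
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 3), columns=["a", "b", "c"])

with warnings.catch_warnings():
    # Escalate only the warning class quoted in the report, so unrelated
    # warnings emitted while plotting do not interfere.
    warnings.simplefilter("error", matplotlib.MatplotlibDeprecationWarning)
    df.plot(color="C0")  # passes silently with the fix applied
```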
|
[]
|
[]
| ||||
ipython__ipython-10338
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`??` is not coloring docstrings anymore.
Mike and I are on it.
</issue>
<code>
[start of README.rst]
1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master
2 :target: https://codecov.io/github/ipython/ipython?branch=master
3
4 .. image:: https://img.shields.io/pypi/dm/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://img.shields.io/pypi/v/IPython.svg
8 :target: https://pypi.python.org/pypi/ipython
9
10 .. image:: https://img.shields.io/travis/ipython/ipython.svg
11 :target: https://travis-ci.org/ipython/ipython
12
13
14 ===========================================
15 IPython: Productive Interactive Computing
16 ===========================================
17
18 Overview
19 ========
20
21 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io
22 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use and
23 contribute to the project.
24
25 Officially, IPython requires Python version 3.3 and above.
26 IPython 5.x is the last IPython version to support Python 2.7.
27
28 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*.
29 See the `Jupyter installation docs <http://jupyter.readthedocs.io/en/latest/install.html>`__
30 if you want to use these.
31
32
33
34
35 Development and Instant running
36 ===============================
37
38 You can find the latest version of the development documentation on `readthedocs
39 <http://ipython.readthedocs.io/en/latest/>`_.
40
41 You can run IPython from this directory without even installing it system-wide
42 by typing at the terminal::
43
44 $ python -m IPython
45
46 Or see the `development installation docs
47 <http://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_
48 for the latest revision on read the docs.
49
50 Documentation and installation instructions for older version of IPython can be
51 found on the `IPython website <http://ipython.org/documentation.html>`_
52
53
54
55 IPython requires Python version 3 or above
56 ==========================================
57
58 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or
59 3.2.
60
61 For a version compatible with Python 2.7, please install the 5.x LTS Long Term
62 Support version.
63
64 If you are encountering this error message you are likely trying to install or
65 use IPython from source. You need to checkout the remote 5.x branch. If you are
66 using git the following should work:
67
68 $ git fetch origin
69 $ git checkout -b 5.x origin/5.x
70
71 If you encounter this error message with a regular install of IPython, then you
72 likely need to update your package manager, for example if you are using `pip`
73 check the version of pip with
74
75 $ pip --version
76
77 You will need to update pip to version 8.2 or greater. If you are not using
78 pip, please inquire with the maintainers of the package for your package
79 manager.
80
81 For more information see one of our blog posts:
82
83 http://blog.jupyter.org/2016/07/08/ipython-5-0-released/
84
85 As well as the following Pull-Request for discussion:
86
87 https://github.com/ipython/ipython/pull/9900
88
[end of README.rst]
[start of IPython/core/oinspect.py]
1 # -*- coding: utf-8 -*-
2 """Tools for inspecting Python objects.
3
4 Uses syntax highlighting for presenting the various information elements.
5
6 Similar in spirit to the inspect module, but all calls take a name argument to
7 reference the name under which an object is being read.
8 """
9
10 # Copyright (c) IPython Development Team.
11 # Distributed under the terms of the Modified BSD License.
12
13 __all__ = ['Inspector','InspectColors']
14
15 # stdlib modules
16 import inspect
17 from inspect import signature
18 import linecache
19 import warnings
20 import os
21 from textwrap import dedent
22 import types
23 import io as stdlib_io
24 from itertools import zip_longest
25
26 # IPython's own
27 from IPython.core import page
28 from IPython.lib.pretty import pretty
29 from IPython.testing.skipdoctest import skip_doctest
30 from IPython.utils import PyColorize
31 from IPython.utils import openpy
32 from IPython.utils import py3compat
33 from IPython.utils.dir2 import safe_hasattr
34 from IPython.utils.path import compress_user
35 from IPython.utils.text import indent
36 from IPython.utils.wildcard import list_namespace
37 from IPython.utils.coloransi import TermColors, ColorScheme, ColorSchemeTable
38 from IPython.utils.py3compat import cast_unicode
39 from IPython.utils.colorable import Colorable
40 from IPython.utils.decorators import undoc
41
42 from pygments import highlight
43 from pygments.lexers import PythonLexer
44 from pygments.formatters import HtmlFormatter
45
46 def pylight(code):
47 return highlight(code, PythonLexer(), HtmlFormatter(noclasses=True))
48
49 # builtin docstrings to ignore
50 _func_call_docstring = types.FunctionType.__call__.__doc__
51 _object_init_docstring = object.__init__.__doc__
52 _builtin_type_docstrings = {
53 inspect.getdoc(t) for t in (types.ModuleType, types.MethodType,
54 types.FunctionType, property)
55 }
56
57 _builtin_func_type = type(all)
58 _builtin_meth_type = type(str.upper) # Bound methods have the same type as builtin functions
59 #****************************************************************************
60 # Builtin color schemes
61
62 Colors = TermColors # just a shorthand
63
64 InspectColors = PyColorize.ANSICodeColors
65
66 #****************************************************************************
67 # Auxiliary functions and objects
68
69 # See the messaging spec for the definition of all these fields. This list
70 # effectively defines the order of display
71 info_fields = ['type_name', 'base_class', 'string_form', 'namespace',
72 'length', 'file', 'definition', 'docstring', 'source',
73 'init_definition', 'class_docstring', 'init_docstring',
74 'call_def', 'call_docstring',
75 # These won't be printed but will be used to determine how to
76 # format the object
77 'ismagic', 'isalias', 'isclass', 'argspec', 'found', 'name'
78 ]
79
80
81 def object_info(**kw):
82 """Make an object info dict with all fields present."""
83 infodict = dict(zip_longest(info_fields, [None]))
84 infodict.update(kw)
85 return infodict
86
87
88 def get_encoding(obj):
89 """Get encoding for python source file defining obj
90
91 Returns None if obj is not defined in a sourcefile.
92 """
93 ofile = find_file(obj)
94 # run contents of file through pager starting at line where the object
95 # is defined, as long as the file isn't binary and is actually on the
96 # filesystem.
97 if ofile is None:
98 return None
99 elif ofile.endswith(('.so', '.dll', '.pyd')):
100 return None
101 elif not os.path.isfile(ofile):
102 return None
103 else:
104 # Print only text files, not extension binaries. Note that
105 # getsourcelines returns lineno with 1-offset and page() uses
106 # 0-offset, so we must adjust.
107 with stdlib_io.open(ofile, 'rb') as buffer: # Tweaked to use io.open for Python 2
108 encoding, lines = openpy.detect_encoding(buffer.readline)
109 return encoding
110
111 def getdoc(obj):
112 """Stable wrapper around inspect.getdoc.
113
114 This can't crash because of attribute problems.
115
116 It also attempts to call a getdoc() method on the given object. This
117 allows objects which provide their docstrings via non-standard mechanisms
118 (like Pyro proxies) to still be inspected by ipython's ? system.
119 """
120 # Allow objects to offer customized documentation via a getdoc method:
121 try:
122 ds = obj.getdoc()
123 except Exception:
124 pass
125 else:
126 # if we get extra info, we add it to the normal docstring.
127 if isinstance(ds, str):
128 return inspect.cleandoc(ds)
129 try:
130 docstr = inspect.getdoc(obj)
131 encoding = get_encoding(obj)
132 return py3compat.cast_unicode(docstr, encoding=encoding)
133 except Exception:
134 # Harden against an inspect failure, which can occur with
135 # extension modules.
136 raise
137 return None
138
139
140 def getsource(obj, oname=''):
141 """Wrapper around inspect.getsource.
142
143 This can be modified by other projects to provide customized source
144 extraction.
145
146 Parameters
147 ----------
148 obj : object
149 an object whose source code we will attempt to extract
150 oname : str
151 (optional) a name under which the object is known
152
153 Returns
154 -------
155 src : unicode or None
156
157 """
158
159 if isinstance(obj, property):
160 sources = []
161 for attrname in ['fget', 'fset', 'fdel']:
162 fn = getattr(obj, attrname)
163 if fn is not None:
164 encoding = get_encoding(fn)
165 oname_prefix = ('%s.' % oname) if oname else ''
166 sources.append(cast_unicode(
167 ''.join(('# ', oname_prefix, attrname)),
168 encoding=encoding))
169 if inspect.isfunction(fn):
170 sources.append(dedent(getsource(fn)))
171 else:
172 # Default str/repr only prints function name,
173 # pretty.pretty prints module name too.
174 sources.append(cast_unicode(
175 '%s%s = %s\n' % (
176 oname_prefix, attrname, pretty(fn)),
177 encoding=encoding))
178 if sources:
179 return '\n'.join(sources)
180 else:
181 return None
182
183 else:
184 # Get source for non-property objects.
185
186 obj = _get_wrapped(obj)
187
188 try:
189 src = inspect.getsource(obj)
190 except TypeError:
191 # The object itself provided no meaningful source, try looking for
192 # its class definition instead.
193 if hasattr(obj, '__class__'):
194 try:
195 src = inspect.getsource(obj.__class__)
196 except TypeError:
197 return None
198
199 encoding = get_encoding(obj)
200 return cast_unicode(src, encoding=encoding)
201
202
203 def is_simple_callable(obj):
204 """True if obj is a simple callable (function, method, or builtin function/method)"""
205 return (inspect.isfunction(obj) or inspect.ismethod(obj) or \
206 isinstance(obj, _builtin_func_type) or isinstance(obj, _builtin_meth_type))
207
208
209 def getargspec(obj):
210 """Wrapper around :func:`inspect.getfullargspec` on Python 3, and
211 :func:inspect.getargspec` on Python 2.
212
213 In addition to functions and methods, this can also handle objects with a
214 ``__call__`` attribute.
215 """
216 if safe_hasattr(obj, '__call__') and not is_simple_callable(obj):
217 obj = obj.__call__
218
219 return inspect.getfullargspec(obj)
220
221
222 def format_argspec(argspec):
223 """Format argspec, convenience wrapper around inspect's.
224
225 This takes a dict instead of ordered arguments and calls
226 inspect.format_argspec with the arguments in the necessary order.
227 """
228 return inspect.formatargspec(argspec['args'], argspec['varargs'],
229 argspec['varkw'], argspec['defaults'])
230
231 @undoc
232 def call_tip(oinfo, format_call=True):
233 """DEPRECATED. Extract call tip data from an oinfo dict.
234 """
235 warnings.warn('`call_tip` function is deprecated as of IPython 6.0 '
236 'and will be removed in future versions.', DeprecationWarning, stacklevel=2)
237 # Get call definition
238 argspec = oinfo.get('argspec')
239 if argspec is None:
240 call_line = None
241 else:
242 # Callable objects will have 'self' as their first argument, prune
243 # it out if it's there for clarity (since users do *not* pass an
244 # extra first argument explicitly).
245 try:
246 has_self = argspec['args'][0] == 'self'
247 except (KeyError, IndexError):
248 pass
249 else:
250 if has_self:
251 argspec['args'] = argspec['args'][1:]
252
253 call_line = oinfo['name']+format_argspec(argspec)
254
255 # Now get docstring.
256 # The priority is: call docstring, constructor docstring, main one.
257 doc = oinfo.get('call_docstring')
258 if doc is None:
259 doc = oinfo.get('init_docstring')
260 if doc is None:
261 doc = oinfo.get('docstring','')
262
263 return call_line, doc
264
265
266 def _get_wrapped(obj):
267 """Get the original object if wrapped in one or more @decorators
268
269 Some objects automatically construct similar objects on any unrecognised
270 attribute access (e.g. unittest.mock.call). To protect against infinite loops,
271 this will arbitrarily cut off after 100 levels of obj.__wrapped__
272 attribute access. --TK, Jan 2016
273 """
274 orig_obj = obj
275 i = 0
276 while safe_hasattr(obj, '__wrapped__'):
277 obj = obj.__wrapped__
278 i += 1
279 if i > 100:
280 # __wrapped__ is probably a lie, so return the thing we started with
281 return orig_obj
282 return obj
283
284 def find_file(obj):
285 """Find the absolute path to the file where an object was defined.
286
287 This is essentially a robust wrapper around `inspect.getabsfile`.
288
289 Returns None if no file can be found.
290
291 Parameters
292 ----------
293 obj : any Python object
294
295 Returns
296 -------
297 fname : str
298 The absolute path to the file where the object was defined.
299 """
300 obj = _get_wrapped(obj)
301
302 fname = None
303 try:
304 fname = inspect.getabsfile(obj)
305 except TypeError:
306 # For an instance, the file that matters is where its class was
307 # declared.
308 if hasattr(obj, '__class__'):
309 try:
310 fname = inspect.getabsfile(obj.__class__)
311 except TypeError:
312 # Can happen for builtins
313 pass
314 except:
315 pass
316 return cast_unicode(fname)
317
318
319 def find_source_lines(obj):
320 """Find the line number in a file where an object was defined.
321
322 This is essentially a robust wrapper around `inspect.getsourcelines`.
323
324 Returns None if no file can be found.
325
326 Parameters
327 ----------
328 obj : any Python object
329
330 Returns
331 -------
332 lineno : int
333 The line number where the object definition starts.
334 """
335 obj = _get_wrapped(obj)
336
337 try:
338 try:
339 lineno = inspect.getsourcelines(obj)[1]
340 except TypeError:
341 # For instances, try the class object like getsource() does
342 if hasattr(obj, '__class__'):
343 lineno = inspect.getsourcelines(obj.__class__)[1]
344 else:
345 lineno = None
346 except:
347 return None
348
349 return lineno
350
351 class Inspector(Colorable):
352
353 def __init__(self, color_table=InspectColors,
354 code_color_table=PyColorize.ANSICodeColors,
355 scheme='NoColor',
356 str_detail_level=0,
357 parent=None, config=None):
358 super(Inspector, self).__init__(parent=parent, config=config)
359 self.color_table = color_table
360 self.parser = PyColorize.Parser(out='str', parent=self, style=scheme)
361 self.format = self.parser.format
362 self.str_detail_level = str_detail_level
363 self.set_active_scheme(scheme)
364
365 def _getdef(self,obj,oname=''):
366 """Return the call signature for any callable object.
367
368 If any exception is generated, None is returned instead and the
369 exception is suppressed."""
370 try:
371 hdef = oname + str(signature(obj))
372 return cast_unicode(hdef)
373 except:
374 return None
375
376 def __head(self,h):
377 """Return a header string with proper colors."""
378 return '%s%s%s' % (self.color_table.active_colors.header,h,
379 self.color_table.active_colors.normal)
380
381 def set_active_scheme(self, scheme):
382 self.color_table.set_active_scheme(scheme)
383 self.parser.color_table.set_active_scheme(scheme)
384
385 def noinfo(self, msg, oname):
386 """Generic message when no information is found."""
387 print('No %s found' % msg, end=' ')
388 if oname:
389 print('for %s' % oname)
390 else:
391 print()
392
393 def pdef(self, obj, oname=''):
394 """Print the call signature for any callable object.
395
396 If the object is a class, print the constructor information."""
397
398 if not callable(obj):
399 print('Object is not callable.')
400 return
401
402 header = ''
403
404 if inspect.isclass(obj):
405 header = self.__head('Class constructor information:\n')
406
407
408 output = self._getdef(obj,oname)
409 if output is None:
410 self.noinfo('definition header',oname)
411 else:
412 print(header,self.format(output), end=' ')
413
414 # In Python 3, all classes are new-style, so they all have __init__.
415 @skip_doctest
416 def pdoc(self, obj, oname='', formatter=None):
417 """Print the docstring for any object.
418
419 Optional:
420 -formatter: a function to run the docstring through for specially
421 formatted docstrings.
422
423 Examples
424 --------
425
426 In [1]: class NoInit:
427 ...: pass
428
429 In [2]: class NoDoc:
430 ...: def __init__(self):
431 ...: pass
432
433 In [3]: %pdoc NoDoc
434 No documentation found for NoDoc
435
436 In [4]: %pdoc NoInit
437 No documentation found for NoInit
438
439 In [5]: obj = NoInit()
440
441 In [6]: %pdoc obj
442 No documentation found for obj
443
444 In [5]: obj2 = NoDoc()
445
446 In [6]: %pdoc obj2
447 No documentation found for obj2
448 """
449
450 head = self.__head # For convenience
451 lines = []
452 ds = getdoc(obj)
453 if formatter:
454 ds = formatter(ds).get('plain/text', ds)
455 if ds:
456 lines.append(head("Class docstring:"))
457 lines.append(indent(ds))
458 if inspect.isclass(obj) and hasattr(obj, '__init__'):
459 init_ds = getdoc(obj.__init__)
460 if init_ds is not None:
461 lines.append(head("Init docstring:"))
462 lines.append(indent(init_ds))
463 elif hasattr(obj,'__call__'):
464 call_ds = getdoc(obj.__call__)
465 if call_ds:
466 lines.append(head("Call docstring:"))
467 lines.append(indent(call_ds))
468
469 if not lines:
470 self.noinfo('documentation',oname)
471 else:
472 page.page('\n'.join(lines))
473
474 def psource(self, obj, oname=''):
475 """Print the source code for an object."""
476
477 # Flush the source cache because inspect can return out-of-date source
478 linecache.checkcache()
479 try:
480 src = getsource(obj, oname=oname)
481 except Exception:
482 src = None
483
484 if src is None:
485 self.noinfo('source', oname)
486 else:
487 page.page(self.format(src))
488
489 def pfile(self, obj, oname=''):
490 """Show the whole file where an object was defined."""
491
492 lineno = find_source_lines(obj)
493 if lineno is None:
494 self.noinfo('file', oname)
495 return
496
497 ofile = find_file(obj)
498 # run contents of file through pager starting at line where the object
499 # is defined, as long as the file isn't binary and is actually on the
500 # filesystem.
501 if ofile.endswith(('.so', '.dll', '.pyd')):
502 print('File %r is binary, not printing.' % ofile)
503 elif not os.path.isfile(ofile):
504 print('File %r does not exist, not printing.' % ofile)
505 else:
506 # Print only text files, not extension binaries. Note that
507 # getsourcelines returns lineno with 1-offset and page() uses
508 # 0-offset, so we must adjust.
509 page.page(self.format(openpy.read_py_file(ofile, skip_encoding_cookie=False)), lineno - 1)
510
511 def _format_fields(self, fields, title_width=0):
512 """Formats a list of fields for display.
513
514 Parameters
515 ----------
516 fields : list
517 A list of 2-tuples: (field_title, field_content)
518 title_width : int
519             How many characters to pad titles to. Defaults to the longest title.
520 """
521 out = []
522 header = self.__head
523 if title_width == 0:
524 title_width = max(len(title) + 2 for title, _ in fields)
525 for title, content in fields:
526 if len(content.splitlines()) > 1:
527 title = header(title + ':') + '\n'
528 else:
529 title = header((title + ':').ljust(title_width))
530 out.append(cast_unicode(title) + cast_unicode(content))
531 return "\n".join(out)
532
533 def _mime_format(self, text, formatter=None):
534 """Return a mime bundle representation of the input text.
535
536 - if `formatter` is None, the returned mime bundle has
537             a `text/plain` field with the input text, and
538             a `text/html` field with a `<pre>` tag containing the input text.
539
540 - if `formatter` is not None, it must be a callable transforming the
541 input text into a mime bundle. Default values for `text/plain` and
542 `text/html` representations are the ones described above.
543
544 Note:
545
546 Formatters returning strings are supported but this behavior is deprecated.
547
548 """
549 text = cast_unicode(text)
550 defaults = {
551 'text/plain': text,
552 'text/html': '<pre>' + text + '</pre>'
553 }
554
555 if formatter is None:
556 return defaults
557 else:
558 formatted = formatter(text)
559
560 if not isinstance(formatted, dict):
561 # Handle the deprecated behavior of a formatter returning
562 # a string instead of a mime bundle.
563 return {
564 'text/plain': formatted,
565 'text/html': '<pre>' + formatted + '</pre>'
566 }
567
568 else:
569 return dict(defaults, **formatted)
570
571
572 def format_mime(self, bundle):
573
574 text_plain = bundle['text/plain']
575
576 text = ''
577 heads, bodies = list(zip(*text_plain))
578 _len = max(len(h) for h in heads)
579
580 for head, body in zip(heads, bodies):
581 body = body.strip('\n')
582 delim = '\n' if '\n' in body else ' '
583 text += self.__head(head+':') + (_len - len(head))*' ' +delim + body +'\n'
584
585 bundle['text/plain'] = text
586 return bundle
587
588 def _get_info(self, obj, oname='', formatter=None, info=None, detail_level=0):
589 """Retrieve an info dict and format it."""
590
591 info = self._info(obj, oname=oname, info=info, detail_level=detail_level)
592
593 _mime = {
594 'text/plain': [],
595 'text/html': '',
596 }
597
598 def append_field(bundle, title, key, formatter=None):
599 field = info[key]
600 if field is not None:
601 formatted_field = self._mime_format(field, formatter)
602 bundle['text/plain'].append((title, formatted_field['text/plain']))
603 bundle['text/html'] += '<h1>' + title + '</h1>\n' + formatted_field['text/html'] + '\n'
604
605 def code_formatter(text):
606 return {
607 'text/plain': self.format(text),
608 'text/html': pylight(text)
609 }
610
611 if info['isalias']:
612 append_field(_mime, 'Repr', 'string_form')
613
614 elif info['ismagic']:
615 if detail_level > 0:
616 append_field(_mime, 'Source', 'source', code_formatter)
617 else:
618 append_field(_mime, 'Docstring', 'docstring', formatter)
619 append_field(_mime, 'File', 'file')
620
621 elif info['isclass'] or is_simple_callable(obj):
622 # Functions, methods, classes
623 append_field(_mime, 'Signature', 'definition', code_formatter)
624 append_field(_mime, 'Init signature', 'init_definition', code_formatter)
625 if detail_level > 0 and info['source']:
626 append_field(_mime, 'Source', 'source', code_formatter)
627 else:
628 append_field(_mime, 'Docstring', 'docstring', formatter)
629 append_field(_mime, 'Init docstring', 'init_docstring', formatter)
630
631 append_field(_mime, 'File', 'file')
632 append_field(_mime, 'Type', 'type_name')
633
634 else:
635 # General Python objects
636 append_field(_mime, 'Signature', 'definition', code_formatter)
637 append_field(_mime, 'Call signature', 'call_def', code_formatter)
638 append_field(_mime, 'Type', 'type_name')
639 append_field(_mime, 'String form', 'string_form')
640
641 # Namespace
642 if info['namespace'] != 'Interactive':
643 append_field(_mime, 'Namespace', 'namespace')
644
645 append_field(_mime, 'Length', 'length')
646 append_field(_mime, 'File', 'file')
647
648 # Source or docstring, depending on detail level and whether
649 # source found.
650 if detail_level > 0:
651 append_field(_mime, 'Source', 'source', code_formatter)
652 else:
653 append_field(_mime, 'Docstring', 'docstring', formatter)
654
655 append_field(_mime, 'Class docstring', 'class_docstring', formatter)
656 append_field(_mime, 'Init docstring', 'init_docstring', formatter)
657 append_field(_mime, 'Call docstring', 'call_docstring', formatter)
658
659
660 return self.format_mime(_mime)
661
662 def pinfo(self, obj, oname='', formatter=None, info=None, detail_level=0, enable_html_pager=True):
663 """Show detailed information about an object.
664
665 Optional arguments:
666
667 - oname: name of the variable pointing to the object.
668
669 - formatter: callable (optional)
670 A special formatter for docstrings.
671
672 The formatter is a callable that takes a string as an input
673 and returns either a formatted string or a mime type bundle
674         in the form of a dictionary.
675
676         Note that support for a custom formatter returning a string
677 instead of a mime type bundle is deprecated.
678
679 - info: a structure with some information fields which may have been
680 precomputed already.
681
682 - detail_level: if set to 1, more information is given.
683 """
684 info = self._get_info(obj, oname, formatter, info, detail_level)
685 if not enable_html_pager:
686 del info['text/html']
687 page.page(info)
688
689 def info(self, obj, oname='', formatter=None, info=None, detail_level=0):
690 """DEPRECATED. Compute a dict with detailed information about an object.
691 """
692 if formatter is not None:
693             warnings.warn('The `formatter` keyword argument to `Inspector.info` '
694                 'is deprecated as of IPython 5.0 and will have no effect.',
695 DeprecationWarning, stacklevel=2)
696 return self._info(obj, oname=oname, info=info, detail_level=detail_level)
697
698 def _info(self, obj, oname='', info=None, detail_level=0):
699 """Compute a dict with detailed information about an object.
700
701 Optional arguments:
702
703 - oname: name of the variable pointing to the object.
704
705 - info: a structure with some information fields which may have been
706 precomputed already.
707
708 - detail_level: if set to 1, more information is given.
709 """
710
711 obj_type = type(obj)
712
713 if info is None:
714 ismagic = 0
715 isalias = 0
716 ospace = ''
717 else:
718 ismagic = info.ismagic
719 isalias = info.isalias
720 ospace = info.namespace
721
722 # Get docstring, special-casing aliases:
723 if isalias:
724 if not callable(obj):
725 try:
726 ds = "Alias to the system command:\n %s" % obj[1]
727 except:
728 ds = "Alias: " + str(obj)
729 else:
730 ds = "Alias to " + str(obj)
731 if obj.__doc__:
732 ds += "\nDocstring:\n" + obj.__doc__
733 else:
734 ds = getdoc(obj)
735 if ds is None:
736 ds = '<no docstring>'
737
738 # store output in a dict, we initialize it here and fill it as we go
739 out = dict(name=oname, found=True, isalias=isalias, ismagic=ismagic)
740
741 string_max = 200 # max size of strings to show (snipped if longer)
742 shalf = int((string_max - 5) / 2)
743
744 if ismagic:
745 obj_type_name = 'Magic function'
746 elif isalias:
747 obj_type_name = 'System alias'
748 else:
749 obj_type_name = obj_type.__name__
750 out['type_name'] = obj_type_name
751
752 try:
753 bclass = obj.__class__
754 out['base_class'] = str(bclass)
755 except: pass
756
757 # String form, but snip if too long in ? form (full in ??)
758 if detail_level >= self.str_detail_level:
759 try:
760 ostr = str(obj)
761 str_head = 'string_form'
762 if not detail_level and len(ostr)>string_max:
763 ostr = ostr[:shalf] + ' <...> ' + ostr[-shalf:]
764 ostr = ("\n" + " " * len(str_head.expandtabs())).\
765 join(q.strip() for q in ostr.split("\n"))
766 out[str_head] = ostr
767 except:
768 pass
769
770 if ospace:
771 out['namespace'] = ospace
772
773 # Length (for strings and lists)
774 try:
775 out['length'] = str(len(obj))
776 except: pass
777
778 # Filename where object was defined
779 binary_file = False
780 fname = find_file(obj)
781 if fname is None:
782 # if anything goes wrong, we don't want to show source, so it's as
783 # if the file was binary
784 binary_file = True
785 else:
786 if fname.endswith(('.so', '.dll', '.pyd')):
787 binary_file = True
788 elif fname.endswith('<string>'):
789 fname = 'Dynamically generated function. No source code available.'
790 out['file'] = compress_user(fname)
791
792 # Original source code for a callable, class or property.
793 if detail_level:
794 # Flush the source cache because inspect can return out-of-date
795 # source
796 linecache.checkcache()
797 try:
798 if isinstance(obj, property) or not binary_file:
799 src = getsource(obj, oname)
800 if src is not None:
801 src = src.rstrip()
802 out['source'] = src
803
804 except Exception:
805 pass
806
807 # Add docstring only if no source is to be shown (avoid repetitions).
808 if ds and out.get('source', None) is None:
809 out['docstring'] = ds
810
811 # Constructor docstring for classes
812 if inspect.isclass(obj):
813 out['isclass'] = True
814
815 # get the init signature:
816 try:
817 init_def = self._getdef(obj, oname)
818 except AttributeError:
819 init_def = None
820
821 # get the __init__ docstring
822 try:
823 obj_init = obj.__init__
824 except AttributeError:
825 init_ds = None
826 else:
827 if init_def is None:
828 # Get signature from init if top-level sig failed.
829 # Can happen for built-in types (list, etc.).
830 try:
831 init_def = self._getdef(obj_init, oname)
832 except AttributeError:
833 pass
834 init_ds = getdoc(obj_init)
835 # Skip Python's auto-generated docstrings
836 if init_ds == _object_init_docstring:
837 init_ds = None
838
839 if init_def:
840 out['init_definition'] = init_def
841
842 if init_ds:
843 out['init_docstring'] = init_ds
844
845 # and class docstring for instances:
846 else:
847 # reconstruct the function definition and print it:
848 defln = self._getdef(obj, oname)
849 if defln:
850 out['definition'] = defln
851
852 # First, check whether the instance docstring is identical to the
853 # class one, and print it separately if they don't coincide. In
854 # most cases they will, but it's nice to print all the info for
855 # objects which use instance-customized docstrings.
856 if ds:
857 try:
858 cls = getattr(obj,'__class__')
859 except:
860 class_ds = None
861 else:
862 class_ds = getdoc(cls)
863 # Skip Python's auto-generated docstrings
864 if class_ds in _builtin_type_docstrings:
865 class_ds = None
866 if class_ds and ds != class_ds:
867 out['class_docstring'] = class_ds
868
869 # Next, try to show constructor docstrings
870 try:
871 init_ds = getdoc(obj.__init__)
872 # Skip Python's auto-generated docstrings
873 if init_ds == _object_init_docstring:
874 init_ds = None
875 except AttributeError:
876 init_ds = None
877 if init_ds:
878 out['init_docstring'] = init_ds
879
880 # Call form docstring for callable instances
881 if safe_hasattr(obj, '__call__') and not is_simple_callable(obj):
882 call_def = self._getdef(obj.__call__, oname)
883 if call_def and (call_def != out.get('definition')):
884 # it may never be the case that call def and definition differ,
885 # but don't include the same signature twice
886 out['call_def'] = call_def
887 call_ds = getdoc(obj.__call__)
888 # Skip Python's auto-generated docstrings
889 if call_ds == _func_call_docstring:
890 call_ds = None
891 if call_ds:
892 out['call_docstring'] = call_ds
893
894 # Compute the object's argspec as a callable. The key is to decide
895 # whether to pull it from the object itself, from its __init__ or
896 # from its __call__ method.
897
898 if inspect.isclass(obj):
899 # Old-style classes need not have an __init__
900 callable_obj = getattr(obj, "__init__", None)
901 elif callable(obj):
902 callable_obj = obj
903 else:
904 callable_obj = None
905
906 if callable_obj is not None:
907 try:
908 argspec = getargspec(callable_obj)
909 except (TypeError, AttributeError):
910 # For extensions/builtins we can't retrieve the argspec
911 pass
912 else:
913                 # named tuples' _asdict() method returns an OrderedDict, but
914                 # we want a plain dict for the argspec entry
915 out['argspec'] = argspec_dict = dict(argspec._asdict())
916 # We called this varkw before argspec became a named tuple.
917 # With getfullargspec it's also called varkw.
918 if 'varkw' not in argspec_dict:
919 argspec_dict['varkw'] = argspec_dict.pop('keywords')
920
921 return object_info(**out)
922
923 def psearch(self,pattern,ns_table,ns_search=[],
924 ignore_case=False,show_all=False):
925 """Search namespaces with wildcards for objects.
926
927 Arguments:
928
929 - pattern: string containing shell-like wildcards to use in namespace
930 searches and optionally a type specification to narrow the search to
931 objects of that type.
932
933 - ns_table: dict of name->namespaces for search.
934
935 Optional arguments:
936
937 - ns_search: list of namespace names to include in search.
938
939 - ignore_case(False): make the search case-insensitive.
940
941 - show_all(False): show all names, including those starting with
942 underscores.
943 """
944 #print 'ps pattern:<%r>' % pattern # dbg
945
946 # defaults
947 type_pattern = 'all'
948 filter = ''
949
950 cmds = pattern.split()
951 len_cmds = len(cmds)
952 if len_cmds == 1:
953 # Only filter pattern given
954 filter = cmds[0]
955 elif len_cmds == 2:
956 # Both filter and type specified
957 filter,type_pattern = cmds
958 else:
959 raise ValueError('invalid argument string for psearch: <%s>' %
960 pattern)
961
962 # filter search namespaces
963 for name in ns_search:
964 if name not in ns_table:
965 raise ValueError('invalid namespace <%s>. Valid names: %s' %
966 (name,ns_table.keys()))
967
968 #print 'type_pattern:',type_pattern # dbg
969 search_result, namespaces_seen = set(), set()
970 for ns_name in ns_search:
971 ns = ns_table[ns_name]
972 # Normally, locals and globals are the same, so we just check one.
973 if id(ns) in namespaces_seen:
974 continue
975 namespaces_seen.add(id(ns))
976 tmp_res = list_namespace(ns, type_pattern, filter,
977 ignore_case=ignore_case, show_all=show_all)
978 search_result.update(tmp_res)
979
980 page.page('\n'.join(sorted(search_result)))
981
[end of IPython/core/oinspect.py]
[start of IPython/utils/PyColorize.py]
1 # -*- coding: utf-8 -*-
2 """
3 Class and program to colorize python source code for ANSI terminals.
4
5 Based on an HTML code highlighter by Jurgen Hermann found at:
6 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52298
7
8 Modifications by Fernando Perez ([email protected]).
9
10 Information on the original HTML highlighter follows:
11
12 MoinMoin - Python Source Parser
13
14 Title: Colorize Python source using the built-in tokenizer
15
16 Submitter: Jurgen Hermann
17 Last Updated:2001/04/06
18
19 Version no:1.2
20
21 Description:
22
23 This code is part of MoinMoin (http://moin.sourceforge.net/) and converts
24 Python source code to HTML markup, rendering comments, keywords,
25 operators, numeric and string literals in different colors.
26
27 It shows how to use the built-in keyword, token and tokenize modules to
28 scan Python source code and re-emit it with no changes to its original
29 formatting (which is the hard part).
30 """
31
32 __all__ = ['ANSICodeColors','Parser']
33
34 _scheme_default = 'Linux'
35
36
37 # Imports
38 import keyword
39 import os
40 import sys
41 import token
42 import tokenize
43
44 generate_tokens = tokenize.generate_tokens
45
46 from IPython.utils.coloransi import TermColors, InputTermColors ,ColorScheme, ColorSchemeTable
47 from .colorable import Colorable
48 from io import StringIO
49
50 #############################################################################
51 ### Python Source Parser (does Highlighting)
52 #############################################################################
53
54 _KEYWORD = token.NT_OFFSET + 1
55 _TEXT = token.NT_OFFSET + 2
56
57 #****************************************************************************
58 # Builtin color schemes
59
60 Colors = TermColors # just a shorthand
61
62 # Build a few color schemes
63 NoColor = ColorScheme(
64 'NoColor',{
65 'header' : Colors.NoColor,
66 token.NUMBER : Colors.NoColor,
67 token.OP : Colors.NoColor,
68 token.STRING : Colors.NoColor,
69 tokenize.COMMENT : Colors.NoColor,
70 token.NAME : Colors.NoColor,
71 token.ERRORTOKEN : Colors.NoColor,
72
73 _KEYWORD : Colors.NoColor,
74 _TEXT : Colors.NoColor,
75
76 'in_prompt' : InputTermColors.NoColor, # Input prompt
77 'in_number' : InputTermColors.NoColor, # Input prompt number
78 'in_prompt2' : InputTermColors.NoColor, # Continuation prompt
79 'in_normal' : InputTermColors.NoColor, # color off (usu. Colors.Normal)
80
81 'out_prompt' : Colors.NoColor, # Output prompt
82 'out_number' : Colors.NoColor, # Output prompt number
83
84 'normal' : Colors.NoColor # color off (usu. Colors.Normal)
85 } )
86
87 LinuxColors = ColorScheme(
88 'Linux',{
89 'header' : Colors.LightRed,
90 token.NUMBER : Colors.LightCyan,
91 token.OP : Colors.Yellow,
92 token.STRING : Colors.LightBlue,
93 tokenize.COMMENT : Colors.LightRed,
94 token.NAME : Colors.Normal,
95 token.ERRORTOKEN : Colors.Red,
96
97 _KEYWORD : Colors.LightGreen,
98 _TEXT : Colors.Yellow,
99
100 'in_prompt' : InputTermColors.Green,
101 'in_number' : InputTermColors.LightGreen,
102 'in_prompt2' : InputTermColors.Green,
103 'in_normal' : InputTermColors.Normal, # color off (usu. Colors.Normal)
104
105 'out_prompt' : Colors.Red,
106 'out_number' : Colors.LightRed,
107
108 'normal' : Colors.Normal # color off (usu. Colors.Normal)
109 } )
110
111 NeutralColors = ColorScheme(
112 'Neutral',{
113 'header' : Colors.Red,
114 token.NUMBER : Colors.Cyan,
115 token.OP : Colors.Blue,
116 token.STRING : Colors.Blue,
117 tokenize.COMMENT : Colors.Red,
118 token.NAME : Colors.Normal,
119 token.ERRORTOKEN : Colors.Red,
120
121 _KEYWORD : Colors.Green,
122 _TEXT : Colors.Blue,
123
124 'in_prompt' : InputTermColors.Blue,
125 'in_number' : InputTermColors.LightBlue,
126 'in_prompt2' : InputTermColors.Blue,
127 'in_normal' : InputTermColors.Normal, # color off (usu. Colors.Normal)
128
129 'out_prompt' : Colors.Red,
130 'out_number' : Colors.LightRed,
131
132 'normal' : Colors.Normal # color off (usu. Colors.Normal)
133 } )
134
135 # Hack: the 'neutral' colours are not very visible on a dark background on
136 # Windows. Since Windows command prompts have a dark background by default, and
137 # relatively few users are likely to alter that, we will use the 'Linux' colours,
138 # designed for a dark background, as the default on Windows. Changing it here
139 # avoids affecting the prompt colours rendered by prompt_toolkit, where the
140 # neutral defaults do work OK.
141
142 if os.name == 'nt':
143 NeutralColors = LinuxColors.copy(name='Neutral')
144
145 LightBGColors = ColorScheme(
146 'LightBG',{
147 'header' : Colors.Red,
148 token.NUMBER : Colors.Cyan,
149 token.OP : Colors.Blue,
150 token.STRING : Colors.Blue,
151 tokenize.COMMENT : Colors.Red,
152 token.NAME : Colors.Normal,
153 token.ERRORTOKEN : Colors.Red,
154
155
156 _KEYWORD : Colors.Green,
157 _TEXT : Colors.Blue,
158
159 'in_prompt' : InputTermColors.Blue,
160 'in_number' : InputTermColors.LightBlue,
161 'in_prompt2' : InputTermColors.Blue,
162 'in_normal' : InputTermColors.Normal, # color off (usu. Colors.Normal)
163
164 'out_prompt' : Colors.Red,
165 'out_number' : Colors.LightRed,
166
167 'normal' : Colors.Normal # color off (usu. Colors.Normal)
168 } )
169
170 # Build table of color schemes (needed by the parser)
171 ANSICodeColors = ColorSchemeTable([NoColor,LinuxColors,LightBGColors, NeutralColors],
172 _scheme_default)
173
174 Undefined = object()
175
176 class Parser(Colorable):
177 """ Format colored Python source.
178 """
179
180 def __init__(self, color_table=None, out = sys.stdout, parent=None, style=None):
181 """ Create a parser with a specified color table and output channel.
182
183 Call format() to process code.
184 """
185
186 super(Parser, self).__init__(parent=parent)
187
188 self.color_table = color_table and color_table or ANSICodeColors
189 self.out = out
190 if not style:
191 self.style = self.default_style
192 else:
193 self.style = style
194
195
196 def format(self, raw, out=None, scheme=Undefined):
197 import warnings
198 if scheme is not Undefined:
199             warnings.warn('The `scheme` argument of IPython.utils.PyColorize:Parser.format is deprecated since IPython 6.0. '
200 'It will have no effect. Set the parser `style` directly.',
201 stacklevel=2)
202 return self.format2(raw, out)[0]
203
204 def format2(self, raw, out = None):
205 """ Parse and send the colored source.
206
207         If out is not specified, the default (given to the
208         constructor) is used.
209
210 out should be a file-type object. Optionally, out can be given as the
211 string 'str' and the parser will automatically return the output in a
212 string."""
213
214 string_output = 0
215 if out == 'str' or self.out == 'str' or \
216 isinstance(self.out,StringIO):
217 # XXX - I don't really like this state handling logic, but at this
218 # point I don't want to make major changes, so adding the
219 # isinstance() check is the simplest I can do to ensure correct
220 # behavior.
221 out_old = self.out
222 self.out = StringIO()
223 string_output = 1
224 elif out is not None:
225 self.out = out
226
227 # Fast return of the unmodified input for NoColor scheme
228 if self.style == 'NoColor':
229 error = False
230 self.out.write(raw)
231 if string_output:
232 return raw,error
233 else:
234 return None,error
235
236 # local shorthands
237 colors = self.color_table[self.style].colors
238 self.colors = colors # put in object so __call__ sees it
239
240 # Remove trailing whitespace and normalize tabs
241 self.raw = raw.expandtabs().rstrip()
242
243 # store line offsets in self.lines
244 self.lines = [0, 0]
245 pos = 0
246 raw_find = self.raw.find
247 lines_append = self.lines.append
248 while 1:
249 pos = raw_find('\n', pos) + 1
250 if not pos: break
251 lines_append(pos)
252 lines_append(len(self.raw))
253
254 # parse the source and write it
255 self.pos = 0
256 text = StringIO(self.raw)
257
258 error = False
259 try:
260 for atoken in generate_tokens(text.readline):
261 self(*atoken)
262 except tokenize.TokenError as ex:
263 msg = ex.args[0]
264 line = ex.args[1][0]
265 self.out.write("%s\n\n*** ERROR: %s%s%s\n" %
266 (colors[token.ERRORTOKEN],
267 msg, self.raw[self.lines[line]:],
268 colors.normal)
269 )
270 error = True
271 self.out.write(colors.normal+'\n')
272 if string_output:
273 output = self.out.getvalue()
274 self.out = out_old
275 return (output, error)
276 return (None, error)
277
278 def __call__(self, toktype, toktext, start_pos, end_pos, line):
279 """ Token handler, with syntax highlighting."""
280 (srow,scol) = start_pos
281 (erow,ecol) = end_pos
282 colors = self.colors
283 owrite = self.out.write
284
285 # line separator, so this works across platforms
286 linesep = os.linesep
287
288 # calculate new positions
289 oldpos = self.pos
290 newpos = self.lines[srow] + scol
291 self.pos = newpos + len(toktext)
292
293 # send the original whitespace, if needed
294 if newpos > oldpos:
295 owrite(self.raw[oldpos:newpos])
296
297 # skip indenting tokens
298 if toktype in [token.INDENT, token.DEDENT]:
299 self.pos = newpos
300 return
301
302 # map token type to a color group
303 if token.LPAR <= toktype <= token.OP:
304 toktype = token.OP
305 elif toktype == token.NAME and keyword.iskeyword(toktext):
306 toktype = _KEYWORD
307 color = colors.get(toktype, colors[_TEXT])
308
309 #print '<%s>' % toktext, # dbg
310
311 # Triple quoted strings must be handled carefully so that backtracking
312 # in pagers works correctly. We need color terminators on _each_ line.
313 if linesep in toktext:
314 toktext = toktext.replace(linesep, '%s%s%s' %
315 (colors.normal,linesep,color))
316
317 # send text
318 owrite('%s%s%s' % (color,toktext,colors.normal))
319
[end of IPython/utils/PyColorize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
f51c975b99c91b91f6f455acd8dc335509c078f2
|
?? is not coloring docstring anymore.
Mike and I are on it.
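For context, the shell currently builds its `Inspector` with scheme `'NoColor'` (see the patch below), which disables syntax highlighting entirely. Here is a small sketch of that effect using the `Parser` API from `IPython/utils/PyColorize.py` above; the `src` snippet is made up purely for illustration:

```python
# Illustrative sketch (not part of the fix): the same source rendered through
# PyColorize with a colored scheme vs. the 'NoColor' scheme.
from IPython.utils import PyColorize

src = "def f(x):\n    return x + 1\n"  # made-up snippet for demonstration

# 'Linux' wraps tokens in ANSI escape sequences...
colored = PyColorize.Parser(out='str', style='Linux').format(src)
# ...while 'NoColor' short-circuits in format2() and echoes the raw text back.
plain = PyColorize.Parser(out='str', style='NoColor').format(src)

print(repr(colored))  # contains ANSI escape codes
print(repr(plain))    # identical to src
```

The patch below makes the shell pass its own `colors` setting to the `Inspector` instead, and lets `set_active_scheme(None)` leave the already-active scheme untouched.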
|
2017-02-23T22:44:25Z
|
<patch>
diff --git a/IPython/core/interactiveshell.py b/IPython/core/interactiveshell.py
--- a/IPython/core/interactiveshell.py
+++ b/IPython/core/interactiveshell.py
@@ -624,7 +624,7 @@ def init_inspector(self):
# Object inspector
self.inspector = oinspect.Inspector(oinspect.InspectColors,
PyColorize.ANSICodeColors,
- 'NoColor',
+ self.colors,
self.object_info_string_level)
def init_io(self):
diff --git a/IPython/core/oinspect.py b/IPython/core/oinspect.py
--- a/IPython/core/oinspect.py
+++ b/IPython/core/oinspect.py
@@ -352,7 +352,7 @@ class Inspector(Colorable):
def __init__(self, color_table=InspectColors,
code_color_table=PyColorize.ANSICodeColors,
- scheme='NoColor',
+ scheme=None,
str_detail_level=0,
parent=None, config=None):
super(Inspector, self).__init__(parent=parent, config=config)
@@ -379,8 +379,9 @@ def __head(self,h):
self.color_table.active_colors.normal)
def set_active_scheme(self, scheme):
- self.color_table.set_active_scheme(scheme)
- self.parser.color_table.set_active_scheme(scheme)
+ if scheme is not None:
+ self.color_table.set_active_scheme(scheme)
+ self.parser.color_table.set_active_scheme(scheme)
def noinfo(self, msg, oname):
"""Generic message when no information is found."""
</patch>
|
[]
|
[]
| ||||
apache__airflow-19048
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The S3ToGCSOperator fails on templated `dest_gcs` URL
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Docker
**What happened**:
When passing a templatized `dest_gcs` argument to the `S3ToGCSOperator` operator, the DAG fails to import because the constructor attempts to test the validity of the URL before the template has been populated in `execute`.
The error is:
```
Broken DAG: [/opt/airflow/dags/bad_gs_dag.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1051, in gcs_object_is_directory
_, blob = _parse_gcs_url(bucket)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1063, in _parse_gcs_url
raise AirflowException('Please provide a bucket name')
airflow.exceptions.AirflowException: Please provide a bucket name
```
**What you expected to happen**:
The DAG should successfully parse when using a templatized `dest_gcs` value.
**How to reproduce it**:
Instantiating an `S3ToGCSOperator` task with `dest_gcs="{{ var.gcs_url }}"` fails.
<details>
```python
from airflow.decorators import dag
from airflow.utils.dates import days_ago
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator
@dag(
schedule_interval=None,
description="Demo S3-to-GS Bug",
catchup=False,
start_date=days_ago(1),
)
def demo_bug():
S3ToGCSOperator(
task_id="transfer_task",
bucket="example_bucket",
prefix="fake/prefix",
dest_gcs="{{ var.gcs_url }}",
)
demo_dag = demo_bug()
```
</details>
**Anything else we need to know**:
Should be fixable by moving the code that evaluates whether the URL is a folder to `execute()`.
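
A minimal sketch of that approach (hypothetical, not an actual committed fix; it reuses `gcs_object_is_directory` and `AirflowException`, which the operator module already imports): drop the check from `__init__` and validate the rendered value at the start of `execute()`.

```python
# Hypothetical sketch only: validate dest_gcs after Jinja templates
# (e.g. "{{ var.gcs_url }}") have been rendered into self.dest_gcs.
def execute(self, context):
    if self.dest_gcs and not gcs_object_is_directory(self.dest_gcs):
        self.log.info(
            'Destination Google Cloud Storage path is not a valid '
            '"directory", define a path that ends with a slash "/" or '
            'leave it empty for the root of the bucket.'
        )
        raise AirflowException(
            'The destination Google Cloud Storage path must end with a slash "/" or be empty.'
        )
    # ... the existing S3 listing / upload logic would follow unchanged ...
```

With a change along these lines the DAG above parses cleanly, and an invalid destination still fails, just at run time rather than at import time.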
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://codecov.io/github/apache/airflow?branch=main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34
35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
36
37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
38
39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
40
41 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
43 **Table of contents**
44
45 - [Project Focus](#project-focus)
46 - [Principles](#principles)
47 - [Requirements](#requirements)
48 - [Getting started](#getting-started)
49 - [Installing from PyPI](#installing-from-pypi)
50 - [Official source code](#official-source-code)
51 - [Convenience packages](#convenience-packages)
52 - [User Interface](#user-interface)
53 - [Semantic versioning](#semantic-versioning)
54 - [Version Life Cycle](#version-life-cycle)
55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
56 - [Contributing](#contributing)
57 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
60 - [Airflow merchandise](#airflow-merchandise)
61 - [Links](#links)
62 - [Sponsors](#sponsors)
63
64 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
65
66 ## Project Focus
67
68 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
69
70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
71
72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
73
74 ## Principles
75
76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
77 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
80
81 ## Requirements
82
83 Apache Airflow is tested with:
84
85 | | Main version (dev) | Stable version (2.2.0) |
86 | -------------------- | ------------------------- | ------------------------ |
87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 |
88 | Kubernetes | 1.18, 1.19, 1.20 | 1.18, 1.19, 1.20 |
89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 |
90 | MySQL | 5.7, 8 | 5.7, 8 |
91 | SQLite | 3.15.0+ | 3.15.0+ |
92 | MSSQL(Experimental) | 2017, 2019 | |
93
94 **Note**: MySQL 5.x versions cannot run, or have limitations with running,
95 multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
96 MariaDB is not tested/recommended.
97
98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
99 using the latest stable version of SQLite for local development.
100
101 **Note**: Python v3.10 is not supported yet. For details, see [#19059](https://github.com/apache/airflow/issues/19059).
102
103 ## Getting started
104
105 Visit the official Airflow website documentation (latest **stable** release) for help with
106 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
107 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking
108 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
109
110 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
111
112 For more information on Airflow Improvement Proposals (AIPs), visit
113 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals).
114
115 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/).
116
117 ## Installing from PyPI
118
119 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it, however, can sometimes be tricky
120 because Airflow is a bit of both a library and an application. Libraries usually keep their dependencies open, and
121 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
122 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
123 if needed. This means that `pip install apache-airflow` will not work from time to time or will
124 produce an unusable Airflow installation.
125
126 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint
127 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
128 constraints files separately per major/minor Python version.
129 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
130 correct Airflow tag/version/branch and Python versions in the URL.
131
132
133 1. Installing just Airflow:
134
135 > Note: Only `pip` installation is currently officially supported.
136
137 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
138 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
139 `pip` - especially when it comes to constraint vs. requirements management.
140 Installing via `Poetry` or `pip-tools` is not currently supported.
141
142 If you wish to install Airflow using those tools, you should use the constraint files and convert
143 them to the appropriate format and workflow that your tool requires.
144
145
146 ```bash
147 pip install 'apache-airflow==2.2.0' \
148 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt"
149 ```
150
151 2. Installing with extras (i.e., postgres, google)
152
153 ```bash
154 pip install 'apache-airflow[postgres,google]==2.2.0' \
155 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt"
156 ```
157
158 For information on installing provider packages, check
159 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
160
161 ## Official source code
162
163 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
164 and our official source code releases:
165
166 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
167 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
168 - Are cryptographically signed by the release manager
169 - Are officially voted on by the PMC members during the
170 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
171
172 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
173 release provided they have access to the appropriate platform and tools.
174
175 ## Convenience packages
176
177 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
178 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users
179 who do not want to build the software themselves.
180
181 Those are - in the order of most common ways people install Airflow:
182
183 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
184 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
185 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
186 read more about using, customising, and extending the images in the
187 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
188 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
189 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
190 were used to generate official source packages via git
191
192 All those artifacts are not official releases, but they are prepared using officially released sources.
193 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
194 following the ASF Policy.
195
196 ## User Interface
197
198 - **DAGs**: Overview of all DAGs in your environment.
199
200 
201
202 - **Tree**: Tree representation of a DAG that spans across time.
203
204 
205
206 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
207
208 
209
210 - **Task Duration**: Total time spent on different tasks over time.
211
212 
213
214 - **Gantt**: Duration and overlap of a DAG.
215
216 
217
218 - **Code**: Quick way to view source code of a DAG.
219
220 
221
222 ## Semantic versioning
223
224 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
225
226 There are few specific rules that we agreed to that define details of versioning of the different
227 packages:
228
229 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
230 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
231 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
232 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
233 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
234 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
235 they are present in providers as `install_requires` limitations. We aim to keep backwards
236 compatibility of providers with all previously released Airflow 2 versions but
237 there will sometimes be breaking changes that might make some, or all
238 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version
239 is a breaking change for provider because installing the new provider might automatically
240 upgrade Airflow (which might be an undesired side effect of upgrading provider).
241 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
242 versions for the chart are independent from the Airflow version. We aim to keep backwards
243 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
244 only work starting from specific Airflow releases. We might however limit the Helm
245 Chart to depend on minimal Airflow version.
246 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
247 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
248 all clients. The clients then can release their own PATCH releases with bugfixes,
249 independently of Airflow PATCH releases.
250
251 ## Version Life Cycle
252
253 Apache Airflow version life cycle:
254
255 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
256 |---------|---------------------|-----------|---------------|-----------------|----------------|
257 | 2 | 2.2.0 | Supported | Dec 17, 2020 | TBD | TBD |
258 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
259 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
260 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
261 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
262
263 Limited support versions will be supported with security and critical bug fix only.
264 EOL versions will not get any fixes nor support.
265 We always recommend that all users run the latest available minor release for whatever major version is in use.
266 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
267
268 ## Support for Python and Kubernetes versions
269
270 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
271 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
272 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
273 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
274
275 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those
276 EOL versions in main right after EOL date, and it is effectively removed when we release the
277 first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow
278 For example, for Python 3.6 it means that we drop support in main right after 23.12.2021, and the first
279 MAJOR or MINOR version of Airflow released after will not have it.
280
281 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to
282 later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using this
283 default version and the default reference image available. Currently `apache/airflow:latest`
284 and `apache/airflow:2.2.0` images are Python 3.7 images as we are preparing for 23.12.2021, when
285 Python 3.6 reaches end of life.
286
287 3. We support a new version of Python/Kubernetes in main after they are officially released, as soon as we
288 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with
289 new versions of Python mostly) we release new images/support in Airflow based on the working CI setup.
290
291 ### Additional notes on Python version requirements
292
293 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3
294 when using Python 3.
295
296 ## Contributing
297
298 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
299
300 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
301
302 ## Who uses Apache Airflow?
303
304 More than 400 organizations are using Apache Airflow
305 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
306
307 ## Who Maintains Apache Airflow?
308
309 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
310 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
311 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
312 If you would like to become a maintainer, please review the Apache Airflow
313 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
314
315 ## Can I use the Apache Airflow logo in my presentation?
316
317 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
318
319 ## Airflow merchandise
320
321 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out
322 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
323
324 ## Links
325
326 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
327 - [Chat](https://s.apache.org/airflow-slack)
328
329 ## Sponsors
330
331 The CI infrastructure for Apache Airflow has been sponsored by:
332
333 <!-- Ordered by most recently "funded" -->
334
335 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
336 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
337
[end of README.md]
[start of airflow/providers/google/cloud/transfers/s3_to_gcs.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 import warnings
19 from tempfile import NamedTemporaryFile
20 from typing import Iterable, Optional, Sequence, Union
21
22 from airflow.exceptions import AirflowException
23 from airflow.providers.amazon.aws.hooks.s3 import S3Hook
24 from airflow.providers.amazon.aws.operators.s3_list import S3ListOperator
25 from airflow.providers.google.cloud.hooks.gcs import GCSHook, _parse_gcs_url, gcs_object_is_directory
26
27
28 class S3ToGCSOperator(S3ListOperator):
29 """
30 Synchronizes an S3 key, possibly a prefix, with a Google Cloud Storage
31 destination path.
32
33 .. seealso::
34 For more information on how to use this operator, take a look at the guide:
35 :ref:`howto/operator:S3ToGCSOperator`
36
37 :param bucket: The S3 bucket where to find the objects. (templated)
38 :type bucket: str
39     :param prefix: Prefix string which filters objects whose names begin with
40 such prefix. (templated)
41 :type prefix: str
42 :param delimiter: the delimiter marks key hierarchy. (templated)
43 :type delimiter: str
44 :param aws_conn_id: The source S3 connection
45 :type aws_conn_id: str
46 :param verify: Whether or not to verify SSL certificates for S3 connection.
47 By default SSL certificates are verified.
48 You can provide the following values:
49
50 - ``False``: do not validate SSL certificates. SSL will still be used
51 (unless use_ssl is False), but SSL certificates will not be
52 verified.
53         - ``path/to/cert/bundle.pem``: A filename of the CA cert bundle to use.
54 You can specify this argument if you want to use a different
55 CA cert bundle than the one used by botocore.
56 :type verify: bool or str
57 :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
58 :type gcp_conn_id: str
59 :param dest_gcs_conn_id: (Deprecated) The connection ID used to connect to Google Cloud.
60 This parameter has been deprecated. You should pass the gcp_conn_id parameter instead.
61 :type dest_gcs_conn_id: str
62 :param dest_gcs: The destination Google Cloud Storage bucket and prefix
63 where you want to store the files. (templated)
64 :type dest_gcs: str
65 :param delegate_to: Google account to impersonate using domain-wide delegation of authority,
66 if any. For this to work, the service account making the request must have
67 domain-wide delegation enabled.
68 :type delegate_to: str
69 :param replace: Whether you want to replace existing destination files
70 or not.
71 :type replace: bool
72 :param gzip: Option to compress file for upload
73 :type gzip: bool
74 :param google_impersonation_chain: Optional Google service account to impersonate using
75 short-term credentials, or chained list of accounts required to get the access_token
76 of the last account in the list, which will be impersonated in the request.
77 If set as a string, the account must grant the originating account
78 the Service Account Token Creator IAM role.
79 If set as a sequence, the identities from the list must grant
80 Service Account Token Creator IAM role to the directly preceding identity, with first
81 account from the list granting this role to the originating account (templated).
82 :type google_impersonation_chain: Union[str, Sequence[str]]
83
84
85 **Example**:
86
87 .. code-block:: python
88
89 s3_to_gcs_op = S3ToGCSOperator(
90 task_id="s3_to_gcs_example",
91 bucket="my-s3-bucket",
92 prefix="data/customers-201804",
93 dest_gcs_conn_id="google_cloud_default",
94 dest_gcs="gs://my.gcs.bucket/some/customers/",
95 replace=False,
96 gzip=True,
97             dag=my_dag,
98 )
99
100 Note that ``bucket``, ``prefix``, ``delimiter`` and ``dest_gcs`` are
101 templated, so you can use variables in them if you wish.
102 """
103
104 template_fields: Iterable[str] = (
105 'bucket',
106 'prefix',
107 'delimiter',
108 'dest_gcs',
109 'google_impersonation_chain',
110 )
111 ui_color = '#e09411'
112
113 def __init__(
114 self,
115 *,
116 bucket,
117 prefix='',
118 delimiter='',
119 aws_conn_id='aws_default',
120 verify=None,
121 gcp_conn_id='google_cloud_default',
122 dest_gcs_conn_id=None,
123 dest_gcs=None,
124 delegate_to=None,
125 replace=False,
126 gzip=False,
127 google_impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
128 **kwargs,
129 ):
130
131 super().__init__(bucket=bucket, prefix=prefix, delimiter=delimiter, aws_conn_id=aws_conn_id, **kwargs)
132
133 if dest_gcs_conn_id:
134 warnings.warn(
135 "The dest_gcs_conn_id parameter has been deprecated. You should pass "
136 "the gcp_conn_id parameter.",
137 DeprecationWarning,
138 stacklevel=3,
139 )
140 gcp_conn_id = dest_gcs_conn_id
141
142 self.gcp_conn_id = gcp_conn_id
143 self.dest_gcs = dest_gcs
144 self.delegate_to = delegate_to
145 self.replace = replace
146 self.verify = verify
147 self.gzip = gzip
148 self.google_impersonation_chain = google_impersonation_chain
149
150 if dest_gcs and not gcs_object_is_directory(self.dest_gcs):
151 self.log.info(
152 'Destination Google Cloud Storage path is not a valid '
153 '"directory", define a path that ends with a slash "/" or '
154 'leave it empty for the root of the bucket.'
155 )
156 raise AirflowException(
157 'The destination Google Cloud Storage path must end with a slash "/" or be empty.'
158 )
159
160 def execute(self, context):
161 # use the super method to list all the files in an S3 bucket/key
162 files = super().execute(context)
163
164 gcs_hook = GCSHook(
165 gcp_conn_id=self.gcp_conn_id,
166 delegate_to=self.delegate_to,
167 impersonation_chain=self.google_impersonation_chain,
168 )
169
170 if not self.replace:
171 # if we are not replacing -> list all files in the GCS bucket
172 # and only keep those files which are present in
173 # S3 and not in Google Cloud Storage
174 bucket_name, object_prefix = _parse_gcs_url(self.dest_gcs)
175 existing_files_prefixed = gcs_hook.list(bucket_name, prefix=object_prefix)
176
177 existing_files = []
178
179 if existing_files_prefixed:
180 # Remove the object prefix itself, an empty directory was found
181 if object_prefix in existing_files_prefixed:
182 existing_files_prefixed.remove(object_prefix)
183
184 # Remove the object prefix from all object string paths
185 for f in existing_files_prefixed:
186 if f.startswith(object_prefix):
187 existing_files.append(f[len(object_prefix) :])
188 else:
189 existing_files.append(f)
190
191 files = list(set(files) - set(existing_files))
192 if len(files) > 0:
193 self.log.info('%s files are going to be synced: %s.', len(files), files)
194 else:
195 self.log.info('There are no new files to sync. Have a nice day!')
196
197 if files:
198 hook = S3Hook(aws_conn_id=self.aws_conn_id, verify=self.verify)
199
200 for file in files:
201 # GCS hook builds its own in-memory file so we have to create
202 # and pass the path
203 file_object = hook.get_key(file, self.bucket)
204 with NamedTemporaryFile(mode='wb', delete=True) as f:
205 file_object.download_fileobj(f)
206 f.flush()
207
208 dest_gcs_bucket, dest_gcs_object_prefix = _parse_gcs_url(self.dest_gcs)
209 # There will always be a '/' before file because it is
210 # enforced at instantiation time
211 dest_gcs_object = dest_gcs_object_prefix + file
212
213 # Sync is sequential and the hook already logs too much
214 # so skip this for now
215 # self.log.info(
216 # 'Saving file {0} from S3 bucket {1} in GCS bucket {2}'
217 # ' as object {3}'.format(file, self.bucket,
218 # dest_gcs_bucket,
219 # dest_gcs_object))
220
221 gcs_hook.upload(dest_gcs_bucket, dest_gcs_object, f.name, gzip=self.gzip)
222
223 self.log.info("All done, uploaded %d files to Google Cloud Storage", len(files))
224 else:
225 self.log.info('In sync, no files needed to be uploaded to Google Cloud Storage')
226
227 return files
228
[end of airflow/providers/google/cloud/transfers/s3_to_gcs.py]
[start of airflow/providers/google/cloud/utils/mlengine_operator_utils.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 #
19 """This module contains helper functions for MLEngine operators."""
20
21 import base64
22 import json
23 import os
24 import re
25 from typing import Callable, Dict, Iterable, List, Optional, Tuple, TypeVar
26 from urllib.parse import urlsplit
27
28 import dill
29
30 from airflow import DAG
31 from airflow.exceptions import AirflowException
32 from airflow.operators.python import PythonOperator
33 from airflow.providers.apache.beam.operators.beam import BeamRunPythonPipelineOperator
34 from airflow.providers.google.cloud.hooks.gcs import GCSHook
35 from airflow.providers.google.cloud.operators.mlengine import MLEngineStartBatchPredictionJobOperator
36
37 T = TypeVar("T", bound=Callable)
38
39
40 def create_evaluate_ops(
41 task_prefix: str,
42 data_format: str,
43 input_paths: List[str],
44 prediction_path: str,
45 metric_fn_and_keys: Tuple[T, Iterable[str]],
46 validate_fn: T,
47 batch_prediction_job_id: Optional[str] = None,
48 region: Optional[str] = None,
49 project_id: Optional[str] = None,
50 dataflow_options: Optional[Dict] = None,
51 model_uri: Optional[str] = None,
52 model_name: Optional[str] = None,
53 version_name: Optional[str] = None,
54 dag: Optional[DAG] = None,
55 py_interpreter="python3",
56 ):
57 """
58     Creates Operators needed for model evaluation and returns them.
59
60     It gets predictions over inputs via the Cloud ML Engine BatchPrediction API
61     by calling MLEngineBatchPredictionOperator, then summarizes and validates
62     the result via Cloud Dataflow using DataFlowPythonOperator.
63
64 For details and pricing about Batch prediction, please refer to the website
65 https://cloud.google.com/ml-engine/docs/how-tos/batch-predict
66 and for Cloud Dataflow, https://cloud.google.com/dataflow/docs/
67
68 It returns three chained operators for prediction, summary, and validation,
69 named as ``<prefix>-prediction``, ``<prefix>-summary``, and ``<prefix>-validation``,
70 respectively.
71 (``<prefix>`` should contain only alphanumeric characters or hyphen.)
72
73 The upstream and downstream can be set accordingly like:
74
75 .. code-block:: python
76
77 pred, _, val = create_evaluate_ops(...)
78 pred.set_upstream(upstream_op)
79 ...
80 downstream_op.set_upstream(val)
81
82 Callers will provide two python callables, metric_fn and validate_fn, in
83 order to customize the evaluation behavior as they wish.
84
85 - metric_fn receives a dictionary per instance derived from json in the
86 batch prediction result. The keys might vary depending on the model.
87 It should return a tuple of metrics.
88     - validate_fn receives a dictionary of the averaged metrics that metric_fn
89 generated over all instances.
90       The key/value pairs of the dictionary match what is given by the
91       metric_fn_and_keys arg.
92 The dictionary contains an additional metric, 'count' to represent the
93 total number of instances received for evaluation.
94       The function should raise an exception to mark the task as failed in
95       case the validation result is not okay to proceed (i.e. to set the
96       trained version as default).
97
98 Typical examples are like this:
99
100 .. code-block:: python
101
102 def get_metric_fn_and_keys():
103 import math # imports should be outside of the metric_fn below.
104
105 def error_and_squared_error(inst):
106 label = float(inst["input_label"])
107 classes = float(inst["classes"]) # 0 or 1
108 err = abs(classes - label)
109 squared_err = math.pow(classes - label, 2)
110 return (err, squared_err) # returns a tuple.
111
112 return error_and_squared_error, ["err", "mse"] # key order must match.
113
114
115 def validate_err_and_count(summary):
116 if summary["err"] > 0.2:
117 raise ValueError("Too high err>0.2; summary=%s" % summary)
118 if summary["mse"] > 0.05:
119 raise ValueError("Too high mse>0.05; summary=%s" % summary)
120 if summary["count"] < 1000:
121 raise ValueError("Too few instances<1000; summary=%s" % summary)
122 return summary
123
124 For the details on the other BatchPrediction-related arguments (project_id,
125 job_id, region, data_format, input_paths, prediction_path, model_uri),
126 please refer to MLEngineBatchPredictionOperator too.
127
128 :param task_prefix: a prefix for the tasks. Only alphanumeric characters and
129         hyphens are allowed (no underscores), since this will be used as the dataflow
130         job name, which doesn't allow other characters.
131 :type task_prefix: str
132
133 :param data_format: either of 'TEXT', 'TF_RECORD', 'TF_RECORD_GZIP'
134 :type data_format: str
135
136 :param input_paths: a list of input paths to be sent to BatchPrediction.
137 :type input_paths: list[str]
138
139 :param prediction_path: GCS path to put the prediction results in.
140 :type prediction_path: str
141
142 :param metric_fn_and_keys: a tuple of metric_fn and metric_keys:
143
144 - metric_fn is a function that accepts a dictionary (for an instance),
145 and returns a tuple of metric(s) that it calculates.
146
147 - metric_keys is a list of strings to denote the key of each metric.
148 :type metric_fn_and_keys: tuple of a function and a list[str]
149
150 :param validate_fn: a function to validate whether the averaged metric(s) is
151 good enough to push the model.
152 :type validate_fn: function
153
154 :param batch_prediction_job_id: the id to use for the Cloud ML Batch
155 prediction job. Passed directly to the MLEngineBatchPredictionOperator as
156 the job_id argument.
157 :type batch_prediction_job_id: str
158
159 :param project_id: the Google Cloud project id in which to execute
160 Cloud ML Batch Prediction and Dataflow jobs. If None, then the `dag`'s
161 `default_args['project_id']` will be used.
162 :type project_id: str
163
164 :param region: the Google Cloud region in which to execute Cloud ML
165 Batch Prediction and Dataflow jobs. If None, then the `dag`'s
166 `default_args['region']` will be used.
167 :type region: str
168
169 :param dataflow_options: options to run Dataflow jobs. If None, then the
170 `dag`'s `default_args['dataflow_default_options']` will be used.
171 :type dataflow_options: dictionary
172
173 :param model_uri: GCS path of the model exported by Tensorflow using
174 ``tensorflow.estimator.export_savedmodel()``. It cannot be used with
175 model_name or version_name below. See MLEngineBatchPredictionOperator for
176 more detail.
177 :type model_uri: str
178
179 :param model_name: Used to indicate a model to use for prediction. Can be
180 used in combination with version_name, but cannot be used together with
181 model_uri. See MLEngineBatchPredictionOperator for more detail. If None,
182 then the `dag`'s `default_args['model_name']` will be used.
183 :type model_name: str
184
185 :param version_name: Used to indicate a model version to use for prediction,
186 in combination with model_name. Cannot be used together with model_uri.
187 See MLEngineBatchPredictionOperator for more detail. If None, then the
188 `dag`'s `default_args['version_name']` will be used.
189 :type version_name: str
190
191 :param dag: The `DAG` to use for all Operators.
192 :type dag: airflow.models.DAG
193
194 :param py_interpreter: Python version of the beam pipeline.
195         If None, this defaults to python3.
196 To track python versions supported by beam and related
197 issues check: https://issues.apache.org/jira/browse/BEAM-1251
198 :type py_interpreter: str
199
200 :returns: a tuple of three operators, (prediction, summary, validation)
201 :rtype: tuple(DataFlowPythonOperator, DataFlowPythonOperator,
202 PythonOperator)
203 """
204 batch_prediction_job_id = batch_prediction_job_id or ""
205 dataflow_options = dataflow_options or {}
206 region = region or ""
207
208 # Verify that task_prefix doesn't have any special characters except hyphen
209 # '-', which is the only allowed non-alphanumeric character by Dataflow.
210 if not re.match(r"^[a-zA-Z][-A-Za-z0-9]*$", task_prefix):
211 raise AirflowException(
212 "Malformed task_id for DataFlowPythonOperator (only alphanumeric "
213 "and hyphens are allowed but got: " + task_prefix
214 )
215
216 metric_fn, metric_keys = metric_fn_and_keys
217 if not callable(metric_fn):
218 raise AirflowException("`metric_fn` param must be callable.")
219 if not callable(validate_fn):
220 raise AirflowException("`validate_fn` param must be callable.")
221
222 if dag is not None and dag.default_args is not None:
223 default_args = dag.default_args
224 project_id = project_id or default_args.get('project_id')
225 region = region or default_args['region']
226 model_name = model_name or default_args.get('model_name')
227 version_name = version_name or default_args.get('version_name')
228 dataflow_options = dataflow_options or default_args.get('dataflow_default_options')
229
230 evaluate_prediction = MLEngineStartBatchPredictionJobOperator(
231 task_id=(task_prefix + "-prediction"),
232 project_id=project_id,
233 job_id=batch_prediction_job_id,
234 region=region,
235 data_format=data_format,
236 input_paths=input_paths,
237 output_path=prediction_path,
238 uri=model_uri,
239 model_name=model_name,
240 version_name=version_name,
241 dag=dag,
242 )
243
244 metric_fn_encoded = base64.b64encode(dill.dumps(metric_fn, recurse=True)).decode()
245 evaluate_summary = BeamRunPythonPipelineOperator(
246 task_id=(task_prefix + "-summary"),
247 py_file=os.path.join(os.path.dirname(__file__), 'mlengine_prediction_summary.py'),
248 default_pipeline_options=dataflow_options,
249 pipeline_options={
250 "prediction_path": prediction_path,
251 "metric_fn_encoded": metric_fn_encoded,
252 "metric_keys": ','.join(metric_keys),
253 },
254 py_interpreter=py_interpreter,
255 py_requirements=['apache-beam[gcp]>=2.14.0'],
256 dag=dag,
257 )
258 evaluate_summary.set_upstream(evaluate_prediction)
259
260 def apply_validate_fn(*args, templates_dict, **kwargs):
261 prediction_path = templates_dict["prediction_path"]
262 scheme, bucket, obj, _, _ = urlsplit(prediction_path)
263 if scheme != "gs" or not bucket or not obj:
264 raise ValueError(f"Wrong format prediction_path: {prediction_path}")
265 summary = os.path.join(obj.strip("/"), "prediction.summary.json")
266 gcs_hook = GCSHook()
267 summary = json.loads(gcs_hook.download(bucket, summary))
268 return validate_fn(summary)
269
270 evaluate_validation = PythonOperator(
271 task_id=(task_prefix + "-validation"),
272 python_callable=apply_validate_fn,
273 templates_dict={"prediction_path": prediction_path},
274 dag=dag,
275 )
276 evaluate_validation.set_upstream(evaluate_summary)
277
278 return evaluate_prediction, evaluate_summary, evaluate_validation
279
[end of airflow/providers/google/cloud/utils/mlengine_operator_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
3928eecc024f99088d487957e161880f5fe75ec9
|
The S3ToGCSOperator fails on templated `dest_gcs` URL
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Docker
**What happened**:
When passing a templatized `dest_gcs` argument to the `S3ToGCSOperator` operator, the DAG fails to import because the constructor attempts to test the validity of the URL before the template has been populated in `execute`.
The error is:
```
Broken DAG: [/opt/airflow/dags/bad_gs_dag.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1051, in gcs_object_is_directory
_, blob = _parse_gcs_url(bucket)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1063, in _parse_gcs_url
raise AirflowException('Please provide a bucket name')
airflow.exceptions.AirflowException: Please provide a bucket name
```
**What you expected to happen**:
The DAG should successfully parse when using a templatized `dest_gcs` value.
**How to reproduce it**:
Instantiating a `S3ToGCSOperator` task with `dest_gcs="{{ var.gcs_url }}"` fails.
<details>
```python
from airflow.decorators import dag
from airflow.utils.dates import days_ago
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator
@dag(
schedule_interval=None,
description="Demo S3-to-GS Bug",
catchup=False,
start_date=days_ago(1),
)
def demo_bug():
S3ToGCSOperator(
task_id="transfer_task",
bucket="example_bucket",
prefix="fake/prefix",
dest_gcs="{{ var.gcs_url }}",
)
demo_dag = demo_bug()
```
</details>
**Anything else we need to know**:
Should be fixable by moving the code that evaluates whether the URL is a folder to `execute()`.
|
Thanks for opening your first issue here! Be sure to follow the issue template!
It may be the same case for `AzureFileShareToGCSOperator`
I'd like to work on this issue. Seems a similar one might affect other operators as well - @TobKed mentioned [AzureFileShareToGCSOperator](https://github.com/apache/airflow/blob/866a601b76e219b3c043e1dbbc8fb22300866351/airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py#L106-L114). [`LocalFilesystemToS3Operator`](https://github.com/apache/airflow/blob/1632c9f519510ff218656bbc1554c80cb158e85a/airflow/providers/amazon/aws/transfers/local_to_s3.py#L97-L98) might work incorrect as well.
Basically, any validation of an input parameter that can be templated shouldn't live in the operator's `__init__` function, as its real value only becomes available during the task preparation stage at the earliest.
What would be the best way to handle this? Move the validation to the top of the `execute()` function?
I'll then search for other potential operators which might be affected.
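For illustration, a minimal standalone sketch of that pattern follows; `TemplatedOperator`, its fields, and the simplified trailing-slash check are hypothetical stand-ins for real operators such as `S3ToGCSOperator`, which would subclass `BaseOperator` and use `gcs_object_is_directory` instead:
```python
# Sketch of the "validate in execute(), not in __init__()" pattern.
# TemplatedOperator is a hypothetical stand-in; a real operator would
# subclass BaseOperator and declare template_fields the same way.


class TemplatedOperator:
    # Fields listed here are rendered by Airflow just before execution,
    # so their final values are unknown at DAG-parse time.
    template_fields = ("dest_gcs",)

    def __init__(self, *, dest_gcs=None, **kwargs):
        # Only store the (possibly still templated) value; do not validate it.
        self.dest_gcs = dest_gcs

    def _check_inputs(self):
        # Runs after template rendering, so dest_gcs holds its real value.
        # endswith("/") is a simplified stand-in for gcs_object_is_directory().
        if self.dest_gcs and not self.dest_gcs.endswith("/"):
            raise ValueError(
                'The destination path must end with a slash "/" or be empty.'
            )

    def execute(self, context):
        self._check_inputs()
        # ... the actual transfer logic would follow here ...


# At DAG-parse time a templated value now passes through untouched:
op = TemplatedOperator(dest_gcs="{{ var.gcs_url }}")
```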
|
2021-10-18T15:43:36Z
|
<patch>
diff --git a/airflow/providers/amazon/aws/transfers/local_to_s3.py b/airflow/providers/amazon/aws/transfers/local_to_s3.py
--- a/airflow/providers/amazon/aws/transfers/local_to_s3.py
+++ b/airflow/providers/amazon/aws/transfers/local_to_s3.py
@@ -94,10 +94,12 @@ def __init__(
self.gzip = gzip
self.acl_policy = acl_policy
+ def _check_inputs(self):
if 's3://' in self.dest_key and self.dest_bucket is not None:
raise TypeError('dest_bucket should be None when dest_key is provided as a full s3:// file path.')
def execute(self, context):
+ self._check_inputs()
s3_hook = S3Hook(aws_conn_id=self.aws_conn_id, verify=self.verify)
s3_hook.load_file(
self.filename,
diff --git a/airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py b/airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py
--- a/airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py
+++ b/airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py
@@ -103,7 +103,8 @@ def __init__(
self.gzip = gzip
self.google_impersonation_chain = google_impersonation_chain
- if dest_gcs and not gcs_object_is_directory(self.dest_gcs):
+ def _check_inputs(self) -> None:
+ if self.dest_gcs and not gcs_object_is_directory(self.dest_gcs):
self.log.info(
'Destination Google Cloud Storage path is not a valid '
'"directory", define a path that ends with a slash "/" or '
@@ -114,6 +115,7 @@ def __init__(
)
def execute(self, context):
+ self._check_inputs()
azure_fileshare_hook = AzureFileShareHook(self.azure_fileshare_conn_id)
files = azure_fileshare_hook.list_files(
share_name=self.share_name, directory_name=self.directory_name
diff --git a/airflow/providers/google/cloud/transfers/s3_to_gcs.py b/airflow/providers/google/cloud/transfers/s3_to_gcs.py
--- a/airflow/providers/google/cloud/transfers/s3_to_gcs.py
+++ b/airflow/providers/google/cloud/transfers/s3_to_gcs.py
@@ -147,7 +147,8 @@ def __init__(
self.gzip = gzip
self.google_impersonation_chain = google_impersonation_chain
- if dest_gcs and not gcs_object_is_directory(self.dest_gcs):
+ def _check_inputs(self) -> None:
+ if self.dest_gcs and not gcs_object_is_directory(self.dest_gcs):
self.log.info(
'Destination Google Cloud Storage path is not a valid '
'"directory", define a path that ends with a slash "/" or '
@@ -158,6 +159,7 @@ def __init__(
)
def execute(self, context):
+ self._check_inputs()
# use the super method to list all the files in an S3 bucket/key
files = super().execute(context)
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-2833
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Specifying schedule_los should give more informative error when backend not present
### Information
- **Qiskit Terra version**: master
- **Python version**: 3.6.8
- **Operating system**: Ubuntu 18.04
### What is the current behavior?
When assembling a schedule with `assemble` while also specifying `schedule_los` but no backend (or when `qubit_lo_freq/range` and `meas_lo_freq/range` are missing), an `IndexError` is raised.
### Steps to reproduce the problem
```
import numpy as np
from qiskit.pulse.channels import (DeviceSpecification, Qubit, RegisterSlot, MemorySlot,
DriveChannel, AcquireChannel, ControlChannel, MeasureChannel)
from qiskit.pulse.commands import functional_pulse
from qiskit.pulse import LoConfig
from qiskit.pulse.schedule import Schedule
from qiskit.compiler import assemble
@functional_pulse
def linear(duration, slope, intercept):
x = np.linspace(0, duration - 1, duration)
return slope * x + intercept
qubits = [Qubit(0, DriveChannel(0), AcquireChannel(0), MeasureChannel(0),
control_channels=[ControlChannel(0)]), Qubit(1, DriveChannel(1), MeasureChannel(0), AcquireChannel(1))]
registers = [RegisterSlot(i) for i in range(2)]
mem_slots = [MemorySlot(i) for i in range(2)]
two_qubit_device = DeviceSpecification(qubits, registers, mem_slots)
device = two_qubit_device
lp0 = linear(duration=3, slope=0.2, intercept=0.1)
sched = Schedule()
sched = sched.append(lp0(device.q[0].drive))
assemble(
sched,
schedule_los=[LoConfig(channel_los={
device.q[0].drive: 2*np.pi * 5.4,
device.q[1].drive: 2*np.pi * 5.5
})]
)
```
Raises `IndexError: list assignment index out of range`
### What is the expected behavior?
An informative error is raised.
### Suggested solutions
The error raised should be more informative than the current error.
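For illustration only (a hypothetical sketch, not the actual fix), a guard along these lines could replace the bare `IndexError` with an explicit message; the `_check_lo_settings` helper and its arguments are assumptions:
```python
# Hypothetical guard: fail early with a descriptive QiskitError when LO
# configurations are supplied but the defaults normally taken from a
# backend (qubit/meas LO frequencies or ranges) were never provided.
from qiskit.exceptions import QiskitError


def _check_lo_settings(schedule_los, qubit_lo_freq, qubit_lo_range,
                       meas_lo_freq, meas_lo_range):
    """Raise a clear error instead of letting list indexing fail later."""
    if schedule_los and not (qubit_lo_freq or qubit_lo_range):
        raise QiskitError(
            "schedule_los was specified, but no qubit LO defaults are "
            "available. Pass a backend to assemble() or set "
            "qubit_lo_freq/qubit_lo_range explicitly."
        )
    if schedule_los and not (meas_lo_freq or meas_lo_range):
        raise QiskitError(
            "schedule_los was specified, but no measurement LO defaults are "
            "available. Pass a backend to assemble() or set "
            "meas_lo_freq/meas_lo_range explicitly."
        )
```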
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with Noisy Intermediate-Scale Quantum (NISQ) computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = execute(qc, backend_sim).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and URL from step 2, here called `MY_API_TOKEN` and `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtNjQ5OTc5ODM1ODYyLTc2YWJhOWViZDA2OWI5N2EyMjIxN2YwODM5MWQyN2Q3MjczOGRlMDU4MzMxMWE5MzZjMzEzYzM3MmJiMzU5MzU)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## License
113
114 [Apache License 2.0](LICENSE.txt)
115
[end of README.md]
[start of qiskit/assembler/assemble_schedules.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Assemble function for converting a list of circuits into a qobj"""
16 from qiskit.exceptions import QiskitError
17 from qiskit.pulse.commands import PulseInstruction, AcquireInstruction, SamplePulse
18 from qiskit.qobj import (PulseQobj, QobjExperimentHeader,
19 PulseQobjInstruction, PulseQobjExperimentConfig,
20 PulseQobjExperiment, PulseQobjConfig, PulseLibraryItem)
21 from qiskit.qobj.converters import InstructionToQobjConverter, LoConfigConverter
22
23
24 def assemble_schedules(schedules, qobj_id, qobj_header, run_config):
25 """Assembles a list of schedules into a qobj which can be run on the backend.
26 Args:
27 schedules (list[Schedule]): schedules to assemble
28 qobj_id (int): identifier for the generated qobj
29 qobj_header (QobjHeader): header to pass to the results
30 run_config (RunConfig): configuration of the runtime environment
31 Returns:
32 PulseQobj: the Qobj to be run on the backends
33 Raises:
34 QiskitError: when invalid schedules or configs are provided
35 """
36 if hasattr(run_config, 'instruction_converter'):
37 instruction_converter = run_config.instruction_converter
38 else:
39 instruction_converter = InstructionToQobjConverter
40
41 qobj_config = run_config.to_dict()
42 qubit_lo_range = qobj_config.pop('qubit_lo_range')
43 meas_lo_range = qobj_config.pop('meas_lo_range')
44 meas_map = qobj_config.pop('meas_map', None)
45
46 max_memory_slot = 0
47
48 instruction_converter = instruction_converter(PulseQobjInstruction, **qobj_config)
49
50 lo_converter = LoConfigConverter(PulseQobjExperimentConfig, qubit_lo_range=qubit_lo_range,
51 meas_lo_range=meas_lo_range, **qobj_config)
52
53 # Pack everything into the Qobj
54 qobj_schedules = []
55 user_pulselib = {}
56 for idx, schedule in enumerate(schedules):
57 # instructions
58 qobj_instructions = []
59 # Instructions are returned as tuple of shifted time and instruction
60 for shift, instruction in schedule.instructions:
61 # TODO: support conditional gate
62 if isinstance(instruction, PulseInstruction):
63 name = instruction.command.name
64 if name in user_pulselib and instruction.command != user_pulselib[name]:
65 name = "{0}-{1:x}".format(name, hash(instruction.command.samples.tostring()))
66 instruction = PulseInstruction(
67 command=SamplePulse(name=name, samples=instruction.command.samples),
68 name=instruction.name,
69 channel=instruction.timeslots.channels[0])
70 # add samples to pulse library
71 user_pulselib[name] = instruction.command
72 if isinstance(instruction, AcquireInstruction):
73 max_memory_slot = max(max_memory_slot,
74 *[slot.index for slot in instruction.mem_slots])
75 if meas_map:
76 # verify all acquires satisfy meas_map
77 _validate_meas_map(instruction, meas_map)
78
79 qobj_instructions.append(instruction_converter(shift, instruction))
80
81 # experiment header
82 qobj_experiment_header = QobjExperimentHeader(
83 name=schedule.name or 'Experiment-%d' % idx
84 )
85
86 qobj_schedules.append({
87 'header': qobj_experiment_header,
88 'instructions': qobj_instructions
89 })
90
91 # set number of memoryslots
92 qobj_config['memory_slots'] = max_memory_slot + 1
93
94 # setup pulse_library
95 qobj_config['pulse_library'] = [PulseLibraryItem(name=pulse.name, samples=pulse.samples)
96 for pulse in user_pulselib.values()]
97
98 # create qobj experiment field
99 experiments = []
100 schedule_los = qobj_config.pop('schedule_los', [])
101
102 if len(schedule_los) == 1:
103 lo_dict = schedule_los[0]
104 # update global config
105 q_los = lo_converter.get_qubit_los(lo_dict)
106 if q_los:
107 qobj_config['qubit_lo_freq'] = q_los
108 m_los = lo_converter.get_meas_los(lo_dict)
109 if m_los:
110 qobj_config['meas_lo_freq'] = m_los
111
112 if schedule_los:
113 # multiple frequency setups
114 if len(qobj_schedules) == 1:
115 # frequency sweep
116 for lo_dict in schedule_los:
117 experiments.append(PulseQobjExperiment(
118 instructions=qobj_schedules[0]['instructions'],
119 header=qobj_schedules[0]['header'],
120 config=lo_converter(lo_dict)
121 ))
122 elif len(qobj_schedules) == len(schedule_los):
123 # n:n setup
124 for lo_dict, schedule in zip(schedule_los, qobj_schedules):
125 experiments.append(PulseQobjExperiment(
126 instructions=schedule['instructions'],
127 header=schedule['header'],
128 config=lo_converter(lo_dict)
129 ))
130 else:
131 raise QiskitError('Invalid LO setting is specified. '
132 'The LO should be configured for each schedule, or '
133 'single setup for all schedules (unique), or '
134 'multiple setups for a single schedule (frequency sweep),'
135 'or no LO configured at all.')
136 else:
137 # unique frequency setup
138 for schedule in qobj_schedules:
139 experiments.append(PulseQobjExperiment(
140 instructions=schedule['instructions'],
141 header=schedule['header'],
142 ))
143
144 qobj_config = PulseQobjConfig(**qobj_config)
145
146 return PulseQobj(qobj_id=qobj_id,
147 config=qobj_config,
148 experiments=experiments,
149 header=qobj_header)
150
151
152 def _validate_meas_map(acquire, meas_map):
153 """Validate all qubits tied in meas_map are to be acquired."""
154 meas_map_set = [set(m) for m in meas_map]
155 # Verify that each qubit is listed once in measurement map
156 measured_qubits = {acq_ch.index for acq_ch in acquire.acquires}
157 tied_qubits = set()
158 for meas_qubit in measured_qubits:
159 for map_inst in meas_map_set:
160 if meas_qubit in map_inst:
161 tied_qubits |= map_inst
162
163 if measured_qubits != tied_qubits:
164 raise QiskitError('Qubits to be acquired: {0} do not satisfy required qubits '
165 'in measurement map: {1}'.format(measured_qubits, tied_qubits))
166 return True
167
[end of qiskit/assembler/assemble_schedules.py]
[start of qiskit/compiler/assemble.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Assemble function for converting a list of circuits into a qobj"""
16 import uuid
17 import copy
18
19 from qiskit.circuit import QuantumCircuit
20 from qiskit.exceptions import QiskitError
21 from qiskit.pulse import ScheduleComponent, LoConfig
22 from qiskit.assembler.run_config import RunConfig
23 from qiskit.assembler import assemble_circuits, assemble_schedules
24 from qiskit.qobj import QobjHeader
25 from qiskit.validation.exceptions import ModelValidationError
26
27
28 # TODO: parallelize over the experiments (serialize each separately, then add global header/config)
29 def assemble(experiments,
30 backend=None,
31 qobj_id=None, qobj_header=None,
32 shots=1024, memory=False, max_credits=None, seed_simulator=None,
33 qubit_lo_freq=None, meas_lo_freq=None,
34 qubit_lo_range=None, meas_lo_range=None,
35 schedule_los=None, meas_level=2, meas_return='avg', meas_map=None,
36 memory_slot_size=100, rep_time=None, parameter_binds=None,
37 **run_config):
38 """Assemble a list of circuits or pulse schedules into a Qobj.
39
40 This function serializes the payloads, which could be either circuits or schedules,
41 to create Qobj "experiments". It further annotates the experiment payload with
42 header and configurations.
43
44 Args:
45 experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
46 Circuit(s) or pulse schedule(s) to execute
47
48 backend (BaseBackend):
49 If set, some runtime options are automatically grabbed from
50 backend.configuration() and backend.defaults().
51 If any other option is explicitly set (e.g. rep_rate), it
52 will override the backend's.
53             If any options are set in the run_config, they will
54             also override the backend's.
55
56 qobj_id (str):
57 String identifier to annotate the Qobj
58
59 qobj_header (QobjHeader or dict):
60 User input that will be inserted in Qobj header, and will also be
61 copied to the corresponding Result header. Headers do not affect the run.
62
63 shots (int):
64 Number of repetitions of each circuit, for sampling. Default: 1024
65
66 memory (bool):
67 If True, per-shot measurement bitstrings are returned as well
68 (provided the backend supports it). For OpenPulse jobs, only
69 measurement level 2 supports this option. Default: False
70
71 max_credits (int):
72 Maximum credits to spend on job. Default: 10
73
74 seed_simulator (int):
75 Random seed to control sampling, for when backend is a simulator
76
77 qubit_lo_freq (list):
78 List of default qubit lo frequencies
79
80 meas_lo_freq (list):
81 List of default meas lo frequencies
82
83 qubit_lo_range (list):
84 List of drive lo ranges
85
86 meas_lo_range (list):
87 List of meas lo ranges
88
89 schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or \
90 Union[Dict[PulseChannel, float], LoConfig]):
91 Experiment LO configurations
92
93 meas_level (int):
94 Set the appropriate level of the measurement output for pulse experiments.
95
96 meas_return (str):
97 Level of measurement data for the backend to return
98 For `meas_level` 0 and 1:
99 * "single" returns information from every shot.
100 * "avg" returns average measurement output (averaged over number of shots).
101
102 meas_map (list):
103 List of lists, containing qubits that must be measured together.
104
105 memory_slot_size (int):
106 Size of each memory slot if the output is Level 0.
107
108 rep_time (int): repetition time of the experiment in μs.
109 The delay between experiments will be rep_time.
110 Must be from the list provided by the device.
111
112 parameter_binds (list[dict{Parameter: Value}]):
113 List of Parameter bindings over which the set of experiments will be
114 executed. Each list element (bind) should be of the form
115 {Parameter1: value1, Parameter2: value2, ...}. All binds will be
116 executed across all experiments, e.g. if parameter_binds is a
117 length-n list, and there are m experiments, a total of m x n
118 experiments will be run (one for each experiment/bind pair).
119
120 run_config (dict):
121 extra arguments used to configure the run (e.g. for Aer configurable backends)
122 Refer to the backend documentation for details on these arguments
123
124 Returns:
125 Qobj: a qobj which can be run on a backend. Depending on the type of input,
126 this will be either a QasmQobj or a PulseQobj.
127
128 Raises:
129 QiskitError: if the input cannot be interpreted as either circuits or schedules
130 """
131 experiments = experiments if isinstance(experiments, list) else [experiments]
132 qobj_id, qobj_header, run_config_common_dict = _parse_common_args(backend, qobj_id, qobj_header,
133 shots, memory, max_credits,
134 seed_simulator, **run_config)
135
136 # assemble either circuits or schedules
137 if all(isinstance(exp, QuantumCircuit) for exp in experiments):
138 run_config = _parse_circuit_args(parameter_binds, **run_config_common_dict)
139
140 # If circuits are parameterized, bind parameters and remove from run_config
141 bound_experiments, run_config = _expand_parameters(circuits=experiments,
142 run_config=run_config)
143 return assemble_circuits(circuits=bound_experiments, qobj_id=qobj_id,
144 qobj_header=qobj_header, run_config=run_config)
145
146 elif all(isinstance(exp, ScheduleComponent) for exp in experiments):
147 run_config = _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq,
148 qubit_lo_range, meas_lo_range,
149 schedule_los, meas_level, meas_return,
150 meas_map, memory_slot_size, rep_time,
151 **run_config_common_dict)
152
153 return assemble_schedules(schedules=experiments, qobj_id=qobj_id,
154 qobj_header=qobj_header, run_config=run_config)
155
156 else:
157 raise QiskitError("bad input to assemble() function; "
158 "must be either circuits or schedules")
159
160
161 # TODO: rework to return a list of RunConfigs (one for each experiments), and a global one
162 def _parse_common_args(backend, qobj_id, qobj_header, shots,
163 memory, max_credits, seed_simulator,
164 **run_config):
165 """Resolve the various types of args allowed to the assemble() function through
166 duck typing, overriding args, etc. Refer to the assemble() docstring for details on
167 what types of inputs are allowed.
168
169 Here the args are resolved by converting them to standard instances, and prioritizing
170 them in case a run option is passed through multiple args (explicitly setting an arg
171 has more priority than the arg set by backend)
172
173 Returns:
174 RunConfig: a run config, which is a standardized object that configures the qobj
175 and determines the runtime environment.
176 """
177 # grab relevant info from backend if it exists
178 backend_config = None
179 if backend:
180 backend_config = backend.configuration()
181
182 # an identifier for the Qobj
183 qobj_id = qobj_id or str(uuid.uuid4())
184
185 # The header that goes at the top of the Qobj (and later Result)
186 # we process it as dict, then write entries that are not None to a QobjHeader object
187 qobj_header = qobj_header or {}
188 if isinstance(qobj_header, QobjHeader):
189 qobj_header = qobj_header.to_dict()
190 backend_name = getattr(backend_config, 'backend_name', None)
191 backend_version = getattr(backend_config, 'backend_version', None)
192 qobj_header = {**dict(backend_name=backend_name, backend_version=backend_version),
193 **qobj_header}
194 qobj_header = QobjHeader(**{k: v for k, v in qobj_header.items() if v is not None})
195
196 # create run configuration and populate
197 run_config_dict = dict(shots=shots,
198 memory=memory,
199 max_credits=max_credits,
200 seed_simulator=seed_simulator,
201 **run_config)
202
203 return qobj_id, qobj_header, run_config_dict
204
205
206 def _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq, qubit_lo_range,
207 meas_lo_range, schedule_los, meas_level,
208 meas_return, meas_map,
209 memory_slot_size, rep_time,
210 **run_config):
211 """Build a pulse RunConfig replacing unset arguments with defaults derived from the `backend`.
212 See `assemble` for more information on the required arguments.
213
214 Returns:
215 RunConfig: a run config, which is a standardized object that configures the qobj
216 and determines the runtime environment.
217 """
218 # grab relevant info from backend if it exists
219 backend_config = None
220 backend_default = None
221 if backend:
222 backend_config = backend.configuration()
223 # TODO : Remove usage of config.defaults when backend.defaults() is updated.
224 try:
225 backend_default = backend.defaults()
226 except (ModelValidationError, AttributeError):
227 from collections import namedtuple
228 backend_config_defaults = getattr(backend_config, 'defaults', {})
229 BackendDefault = namedtuple('BackendDefault', ('qubit_freq_est', 'meas_freq_est'))
230 backend_default = BackendDefault(
231 qubit_freq_est=backend_config_defaults.get('qubit_freq_est'),
232 meas_freq_est=backend_config_defaults.get('meas_freq_est')
233 )
234
235 meas_map = meas_map or getattr(backend_config, 'meas_map', None)
236
237 schedule_los = schedule_los or []
238 if isinstance(schedule_los, (LoConfig, dict)):
239 schedule_los = [schedule_los]
240
241 # Convert to LoConfig if lo configuration supplied as dictionary
242 schedule_los = [lo_config if isinstance(lo_config, LoConfig) else LoConfig(lo_config)
243 for lo_config in schedule_los]
244
245 qubit_lo_freq = qubit_lo_freq or getattr(backend_default, 'qubit_freq_est', [])
246 meas_lo_freq = meas_lo_freq or getattr(backend_default, 'meas_freq_est', [])
247
248 qubit_lo_range = qubit_lo_range or getattr(backend_config, 'qubit_lo_range', [])
249 meas_lo_range = meas_lo_range or getattr(backend_config, 'meas_lo_range', [])
250
251 rep_time = rep_time or getattr(backend_config, 'rep_times', None)
252 if isinstance(rep_time, list):
253 rep_time = rep_time[0]
254
255 # create run configuration and populate
256 run_config_dict = dict(qubit_lo_freq=qubit_lo_freq,
257 meas_lo_freq=meas_lo_freq,
258 qubit_lo_range=qubit_lo_range,
259 meas_lo_range=meas_lo_range,
260 schedule_los=schedule_los,
261 meas_level=meas_level,
262 meas_return=meas_return,
263 meas_map=meas_map,
264 memory_slot_size=memory_slot_size,
265 rep_time=rep_time,
266 **run_config)
267 run_config = RunConfig(**{k: v for k, v in run_config_dict.items() if v is not None})
268
269 return run_config
270
271
272 def _parse_circuit_args(parameter_binds, **run_config):
273 """Build a circuit RunConfig replacing unset arguments with defaults derived from the `backend`.
274 See `assemble` for more information on the required arguments.
275
276 Returns:
277 RunConfig: a run config, which is a standardized object that configures the qobj
278 and determines the runtime environment.
279 """
280 parameter_binds = parameter_binds or []
281
282 # create run configuration and populate
283 run_config_dict = dict(parameter_binds=parameter_binds, **run_config)
284 run_config = RunConfig(**{k: v for k, v in run_config_dict.items() if v is not None})
285
286 return run_config
287
288
289 def _expand_parameters(circuits, run_config):
290 """Verifies that there is a single common set of parameters shared between
291 all circuits and all parameter binds in the run_config. Returns an expanded
292 list of circuits (if parameterized) with all parameters bound, and a copy of
293 the run_config with parameter_binds cleared.
294
295 If neither the circuits nor the run_config specify parameters, the two are
296 returned unmodified.
297
298 Raises:
299 QiskitError: if run_config parameters are not compatible with circuit parameters
300
301 Returns:
302 Tuple(List[QuantumCircuit], RunConfig):
303 - List of input circuits expanded and with parameters bound
304 - RunConfig with parameter_binds removed
305 """
306
307 parameter_binds = run_config.parameter_binds
308 if parameter_binds or \
309 any(circuit.parameters for circuit in circuits):
310
311 all_bind_parameters = [bind.keys()
312 for bind in parameter_binds]
313 all_circuit_parameters = [circuit.parameters for circuit in circuits]
314
315 # Collect set of all unique parameters across all circuits and binds
316 unique_parameters = {param
317 for param_list in all_bind_parameters + all_circuit_parameters
318 for param in param_list}
319
320 # Check that all parameters are common to all circuits and binds
321 if not all_bind_parameters \
322 or not all_circuit_parameters \
323 or any(unique_parameters != bind_params for bind_params in all_bind_parameters) \
324 or any(unique_parameters != parameters for parameters in all_circuit_parameters):
325 raise QiskitError(
326 ('Mismatch between run_config.parameter_binds and all circuit parameters. ' +
327 'Parameter binds: {} ' +
328 'Circuit parameters: {}').format(all_bind_parameters, all_circuit_parameters))
329
330 circuits = [circuit.bind_parameters(binds)
331 for circuit in circuits
332 for binds in parameter_binds]
333
334 # All parameters have been expanded and bound, so remove from run_config
335 run_config = copy.deepcopy(run_config)
336 run_config.parameter_binds = []
337
338 return circuits, run_config
339
[end of qiskit/compiler/assemble.py]
[start of qiskit/execute.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Helper module for simplified Qiskit usage.
17
18 In general we recommend using the SDK modules directly. However, to get something
19 running quickly we have provided this wrapper module.
20 """
21 from qiskit.compiler import transpile, assemble
22
23
24 def execute(experiments, backend,
25 basis_gates=None, coupling_map=None, # circuit transpile options
26 backend_properties=None, initial_layout=None,
27 seed_transpiler=None, optimization_level=None, pass_manager=None,
28 qobj_id=None, qobj_header=None, shots=1024, # common run options
29 memory=False, max_credits=10, seed_simulator=None,
30 default_qubit_los=None, default_meas_los=None, # schedule run options
31 schedule_los=None, meas_level=2, meas_return='avg',
32 memory_slots=None, memory_slot_size=100, rep_time=None, parameter_binds=None,
33 **run_config):
34 """Execute a list of circuits or pulse schedules on a backend.
35
36 The execution is asynchronous, and a handle to a job instance is returned.
37
38 Args:
39 experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
40 Circuit(s) or pulse schedule(s) to execute
41
42 backend (BaseBackend):
43 Backend to execute circuits on.
44 Transpiler options are automatically grabbed from
45 backend.configuration() and backend.properties().
46 If any other option is explicitly set (e.g. coupling_map), it
47 will override the backend's.
48
49 basis_gates (list[str]):
50 List of basis gate names to unroll to.
51 e.g:
52 ['u1', 'u2', 'u3', 'cx']
53 If None, do not unroll.
54
55 coupling_map (CouplingMap or list):
56 Coupling map (perhaps custom) to target in mapping.
57 Multiple formats are supported:
58 a. CouplingMap instance
59
60 b. list
61 Must be given as an adjacency matrix, where each entry
62 specifies all two-qubit interactions supported by backend
63 e.g:
64 [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]
65
66 backend_properties (BackendProperties):
67 Properties returned by a backend, including information on gate
68 errors, readout errors, qubit coherence times, etc. For a backend
69 that provides this information, it can be obtained with:
70 ``backend.properties()``
71
72 initial_layout (Layout or dict or list):
73 Initial position of virtual qubits on physical qubits.
74 If this layout makes the circuit compatible with the coupling_map
75 constraints, it will be used.
76 The final layout is not guaranteed to be the same, as the transpiler
77 may permute qubits through swaps or other means.
78
79 Multiple formats are supported:
80 a. Layout instance
81
82 b. dict
83 virtual to physical:
84 {qr[0]: 0,
85 qr[1]: 3,
86 qr[2]: 5}
87
88 physical to virtual:
89 {0: qr[0],
90 3: qr[1],
91 5: qr[2]}
92
93 c. list
94 virtual to physical:
95 [0, 3, 5] # virtual qubits are ordered (in addition to named)
96
97 physical to virtual:
98 [qr[0], None, None, qr[1], None, qr[2]]
99
100 seed_transpiler (int):
101 Sets random seed for the stochastic parts of the transpiler
102
103 optimization_level (int):
104 How much optimization to perform on the circuits.
105 Higher levels generate more optimized circuits,
106 at the expense of longer transpilation time.
107 0: no optimization
108 1: light optimization
109 2: heavy optimization
110 3: even heavier optimization
111 If None, level 1 will be chosen as default.
112
113 pass_manager (PassManager):
114 The pass manager to use during transpilation. If this arg is present,
115 auto-selection of pass manager based on the transpile options will be
116 turned off and this pass manager will be used directly.
117
118 qobj_id (str):
119 String identifier to annotate the Qobj
120
121 qobj_header (QobjHeader or dict):
122 User input that will be inserted in Qobj header, and will also be
123 copied to the corresponding Result header. Headers do not affect the run.
124
125 shots (int):
126 Number of repetitions of each circuit, for sampling. Default: 1024
127
128 memory (bool):
129 If True, per-shot measurement bitstrings are returned as well
130 (provided the backend supports it). For OpenPulse jobs, only
131 measurement level 2 supports this option. Default: False
132
133 max_credits (int):
134 Maximum credits to spend on job. Default: 10
135
136 seed_simulator (int):
137 Random seed to control sampling, for when backend is a simulator
138
139 default_qubit_los (list):
140 List of default qubit lo frequencies
141
142 default_meas_los (list):
143 List of default meas lo frequencies
144
145 schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
146 Union[Dict[PulseChannel, float], LoConfig]):
147 Experiment LO configurations
148
149 meas_level (int):
150 Set the appropriate level of the measurement output for pulse experiments.
151
152 meas_return (str):
153 Level of measurement data for the backend to return
154 For `meas_level` 0 and 1:
155 "single" returns information from every shot.
156 "avg" returns average measurement output (averaged over number of shots).
157
158 memory_slots (int):
159 Number of classical memory slots used in this job.
160
161 memory_slot_size (int):
162 Size of each memory slot if the output is Level 0.
163
164 rep_time (int): repetition time of the experiment in μs.
165 The delay between experiments will be rep_time.
166 Must be from the list provided by the device.
167
168 parameter_binds (list[dict{Parameter: Value}]):
169 List of Parameter bindings over which the set of experiments will be
170 executed. Each list element (bind) should be of the form
171 {Parameter1: value1, Parameter2: value2, ...}. All binds will be
172 executed across all experiments, e.g. if parameter_binds is a
173 length-n list, and there are m experiments, a total of m x n
174 experiments will be run (one for each experiment/bind pair).
175
176 run_config (dict):
177 Extra arguments used to configure the run (e.g. for Aer configurable backends)
178 Refer to the backend documentation for details on these arguments
179 Note: for now, these keyword arguments will both be copied to the
180 Qobj config, and passed to backend.run()
181
182 Returns:
183 BaseJob: returns job instance derived from BaseJob
184
185 Raises:
186 QiskitError: if the execution cannot be interpreted as either circuits or schedules
187 """
188 # transpiling the circuits using given transpile options
189 experiments = transpile(experiments,
190 basis_gates=basis_gates,
191 coupling_map=coupling_map,
192 backend_properties=backend_properties,
193 initial_layout=initial_layout,
194 seed_transpiler=seed_transpiler,
195 optimization_level=optimization_level,
196 backend=backend,
197 pass_manager=pass_manager,
198 )
199
200 # assembling the circuits into a qobj to be run on the backend
201 qobj = assemble(experiments,
202 qobj_id=qobj_id,
203 qobj_header=qobj_header,
204 shots=shots,
205 memory=memory,
206 max_credits=max_credits,
207 seed_simulator=seed_simulator,
208 default_qubit_los=default_qubit_los,
209 default_meas_los=default_meas_los,
210 schedule_los=schedule_los,
211 meas_level=meas_level,
212 meas_return=meas_return,
213 memory_slots=memory_slots,
214 memory_slot_size=memory_slot_size,
215 rep_time=rep_time,
216 parameter_binds=parameter_binds,
217 backend=backend,
218 run_config=run_config
219 )
220
221 # executing the circuits on the backend and returning the job
222 return backend.run(qobj, **run_config)
223
[end of qiskit/execute.py]
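The `parameter_binds` behaviour documented in the docstring above (every bind is applied to every experiment, so m circuits and n binds give m x n runs) is easiest to see with a short usage sketch. This is an added illustration, not part of the file: the backend name (`qasm_simulator` via `BasicAer`) and the parameterised `u3` gate are assumptions about what a contemporary Terra installation provides.

```python
import numpy as np

from qiskit import BasicAer, ClassicalRegister, QuantumCircuit, QuantumRegister, execute
from qiskit.circuit import Parameter

theta = Parameter('theta')

q = QuantumRegister(1)
c = ClassicalRegister(1)
qc = QuantumCircuit(q, c)
qc.u3(theta, 0, 0, q[0])  # rotation angle left as a free parameter
qc.measure(q, c)

# One circuit and two binds -> 1 x 2 = 2 experiments in total.
job = execute(qc, BasicAer.get_backend('qasm_simulator'),
              shots=1024,
              parameter_binds=[{theta: 0.0}, {theta: np.pi}])

result = job.result()
for i in range(2):
    print(result.get_counts(i))  # counts for each bound value of theta
```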
[start of qiskit/pulse/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Module for Pulses."""
16
17 from .channels import (DeviceSpecification, PulseChannelSpec, DriveChannel,
18 MeasureChannel, AcquireChannel,
19 ControlChannel, RegisterSlot, MemorySlot)
20 from .cmd_def import CmdDef
21 from .commands import (Instruction, Acquire, FrameChange, PersistentValue,
22 SamplePulse, Snapshot, Kernel, Discriminator, functional_pulse)
23 from .configuration import LoConfig, LoRange
24 from .exceptions import PulseError
25 from .interfaces import ScheduleComponent
26 # from .parser import parse_string_expr
27 from .schedule import Schedule
28
[end of qiskit/pulse/__init__.py]
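The names re-exported above can be combined roughly as follows; this is an added sketch that mirrors the pattern used in the issue reproduction further down (a `functional_pulse`-decorated envelope is instantiated, applied to a channel, and appended to a `Schedule`). The envelope function itself is invented purely for illustration.

```python
import numpy as np

from qiskit.pulse import Schedule, DriveChannel, functional_pulse


@functional_pulse
def gaussian_like(duration, amp):
    # illustrative sample envelope with |samples| <= 1
    x = np.linspace(-1.0, 1.0, duration)
    return amp * np.exp(-x ** 2)


sched = Schedule()
# calling the decorated function yields a SamplePulse; calling that with a
# channel yields an instruction that can be appended to the schedule
sched = sched.append(gaussian_like(duration=10, amp=0.5)(DriveChannel(0)))
```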
[start of qiskit/pulse/channels/device_specification.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Specification of the device.
17 """
18 from typing import List
19 from warnings import warn
20
21 from qiskit.validation.exceptions import ModelValidationError
22
23 from qiskit.pulse.exceptions import PulseError
24 from .pulse_channels import DriveChannel, ControlChannel, MeasureChannel
25 from .channels import AcquireChannel, MemorySlot, RegisterSlot
26 from .qubit import Qubit
27
28
29 class DeviceSpecification:
30 """Implement a device specification, which is usually constructed from backend info."""
31
32 def __init__(self,
33 qubits: List[Qubit],
34 registers: List[RegisterSlot],
35 mem_slots: List[MemorySlot]):
36 """
37 Create device specification with the given qubits, register slots and memory slots.
38 Args:
39 qubits: List of qubits constituting the device.
40 """
41 self._qubits = qubits
42 self._reg_slots = registers
43 self._mem_slots = mem_slots
44
45 warn('DeviceSpecification is deprecated. '
46 'Instead of DeviceSpecification, use PulseChannelSpec.', DeprecationWarning)
47
48 @classmethod
49 def create_from(cls, backend):
50 """
51 Create device specification with values in backend configuration.
52 Args:
53 backend(Backend): backend configuration
54 Returns:
55 DeviceSpecification: created device specification
56 Raises:
57 PulseError: when an invalid backend is specified
58 """
59 backend_config = backend.configuration()
60
61 if not backend_config.open_pulse:
62 raise PulseError(backend_config.backend_name + ' does not support OpenPulse.')
63
64 # TODO : Remove usage of config.defaults when backend.defaults() is updated.
65 try:
66 backend_default = backend.defaults()
67 buffer = backend_default.buffer
68 except ModelValidationError:
69 try:
70 buffer = backend_config.defaults.get('buffer', 0)
71 except AttributeError:
72 buffer = 0
73
74 # system size
75 n_qubits = backend_config.n_qubits
76 n_registers = backend_config.n_registers
77 n_uchannels = backend_config.n_uchannels
78
79 # generate channels, assuming their numbering is aligned with the qubits
80 drives = [DriveChannel(i, buffer=buffer) for i in range(n_qubits)]
81
82 measures = [MeasureChannel(i, buffer=buffer) for i in range(n_qubits)]
83
84 controls = [ControlChannel(i, buffer=buffer) for i in range(n_uchannels)]
85
86 acquires = [AcquireChannel(i, buffer=buffer) for i in range(n_qubits)]
87
88 qubits = []
89 for i in range(n_qubits):
90 # TODO: get qubits <-> channels relationship from backend
91 qubit = Qubit(i, drives[i], measures[i], acquires[i],
92 control_channels=[] if not controls else controls)
93 qubits.append(qubit)
94
95 registers = [RegisterSlot(i) for i in range(n_registers)]
96 # TODO: get #mem_slots from backend
97 mem_slots = [MemorySlot(i) for i in range(len(qubits))]
98
99 return DeviceSpecification(qubits, registers, mem_slots)
100
101 def __eq__(self, other):
102 """Two device specs are the same if they have the same qubits.
103
104 Args:
105 other (DeviceSpecification): other DeviceSpecification
106
107 Returns:
108 bool: are self and other equal.
109 """
110 if type(self) is type(other) and \
111 self._qubits == other._qubits:
112 return True
113 return False
114
115 @property
116 def q(self) -> List[Qubit]:
117 """Return qubits in this device."""
118 return self._qubits
119
120 @property
121 def c(self) -> List[RegisterSlot]:
122 """Return register slots in this device."""
123 return self._reg_slots
124
125 @property
126 def mem(self) -> List[MemorySlot]:
127 """Return memory slots in this device."""
128 return self._mem_slots
129
[end of qiskit/pulse/channels/device_specification.py]
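Because `create_from` needs a live OpenPulse-enabled backend, the quickest way to illustrate the `q` / `c` / `mem` accessors above is to build a tiny device by hand, mirroring what the constructor and `create_from` do internally. This is an added sketch (it triggers the `DeprecationWarning` noted above; `PulseChannelSpec` below is the recommended replacement), and the one-qubit channel assignment is an assumption for illustration only.

```python
from qiskit.pulse.channels import (DeviceSpecification, Qubit, RegisterSlot, MemorySlot,
                                   DriveChannel, MeasureChannel, AcquireChannel)

# Minimal one-qubit device: Qubit takes (index, drive, measure, acquire), as in create_from.
qubits = [Qubit(0, DriveChannel(0), MeasureChannel(0), AcquireChannel(0))]
registers = [RegisterSlot(0)]
mem_slots = [MemorySlot(0)]

device = DeviceSpecification(qubits, registers, mem_slots)  # emits a DeprecationWarning

drive_q0 = device.q[0].drive   # DriveChannel(0) of qubit 0
reg_0 = device.c[0]            # RegisterSlot(0)
mem_0 = device.mem[0]          # MemorySlot(0)
```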
[start of qiskit/pulse/channels/pulse_channel_spec.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 # pylint: disable=invalid-name
16
17 """
18 Pulse channel wrapper object for the system.
19 """
20 from typing import List
21
22 from qiskit.pulse.exceptions import PulseError
23 from .channels import AcquireChannel, MemorySlot, RegisterSlot
24 from .pulse_channels import DriveChannel, ControlChannel, MeasureChannel
25 from .qubit import Qubit
26 from .system_topology import SystemTopology
27
28
29 class PulseChannelSpec:
30 """A helper class to support assembling channel objects and their mapping to qubits.
31 This class can be initialized in either of the two ways shown below.
32
33 1. With the `BaseBackend` object of the target pulse backend:
34 ```python
35 system = PulseChannelSpec.from_backend(backend)
36 ```
37
38 2. By specifying the number of pulse elements constituting the target quantum computing system:
39 ```python
40 system = PulseChannelSpec(n_qubits=5, n_control=6, n_registers=1, buffer=10)
41 ```
42
43 Within Qiskit a quantum computing system at the level of pulses is abstracted as
44 a combination of multiple types of channels on which instructions are scheduled.
45 The most common channel types are:
46 - `PulseChannel`: For performing stimulus of the system.
47 - `AcquireChannel`: For scheduling acquisition of qubit data.
48 - `MemorySlot`: For persistent storage of measurement results.
49 - `RegisterSlot`: For temporary storage of and conditional feedback on measurement results.
50
51 There are also several special types of pulse channels:
52 - `DriveChannel`: Used to control a single qubit.
53 - `MeasureChannel`: Used to perform measurement stimulus of a single qubit.
54 - `ControlChannel`: Used to control an arbitrary Hamiltonian term on the system.
55 Typically a two-qubit interaction.
56
57 A collection of above channels is automatically assembled within `PulseChannelSpec`.
58
59 For example, the zeroth drive channel may be accessed by
60 ```python
61 system.drives[0]
62 ```
63 or, if the channel is connected to qubit 0,
64 ```python
65 system.qubits[0].drive
66 ```
67 In the above example, both commands refer to the same object.
68 """
69 def __init__(self,
70 n_qubits: int,
71 n_control: int,
72 n_registers: int,
73 buffer: int = 0):
74 """
75 Create pulse channel specification with number of channels.
76
77 Args:
78 n_qubits: Number of qubits.
79 n_control: Number of control channels.
80 n_registers: Number of classical registers.
81 buffer: Buffer that should be placed between instructions on channel.
82 """
83 self._drives = [DriveChannel(idx, buffer) for idx in range(n_qubits)]
84 self._controls = [ControlChannel(idx, buffer) for idx in range(n_control)]
85 self._measures = [MeasureChannel(idx, buffer) for idx in range(n_qubits)]
86 self._acquires = [AcquireChannel(idx, buffer) for idx in range(n_qubits)]
87 self._mem_slots = [MemorySlot(idx) for idx in range(n_qubits)]
88 self._reg_slots = [RegisterSlot(idx) for idx in range(n_registers)]
89
90 # create mapping information from channels
91 self._topology = SystemTopology(self._drives, self.controls,
92 self.measures, self.acquires)
93
94 @classmethod
95 def from_backend(cls, backend):
96 """
97 Create pulse channel specification with values from backend.
98
99 Args:
100 backend (BaseBackend): Backend configuration.
101
102 Returns:
103 PulseChannelSpec: New PulseChannelSpec configured from the backend.
104
105 Raises:
106 PulseError: When OpenPulse is not supported.
107 """
108 configuration = backend.configuration()
109 defaults = backend.defaults()
110
111 if not configuration.open_pulse:
112 raise PulseError(configuration.backend_name + ' does not support OpenPulse.')
113
114 # TODO: allow for drives/measures which are not identical to the number of qubits
115 n_qubits = configuration.n_qubits
116 n_controls = configuration.n_uchannels
117 n_registers = configuration.n_registers
118 buffer = defaults.buffer
119
120 return PulseChannelSpec(n_qubits=n_qubits, n_control=n_controls,
121 n_registers=n_registers, buffer=buffer)
122
123 @property
124 def drives(self) -> List[DriveChannel]:
125 """Return system's drive channels."""
126 return self._drives
127
128 @property
129 def controls(self):
130 """Return system's control channels."""
131 return self._controls
132
133 @property
134 def measures(self):
135 """Return system's measure channels."""
136 return self._measures
137
138 @property
139 def acquires(self):
140 """Return system's acquire channels."""
141 return self._acquires
142
143 @property
144 def qubits(self) -> List[Qubit]:
145 """Return system's qubits."""
146 return self._topology.qubits
147
148 @property
149 def registers(self) -> List[RegisterSlot]:
150 """Return system's register slots."""
151 return self._reg_slots
152
153 @property
154 def memoryslots(self) -> List[MemorySlot]:
155 """Return system's memory slots."""
156 return self._mem_slots
157
[end of qiskit/pulse/channels/pulse_channel_spec.py]
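To make the class docstring concrete, here is a short added sketch of the second initialisation route and of the two equivalent ways of reaching a drive channel; the concrete channel counts are taken from the docstring example, and the identity assertion simply restates the docstring's claim that both expressions refer to the same object.

```python
from qiskit.pulse import PulseChannelSpec

# Direct construction, as in the class docstring.
system = PulseChannelSpec(n_qubits=5, n_control=6, n_registers=1, buffer=10)

# Per the docstring, both expressions refer to the same DriveChannel object.
assert system.drives[0] is system.qubits[0].drive

measure_q1 = system.measures[1]     # MeasureChannel for qubit 1
mem_slot_0 = system.memoryslots[0]  # MemorySlot 0
reg_slot_0 = system.registers[0]    # RegisterSlot 0
```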
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
98141a02e13b219ee80463ab6b5c94516681304d
|
Specifying schedule_los should give more informative error when backend not present
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master
- **Python version**: 3.6.8
- **Operating system**: Ubuntu 18.04
### What is the current behavior?
When assembling a schedule with `assemble` and specifying `schedule_los` without a backend (or when `qubit_lo_freq/range` / `meas_lo_freq/range` are missing), an `IndexError` is raised.
### Steps to reproduce the problem
```
import numpy as np
from qiskit.pulse.channels import (DeviceSpecification, Qubit, RegisterSlot, MemorySlot,
                                   DriveChannel, AcquireChannel, ControlChannel, MeasureChannel)
from qiskit.pulse.commands import functional_pulse
from qiskit.pulse import LoConfig
from qiskit.pulse.schedule import Schedule
from qiskit.compiler import assemble

@functional_pulse
def linear(duration, slope, intercept):
    x = np.linspace(0, duration - 1, duration)
    return slope * x + intercept

qubits = [Qubit(0, DriveChannel(0), AcquireChannel(0), MeasureChannel(0),
                control_channels=[ControlChannel(0)]),
          Qubit(1, DriveChannel(1), MeasureChannel(0), AcquireChannel(1))]
registers = [RegisterSlot(i) for i in range(2)]
mem_slots = [MemorySlot(i) for i in range(2)]
two_qubit_device = DeviceSpecification(qubits, registers, mem_slots)
device = two_qubit_device

lp0 = linear(duration=3, slope=0.2, intercept=0.1)

sched = Schedule()
sched = sched.append(lp0(device.q[0].drive))

assemble(
    sched,
    schedule_los=[LoConfig(channel_los={
        device.q[0].drive: 2*np.pi * 5.4,
        device.q[1].drive: 2*np.pi * 5.5
    })]
)
```
Raises `IndexError: list assignment index out of range`
### What is the expected behavior?
A more informative error is raised.
### Suggested solutions
The error raised should be more informative than the current error.
|
2019-07-20T19:42:31Z
|
<patch>
diff --git a/qiskit/assembler/assemble_schedules.py b/qiskit/assembler/assemble_schedules.py
--- a/qiskit/assembler/assemble_schedules.py
+++ b/qiskit/assembler/assemble_schedules.py
@@ -39,16 +39,27 @@ def assemble_schedules(schedules, qobj_id, qobj_header, run_config):
instruction_converter = InstructionToQobjConverter
qobj_config = run_config.to_dict()
- qubit_lo_range = qobj_config.pop('qubit_lo_range')
- meas_lo_range = qobj_config.pop('meas_lo_range')
+
+ qubit_lo_freq = qobj_config.get('qubit_lo_freq', None)
+ if qubit_lo_freq is None:
+ raise QiskitError('qubit_lo_freq must be supplied.')
+
+ meas_lo_freq = qobj_config.get('meas_lo_freq', None)
+ if meas_lo_freq is None:
+ raise QiskitError('meas_lo_freq must be supplied.')
+
+ qubit_lo_range = qobj_config.pop('qubit_lo_range', None)
+ meas_lo_range = qobj_config.pop('meas_lo_range', None)
meas_map = qobj_config.pop('meas_map', None)
max_memory_slot = 0
instruction_converter = instruction_converter(PulseQobjInstruction, **qobj_config)
- lo_converter = LoConfigConverter(PulseQobjExperimentConfig, qubit_lo_range=qubit_lo_range,
- meas_lo_range=meas_lo_range, **qobj_config)
+ lo_converter = LoConfigConverter(PulseQobjExperimentConfig,
+ qubit_lo_range=qubit_lo_range,
+ meas_lo_range=meas_lo_range,
+ **qobj_config)
# Pack everything into the Qobj
qobj_schedules = []
diff --git a/qiskit/compiler/assemble.py b/qiskit/compiler/assemble.py
--- a/qiskit/compiler/assemble.py
+++ b/qiskit/compiler/assemble.py
@@ -75,16 +75,20 @@ def assemble(experiments,
Random seed to control sampling, for when backend is a simulator
qubit_lo_freq (list):
- List of default qubit lo frequencies
+ List of default qubit lo frequencies. Will be overridden by
+ `schedule_los` if set.
meas_lo_freq (list):
- List of default meas lo frequencies
+ List of default meas lo frequencies. Will be overridden by
+ `schedule_los` if set.
qubit_lo_range (list):
- List of drive lo ranges
+ List of drive lo ranges used to validate that the supplied qubit los
+ are valid.
meas_lo_range (list):
- List of meas lo ranges
+ List of meas lo ranges used to validate that the supplied measurement los
+ are valid.
schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or \
Union[Dict[PulseChannel, float], LoConfig]):
@@ -242,11 +246,11 @@ def _parse_pulse_args(backend, qubit_lo_freq, meas_lo_freq, qubit_lo_range,
schedule_los = [lo_config if isinstance(lo_config, LoConfig) else LoConfig(lo_config)
for lo_config in schedule_los]
- qubit_lo_freq = qubit_lo_freq or getattr(backend_default, 'qubit_freq_est', [])
- meas_lo_freq = meas_lo_freq or getattr(backend_default, 'meas_freq_est', [])
+ qubit_lo_freq = qubit_lo_freq or getattr(backend_default, 'qubit_freq_est', None)
+ meas_lo_freq = meas_lo_freq or getattr(backend_default, 'meas_freq_est', None)
- qubit_lo_range = qubit_lo_range or getattr(backend_config, 'qubit_lo_range', [])
- meas_lo_range = meas_lo_range or getattr(backend_config, 'meas_lo_range', [])
+ qubit_lo_range = qubit_lo_range or getattr(backend_config, 'qubit_lo_range', None)
+ meas_lo_range = meas_lo_range or getattr(backend_config, 'meas_lo_range', None)
rep_time = rep_time or getattr(backend_config, 'rep_times', None)
if isinstance(rep_time, list):
diff --git a/qiskit/qobj/converters/lo_config.py b/qiskit/qobj/converters/lo_config.py
--- a/qiskit/qobj/converters/lo_config.py
+++ b/qiskit/qobj/converters/lo_config.py
@@ -25,7 +25,7 @@ class LoConfigConverter:
"""
def __init__(self, qobj_model, qubit_lo_freq, meas_lo_freq,
- qubit_lo_range, meas_lo_range, **run_config):
+ qubit_lo_range=None, meas_lo_range=None, **run_config):
"""Create new converter.
Args:
@@ -43,10 +43,13 @@ def __init__(self, qobj_model, qubit_lo_freq, meas_lo_freq,
self.default_lo_config = LoConfig()
- for i, lo_range in enumerate(qubit_lo_range):
- self.default_lo_config.add_lo_range(DriveChannel(i), lo_range)
- for i, lo_range in enumerate(meas_lo_range):
- self.default_lo_config.add_lo_range(MeasureChannel(i), lo_range)
+ if qubit_lo_range:
+ for i, lo_range in enumerate(qubit_lo_range):
+ self.default_lo_config.add_lo_range(DriveChannel(i), lo_range)
+
+ if meas_lo_range:
+ for i, lo_range in enumerate(meas_lo_range):
+ self.default_lo_config.add_lo_range(MeasureChannel(i), lo_range)
def __call__(self, user_lo_config):
"""Return PulseQobjExperimentConfig
</patch>
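For orientation only: with the patch above applied, assembling a pulse schedule with `schedule_los` but no backend (and no explicit LO frequencies) should fail fast with a readable `QiskitError` such as 'qubit_lo_freq must be supplied.' rather than the original `IndexError`. The snippet below is an added, hedged sketch of that expected behaviour, not part of the patch; the exact message depends on which missing default is detected first.

```python
import numpy as np

from qiskit.compiler import assemble
from qiskit.exceptions import QiskitError
from qiskit.pulse import LoConfig, Schedule
from qiskit.pulse.channels import DriveChannel

sched = Schedule()

try:
    assemble(sched,
             schedule_los=[LoConfig(channel_los={DriveChannel(0): 2 * np.pi * 5.4})])
except QiskitError as err:
    # Expected after the patch: an explanatory message (e.g. that qubit_lo_freq
    # must be supplied) instead of 'list assignment index out of range'.
    print(err)
```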
|
[]
|
[]
| ||||
wagtail__wagtail-9090
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement target width, resize rules, and overlap fallback for side panels
In https://github.com/wagtail/wagtail/blob/main/client/scss/components/forms/_form-width.scss, we define how much of the forms’ width should be dedicated to the form itself. This has fallen out of date with our [design system’s grid setup](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=5172%3A30739), and needs to be updated.
We also need to update the width of the side panels, so they use the correct value across different breakpoints.
For the time being, this should be done while retaining:
- Existing breakpoints in Wagtail as-is
- Existing mobile / desktop padding as-is (20px / 80px)
</issue>
<code>
[start of README.md]
1 <h1 align="center">
2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail">
3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail">
4 </h1>
5 <p align="center">
6 <br>
7 <a href="https://github.com/wagtail/wagtail/actions">
8 <img src="https://github.com/wagtail/wagtail/workflows/Wagtail%20CI/badge.svg" alt="Build Status" />
9 </a>
10 <a href="https://opensource.org/licenses/BSD-3-Clause">
11 <img src="https://img.shields.io/badge/license-BSD-blue.svg" alt="License" />
12 </a>
13 <a href="https://pypi.python.org/pypi/wagtail/">
14 <img src="https://img.shields.io/pypi/v/wagtail.svg" alt="Version" />
15 </a>
16 <a href="https://lgtm.com/projects/g/wagtail/wagtail/alerts/">
17 <img src="https://img.shields.io/lgtm/alerts/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Total alerts" />
18 </a>
19 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:python">
20 <img src="https://img.shields.io/lgtm/grade/python/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: Python" />
21 </a>
22 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:javascript">
23 <img src="https://img.shields.io/lgtm/grade/javascript/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: JavaScript" />
24 </a>
25 <a href="https://pypi.python.org/pypi/wagtail/">
26 <img src="https://img.shields.io/pypi/dm/wagtail?logo=Downloads" alt="Monthly downloads" />
27 </a>
28 <a href="https://twitter.com/WagtailCMS">
29 <img src="https://img.shields.io/twitter/follow/WagtailCMS?style=social&logo=twitter" alt="follow on Twitter">
30 </a>
31 </p>
32
33 Wagtail is an open source content management system built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers.
34
35 
36
37 ### 🔥 Features
38
39 - A fast, attractive interface for authors
40 - Complete control over front-end design and structure
41 - Scales to millions of pages and thousands of editors
42 - Fast out of the box, cache-friendly when you need it
43 - Content API for 'headless' sites with de-coupled front-end
44 - Runs on a Raspberry Pi or a multi-datacenter cloud platform
45 - StreamField encourages flexible content without compromising structure
46 - Powerful, integrated search, using Elasticsearch or PostgreSQL
47 - Excellent support for images and embedded content
48 - Multi-site and multi-language ready
49 - Embraces and extends Django
50
51 Find out more at [wagtail.org](https://wagtail.org/).
52
53 ### 👉 Getting started
54
55 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform.
56
57 To get started with using Wagtail, run the following in a virtual environment:
58
59 
60
61 ```sh
62 pip install wagtail
63 wagtail start mysite
64 cd mysite
65 pip install -r requirements.txt
66 python manage.py migrate
67 python manage.py createsuperuser
68 python manage.py runserver
69 ```
70
71 For detailed installation and setup docs, see [the getting started tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html).
72
73 ### 👨👩👧👦 Who’s using it?
74
75 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org).
76
77 ### 📖 Documentation
78
79 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap.
80
81 For those who are **new to Wagtail**, the [Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html) will help you understand what Wagtail is, and what Wagtail is _not_.
82
83 **For developers** who are ready to jump in to their first Wagtail website the [Getting Started Tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) will guide you through creating and editing your first page.
84
85 **Do you have an existing Django project?** The [Wagtail Integration documentation](https://docs.wagtail.org/en/stable/getting_started/integrating_into_django.html) is the best place to start.
86
87 ### 📌 Compatibility
88
89 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_
90
91 Wagtail supports:
92
93 - Django 3.2.x, 4.0.x and 4.1.x
94 - Python 3.7, 3.8, 3.9 and 3.10
95 - PostgreSQL, MySQL and SQLite (with JSON1) as database backends
96
97 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions.
98
99 ---
100
101 ### 📢 Community Support
102
103 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail".
104
105 For topics and discussions that do not fit Stack Overflow's question and answer format we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). Please respect the time and effort of volunteers by not asking the same question in multiple places.
106
107 [](https://github.com/wagtail/wagtail/wiki/Slack)
108
109 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project.
110
111 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail).
112
113 ### 🧑💼 Commercial Support
114
115 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world.
116
117 ### 🔐 Security
118
119 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue.
120
121 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers.
122
123 ### 🕒 Release schedule
124
125 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule).
126
127 #### 🕛 Nightly releases
128
129 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html
130
131 ### 🙋🏽 Contributing
132
133 If you're a Python or Django developer, fork the repo and get stuck in! We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack).
134
135 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label.
136
137 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://explore.transifex.com/torchbox/wagtail/).
138
139 ### 🔓 License
140
141 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE) - Free to use and modify for any purpose, including both open and closed-source code.
142
143 ### 👏 Thanks
144
145 We thank the following organisations for their services used in Wagtail's development:
146
147 [](https://www.browserstack.com/)<br>
148 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing.
149
150 [](https://www.squash.io/)<br>
151 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests.
152
153 [](https://assistivlabs.com/)<br>
154 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies.
155
[end of README.md]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Wagtail documentation build configuration file, created by
4 # sphinx-quickstart on Tue Jan 14 17:38:55 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import os
16 import sys
17 from datetime import datetime
18
19 import django
20 import sphinx_wagtail_theme
21
22 from wagtail import VERSION, __version__
23
24 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
25 on_rtd = os.environ.get("READTHEDOCS", None) == "True"
26
27 html_theme = "sphinx_wagtail_theme"
28 html_theme_path = [sphinx_wagtail_theme.get_html_theme_path()]
29
30 html_theme_options = {
31 "project_name": "Wagtail Documentation",
32 "github_url": "https://github.com/wagtail/wagtail/blob/main/docs/",
33 }
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath(".."))
39
40 # Autodoc may need to import some models modules which require django settings
41 # be configured
42 os.environ["DJANGO_SETTINGS_MODULE"] = "wagtail.test.settings"
43 django.setup()
44
45 # Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD
46 os.environ["DATABASE_ENGINE"] = "django.db.backends.sqlite3"
47
48 # -- General configuration ------------------------------------------------
49
50 # If your documentation needs a minimal Sphinx version, state it here.
51 # needs_sphinx = '1.0'
52
53 # Add any Sphinx extension module names here, as strings. They can be
54 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
55 # ones.
56 extensions = [
57 "sphinx.ext.autodoc",
58 "sphinx.ext.intersphinx",
59 "sphinx_copybutton",
60 "myst_parser",
61 "sphinx_wagtail_theme",
62 ]
63
64 if not on_rtd:
65 extensions.append("sphinxcontrib.spelling")
66
67 # Add any paths that contain templates here, relative to this directory.
68 templates_path = ["_templates"]
69
70 # The suffix of source filenames.
71 source_suffix = ".rst"
72
73 # The encoding of source files.
74 # source_encoding = 'utf-8-sig'
75
76 # The master toctree document.
77 master_doc = "index"
78
79 # General information about the project.
80 project = "Wagtail Documentation"
81 copyright = f"{datetime.now().year}, Torchbox and contributors"
82
83 # The version info for the project you're documenting, acts as replacement for
84 # |version| and |release|, also used in various other places throughout the
85 # built documents.
86
87 # The short X.Y version.
88 version = "{}.{}".format(VERSION[0], VERSION[1])
89 # The full version, including alpha/beta/rc tags.
90 release = __version__
91
92 # The language for content autogenerated by Sphinx. Refer to documentation
93 # for a list of supported languages.
94 # language = None
95
96 # There are two options for replacing |today|: either, you set today to some
97 # non-false value, then it is used:
98 # today = ''
99 # Else, today_fmt is used as the format for a strftime call.
100 # today_fmt = '%B %d, %Y'
101
102 # List of patterns, relative to source directory, that match files and
103 # directories to ignore when looking for source files.
104 exclude_patterns = ["_build", "README.md"]
105
106 # The reST default role (used for this markup: `text`) to use for all
107 # documents.
108 # default_role = None
109
110 # If true, '()' will be appended to :func: etc. cross-reference text.
111 # add_function_parentheses = True
112
113 # If true, the current module name will be prepended to all description
114 # unit titles (such as .. function::).
115 # add_module_names = True
116
117 # If true, sectionauthor and moduleauthor directives will be shown in the
118 # output. They are ignored by default.
119 # show_authors = False
120
121 # The name of the Pygments (syntax highlighting) style to use.
122 pygments_style = None
123
124 # A list of ignored prefixes for module index sorting.
125 # modindex_common_prefix = []
126
127 # If true, keep warnings as "system message" paragraphs in the built documents.
128 # keep_warnings = False
129
130 # sphinxcontrib.spelling settings
131
132 spelling_lang = "en_GB"
133 spelling_word_list_filename = "spelling_wordlist.txt"
134
135 # sphinx.ext.intersphinx settings
136 intersphinx_mapping = {
137 "django": (
138 "https://docs.djangoproject.com/en/stable/",
139 "https://docs.djangoproject.com/en/stable/_objects/",
140 )
141 }
142
143 # -- Options for HTML output ----------------------------------------------
144
145 # Theme options are theme-specific and customise the look and feel of a theme
146 # further. For a list of options available for each theme, see the
147 # documentation.
148 # html_theme_options = {}
149
150 # The name for this set of Sphinx documents. If None, it defaults to
151 # "<project> v<release> documentation".
152 # html_title = None
153
154 # A shorter title for the navigation bar. Default is the same as html_title.
155 # html_short_title = None
156
157 # The name of an image file (relative to this directory) to place at the top
158 # of the sidebar.
159 # html_logo = 'logo.png'
160
161 # The name of an image file (within the static path) to use as favicon of the
162 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
163 # pixels large.
164 html_favicon = "favicon.ico"
165
166 # Add any paths that contain custom static files (such as style sheets) here,
167 # relative to this directory. They are copied after the builtin static files,
168 # so a file named "default.css" will overwrite the builtin "default.css".
169 html_static_path = ["_static"]
170
171 # Add any extra paths that contain custom files (such as robots.txt or
172 # .htaccess) here, relative to this directory. These files are copied
173 # directly to the root of the documentation.
174 html_extra_path = ["public"]
175
176 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
177 # using the given strftime format.
178 # html_last_updated_fmt = '%b %d, %Y'
179
180 # If true, SmartyPants will be used to convert quotes and dashes to
181 # typographically correct entities.
182 # html_use_smartypants = True
183
184 # Custom sidebar templates, maps document names to template names.
185 # html_sidebars = {}
186
187 # Additional templates that should be rendered to pages, maps page names to
188 # template names.
189 # html_additional_pages = {}
190
191 # If false, no module index is generated.
192 # html_domain_indices = True
193
194 # If false, no index is generated.
195 # Since we are implementing search with Algolia DocSearch, we do not need Sphinx to
196 # generate its own index. Keeping the Sphinx index would not hurt, but disabling it
197 # could potentially speed up the build process.
198 html_use_index = False
199
200 # If true, the index is split into individual pages for each letter.
201 # html_split_index = False
202
203 # If true, links to the reST sources are added to the pages.
204 # html_show_sourcelink = True
205
206 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
207 # html_show_sphinx = True
208
209 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
210 # html_show_copyright = True
211
212 # If true, an OpenSearch description file will be output, and all pages will
213 # contain a <link> tag referring to it. The value of this option must be the
214 # base URL from which the finished HTML is served.
215 # html_use_opensearch = ''
216
217 # This is the file name suffix for HTML files (for example ".xhtml").
218 # html_file_suffix = None
219
220 # Output file base name for HTML help builder.
221 htmlhelp_basename = "Wagtaildoc"
222
223 # -- Options for LaTeX output ---------------------------------------------
224
225 latex_elements = {
226 # The paper size ('letterpaper' or 'a4paper').
227 # 'papersize': 'letterpaper',
228 # The font size ('10pt', '11pt' or '12pt').
229 # 'pointsize': '10pt',
230 # Additional stuff for the LaTeX preamble.
231 # 'preamble': '',
232 }
233
234 # Grouping the document tree into LaTeX files. List of tuples
235 # (source start file, target name, title,
236 # author, documentclass [howto, manual, or own class]).
237 latex_documents = [
238 ("index", "Wagtail.tex", "Wagtail Documentation", "Torchbox", "manual"),
239 ]
240
241 # The name of an image file (relative to this directory) to place at the top of
242 # the title page.
243 # latex_logo = None
244
245 # For "manual" documents, if this is true, then toplevel headings are parts,
246 # not chapters.
247 # latex_use_parts = False
248
249 # If true, show page references after internal links.
250 # latex_show_pagerefs = False
251
252 # If true, show URL addresses after external links.
253 # latex_show_urls = False
254
255 # Documents to append as an appendix to all manuals.
256 # latex_appendices = []
257
258 # If false, no module index is generated.
259 # latex_domain_indices = True
260
261 # -- Options for manual page output ---------------------------------------
262
263 # One entry per manual page. List of tuples
264 # (source start file, name, description, authors, manual section).
265 man_pages = [("index", "wagtail", "Wagtail Documentation", ["Torchbox"], 1)]
266
267 # If true, show URL addresses after external links.
268 # man_show_urls = False
269
270 # -- Options for Texinfo output -------------------------------------------
271
272 # Grouping the document tree into Texinfo files. List of tuples
273 # (source start file, target name, title, author,
274 # dir menu entry, description, category)
275 texinfo_documents = [
276 (
277 "index",
278 "Wagtail",
279 "Wagtail Documentation",
280 "Torchbox",
281 "Wagtail",
282 "One line description of project.",
283 "Miscellaneous",
284 ),
285 ]
286
287 # Documents to append as an appendix to all manuals.
288 # texinfo_appendices = []
289
290 # If false, no module index is generated.
291 # texinfo_domain_indices = True
292
293 # How to display URL addresses: 'footnote', 'no', or 'inline'.
294 # texinfo_show_urls = 'footnote'
295
296 # If true, do not generate a @detailmenu in the "Top" node's menu.
297 # texinfo_no_detailmenu = False
298
299
300 def setup(app):
301 app.add_js_file("js/banner.js")
302
[end of docs/conf.py]
[start of wagtail/blocks/base.py]
1 import collections
2 import json
3 import re
4 from functools import lru_cache
5 from importlib import import_module
6
7 from django import forms
8 from django.core import checks
9 from django.core.exceptions import ImproperlyConfigured
10 from django.template.loader import render_to_string
11 from django.utils.encoding import force_str
12 from django.utils.functional import cached_property
13 from django.utils.html import format_html
14 from django.utils.safestring import mark_safe
15 from django.utils.text import capfirst
16
17 from wagtail.admin.staticfiles import versioned_static
18 from wagtail.telepath import JSContext
19
20 __all__ = [
21 "BaseBlock",
22 "Block",
23 "BoundBlock",
24 "DeclarativeSubBlocksMetaclass",
25 "BlockWidget",
26 "BlockField",
27 ]
28
29
30 # =========================================
31 # Top-level superclasses and helper objects
32 # =========================================
33
34
35 class BaseBlock(type):
36 def __new__(mcs, name, bases, attrs):
37 meta_class = attrs.pop("Meta", None)
38
39 cls = super(BaseBlock, mcs).__new__(mcs, name, bases, attrs)
40
41 # Get all the Meta classes from all the bases
42 meta_class_bases = [meta_class] + [
43 getattr(base, "_meta_class", None) for base in bases
44 ]
45 meta_class_bases = tuple(filter(bool, meta_class_bases))
46 cls._meta_class = type(str(name + "Meta"), meta_class_bases, {})
47
48 return cls
49
50
51 class Block(metaclass=BaseBlock):
52 name = ""
53 creation_counter = 0
54
55 TEMPLATE_VAR = "value"
56
57 class Meta:
58 label = None
59 icon = "placeholder"
60 classname = None
61 group = ""
62
63 # Attributes of Meta which can legally be modified after the block has been instantiated.
64 # Used to implement __eq__. label is not included here, despite it technically being mutable via
65 # set_name, since its value must originate from either the constructor arguments or set_name,
66 # both of which are captured by the equality test, so checking label as well would be redundant.
67 MUTABLE_META_ATTRIBUTES = []
68
69 def __new__(cls, *args, **kwargs):
70 # adapted from django.utils.deconstruct.deconstructible; capture the arguments
71 # so that we can return them in the 'deconstruct' method
72 obj = super(Block, cls).__new__(cls)
73 obj._constructor_args = (args, kwargs)
74 return obj
75
76 def __init__(self, **kwargs):
77 if "classname" in self._constructor_args[1]:
78 # Adding this so that migrations are not triggered
79 # when form_classname is used instead of classname
80 # in the initialisation of the FieldBlock
81 classname = self._constructor_args[1].pop("classname")
82 self._constructor_args[1].setdefault("form_classname", classname)
83
84 self.meta = self._meta_class()
85
86 for attr, value in kwargs.items():
87 setattr(self.meta, attr, value)
88
89 # Increase the creation counter, and save our local copy.
90 self.creation_counter = Block.creation_counter
91 Block.creation_counter += 1
92 self.definition_prefix = "blockdef-%d" % self.creation_counter
93
94 self.label = self.meta.label or ""
95
96 def set_name(self, name):
97 self.name = name
98 if not self.meta.label:
99 self.label = capfirst(force_str(name).replace("_", " "))
100
101 def set_meta_options(self, opts):
102 """
103 Update this block's meta options (out of the ones designated as mutable) from the given dict.
104 Used by the StreamField constructor to pass on kwargs that are to be handled by the block,
105 since the block object has already been created by that point, e.g.:
106 body = StreamField(SomeStreamBlock(), max_num=5)
107 """
108 for attr, value in opts.items():
109 if attr in self.MUTABLE_META_ATTRIBUTES:
110 setattr(self.meta, attr, value)
111 else:
112 raise TypeError(
113 "set_meta_options received unexpected option: %r" % attr
114 )
115
116 def value_from_datadict(self, data, files, prefix):
117 raise NotImplementedError("%s.value_from_datadict" % self.__class__)
118
119 def value_omitted_from_data(self, data, files, name):
120 """
121 Used only for top-level blocks wrapped by BlockWidget (i.e.: typically only StreamBlock)
122 to inform ModelForm logic on Django >=1.10.2 whether the field is absent from the form
123 submission (and should therefore revert to the field default).
124 """
125 return name not in data
126
127 def bind(self, value, prefix=None, errors=None):
128 """
129 Return a BoundBlock which represents the association of this block definition with a value
130 and a prefix (and optionally, a ValidationError to be rendered).
131 BoundBlock primarily exists as a convenience to allow rendering within templates:
132 bound_block.render() rather than blockdef.render(value, prefix) which can't be called from
133 within a template.
134 """
135 return BoundBlock(self, value, prefix=prefix, errors=errors)
136
137 def get_default(self):
138 """
139 Return this block's default value (conventionally found in self.meta.default),
140 converted to the value type expected by this block. This caters for the case
141 where that value type is not something that can be expressed statically at
142 model definition time (e.g. something like StructValue which incorporates a
143 pointer back to the block definition object).
144 """
145 return self.meta.default
146
147 def clean(self, value):
148 """
149 Validate value and return a cleaned version of it, or throw a ValidationError if validation fails.
150 The thrown ValidationError instance will subsequently be passed to render() to display the
151 error message; the ValidationError must therefore include all detail necessary to perform that
152 rendering, such as identifying the specific child block(s) with errors, in the case of nested
153 blocks. (It is suggested that you use the 'params' attribute for this; using error_list /
154 error_dict is unreliable because Django tends to hack around with these when nested.)
155 """
156 return value
157
158 def to_python(self, value):
159 """
160 Convert 'value' from a simple (JSON-serialisable) value to a (possibly complex) Python value to be
161 used in the rest of the block API and within front-end templates. In simple cases this might be
162 the value itself; alternatively, it might be a 'smart' version of the value which behaves mostly
163 like the original value but provides a native HTML rendering when inserted into a template; or it
164 might be something totally different (e.g. an image chooser will use the image ID as the clean
165 value, and turn this back into an actual image object here).
166 """
167 return value
168
169 def bulk_to_python(self, values):
170 """
171 Apply the to_python conversion to a list of values. The default implementation simply
172 iterates over the list; subclasses may optimise this, e.g. by combining database lookups
173 into a single query.
174 """
175 return [self.to_python(value) for value in values]
176
177 def get_prep_value(self, value):
178 """
179 The reverse of to_python; convert the python value into JSON-serialisable form.
180 """
181 return value
182
183 def get_form_state(self, value):
184 """
185 Convert a python value for this block into a JSON-serialisable representation containing
186 all the data needed to present the value in a form field, to be received by the block's
187 client-side component. Examples of where this conversion is not trivial include rich text
188 (where it needs to be supplied in a format that the editor can process, e.g. ContentState
189 for Draftail) and page / image / document choosers (where it needs to include all displayed
190 data for the selected item, such as title or thumbnail).
191 """
192 return value
193
194 def get_context(self, value, parent_context=None):
195 """
196 Return a dict of context variables (derived from the block value and combined with the parent_context)
197 to be used as the template context when rendering this value through a template.
198 """
199
200 context = parent_context or {}
201 context.update(
202 {
203 "self": value,
204 self.TEMPLATE_VAR: value,
205 }
206 )
207 return context
208
209 def get_template(self, context=None):
210 """
211 Return the template to use for rendering the block if specified on meta class.
212 This extraction was added to make dynamic templates possible if you override this method
213 """
214 return getattr(self.meta, "template", None)
215
216 def render(self, value, context=None):
217 """
218 Return a text rendering of 'value', suitable for display on templates. By default, this will
219 use a template (with the passed context, supplemented by the result of get_context) if a
220 'template' property is specified on the block, and fall back on render_basic otherwise.
221 """
222 template = self.get_template(context=context)
223 if not template:
224 return self.render_basic(value, context=context)
225
226 if context is None:
227 new_context = self.get_context(value)
228 else:
229 new_context = self.get_context(value, parent_context=dict(context))
230
231 return mark_safe(render_to_string(template, new_context))
232
233 def get_api_representation(self, value, context=None):
234 """
235 Can be used to customise the API response and defaults to the value returned by get_prep_value.
236 """
237 return self.get_prep_value(value)
238
239 def render_basic(self, value, context=None):
240 """
241 Return a text rendering of 'value', suitable for display on templates. render() will fall back on
242 this if the block does not define a 'template' property.
243 """
244 return force_str(value)
245
246 def get_searchable_content(self, value):
247 """
248 Returns a list of strings containing text content within this block to be used in a search engine.
249 """
250 return []
251
252 def extract_references(self, value):
253 return []
254
255 def check(self, **kwargs):
256 """
257 Hook for the Django system checks framework -
258 returns a list of django.core.checks.Error objects indicating validity errors in the block
259 """
260 return []
261
262 def _check_name(self, **kwargs):
263 """
264 Helper method called by container blocks as part of the system checks framework,
265 to validate that this block's name is a valid identifier.
266 (Not called universally, because not all blocks need names)
267 """
268 errors = []
269 if not self.name:
270 errors.append(
271 checks.Error(
272 "Block name %r is invalid" % self.name,
273 hint="Block name cannot be empty",
274 obj=kwargs.get("field", self),
275 id="wagtailcore.E001",
276 )
277 )
278
279 if " " in self.name:
280 errors.append(
281 checks.Error(
282 "Block name %r is invalid" % self.name,
283 hint="Block names cannot contain spaces",
284 obj=kwargs.get("field", self),
285 id="wagtailcore.E001",
286 )
287 )
288
289 if "-" in self.name:
290 errors.append(
291 checks.Error(
292 "Block name %r is invalid" % self.name,
293 "Block names cannot contain dashes",
294 obj=kwargs.get("field", self),
295 id="wagtailcore.E001",
296 )
297 )
298
299 if self.name and self.name[0].isdigit():
300 errors.append(
301 checks.Error(
302 "Block name %r is invalid" % self.name,
303 "Block names cannot begin with a digit",
304 obj=kwargs.get("field", self),
305 id="wagtailcore.E001",
306 )
307 )
308
309 if not errors and not re.match(r"^[_a-zA-Z][_a-zA-Z0-9]*$", self.name):
310 errors.append(
311 checks.Error(
312 "Block name %r is invalid" % self.name,
313 "Block names should follow standard Python conventions for "
314 "variable names: alphanumeric and underscores, and cannot "
315 "begin with a digit",
316 obj=kwargs.get("field", self),
317 id="wagtailcore.E001",
318 )
319 )
320
321 return errors
322
323 def id_for_label(self, prefix):
324 """
325 Return the ID to be used as the 'for' attribute of <label> elements that refer to this block,
326 when the given field prefix is in use. Return None if no 'for' attribute should be used.
327 """
328 return None
329
330 @property
331 def required(self):
332 """
333 Flag used to determine whether labels for this block should display a 'required' asterisk.
334 False by default, since Block does not provide any validation of its own - it's up to subclasses
335 to define what required-ness means.
336 """
337 return False
338
339 def deconstruct(self):
340 # adapted from django.utils.deconstruct.deconstructible
341 module_name = self.__module__
342 name = self.__class__.__name__
343
344 # Make sure it's actually there and not an inner class
345 module = import_module(module_name)
346 if not hasattr(module, name):
347 raise ValueError(
348 "Could not find object %s in %s.\n"
349 "Please note that you cannot serialize things like inner "
350 "classes. Please move the object into the main module "
351 "body to use migrations.\n" % (name, module_name)
352 )
353
354 # if the module defines a DECONSTRUCT_ALIASES dictionary, see if the class has an entry in there;
355 # if so, use that instead of the real path
356 try:
357 path = module.DECONSTRUCT_ALIASES[self.__class__]
358 except (AttributeError, KeyError):
359 path = "%s.%s" % (module_name, name)
360
361 return (
362 path,
363 self._constructor_args[0],
364 self._constructor_args[1],
365 )
366
367 def __eq__(self, other):
368 """
369 Implement equality on block objects so that two blocks with matching definitions are considered
370 equal. Block objects are intended to be immutable with the exception of set_name() and any meta
371 attributes identified in MUTABLE_META_ATTRIBUTES, so checking these along with the result of
372 deconstruct (which captures the constructor arguments) is sufficient to identify (valid) differences.
373
374 This was originally necessary as a workaround for https://code.djangoproject.com/ticket/24340
375 in Django <1.9; the deep_deconstruct function used to detect changes for migrations did not
376 recurse into the block lists, and left them as Block instances. This __eq__ method therefore
377 came into play when identifying changes within migrations.
378
379 As of Django >=1.9, this *probably* isn't required any more. However, it may be useful in
380 future as a way of identifying blocks that can be re-used within StreamField definitions
381 (https://github.com/wagtail/wagtail/issues/4298#issuecomment-367656028).
382 """
383
384 if not isinstance(other, Block):
385 # if the other object isn't a block at all, it clearly isn't equal.
386 return False
387
388 # Note that we do not require the two blocks to be of the exact same class. This is because
389 # we may wish the following blocks to be considered equal:
390 #
391 # class FooBlock(StructBlock):
392 # first_name = CharBlock()
393 # surname = CharBlock()
394 #
395 # class BarBlock(StructBlock):
396 # first_name = CharBlock()
397 # surname = CharBlock()
398 #
399 # FooBlock() == BarBlock() == StructBlock([('first_name', CharBlock()), ('surname', CharBlock())])
400 #
401 # For this to work, StructBlock will need to ensure that 'deconstruct' returns the same signature
402 # in all of these cases, including reporting StructBlock as the path:
403 #
404 # FooBlock().deconstruct() == (
405 # 'wagtail.blocks.StructBlock',
406 # [('first_name', CharBlock()), ('surname', CharBlock())],
407 # {}
408 # )
409 #
410 # This has the bonus side effect that the StructBlock field definition gets frozen into
411 # the migration, rather than leaving the migration vulnerable to future changes to FooBlock / BarBlock
412 # in models.py.
413
414 return (
415 self.name == other.name
416 and self.deconstruct() == other.deconstruct()
417 and all(
418 getattr(self.meta, attr, None) == getattr(other.meta, attr, None)
419 for attr in self.MUTABLE_META_ATTRIBUTES
420 )
421 )
422
423
424 class BoundBlock:
425 def __init__(self, block, value, prefix=None, errors=None):
426 self.block = block
427 self.value = value
428 self.prefix = prefix
429 self.errors = errors
430
431 def render(self, context=None):
432 return self.block.render(self.value, context=context)
433
434 def render_as_block(self, context=None):
435 """
436 Alias for render; the include_block tag will specifically check for the presence of a method
437 with this name. (This is because {% include_block %} is just as likely to be invoked on a bare
438 value as a BoundBlock. If we looked for a `render` method instead, we'd run the risk of finding
439 an unrelated method that just happened to have that name - for example, when called on a
440 PageChooserBlock it could end up calling page.render.)
441 """
442 return self.block.render(self.value, context=context)
443
444 def id_for_label(self):
445 return self.block.id_for_label(self.prefix)
446
447 def __str__(self):
448 """Render the value according to the block's native rendering"""
449 return self.block.render(self.value)
450
451 def __repr__(self):
452 return "<block %s: %r>" % (
453 self.block.name or type(self.block).__name__,
454 self.value,
455 )
456
457
458 class DeclarativeSubBlocksMetaclass(BaseBlock):
459 """
460 Metaclass that collects sub-blocks declared on the base classes.
461 (cheerfully stolen from https://github.com/django/django/blob/main/django/forms/forms.py)
462 """
463
464 def __new__(mcs, name, bases, attrs):
465 # Collect sub-blocks declared on the current class.
466 # These are available on the class as `declared_blocks`
467 current_blocks = []
468 for key, value in list(attrs.items()):
469 if isinstance(value, Block):
470 current_blocks.append((key, value))
471 value.set_name(key)
472 attrs.pop(key)
473 current_blocks.sort(key=lambda x: x[1].creation_counter)
474 attrs["declared_blocks"] = collections.OrderedDict(current_blocks)
475
476 new_class = super(DeclarativeSubBlocksMetaclass, mcs).__new__(
477 mcs, name, bases, attrs
478 )
479
480 # Walk through the MRO, collecting all inherited sub-blocks, to make
481 # the combined `base_blocks`.
482 base_blocks = collections.OrderedDict()
483 for base in reversed(new_class.__mro__):
484 # Collect sub-blocks from base class.
485 if hasattr(base, "declared_blocks"):
486 base_blocks.update(base.declared_blocks)
487
488 # Field shadowing.
489 for attr, value in base.__dict__.items():
490 if value is None and attr in base_blocks:
491 base_blocks.pop(attr)
492 new_class.base_blocks = base_blocks
493
494 return new_class
495
496
497 # ========================
498 # django.forms integration
499 # ========================
500
501
502 class BlockWidget(forms.Widget):
503 """Wraps a block object as a widget so that it can be incorporated into a Django form"""
504
505 def __init__(self, block_def, attrs=None):
506 super().__init__(attrs=attrs)
507 self.block_def = block_def
508 self._js_context = None
509
510 def _build_block_json(self):
511 self._js_context = JSContext()
512 self._block_json = json.dumps(self._js_context.pack(self.block_def))
513
514 @property
515 def js_context(self):
516 if self._js_context is None:
517 self._build_block_json()
518
519 return self._js_context
520
521 @property
522 def block_json(self):
523 if self._js_context is None:
524 self._build_block_json()
525
526 return self._block_json
527
528 def id_for_label(self, prefix):
529 # Delegate the job of choosing a label ID to the top-level block.
530 # (In practice, the top-level block will typically be a StreamBlock, which returns None.)
531 return self.block_def.id_for_label(prefix)
532
533 def render_with_errors(self, name, value, attrs=None, errors=None, renderer=None):
534 value_json = json.dumps(self.block_def.get_form_state(value))
535
536 if errors:
537 errors_json = json.dumps(self.js_context.pack(errors.as_data()))
538 else:
539 errors_json = "[]"
540
541 return format_html(
542 """
543 <div id="{id}" data-block="{block_json}" data-value="{value_json}" data-errors="{errors_json}"></div>
544 <script>
545 initBlockWidget('{id}');
546 </script>
547 """,
548 id=name,
549 block_json=self.block_json,
550 value_json=value_json,
551 errors_json=errors_json,
552 )
553
554 def render(self, name, value, attrs=None, renderer=None):
555 return self.render_with_errors(
556 name, value, attrs=attrs, errors=None, renderer=renderer
557 )
558
559 @cached_property
560 def media(self):
561 return self.js_context.media + forms.Media(
562 js=[
563 # needed for initBlockWidget, although these will almost certainly be
564 # pulled in by the block adapters too
565 versioned_static("wagtailadmin/js/telepath/telepath.js"),
566 versioned_static("wagtailadmin/js/telepath/blocks.js"),
567 ],
568 css={
569 "all": [
570 versioned_static("wagtailadmin/css/panels/streamfield.css"),
571 ]
572 },
573 )
574
575 def value_from_datadict(self, data, files, name):
576 return self.block_def.value_from_datadict(data, files, name)
577
578 def value_omitted_from_data(self, data, files, name):
579 return self.block_def.value_omitted_from_data(data, files, name)
580
581
582 class BlockField(forms.Field):
583 """Wraps a block object as a form field so that it can be incorporated into a Django form"""
584
585 def __init__(self, block=None, **kwargs):
586 if block is None:
587 raise ImproperlyConfigured("BlockField was not passed a 'block' object")
588 self.block = block
589
590 if "widget" not in kwargs:
591 kwargs["widget"] = BlockWidget(block)
592
593 super().__init__(**kwargs)
594
595 def clean(self, value):
596 return self.block.clean(value)
597
598 def has_changed(self, initial_value, data_value):
599 return self.block.get_prep_value(initial_value) != self.block.get_prep_value(
600 data_value
601 )
602
603
604 @lru_cache(maxsize=1)
605 def get_help_icon():
606 return render_to_string(
607 "wagtailadmin/shared/icon.html", {"name": "help", "class_name": "default"}
608 )
609
610
611 DECONSTRUCT_ALIASES = {
612 Block: "wagtail.blocks.Block",
613 }
614
[end of wagtail/blocks/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
fb2c7760a5dc4971db6119924d3ed3a45d7784d7
|
Implement target width, resize rules, and overlap fallback for side panels
In https://github.com/wagtail/wagtail/blob/main/client/scss/components/forms/_form-width.scss, we define how much of the forms’ width should be dedicated to the form itself. This has fallen out of date with our [design system’s grid setup](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=5172%3A30739), and needs to be updated.
We also need to update the width of the side panels, so it uses the correct value across different breakpoints.
For the time being, this should be done while retaining:
- Existing breakpoints in Wagtail as-is
- Existing mobile / desktop padding as-is (20px / 80px)
|
2022-08-26T00:14:53Z
|
<patch>
diff --git a/wagtail/admin/templatetags/wagtailadmin_tags.py b/wagtail/admin/templatetags/wagtailadmin_tags.py
--- a/wagtail/admin/templatetags/wagtailadmin_tags.py
+++ b/wagtail/admin/templatetags/wagtailadmin_tags.py
@@ -67,6 +67,7 @@ def breadcrumbs(
page_perms=None,
querystring_value=None,
trailing_breadcrumb_title=None,
+ classname=None,
):
user = context["request"].user
@@ -87,6 +88,7 @@ def breadcrumbs(
"trailing_breadcrumb_title": trailing_breadcrumb_title, # Only used in collapsible breadcrumb templates
"url_name": url_name,
"url_root_name": url_root_name,
+ "classname": classname,
}
</patch>
|
[]
|
[]
| ||||
Qiskit__qiskit-5385
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Qiskit transpile different basis gate set
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit version**:
{'qiskit-terra': '0.16.0',
'qiskit-aer': '0.7.0',
'qiskit-ignis': '0.5.0',
'qiskit-ibmq-provider': '0.11.0',
'qiskit-aqua': '0.8.0',
'qiskit': '0.23.0'}
- **Python version**: Python 3.8.1
- **Operating system**: macOS
### What is the current behavior?
Currently, the output of the qiskit transpile function for my circuit of random gates, with the basis gate set 'rx', 'ry' and 'cx', is this circuit:
<img width="1183" alt="Bildschirmfoto 2020-10-25 um 17 56 11" src="https://user-images.githubusercontent.com/60929371/97113540-60ace000-16eb-11eb-96c9-11c462b40142.png">
I'm not sure why the transpile function sometimes gives three single-qubit rotations (is it not possible to decompose every single-qubit rotation into one rx(-pi <= theta <= pi) and one ry(-pi <= theta <= pi) rotation?).
The second minor issue is that sometimes a rotation with an angle of 0 degrees is still left in the circuit and not removed (see image).
### Steps to reproduce the problem
The following code produces the circuit of the provided image.
```python
qc = QuantumCircuit(3)
qc.h(0)
qc.h(1)
qc.x(0)
qc.h(0)
qc.ry(np.pi/4, 0)
qc.rx(np.pi/10, 0)
qc.y(1)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.h(2)
qc.cx(2, 1)
qc.x(0)
qc.cx(0, 1)
qc.y(2)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.y(0)
qc.x(2)
qc.rx(np.pi/10, 0)
qc.ry(np.pi/4, 2)
qc.cx(0, 2)
qc.h(2)
qc.cx(2, 1)
qc.x(0)
qc.cx(0, 1)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.x(2)
qc.ry(np.pi/4, 2)
qc.ry(np.pi/4, 2)
qc.ry(np.pi/4, 2)
qc.cx(0, 2)
qc.h(2)
qc.cx(2, 1)
qc.measure_all()
coupling_string = [[0, 1], [1, 0], [2, 0], [0,2], [1,2], [2, 1]]
CM = CouplingMap(coupling_string)
basis_gates = ['id', 'ry', 'rx', 'cx']
transpiled_qc = transpile(qc, coupling_map=CM, basis_gates=basis_gates, optimization_level=3, seed_transpiler=1)
```
### What is the expected behavior?
Normally I would expect that the transpile function with the passed basis gate set would give me only two consecutively applied single-qubit rotations in my circuit, not three, and that rotations with an angle of 0 would be removed.
Am I wrong with my expectations?
### Suggested solutions
...
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> transpiled_qc = transpile(qc, backend_sim)
40 >>> result = backend_sim.run(assemble(transpiled_qc)).result()
41 >>> print(result.get_counts(qc))
42 ```
43
44 In this case, the output will be:
45
46 ```python
47 {'00': 513, '11': 511}
48 ```
49
50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
51 run the same program on a real quantum computer via IBMQ.
52
53 ### Executing your code on a real quantum chip
54
55 You can also use Qiskit to execute your code on a
56 **real quantum chip**.
57 In order to do so, you need to configure Qiskit for using the credentials in
58 your IBM Q account:
59
60 #### Configure your IBMQ credentials
61
62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
63
64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
65
66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
67
68 ```python
69 >>> from qiskit import IBMQ
70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
71 ```
72
73 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
74 Once they are stored, at any point in the future you can load and use them
75 in your program simply via:
76
77 ```python
78 >>> from qiskit import IBMQ
79 >>> IBMQ.load_account()
80 ```
81
82 Those who do not want to save their credentials to disk should use instead:
83
84 ```python
85 >>> from qiskit import IBMQ
86 >>> IBMQ.enable_account('MY_API_TOKEN')
87 ```
88
89 and the token will only be active for the session. For examples using Terra with real
90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
91 the levels.
92
93 ## Contribution Guidelines
94
95 If you'd like to contribute to Qiskit Terra, please take a look at our
96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
97
98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
102
103 ## Next Steps
104
105 Now you're set up and ready to check out some of the other examples from our
106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
107
108 ## Authors and Citation
109
110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
112
113 ## Changelog and Release Notes
114
115 The changelog for a particular release is dynamically generated and gets
116 written to the release page on Github for each release. For example, you can
117 find the page for the `0.9.0` release here:
118
119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
120
121 The changelog for the current release can be found in the releases tab:
122 
123 The changelog provides a quick overview of notable changes for a given
124 release.
125
126 Additionally, as part of each release detailed release notes are written to
127 document in detail what has changed as part of a release. This includes any
128 documentation on potential breaking changes on upgrade and new features.
129 For example, You can find the release notes for the `0.9.0` release in the
130 Qiskit documentation here:
131
132 https://qiskit.org/documentation/release_notes.html#terra-0-9
133
134 ## License
135
136 [Apache License 2.0](LICENSE.txt)
137
[end of README.md]
[start of qiskit/extensions/quantum_initializer/squ.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Decompose an arbitrary 2*2 unitary into three rotation gates: U=R_zR_yR_z.
14
15 Note that the decomposition is up to a global phase shift.
16 (This is a well known decomposition, which can be found for example in Nielsen and Chuang's book
17 "Quantum computation and quantum information".)
18 """
19
20 import cmath
21
22 import numpy as np
23
24 from qiskit.circuit import QuantumRegister, Qubit, QuantumCircuit
25 from qiskit.circuit.gate import Gate
26 from qiskit.circuit.exceptions import CircuitError
27 from qiskit.quantum_info.operators.predicates import is_unitary_matrix
28 from qiskit.exceptions import QiskitError
29 from qiskit.util import deprecate_arguments
30
31 _EPS = 1e-10 # global variable used to chop very small numbers to zero
32
33
34 class SingleQubitUnitary(Gate):
35 """
36 u = 2*2 unitary (given as a (complex) numpy.ndarray)
37
38 mode - determines the used decomposition by providing the rotation axes
39
40 up_to_diagonal - the single-qubit unitary is decomposed up to a diagonal matrix,
41 i.e. a unitary u' is implemented such that there exists a 2*2 diagonal
42 gate d with u = d.dot(u').
43 """
44
45 # pylint: disable=unused-argument, invalid-name
46 @deprecate_arguments({'u': 'unitary'})
47 def __init__(self, unitary_matrix, mode='ZYZ', up_to_diagonal=False, u=None):
48 """Create a new single qubit gate based on the unitary ``u``."""
49 if mode not in ['ZYZ']:
50 raise QiskitError("The decomposition mode is not known.")
51 # Check if the matrix u has the right dimensions and if it is a unitary
52 if unitary_matrix.shape != (2, 2):
53 raise QiskitError("The dimension of the input matrix is not equal to (2,2).")
54 if not is_unitary_matrix(unitary_matrix):
55 raise QiskitError("The 2*2 matrix is not unitary.")
56
57 self.mode = mode
58 self.up_to_diagonal = up_to_diagonal
59 self._diag = None
60
61 # Create new gate
62 super().__init__("unitary", 1, [unitary_matrix])
63
64 def inverse(self):
65 """Return the inverse.
66
67 Note that the resulting gate has an empty ``params`` property.
68 """
69 inverse_gate = Gate(name=self.name + '_dg',
70 num_qubits=self.num_qubits,
71 params=[]) # removing the params because arrays are deprecated
72
73 inverse_gate.definition = QuantumCircuit(*self.definition.qregs)
74 inverse_gate.definition._data = [(inst.inverse(), qargs, [])
75 for inst, qargs, _ in reversed(self._definition)]
76
77 return inverse_gate
78
79 @property
80 def diag(self):
81 """Returns the diagonal gate D up to which the single-qubit unitary u is implemented.
82
83 I.e. u=D.u', where u' is the unitary implemented by the found circuit.
84 """
85 if self._diag is None:
86 self._define()
87 return self._diag
88
89 def _define(self):
90 """Define the gate using the decomposition."""
91
92 if self.mode == 'ZYZ':
93 circuit, diag = self._zyz_circuit()
94 else:
95 raise QiskitError('The decomposition mode is not known.')
96
97 self._diag = diag
98
99 self.definition = circuit
100
101 def _zyz_circuit(self):
102 """Get the circuit for the ZYZ decomposition."""
103 q = QuantumRegister(self.num_qubits)
104 qc = QuantumCircuit(q, name=self.name)
105
106 diag = [1., 1.]
107 alpha, beta, gamma, _ = self._zyz_dec()
108
109 if abs(alpha) > _EPS:
110 qc.rz(alpha, q[0])
111 if abs(beta) > _EPS:
112 qc.ry(beta, q[0])
113 if abs(gamma) > _EPS:
114 if self.up_to_diagonal:
115 diag = [np.exp(-1j * gamma / 2.), np.exp(1j * gamma / 2.)]
116 else:
117 qc.rz(gamma, q[0])
118
119 return qc, diag
120
121 def _zyz_dec(self):
122 """Finds rotation angles (a,b,c,d) in the decomposition u=exp(id)*Rz(c).Ry(b).Rz(a).
123
124 Note that "." denotes matrix multiplication.
125 """
126 unitary = self.params[0]
127 u00 = unitary.item(0, 0)
128 u01 = unitary.item(0, 1)
129 u10 = unitary.item(1, 0)
130 u11 = unitary.item(1, 1)
131 # Handle special case if the entry (0,0) of the unitary is equal to zero
132 if np.abs(u00) < _EPS:
133 # Note that u10 can't be zero, since u is unitary (and u00 == 0)
134 gamma = cmath.phase(-u01 / u10)
135 delta = cmath.phase(u01 * np.exp(-1j * gamma / 2))
136 return 0., -np.pi, -gamma, delta
137 # Handle special case if the entry (0,1) of the unitary is equal to zero
138 if np.abs(u01) < _EPS:
139 # Note that u11 can't be zero, since u is unitary (and u01 == 0)
140 gamma = cmath.phase(u00 / u11)
141 delta = cmath.phase(u00 * np.exp(-1j * gamma / 2))
142 return 0., 0., -gamma, delta
143 beta = 2 * np.arccos(np.abs(u00))
144 if np.sin(beta / 2) - np.cos(beta / 2) > 0:
145 gamma = cmath.phase(-u00 / u10)
146 alpha = cmath.phase(u00 / u01)
147 else:
148 gamma = -cmath.phase(-u10 / u00)
149 alpha = -cmath.phase(u01 / u00)
150 delta = cmath.phase(u00 * np.exp(-1j * (alpha + gamma) / 2))
151 # The decomposition works with another convention for the rotation gates
152 # (the one using negative angles).
153 # Therefore, we have to take the inverse of the angles at the end.
154 return -alpha, -beta, -gamma, delta
155
156 def validate_parameter(self, parameter):
157 """Single-qubit unitary gate parameter has to be an ndarray."""
158 if isinstance(parameter, np.ndarray):
159 return parameter
160 else:
161 raise CircuitError("invalid param type {0} in gate "
162 "{1}".format(type(parameter), self.name))
163
164
165 # pylint: disable=unused-argument, invalid-name, missing-type-doc, missing-param-doc
166 @deprecate_arguments({'u': 'unitary'})
167 def squ(self, unitary_matrix, qubit, mode='ZYZ', up_to_diagonal=False, *, u=None):
168 """Decompose an arbitrary 2*2 unitary into three rotation gates.
169
170 Note that the decomposition is up to a global phase shift.
171 (This is a well known decomposition, which can be found for example in Nielsen and Chuang's book
172 "Quantum computation and quantum information".)
173
174 Args:
175 unitary_matrix (ndarray): 2*2 unitary (given as a (complex) ndarray).
176 qubit (QuantumRegister | Qubit): The qubit which the gate is acting on.
177 mode (string): determines the used decomposition by providing the rotation axes.
178 The allowed modes are: "ZYZ" (default)
179 up_to_diagonal (bool): if set to True, the single-qubit unitary is decomposed up to
180 a diagonal matrix, i.e. a unitary u' is implemented such that there exists a 2*2
181 diagonal gate d with u = d.dot(u')
182 u (ndarray): Deprecated, use ``unitary_matrix`` instead.
183
184 Returns:
185 InstructionSet: The single-qubit unitary instruction attached to the circuit.
186
187 Raises:
188 QiskitError: if the format is wrong; if the array u is not unitary
189 """
190
191 if isinstance(qubit, QuantumRegister):
192 qubit = qubit[:]
193 if len(qubit) == 1:
194 qubit = qubit[0]
195 else:
196 raise QiskitError("The target qubit is a QuantumRegister containing more than"
197 " one qubits.")
198 # Check if there is one target qubit provided
199 if not isinstance(qubit, Qubit):
200 raise QiskitError("The target qubit is not a single qubit from a QuantumRegister.")
201 return self.append(SingleQubitUnitary(unitary_matrix, mode, up_to_diagonal), [qubit], [])
202
203
204 QuantumCircuit.squ = squ
205
[end of qiskit/extensions/quantum_initializer/squ.py]
[start of qiskit/transpiler/__init__.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 =====================================
15 Transpiler (:mod:`qiskit.transpiler`)
16 =====================================
17
18 .. currentmodule:: qiskit.transpiler
19
20 Overview
21 ========
22 Transpilation is the process of rewriting a given input circuit to match
23 the topology of a specific quantum device, and/or to optimize the circuit
24 for execution on present day noisy quantum systems.
25
26 Most circuits must undergo a series of transformations that make them compatible with
27 a given target device, and optimize them to reduce the effects of noise on the
28 resulting outcomes. Rewriting quantum circuits to match hardware constraints and
29 optimizing for performance can be far from trivial. The flow of logic in the rewriting
30 tool chain need not be linear, and can often have iterative sub-loops, conditional
31 branches, and other complex behaviors. That being said, the basic building blocks
32 follow the structure given below:
33
34 .. image:: /source_images/transpiling_core_steps.png
35
36 .. raw:: html
37
38 <br>
39
40 Qiskit has four pre-built transpilation pipelines available here:
41 :mod:`qiskit.transpiler.preset_passmanagers`. Unless the reader is familiar with
42 quantum circuit optimization methods and their usage, it is best to use one of
43 these ready-made routines.
44
45
46 Supplementary Information
47 =========================
48
49 .. container:: toggle
50
51 .. container:: header
52
53 **Basis Gates**
54
55 When writing a quantum circuit you are free to use any quantum gate (unitary operator) that
56 you like, along with a collection of non-gate operations such as qubit measurements and
57 reset operations. However, when running a circuit on a real quantum device one no longer
58 has this flexibility. Due to limitations in, for example, the physical interactions
59 between qubits, difficulty in implementing multi-qubit gates, control electronics etc,
60 a quantum computing device can only natively support a handful of quantum gates and non-gate
61 operations. In the present case of IBM Q devices, the native gate set can be found by querying
62 the devices themselves, and looking for the corresponding attribute in their configuration:
63
64 .. jupyter-execute::
65 :hide-code:
66 :hide-output:
67
68 from qiskit.test.mock import FakeVigo
69 backend = FakeVigo()
70
71 .. jupyter-execute::
72
73 backend.configuration().basis_gates
74
75
76 Every quantum circuit run on an IBM Q device must be expressed using only these basis gates.
77 For example, suppose one wants to run a simple phase estimation circuit:
78
79 .. jupyter-execute::
80
81 import numpy as np
82 from qiskit import QuantumCircuit
83 qc = QuantumCircuit(2, 1)
84
85 qc.h(0)
86 qc.x(1)
87 qc.cp(np.pi/4, 0, 1)
88 qc.h(0)
89 qc.measure([0], [0])
90 qc.draw(output='mpl')
91
92 We have :math:`H`, :math:`X`, and controlled-:math:`P` gates, all of which are
93 not in our devices basis gate set, and must be expanded. This expansion is taken
94 care of for us in the :func:`qiskit.execute` function. However, we can
95 decompose the circuit to show what it would look like in the native gate set of
96 the IBM Quantum devices:
97
98 .. jupyter-execute::
99
100 qc_basis = qc.decompose()
101 qc_basis.draw(output='mpl')
102
103
104 A few things to highlight. First, the circuit has gotten longer with respect to the
105 initial one. This can be verified by checking the depth of the circuits:
106
107 .. jupyter-execute::
108
109 print('Original depth:', qc.depth(), 'Decomposed Depth:', qc_basis.depth())
110
111 Second, although we had a single controlled gate, the fact that it was not in the basis
112 set means that, when expanded, it requires more than a single `cx` gate to implement.
113 All said, unrolling to the basis set of gates leads to an increase in the depth of a
114 quantum circuit and the number of gates.
115
116 It is important to highlight two special cases:
117
118 1. A SWAP gate is not a native gate on the IBM Q devices, and must be decomposed into
119 three CNOT gates:
120
121 .. jupyter-execute::
122
123 swap_circ = QuantumCircuit(2)
124 swap_circ.swap(0, 1)
125 swap_circ.decompose().draw(output='mpl')
126
127 As a product of three CNOT gates, SWAP gates are expensive operations to perform on
128 noisy quantum devices. However, such operations are usually necessary for embedding a
129 circuit into the limited entangling gate connectivities of actual devices. Thus,
130 minimizing the number of SWAP gates in a circuit is a primary goal in the
131 transpilation process.
132
133
134 2. A Toffoli, or controlled-controlled-not gate (`ccx`), is a three-qubit gate. Given
135 that our basis gate set includes only single- and two-qubit gates, it is obvious that
136 this gate must be decomposed. This decomposition is quite costly:
137
138 .. jupyter-execute::
139
140 ccx_circ = QuantumCircuit(3)
141 ccx_circ.ccx(0, 1, 2)
142 ccx_circ.decompose().draw(output='mpl')
143
144 For every Toffoli gate in a quantum circuit, the IBM Quantum hardware may execute up to
145 six CNOT gates, and a handful of single-qubit gates. From this example, it should be
146 clear that any algorithm that makes use of multiple Toffoli gates will end up as a
147 circuit with large depth and will therefore be appreciably affected by noise and gate
148 errors.
149
150
151 .. raw:: html
152
153 <br>
154
155 .. container:: toggle
156
157 .. container:: header
158
159 **Initial Layout**
160
161 Quantum circuits are abstract entities whose qubits are "virtual" representations of actual
162 qubits used in computations. We need to be able to map these virtual qubits in a one-to-one
163 manner to the "physical" qubits in an actual quantum device.
164
165 .. image:: /source_images/mapping.png
166
167 .. raw:: html
168
169 <br><br>
170
171 By default, qiskit will do this mapping for you. The choice of mapping depends on the
172 properties of the circuit, the particular device you are targeting, and the optimization
173 level that is chosen. The basic mapping strategies are the following:
174
175 - **Trivial layout**: Map virtual qubits to the same numbered physical qubit on the device,
176 i.e. `[0,1,2,3,4]` -> `[0,1,2,3,4]` (default in `optimization_level=0` and
177 `optimization_level=1`).
178
179 - **Dense layout**: Find the sub-graph of the device with same number of qubits as the circuit
180 with the greatest connectivity (default in `optimization_level=2` and `optimization_level=3`).
181
182
183 The choice of initial layout is extremely important when:
184
185 1. Computing the number of SWAP operations needed to map the input circuit onto the device
186 topology.
187
188 2. Taking into account the noise properties of the device.
189
190
191 The choice of `initial_layout` can mean the difference between getting a result,
192 and getting nothing but noise.
193
194 Lets see what layouts are automatically picked at various optimization levels. The modified
195 circuits returned by :func:`qiskit.compiler.transpile` have this initial layout information
196 in them, and we can view this layout selection graphically using
197 :func:`qiskit.visualization.plot_circuit_layout`:
198
199 .. jupyter-execute::
200
201 from qiskit import QuantumCircuit, transpile
202 from qiskit.visualization import plot_circuit_layout
203 from qiskit.test.mock import FakeVigo
204 backend = FakeVigo()
205
206 ghz = QuantumCircuit(3, 3)
207 ghz.h(0)
208 ghz.cx(0,range(1,3))
209 ghz.barrier()
210 ghz.measure(range(3), range(3))
211 ghz.draw(output='mpl')
212
213
214 - **Layout Using Optimization Level 0**
215
216 .. jupyter-execute::
217
218 new_circ_lv0 = transpile(ghz, backend=backend, optimization_level=0)
219 plot_circuit_layout(new_circ_lv0, backend)
220
221
222 - **Layout Using Optimization Level 3**
223
224 .. jupyter-execute::
225
226 new_circ_lv3 = transpile(ghz, backend=backend, optimization_level=3)
227 plot_circuit_layout(new_circ_lv3, backend)
228
229
230 It is completely possible to specify your own initial layout. To do so we can
231 pass a list of integers to :func:`qiskit.compiler.transpile` via the `initial_layout`
232 keyword argument, where the index labels the virtual qubit in the circuit and the
233 corresponding value is the label for the physical qubit to map onto:
234
235 .. jupyter-execute::
236
237 # Virtual -> physical
238 # 0 -> 3
239 # 1 -> 4
240 # 2 -> 2
241
242 my_ghz = transpile(ghz, backend, initial_layout=[3, 4, 2])
243 plot_circuit_layout(my_ghz, backend)
244
245 .. raw:: html
246
247 <br>
248
249
250 .. container:: toggle
251
252 .. container:: header
253
254 **Mapping Circuits to Hardware Topology**
255
256 In order to implement a CNOT gate between qubits in a quantum circuit that are not directly
257 connected on a quantum device one or more SWAP gates must be inserted into the circuit to
258 move the qubit states around until they are adjacent on the device gate map. Each SWAP
259 gate is decomposed into three CNOT gates on the IBM Quantum devices, and represents an
260 expensive and noisy operation to perform. Thus, finding the minimum number of SWAP gates
261 needed to map a circuit onto a given device, is an important step (if not the most important)
262 in the whole execution process.
263
264 However, as with many important things in life, finding the optimal SWAP mapping is hard.
265 In fact it is in a class of problems called NP-Hard, and is thus prohibitively expensive
266 to compute for all but the smallest quantum devices and input circuits. To get around this,
267 by default Qiskit uses a stochastic heuristic algorithm called
268 :class:`Qiskit.transpiler.passes.StochasticSwap` to compute a good, but not necessarily minimal
269 SWAP count. The use of a stochastic method means the circuits generated by
270 :func:`Qiskit.compiler.transpile` (or :func:`Qiskit.execute` that calls `transpile` internally)
271 are not guaranteed to be the same over repeated runs. Indeed, running the same circuit
272 repeatedly will in general result in a distribution of circuit depths and gate counts at the
273 output.
274
275 In order to highlight this, we run a GHZ circuit 100 times, using a "bad" (disconnected)
276 `initial_layout`:
277
278 .. jupyter-execute::
279
280 import matplotlib.pyplot as plt
281 from qiskit import QuantumCircuit, transpile
282 from qiskit.test.mock import FakeBoeblingen
283 backend = FakeBoeblingen()
284
285 ghz = QuantumCircuit(5)
286 ghz.h(0)
287 ghz.cx(0,range(1,5))
288 ghz.draw(output='mpl')
289
290
291 .. jupyter-execute::
292
293 depths = []
294 for _ in range(100):
295 depths.append(transpile(ghz,
296 backend,
297 initial_layout=[7, 0, 4, 15, 19],
298 ).depth())
299
300 plt.figure(figsize=(8, 6))
301 plt.hist(depths, bins=list(range(14,36)), align='left', color='#AC557C')
302 plt.xlabel('Depth', fontsize=14)
303 plt.ylabel('Counts', fontsize=14);
304
305
306 This distribution is quite wide, signaling the difficulty the SWAP mapper is having
307 in computing the best mapping. Most circuits will have a distribution of depths,
308 perhaps not as wide as this one, due to the stochastic nature of the default SWAP
309 mapper. Of course, we want the best circuit we can get, especially in cases where
310 the depth is critical to success or failure. In cases like this, it is best to
311 :func:`transpile` a circuit several times, e.g. 10, and take the one with the
312 lowest depth. The :func:`transpile` function will automatically run in parallel
313 mode, making this procedure relatively speedy in most cases.
314
315 .. raw:: html
316
317 <br>
318
319
320 .. container:: toggle
321
322 .. container:: header
323
324 **Gate Optimization**
325
326 Decomposing quantum circuits into the basis gate set of the IBM Quantum devices,
327 and the addition of SWAP gates needed to match hardware topology, conspire to
328 increase the depth and gate count of quantum circuits. Fortunately many routines
329 for optimizing circuits by combining or eliminating gates exist. In some cases
330 these methods are so effective the output circuits have lower depth than the inputs.
331 In other cases, not much can be done, and the computation may be difficult to
332 perform on noisy devices. Different gate optimizations are turned on with
333 different `optimization_level` values. Below we show the benefits gained from
334 setting the optimization level higher:
335
336 .. important::
337
338 The output from :func:`transpile` varies due to the stochastic swap mapper.
339 So the numbers below will likely change each time you run the code.
340
341
342 .. jupyter-execute::
343
344 import matplotlib.pyplot as plt
345 from qiskit import QuantumCircuit, transpile
346 from qiskit.test.mock import FakeBoeblingen
347 backend = FakeBoeblingen()
348
349 ghz = QuantumCircuit(5)
350 ghz.h(0)
351 ghz.cx(0,range(1,5))
352 ghz.draw(output='mpl')
353
354
355 .. jupyter-execute::
356
357 for kk in range(4):
358 circ = transpile(ghz, backend, optimization_level=kk)
359 print('Optimization Level {}'.format(kk))
360 print('Depth:', circ.depth())
361 print('Gate counts:', circ.count_ops())
362 print()
363
364
365 .. raw:: html
366
367 <br>
368
369
370 Transpiler API
371 ==============
372
373 Pass Manager Construction
374 -------------------------
375
376 .. autosummary::
377 :toctree: ../stubs/
378
379 PassManager
380 PassManagerConfig
381 PropertySet
382 FlowController
383
384 Layout and Topology
385 -------------------
386
387 .. autosummary::
388 :toctree: ../stubs/
389
390 Layout
391 CouplingMap
392
393 Scheduling
394 ----------
395
396 .. autosummary::
397 :toctree: ../stubs/
398
399 InstructionDurations
400
401 Fenced Objects
402 --------------
403
404 .. autosummary::
405 :toctree: ../stubs/
406
407 FencedDAGCircuit
408 FencedPropertySet
409
410 Exceptions
411 ----------
412
413 .. autosummary::
414 :toctree: ../stubs/
415
416 TranspilerError
417 TranspilerAccessError
418 """
419
420 from .runningpassmanager import FlowController
421 from .passmanager import PassManager
422 from .passmanager_config import PassManagerConfig
423 from .propertyset import PropertySet
424 from .exceptions import TranspilerError, TranspilerAccessError
425 from .fencedobjs import FencedDAGCircuit, FencedPropertySet
426 from .basepasses import AnalysisPass, TransformationPass
427 from .coupling import CouplingMap
428 from .layout import Layout
429 from .instruction_durations import InstructionDurations
430
[end of qiskit/transpiler/__init__.py]
[start of qiskit/transpiler/passes/optimization/optimize_1q_gates.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Optimize chains of single-qubit u1, u2, u3 gates by combining them into a single gate."""
14
15 from itertools import groupby
16
17 import numpy as np
18
19 from qiskit.transpiler.exceptions import TranspilerError
20 from qiskit.circuit.library.standard_gates.u1 import U1Gate
21 from qiskit.circuit.library.standard_gates.u2 import U2Gate
22 from qiskit.circuit.library.standard_gates.u3 import U3Gate
23 from qiskit.circuit.gate import Gate
24 from qiskit.transpiler.basepasses import TransformationPass
25 from qiskit.quantum_info.operators import Quaternion
26
27 _CHOP_THRESHOLD = 1e-15
28
29
30 class Optimize1qGates(TransformationPass):
31 """Optimize chains of single-qubit u1, u2, u3 gates by combining them into a single gate."""
32
33 def __init__(self, basis=None, eps=1e-15):
34 """Optimize1qGates initializer.
35
36 Args:
37 basis (list[str]): Basis gates to consider, e.g. `['u3', 'cx']`. For the effects
38 of this pass, the basis is the set intersection between the `basis` parameter and
39 the set `{'u1','u2','u3'}`.
40 eps (float): EPS to check against
41 """
42 super().__init__()
43 self.basis = basis if basis else ["u1", "u2", "u3"]
44 self.eps = eps
45
46 def run(self, dag):
47 """Run the Optimize1qGates pass on `dag`.
48
49 Args:
50 dag (DAGCircuit): the DAG to be optimized.
51
52 Returns:
53 DAGCircuit: the optimized DAG.
54
55 Raises:
56 TranspilerError: if YZY and ZYZ angles do not give same rotation matrix.
57 """
58 runs = dag.collect_runs(["u1", "u2", "u3"])
59 runs = _split_runs_on_parameters(runs)
60 for run in runs:
61 right_name = "u1"
62 right_parameters = (0, 0, 0) # (theta, phi, lambda)
63 right_global_phase = 0
64 for current_node in run:
65 left_name = current_node.name
66 if (current_node.condition is not None
67 or len(current_node.qargs) != 1
68 or left_name not in ["u1", "u2", "u3", "id"]):
69 raise TranspilerError("internal error")
70 if left_name == "u1":
71 left_parameters = (0, 0, current_node.op.params[0])
72 elif left_name == "u2":
73 left_parameters = (np.pi / 2, current_node.op.params[0],
74 current_node.op.params[1])
75 elif left_name == "u3":
76 left_parameters = tuple(current_node.op.params)
77 else:
78 left_name = "u1" # replace id with u1
79 left_parameters = (0, 0, 0)
80 if (current_node.op.definition is not None and
81 current_node.op.definition.global_phase):
82 right_global_phase += current_node.op.definition.global_phase
83 # If there are any sympy objects coming from the gate convert
84 # to numpy.
85 left_parameters = tuple([float(x) for x in left_parameters])
86 # Compose gates
87 name_tuple = (left_name, right_name)
88 if name_tuple == ("u1", "u1"):
89 # u1(lambda1) * u1(lambda2) = u1(lambda1 + lambda2)
90 right_parameters = (0, 0, right_parameters[2] +
91 left_parameters[2])
92 elif name_tuple == ("u1", "u2"):
93 # u1(lambda1) * u2(phi2, lambda2) = u2(phi2 + lambda1, lambda2)
94 right_parameters = (np.pi / 2, right_parameters[1] +
95 left_parameters[2], right_parameters[2])
96 elif name_tuple == ("u2", "u1"):
97 # u2(phi1, lambda1) * u1(lambda2) = u2(phi1, lambda1 + lambda2)
98 right_name = "u2"
99 right_parameters = (np.pi / 2, left_parameters[1],
100 right_parameters[2] + left_parameters[2])
101 elif name_tuple == ("u1", "u3"):
102 # u1(lambda1) * u3(theta2, phi2, lambda2) =
103 # u3(theta2, phi2 + lambda1, lambda2)
104 right_parameters = (right_parameters[0], right_parameters[1] +
105 left_parameters[2], right_parameters[2])
106 elif name_tuple == ("u3", "u1"):
107 # u3(theta1, phi1, lambda1) * u1(lambda2) =
108 # u3(theta1, phi1, lambda1 + lambda2)
109 right_name = "u3"
110 right_parameters = (left_parameters[0], left_parameters[1],
111 right_parameters[2] + left_parameters[2])
112 elif name_tuple == ("u2", "u2"):
113 # Using Ry(pi/2).Rz(2*lambda).Ry(pi/2) =
114 # Rz(pi/2).Ry(pi-2*lambda).Rz(pi/2),
115 # u2(phi1, lambda1) * u2(phi2, lambda2) =
116 # u3(pi - lambda1 - phi2, phi1 + pi/2, lambda2 + pi/2)
117 right_name = "u3"
118 right_parameters = (np.pi - left_parameters[2] -
119 right_parameters[1], left_parameters[1] +
120 np.pi / 2, right_parameters[2] +
121 np.pi / 2)
122 elif name_tuple[1] == "nop":
123 right_name = left_name
124 right_parameters = left_parameters
125 else:
126 # For composing u3's or u2's with u3's, use
127 # u2(phi, lambda) = u3(pi/2, phi, lambda)
128 # together with the qiskit.mapper.compose_u3 method.
129 right_name = "u3"
130 # Evaluate the symbolic expressions for efficiency
131 right_parameters = Optimize1qGates.compose_u3(left_parameters[0],
132 left_parameters[1],
133 left_parameters[2],
134 right_parameters[0],
135 right_parameters[1],
136 right_parameters[2])
137 # Why evalf()? This program:
138 # OPENQASM 2.0;
139 # include "qelib1.inc";
140 # qreg q[2];
141 # creg c[2];
142 # u3(0.518016983430947*pi,1.37051598592907*pi,1.36816383603222*pi) q[0];
143 # u3(1.69867232277986*pi,0.371448347747471*pi,0.461117217930936*pi) q[0];
144 # u3(0.294319836336836*pi,0.450325871124225*pi,1.46804720442555*pi) q[0];
145 # measure q -> c;
146 # took >630 seconds (did not complete) to optimize without
147 # calling evalf() at all, 19 seconds to optimize calling
148 # evalf() AFTER compose_u3, and 1 second to optimize
149 # calling evalf() BEFORE compose_u3.
150 # 1. Here down, when we simplify, we add f(theta) to lambda to
151 # correct the global phase when f(theta) is 2*pi. This isn't
152 # necessary but the other steps preserve the global phase, so
153 # we continue in that manner.
154 # 2. The final step will remove Z rotations by 2*pi.
155 # 3. Note that is_zero is true only if the expression is exactly
156 # zero. If the input expressions have already been evaluated
157 # then these final simplifications will not occur.
158 # TODO After we refactor, we should have separate passes for
159 # exact and approximate rewriting.
160
161 # Y rotation is 0 mod 2*pi, so the gate is a u1
162 if abs(np.mod(right_parameters[0],
163 (2 * np.pi))) < self.eps and right_name != "u1":
164 right_name = "u1"
165 right_parameters = (0, 0, right_parameters[1] +
166 right_parameters[2] +
167 right_parameters[0])
168 # Y rotation is pi/2 or -pi/2 mod 2*pi, so the gate is a u2
169 if right_name == "u3":
170 # theta = pi/2 + 2*k*pi
171 right_angle = right_parameters[0] - np.pi / 2
172 if abs(right_angle) < self.eps:
173 right_angle = 0
174 if abs(np.mod((right_angle),
175 2 * np.pi)) < self.eps:
176 right_name = "u2"
177 right_parameters = (np.pi / 2, right_parameters[1],
178 right_parameters[2] +
179 (right_parameters[0] - np.pi / 2))
180 # theta = -pi/2 + 2*k*pi
181 right_angle = right_parameters[0] + np.pi / 2
182 if abs(right_angle) < self.eps:
183 right_angle = 0
184 if abs(np.mod(right_angle,
185 2 * np.pi)) < self.eps:
186 right_name = "u2"
187 right_parameters = (np.pi / 2, right_parameters[1] +
188 np.pi, right_parameters[2] -
189 np.pi + (right_parameters[0] +
190 np.pi / 2))
191 # u1 and lambda is 0 mod 2*pi so gate is nop (up to a global phase)
192 if right_name == "u1" and abs(np.mod(right_parameters[2],
193 2 * np.pi)) < self.eps:
194 right_name = "nop"
195
196 if right_name == "u2" and "u2" not in self.basis:
197 right_name = "u3"
198 if right_name == "u1" and "u1" not in self.basis:
199 right_name = "u3"
200
201 new_op = Gate(name="", num_qubits=1, params=[])
202 if right_name == "u1":
203 new_op = U1Gate(right_parameters[2])
204 if right_name == "u2":
205 new_op = U2Gate(right_parameters[1], right_parameters[2])
206 if right_name == "u3":
207 if "u3" in self.basis:
208 new_op = U3Gate(*right_parameters)
209 else:
210 raise TranspilerError('It was not possible to use the basis %s' % self.basis)
211
212 dag.global_phase += right_global_phase
213
214 if right_name != 'nop':
215 dag.substitute_node(run[0], new_op, inplace=True)
216
217 # Delete the other nodes in the run
218 for current_node in run[1:]:
219 dag.remove_op_node(current_node)
220 if right_name == "nop":
221 dag.remove_op_node(run[0])
222
223 return dag
224
225 @staticmethod
226 def compose_u3(theta1, phi1, lambda1, theta2, phi2, lambda2):
227 """Return a triple theta, phi, lambda for the product.
228
229 u3(theta, phi, lambda)
230 = u3(theta1, phi1, lambda1).u3(theta2, phi2, lambda2)
231 = Rz(phi1).Ry(theta1).Rz(lambda1+phi2).Ry(theta2).Rz(lambda2)
232 = Rz(phi1).Rz(phi').Ry(theta').Rz(lambda').Rz(lambda2)
233 = u3(theta', phi1 + phi', lambda2 + lambda')
234
235 Return theta, phi, lambda.
236 """
237 # Careful with the factor of two in yzy_to_zyz
238 thetap, phip, lambdap = Optimize1qGates.yzy_to_zyz((lambda1 + phi2), theta1, theta2)
239 (theta, phi, lamb) = (thetap, phi1 + phip, lambda2 + lambdap)
240
241 return (theta, phi, lamb)
242
243 @staticmethod
244 def yzy_to_zyz(xi, theta1, theta2, eps=1e-9): # pylint: disable=invalid-name
245 """Express a Y.Z.Y single qubit gate as a Z.Y.Z gate.
246
247 Solve the equation
248
249 .. math::
250
251 Ry(theta1).Rz(xi).Ry(theta2) = Rz(phi).Ry(theta).Rz(lambda)
252
253 for theta, phi, and lambda.
254
255 Return a solution theta, phi, and lambda.
256 """
257 quaternion_yzy = Quaternion.from_euler([theta1, xi, theta2], 'yzy')
258 euler = quaternion_yzy.to_zyz()
259 quaternion_zyz = Quaternion.from_euler(euler, 'zyz')
260 # output order different than rotation order
261 out_angles = (euler[1], euler[0], euler[2])
262 abs_inner = abs(quaternion_zyz.data.dot(quaternion_yzy.data))
263 if not np.allclose(abs_inner, 1, eps):
264 raise TranspilerError('YZY and ZYZ angles do not give same rotation matrix.')
265 out_angles = tuple(0 if np.abs(angle) < _CHOP_THRESHOLD else angle
266 for angle in out_angles)
267 return out_angles
268
269
270 def _split_runs_on_parameters(runs):
271 """Finds runs containing parameterized gates and splits them into sequential
272 runs excluding the parameterized gates.
273 """
274
275 out = []
276 for run in runs:
277 groups = groupby(run, lambda x: x.op.is_parameterized())
278
279 for group_is_parameterized, gates in groups:
280 if not group_is_parameterized:
281 out.append(list(gates))
282
283 return out
284
[end of qiskit/transpiler/passes/optimization/optimize_1q_gates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
230c07beedf214e1e02c91c6b1fcbe121acb4040
|
Qiskit transpile different basis gate set
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit version**:
{'qiskit-terra': '0.16.0',
'qiskit-aer': '0.7.0',
'qiskit-ignis': '0.5.0',
'qiskit-ibmq-provider': '0.11.0',
'qiskit-aqua': '0.8.0',
'qiskit': '0.23.0'}
- **Python version**: Python 3.8.1
- **Operating system**: macOS
### What is the current behavior?
Currently, the output of the qiskit transpile function for my circuit of random gates, with the basis gate set 'rx', 'ry' and 'cx', is this circuit:
<img width="1183" alt="Bildschirmfoto 2020-10-25 um 17 56 11" src="https://user-images.githubusercontent.com/60929371/97113540-60ace000-16eb-11eb-96c9-11c462b40142.png">
I'm not sure why the transpile function sometimes gives three single-qubit rotations (is it not possible to decompose every single-qubit rotation into one rx(-pi <= theta <= pi) and one ry(-pi <= theta <= pi) rotation?).
The second minor issue is that sometimes a rotation with an angle of 0 degrees is still left in the circuit and not removed (see image).
### Steps to reproduce the problem
The following code produces the circuit of the provided image.
```python
qc = QuantumCircuit(3)
qc.h(0)
qc.h(1)
qc.x(0)
qc.h(0)
qc.ry(np.pi/4, 0)
qc.rx(np.pi/10, 0)
qc.y(1)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.h(2)
qc.cx(2, 1)
qc.x(0)
qc.cx(0, 1)
qc.y(2)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.y(0)
qc.x(2)
qc.rx(np.pi/10, 0)
qc.ry(np.pi/4, 2)
qc.cx(0, 2)
qc.h(2)
qc.cx(2, 1)
qc.x(0)
qc.cx(0, 1)
qc.rx(np.pi/10, 0)
qc.rx(np.pi/10, 0)
qc.x(2)
qc.ry(np.pi/4, 2)
qc.ry(np.pi/4, 2)
qc.ry(np.pi/4, 2)
qc.cx(0, 2)
qc.h(2)
qc.cx(2, 1)
qc.measure_all()
coupling_string = [[0, 1], [1, 0], [2, 0], [0,2], [1,2], [2, 1]]
CM = CouplingMap(coupling_string)
basis_gates = ['id', 'ry', 'rx', 'cx']
transpiled_qc = transpile(qc, coupling_map=CM, basis_gates=basis_gates, optimization_level=3, seed_transpiler=1)
```
### What is the expected behavior?
Normally I would expect that the transpile function with the passed basis gate set would give me only two consecutively applied single-qubit rotations in my circuit, not three, and that rotations with an angle of 0 would be removed.
Am I wrong with my expectations?
### Suggested solutions
...
|
A general single-qubit gate requires 3 Pauli rotations, right? E.g. the ZYZ decomposition (see for instance [this paper](https://arxiv.org/pdf/quant-ph/0406176.pdf)). But we're working on improving the compiler for these arbitrary gate sets, to e.g. remove things like `RX(0)`.
Or is there a special decomposition you had in mind?
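For reference (an editorial illustration added for clarity, not part of the original comment), the ZYZ decomposition referred to above writes an arbitrary 2x2 unitary with three rotation angles plus a global phase:

```latex
U = e^{i\alpha}\, R_z(\beta)\, R_y(\gamma)\, R_z(\delta)
  = e^{i\alpha}
    \begin{pmatrix}
      e^{-i(\beta+\delta)/2}\cos(\gamma/2) & -e^{-i(\beta-\delta)/2}\sin(\gamma/2) \\
      e^{i(\beta-\delta)/2}\sin(\gamma/2)  &  e^{i(\beta+\delta)/2}\cos(\gamma/2)
    \end{pmatrix}
```

Two rotations (for example one rx followed by one ry) only provide two free angles, so they cannot reach a generic single-qubit unitary.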
So I dug into the `Rx(0)` in the output circuit. The source of this is coming from the level 3 optimization pass `CollectBlocks` and then `UnitarySynthesis`. The two qubit decomposer is adding the rx(0) to the circuit when decomposing the unitary matrix, which happens before we run `Optimize1qGatesDecomposition`. This then gets skipped over inside `Optimize1qGatesDecomposition` because it skips single-qubit gates already in the target basis, since the Euler decomposition will often expand it to multiple gates: https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/transpiler/passes/optimization/optimize_1q_decomposition.py#L73-L75
To optimize away the rx(0) case we'll probably have to modify that if statement to manually check for 0-degree rotations and either manually remove them, or call the decomposer if the angles are 0 and let its simplification do the removal.
I just pushed up https://github.com/Qiskit/qiskit-terra/pull/5292 which should fix the rx(0) case mentioned above (I marked it as fixing the issue, but can change that if there is more to do here). I'm not able to reproduce the exact case anymore because of other changes made to the two qubit unitary decomposition on master since the last release (I was able to reproduce it on 0.16.0 though). So I can't really verify the stray rx(0) in the specific case above is fixed by that PR (short of backporting the change in isolation to 0.16.0).
As for the decomposition @Cryoris is correct you need 3 rotation gates for general single qubit gates. The optimization pass (https://qiskit.org/documentation/stubs/qiskit.transpiler.passes.Optimize1qGatesDecomposition.html#qiskit.transpiler.passes.Optimize1qGatesDecomposition ) is just using https://qiskit.org/documentation/stubs/qiskit.quantum_info.OneQubitEulerDecomposer.html#qiskit.quantum_info.OneQubitEulerDecomposer under the covers to go from the unitary for the chain of single qubit gates to a smaller number of gates. We can look at adding different types of optimization passes if you had something in mind that performs better for your basis set.
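For concreteness, here is a minimal sketch (added for illustration, not part of the original thread) of calling that decomposer directly; it assumes a Terra version in which `OneQubitEulerDecomposer` accepts the `'ZYZ'` and `'XYX'` basis strings:

```python
# Illustrative sketch only: decompose a random single-qubit unitary into at most
# three rotations using the decomposer mentioned above.
from qiskit.quantum_info import random_unitary, OneQubitEulerDecomposer

unitary = random_unitary(2).data                   # arbitrary 2x2 unitary matrix
decomposer = OneQubitEulerDecomposer(basis='ZYZ')  # or basis='XYX' for rx/ry
circuit = decomposer(unitary)                      # returns a QuantumCircuit
print(circuit)
```

This is the same machinery the `Optimize1qGatesDecomposition` pass uses under the covers.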
Hi,
thanks for the quick fix of the rx(0) issue. With the single qubit decomposition into two rotations I was of course wrong. Thanks for the correction here.
@mtreinish: When it comes to optimization passes I lack the knowledge here and don't know what suits our 'XYX' basis best. For that I would need to dive deeper into the optimizer literature. Can you recommend some papers?
|
2020-11-11T21:49:28Z
|
<patch>
diff --git a/qiskit/transpiler/passes/optimization/optimize_1q_decomposition.py b/qiskit/transpiler/passes/optimization/optimize_1q_decomposition.py
--- a/qiskit/transpiler/passes/optimization/optimize_1q_decomposition.py
+++ b/qiskit/transpiler/passes/optimization/optimize_1q_decomposition.py
@@ -15,6 +15,8 @@
from itertools import groupby
import logging
+import numpy as np
+
from qiskit.circuit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Operator
from qiskit.transpiler.basepasses import TransformationPass
@@ -71,8 +73,14 @@ def run(self, dag):
runs = dag.collect_runs(self.euler_basis_names[self.basis])
runs = _split_runs_on_parameters(runs)
for run in runs:
- # Don't try to optimize a single 1q gate
if len(run) <= 1:
+ params = run[0].op.params
+ # Remove single identity gates
+ if run[0].op.name in self.euler_basis_names[self.basis] and len(
+ params) > 0 and np.array_equal(run[0].op.to_matrix(),
+ np.eye(2)):
+ dag.remove_op_node(run[0])
+ # Don't try to optimize a single 1q gate
continue
q = QuantumRegister(1, "q")
qc = QuantumCircuit(1)
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-21300
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
to_csv failing with encoding='utf-16'
#### Code Sample:
```python
df.to_csv('test.gz', sep='~', header=False, index=False,compression='gzip',line_terminator='\r\n',encoding='utf-16', na_rep='')
```
/opt/anaconda/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
#### Problem description
In first place, big thank you for supporting pandas, my life is easier and fun with pandas in the toolkit.
In previous version 0.22 we were able to do _to_csv_ with encoding='utf-16' to handle Japanese, Chinese among other content properly. Need the utf-16 encoding for next steps like upload data to MSSQL server in bulk mode.
I would like to know if I can use a workaround to continue have the support of uft-16.
Any other suggestions are welcome.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.114-42-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: POSIX
LOCALE: None.None
pandas: 0.23.0
pytest: 3.5.1
pip: 10.0.1
setuptools: 39.1.0
Cython: 0.28.2
numpy: 1.14.2
scipy: 1.1.0
pyarrow: 0.9.0
xarray: None
IPython: 6.4.0
sphinx: 1.7.4
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.3
numexpr: 2.6.5
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.3
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://circleci.com/gh/pandas-dev/pandas">
52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td></td>
58 <td>
59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Coverage</td>
66 <td>
67 <a href="https://codecov.io/gh/pandas-dev/pandas">
68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Downloads</td>
74 <td>
75 <a href="https://pandas.pydata.org">
76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
77 </a>
78 </td>
79 </tr>
80 <tr>
81 <td>Gitter</td>
82 <td>
83 <a href="https://gitter.im/pydata/pandas">
84 <img src="https://badges.gitter.im/Join%20Chat.svg"
85 </a>
86 </td>
87 </tr>
88 </table>
89
90
91
92 ## What is it
93
94 **pandas** is a Python package providing fast, flexible, and expressive data
95 structures designed to make working with "relational" or "labeled" data both
96 easy and intuitive. It aims to be the fundamental high-level building block for
97 doing practical, **real world** data analysis in Python. Additionally, it has
98 the broader goal of becoming **the most powerful and flexible open source data
99 analysis / manipulation tool available in any language**. It is already well on
100 its way toward this goal.
101
102 ## Main Features
103 Here are just a few of the things that pandas does well:
104
105 - Easy handling of [**missing data**][missing-data] (represented as
106 `NaN`) in floating point as well as non-floating point data
107 - Size mutability: columns can be [**inserted and
108 deleted**][insertion-deletion] from DataFrame and higher dimensional
109 objects
110 - Automatic and explicit [**data alignment**][alignment]: objects can
111 be explicitly aligned to a set of labels, or the user can simply
112 ignore the labels and let `Series`, `DataFrame`, etc. automatically
113 align the data for you in computations
114 - Powerful, flexible [**group by**][groupby] functionality to perform
115 split-apply-combine operations on data sets, for both aggregating
116 and transforming data
117 - Make it [**easy to convert**][conversion] ragged,
118 differently-indexed data in other Python and NumPy data structures
119 into DataFrame objects
120 - Intelligent label-based [**slicing**][slicing], [**fancy
121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
122 large data sets
123 - Intuitive [**merging**][merging] and [**joining**][joining] data
124 sets
125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
126 data sets
127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
128 labels per tick)
129 - Robust IO tools for loading data from [**flat files**][flat-files]
130 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
132 - [**Time series**][timeseries]-specific functionality: date range
133 generation and frequency conversion, moving window statistics,
134 moving window linear regressions, date shifting and lagging, etc.
135
136
137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
155
156 ## Where to get it
157 The source code is currently hosted on GitHub at:
158 https://github.com/pandas-dev/pandas
159
160 Binary installers for the latest released version are available at the [Python
161 package index](https://pypi.org/project/pandas) and on conda.
162
163 ```sh
164 # conda
165 conda install pandas
166 ```
167
168 ```sh
169 # or PyPI
170 pip install pandas
171 ```
172
173 ## Dependencies
174 - [NumPy](https://www.numpy.org): 1.9.0 or higher
175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
177
178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
179 for recommended and optional dependencies.
180
181 ## Installation from sources
182 To install pandas from source you need Cython in addition to the normal
183 dependencies above. Cython can be installed from pypi:
184
185 ```sh
186 pip install cython
187 ```
188
189 In the `pandas` directory (same one where you found this file after
190 cloning the git repo), execute:
191
192 ```sh
193 python setup.py install
194 ```
195
196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
197
198 ```sh
199 python setup.py develop
200 ```
201
202 Alternatively, you can use `pip` if you want all the dependencies pulled
203 in automatically (the `-e` option is for installing it in [development
204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
205
206 ```sh
207 pip install -e .
208 ```
209
210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
211
212 ## License
213 [BSD 3](LICENSE)
214
215 ## Documentation
216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
217
218 ## Background
219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
220 has been under active development since then.
221
222 ## Getting Help
223
224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
226
227 ## Discussion and Development
228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
229
230 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
231
232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
233
234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
235
236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
237
238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
239
240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
241
242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
243
[end of README.md]
[start of pandas/io/common.py]
1 """Common IO api utilities"""
2
3 import os
4 import csv
5 import codecs
6 import mmap
7 from contextlib import contextmanager, closing
8 import zipfile
9
10 from pandas.compat import StringIO, BytesIO, string_types, text_type
11 from pandas import compat
12 from pandas.io.formats.printing import pprint_thing
13 import pandas.core.common as com
14 from pandas.core.dtypes.common import is_number, is_file_like
15
16 # compat
17 from pandas.errors import (ParserError, DtypeWarning, # noqa
18 EmptyDataError, ParserWarning)
19
20 # gh-12665: Alias for now and remove later.
21 CParserError = ParserError
22
23 # common NA values
24 # no longer excluding inf representations
25 # '1.#INF','-1.#INF', '1.#INF000000',
26 _NA_VALUES = set([
27 '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A',
28 'N/A', 'n/a', 'NA', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''
29 ])
30
31
32 if compat.PY3:
33 from urllib.request import urlopen, pathname2url
34 _urlopen = urlopen
35 from urllib.parse import urlparse as parse_url
36 from urllib.parse import (uses_relative, uses_netloc, uses_params,
37 urlencode, urljoin)
38 from urllib.error import URLError
39 from http.client import HTTPException # noqa
40 else:
41 from urllib2 import urlopen as _urlopen
42 from urllib import urlencode, pathname2url # noqa
43 from urlparse import urlparse as parse_url
44 from urlparse import uses_relative, uses_netloc, uses_params, urljoin
45 from urllib2 import URLError # noqa
46 from httplib import HTTPException # noqa
47 from contextlib import contextmanager, closing # noqa
48 from functools import wraps # noqa
49
50 # @wraps(_urlopen)
51 @contextmanager
52 def urlopen(*args, **kwargs):
53 with closing(_urlopen(*args, **kwargs)) as f:
54 yield f
55
56
57 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
58 _VALID_URLS.discard('')
59
60
61 class BaseIterator(object):
62 """Subclass this and provide a "__next__()" method to obtain an iterator.
63 Useful only when the object being iterated is non-reusable (e.g. OK for a
64 parser, not for an in-memory table, yes for its iterator)."""
65
66 def __iter__(self):
67 return self
68
69 def __next__(self):
70 raise com.AbstractMethodError(self)
71
72
73 if not compat.PY3:
74 BaseIterator.next = lambda self: self.__next__()
75
76
77 def _is_url(url):
78 """Check to see if a URL has a valid protocol.
79
80 Parameters
81 ----------
82 url : str or unicode
83
84 Returns
85 -------
86 isurl : bool
87 If `url` has a valid protocol return True otherwise False.
88 """
89 try:
90 return parse_url(url).scheme in _VALID_URLS
91 except:
92 return False
93
94
95 def _expand_user(filepath_or_buffer):
96 """Return the argument with an initial component of ~ or ~user
97 replaced by that user's home directory.
98
99 Parameters
100 ----------
101 filepath_or_buffer : object to be converted if possible
102
103 Returns
104 -------
105 expanded_filepath_or_buffer : an expanded filepath or the
106 input if not expandable
107 """
108 if isinstance(filepath_or_buffer, string_types):
109 return os.path.expanduser(filepath_or_buffer)
110 return filepath_or_buffer
111
112
113 def _validate_header_arg(header):
114 if isinstance(header, bool):
115 raise TypeError("Passing a bool to header is invalid. "
116 "Use header=None for no header or "
117 "header=int or list-like of ints to specify "
118 "the row(s) making up the column names")
119
120
121 def _stringify_path(filepath_or_buffer):
122 """Attempt to convert a path-like object to a string.
123
124 Parameters
125 ----------
126 filepath_or_buffer : object to be converted
127
128 Returns
129 -------
130 str_filepath_or_buffer : maybe a string version of the object
131
132 Notes
133 -----
134 Objects supporting the fspath protocol (python 3.6+) are coerced
135 according to its __fspath__ method.
136
137 For backwards compatibility with older pythons, pathlib.Path and
138 py.path objects are specially coerced.
139
140 Any other object is passed through unchanged, which includes bytes,
141 strings, buffers, or anything else that's not even path-like.
142 """
143 try:
144 import pathlib
145 _PATHLIB_INSTALLED = True
146 except ImportError:
147 _PATHLIB_INSTALLED = False
148
149 try:
150 from py.path import local as LocalPath
151 _PY_PATH_INSTALLED = True
152 except ImportError:
153 _PY_PATH_INSTALLED = False
154
155 if hasattr(filepath_or_buffer, '__fspath__'):
156 return filepath_or_buffer.__fspath__()
157 if _PATHLIB_INSTALLED and isinstance(filepath_or_buffer, pathlib.Path):
158 return text_type(filepath_or_buffer)
159 if _PY_PATH_INSTALLED and isinstance(filepath_or_buffer, LocalPath):
160 return filepath_or_buffer.strpath
161 return filepath_or_buffer
162
163
164 def is_s3_url(url):
165 """Check for an s3, s3n, or s3a url"""
166 try:
167 return parse_url(url).scheme in ['s3', 's3n', 's3a']
168 except: # noqa
169 return False
170
171
172 def get_filepath_or_buffer(filepath_or_buffer, encoding=None,
173 compression=None, mode=None):
174 """
175 If the filepath_or_buffer is a url, translate and return the buffer.
176 Otherwise passthrough.
177
178 Parameters
179 ----------
180 filepath_or_buffer : a url, filepath (str, py.path.local or pathlib.Path),
181 or buffer
182 encoding : the encoding to use to decode py3 bytes, default is 'utf-8'
183 mode : str, optional
184
185 Returns
186 -------
187 tuple of ({a filepath_ or buffer or S3File instance},
188 encoding, str,
189 compression, str,
190 should_close, bool)
191 """
192 filepath_or_buffer = _stringify_path(filepath_or_buffer)
193
194 if _is_url(filepath_or_buffer):
195 req = _urlopen(filepath_or_buffer)
196 content_encoding = req.headers.get('Content-Encoding', None)
197 if content_encoding == 'gzip':
198 # Override compression based on Content-Encoding header
199 compression = 'gzip'
200 reader = BytesIO(req.read())
201 req.close()
202 return reader, encoding, compression, True
203
204 if is_s3_url(filepath_or_buffer):
205 from pandas.io import s3
206 return s3.get_filepath_or_buffer(filepath_or_buffer,
207 encoding=encoding,
208 compression=compression,
209 mode=mode)
210
211 if isinstance(filepath_or_buffer, (compat.string_types,
212 compat.binary_type,
213 mmap.mmap)):
214 return _expand_user(filepath_or_buffer), None, compression, False
215
216 if not is_file_like(filepath_or_buffer):
217 msg = "Invalid file path or buffer object type: {_type}"
218 raise ValueError(msg.format(_type=type(filepath_or_buffer)))
219
220 return filepath_or_buffer, None, compression, False
221
222
223 def file_path_to_url(path):
224 """
225 converts an absolute native path to a FILE URL.
226
227 Parameters
228 ----------
229 path : a path in native format
230
231 Returns
232 -------
233 a valid FILE URL
234 """
235 return urljoin('file:', pathname2url(path))
236
237
238 _compression_to_extension = {
239 'gzip': '.gz',
240 'bz2': '.bz2',
241 'zip': '.zip',
242 'xz': '.xz',
243 }
244
245
246 def _infer_compression(filepath_or_buffer, compression):
247 """
248 Get the compression method for filepath_or_buffer. If compression='infer',
249 the inferred compression method is returned. Otherwise, the input
250 compression method is returned unchanged, unless it's invalid, in which
251 case an error is raised.
252
253 Parameters
254 ----------
255 filepath_or_buf :
256 a path (str) or buffer
257 compression : str or None
258 the compression method including None for no compression and 'infer'
259
260 Returns
261 -------
262 string or None :
263 compression method
264
265 Raises
266 ------
267 ValueError on invalid compression specified
268 """
269
270 # No compression has been explicitly specified
271 if compression is None:
272 return None
273
274 # Infer compression
275 if compression == 'infer':
276 # Convert all path types (e.g. pathlib.Path) to strings
277 filepath_or_buffer = _stringify_path(filepath_or_buffer)
278 if not isinstance(filepath_or_buffer, compat.string_types):
279 # Cannot infer compression of a buffer, assume no compression
280 return None
281
282 # Infer compression from the filename/URL extension
283 for compression, extension in _compression_to_extension.items():
284 if filepath_or_buffer.endswith(extension):
285 return compression
286 return None
287
288 # Compression has been specified. Check that it's valid
289 if compression in _compression_to_extension:
290 return compression
291
292 msg = 'Unrecognized compression type: {}'.format(compression)
293 valid = ['infer', None] + sorted(_compression_to_extension)
294 msg += '\nValid compression types are {}'.format(valid)
295 raise ValueError(msg)
296
297
298 def _get_handle(path_or_buf, mode, encoding=None, compression=None,
299 memory_map=False, is_text=True):
300 """
301 Get file handle for given path/buffer and mode.
302
303 Parameters
304 ----------
305 path_or_buf :
306 a path (str) or buffer
307 mode : str
308 mode to open path_or_buf with
309 encoding : str or None
310 compression : str or None
311 Supported compression protocols are gzip, bz2, zip, and xz
312 memory_map : boolean, default False
313 See parsers._parser_params for more information.
314 is_text : boolean, default True
315 whether file/buffer is in text format (csv, json, etc.), or in binary
316 mode (pickle, etc.)
317
318 Returns
319 -------
320 f : file-like
321 A file-like object
322 handles : list of file-like objects
323 A list of file-like object that were opened in this function.
324 """
325 try:
326 from s3fs import S3File
327 need_text_wrapping = (BytesIO, S3File)
328 except ImportError:
329 need_text_wrapping = (BytesIO,)
330
331 handles = list()
332 f = path_or_buf
333
334 # Convert pathlib.Path/py.path.local or string
335 path_or_buf = _stringify_path(path_or_buf)
336 is_path = isinstance(path_or_buf, compat.string_types)
337
338 if compression:
339
340 if compat.PY2 and not is_path and encoding:
341 msg = 'compression with encoding is not yet supported in Python 2'
342 raise ValueError(msg)
343
344 # GZ Compression
345 if compression == 'gzip':
346 import gzip
347 if is_path:
348 f = gzip.open(path_or_buf, mode)
349 else:
350 f = gzip.GzipFile(fileobj=path_or_buf)
351
352 # BZ Compression
353 elif compression == 'bz2':
354 import bz2
355 if is_path:
356 f = bz2.BZ2File(path_or_buf, mode)
357 elif compat.PY2:
358 # Python 2's bz2 module can't take file objects, so have to
359 # run through decompress manually
360 f = StringIO(bz2.decompress(path_or_buf.read()))
361 path_or_buf.close()
362 else:
363 f = bz2.BZ2File(path_or_buf)
364
365 # ZIP Compression
366 elif compression == 'zip':
367 zf = BytesZipFile(path_or_buf, mode)
368 if zf.mode == 'w':
369 f = zf
370 elif zf.mode == 'r':
371 zip_names = zf.namelist()
372 if len(zip_names) == 1:
373 f = zf.open(zip_names.pop())
374 elif len(zip_names) == 0:
375 raise ValueError('Zero files found in ZIP file {}'
376 .format(path_or_buf))
377 else:
378 raise ValueError('Multiple files found in ZIP file.'
379 ' Only one file per ZIP: {}'
380 .format(zip_names))
381
382 # XZ Compression
383 elif compression == 'xz':
384 lzma = compat.import_lzma()
385 f = lzma.LZMAFile(path_or_buf, mode)
386
387 # Unrecognized Compression
388 else:
389 msg = 'Unrecognized compression type: {}'.format(compression)
390 raise ValueError(msg)
391
392 handles.append(f)
393
394 elif is_path:
395 if compat.PY2:
396 # Python 2
397 f = open(path_or_buf, mode)
398 elif encoding:
399 # Python 3 and encoding
400 f = open(path_or_buf, mode, encoding=encoding)
401 elif is_text:
402 # Python 3 and no explicit encoding
403 f = open(path_or_buf, mode, errors='replace')
404 else:
405 # Python 3 and binary mode
406 f = open(path_or_buf, mode)
407 handles.append(f)
408
409 # in Python 3, convert BytesIO or fileobjects passed with an encoding
410 if compat.PY3 and is_text and\
411 (compression or isinstance(f, need_text_wrapping)):
412 from io import TextIOWrapper
413 f = TextIOWrapper(f, encoding=encoding)
414 handles.append(f)
415
416 if memory_map and hasattr(f, 'fileno'):
417 try:
418 g = MMapWrapper(f)
419 f.close()
420 f = g
421 except Exception:
422 # we catch any errors that may have occurred
423 # because that is consistent with the lower-level
424 # functionality of the C engine (pd.read_csv), so
425 # leave the file handler as is then
426 pass
427
428 return f, handles
429
430
431 class BytesZipFile(zipfile.ZipFile, BytesIO):
432 """
433 Wrapper for standard library class ZipFile and allow the returned file-like
434 handle to accept byte strings via `write` method.
435
436 BytesIO provides attributes of file-like object and ZipFile.writestr writes
437 bytes strings into a member of the archive.
438 """
439 # GH 17778
440 def __init__(self, file, mode, compression=zipfile.ZIP_DEFLATED, **kwargs):
441 if mode in ['wb', 'rb']:
442 mode = mode.replace('b', '')
443 super(BytesZipFile, self).__init__(file, mode, compression, **kwargs)
444
445 def write(self, data):
446 super(BytesZipFile, self).writestr(self.filename, data)
447
448
449 class MMapWrapper(BaseIterator):
450 """
451 Wrapper for the Python's mmap class so that it can be properly read in
452 by Python's csv.reader class.
453
454 Parameters
455 ----------
456 f : file object
457 File object to be mapped onto memory. Must support the 'fileno'
458 method or have an equivalent attribute
459
460 """
461
462 def __init__(self, f):
463 self.mmap = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
464
465 def __getattr__(self, name):
466 return getattr(self.mmap, name)
467
468 def __iter__(self):
469 return self
470
471 def __next__(self):
472 newline = self.mmap.readline()
473
474 # readline returns bytes, not str, in Python 3,
475 # but Python's CSV reader expects str, so convert
476 # the output to str before continuing
477 if compat.PY3:
478 newline = compat.bytes_to_str(newline)
479
480 # mmap doesn't raise if reading past the allocated
481 # data but instead returns an empty string, so raise
482 # if that is returned
483 if newline == '':
484 raise StopIteration
485 return newline
486
487
488 if not compat.PY3:
489 MMapWrapper.next = lambda self: self.__next__()
490
491
492 class UTF8Recoder(BaseIterator):
493
494 """
495 Iterator that reads an encoded stream and reencodes the input to UTF-8
496 """
497
498 def __init__(self, f, encoding):
499 self.reader = codecs.getreader(encoding)(f)
500
501 def read(self, bytes=-1):
502 return self.reader.read(bytes).encode("utf-8")
503
504 def readline(self):
505 return self.reader.readline().encode("utf-8")
506
507 def next(self):
508 return next(self.reader).encode("utf-8")
509
510
511 if compat.PY3: # pragma: no cover
512 def UnicodeReader(f, dialect=csv.excel, encoding="utf-8", **kwds):
513 # ignore encoding
514 return csv.reader(f, dialect=dialect, **kwds)
515
516 def UnicodeWriter(f, dialect=csv.excel, encoding="utf-8", **kwds):
517 return csv.writer(f, dialect=dialect, **kwds)
518 else:
519 class UnicodeReader(BaseIterator):
520
521 """
522 A CSV reader which will iterate over lines in the CSV file "f",
523 which is encoded in the given encoding.
524
525 On Python 3, this is replaced (below) by csv.reader, which handles
526 unicode.
527 """
528
529 def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
530 f = UTF8Recoder(f, encoding)
531 self.reader = csv.reader(f, dialect=dialect, **kwds)
532
533 def __next__(self):
534 row = next(self.reader)
535 return [compat.text_type(s, "utf-8") for s in row]
536
537 class UnicodeWriter(object):
538
539 """
540 A CSV writer which will write rows to CSV file "f",
541 which is encoded in the given encoding.
542 """
543
544 def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
545 # Redirect output to a queue
546 self.queue = StringIO()
547 self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
548 self.stream = f
549 self.encoder = codecs.getincrementalencoder(encoding)()
550 self.quoting = kwds.get("quoting", None)
551
552 def writerow(self, row):
553 def _check_as_is(x):
554 return (self.quoting == csv.QUOTE_NONNUMERIC and
555 is_number(x)) or isinstance(x, str)
556
557 row = [x if _check_as_is(x)
558 else pprint_thing(x).encode("utf-8") for x in row]
559
560 self.writer.writerow([s for s in row])
561 # Fetch UTF-8 output from the queue ...
562 data = self.queue.getvalue()
563 data = data.decode("utf-8")
564 # ... and re-encode it into the target encoding
565 data = self.encoder.encode(data)
566 # write to the target stream
567 self.stream.write(data)
568 # empty queue
569 self.queue.truncate(0)
570
571 def writerows(self, rows):
572 def _check_as_is(x):
573 return (self.quoting == csv.QUOTE_NONNUMERIC and
574 is_number(x)) or isinstance(x, str)
575
576 for i, row in enumerate(rows):
577 rows[i] = [x if _check_as_is(x)
578 else pprint_thing(x).encode("utf-8") for x in row]
579
580 self.writer.writerows([[s for s in row] for row in rows])
581 # Fetch UTF-8 output from the queue ...
582 data = self.queue.getvalue()
583 data = data.decode("utf-8")
584 # ... and re-encode it into the target encoding
585 data = self.encoder.encode(data)
586 # write to the target stream
587 self.stream.write(data)
588 # empty queue
589 self.queue.truncate(0)
590
[end of pandas/io/common.py]
[start of pandas/util/_print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7 import locale
8 import importlib
9
10
11 def get_sys_info():
12 "Returns system information as a dict"
13
14 blob = []
15
16 # get full commit hash
17 commit = None
18 if os.path.isdir(".git") and os.path.isdir("pandas"):
19 try:
20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
21 stdout=subprocess.PIPE,
22 stderr=subprocess.PIPE)
23 so, serr = pipe.communicate()
24 except:
25 pass
26 else:
27 if pipe.returncode == 0:
28 commit = so
29 try:
30 commit = so.decode('utf-8')
31 except ValueError:
32 pass
33 commit = commit.strip().strip('"')
34
35 blob.append(('commit', commit))
36
37 try:
38 (sysname, nodename, release,
39 version, machine, processor) = platform.uname()
40 blob.extend([
41 ("python", '.'.join(map(str, sys.version_info))),
42 ("python-bits", struct.calcsize("P") * 8),
43 ("OS", "{sysname}".format(sysname=sysname)),
44 ("OS-release", "{release}".format(release=release)),
45 # ("Version", "{version}".format(version=version)),
46 ("machine", "{machine}".format(machine=machine)),
47 ("processor", "{processor}".format(processor=processor)),
48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)),
49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))),
50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))),
51 ("LOCALE", '.'.join(map(str, locale.getlocale()))),
52 ])
53 except:
54 pass
55
56 return blob
57
58
59 def show_versions(as_json=False):
60 sys_info = get_sys_info()
61
62 deps = [
63 # (MODULE_NAME, f(mod) -> mod version)
64 ("pandas", lambda mod: mod.__version__),
65 ("pytest", lambda mod: mod.__version__),
66 ("pip", lambda mod: mod.__version__),
67 ("setuptools", lambda mod: mod.__version__),
68 ("Cython", lambda mod: mod.__version__),
69 ("numpy", lambda mod: mod.version.version),
70 ("scipy", lambda mod: mod.version.version),
71 ("pyarrow", lambda mod: mod.__version__),
72 ("xarray", lambda mod: mod.__version__),
73 ("IPython", lambda mod: mod.__version__),
74 ("sphinx", lambda mod: mod.__version__),
75 ("patsy", lambda mod: mod.__version__),
76 ("dateutil", lambda mod: mod.__version__),
77 ("pytz", lambda mod: mod.VERSION),
78 ("blosc", lambda mod: mod.__version__),
79 ("bottleneck", lambda mod: mod.__version__),
80 ("tables", lambda mod: mod.__version__),
81 ("numexpr", lambda mod: mod.__version__),
82 ("feather", lambda mod: mod.__version__),
83 ("matplotlib", lambda mod: mod.__version__),
84 ("openpyxl", lambda mod: mod.__version__),
85 ("xlrd", lambda mod: mod.__VERSION__),
86 ("xlwt", lambda mod: mod.__VERSION__),
87 ("xlsxwriter", lambda mod: mod.__version__),
88 ("lxml", lambda mod: mod.etree.__version__),
89 ("bs4", lambda mod: mod.__version__),
90 ("html5lib", lambda mod: mod.__version__),
91 ("sqlalchemy", lambda mod: mod.__version__),
92 ("pymysql", lambda mod: mod.__version__),
93 ("psycopg2", lambda mod: mod.__version__),
94 ("jinja2", lambda mod: mod.__version__),
95 ("s3fs", lambda mod: mod.__version__),
96 ("fastparquet", lambda mod: mod.__version__),
97 ("pandas_gbq", lambda mod: mod.__version__),
98 ("pandas_datareader", lambda mod: mod.__version__),
99 ]
100
101 deps_blob = list()
102 for (modname, ver_f) in deps:
103 try:
104 if modname in sys.modules:
105 mod = sys.modules[modname]
106 else:
107 mod = importlib.import_module(modname)
108 ver = ver_f(mod)
109 deps_blob.append((modname, ver))
110 except:
111 deps_blob.append((modname, None))
112
113 if (as_json):
114 try:
115 import json
116 except:
117 import simplejson as json
118
119 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
120
121 if as_json is True:
122 print(j)
123 else:
124 with codecs.open(as_json, "wb", encoding='utf8') as f:
125 json.dump(j, f, indent=2)
126
127 else:
128
129 print("\nINSTALLED VERSIONS")
130 print("------------------")
131
132 for k, stat in sys_info:
133 print("{k}: {stat}".format(k=k, stat=stat))
134
135 print("")
136 for k, stat in deps_blob:
137 print("{k}: {stat}".format(k=k, stat=stat))
138
139
140 def main():
141 from optparse import OptionParser
142 parser = OptionParser()
143 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
144 help="Save output as JSON into file, pass in "
145 "'-' to output to stdout")
146
147 (options, args) = parser.parse_args()
148
149 if options.json == "-":
150 options.json = True
151
152 show_versions(as_json=options.json)
153
154 return 0
155
156
157 if __name__ == "__main__":
158 sys.exit(main())
159
[end of pandas/util/_print_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
4274b840e64374a39a0285c2174968588753ec35
|
to_csv failing with encoding='utf-16'
#### Code Sample:
```python
df.to_csv('test.gz', sep='~', header=False, index=False,compression='gzip',line_terminator='\r\n',encoding='utf-16', na_rep='')
```
/opt/anaconda/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
#### Problem description
First of all, a big thank you for supporting pandas; life is easier and more fun with pandas in the toolkit.
In the previous version, 0.22, we were able to call _to_csv_ with encoding='utf-16' to handle Japanese, Chinese, and other content properly. We need the utf-16 encoding for downstream steps such as uploading data to an MSSQL server in bulk mode.
I would like to know whether there is a workaround that lets us keep utf-16 support.
Any other suggestions are welcome.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.114-42-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: POSIX
LOCALE: None.None
pandas: 0.23.0
pytest: 3.5.1
pip: 10.0.1
setuptools: 39.1.0
Cython: 0.28.2
numpy: 1.14.2
scipy: 1.1.0
pyarrow: 0.9.0
xarray: None
IPython: 6.4.0
sphinx: 1.7.4
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.3
numexpr: 2.6.5
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.3
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
|
Can you please post a reproducible example? I tried the snippet below locally on master and it worked fine:
```python
import io

import pandas as pd

buf = io.BytesIO(b'\xff\x34')
df = pd.read_csv(buf, encoding='utf16')

outbuf = io.StringIO()
df.to_csv(outbuf, encoding='utf-16')
```
Note that my code is not failing because of the data content.
Here is an example using the sklearn datasets:
```python
import numpy as np
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()
data1 = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                     columns=iris['feature_names'] + ['target'])
data1.to_csv('test.gz',
             sep='~',
             header=False, index=False,
             compression='gzip',
             line_terminator='\r\n',
             encoding='utf-16',
             na_rep='')
```
If I use the compression parameter I get the _UnicodeDecodeError_; without it, the call runs properly.
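As a stop-gap until a fix lands, one workaround sketch (my suggestion, not from the thread; it assumes the `data1` frame from the snippet above) is to render the CSV into an in-memory string and then gzip-compress it with the desired encoding yourself, which is essentially what the eventual patch does internally:
```python
import gzip
import io

# Render the CSV without compression first; writing to a text buffer produces
# no bytes yet, so no encoding is applied at this stage.
buf = io.StringIO()
data1.to_csv(buf, sep='~', header=False, index=False,
             line_terminator='\r\n', na_rep='')

# Then encode and compress in one step via gzip's text mode.
with gzip.open('test.gz', mode='wt', encoding='utf-16', newline='') as f:
    f.write(buf.getvalue())
```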
|
2018-06-03T10:03:06Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt
--- a/doc/source/whatsnew/v0.23.1.txt
+++ b/doc/source/whatsnew/v0.23.1.txt
@@ -92,6 +92,7 @@ I/O
- Bug in IO methods specifying ``compression='zip'`` which produced uncompressed zip archives (:issue:`17778`, :issue:`21144`)
- Bug in :meth:`DataFrame.to_stata` which prevented exporting DataFrames to buffers and most file-like objects (:issue:`21041`)
+- Bug in :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` causes encoding error when compression and encoding are specified (:issue:`21241`, :issue:`21118`)
-
Plotting
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -9,6 +9,7 @@
import numpy as np
from pandas.core.dtypes.missing import notna
+from pandas.core.dtypes.inference import is_file_like
from pandas.core.index import Index, MultiIndex
from pandas import compat
from pandas.compat import (StringIO, range, zip)
@@ -127,14 +128,19 @@ def save(self):
else:
encoding = self.encoding
- if hasattr(self.path_or_buf, 'write'):
- f = self.path_or_buf
- close = False
+ # PR 21300 uses string buffer to receive csv writing and dump into
+ # file-like output with compression as option. GH 21241, 21118
+ f = StringIO()
+ if not is_file_like(self.path_or_buf):
+ # path_or_buf is path
+ path_or_buf = self.path_or_buf
+ elif hasattr(self.path_or_buf, 'name'):
+ # path_or_buf is file handle
+ path_or_buf = self.path_or_buf.name
else:
- f, handles = _get_handle(self.path_or_buf, self.mode,
- encoding=encoding,
- compression=None)
- close = True if self.compression is None else False
+ # path_or_buf is file-like IO objects.
+ f = self.path_or_buf
+ path_or_buf = None
try:
writer_kwargs = dict(lineterminator=self.line_terminator,
@@ -151,18 +157,16 @@ def save(self):
self._save()
finally:
- # GH 17778 handles compression for byte strings.
- if not close and self.compression:
- f.close()
- with open(f.name, 'r') as f:
- data = f.read()
- f, handles = _get_handle(f.name, self.mode,
+ # GH 17778 handles zip compression for byte strings separately.
+ buf = f.getvalue()
+ if path_or_buf:
+ f, handles = _get_handle(path_or_buf, self.mode,
encoding=encoding,
compression=self.compression)
- f.write(data)
- close = True
- if close:
+ f.write(buf)
f.close()
+ for _fh in handles:
+ _fh.close()
def _save_header(self):
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-4592
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TST: sometimes failing ujson comparison
cc @Komnomnomnom
skipping test via: b5ff81d34d74efdf88b8151ff58ed16ff7c02c67
fails about 1 in 3 times (and only seems to happen on py3)
https://travis-ci.org/pydata/pandas/jobs/10223886
don't use `now()`; pick a known value (that you can then compare to exactly)
```
def test_datetime_units(self):
from pandas.lib import Timestamp
val = datetime.datetime.now()
stamp = Timestamp(val)
```
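A sketch (my illustration, not part of the original report) of the suggested change: pin the input to a fixed timestamp so the expected value can be written down once and compared exactly.
```python
# Sketch: replace datetime.datetime.now() with a pinned value for a deterministic test.
import datetime
from pandas.lib import Timestamp  # import path as used in the snippet above

val = datetime.datetime(2013, 8, 17, 21, 17, 12, 215504)
stamp = Timestamp(val)
# nanoseconds since the epoch for the pinned value
assert stamp.value == 1376774232215504000
```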
</issue>
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 - `pytz <http://pytz.sourceforge.net/>`__
71 - Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75
76 - `numexpr <http://code.google.com/p/numexpr/>`__
77 - Needed to accelerate some expression evaluation operations
78 - Required by `PyTables`
79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__
80 - Needed to accelerate certain numerical operations
81
82 Optional dependencies
83 ~~~~~~~~~~~~~~~~~~~~~
84
85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher.
86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
89 - `statsmodels <http://statsmodels.sourceforge.net/>`__
90 - Needed for parts of :mod:`pandas.stats`
91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
92 - openpyxl version 1.6.1 or higher, for writing .xlsx files
93 - xlrd >= 0.9.0
94 - Needed for Excel I/O
95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
96 access.
97 - One of the following combinations of libraries is needed to use the
98 top-level :func:`~pandas.io.html.read_html` function:
99
100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
101 okay.)
102 - `BeautifulSoup4`_ and `lxml`_
103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_
104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>`
105 for reasons as to why you should probably **not** take this approach.
106
107 .. warning::
108
109 - if you install `BeautifulSoup4`_ you must install either
110 `lxml`_ or `html5lib`_ or both.
111 :func:`~pandas.io.html.read_html` will **not** work with *only*
112 `BeautifulSoup4`_ installed.
113 - You are highly encouraged to read :ref:`HTML reading gotchas
114 <html-gotchas>`. It explains issues surrounding the installation and
115 usage of the above three libraries
116 - You may need to install an older version of `BeautifulSoup4`_:
117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
118 32-bit Ubuntu/Debian
119 - Additionally, if you're using `Anaconda`_ you should definitely
120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
121
122 .. note::
123
124 - if you're on a system with ``apt-get`` you can do
125
126 .. code-block:: sh
127
128 sudo apt-get build-dep python-lxml
129
130 to get the necessary dependencies for installation of `lxml`_. This
131 will prevent further headaches down the line.
132
133
134 .. _html5lib: https://github.com/html5lib/html5lib-python
135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup
136 .. _lxml: http://lxml.de
137 .. _Anaconda: https://store.continuum.io/cshop/anaconda
138
139
140 Installation from sources
141 =========================
142
143 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
144 which can be installed from pypi::
145
146 pip install cython
147
148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
149
150 python setup.py install
151
152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
153
154 python setup.py develop
155
156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
157 (the optional ``-e`` option is for installing it in
158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
159
160 pip install -e .
161
162 On Windows, you will need to install MinGW and execute::
163
164 python setup.py build --compiler=mingw32
165 python setup.py install
166
167 See http://pandas.pydata.org/ for more information.
168
169 License
170 =======
171
172 BSD
173
174 Documentation
175 =============
176
177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
178
179 The Sphinx documentation should provide a good starting point for learning how
180 to use the library. Expect the docs to continue to expand as time goes on.
181
182 Background
183 ==========
184
185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
186 has been under active development since then.
187
188 Discussion and Development
189 ==========================
190
191 Since ``pandas`` development is related to a number of other scientific
192 Python projects, questions are welcome on the scipy-user mailing
193 list. Specialized discussions or design issues should take place on
194 the pystatsmodels mailing list / Google group, where
195 ``scikits.statsmodels`` and other libraries will also be discussed:
196
197 http://groups.google.com/group/pystatsmodels
198
199 .. _NumPy: http://numpy.scipy.org/
200
[end of README.rst]
[start of pandas/io/stata.py]
1 """
2 Module contains tools for processing Stata files into DataFrames
3
4 The StataReader below was originally written by Joe Presbrey as part of PyDTA.
5 It has been extended and improved by Skipper Seabold from the Statsmodels project
6 who also developed the StataWriter and was finally added to pandas in an once again
7 improved version.
8
9 You can find more information on http://presbrey.mit.edu/PyDTA and
10 http://statsmodels.sourceforge.net/devel/
11 """
12 # TODO: Fix this module so it can use cross-compatible zip, map, and range
13 import numpy as np
14
15 import sys
16 import struct
17 from pandas.core.base import StringMixin
18 from pandas.core.frame import DataFrame
19 from pandas.core.series import Series
20 from pandas.core.categorical import Categorical
21 import datetime
22 from pandas import compat
23 from pandas import compat
24 from pandas.compat import StringIO, long, lrange, lmap, lzip
25 from pandas import isnull
26 from pandas.io.parsers import _parser_params, Appender
27 from pandas.io.common import get_filepath_or_buffer
28
29
30 _read_stata_doc = """
31 Read Stata file into DataFrame
32
33 %s
34 """ % (_parser_params)
35
36
37 @Appender(_read_stata_doc)
38 def read_stata(filepath_or_buffer, convert_dates=True, convert_categoricals=True, encoding=None, index=None):
39 reader = StataReader(filepath_or_buffer, encoding)
40
41 return reader.data(convert_dates, convert_categoricals, index)
42
43 _date_formats = ["%tc", "%tC", "%td", "%tw", "%tm", "%tq", "%th", "%ty"]
44
45 def _stata_elapsed_date_to_datetime(date, fmt):
46 """
47 Convert from SIF to datetime. http://www.stata.com/help.cgi?datetime
48
49 Parameters
50 ----------
51 date : int
52 The Stata Internal Format date to convert to datetime according to fmt
53 fmt : str
54 The format to convert to. Can be, tc, td, tw, tm, tq, th, ty
55
56 Examples
57 --------
58 >>> _stata_elapsed_date_to_datetime(52, "%tw") datetime.datetime(1961, 1, 1, 0, 0)
59
60 Notes
61 -----
62 datetime/c - tc
63 milliseconds since 01jan1960 00:00:00.000, assuming 86,400 s/day
64 datetime/C - tC - NOT IMPLEMENTED
65 milliseconds since 01jan1960 00:00:00.000, adjusted for leap seconds
66 date - td
67 days since 01jan1960 (01jan1960 = 0)
68 weekly date - tw
69 weeks since 1960w1
70 This assumes 52 weeks in a year, then adds 7 * remainder of the weeks.
71 The datetime value is the start of the week in terms of days in the
72 year, not ISO calendar weeks.
73 monthly date - tm
74 months since 1960m1
75 quarterly date - tq
76 quarters since 1960q1
77 half-yearly date - th
78 half-years since 1960h1 yearly
79 date - ty
80 years since 0000
81
82 If you don't have pandas with datetime support, then you can't do
83 milliseconds accurately.
84 """
85 #NOTE: we could run into overflow / loss of precision situations here
86 # casting to int, but I'm not sure what to do. datetime won't deal with
87 # numpy types and numpy datetime isn't mature enough / we can't rely on
88 # pandas version > 0.7.1
89 #TODO: IIRC relative delta doesn't play well with np.datetime?
90 if np.isnan(date):
91 return np.datetime64('nat')
92
93 date = int(date)
94 stata_epoch = datetime.datetime(1960, 1, 1)
95 if fmt in ["%tc", "tc"]:
96 from dateutil.relativedelta import relativedelta
97 return stata_epoch + relativedelta(microseconds=date * 1000)
98 elif fmt in ["%tC", "tC"]:
99 from warnings import warn
100 warn("Encountered %tC format. Leaving in Stata Internal Format.")
101 return date
102 elif fmt in ["%td", "td"]:
103 return stata_epoch + datetime.timedelta(int(date))
104 elif fmt in ["%tw", "tw"]: # does not count leap days - 7 days is a week
105 year = datetime.datetime(stata_epoch.year + date // 52, 1, 1)
106 day_delta = (date % 52) * 7
107 return year + datetime.timedelta(int(day_delta))
108 elif fmt in ["%tm", "tm"]:
109 year = stata_epoch.year + date // 12
110 month_delta = (date % 12) + 1
111 return datetime.datetime(year, month_delta, 1)
112 elif fmt in ["%tq", "tq"]:
113 year = stata_epoch.year + date // 4
114 month_delta = (date % 4) * 3 + 1
115 return datetime.datetime(year, month_delta, 1)
116 elif fmt in ["%th", "th"]:
117 year = stata_epoch.year + date // 2
118 month_delta = (date % 2) * 6 + 1
119 return datetime.datetime(year, month_delta, 1)
120 elif fmt in ["%ty", "ty"]:
121 if date > 0:
122 return datetime.datetime(date, 1, 1)
123 else: # don't do negative years bc can't mix dtypes in column
124 raise ValueError("Year 0 and before not implemented")
125 else:
126 raise ValueError("Date fmt %s not understood" % fmt)
127
128
129 def _datetime_to_stata_elapsed(date, fmt):
130 """
131 Convert from datetime to SIF. http://www.stata.com/help.cgi?datetime
132
133 Parameters
134 ----------
135 date : datetime.datetime
136 The date to convert to the Stata Internal Format given by fmt
137 fmt : str
138 The format to convert to. Can be, tc, td, tw, tm, tq, th, ty
139 """
140 if not isinstance(date, datetime.datetime):
141 raise ValueError("date should be datetime.datetime format")
142 stata_epoch = datetime.datetime(1960, 1, 1)
143 if fmt in ["%tc", "tc"]:
144 delta = date - stata_epoch
145 return (delta.days * 86400000 + delta.seconds*1000 +
146 delta.microseconds/1000)
147 elif fmt in ["%tC", "tC"]:
148 from warnings import warn
149 warn("Stata Internal Format tC not supported.")
150 return date
151 elif fmt in ["%td", "td"]:
152 return (date - stata_epoch).days
153 elif fmt in ["%tw", "tw"]:
154 return (52*(date.year-stata_epoch.year) +
155 (date - datetime.datetime(date.year, 1, 1)).days / 7)
156 elif fmt in ["%tm", "tm"]:
157 return (12 * (date.year - stata_epoch.year) + date.month - 1)
158 elif fmt in ["%tq", "tq"]:
159 return 4*(date.year-stata_epoch.year) + int((date.month - 1)/3)
160 elif fmt in ["%th", "th"]:
161 return 2 * (date.year - stata_epoch.year) + int(date.month > 6)
162 elif fmt in ["%ty", "ty"]:
163 return date.year
164 else:
165 raise ValueError("fmt %s not understood" % fmt)
166
167
168 class StataMissingValue(StringMixin):
169 """
170 An observation's missing value.
171
172 Parameters
173 -----------
174 offset
175 value
176
177 Attributes
178 ----------
179 string
180 value
181
182 Notes
183 -----
184 More information: <http://www.stata.com/help.cgi?missing>
185 """
186
187 def __init__(self, offset, value):
188 self._value = value
189 if type(value) is int or type(value) is long:
190 self._str = value - offset is 1 and \
191 '.' or ('.' + chr(value - offset + 96))
192 else:
193 self._str = '.'
194 string = property(lambda self: self._str, doc="The Stata representation of the missing value: '.', '.a'..'.z'")
195 value = property(lambda self: self._value, doc='The binary representation of the missing value.')
196
197 def __unicode__(self):
198 return self.string
199
200 def __repr__(self):
201 # not perfect :-/
202 return "%s(%s)" % (self.__class__, self)
203
204
205 class StataParser(object):
206 def __init__(self, encoding):
207 if(encoding is None):
208 self._encoding = 'cp1252'
209 else:
210 self._encoding = encoding
211
212 #type code.
213 #--------------------
214 #str1 1 = 0x01
215 #str2 2 = 0x02
216 #...
217 #str244 244 = 0xf4
218 #byte 251 = 0xfb (sic)
219 #int 252 = 0xfc
220 #long 253 = 0xfd
221 #float 254 = 0xfe
222 #double 255 = 0xff
223 #--------------------
224 #NOTE: the byte type seems to be reserved for categorical variables
225 # with a label, but the underlying variable is -127 to 100
226 # we're going to drop the label and cast to int
227 self.DTYPE_MAP = \
228 dict(
229 lzip(range(1, 245), ['a' + str(i) for i in range(1, 245)]) +
230 [
231 (251, np.int16),
232 (252, np.int32),
233 (253, np.int64),
234 (254, np.float32),
235 (255, np.float64)
236 ]
237 )
238 self.TYPE_MAP = lrange(251) + list('bhlfd')
239 #NOTE: technically, some of these are wrong. there are more numbers
240 # that can be represented. it's the 27 ABOVE and BELOW the max listed
241 # numeric data type in [U] 12.2.2 of the 11.2 manual
242 self.MISSING_VALUES = \
243 {
244 'b': (-127, 100),
245 'h': (-32767, 32740),
246 'l': (-2147483647, 2147483620),
247 'f': (-1.701e+38, +1.701e+38),
248 'd': (-1.798e+308, +8.988e+307)
249 }
250
251 self.OLD_TYPE_MAPPING = \
252 {
253 'i': 252,
254 'f': 254,
255 'b': 251
256 }
257
258 def _decode_bytes(self, str, errors=None):
259 if compat.PY3:
260 return str.decode(self._encoding, errors)
261 else:
262 return str
263
264
265 class StataReader(StataParser):
266 """
267 Class for working with a Stata dataset. There are two possibilities for usage:
268
269 * The from_dta() method on the DataFrame class.
270 This will return a DataFrame with the Stata dataset. Note that when using the
271 from_dta() method, you will not have access to meta-information like variable
272 labels or the data label.
273
274 * Work with this object directly. Upon instantiation, the header of the Stata data
275 file is read, giving you access to attributes like variable_labels(), data_label(),
276       nobs(), ... A DataFrame with the data is returned by the data() method; this will
277       also fill up the value_labels. Note that calling the value_labels() method will
278       result in an error if the data() method has not been called yet. This is because
279 the value labels are stored at the end of a Stata dataset, after the data.
280
281 Parameters
282 ----------
283 path_or_buf : string or file-like object
284 Path to .dta file or object implementing a binary read() functions
285 encoding : string, None or encoding
286 Encoding used to parse the files. Note that Stata doesn't
287 support unicode. None defaults to cp1252.
288 """
289 def __init__(self, path_or_buf, encoding=None):
290 super(StataReader, self).__init__(encoding)
291 self.col_sizes = ()
292 self._has_string_data = False
293 self._missing_values = False
294 self._data_read = False
295 self._value_labels_read = False
296 if isinstance(path_or_buf, str):
297 path_or_buf, encoding = get_filepath_or_buffer(path_or_buf, encoding='cp1252')
298 if encoding is not None:
299 self._encoding = encoding
300
301 if isinstance(path_or_buf, (str, compat.text_type, bytes)):
302 self.path_or_buf = open(path_or_buf, 'rb')
303 else:
304 self.path_or_buf = path_or_buf
305
306 self._read_header()
307
308 def _read_header(self):
309 # header
310 self.format_version = struct.unpack('b', self.path_or_buf.read(1))[0]
311 if self.format_version not in [104, 105, 108, 113, 114, 115]:
312 raise ValueError("Version of given Stata file is not 104, 105, 108, 113 (Stata 8/9), 114 (Stata 10/11) or 115 (Stata 12)")
313 self.byteorder = self.path_or_buf.read(1) == 0x1 and '>' or '<'
314 self.filetype = struct.unpack('b', self.path_or_buf.read(1))[0]
315 self.path_or_buf.read(1) # unused
316
317 self.nvar = struct.unpack(self.byteorder + 'H', self.path_or_buf.read(2))[0]
318 self.nobs = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
319 if self.format_version > 105:
320 self.data_label = self.path_or_buf.read(81)
321 else:
322 self.data_label = self.path_or_buf.read(32)
323 if self.format_version > 104:
324 self.time_stamp = self.path_or_buf.read(18)
325
326 # descriptors
327 if self.format_version > 108:
328 typlist = [ord(self.path_or_buf.read(1)) for i in range(self.nvar)]
329 else:
330 typlist = [self.OLD_TYPE_MAPPING[self._decode_bytes(self.path_or_buf.read(1))] for i in range(self.nvar)]
331
332 try:
333 self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
334 except:
335 raise ValueError("cannot convert stata types [{0}]".format(','.join(typlist)))
336 try:
337 self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
338 except:
339 raise ValueError("cannot convert stata dtypes [{0}]".format(','.join(typlist)))
340
341 if self.format_version > 108:
342 self.varlist = [self._null_terminate(self.path_or_buf.read(33)) for i in range(self.nvar)]
343 else:
344 self.varlist = [self._null_terminate(self.path_or_buf.read(9)) for i in range(self.nvar)]
345 self.srtlist = struct.unpack(self.byteorder + ('h' * (self.nvar + 1)), self.path_or_buf.read(2 * (self.nvar + 1)))[:-1]
346 if self.format_version > 113:
347 self.fmtlist = [self._null_terminate(self.path_or_buf.read(49)) for i in range(self.nvar)]
348 elif self.format_version > 104:
349 self.fmtlist = [self._null_terminate(self.path_or_buf.read(12)) for i in range(self.nvar)]
350 else:
351 self.fmtlist = [self._null_terminate(self.path_or_buf.read(7)) for i in range(self.nvar)]
352 if self.format_version > 108:
353 self.lbllist = [self._null_terminate(self.path_or_buf.read(33)) for i in range(self.nvar)]
354 else:
355 self.lbllist = [self._null_terminate(self.path_or_buf.read(9)) for i in range(self.nvar)]
356 if self.format_version > 105:
357 self.vlblist = [self._null_terminate(self.path_or_buf.read(81)) for i in range(self.nvar)]
358 else:
359 self.vlblist = [self._null_terminate(self.path_or_buf.read(32)) for i in range(self.nvar)]
360
361 # ignore expansion fields (Format 105 and later)
362 # When reading, read five bytes; the last four bytes now tell you the
363 # size of the next read, which you discard. You then continue like
364 # this until you read 5 bytes of zeros.
365
366 if self.format_version > 104:
367 while True:
368 data_type = struct.unpack(self.byteorder + 'b', self.path_or_buf.read(1))[0]
369 if self.format_version > 108:
370 data_len = struct.unpack(self.byteorder + 'i', self.path_or_buf.read(4))[0]
371 else:
372 data_len = struct.unpack(self.byteorder + 'h', self.path_or_buf.read(2))[0]
373 if data_type == 0:
374 break
375 self.path_or_buf.read(data_len)
376
377 # necessary data to continue parsing
378 self.data_location = self.path_or_buf.tell()
379 self.has_string_data = len([x for x in self.typlist if type(x) is int]) > 0
380 self._col_size()
381
382 def _calcsize(self, fmt):
383 return type(fmt) is int and fmt or struct.calcsize(self.byteorder + fmt)
384
385 def _col_size(self, k=None):
386 """Calculate size of a data record."""
387 if len(self.col_sizes) == 0:
388 self.col_sizes = lmap(lambda x: self._calcsize(x), self.typlist)
389 if k is None:
390 return self.col_sizes
391 else:
392 return self.col_sizes[k]
393
394 def _unpack(self, fmt, byt):
395 d = struct.unpack(self.byteorder + fmt, byt)[0]
396 if fmt[-1] in self.MISSING_VALUES:
397 nmin, nmax = self.MISSING_VALUES[fmt[-1]]
398 if d < nmin or d > nmax:
399 if self._missing_values:
400 return StataMissingValue(nmax, d)
401 else:
402 return None
403 return d
404
405 def _null_terminate(self, s):
406 if compat.PY3: # have bytes not strings, so must decode
407 null_byte = b"\0"
408 try:
409 s = s[:s.index(null_byte)]
410 except:
411 pass
412 return s.decode(self._encoding)
413 else:
414 null_byte = "\0"
415 try:
416 return s.lstrip(null_byte)[:s.index(null_byte)]
417 except:
418 return s
419
420 def _next(self):
421 typlist = self.typlist
422 if self.has_string_data:
423 data = [None] * self.nvar
424 for i in range(len(data)):
425 if type(typlist[i]) is int:
426 data[i] = self._null_terminate(self.path_or_buf.read(typlist[i]))
427 else:
428 data[i] = self._unpack(typlist[i], self.path_or_buf.read(self._col_size(i)))
429 return data
430 else:
431 return list(map(lambda i: self._unpack(typlist[i],
432 self.path_or_buf.read(self._col_size(i))),
433 range(self.nvar)))
434
435 def _dataset(self):
436 """
437 Returns a Python generator object for iterating over the dataset.
438
439
440 Parameters
441 ----------
442
443 Returns
444 -------
445 Generator object for iterating over the dataset. Yields each row of
446 observations as a list by default.
447
448 Notes
449 -----
450 If missing_values is True during instantiation of StataReader then
451 observations with _StataMissingValue(s) are not filtered and should
452         be handled by your application.
453 """
454
455 try:
456 self._file.seek(self._data_location)
457 except Exception:
458 pass
459
460 for i in range(self.nobs):
461 yield self._next()
462
463 def _read_value_labels(self):
464 if not self._data_read:
465 raise Exception("Data has not been read. Because of the layout of Stata files, this is necessary before reading value labels.")
466 if self._value_labels_read:
467 raise Exception("Value labels have already been read.")
468
469 self.value_label_dict = dict()
470
471 if self.format_version <= 108:
472 return # Value labels are not supported in version 108 and earlier.
473
474 while True:
475 slength = self.path_or_buf.read(4)
476 if not slength:
477                 break  # end of variable label table
478 labname = self._null_terminate(self.path_or_buf.read(33))
479 self.path_or_buf.read(3) # padding
480
481 n = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
482 txtlen = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
483 off = []
484 for i in range(n):
485 off.append(struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0])
486 val = []
487 for i in range(n):
488 val.append(struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0])
489 txt = self.path_or_buf.read(txtlen)
490 self.value_label_dict[labname] = dict()
491 for i in range(n):
492 self.value_label_dict[labname][val[i]] = self._null_terminate(txt[off[i]:])
493 self._value_labels_read = True
494
495 def data(self, convert_dates=True, convert_categoricals=True, index=None):
496 """
497 Reads observations from Stata file, converting them into a dataframe
498
499 Parameters
500 ----------
501 convert_dates : boolean, defaults to True
502 Convert date variables to DataFrame time values
503 convert_categoricals : boolean, defaults to True
504 Read value labels and convert columns to Categorical/Factor variables
505 index : identifier of index column
506 identifier of column that should be used as index of the DataFrame
507
508 Returns
509 -------
510 y : DataFrame instance
511 """
512 if self._data_read:
513 raise Exception("Data has already been read.")
514 self._data_read = True
515
516 stata_dta = self._dataset()
517
518 data = []
519 for rownum, line in enumerate(stata_dta):
520 # doesn't handle missing value objects, just casts
521 # None will only work without missing value object.
522 for i, val in enumerate(line):
523 #NOTE: This will only be scalar types because missing strings
524 # are empty not None in Stata
525 if val is None:
526 line[i] = np.nan
527 data.append(tuple(line))
528
529 if convert_categoricals:
530 self._read_value_labels()
531
532 data = DataFrame(data, columns=self.varlist, index=index)
533
534 cols_ = np.where(self.dtyplist)[0]
535 for i in cols_:
536 if self.dtyplist[i] is not None:
537 col = data.columns[i]
538 if data[col].dtype is not np.dtype(object):
539 data[col] = Series(data[col], data[col].index, self.dtyplist[i])
540
541 if convert_dates:
542 cols = np.where(lmap(lambda x: x in _date_formats, self.fmtlist))[0]
543 for i in cols:
544 col = data.columns[i]
545 data[col] = data[col].apply(_stata_elapsed_date_to_datetime, args=(self.fmtlist[i],))
546
547 if convert_categoricals:
548 cols = np.where(lmap(lambda x: x in compat.iterkeys(self.value_label_dict), self.lbllist))[0]
549 for i in cols:
550 col = data.columns[i]
551 labeled_data = np.copy(data[col])
552 labeled_data = labeled_data.astype(object)
553 for k, v in compat.iteritems(self.value_label_dict[self.lbllist[i]]):
554 labeled_data[(data[col] == k).values] = v
555 data[col] = Categorical.from_array(labeled_data)
556
557 return data
558
559 def data_label(self):
560 """Returns data label of Stata file"""
561 return self.data_label
562
563 def variable_labels(self):
564         Returns variable labels as a dict, associating each variable name with its corresponding label
565 return dict(zip(self.varlist, self.vlblist))
566
567 def value_labels(self):
568         Returns a dict associating each variable name with a dict that maps each value to its corresponding label
569 if not self._value_labels_read:
570 self._read_value_labels()
571
572 return self.value_label_dict
573
574
575 def _open_file_binary_write(fname, encoding):
576 if hasattr(fname, 'write'):
577 #if 'b' not in fname.mode:
578 return fname
579 return open(fname, "wb")
580
581
582 def _set_endianness(endianness):
583 if endianness.lower() in ["<", "little"]:
584 return "<"
585 elif endianness.lower() in [">", "big"]:
586 return ">"
587 else: # pragma : no cover
588 raise ValueError("Endianness %s not understood" % endianness)
589
590
591 def _pad_bytes(name, length):
592 """
593     Takes a char string and pads it with null bytes until it is `length` chars long
594 """
595 return name + "\x00" * (length - len(name))
596
597
598 def _default_names(nvar):
599 """
600 Returns default Stata names v1, v2, ... vnvar
601 """
602 return ["v%d" % i for i in range(1, nvar+1)]
603
604
605 def _convert_datetime_to_stata_type(fmt):
606 """
607 Converts from one of the stata date formats to a type in TYPE_MAP
608 """
609 if fmt in ["tc", "%tc", "td", "%td", "tw", "%tw", "tm", "%tm", "tq",
610 "%tq", "th", "%th", "ty", "%ty"]:
611 return np.float64 # Stata expects doubles for SIFs
612 else:
613 raise ValueError("fmt %s not understood" % fmt)
614
615
616 def _maybe_convert_to_int_keys(convert_dates, varlist):
617 new_dict = {}
618 for key in convert_dates:
619 if not convert_dates[key].startswith("%"): # make sure proper fmts
620 convert_dates[key] = "%" + convert_dates[key]
621 if key in varlist:
622 new_dict.update({varlist.index(key): convert_dates[key]})
623 else:
624 if not isinstance(key, int):
625                 raise ValueError("convert_dates key is not in varlist and is not an int")
626 new_dict.update({key: convert_dates[key]})
627 return new_dict
628
629
630 def _dtype_to_stata_type(dtype):
631 """
632 Converts dtype types to stata types. Returns the byte of the given ordinal.
633 See TYPE_MAP and comments for an explanation. This is also explained in
634 the dta spec.
635 1 - 244 are strings of this length
636 251 - chr(251) - for int8 and int16, byte
637 252 - chr(252) - for int32, int
638 253 - chr(253) - for int64, long
639 254 - chr(254) - for float32, float
640 255 - chr(255) - double, double
641
642 If there are dates to convert, then dtype will already have the correct
643 type inserted.
644 """
645 #TODO: expand to handle datetime to integer conversion
646 if dtype.type == np.string_:
647 return chr(dtype.itemsize)
648 elif dtype.type == np.object_: # try to coerce it to the biggest string
649 # not memory efficient, what else could we do?
650 return chr(244)
651 elif dtype == np.float64:
652 return chr(255)
653 elif dtype == np.float32:
654 return chr(254)
655 elif dtype == np.int64:
656 return chr(253)
657 elif dtype == np.int32:
658 return chr(252)
659 elif dtype == np.int8 or dtype == np.int16:
660 return chr(251)
661 else: # pragma : no cover
662 raise ValueError("Data type %s not currently understood. "
663 "Please report an error to the developers." % dtype)
664
665
666 def _dtype_to_default_stata_fmt(dtype):
667 """
668 Maps numpy dtype to stata's default format for this type. Not terribly
669 important since users can change this in Stata. Semantics are
670
671 string -> "%DDs" where DD is the length of the string
672 float64 -> "%10.0g"
673 float32 -> "%9.0g"
674 int64 -> "%9.0g"
675 int32 -> "%12.0g"
676 int16 -> "%8.0g"
677 int8 -> "%8.0g"
678 """
679 #TODO: expand this to handle a default datetime format?
680 if dtype.type == np.string_:
681 return "%" + str(dtype.itemsize) + "s"
682 elif dtype.type == np.object_:
683 return "%244s"
684 elif dtype == np.float64:
685 return "%10.0g"
686 elif dtype == np.float32:
687 return "%9.0g"
688 elif dtype == np.int64:
689 return "%9.0g"
690 elif dtype == np.int32:
691 return "%12.0g"
692 elif dtype == np.int8 or dtype == np.int16:
693 return "%8.0g"
694 else: # pragma : no cover
695 raise ValueError("Data type %s not currently understood. "
696 "Please report an error to the developers." % dtype)
697
698
699 class StataWriter(StataParser):
700 """
701 A class for writing Stata binary dta files from array-like objects
702
703 Parameters
704 ----------
705 fname : file path or buffer
706 Where to save the dta file.
707 data : array-like
708 Array-like input to save. Pandas objects are also accepted.
709 convert_dates : dict
710 Dictionary mapping column of datetime types to the stata internal
711 format that you want to use for the dates. Options are
712 'tc', 'td', 'tm', 'tw', 'th', 'tq', 'ty'. Column can be either a
713 number or a name.
714 encoding : str
715 Default is latin-1. Note that Stata does not support unicode.
716 byteorder : str
717 Can be ">", "<", "little", or "big". The default is None which uses
718 `sys.byteorder`
719
720 Returns
721 -------
722 writer : StataWriter instance
723 The StataWriter instance has a write_file method, which will
724 write the file to the given `fname`.
725
726 Examples
727 --------
728 >>> writer = StataWriter('./data_file.dta', data)
729 >>> writer.write_file()
730
731 Or with dates
732
733 >>> writer = StataWriter('./date_data_file.dta', date, {2 : 'tw'})
734 >>> writer.write_file()
735 """
736 def __init__(self, fname, data, convert_dates=None, write_index=True, encoding="latin-1",
737 byteorder=None):
738 super(StataWriter, self).__init__(encoding)
739 self._convert_dates = convert_dates
740 self._write_index = write_index
741 # attach nobs, nvars, data, varlist, typlist
742 self._prepare_pandas(data)
743
744 if byteorder is None:
745 byteorder = sys.byteorder
746 self._byteorder = _set_endianness(byteorder)
747 self._file = _open_file_binary_write(fname, self._encoding)
748 self.type_converters = {253: np.long, 252: int}
749
750 def _write(self, to_write):
751 """
752 Helper to call encode before writing to file for Python 3 compat.
753 """
754 if compat.PY3:
755 self._file.write(to_write.encode(self._encoding))
756 else:
757 self._file.write(to_write)
758
759 def _prepare_pandas(self, data):
760 #NOTE: we might need a different API / class for pandas objects so
761 # we can set different semantics - handle this with a PR to pandas.io
762 class DataFrameRowIter(object):
763 def __init__(self, data):
764 self.data = data
765
766 def __iter__(self):
767 for i, row in data.iterrows():
768 yield row
769
770 if self._write_index:
771 data = data.reset_index()
772 self.datarows = DataFrameRowIter(data)
773 self.nobs, self.nvar = data.shape
774 self.data = data
775 self.varlist = data.columns.tolist()
776 dtypes = data.dtypes
777 if self._convert_dates is not None:
778 self._convert_dates = _maybe_convert_to_int_keys(self._convert_dates, self.varlist)
779 for key in self._convert_dates:
780 new_type = _convert_datetime_to_stata_type(self._convert_dates[key])
781 dtypes[key] = np.dtype(new_type)
782 self.typlist = [_dtype_to_stata_type(dt) for dt in dtypes]
783 self.fmtlist = [_dtype_to_default_stata_fmt(dt) for dt in dtypes]
784 # set the given format for the datetime cols
785 if self._convert_dates is not None:
786 for key in self._convert_dates:
787 self.fmtlist[key] = self._convert_dates[key]
788
789 def write_file(self):
790 self._write_header()
791 self._write_descriptors()
792 self._write_variable_labels()
793 # write 5 zeros for expansion fields
794 self._write(_pad_bytes("", 5))
795 if self._convert_dates is None:
796 self._write_data_nodates()
797 else:
798 self._write_data_dates()
799 #self._write_value_labels()
800 self._file.close()
801
802 def _write_header(self, data_label=None, time_stamp=None):
803 byteorder = self._byteorder
804 # ds_format - just use 114
805 self._file.write(struct.pack("b", 114))
806 # byteorder
807 self._write(byteorder == ">" and "\x01" or "\x02")
808 # filetype
809 self._write("\x01")
810 # unused
811 self._write("\x00")
812 # number of vars, 2 bytes
813 self._file.write(struct.pack(byteorder+"h", self.nvar)[:2])
814 # number of obs, 4 bytes
815 self._file.write(struct.pack(byteorder+"i", self.nobs)[:4])
816 # data label 81 bytes, char, null terminated
817 if data_label is None:
818 self._file.write(self._null_terminate(_pad_bytes("", 80)))
819 else:
820 self._file.write(self._null_terminate(_pad_bytes(data_label[:80], 80)))
821 # time stamp, 18 bytes, char, null terminated
822 # format dd Mon yyyy hh:mm
823 if time_stamp is None:
824 time_stamp = datetime.datetime.now()
825         elif not isinstance(time_stamp, datetime.datetime):
826 raise ValueError("time_stamp should be datetime type")
827 self._file.write(self._null_terminate(time_stamp.strftime("%d %b %Y %H:%M")))
828
829 def _write_descriptors(self, typlist=None, varlist=None, srtlist=None,
830 fmtlist=None, lbllist=None):
831 nvar = self.nvar
832 # typlist, length nvar, format byte array
833 for typ in self.typlist:
834 self._write(typ)
835
836 # varlist, length 33*nvar, char array, null terminated
837 for name in self.varlist:
838 name = self._null_terminate(name, True)
839 name = _pad_bytes(name[:32], 33)
840 self._write(name)
841
842 # srtlist, 2*(nvar+1), int array, encoded by byteorder
843 srtlist = _pad_bytes("", (2*(nvar+1)))
844 self._write(srtlist)
845
846 # fmtlist, 49*nvar, char array
847 for fmt in self.fmtlist:
848 self._write(_pad_bytes(fmt, 49))
849
850 # lbllist, 33*nvar, char array
851 #NOTE: this is where you could get fancy with pandas categorical type
852 for i in range(nvar):
853 self._write(_pad_bytes("", 33))
854
855 def _write_variable_labels(self, labels=None):
856 nvar = self.nvar
857 if labels is None:
858 for i in range(nvar):
859 self._write(_pad_bytes("", 81))
860
861 def _write_data_nodates(self):
862 data = self.datarows
863 byteorder = self._byteorder
864 TYPE_MAP = self.TYPE_MAP
865 typlist = self.typlist
866 for row in data:
867 #row = row.squeeze().tolist() # needed for structured arrays
868 for i, var in enumerate(row):
869 typ = ord(typlist[i])
870 if typ <= 244: # we've got a string
871 if len(var) < typ:
872 var = _pad_bytes(var, typ)
873 self._write(var)
874 else:
875 try:
876 self._file.write(struct.pack(byteorder + TYPE_MAP[typ], var))
877 except struct.error:
878 # have to be strict about type pack won't do any
879 # kind of casting
880 self._file.write(struct.pack(byteorder+TYPE_MAP[typ],
881 self.type_converters[typ](var)))
882
883 def _write_data_dates(self):
884 convert_dates = self._convert_dates
885 data = self.datarows
886 byteorder = self._byteorder
887 TYPE_MAP = self.TYPE_MAP
888 MISSING_VALUES = self.MISSING_VALUES
889 typlist = self.typlist
890 for row in data:
891 #row = row.squeeze().tolist() # needed for structured arrays
892 for i, var in enumerate(row):
893 typ = ord(typlist[i])
894 #NOTE: If anyone finds this terribly slow, there is
895 # a vectorized way to convert dates, see genfromdta for going
896 # from int to datetime and reverse it. will copy data though
897 if i in convert_dates:
898 var = _datetime_to_stata_elapsed(var, self.fmtlist[i])
899 if typ <= 244: # we've got a string
900 if len(var) < typ:
901 var = _pad_bytes(var, typ)
902 self._write(var)
903 else:
904 if isnull(var): # this only matters for floats
905 var = MISSING_VALUES[typ]
906 self._file.write(struct.pack(byteorder+TYPE_MAP[typ], var))
907
908 def _null_terminate(self, s, as_string=False):
909 null_byte = '\x00'
910 if compat.PY3 and not as_string:
911 s += null_byte
912 return s.encode(self._encoding)
913 else:
914 s += null_byte
915 return s
916
[end of pandas/io/stata.py]
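For orientation, here is a minimal usage sketch pieced together from the StataReader/StataWriter docstrings in the listing above. It is not part of the repository; the file names and the 'date_col' column are placeholders.

```python
# Hedged sketch based on the docstrings above -- paths and column names are made up.
from pandas.io.stata import StataReader, StataWriter

reader = StataReader('example.dta', encoding='cp1252')            # header is parsed on instantiation
df = reader.data(convert_dates=True, convert_categoricals=True)   # reads rows and fills value labels
var_labels = reader.variable_labels()   # {variable name: variable label}
val_labels = reader.value_labels()      # only valid after data() has been called

# Writing: convert_dates maps a column (name or position) to a Stata date format such as 'td'.
writer = StataWriter('out.dta', df, convert_dates={'date_col': 'td'})
writer.write_file()
```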
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
68dfe52774b8e600d25b90ec043b18e62bb551fb
|
TST: sometimes failing ujson comparison
cc @Komnomnomnom
skipping test via: b5ff81d34d74efdf88b8151ff58ed16ff7c02c67
fails about 1/3 of the time (and only seems to happen on py3)
https://travis-ci.org/pydata/pandas/jobs/10223886
don't use `now()`, pick a known value (that you can then compare to exactly)
```
def test_datetime_units(self):
from pandas.lib import Timestamp
val = datetime.datetime.now()
stamp = Timestamp(val)
```
|
Hmm I'll take a look, is this only on Python 3?
I only had it fail on py3 as well; maybe the division is not comparing correctly (it should be truediv)
in any event, pick a time that has non-zero comparison values and do the same type of comparison
it doesn't always fail, that's what is weird....this is rebased on master: https://travis-ci.org/jreback/pandas/builds/10224858
run on py3, and put it in a loop until it fails then import debugger at that point
Ok will do. I won't be able to look into this for a while, so if you want to comment out the test etc so the master branch is ok for now then feel free.
ok....skipped for now...come back to it when you can
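A minimal sketch of the change the thread is asking for (an assumption, not the committed fix): pin the test to a known timestamp with non-zero sub-second fields and derive the expected per-unit numbers by integer division of Timestamp.value (the nanosecond representation, used elsewhere in this code base), so no truediv rounding is involved. The ujson encode/decode calls of the real test are omitted here.

```python
# Sketch only -- the fixed value and variable names are hypothetical.
import datetime
from pandas.lib import Timestamp

val = datetime.datetime(2013, 8, 17, 21, 17, 12, 215504)  # known value, non-zero microseconds
stamp = Timestamp(val)

# Expected values per unit, derived exactly from the nanosecond representation.
expected_ns = stamp.value
expected_us = stamp.value // 1000
expected_ms = stamp.value // 1000 ** 2
expected_s = stamp.value // 1000 ** 3
```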
|
2013-08-17T11:28:32Z
|
<patch>
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-4670
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: allow as_index=False for non aggregating groupby methods
closes #4648
closes #3417
</issue>
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 - `pytz <http://pytz.sourceforge.net/>`__
71 - Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75
76 - `numexpr <http://code.google.com/p/numexpr/>`__
77 - Needed to accelerate some expression evaluation operations
78 - Required by `PyTables`
79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__
80 - Needed to accelerate certain numerical operations
81
82 Optional dependencies
83 ~~~~~~~~~~~~~~~~~~~~~
84
85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher.
86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
89 - `statsmodels <http://statsmodels.sourceforge.net/>`__
90 - Needed for parts of :mod:`pandas.stats`
91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
92 - openpyxl version 1.6.1 or higher, for writing .xlsx files
93 - xlrd >= 0.9.0
94 - Needed for Excel I/O
95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
96 access.
97 - One of the following combinations of libraries is needed to use the
98 top-level :func:`~pandas.io.html.read_html` function:
99
100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
101 okay.)
102 - `BeautifulSoup4`_ and `lxml`_
103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_
104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>`
105 for reasons as to why you should probably **not** take this approach.
106
107 .. warning::
108
109 - if you install `BeautifulSoup4`_ you must install either
110 `lxml`_ or `html5lib`_ or both.
111 :func:`~pandas.io.html.read_html` will **not** work with *only*
112 `BeautifulSoup4`_ installed.
113 - You are highly encouraged to read :ref:`HTML reading gotchas
114 <html-gotchas>`. It explains issues surrounding the installation and
115 usage of the above three libraries
116 - You may need to install an older version of `BeautifulSoup4`_:
117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
118 32-bit Ubuntu/Debian
119 - Additionally, if you're using `Anaconda`_ you should definitely
120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
121
122 .. note::
123
124 - if you're on a system with ``apt-get`` you can do
125
126 .. code-block:: sh
127
128 sudo apt-get build-dep python-lxml
129
130 to get the necessary dependencies for installation of `lxml`_. This
131 will prevent further headaches down the line.
132
133
134 .. _html5lib: https://github.com/html5lib/html5lib-python
135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup
136 .. _lxml: http://lxml.de
137 .. _Anaconda: https://store.continuum.io/cshop/anaconda
138
139
140 Installation from sources
141 =========================
142
143 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
144 which can be installed from pypi::
145
146 pip install cython
147
148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
149
150 python setup.py install
151
152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
153
154 python setup.py develop
155
156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
157 (the optional ``-e`` option is for installing it in
158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
159
160 pip install -e .
161
162 On Windows, you will need to install MinGW and execute::
163
164 python setup.py build --compiler=mingw32
165 python setup.py install
166
167 See http://pandas.pydata.org/ for more information.
168
169 License
170 =======
171
172 BSD
173
174 Documentation
175 =============
176
177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
178
179 The Sphinx documentation should provide a good starting point for learning how
180 to use the library. Expect the docs to continue to expand as time goes on.
181
182 Background
183 ==========
184
185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
186 has been under active development since then.
187
188 Discussion and Development
189 ==========================
190
191 Since ``pandas`` development is related to a number of other scientific
192 Python projects, questions are welcome on the scipy-user mailing
193 list. Specialized discussions or design issues should take place on
194 the pystatsmodels mailing list / Google group, where
195 ``scikits.statsmodels`` and other libraries will also be discussed:
196
197 http://groups.google.com/group/pystatsmodels
198
199 .. _NumPy: http://numpy.scipy.org/
200
[end of README.rst]
[start of examples/finance.py]
1 """
2 Some examples playing around with yahoo finance data
3 """
4
5 from datetime import datetime
6 from pandas.compat import zip
7
8 import matplotlib.finance as fin
9 import numpy as np
10 from pylab import show
11
12
13 from pandas import Index, DataFrame
14 from pandas.core.datetools import BMonthEnd
15 from pandas import ols
16
17 startDate = datetime(2008, 1, 1)
18 endDate = datetime(2009, 9, 1)
19
20
21 def getQuotes(symbol, start, end):
22 quotes = fin.quotes_historical_yahoo(symbol, start, end)
23 dates, open, close, high, low, volume = zip(*quotes)
24
25 data = {
26 'open': open,
27 'close': close,
28 'high': high,
29 'low': low,
30 'volume': volume
31 }
32
33 dates = Index([datetime.fromordinal(int(d)) for d in dates])
34 return DataFrame(data, index=dates)
35
36 msft = getQuotes('MSFT', startDate, endDate)
37 aapl = getQuotes('AAPL', startDate, endDate)
38 goog = getQuotes('GOOG', startDate, endDate)
39 ibm = getQuotes('IBM', startDate, endDate)
40
41 px = DataFrame({'MSFT': msft['close'],
42 'IBM': ibm['close'],
43 'GOOG': goog['close'],
44 'AAPL': aapl['close']})
45 returns = px / px.shift(1) - 1
46
47 # Select dates
48
49 subIndex = ibm.index[(ibm['close'] > 95) & (ibm['close'] < 100)]
50 msftOnSameDates = msft.reindex(subIndex)
51
52 # Insert columns
53
54 msft['hi-lo spread'] = msft['high'] - msft['low']
55 ibm['hi-lo spread'] = ibm['high'] - ibm['low']
56
57 # Aggregate monthly
58
59
60 def toMonthly(frame, how):
61 offset = BMonthEnd()
62
63 return frame.groupby(offset.rollforward).aggregate(how)
64
65 msftMonthly = toMonthly(msft, np.mean)
66 ibmMonthly = toMonthly(ibm, np.mean)
67
68 # Statistics
69
70 stdev = DataFrame({
71 'MSFT': msft.std(),
72 'IBM': ibm.std()
73 })
74
75 # Arithmetic
76
77 ratios = ibm / msft
78
79 # Works with different indices
80
81 ratio = ibm / ibmMonthly
82 monthlyRatio = ratio.reindex(ibmMonthly.index)
83
84 # Ratio relative to past month average
85
86 filledRatio = ibm / ibmMonthly.reindex(ibm.index, method='pad')
87
[end of examples/finance.py]
[start of pandas/tseries/resample.py]
1 from datetime import timedelta
2
3 import numpy as np
4
5 from pandas.core.groupby import BinGrouper, CustomGrouper
6 from pandas.tseries.frequencies import to_offset, is_subperiod, is_superperiod
7 from pandas.tseries.index import DatetimeIndex, date_range
8 from pandas.tseries.offsets import DateOffset, Tick, _delta_to_nanoseconds
9 from pandas.tseries.period import PeriodIndex, period_range
10 import pandas.tseries.tools as tools
11 import pandas.core.common as com
12 import pandas.compat as compat
13
14 from pandas.lib import Timestamp
15 import pandas.lib as lib
16
17
18 _DEFAULT_METHOD = 'mean'
19
20
21 class TimeGrouper(CustomGrouper):
22 """
23 Custom groupby class for time-interval grouping
24
25 Parameters
26 ----------
27 freq : pandas date offset or offset alias for identifying bin edges
28 closed : closed end of interval; left or right
29 label : interval boundary to use for labeling; left or right
30 nperiods : optional, integer
31 convention : {'start', 'end', 'e', 's'}
32 If axis is PeriodIndex
33
34 Notes
35 -----
36 Use begin, end, nperiods to generate intervals that cannot be derived
37 directly from the associated object
38 """
39 def __init__(self, freq='Min', closed=None, label=None, how='mean',
40 nperiods=None, axis=0,
41 fill_method=None, limit=None, loffset=None, kind=None,
42 convention=None, base=0):
43 self.freq = to_offset(freq)
44
45 end_types = set(['M', 'A', 'Q', 'BM', 'BA', 'BQ', 'W'])
46 rule = self.freq.rule_code
47 if (rule in end_types or
48 ('-' in rule and rule[:rule.find('-')] in end_types)):
49 if closed is None:
50 closed = 'right'
51 if label is None:
52 label = 'right'
53 else:
54 if closed is None:
55 closed = 'left'
56 if label is None:
57 label = 'left'
58
59 self.closed = closed
60 self.label = label
61 self.nperiods = nperiods
62 self.kind = kind
63
64 self.convention = convention or 'E'
65 self.convention = self.convention.lower()
66
67 self.axis = axis
68 self.loffset = loffset
69 self.how = how
70 self.fill_method = fill_method
71 self.limit = limit
72 self.base = base
73
74 def resample(self, obj):
75 axis = obj._get_axis(self.axis)
76
77 if not axis.is_monotonic:
78 try:
79 obj = obj.sort_index(axis=self.axis)
80 except TypeError:
81 obj = obj.sort_index()
82
83 if isinstance(axis, DatetimeIndex):
84 rs = self._resample_timestamps(obj)
85 elif isinstance(axis, PeriodIndex):
86 offset = to_offset(self.freq)
87 if offset.n > 1:
88 if self.kind == 'period': # pragma: no cover
89 print ('Warning: multiple of frequency -> timestamps')
90 # Cannot have multiple of periods, convert to timestamp
91 self.kind = 'timestamp'
92
93 if self.kind is None or self.kind == 'period':
94 rs = self._resample_periods(obj)
95 else:
96 obj = obj.to_timestamp(how=self.convention)
97 rs = self._resample_timestamps(obj)
98 elif len(axis) == 0:
99 return obj
100 else: # pragma: no cover
101 raise TypeError('Only valid with DatetimeIndex or PeriodIndex')
102
103 rs_axis = rs._get_axis(self.axis)
104 rs_axis.name = axis.name
105 return rs
106
107 def get_grouper(self, obj):
108 # Only return grouper
109 return self._get_time_grouper(obj)[1]
110
111 def _get_time_grouper(self, obj):
112 axis = obj._get_axis(self.axis)
113
114 if self.kind is None or self.kind == 'timestamp':
115 binner, bins, binlabels = self._get_time_bins(axis)
116 else:
117 binner, bins, binlabels = self._get_time_period_bins(axis)
118
119 grouper = BinGrouper(bins, binlabels)
120 return binner, grouper
121
122 def _get_time_bins(self, axis):
123 if not (isinstance(axis, DatetimeIndex)):
124 raise AssertionError()
125
126 if len(axis) == 0:
127 binner = labels = DatetimeIndex(data=[], freq=self.freq)
128 return binner, [], labels
129
130 first, last = _get_range_edges(axis, self.freq, closed=self.closed,
131 base=self.base)
132 tz = axis.tz
133 binner = labels = DatetimeIndex(freq=self.freq,
134 start=first.replace(tzinfo=None),
135 end=last.replace(tzinfo=None), tz=tz)
136
137 # a little hack
138 trimmed = False
139 if (len(binner) > 2 and binner[-2] == axis[-1] and
140 self.closed == 'right'):
141
142 binner = binner[:-1]
143 trimmed = True
144
145 ax_values = axis.asi8
146 binner, bin_edges = self._adjust_bin_edges(binner, ax_values)
147
148 # general version, knowing nothing about relative frequencies
149 bins = lib.generate_bins_dt64(ax_values, bin_edges, self.closed)
150
151 if self.closed == 'right':
152 labels = binner
153 if self.label == 'right':
154 labels = labels[1:]
155 elif not trimmed:
156 labels = labels[:-1]
157 else:
158 if self.label == 'right':
159 labels = labels[1:]
160 elif not trimmed:
161 labels = labels[:-1]
162
163 return binner, bins, labels
164
165 def _adjust_bin_edges(self, binner, ax_values):
166 # Some hacks for > daily data, see #1471, #1458, #1483
167
168 bin_edges = binner.asi8
169
170 if self.freq != 'D' and is_superperiod(self.freq, 'D'):
171 day_nanos = _delta_to_nanoseconds(timedelta(1))
172 if self.closed == 'right':
173 bin_edges = bin_edges + day_nanos - 1
174
175 # intraday values on last day
176 if bin_edges[-2] > ax_values[-1]:
177 bin_edges = bin_edges[:-1]
178 binner = binner[:-1]
179
180 return binner, bin_edges
181
182 def _get_time_period_bins(self, axis):
183 if not(isinstance(axis, DatetimeIndex)):
184 raise AssertionError()
185
186 if len(axis) == 0:
187 binner = labels = PeriodIndex(data=[], freq=self.freq)
188 return binner, [], labels
189
190 labels = binner = PeriodIndex(start=axis[0], end=axis[-1],
191 freq=self.freq)
192
193 end_stamps = (labels + 1).asfreq('D', 's').to_timestamp()
194 bins = axis.searchsorted(end_stamps, side='left')
195
196 return binner, bins, labels
197
198 @property
199 def _agg_method(self):
200 return self.how if self.how else _DEFAULT_METHOD
201
202 def _resample_timestamps(self, obj):
203 axlabels = obj._get_axis(self.axis)
204
205 binner, grouper = self._get_time_grouper(obj)
206
207 # Determine if we're downsampling
208 if axlabels.freq is not None or axlabels.inferred_freq is not None:
209 if len(grouper.binlabels) < len(axlabels) or self.how is not None:
210 grouped = obj.groupby(grouper, axis=self.axis)
211 result = grouped.aggregate(self._agg_method)
212 else:
213 # upsampling shortcut
214 if not (self.axis == 0):
215 raise AssertionError()
216
217 if self.closed == 'right':
218 res_index = binner[1:]
219 else:
220 res_index = binner[:-1]
221
222 result = obj.reindex(res_index, method=self.fill_method,
223 limit=self.limit)
224 else:
225 # Irregular data, have to use groupby
226 grouped = obj.groupby(grouper, axis=self.axis)
227 result = grouped.aggregate(self._agg_method)
228
229 if self.fill_method is not None:
230 result = result.fillna(method=self.fill_method,
231 limit=self.limit)
232
233 loffset = self.loffset
234 if isinstance(loffset, compat.string_types):
235 loffset = to_offset(self.loffset)
236
237 if isinstance(loffset, (DateOffset, timedelta)):
238 if (isinstance(result.index, DatetimeIndex)
239 and len(result.index) > 0):
240
241 result.index = result.index + loffset
242
243 return result
244
245 def _resample_periods(self, obj):
246 axlabels = obj._get_axis(self.axis)
247
248 if len(axlabels) == 0:
249 new_index = PeriodIndex(data=[], freq=self.freq)
250 return obj.reindex(new_index)
251 else:
252 start = axlabels[0].asfreq(self.freq, how=self.convention)
253 end = axlabels[-1].asfreq(self.freq, how='end')
254
255 new_index = period_range(start, end, freq=self.freq)
256
257 # Start vs. end of period
258 memb = axlabels.asfreq(self.freq, how=self.convention)
259
260 if is_subperiod(axlabels.freq, self.freq) or self.how is not None:
261 # Downsampling
262 rng = np.arange(memb.values[0], memb.values[-1] + 1)
263 bins = memb.searchsorted(rng, side='right')
264 grouper = BinGrouper(bins, new_index)
265
266 grouped = obj.groupby(grouper, axis=self.axis)
267 return grouped.aggregate(self._agg_method)
268 elif is_superperiod(axlabels.freq, self.freq):
269 # Get the fill indexer
270 indexer = memb.get_indexer(new_index, method=self.fill_method,
271 limit=self.limit)
272
273 return _take_new_index(obj, indexer, new_index, axis=self.axis)
274 else:
275 raise ValueError('Frequency %s cannot be resampled to %s'
276 % (axlabels.freq, self.freq))
277
278
279 def _take_new_index(obj, indexer, new_index, axis=0):
280 from pandas.core.api import Series, DataFrame
281 from pandas.core.internals import BlockManager
282
283 if isinstance(obj, Series):
284 new_values = com.take_1d(obj.values, indexer)
285 return Series(new_values, index=new_index, name=obj.name)
286 elif isinstance(obj, DataFrame):
287 if axis == 1:
288 raise NotImplementedError
289 return DataFrame(obj._data.take(indexer,new_index=new_index,axis=1))
290 else:
291 raise NotImplementedError
292
293
294 def _get_range_edges(axis, offset, closed='left', base=0):
295 if isinstance(offset, compat.string_types):
296 offset = to_offset(offset)
297
298 if isinstance(offset, Tick):
299 day_nanos = _delta_to_nanoseconds(timedelta(1))
300 # #1165
301 if (day_nanos % offset.nanos) == 0:
302 return _adjust_dates_anchored(axis[0], axis[-1], offset,
303 closed=closed, base=base)
304
305 first, last = axis[0], axis[-1]
306 if not isinstance(offset, Tick): # and first.time() != last.time():
307 # hack!
308 first = tools.normalize_date(first)
309 last = tools.normalize_date(last)
310
311 if closed == 'left':
312 first = Timestamp(offset.rollback(first))
313 else:
314 first = Timestamp(first - offset)
315
316 last = Timestamp(last + offset)
317
318 return first, last
319
320
321 def _adjust_dates_anchored(first, last, offset, closed='right', base=0):
322 from pandas.tseries.tools import normalize_date
323
324 start_day_nanos = Timestamp(normalize_date(first)).value
325 last_day_nanos = Timestamp(normalize_date(last)).value
326
327 base_nanos = (base % offset.n) * offset.nanos // offset.n
328 start_day_nanos += base_nanos
329 last_day_nanos += base_nanos
330
331 foffset = (first.value - start_day_nanos) % offset.nanos
332 loffset = (last.value - last_day_nanos) % offset.nanos
333
334 if closed == 'right':
335 if foffset > 0:
336 # roll back
337 fresult = first.value - foffset
338 else:
339 fresult = first.value - offset.nanos
340
341 if loffset > 0:
342 # roll forward
343 lresult = last.value + (offset.nanos - loffset)
344 else:
345 # already the end of the road
346 lresult = last.value
347 else: # closed == 'left'
348 if foffset > 0:
349 fresult = first.value - foffset
350 else:
351 # start of the road
352 fresult = first.value
353
354 if loffset > 0:
355 # roll forward
356 lresult = last.value + (offset.nanos - loffset)
357 else:
358 lresult = last.value + offset.nanos
359
360 return (Timestamp(fresult, tz=first.tz),
361 Timestamp(lresult, tz=last.tz))
362
363
364 def asfreq(obj, freq, method=None, how=None, normalize=False):
365 """
366 Utility frequency conversion method for Series/DataFrame
367 """
368 if isinstance(obj.index, PeriodIndex):
369 if method is not None:
370 raise NotImplementedError
371
372 if how is None:
373 how = 'E'
374
375 new_index = obj.index.asfreq(freq, how=how)
376 new_obj = obj.copy()
377 new_obj.index = new_index
378 return new_obj
379 else:
380 if len(obj.index) == 0:
381 return obj.copy()
382 dti = date_range(obj.index[0], obj.index[-1], freq=freq)
383 rs = obj.reindex(dti, method=method)
384 if normalize:
385 rs.index = rs.index.normalize()
386 return rs
387
[end of pandas/tseries/resample.py]
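For context, a hedged sketch of how TimeGrouper is normally driven through the Series/DataFrame resample API of this era (the keyword names mirror TimeGrouper.__init__ in the listing above); the data is made up and not from this repository.

```python
# Sketch under the assumption that .resample() forwards these kwargs to TimeGrouper.
import numpy as np
import pandas as pd

rng = pd.date_range('2013-01-01', periods=120, freq='T')
ts = pd.Series(np.arange(120), index=rng)

# Downsample: 'how' ends up as TimeGrouper.how (the aggregation method);
# 'closed'/'label' pick which bin edge is closed and which edge labels the bins.
five_min = ts.resample('5Min', how='mean', closed='left', label='left')

# Upsample back to minutes: fill_method/limit pass through to the reindex path.
minutes = five_min.resample('T', fill_method='pad', limit=4)
```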
[start of scripts/gen_release_notes.py]
1 from __future__ import print_function
2 import sys
3 import json
4 from pandas.io.common import urlopen
5 from datetime import datetime
6
7
8 class Milestone(object):
9
10 def __init__(self, title, number):
11 self.title = title
12 self.number = number
13
14 def __eq__(self, other):
15 if isinstance(other, Milestone):
16 return self.number == other.number
17 return False
18
19
20 class Issue(object):
21
22 def __init__(self, title, labels, number, milestone, body, state):
23 self.title = title
24 self.labels = set([x['name'] for x in labels])
25 self.number = number
26 self.milestone = milestone
27 self.body = body
28 self.closed = state == 'closed'
29
30 def __eq__(self, other):
31 if isinstance(other, Issue):
32 return self.number == other.number
33 return False
34
35
36 def get_issues():
37 all_issues = []
38 page_number = 1
39 while True:
40 iss = _get_page(page_number)
41 if len(iss) == 0:
42 break
43 page_number += 1
44 all_issues.extend(iss)
45 return all_issues
46
47
48 def _get_page(page_number):
49 gh_url = ('https://api.github.com/repos/pydata/pandas/issues?'
50 'milestone=*&state=closed&assignee=*&page=%d') % page_number
51 with urlopen(gh_url) as resp:
52 rs = resp.readlines()[0]
53 jsondata = json.loads(rs)
54 issues = [Issue(x['title'], x['labels'], x['number'],
55 get_milestone(x['milestone']), x['body'], x['state'])
56 for x in jsondata]
57 return issues
58
59
60 def get_milestone(data):
61 if data is None:
62 return None
63 return Milestone(data['title'], data['number'])
64
65
66 def collate_label(issues, label):
67 lines = []
68 for x in issues:
69 if label in x.labels:
70 lines.append('\t- %s(#%d)' % (x.title, x.number))
71
72 return '\n'.join(lines)
73
74
75 def release_notes(milestone):
76 issues = get_issues()
77
78 headers = ['New Features', 'Improvements to existing features',
79 'API Changes', 'Bug fixes']
80 labels = ['New', 'Enhancement', 'API-Change', 'Bug']
81
82 rs = 'pandas %s' % milestone
83 rs += '\n' + ('=' * len(rs))
84 rs += '\n\n **Release date:** %s' % datetime.today().strftime('%B %d, %Y')
85 for i, h in enumerate(headers):
86 rs += '\n\n**%s**\n\n' % h
87 l = labels[i]
88 rs += collate_label(issues, l)
89
90 return rs
91
92 if __name__ == '__main__':
93
94 rs = release_notes(sys.argv[1])
95 print(rs)
96
[end of scripts/gen_release_notes.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 """
4 Parts of this file were taken from the pyzmq project
5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the
6 BSD license. Parts are from lxml (https://github.com/lxml/lxml)
7 """
8
9 import os
10 import sys
11 import shutil
12 import warnings
13
14 # may need to work around setuptools bug by providing a fake Pyrex
15 try:
16 import Cython
17 sys.path.insert(0, os.path.join(os.path.dirname(__file__), "fake_pyrex"))
18 except ImportError:
19 pass
20
21 # try bootstrapping setuptools if it doesn't exist
22 try:
23 import pkg_resources
24 try:
25 pkg_resources.require("setuptools>=0.6c5")
26 except pkg_resources.VersionConflict:
27 from ez_setup import use_setuptools
28 use_setuptools(version="0.6c5")
29 from setuptools import setup, Command
30 _have_setuptools = True
31 except ImportError:
32 # no setuptools installed
33 from distutils.core import setup, Command
34 _have_setuptools = False
35
36 setuptools_kwargs = {}
37 min_numpy_ver = '1.6'
38 if sys.version_info[0] >= 3:
39
40 if sys.version_info[1] >= 3: # 3.3 needs numpy 1.7+
41 min_numpy_ver = "1.7.0b2"
42
43 setuptools_kwargs = {
44 'zip_safe': False,
45 'install_requires': ['python-dateutil >= 2',
46 'pytz >= 2011k',
47 'numpy >= %s' % min_numpy_ver],
48 'setup_requires': ['numpy >= %s' % min_numpy_ver],
49 }
50 if not _have_setuptools:
51 sys.exit("need setuptools/distribute for Py3k"
52 "\n$ pip install distribute")
53
54 else:
55 min_numpy_ver = '1.6.1'
56 setuptools_kwargs = {
57 'install_requires': ['python-dateutil',
58 'pytz >= 2011k',
59 'numpy >= %s' % min_numpy_ver],
60 'setup_requires': ['numpy >= %s' % min_numpy_ver],
61 'zip_safe': False,
62 }
63
64 if not _have_setuptools:
65 try:
66 import numpy
67 import dateutil
68 setuptools_kwargs = {}
69 except ImportError:
70 sys.exit("install requires: 'python-dateutil < 2','numpy'."
71 " use pip or easy_install."
72 "\n $ pip install 'python-dateutil < 2' 'numpy'")
73
74 from distutils.extension import Extension
75 from distutils.command.build import build
76 from distutils.command.sdist import sdist
77 from distutils.command.build_ext import build_ext as _build_ext
78
79 try:
80 from Cython.Distutils import build_ext as _build_ext
81 # from Cython.Distutils import Extension # to get pyrex debugging symbols
82 cython = True
83 except ImportError:
84 cython = False
85
86 from os.path import splitext, basename, join as pjoin
87
88
89 class build_ext(_build_ext):
90 def build_extensions(self):
91 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include')
92
93 for ext in self.extensions:
94 if hasattr(ext, 'include_dirs') and not numpy_incl in ext.include_dirs:
95 ext.include_dirs.append(numpy_incl)
96 _build_ext.build_extensions(self)
97
98
99 DESCRIPTION = ("Powerful data structures for data analysis, time series,"
100                " and statistics")
101 LONG_DESCRIPTION = """
102 **pandas** is a Python package providing fast, flexible, and expressive data
103 structures designed to make working with structured (tabular, multidimensional,
104 potentially heterogeneous) and time series data both easy and intuitive. It
105 aims to be the fundamental high-level building block for doing practical,
106 **real world** data analysis in Python. Additionally, it has the broader goal
107 of becoming **the most powerful and flexible open source data analysis /
108 manipulation tool available in any language**. It is already well on its way
109 toward this goal.
110
111 pandas is well suited for many different kinds of data:
112
113 - Tabular data with heterogeneously-typed columns, as in an SQL table or
114 Excel spreadsheet
115 - Ordered and unordered (not necessarily fixed-frequency) time series data.
116 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
117 column labels
118 - Any other form of observational / statistical data sets. The data actually
119 need not be labeled at all to be placed into a pandas data structure
120
121 The two primary data structures of pandas, Series (1-dimensional) and DataFrame
122 (2-dimensional), handle the vast majority of typical use cases in finance,
123 statistics, social science, and many areas of engineering. For R users,
124 DataFrame provides everything that R's ``data.frame`` provides and much
125 more. pandas is built on top of `NumPy <http://www.numpy.org>`__ and is
126 intended to integrate well within a scientific computing environment with many
127 other 3rd party libraries.
128
129 Here are just a few of the things that pandas does well:
130
131 - Easy handling of **missing data** (represented as NaN) in floating point as
132 well as non-floating point data
133 - Size mutability: columns can be **inserted and deleted** from DataFrame and
134 higher dimensional objects
135 - Automatic and explicit **data alignment**: objects can be explicitly
136 aligned to a set of labels, or the user can simply ignore the labels and
137 let `Series`, `DataFrame`, etc. automatically align the data for you in
138 computations
139 - Powerful, flexible **group by** functionality to perform
140 split-apply-combine operations on data sets, for both aggregating and
141 transforming data
142 - Make it **easy to convert** ragged, differently-indexed data in other
143 Python and NumPy data structures into DataFrame objects
144 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
145 of large data sets
146 - Intuitive **merging** and **joining** data sets
147 - Flexible **reshaping** and pivoting of data sets
148 - **Hierarchical** labeling of axes (possible to have multiple labels per
149 tick)
150 - Robust IO tools for loading data from **flat files** (CSV and delimited),
151 Excel files, databases, and saving / loading data from the ultrafast **HDF5
152 format**
153 - **Time series**-specific functionality: date range generation and frequency
154 conversion, moving window statistics, moving window linear regressions,
155 date shifting and lagging, etc.
156
157 Many of these principles are here to address the shortcomings frequently
158 experienced using other languages / scientific research environments. For data
159 scientists, working with data is typically divided into multiple stages:
160 munging and cleaning data, analyzing / modeling it, then organizing the results
161 of the analysis into a form suitable for plotting or tabular display. pandas is
162 the ideal tool for all of these tasks.
163
164 Note
165 ----
166 Windows binaries built against NumPy 1.7.1
167 """
168
169 DISTNAME = 'pandas'
170 LICENSE = 'BSD'
171 AUTHOR = "The PyData Development Team"
172 EMAIL = "[email protected]"
173 URL = "http://pandas.pydata.org"
174 DOWNLOAD_URL = ''
175 CLASSIFIERS = [
176 'Development Status :: 4 - Beta',
177 'Environment :: Console',
178 'Operating System :: OS Independent',
179 'Intended Audience :: Science/Research',
180 'Programming Language :: Python',
181 'Programming Language :: Python :: 2',
182 'Programming Language :: Python :: 3',
183 'Programming Language :: Python :: 2.6',
184 'Programming Language :: Python :: 2.7',
185 'Programming Language :: Python :: 3.2',
186 'Programming Language :: Python :: 3.3',
187 'Programming Language :: Cython',
188 'Topic :: Scientific/Engineering',
189 ]
190
191 MAJOR = 0
192 MINOR = 12
193 MICRO = 0
194 ISRELEASED = False
195 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
196 QUALIFIER = ''
197
198 FULLVERSION = VERSION
199 if not ISRELEASED:
200 FULLVERSION += '.dev'
201 try:
202 import subprocess
203 try:
204 pipe = subprocess.Popen(["git", "describe", "HEAD"],
205 stdout=subprocess.PIPE).stdout
206 except OSError:
207 # msysgit compatibility
208 pipe = subprocess.Popen(
209 ["git.cmd", "describe", "HEAD"],
210 stdout=subprocess.PIPE).stdout
211 rev = pipe.read().strip()
212 # makes distutils blow up on Python 2.7
213 if sys.version_info[0] >= 3:
214 rev = rev.decode('ascii')
215
216 FULLVERSION = rev.lstrip('v')
217
218 except:
219 warnings.warn("WARNING: Couldn't get git revision")
220 else:
221 FULLVERSION += QUALIFIER
222
223
224 def write_version_py(filename=None):
225 cnt = """\
226 version = '%s'
227 short_version = '%s'
228 """
229 if not filename:
230 filename = os.path.join(
231 os.path.dirname(__file__), 'pandas', 'version.py')
232
233 a = open(filename, 'w')
234 try:
235 a.write(cnt % (FULLVERSION, VERSION))
236 finally:
237 a.close()
238
239
240 class CleanCommand(Command):
241 """Custom distutils command to clean the .so and .pyc files."""
242
243 user_options = [("all", "a", "")]
244
245 def initialize_options(self):
246 self.all = True
247 self._clean_me = []
248 self._clean_trees = []
249 self._clean_exclude = ['np_datetime.c',
250 'np_datetime_strings.c',
251 'period.c',
252 'tokenizer.c',
253 'io.c',
254 'ujson.c',
255 'objToJSON.c',
256 'JSONtoObj.c',
257 'ultrajsonenc.c',
258 'ultrajsondec.c',
259 ]
260
261 for root, dirs, files in list(os.walk('pandas')):
262 for f in files:
263 if f in self._clean_exclude:
264 continue
265
266 # XXX
267 if 'ujson' in f:
268 continue
269
270 if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o',
271 '.pyo',
272 '.pyd', '.c', '.orig'):
273 self._clean_me.append(pjoin(root, f))
274 for d in dirs:
275 if d == '__pycache__':
276 self._clean_trees.append(pjoin(root, d))
277
278 for d in ('build',):
279 if os.path.exists(d):
280 self._clean_trees.append(d)
281
282 def finalize_options(self):
283 pass
284
285 def run(self):
286 for clean_me in self._clean_me:
287 try:
288 os.unlink(clean_me)
289 except Exception:
290 pass
291 for clean_tree in self._clean_trees:
292 try:
293 shutil.rmtree(clean_tree)
294 except Exception:
295 pass
296
297
298 class CheckSDist(sdist):
299 """Custom sdist that ensures Cython has compiled all pyx files to c."""
300
301 _pyxfiles = ['pandas/lib.pyx',
302 'pandas/hashtable.pyx',
303 'pandas/tslib.pyx',
304 'pandas/index.pyx',
305 'pandas/algos.pyx',
306 'pandas/parser.pyx',
307 'pandas/src/sparse.pyx']
308
309 def initialize_options(self):
310 sdist.initialize_options(self)
311
312 '''
313 self._pyxfiles = []
314 for root, dirs, files in os.walk('pandas'):
315 for f in files:
316 if f.endswith('.pyx'):
317 self._pyxfiles.append(pjoin(root, f))
318 '''
319
320 def run(self):
321 if 'cython' in cmdclass:
322 self.run_command('cython')
323 else:
324 for pyxfile in self._pyxfiles:
325 cfile = pyxfile[:-3] + 'c'
326 msg = "C-source file '%s' not found." % (cfile) +\
327 " Run 'setup.py cython' before sdist."
328 assert os.path.isfile(cfile), msg
329 sdist.run(self)
330
331
332 class CheckingBuildExt(build_ext):
333 """Subclass build_ext to get clearer report if Cython is necessary."""
334
335 def check_cython_extensions(self, extensions):
336 for ext in extensions:
337 for src in ext.sources:
338 if not os.path.exists(src):
339 raise Exception("""Cython-generated file '%s' not found.
340 Cython is required to compile pandas from a development branch.
341 Please install Cython or download a release package of pandas.
342 """ % src)
343
344 def build_extensions(self):
345 self.check_cython_extensions(self.extensions)
346 build_ext.build_extensions(self)
347
348
349 class CythonCommand(build_ext):
350 """Custom distutils command subclassed from Cython.Distutils.build_ext
351 to compile pyx->c, and stop there. All this does is override the
352 C-compile method build_extension() with a no-op."""
353 def build_extension(self, ext):
354 pass
355
356
357 class DummyBuildSrc(Command):
358 """ numpy's build_src command interferes with Cython's build_ext.
359 """
360 user_options = []
361
362 def initialize_options(self):
363 self.py_modules_dict = {}
364
365 def finalize_options(self):
366 pass
367
368 def run(self):
369 pass
370
371 cmdclass = {'clean': CleanCommand,
372 'build': build,
373 'sdist': CheckSDist}
374
375 if cython:
376 suffix = '.pyx'
377 cmdclass['build_ext'] = CheckingBuildExt
378 cmdclass['cython'] = CythonCommand
379 else:
380 suffix = '.c'
381 cmdclass['build_src'] = DummyBuildSrc
382 cmdclass['build_ext'] = CheckingBuildExt
383
384 lib_depends = ['reduce', 'inference', 'properties']
385
386
387 def srcpath(name=None, suffix='.pyx', subdir='src'):
388 return pjoin('pandas', subdir, name + suffix)
389
390 if suffix == '.pyx':
391 lib_depends = [srcpath(f, suffix='.pyx') for f in lib_depends]
392 lib_depends.append('pandas/src/util.pxd')
393 else:
394 lib_depends = []
395 plib_depends = []
396
397 common_include = ['pandas/src/klib', 'pandas/src']
398
399
400 def pxd(name):
401 return os.path.abspath(pjoin('pandas', name + '.pxd'))
402
403
404 lib_depends = lib_depends + ['pandas/src/numpy_helper.h',
405 'pandas/src/parse_helper.h']
406
407
408 tseries_depends = ['pandas/src/datetime/np_datetime.h',
409 'pandas/src/datetime/np_datetime_strings.h',
410 'pandas/src/period.h']
411
412
413 # some linux distros require it
414 libraries = ['m'] if 'win32' not in sys.platform else []
415
416 ext_data = dict(
417 lib={'pyxfile': 'lib',
418 'pxdfiles': [],
419 'depends': lib_depends},
420 hashtable={'pyxfile': 'hashtable',
421 'pxdfiles': ['hashtable']},
422 tslib={'pyxfile': 'tslib',
423 'depends': tseries_depends,
424 'sources': ['pandas/src/datetime/np_datetime.c',
425 'pandas/src/datetime/np_datetime_strings.c',
426 'pandas/src/period.c']},
427 index={'pyxfile': 'index',
428 'sources': ['pandas/src/datetime/np_datetime.c',
429 'pandas/src/datetime/np_datetime_strings.c']},
430 algos={'pyxfile': 'algos',
431 'depends': [srcpath('generated', suffix='.pyx')]},
432 parser=dict(pyxfile='parser',
433 depends=['pandas/src/parser/tokenizer.h',
434 'pandas/src/parser/io.h',
435 'pandas/src/numpy_helper.h'],
436 sources=['pandas/src/parser/tokenizer.c',
437 'pandas/src/parser/io.c'])
438 )
439
440 extensions = []
441
442 for name, data in ext_data.items():
443 sources = [srcpath(data['pyxfile'], suffix=suffix, subdir='')]
444 pxds = [pxd(x) for x in data.get('pxdfiles', [])]
445 if suffix == '.pyx' and pxds:
446 sources.extend(pxds)
447
448 sources.extend(data.get('sources', []))
449
450 include = data.get('include', common_include)
451
452 obj = Extension('pandas.%s' % name,
453 sources=sources,
454 depends=data.get('depends', []),
455 include_dirs=include)
456
457 extensions.append(obj)
458
459
460 sparse_ext = Extension('pandas._sparse',
461 sources=[srcpath('sparse', suffix=suffix)],
462 include_dirs=[],
463 libraries=libraries)
464
465 extensions.extend([sparse_ext])
466
467 # if not ISRELEASED:
468 # extensions.extend([sandbox_ext])
469
470 if suffix == '.pyx' and 'setuptools' in sys.modules:
471 # undo dumb setuptools bug clobbering .pyx sources back to .c
472 for ext in extensions:
473 if ext.sources[0].endswith('.c'):
474 root, _ = os.path.splitext(ext.sources[0])
475 ext.sources[0] = root + suffix
476
477 ujson_ext = Extension('pandas.json',
478 depends=['pandas/src/ujson/lib/ultrajson.h',
479 'pandas/src/numpy_helper.h'],
480 sources=['pandas/src/ujson/python/ujson.c',
481 'pandas/src/ujson/python/objToJSON.c',
482 'pandas/src/ujson/python/JSONtoObj.c',
483 'pandas/src/ujson/lib/ultrajsonenc.c',
484 'pandas/src/ujson/lib/ultrajsondec.c',
485 'pandas/src/datetime/np_datetime.c',
486 'pandas/src/datetime/np_datetime_strings.c'],
487 include_dirs=['pandas/src/ujson/python',
488 'pandas/src/ujson/lib',
489 'pandas/src/datetime'] + common_include,
490 extra_compile_args=['-D_GNU_SOURCE'])
491
492
493 extensions.append(ujson_ext)
494
495
496 if _have_setuptools:
497 setuptools_kwargs["test_suite"] = "nose.collector"
498
499 write_version_py()
500
501 # The build cache system does string matching below this point.
502 # if you change something, be careful.
503
504 setup(name=DISTNAME,
505 version=FULLVERSION,
506 maintainer=AUTHOR,
507 packages=['pandas',
508 'pandas.compat',
509 'pandas.core',
510 'pandas.io',
511 'pandas.rpy',
512 'pandas.sandbox',
513 'pandas.sparse',
514 'pandas.sparse.tests',
515 'pandas.stats',
516 'pandas.util',
517 'pandas.tests',
518 'pandas.tools',
519 'pandas.tools.tests',
520 'pandas.tseries',
521 'pandas.tseries.tests',
522 'pandas.io.tests',
523 'pandas.io.tests.test_json',
524 'pandas.stats.tests',
525 ],
526 package_data={'pandas.io': ['tests/data/legacy_hdf/*.h5',
527 'tests/data/legacy_pickle/0.10.1/*.pickle',
528 'tests/data/legacy_pickle/0.11.0/*.pickle',
529 'tests/data/*.csv',
530 'tests/data/*.dta',
531 'tests/data/*.txt',
532 'tests/data/*.xls',
533 'tests/data/*.xlsx',
534 'tests/data/*.table',
535 'tests/data/*.html',
536 'tests/test_json/data/*.json'],
537 'pandas.tools': ['tests/*.csv'],
538 'pandas.tests': ['data/*.pickle',
539 'data/*.csv'],
540 'pandas.tseries.tests': ['data/*.pickle',
541 'data/*.csv']
542 },
543 ext_modules=extensions,
544 maintainer_email=EMAIL,
545 description=DESCRIPTION,
546 license=LICENSE,
547 cmdclass=cmdclass,
548 url=URL,
549 download_url=DOWNLOAD_URL,
550 long_description=LONG_DESCRIPTION,
551 classifiers=CLASSIFIERS,
552 platforms='any',
553 **setuptools_kwargs)
554
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
f2a7b9ca72ed444cd6bdd8a1ca50e1a2d9e915b7
|
BUG: allow as_index=False for non-aggregating groupby methods
closes #4648
closes #3417
|
@jreback merge-able? pretty trivial fix, but want to get your okay on it.
looks fine
FYI I think there might be an open issue related to this (aside from the one you indicated);
might be the same or unrelated, don't know
Does this work with an index which isn't plain (i.e. not just an arange)? Don't think you're testing for that...
@hayd good call thanks
Does this resolve #3805?
@cpcloud example I had in mind was something like this (with non-boring index), from your branch:
```
In [1]: df = pd.DataFrame([[1, 2], [2, 3], [1, 4], [1, 5], [2, 6]], index=list('abcde'))
In [2]: g = df.groupby(0, as_index=False)
In [3]: g.apply(lambda x: x)
Out[3]:
0 1
0 1 2
1 2 3
2 1 4
3 1 5
4 2 6
```
Shouldn't this have the index of the original DataFrame?
@jtratner nope (just tested)
ah then i misunderstood what `as_index` was supposed to do.
So I think the thing from SO was:
```
In [34]: g = df.groupby(0)
In [35]: g.apply(lambda x: x.head(2))
Out[35]:
0 1
0
1 a 1 2
c 1 4
2 b 2 3
e 2 6
In [36]: g = df.groupby(0, as_index=False)
In [37]: g.apply(lambda x: x.head(2))
Out[37]:
0 1
0
1 a 1 2
c 1 4
2 b 2 3
e 2 6
```
And we expect the latter not to include the group info.
```
0 1
a 1 2
c 1 4
b 2 3
e 2 6
```
Now I think about it, we could be completely misusing apply here...
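For reference, here is a minimal, self-contained sketch of the behaviour under discussion (the frame and the `apply` calls are taken from the examples above; the exact output depends on the pandas version, so treat it as an illustration rather than an authoritative result):
```
import pandas as pd

# A non-trivial (non-arange) index, as suggested above, so it is visible
# whether apply() keeps it or prepends the group keys as an index level.
df = pd.DataFrame([[1, 2], [2, 3], [1, 4], [1, 5], [2, 6]], index=list('abcde'))

# Default (as_index=True): the group keys become an extra index level.
print(df.groupby(0).apply(lambda x: x.head(2)))

# as_index=False: the expectation in this thread is that the group keys are
# NOT prepended, i.e. the original 'a'..'e' index is preserved.
print(df.groupby(0, as_index=False).apply(lambda x: x.head(2)))
```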
|
2013-08-25T19:42:22Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -217,6 +217,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
of a duplicate index (:issue:`4359`)
- In ``to_json``, fix date handling so milliseconds are the default timestamp
as the docstring says (:issue:`4362`).
+ - ``as_index`` is no longer ignored when doing groupby apply (:issue:`4648`), (:issue:`3417`)
- JSON NaT handling fixed, NaTs are now serialised to `null` (:issue:`4498`)
- Fixed JSON handling of escapable characters in JSON object keys (:issue:`4593`)
- Fixed passing ``keep_default_na=False`` when ``na_values=None`` (:issue:`4318`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -516,7 +516,7 @@ def _concat_objects(self, keys, values, not_indexed_same=False):
result = result.reindex(ax)
else:
result = result.reindex_axis(ax, axis=self.axis)
- elif self.group_keys:
+ elif self.group_keys and self.as_index:
group_keys = keys
group_levels = self.grouper.levels
group_names = self.grouper.names
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-6344
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
regression: nuisance warning for g++ clang++
The solution is probably similar to #6050 and #6053, as this problem was also introduced in b1b8a7a and the symptom is almost identical, but for g++ instead of gfortran.
A distinct symptom from #6050 is that I do see this on g++ 9.2.0 MSYS2 on Windows, but I don't see this on Ubuntu 18.04 g++ 7.4.0.
Also, with clang++ 9.0.0 on Windows I get
```
WARNING: No include directory found parsing "clang++ -xc++ -E -v -" output
```
on any trivial Meson C++ project,
meson.build
```meson
project('blah', 'cpp')
executable('foo', 'foo.cxx')
```
foo.cxx
```c++
int main(void) { return 0; }
```
on
```sh
meson setup build
```
a lot of nuisance warnings are printed like:
```
WARNING: No include directory found parsing "c++ -xc++ -E -v -" output
```
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="https://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
14 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
15
16 #### Dependencies
17
18 - [Python](https://python.org) (version 3.5 or newer)
19 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
20
21 #### Installing from source
22
23 You can run Meson directly from a revision control checkout or an
24 extracted tarball. If you wish you can install it locally with the
25 standard Python command
26
27 ```sh
28 python3 -m pip install meson <your options here>
29 ```
30
31 Meson is also available from
32 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
33 with `pip3 install meson` (this does not require a source checkout,
34 pip will download the package automatically). The exact command to
35 type to install with Pip can vary between systems, be sure to use the
36 Python 3 version of Pip.
37
38 For builds using Ninja, Ninja can be [downloaded directly](https://github.com/ninja-build/ninja/releases) or via
39
40 ```sh
41 python3 -m pip install ninja
42 ```
43
44 #### Running
45
46 Meson requires that you have a source directory and a build directory
47 and that these two are different. In your source root must exist a
48 file called `meson.build`. To generate the build system run this
49 command:
50
51 `meson <source directory> <build directory>`
52
53 Depending on how you obtained Meson the command might also be called
54 `meson.py` instead of plain `meson`. In the rest of this document we
55 are going to use the latter form.
56
57 You can omit either of the two directories, and Meson will substitute
58 the current directory and autodetect what you mean. This allows you to
59 do things like this:
60
61 `cd source_root; mkdir builddir; cd builddir; meson ..`
62
63 or
64
65 `cd source_root; mkdir builddir; meson builddir`
66
67 To compile, cd into your build directory and type `ninja`. To run unit
68 tests, type `ninja test`.
69
70 Install is the same but it can take an extra argument:
71
72 `DESTDIR=/destdir/path ninja install`
73
74 `DESTDIR` can be omitted. If you are installing to system directories,
75 you may need to run this command with sudo.
76
77
78 #### Contributing
79
80 We love code contributions. See the [contribution
81 page](https://mesonbuild.com/Contributing.html) on the web site for
82 details.
83
84
85 #### IRC
86
87 The irc channel for Meson is `#mesonbuild` over at Freenode.
88
89 You can use [FreeNode's official webchat][meson_irc]
90 to connect to this channel.
91
92 [meson_irc]: https://webchat.freenode.net/?channels=%23mesonbuild
93
94 #### Further info
95
96 More information about the Meson build system can be found at the
97 [project's home page](https://mesonbuild.com).
98
99 Meson is a registered trademark of Jussi Pakkanen.
100
[end of README.md]
[start of mesonbuild/compilers/mixins/gnu.py]
1 # Copyright 2019 The meson development team
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Provides mixins for GNU compilers and GNU-like compilers."""
16
17 import abc
18 import functools
19 import os
20 import pathlib
21 import re
22 import subprocess
23 import typing
24
25 from ... import mesonlib
26 from ... import mlog
27
28 if typing.TYPE_CHECKING:
29 from ...coredata import UserOption # noqa: F401
30 from ...environment import Environment
31
32 # XXX: prevent circular references.
33 # FIXME: this really is a posix interface not a c-like interface
34 clike_debug_args = {
35 False: [],
36 True: ['-g'],
37 } # type: typing.Dict[bool, typing.List[str]]
38
39 gnulike_buildtype_args = {
40 'plain': [],
41 'debug': [],
42 'debugoptimized': [],
43 'release': [],
44 'minsize': [],
45 'custom': [],
46 } # type: typing.Dict[str, typing.List[str]]
47
48 gnu_optimization_args = {
49 '0': [],
50 'g': ['-Og'],
51 '1': ['-O1'],
52 '2': ['-O2'],
53 '3': ['-O3'],
54 's': ['-Os'],
55 } # type: typing.Dict[str, typing.List[str]]
56
57 gnulike_instruction_set_args = {
58 'mmx': ['-mmmx'],
59 'sse': ['-msse'],
60 'sse2': ['-msse2'],
61 'sse3': ['-msse3'],
62 'ssse3': ['-mssse3'],
63 'sse41': ['-msse4.1'],
64 'sse42': ['-msse4.2'],
65 'avx': ['-mavx'],
66 'avx2': ['-mavx2'],
67 'neon': ['-mfpu=neon'],
68 } # type: typing.Dict[str, typing.List[str]]
69
70 gnu_symbol_visibility_args = {
71 '': [],
72 'default': ['-fvisibility=default'],
73 'internal': ['-fvisibility=internal'],
74 'hidden': ['-fvisibility=hidden'],
75 'protected': ['-fvisibility=protected'],
76 'inlineshidden': ['-fvisibility=hidden', '-fvisibility-inlines-hidden'],
77 } # type: typing.Dict[str, typing.List[str]]
78
79 gnu_color_args = {
80 'auto': ['-fdiagnostics-color=auto'],
81 'always': ['-fdiagnostics-color=always'],
82 'never': ['-fdiagnostics-color=never'],
83 } # type: typing.Dict[str, typing.List[str]]
84
85
86 @functools.lru_cache(maxsize=None)
87 def gnulike_default_include_dirs(compiler: typing.Tuple[str], lang: str) -> typing.List[str]:
88 lang_map = {
89 'c': 'c',
90 'cpp': 'c++',
91 'objc': 'objective-c',
92 'objcpp': 'objective-c++'
93 }
94 if lang not in lang_map:
95 return []
96 lang = lang_map[lang]
97 env = os.environ.copy()
98 env["LC_ALL"] = 'C'
99 cmd = list(compiler) + ['-x{}'.format(lang), '-E', '-v', '-']
100 p = subprocess.Popen(
101 cmd,
102 stdin=subprocess.DEVNULL,
103 stderr=subprocess.PIPE,
104 stdout=subprocess.PIPE,
105 env=env
106 )
107 stderr = p.stderr.read().decode('utf-8', errors='replace')
108 parse_state = 0
109 paths = []
110 for line in stderr.split('\n'):
111 if parse_state == 0:
112 if line == '#include "..." search starts here:':
113 parse_state = 1
114 elif parse_state == 1:
115 if line == '#include <...> search starts here:':
116 parse_state = 2
117 else:
118 paths.append(line[1:])
119 elif parse_state == 2:
120 if line == 'End of search list.':
121 break
122 else:
123 paths.append(line[1:])
124 if not paths:
125 mlog.warning('No include directory found parsing "{cmd}" output'.format(cmd=" ".join(cmd)))
126 return paths
127
128
129 class GnuLikeCompiler(metaclass=abc.ABCMeta):
130 """
131 GnuLikeCompiler is a common interface to all compilers implementing
132 the GNU-style commandline interface. This includes GCC, Clang
133 and ICC. Certain functionality between them is different and requires
134 that the actual concrete subclass define their own implementation.
135 """
136
137 LINKER_PREFIX = '-Wl,'
138
139 def __init__(self):
140 self.base_options = ['b_pch', 'b_lto', 'b_pgo', 'b_sanitize', 'b_coverage',
141 'b_ndebug', 'b_staticpic', 'b_pie']
142 if not (self.info.is_windows() or self.info.is_cygwin() or self.info.is_openbsd()):
143 self.base_options.append('b_lundef')
144 if not self.info.is_windows() or self.info.is_cygwin():
145 self.base_options.append('b_asneeded')
146 # All GCC-like backends can do assembly
147 self.can_compile_suffixes.add('s')
148
149 def get_pic_args(self) -> typing.List[str]:
150 if self.info.is_windows() or self.info.is_cygwin() or self.info.is_darwin():
151             return [] # On Windows and OS X, pic is always on.
152 return ['-fPIC']
153
154 def get_pie_args(self) -> typing.List[str]:
155 return ['-fPIE']
156
157 def get_buildtype_args(self, buildtype: str) -> typing.List[str]:
158 return gnulike_buildtype_args[buildtype]
159
160 @abc.abstractmethod
161 def get_optimization_args(self, optimization_level: str) -> typing.List[str]:
162 raise NotImplementedError("get_optimization_args not implemented")
163
164 def get_debug_args(self, is_debug: bool) -> typing.List[str]:
165 return clike_debug_args[is_debug]
166
167 @abc.abstractmethod
168 def get_pch_suffix(self) -> str:
169 raise NotImplementedError("get_pch_suffix not implemented")
170
171 def split_shlib_to_parts(self, fname: str) -> typing.Tuple[str, str]:
172 return os.path.dirname(fname), fname
173
174 def get_instruction_set_args(self, instruction_set: str) -> typing.Optional[typing.List[str]]:
175 return gnulike_instruction_set_args.get(instruction_set, None)
176
177 def get_default_include_dirs(self) -> typing.List[str]:
178 return gnulike_default_include_dirs(tuple(self.exelist), self.language)
179
180 @abc.abstractmethod
181 def openmp_flags(self) -> typing.List[str]:
182 raise NotImplementedError("openmp_flags not implemented")
183
184 def gnu_symbol_visibility_args(self, vistype: str) -> typing.List[str]:
185 return gnu_symbol_visibility_args[vistype]
186
187 def gen_vs_module_defs_args(self, defsfile: str) -> typing.List[str]:
188 if not isinstance(defsfile, str):
189 raise RuntimeError('Module definitions file should be str')
190 # On Windows targets, .def files may be specified on the linker command
191 # line like an object file.
192 if self.info.is_windows() or self.info.is_cygwin():
193 return [defsfile]
194 # For other targets, discard the .def file.
195 return []
196
197 def get_argument_syntax(self) -> str:
198 return 'gcc'
199
200 def get_profile_generate_args(self) -> typing.List[str]:
201 return ['-fprofile-generate']
202
203 def get_profile_use_args(self) -> typing.List[str]:
204 return ['-fprofile-use', '-fprofile-correction']
205
206 def get_gui_app_args(self, value: bool) -> typing.List[str]:
207 if self.info.is_windows() or self.info.is_cygwin():
208 return ['-mwindows' if value else '-mconsole']
209 return []
210
211 def compute_parameters_with_absolute_paths(self, parameter_list: typing.List[str], build_dir: str) -> typing.List[str]:
212 for idx, i in enumerate(parameter_list):
213 if i[:2] == '-I' or i[:2] == '-L':
214 parameter_list[idx] = i[:2] + os.path.normpath(os.path.join(build_dir, i[2:]))
215
216 return parameter_list
217
218 @functools.lru_cache()
219 def _get_search_dirs(self, env: 'Environment') -> str:
220 extra_args = ['--print-search-dirs']
221 stdo = None
222 with self._build_wrapper('', env, extra_args=extra_args,
223 dependencies=None, mode='compile',
224 want_output=True) as p:
225 stdo = p.stdo
226 return stdo
227
228 def _split_fetch_real_dirs(self, pathstr: str) -> typing.List[str]:
229 # We need to use the path separator used by the compiler for printing
230 # lists of paths ("gcc --print-search-dirs"). By default
231 # we assume it uses the platform native separator.
232 pathsep = os.pathsep
233
234 # clang uses ':' instead of ';' on Windows https://reviews.llvm.org/D61121
235 # so we need to repair things like 'C:\foo:C:\bar'
236 if pathsep == ';':
237 pathstr = re.sub(r':([^/\\])', r';\1', pathstr)
238
239 # pathlib treats empty paths as '.', so filter those out
240 paths = [p for p in pathstr.split(pathsep) if p]
241
242 result = []
243 for p in paths:
244 # GCC returns paths like this:
245 # /usr/lib/gcc/x86_64-linux-gnu/8/../../../../x86_64-linux-gnu/lib
246 # It would make sense to normalize them to get rid of the .. parts
247 # Sadly when you are on a merged /usr fs it also kills these:
248 # /lib/x86_64-linux-gnu
249 # since /lib is a symlink to /usr/lib. This would mean
250 # paths under /lib would be considered not a "system path",
251 # which is wrong and breaks things. Store everything, just to be sure.
252 pobj = pathlib.Path(p)
253 unresolved = pobj.as_posix()
254 if pobj.exists():
255 if unresolved not in result:
256 result.append(unresolved)
257 try:
258 resolved = pathlib.Path(p).resolve().as_posix()
259 if resolved not in result:
260 result.append(resolved)
261 except FileNotFoundError:
262 pass
263 return result
264
265 def get_compiler_dirs(self, env: 'Environment', name: str) -> typing.List[str]:
266 '''
267 Get dirs from the compiler, either `libraries:` or `programs:`
268 '''
269 stdo = self._get_search_dirs(env)
270 for line in stdo.split('\n'):
271 if line.startswith(name + ':'):
272 return self._split_fetch_real_dirs(line.split('=', 1)[1])
273 return []
274
275 def get_lto_compile_args(self) -> typing.List[str]:
276 return ['-flto']
277
278 def sanitizer_compile_args(self, value: str) -> typing.List[str]:
279 if value == 'none':
280 return []
281 args = ['-fsanitize=' + value]
282 if 'address' in value: # for -fsanitize=address,undefined
283 args.append('-fno-omit-frame-pointer')
284 return args
285
286 def get_output_args(self, target: str) -> typing.List[str]:
287 return ['-o', target]
288
289 def get_dependency_gen_args(self, outtarget, outfile):
290 return ['-MD', '-MQ', outtarget, '-MF', outfile]
291
292 def get_compile_only_args(self) -> typing.List[str]:
293 return ['-c']
294
295 def get_include_args(self, path: str, is_system: bool) -> typing.List[str]:
296 if not path:
297 path = '.'
298 if is_system:
299 return ['-isystem' + path]
300 return ['-I' + path]
301
302 @classmethod
303 def use_linker_args(cls, linker: str) -> typing.List[str]:
304 return ['-fuse-ld={}'.format(linker)]
305
306
307 class GnuCompiler(GnuLikeCompiler):
308 """
309 GnuCompiler represents an actual GCC in its many incarnations.
310 Compilers imitating GCC (Clang/Intel) should use the GnuLikeCompiler ABC.
311 """
312
313 def __init__(self, defines: typing.Dict[str, str]):
314 super().__init__()
315 self.id = 'gcc'
316 self.defines = defines or {}
317 self.base_options.append('b_colorout')
318
319 def get_colorout_args(self, colortype: str) -> typing.List[str]:
320 if mesonlib.version_compare(self.version, '>=4.9.0'):
321 return gnu_color_args[colortype][:]
322 return []
323
324 def get_warn_args(self, level: str) -> typing.List[str]:
325 args = super().get_warn_args(level)
326 if mesonlib.version_compare(self.version, '<4.8.0') and '-Wpedantic' in args:
327 # -Wpedantic was added in 4.8.0
328 # https://gcc.gnu.org/gcc-4.8/changes.html
329 args[args.index('-Wpedantic')] = '-pedantic'
330 return args
331
332 def has_builtin_define(self, define: str) -> bool:
333 return define in self.defines
334
335 def get_builtin_define(self, define: str) -> typing.Optional[str]:
336 if define in self.defines:
337 return self.defines[define]
338 return None
339
340 def get_optimization_args(self, optimization_level: str) -> typing.List[str]:
341 return gnu_optimization_args[optimization_level]
342
343 def get_pch_suffix(self) -> str:
344 return 'gch'
345
346 def openmp_flags(self) -> typing.List[str]:
347 return ['-fopenmp']
348
349 def has_arguments(self, args, env, code, mode):
350 # For some compiler command line arguments, the GNU compilers will
351         # emit a warning on stderr indicating that an option is valid for
352         # another language, but still complete with exit_success
353 with self._build_wrapper(code, env, args, None, mode, disable_cache=False, want_output=True) as p:
354 result = p.returncode == 0
355 if self.language in {'cpp', 'objcpp'} and 'is valid for C/ObjC' in p.stde:
356 result = False
357 if self.language in {'c', 'objc'} and 'is valid for C++/ObjC++' in p.stde:
358 result = False
359 return result, p.cached
360
[end of mesonbuild/compilers/mixins/gnu.py]
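An aside on the warning reported in the issue: `gnulike_default_include_dirs` above splits the compiler's stderr on `'\n'` and compares each line for equality with markers such as `#include "..." search starts here:`. If the compiler emits CRLF line endings (as MinGW/MSYS2 toolchains on Windows commonly do when writing to a pipe), every line keeps a trailing `'\r'`, none of the comparisons match, and the function returns an empty list, which is exactly the condition that triggers the "No include directory found" warning even though the markers are clearly present in the compiler output quoted later in this report. The sketch below shows one way the parse could be made tolerant of that; it illustrates the suspected failure mode and is not necessarily the fix that was merged (the helper name is invented for this illustration):

```python
# Standalone, simplified re-parse of the captured stderr; names here are
# illustrative only and not part of the meson codebase.
def parse_default_include_dirs(stderr: str):
    parse_state = 0
    paths = []
    for raw_line in stderr.split('\n'):
        line = raw_line.rstrip('\r')  # tolerate CRLF output on Windows
        if parse_state == 0:
            if line == '#include "..." search starts here:':
                parse_state = 1
        elif parse_state == 1:
            if line == '#include <...> search starts here:':
                parse_state = 2
            else:
                paths.append(line.strip())
        elif parse_state == 2:
            if line == 'End of search list.':
                break
            else:
                paths.append(line.strip())
    return paths
```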
[start of mesonbuild/dependencies/boost.py]
1 # Copyright 2013-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # This file contains the detection logic for miscellaneous external dependencies.
16
17 import glob
18 import os
19
20 from .. import mlog
21 from .. import mesonlib
22 from ..environment import detect_cpu_family
23
24 from .base import (DependencyException, ExternalDependency)
25 from .misc import ThreadDependency
26
27 # On windows 3 directory layouts are supported:
28 # * The default layout (versioned) installed:
29 # - $BOOST_ROOT/include/boost-x_x/boost/*.hpp
30 # - $BOOST_ROOT/lib/*.lib
31 # * The non-default layout (system) installed:
32 # - $BOOST_ROOT/include/boost/*.hpp
33 # - $BOOST_ROOT/lib/*.lib
34 # * The pre-built binaries from sf.net:
35 # - $BOOST_ROOT/boost/*.hpp
36 # - $BOOST_ROOT/lib<arch>-<compiler>/*.lib where arch=32/64 and compiler=msvc-14.1
37 #
38 # Note that we should also try to support:
39 # mingw-w64 / Windows : libboost_<module>-mt.a (location = <prefix>/mingw64/lib/)
40 # libboost_<module>-mt.dll.a
41 #
42 # Library names supported:
43 # - libboost_<module>-<compiler>-mt-gd-x_x.lib (static)
44 # - boost_<module>-<compiler>-mt-gd-x_x.lib|.dll (shared)
45 # - libboost_<module>.lib (static)
46 # - boost_<module>.lib|.dll (shared)
47 # where compiler is vc141 for example.
48 #
49 # NOTE: -gd means runtime and build time debugging is on
50 # -mt means threading=multi
51 #
52 # The `modules` argument accept library names. This is because every module that
53 # has libraries to link against also has multiple options regarding how to
54 # link. See for example:
55 # * http://www.boost.org/doc/libs/1_65_1/libs/test/doc/html/boost_test/usage_variants.html
56 # * http://www.boost.org/doc/libs/1_65_1/doc/html/stacktrace/configuration_and_build.html
57 # * http://www.boost.org/doc/libs/1_65_1/libs/math/doc/html/math_toolkit/main_tr1.html
58
59 # **On Unix**, official packaged versions of boost libraries follow the following schemes:
60 #
61 # Linux / Debian: libboost_<module>.so -> libboost_<module>.so.1.66.0
62 # Linux / Red Hat: libboost_<module>.so -> libboost_<module>.so.1.66.0
63 # Linux / OpenSuse: libboost_<module>.so -> libboost_<module>.so.1.66.0
64 # Win / Cygwin: libboost_<module>.dll.a (location = /usr/lib)
65 # libboost_<module>.a
66 # cygboost_<module>_1_64.dll (location = /usr/bin)
67 # Win / VS: boost_<module>-vc<ver>-mt[-gd]-<arch>-1_67.dll (location = C:/local/boost_1_67_0)
68 # Mac / homebrew: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /usr/local/lib)
69 # Mac / macports: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /opt/local/lib)
70 #
71 # It's not clear that any other abi tags (e.g. -gd) are used in official packages.
72 #
73 # On Linux systems, boost libs have multithreading support enabled, but without the -mt tag.
74 #
75 # Boost documentation recommends using complex abi tags like "-lboost_regex-gcc34-mt-d-1_36".
76 # (See http://www.boost.org/doc/libs/1_66_0/more/getting_started/unix-variants.html#library-naming)
77 # However, it's not clear that any Unix distribution follows this scheme.
78 # Furthermore, the boost documentation for unix above uses examples from windows like
79 # "libboost_regex-vc71-mt-d-x86-1_34.lib", so apparently the abi tags may be more aimed at windows.
80 #
81 # Probably we should use the linker search path to decide which libraries to use. This will
82 # make it possible to find the macports boost libraries without setting BOOST_ROOT, and will
83 # also mean that it would be possible to use user-installed boost libraries when official
84 # packages are installed.
85 #
86 # We thus follow the following strategy:
87 # 1. Look for libraries using compiler.find_library( )
88 # 1.1 On Linux, just look for boost_<module>
89 # 1.2 On other systems (e.g. Mac) look for boost_<module>-mt if multithreading.
90 # 1.3 Otherwise look for boost_<module>
91 # 2. Fall back to previous approach
92 # 2.1. Search particular directories.
93 # 2.2. Find boost libraries with unknown suffixes using file-name globbing.
94
95 # TODO: Unix: Don't assume we know where the boost dir is, rely on -Idir and -Ldir being set.
96 # TODO: Allow user to specify suffix in BOOST_SUFFIX, or add specific options like BOOST_DEBUG for 'd' for debug.
97
98 class BoostDependency(ExternalDependency):
99 def __init__(self, environment, kwargs):
100 super().__init__('boost', environment, 'cpp', kwargs)
101 self.need_static_link = ['boost_exception', 'boost_test_exec_monitor']
102 self.is_debug = environment.coredata.get_builtin_option('buildtype').startswith('debug')
103 threading = kwargs.get("threading", "multi")
104 self.is_multithreading = threading == "multi"
105
106 self.requested_modules = self.get_requested(kwargs)
107 if 'thread' in self.requested_modules:
108 self._add_sub_dependency(ThreadDependency, environment, kwargs)
109
110 self.boost_root = None
111 self.boost_roots = []
112 self.incdir = None
113 self.libdir = None
114
115 if 'BOOST_ROOT' in os.environ:
116 self.boost_root = os.environ['BOOST_ROOT']
117 self.boost_roots = [self.boost_root]
118 if not os.path.isabs(self.boost_root):
119 raise DependencyException('BOOST_ROOT must be an absolute path.')
120 if 'BOOST_INCLUDEDIR' in os.environ:
121 self.incdir = os.environ['BOOST_INCLUDEDIR']
122 if 'BOOST_LIBRARYDIR' in os.environ:
123 self.libdir = os.environ['BOOST_LIBRARYDIR']
124
125 if self.boost_root is None:
126 if self.env.machines[self.for_machine].is_windows():
127 self.boost_roots = self.detect_win_roots()
128 else:
129 self.boost_roots = self.detect_nix_roots()
130
131 if self.incdir is None:
132 if self.env.machines[self.for_machine].is_windows():
133 self.incdir = self.detect_win_incdir()
134 else:
135 self.incdir = self.detect_nix_incdir()
136
137 mlog.debug('Boost library root dir is', mlog.bold(self.boost_root))
138 mlog.debug('Boost include directory is', mlog.bold(self.incdir))
139
140 # 1. check if we can find BOOST headers.
141 self.detect_headers_and_version()
142
143 if not self.is_found:
144 return # if we can not find 'boost/version.hpp'
145
146 # 2. check if we can find BOOST libraries.
147 self.detect_lib_modules()
148 mlog.debug('Boost library directory is', mlog.bold(self.libdir))
149
150 mlog.debug('Installed Boost libraries: ')
151 for key in sorted(self.lib_modules.keys()):
152 mlog.debug(key, self.lib_modules[key])
153
154 # 3. check if requested modules are valid, that is, either found or in the list of known boost libraries
155 self.check_invalid_modules()
156
157 # 4. final check whether or not we find all requested and valid modules
158 self.check_find_requested_modules()
159
160 def check_invalid_modules(self):
161 invalid_modules = [c for c in self.requested_modules if 'boost_' + c not in self.lib_modules and 'boost_' + c not in BOOST_LIBS]
162
163 # previous versions of meson allowed include dirs as modules
164 remove = []
165 for m in invalid_modules:
166 if m in BOOST_DIRS:
167 mlog.warning('Requested boost library', mlog.bold(m), 'that doesn\'t exist. '
168 'This will be an error in the future')
169 remove.append(m)
170
171 self.requested_modules = [x for x in self.requested_modules if x not in remove]
172 invalid_modules = [x for x in invalid_modules if x not in remove]
173
174 if invalid_modules:
175 mlog.error('Invalid Boost modules: ' + ', '.join(invalid_modules))
176 return True
177 else:
178 return False
179
180 def log_details(self):
181 module_str = ', '.join(self.requested_modules)
182 return module_str
183
184 def log_info(self):
185 if self.boost_root:
186 return self.boost_root
187 return ''
188
189 def detect_nix_roots(self):
190 return [os.path.abspath(os.path.join(x, '..'))
191 for x in self.clib_compiler.get_default_include_dirs()]
192
193 def detect_win_roots(self):
194 res = []
195 # Where boost documentation says it should be
196 globtext = 'C:\\Program Files\\boost\\boost_*'
197 files = glob.glob(globtext)
198 res.extend(files)
199
200 # Where boost built from source actually installs it
201 if os.path.isdir('C:\\Boost'):
202 res.append('C:\\Boost')
203
204 # Where boost prebuilt binaries are
205 globtext = 'C:\\local\\boost_*'
206 files = glob.glob(globtext)
207 res.extend(files)
208 return res
209
210 def detect_nix_incdir(self):
211 if self.boost_root:
212 return os.path.join(self.boost_root, 'include')
213 return None
214
215 # FIXME: Should pick a version that matches the requested version
216 # Returns the folder that contains the boost folder.
217 def detect_win_incdir(self):
218 for root in self.boost_roots:
219 globtext = os.path.join(root, 'include', 'boost-*')
220 incdirs = glob.glob(globtext)
221 if incdirs:
222 return incdirs[0]
223 incboostdir = os.path.join(root, 'include', 'boost')
224 if os.path.isdir(incboostdir):
225 return os.path.join(root, 'include')
226 incboostdir = os.path.join(root, 'boost')
227 if os.path.isdir(incboostdir):
228 return root
229 return None
230
231 def get_compile_args(self):
232 args = []
233 include_dir = self.incdir
234
235 # Use "-isystem" when including boost headers instead of "-I"
236 # to avoid compiler warnings/failures when "-Werror" is used
237
238 # Careful not to use "-isystem" on default include dirs as it
239 # breaks some of the headers for certain gcc versions
240
241 # For example, doing g++ -isystem /usr/include on a simple
242 # "int main()" source results in the error:
243 # "/usr/include/c++/6.3.1/cstdlib:75:25: fatal error: stdlib.h: No such file or directory"
244
245 # See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129
246 # and http://stackoverflow.com/questions/37218953/isystem-on-a-system-include-directory-causes-errors
247 # for more details
248
249 if include_dir and include_dir not in self.clib_compiler.get_default_include_dirs():
250 args.append("".join(self.clib_compiler.get_include_args(include_dir, True)))
251 return args
252
253 def get_requested(self, kwargs):
254 candidates = mesonlib.extract_as_list(kwargs, 'modules')
255 for c in candidates:
256 if not isinstance(c, str):
257 raise DependencyException('Boost module argument is not a string.')
258 return candidates
259
260 def detect_headers_and_version(self):
261 try:
262 version = self.clib_compiler.get_define('BOOST_LIB_VERSION', '#include <boost/version.hpp>', self.env, self.get_compile_args(), [], disable_cache=True)[0]
263 except mesonlib.EnvironmentException:
264 return
265 except TypeError:
266 return
267 # Remove quotes
268 version = version[1:-1]
269 # Fix version string
270 self.version = version.replace('_', '.')
271 self.is_found = True
272
273 def detect_lib_modules(self):
274 self.lib_modules = {}
275 # 1. Try to find modules using compiler.find_library( )
276 if self.find_libraries_with_abi_tags(self.abi_tags()):
277 pass
278 # 2. Fall back to the old method
279 else:
280 if self.env.machines[self.for_machine].is_windows():
281 self.detect_lib_modules_win()
282 else:
283 self.detect_lib_modules_nix()
284
285 def check_find_requested_modules(self):
286 # 3. Check if we can find the modules
287 for m in self.requested_modules:
288 if 'boost_' + m not in self.lib_modules:
289 mlog.debug('Requested Boost library {!r} not found'.format(m))
290 self.is_found = False
291
292 def modname_from_filename(self, filename):
293 modname = os.path.basename(filename)
294 modname = modname.split('.', 1)[0]
295 modname = modname.split('-', 1)[0]
296 if modname.startswith('libboost'):
297 modname = modname[3:]
298 return modname
299
300 def compiler_tag(self):
301 tag = None
302 compiler = self.env.detect_cpp_compiler(self.for_machine)
303 if self.env.machines[self.for_machine].is_windows():
304 if compiler.get_id() in ['msvc', 'clang-cl']:
305 comp_ts_version = compiler.get_toolset_version()
306 compiler_ts = comp_ts_version.split('.')
307 # FIXME - what about other compilers?
308 tag = '-vc{}{}'.format(compiler_ts[0], compiler_ts[1])
309 else:
310 tag = ''
311 return tag
312
313 def threading_tag(self):
314 if not self.is_multithreading:
315 return ''
316
317 if self.env.machines[self.for_machine].is_darwin():
318 # - Mac: requires -mt for multithreading, so should not fall back to non-mt libraries.
319 return '-mt'
320 elif self.env.machines[self.for_machine].is_windows():
321 # - Windows: requires -mt for multithreading, so should not fall back to non-mt libraries.
322 return '-mt'
323 else:
324 # - Linux: leaves off -mt but libraries are multithreading-aware.
325 # - Cygwin: leaves off -mt but libraries are multithreading-aware.
326 return ''
327
328 def version_tag(self):
329 return '-' + self.version.replace('.', '_')
330
331 def debug_tag(self):
332 return '-gd' if self.is_debug else ''
333
334 def arch_tag(self):
335 # currently only applies to windows msvc installed binaries
336 if self.env.detect_cpp_compiler(self.for_machine).get_id() not in ['msvc', 'clang-cl']:
337 return ''
338 # pre-compiled binaries only added arch tag for versions > 1.64
339 if float(self.version) < 1.65:
340 return ''
341 arch = detect_cpu_family(self.env.coredata.compilers.host)
342 if arch == 'x86':
343 return '-x32'
344 elif arch == 'x86_64':
345 return '-x64'
346 return ''
347
348 def versioned_abi_tag(self):
349 return self.compiler_tag() + self.threading_tag() + self.debug_tag() + self.arch_tag() + self.version_tag()
350
351 # FIXME - how to handle different distributions, e.g. for Mac? Currently we handle homebrew and macports, but not fink.
352 def abi_tags(self):
353 if self.env.machines[self.for_machine].is_windows():
354 return [self.versioned_abi_tag(), self.threading_tag()]
355 else:
356 return [self.threading_tag()]
357
358 def sourceforge_dir(self):
359 if self.env.detect_cpp_compiler(self.for_machine).get_id() != 'msvc':
360 return None
361 comp_ts_version = self.env.detect_cpp_compiler(self.for_machine).get_toolset_version()
362 arch = detect_cpu_family(self.env.coredata.compilers.host)
363 if arch == 'x86':
364 return 'lib32-msvc-{}'.format(comp_ts_version)
365 elif arch == 'x86_64':
366 return 'lib64-msvc-{}'.format(comp_ts_version)
367 else:
368 # Does anyone do Boost cross-compiling to other archs on Windows?
369 return None
370
371 def find_libraries_with_abi_tag(self, tag):
372
373 # All modules should have the same tag
374 self.lib_modules = {}
375
376 all_found = True
377
378 for module in self.requested_modules:
379 libname = 'boost_' + module + tag
380
381 args = self.clib_compiler.find_library(libname, self.env, self.extra_lib_dirs())
382 if args is None:
383 mlog.debug("Couldn\'t find library '{}' for boost module '{}' (ABI tag = '{}')".format(libname, module, tag))
384 all_found = False
385 else:
386 mlog.debug('Link args for boost module "{}" are {}'.format(module, args))
387 self.lib_modules['boost_' + module] = args
388
389 return all_found
390
391 def find_libraries_with_abi_tags(self, tags):
392 for tag in tags:
393 if self.find_libraries_with_abi_tag(tag):
394 return True
395 return False
396
397 def detect_lib_modules_win(self):
398 if not self.libdir:
399 # The libdirs in the distributed binaries (from sf)
400 lib_sf = self.sourceforge_dir()
401
402 if self.boost_root:
403 roots = [self.boost_root]
404 else:
405 roots = self.boost_roots
406 for root in roots:
407 # The default libdir when building
408 libdir = os.path.join(root, 'lib')
409 if os.path.isdir(libdir):
410 self.libdir = libdir
411 break
412 if lib_sf:
413 full_path = os.path.join(root, lib_sf)
414 if os.path.isdir(full_path):
415 self.libdir = full_path
416 break
417
418 if not self.libdir:
419 return
420
421 for name in self.need_static_link:
422 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
423 libname = 'lib' + name + self.versioned_abi_tag() + '.lib'
424 if os.path.isfile(os.path.join(self.libdir, libname)):
425 self.lib_modules[self.modname_from_filename(libname)] = [libname]
426 else:
427 libname = "lib{}.lib".format(name)
428 if os.path.isfile(os.path.join(self.libdir, libname)):
429 self.lib_modules[name[3:]] = [libname]
430
431 # globber1 applies to a layout=system installation
432 # globber2 applies to a layout=versioned installation
433 globber1 = 'libboost_*' if self.static else 'boost_*'
434 globber2 = globber1 + self.versioned_abi_tag()
435 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
436 globber2_matches = glob.glob(os.path.join(self.libdir, globber2 + '.lib'))
437 for entry in globber2_matches:
438 fname = os.path.basename(entry)
439 self.lib_modules[self.modname_from_filename(fname)] = [fname]
440 if not globber2_matches:
441 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
442 for entry in glob.glob(os.path.join(self.libdir, globber1 + '.lib')):
443 if self.static:
444 fname = os.path.basename(entry)
445 self.lib_modules[self.modname_from_filename(fname)] = [fname]
446
447 def detect_lib_modules_nix(self):
448 if self.static:
449 libsuffix = 'a'
450 elif self.env.machines[self.for_machine].is_darwin():
451 libsuffix = 'dylib'
452 else:
453 libsuffix = 'so'
454
455 globber = 'libboost_*.{}'.format(libsuffix)
456 if self.libdir:
457 libdirs = [self.libdir]
458 elif self.boost_root is None:
459 libdirs = mesonlib.get_library_dirs()
460 else:
461 libdirs = [os.path.join(self.boost_root, 'lib')]
462 for libdir in libdirs:
463 for name in self.need_static_link:
464 libname = 'lib{}.a'.format(name)
465 if os.path.isfile(os.path.join(libdir, libname)):
466 self.lib_modules[name] = [libname]
467 for entry in glob.glob(os.path.join(libdir, globber)):
468 # I'm not 100% sure what to do here. Some distros
469 # have modules such as thread only as -mt versions.
470 # On debian all packages are built threading=multi
471 # but not suffixed with -mt.
472 # FIXME: implement detect_lib_modules_{debian, redhat, ...}
473 # FIXME: this wouldn't work with -mt-gd either. -BDR
474 if self.is_multithreading and mesonlib.is_debianlike():
475 pass
476 elif self.is_multithreading and entry.endswith('-mt.{}'.format(libsuffix)):
477 pass
478 elif not entry.endswith('-mt.{}'.format(libsuffix)):
479 pass
480 else:
481 continue
482 modname = self.modname_from_filename(entry)
483 if modname not in self.lib_modules:
484 self.lib_modules[modname] = [entry]
485
486 def extra_lib_dirs(self):
487 if self.libdir:
488 return [self.libdir]
489 elif self.boost_root:
490 return [os.path.join(self.boost_root, 'lib')]
491 return []
492
493 def get_link_args(self, **kwargs):
494 args = []
495 for d in self.extra_lib_dirs():
496 args += self.clib_compiler.get_linker_search_args(d)
497 for lib in self.requested_modules:
498 args += self.lib_modules['boost_' + lib]
499 return args
500
501 def get_sources(self):
502 return []
503
504 # Generated with boost_names.py
505 BOOST_LIBS = [
506 'boost_atomic',
507 'boost_chrono',
508 'boost_chrono',
509 'boost_container',
510 'boost_context',
511 'boost_coroutine',
512 'boost_date_time',
513 'boost_exception',
514 'boost_fiber',
515 'boost_filesystem',
516 'boost_graph',
517 'boost_iostreams',
518 'boost_locale',
519 'boost_log',
520 'boost_log_setup',
521 'boost_math_tr1',
522 'boost_math_tr1f',
523 'boost_math_tr1l',
524 'boost_math_c99',
525 'boost_math_c99f',
526 'boost_math_c99l',
527 'boost_math_tr1',
528 'boost_math_tr1f',
529 'boost_math_tr1l',
530 'boost_math_c99',
531 'boost_math_c99f',
532 'boost_math_c99l',
533 'boost_math_tr1',
534 'boost_math_tr1f',
535 'boost_math_tr1l',
536 'boost_math_c99',
537 'boost_math_c99f',
538 'boost_math_c99l',
539 'boost_math_tr1',
540 'boost_math_tr1f',
541 'boost_math_tr1l',
542 'boost_math_c99',
543 'boost_math_c99f',
544 'boost_math_c99l',
545 'boost_math_tr1',
546 'boost_math_tr1f',
547 'boost_math_tr1l',
548 'boost_math_c99',
549 'boost_math_c99f',
550 'boost_math_c99l',
551 'boost_math_tr1',
552 'boost_math_tr1f',
553 'boost_math_tr1l',
554 'boost_math_c99',
555 'boost_math_c99f',
556 'boost_math_c99l',
557 'boost_mpi',
558 'boost_program_options',
559 'boost_random',
560 'boost_regex',
561 'boost_serialization',
562 'boost_wserialization',
563 'boost_signals',
564 'boost_stacktrace_noop',
565 'boost_stacktrace_backtrace',
566 'boost_stacktrace_addr2line',
567 'boost_stacktrace_basic',
568 'boost_stacktrace_windbg',
569 'boost_stacktrace_windbg_cached',
570 'boost_system',
571 'boost_prg_exec_monitor',
572 'boost_test_exec_monitor',
573 'boost_unit_test_framework',
574 'boost_thread',
575 'boost_timer',
576 'boost_type_erasure',
577 'boost_wave'
578 ]
579
580 BOOST_DIRS = [
581 'lambda',
582 'optional',
583 'convert',
584 'system',
585 'uuid',
586 'archive',
587 'align',
588 'timer',
589 'chrono',
590 'gil',
591 'logic',
592 'signals',
593 'predef',
594 'tr1',
595 'multi_index',
596 'property_map',
597 'multi_array',
598 'context',
599 'random',
600 'endian',
601 'circular_buffer',
602 'proto',
603 'assign',
604 'format',
605 'math',
606 'phoenix',
607 'graph',
608 'locale',
609 'mpl',
610 'pool',
611 'unordered',
612 'core',
613 'exception',
614 'ptr_container',
615 'flyweight',
616 'range',
617 'typeof',
618 'thread',
619 'move',
620 'spirit',
621 'dll',
622 'compute',
623 'serialization',
624 'ratio',
625 'msm',
626 'config',
627 'metaparse',
628 'coroutine2',
629 'qvm',
630 'program_options',
631 'concept',
632 'detail',
633 'hana',
634 'concept_check',
635 'compatibility',
636 'variant',
637 'type_erasure',
638 'mpi',
639 'test',
640 'fusion',
641 'log',
642 'sort',
643 'local_function',
644 'units',
645 'functional',
646 'preprocessor',
647 'integer',
648 'container',
649 'polygon',
650 'interprocess',
651 'numeric',
652 'iterator',
653 'wave',
654 'lexical_cast',
655 'multiprecision',
656 'utility',
657 'tti',
658 'asio',
659 'dynamic_bitset',
660 'algorithm',
661 'xpressive',
662 'bimap',
663 'signals2',
664 'type_traits',
665 'regex',
666 'statechart',
667 'parameter',
668 'icl',
669 'python',
670 'lockfree',
671 'intrusive',
672 'io',
673 'pending',
674 'geometry',
675 'tuple',
676 'iostreams',
677 'heap',
678 'atomic',
679 'filesystem',
680 'smart_ptr',
681 'function',
682 'fiber',
683 'type_index',
684 'accumulators',
685 'function_types',
686 'coroutine',
687 'vmd',
688 'date_time',
689 'property_tree',
690 'bind'
691 ]
692
[end of mesonbuild/dependencies/boost.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
31bb6eae455b547a563d3130c483bbb7ace96ae0
|
regression: nuisance warning for g++ clang++
The solution is probably similar to #6050 and #6053, as this problem was also introduced in b1b8a7a and the symptom is almost identical, but for g++ instead of gfortran.
A distinct symptom from #6050 is that I do see this on g++ 9.2.0 MSYS2 on Windows, but I don't see this on Ubuntu 18.04 g++ 7.4.0.
Also, with clang++ 9.0.0 on Windows I get
```
WARNING: No include directory found parsing "clang++ -xc++ -E -v -" output
```
on any trivial Meson C++ project,
meson.build
```meson
project('blah', 'cpp')
executable('foo', 'foo.cxx')
```
foo.cxx
```c++
int main(void) { return 0; }
```
on
```sh
meson setup build
```
a lot of nuisance warnings are printed like:
```
WARNING: No include directory found parsing "c++ -xc++ -E -v -" output
```
|
The warning should be correct in this case, because we are trying to get the include directories from a gnu-like C++ compiler for C++ code.
I also can't reproduce this warning on my machine.
OK I didn't realize the distinctiveness of this case. At least, it will be handy to refer to if future users notice it.
Also, what is the output for `c++ -xc++ -E -v -` on the system in question? Not getting the include directories can lead to tons of problems if a dependency adds system include directories.
```posh
c++ -xc++ -E -v -
```
returns:
```
Using built-in specs.
COLLECT_GCC=C:\msys64\mingw64\bin\c++.exe
Target: x86_64-w64-mingw32
Configured with: ../gcc-9.2.0/configure --prefix=/mingw64 --with-local-prefix=/mingw64/local --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --with-native-system-header-dir=/mingw64/x86_64-w64-mingw32/include --libexecdir=/mingw64/lib --enable-bootstrap --with-arch=x86-64 --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++ --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts=yes --enable-libstdcxx-time=yes --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-isl-version-check --enable-lto --enable-libgomp --disable-multilib --enable-checking=release --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --enable-plugin --with-libiconv --with-system-zlib --with-gmp=/mingw64 --with-mpfr=/mingw64 --with-mpc=/mingw64 --with-isl=/mingw64 --with-pkgversion='Rev2, Built by MSYS2 project' --with-bugurl=https://sourceforge.net/projects/msys2 --with-gnu-as --with-gnu-ld
Thread model: posix
gcc version 9.2.0 (Rev2, Built by MSYS2 project)
COLLECT_GCC_OPTIONS='-E' '-v' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/cc1plus.exe -E -quiet -v -iprefix C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/ -D_REENTRANT - -mtune=generic -march=x86-64
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0/x86_64-w64-mingw32"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0/backward"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/include"
ignoring nonexistent directory "C:/building/msys64/mingw64/include"
ignoring nonexistent directory "/mingw64/include"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/include-fixed"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/include"
ignoring nonexistent directory "C:/building/msys64/mingw64/x86_64-w64-mingw32/include"
#include "..." search starts here:
#include <...> search starts here:
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0/x86_64-w64-mingw32
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include/c++/9.2.0/backward
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/include
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/include-fixed
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/include
End of search list.
```
```posh
cc -xc -E -v -
```
```
Using built-in specs.
COLLECT_GCC=C:\msys64\mingw64\bin\cc.exe
Target: x86_64-w64-mingw32
Configured with: ../gcc-9.2.0/configure --prefix=/mingw64 --with-local-prefix=/mingw64/local --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --with-native-system-header-dir=/mingw64/x86_64-w64-mingw32/include --libexecdir=/mingw64/lib --enable-bootstrap --with-arch=x86-64 --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++ --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts=yes --enable-libstdcxx-time=yes --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-isl-version-check --enable-lto --enable-libgomp --disable-multilib --enable-checking=release --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --enable-plugin --with-libiconv --with-system-zlib --with-gmp=/mingw64 --with-mpfr=/mingw64 --with-mpc=/mingw64 --with-isl=/mingw64 --with-pkgversion='Rev2, Built by MSYS2 project' --with-bugurl=https://sourceforge.net/projects/msys2 --with-gnu-as --with-gnu-ld
Thread model: posix
gcc version 9.2.0 (Rev2, Built by MSYS2 project)
COLLECT_GCC_OPTIONS='-E' '-v' '-mtune=generic' '-march=x86-64'
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/cc1.exe -E -quiet -v -iprefix C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/ -D_REENTRANT - -mtune=generic -march=x86-64
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/include"
ignoring nonexistent directory "C:/building/msys64/mingw64/include"
ignoring nonexistent directory "/mingw64/include"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/include-fixed"
ignoring duplicate directory "C:/msys64/mingw64/lib/gcc/../../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/include"
ignoring nonexistent directory "C:/building/msys64/mingw64/x86_64-w64-mingw32/include"
#include "..." search starts here:
#include <...> search starts here:
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/include
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../include
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/include-fixed
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/include
End of search list.
```
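For context, the include-directory discovery being discussed amounts to running the compiler on an empty translation unit with `-E -v` and scanning the verbose listing between the `search starts here:` and `End of search list.` markers. The sketch below is a minimal, standalone illustration of that parsing, not Meson's actual implementation (the helper name and its defaults are invented here). Like the patch below, it plays it safe by merging stderr into stdout and stripping each line before comparing, since the MSYS2 output carries `\r\n` line endings and each path is printed with a leading space.

```python
import subprocess

def default_include_dirs(compiler=("c++",), lang="c++"):
    """Minimal sketch: parse `<compiler> -x<lang> -E -v -` for default include dirs."""
    cmd = list(compiler) + ["-x" + lang, "-E", "-v", "-"]
    proc = subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # capture the listing wherever the toolchain writes it
        encoding="utf-8",
        errors="replace",
    )
    paths = []
    collecting = False
    for raw in proc.stdout.splitlines():
        line = raw.strip()  # drop the leading space, '\r', etc.
        if line == "#include <...> search starts here:":
            collecting = True  # the '#include "..."' section is ignored in this sketch
        elif line == "End of search list.":
            break
        elif collecting and line:
            paths.append(line)
    return paths

if __name__ == "__main__":
    print(default_include_dirs())
```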
|
2019-12-14T12:56:00Z
|
<patch>
diff --git a/mesonbuild/compilers/mixins/gnu.py b/mesonbuild/compilers/mixins/gnu.py
--- a/mesonbuild/compilers/mixins/gnu.py
+++ b/mesonbuild/compilers/mixins/gnu.py
@@ -100,14 +100,15 @@ def gnulike_default_include_dirs(compiler: typing.Tuple[str], lang: str) -> typi
p = subprocess.Popen(
cmd,
stdin=subprocess.DEVNULL,
- stderr=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
stdout=subprocess.PIPE,
env=env
)
- stderr = p.stderr.read().decode('utf-8', errors='replace')
+ stdout = p.stdout.read().decode('utf-8', errors='replace')
parse_state = 0
paths = []
- for line in stderr.split('\n'):
+ for line in stdout.split('\n'):
+ line = line.strip(' \n\r\t')
if parse_state == 0:
if line == '#include "..." search starts here:':
parse_state = 1
@@ -115,14 +116,16 @@ def gnulike_default_include_dirs(compiler: typing.Tuple[str], lang: str) -> typi
if line == '#include <...> search starts here:':
parse_state = 2
else:
- paths.append(line[1:])
+ paths.append(line)
elif parse_state == 2:
if line == 'End of search list.':
break
else:
- paths.append(line[1:])
+ paths.append(line)
if not paths:
mlog.warning('No include directory found parsing "{cmd}" output'.format(cmd=" ".join(cmd)))
+ # Append a normalized copy of paths to make path lookup easier
+ paths += [os.path.normpath(x) for x in paths]
return paths
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-8550
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: add level kwarg for Series.any/.all
It appears that Series' `s.any` and `s.all` methods are missing the `level` kwarg, unlike their statistical counterparts like `s.sum`:
``` python
In [4]: s = pd.Series([0,1,2], index=[0,0,1])
In [5]: s.sum(level=0)
Out[5]:
0 1
1 2
dtype: int64
In [6]: s.prod(level=0)
Out[6]:
0 0
1 2
dtype: int64
In [7]: s.any(level=0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-1d8c43752bc9> in <module>()
----> 1 s.any(level=0)
/home/immerrr/sources/pandas/pandas/core/series.pyc in f(self, *args, **kwargs)
74 @Appender(func.__doc__)
75 def f(self, *args, **kwargs):
---> 76 result = func(self.values, *args, **kwargs)
77 if isinstance(result, (pa.Array, Series)) and result.ndim == 0:
78 # return NumPy type
TypeError: _any() got an unexpected keyword argument 'level'
In [8]: s.all(level=0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-bca0491001a6> in <module>()
----> 1 s.all(level=0)
/home/immerrr/sources/pandas/pandas/core/series.pyc in f(self, *args, **kwargs)
74 @Appender(func.__doc__)
75 def f(self, *args, **kwargs):
---> 76 result = func(self.values, *args, **kwargs)
77 if isinstance(result, (pa.Array, Series)) and result.ndim == 0:
78 # return NumPy type
TypeError: _all() got an unexpected keyword argument 'level'
```
DataFrames have these, and I think Series should too. There may be more reduction methods I'm not aware of that are also missing them...
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.7.0 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](https://github.com/pydata/numexpr)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the PyData mailing list / Google group:
222
223 https://groups.google.com/forum/#!forum/pydata
224
[end of README.md]
[start of pandas/compat/__init__.py]
1 """
2 compat
3 ======
4
5 Cross-compatible functions for Python 2 and 3.
6
7 Key items to import for 2/3 compatible code:
8 * iterators: range(), map(), zip(), filter(), reduce()
9 * lists: lrange(), lmap(), lzip(), lfilter()
10 * unicode: u() [u"" is a syntax error in Python 3.0-3.2]
11 * longs: long (int in Python 3)
12 * callable
13 * iterable method compatibility: iteritems, iterkeys, itervalues
14 * Uses the original method if available, otherwise uses items, keys, values.
15 * types:
16 * text_type: unicode in Python 2, str in Python 3
17 * binary_type: str in Python 2, bytes in Python 3
18 * string_types: basestring in Python 2, str in Python 3
19 * bind_method: binds functions to classes
20 * add_metaclass(metaclass) - class decorator that recreates class with with the
21 given metaclass instead (and avoids intermediary class creation)
22
23 Python 2.6 compatibility:
24 * OrderedDict
25 * Counter
26
27 Other items:
28 * OrderedDefaultDict
29 """
30 # pylint disable=W0611
31 import functools
32 import itertools
33 from distutils.version import LooseVersion
34 from itertools import product
35 import sys
36 import types
37
38 PY3 = (sys.version_info[0] >= 3)
39 PY3_2 = sys.version_info[:2] == (3, 2)
40
41 try:
42 import __builtin__ as builtins
43 # not writeable when instantiated with string, doesn't handle unicode well
44 from cStringIO import StringIO as cStringIO
45 # always writeable
46 from StringIO import StringIO
47 BytesIO = StringIO
48 import cPickle
49 import httplib
50 except ImportError:
51 import builtins
52 from io import StringIO, BytesIO
53 cStringIO = StringIO
54 import pickle as cPickle
55 import http.client as httplib
56
57 from pandas.compat.chainmap import DeepChainMap
58
59
60 if PY3:
61 def isidentifier(s):
62 return s.isidentifier()
63
64 def str_to_bytes(s, encoding=None):
65 return s.encode(encoding or 'ascii')
66
67 def bytes_to_str(b, encoding=None):
68 return b.decode(encoding or 'utf-8')
69
70 # have to explicitly put builtins into the namespace
71 range = range
72 map = map
73 zip = zip
74 filter = filter
75 reduce = functools.reduce
76 long = int
77 unichr = chr
78
79 # list-producing versions of the major Python iterating functions
80 def lrange(*args, **kwargs):
81 return list(range(*args, **kwargs))
82
83 def lzip(*args, **kwargs):
84 return list(zip(*args, **kwargs))
85
86 def lmap(*args, **kwargs):
87 return list(map(*args, **kwargs))
88
89 def lfilter(*args, **kwargs):
90 return list(filter(*args, **kwargs))
91 else:
92 # Python 2
93 import re
94 _name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
95
96 def isidentifier(s, dotted=False):
97 return bool(_name_re.match(s))
98
99 def str_to_bytes(s, encoding='ascii'):
100 return s
101
102 def bytes_to_str(b, encoding='ascii'):
103 return b
104
105 # import iterator versions of these functions
106 range = xrange
107 zip = itertools.izip
108 filter = itertools.ifilter
109 map = itertools.imap
110 reduce = reduce
111 long = long
112 unichr = unichr
113
114 # Python 2-builtin ranges produce lists
115 lrange = builtins.range
116 lzip = builtins.zip
117 lmap = builtins.map
118 lfilter = builtins.filter
119
120
121 def iteritems(obj, **kwargs):
122 """replacement for six's iteritems for Python2/3 compat
123 uses 'iteritems' if available and otherwise uses 'items'.
124
125 Passes kwargs to method.
126 """
127 func = getattr(obj, "iteritems", None)
128 if not func:
129 func = obj.items
130 return func(**kwargs)
131
132
133 def iterkeys(obj, **kwargs):
134 func = getattr(obj, "iterkeys", None)
135 if not func:
136 func = obj.keys
137 return func(**kwargs)
138
139
140 def itervalues(obj, **kwargs):
141 func = getattr(obj, "itervalues", None)
142 if not func:
143 func = obj.values
144 return func(**kwargs)
145
146
147 def bind_method(cls, name, func):
148 """Bind a method to class, python 2 and python 3 compatible.
149
150 Parameters
151 ----------
152
153 cls : type
154 class to receive bound method
155 name : basestring
156 name of method on class instance
157 func : function
158 function to be bound as method
159
160
161 Returns
162 -------
163 None
164 """
165 # only python 2 has bound/unbound method issue
166 if not PY3:
167 setattr(cls, name, types.MethodType(func, None, cls))
168 else:
169 setattr(cls, name, func)
170 # ----------------------------------------------------------------------------
171 # functions largely based / taken from the six module
172
173 # Much of the code in this module comes from Benjamin Peterson's six library.
174 # The license for this library can be found in LICENSES/SIX and the code can be
175 # found at https://bitbucket.org/gutworth/six
176
177 if PY3:
178 string_types = str,
179 integer_types = int,
180 class_types = type,
181 text_type = str
182 binary_type = bytes
183
184 def u(s):
185 return s
186
187 def u_safe(s):
188 return s
189 else:
190 string_types = basestring,
191 integer_types = (int, long)
192 class_types = (type, types.ClassType)
193 text_type = unicode
194 binary_type = str
195
196 def u(s):
197 return unicode(s, "unicode_escape")
198
199 def u_safe(s):
200 try:
201 return unicode(s, "unicode_escape")
202 except:
203 return s
204
205
206 string_and_binary_types = string_types + (binary_type,)
207
208
209 try:
210 # callable reintroduced in later versions of Python
211 callable = callable
212 except NameError:
213 def callable(obj):
214 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
215
216
217 def add_metaclass(metaclass):
218 """Class decorator for creating a class with a metaclass."""
219 def wrapper(cls):
220 orig_vars = cls.__dict__.copy()
221 orig_vars.pop('__dict__', None)
222 orig_vars.pop('__weakref__', None)
223 for slots_var in orig_vars.get('__slots__', ()):
224 orig_vars.pop(slots_var)
225 return metaclass(cls.__name__, cls.__bases__, orig_vars)
226 return wrapper
227
228
229 # ----------------------------------------------------------------------------
230 # Python 2.6 compatibility shims
231 #
232
233 # OrderedDict Shim from Raymond Hettinger, python core dev
234 # http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/
235 # here to support versions before 2.6
236 if not PY3:
237 # don't need this except in 2.6
238 try:
239 from thread import get_ident as _get_ident
240 except ImportError:
241 from dummy_thread import get_ident as _get_ident
242
243 try:
244 from _abcoll import KeysView, ValuesView, ItemsView
245 except ImportError:
246 pass
247
248
249 class _OrderedDict(dict):
250
251 """Dictionary that remembers insertion order"""
252 # An inherited dict maps keys to values.
253 # The inherited dict provides __getitem__, __len__, __contains__, and get.
254 # The remaining methods are order-aware.
255 # Big-O running times for all methods are the same as for regular
256 # dictionaries.
257
258 # The internal self.__map dictionary maps keys to links in a doubly linked
259 # list. The circular doubly linked list starts and ends with a sentinel
260 # element. The sentinel element never gets deleted (this simplifies the
261 # algorithm). Each link is stored as a list of length three: [PREV, NEXT,
262 # KEY].
263
264 def __init__(self, *args, **kwds):
265 """Initialize an ordered dictionary. Signature is the same as for
266 regular dictionaries, but keyword arguments are not recommended
267 because their insertion order is arbitrary.
268 """
269 if len(args) > 1:
270 raise TypeError('expected at most 1 arguments, got %d' % len(args))
271 try:
272 self.__root
273 except AttributeError:
274 self.__root = root = [] # sentinel node
275 root[:] = [root, root, None]
276 self.__map = {}
277 self.__update(*args, **kwds)
278
279 def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
280 """od.__setitem__(i, y) <==> od[i]=y"""
281 # Setting a new item creates a new link which goes at the end of the
282 # linked list, and the inherited dictionary is updated with the new
283 # key/value pair.
284 if key not in self:
285 root = self.__root
286 last = root[0]
287 last[1] = root[0] = self.__map[key] = [last, root, key]
288 dict_setitem(self, key, value)
289
290 def __delitem__(self, key, dict_delitem=dict.__delitem__):
291 """od.__delitem__(y) <==> del od[y]"""
292 # Deleting an existing item uses self.__map to find the link which is
293 # then removed by updating the links in the predecessor and successor
294 # nodes.
295 dict_delitem(self, key)
296 link_prev, link_next, key = self.__map.pop(key)
297 link_prev[1] = link_next
298 link_next[0] = link_prev
299
300 def __iter__(self):
301 """od.__iter__() <==> iter(od)"""
302 root = self.__root
303 curr = root[1]
304 while curr is not root:
305 yield curr[2]
306 curr = curr[1]
307
308 def __reversed__(self):
309 """od.__reversed__() <==> reversed(od)"""
310 root = self.__root
311 curr = root[0]
312 while curr is not root:
313 yield curr[2]
314 curr = curr[0]
315
316 def clear(self):
317 """od.clear() -> None. Remove all items from od."""
318 try:
319 for node in itervalues(self.__map):
320 del node[:]
321 root = self.__root
322 root[:] = [root, root, None]
323 self.__map.clear()
324 except AttributeError:
325 pass
326 dict.clear(self)
327
328 def popitem(self, last=True):
329 """od.popitem() -> (k, v), return and remove a (key, value) pair.
330
331 Pairs are returned in LIFO order if last is true or FIFO order if
332 false.
333 """
334 if not self:
335 raise KeyError('dictionary is empty')
336 root = self.__root
337 if last:
338 link = root[0]
339 link_prev = link[0]
340 link_prev[1] = root
341 root[0] = link_prev
342 else:
343 link = root[1]
344 link_next = link[1]
345 root[1] = link_next
346 link_next[0] = root
347 key = link[2]
348 del self.__map[key]
349 value = dict.pop(self, key)
350 return key, value
351
352 # -- the following methods do not depend on the internal structure --
353
354 def keys(self):
355 """od.keys() -> list of keys in od"""
356 return list(self)
357
358 def values(self):
359 """od.values() -> list of values in od"""
360 return [self[key] for key in self]
361
362 def items(self):
363 """od.items() -> list of (key, value) pairs in od"""
364 return [(key, self[key]) for key in self]
365
366 def iterkeys(self):
367 """od.iterkeys() -> an iterator over the keys in od"""
368 return iter(self)
369
370 def itervalues(self):
371 """od.itervalues -> an iterator over the values in od"""
372 for k in self:
373 yield self[k]
374
375 def iteritems(self):
376 """od.iteritems -> an iterator over the (key, value) items in od"""
377 for k in self:
378 yield (k, self[k])
379
380 def update(*args, **kwds):
381 """od.update(E, **F) -> None. Update od from dict/iterable E and F.
382
383 If E is a dict instance, does: for k in E: od[k] = E[k]
384 If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
385 Or if E is an iterable of items, does:for k, v in E: od[k] = v
386 In either case, this is followed by: for k, v in F.items(): od[k] = v
387 """
388 if len(args) > 2:
389 raise TypeError('update() takes at most 2 positional '
390 'arguments (%d given)' % (len(args),))
391 elif not args:
392 raise TypeError('update() takes at least 1 argument (0 given)')
393 self = args[0]
394 # Make progressively weaker assumptions about "other"
395 other = ()
396 if len(args) == 2:
397 other = args[1]
398 if isinstance(other, dict):
399 for key in other:
400 self[key] = other[key]
401 elif hasattr(other, 'keys'):
402 for key in other.keys():
403 self[key] = other[key]
404 else:
405 for key, value in other:
406 self[key] = value
407 for key, value in kwds.items():
408 self[key] = value
409 # let subclasses override update without breaking __init__
410 __update = update
411
412 __marker = object()
413
414 def pop(self, key, default=__marker):
415 """od.pop(k[,d]) -> v, remove specified key and return the
416 corresponding value. If key is not found, d is returned if given,
417 otherwise KeyError is raised.
418 """
419 if key in self:
420 result = self[key]
421 del self[key]
422 return result
423 if default is self.__marker:
424 raise KeyError(key)
425 return default
426
427 def setdefault(self, key, default=None):
428 """od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od
429 """
430 if key in self:
431 return self[key]
432 self[key] = default
433 return default
434
435 def __repr__(self, _repr_running={}):
436 """od.__repr__() <==> repr(od)"""
437 call_key = id(self), _get_ident()
438 if call_key in _repr_running:
439 return '...'
440 _repr_running[call_key] = 1
441 try:
442 if not self:
443 return '%s()' % (self.__class__.__name__,)
444 return '%s(%r)' % (self.__class__.__name__, list(self.items()))
445 finally:
446 del _repr_running[call_key]
447
448 def __reduce__(self):
449 """Return state information for pickling"""
450 items = [[k, self[k]] for k in self]
451 inst_dict = vars(self).copy()
452 for k in vars(OrderedDict()):
453 inst_dict.pop(k, None)
454 if inst_dict:
455 return (self.__class__, (items,), inst_dict)
456 return self.__class__, (items,)
457
458 def copy(self):
459 """od.copy() -> a shallow copy of od"""
460 return self.__class__(self)
461
462 @classmethod
463 def fromkeys(cls, iterable, value=None):
464 """OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and
465 values equal to v (which defaults to None).
466 """
467 d = cls()
468 for key in iterable:
469 d[key] = value
470 return d
471
472 def __eq__(self, other):
473 """od.__eq__(y) <==> od==y. Comparison to another OD is
474 order-sensitive while comparison to a regular mapping is
475 order-insensitive.
476 """
477 if isinstance(other, OrderedDict):
478 return (len(self) == len(other) and
479 list(self.items()) == list(other.items()))
480 return dict.__eq__(self, other)
481
482 def __ne__(self, other):
483 return not self == other
484
485 # -- the following methods are only used in Python 2.7 --
486
487 def viewkeys(self):
488 """od.viewkeys() -> a set-like object providing a view on od's keys"""
489 return KeysView(self)
490
491 def viewvalues(self):
492 """od.viewvalues() -> an object providing a view on od's values"""
493 return ValuesView(self)
494
495 def viewitems(self):
496 """od.viewitems() -> a set-like object providing a view on od's items
497 """
498 return ItemsView(self)
499
500
501 # {{{ http://code.activestate.com/recipes/576611/ (r11)
502
503 try:
504 from operator import itemgetter
505 from heapq import nlargest
506 except ImportError:
507 pass
508
509
510 class _Counter(dict):
511
512 """Dict subclass for counting hashable objects. Sometimes called a bag
513 or multiset. Elements are stored as dictionary keys and their counts
514 are stored as dictionary values.
515
516 >>> Counter('zyzygy')
517 Counter({'y': 3, 'z': 2, 'g': 1})
518
519 """
520
521 def __init__(self, iterable=None, **kwds):
522 """Create a new, empty Counter object. And if given, count elements
523 from an input iterable. Or, initialize the count from another mapping
524 of elements to their counts.
525
526 >>> c = Counter() # a new, empty counter
527 >>> c = Counter('gallahad') # a new counter from an iterable
528 >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
529 >>> c = Counter(a=4, b=2) # a new counter from keyword args
530
531 """
532 self.update(iterable, **kwds)
533
534 def __missing__(self, key):
535 return 0
536
537 def most_common(self, n=None):
538 """List the n most common elements and their counts from the most
539 common to the least. If n is None, then list all element counts.
540
541 >>> Counter('abracadabra').most_common(3)
542 [('a', 5), ('r', 2), ('b', 2)]
543
544 """
545 if n is None:
546 return sorted(iteritems(self), key=itemgetter(1), reverse=True)
547 return nlargest(n, iteritems(self), key=itemgetter(1))
548
549 def elements(self):
550 """Iterator over elements repeating each as many times as its count.
551
552 >>> c = Counter('ABCABC')
553 >>> sorted(c.elements())
554 ['A', 'A', 'B', 'B', 'C', 'C']
555
556 If an element's count has been set to zero or is a negative number,
557 elements() will ignore it.
558
559 """
560 for elem, count in iteritems(self):
561 for _ in range(count):
562 yield elem
563
564 # Override dict methods where the meaning changes for Counter objects.
565
566 @classmethod
567 def fromkeys(cls, iterable, v=None):
568 raise NotImplementedError(
569 'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
570
571 def update(self, iterable=None, **kwds):
572 """Like dict.update() but add counts instead of replacing them.
573
574 Source can be an iterable, a dictionary, or another Counter instance.
575
576 >>> c = Counter('which')
577 >>> c.update('witch') # add elements from another iterable
578 >>> d = Counter('watch')
579 >>> c.update(d) # add elements from another counter
580 >>> c['h'] # four 'h' in which, witch, and watch
581 4
582
583 """
584 if iterable is not None:
585 if hasattr(iterable, 'iteritems'):
586 if self:
587 self_get = self.get
588 for elem, count in iteritems(iterable):
589 self[elem] = self_get(elem, 0) + count
590 else:
591 dict.update(
592 self, iterable) # fast path when counter is empty
593 else:
594 self_get = self.get
595 for elem in iterable:
596 self[elem] = self_get(elem, 0) + 1
597 if kwds:
598 self.update(kwds)
599
600 def copy(self):
601 """Like dict.copy() but returns a Counter instance instead of a dict.
602 """
603 return Counter(self)
604
605 def __delitem__(self, elem):
606 """Like dict.__delitem__() but does not raise KeyError for missing
607 values.
608 """
609 if elem in self:
610 dict.__delitem__(self, elem)
611
612 def __repr__(self):
613 if not self:
614 return '%s()' % self.__class__.__name__
615 items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
616 return '%s({%s})' % (self.__class__.__name__, items)
617
618 # Multiset-style mathematical operations discussed in:
619 # Knuth TAOCP Volume II section 4.6.3 exercise 19
620 # and at http://en.wikipedia.org/wiki/Multiset
621 #
622 # Outputs guaranteed to only include positive counts.
623 #
624 # To strip negative and zero counts, add-in an empty counter:
625 # c += Counter()
626
627 def __add__(self, other):
628 """Add counts from two counters.
629
630 >>> Counter('abbb') + Counter('bcc')
631 Counter({'b': 4, 'c': 2, 'a': 1})
632
633 """
634 if not isinstance(other, Counter):
635 return NotImplemented
636 result = Counter()
637 for elem in set(self) | set(other):
638 newcount = self[elem] + other[elem]
639 if newcount > 0:
640 result[elem] = newcount
641 return result
642
643 def __sub__(self, other):
644 """Subtract count, but keep only results with positive counts.
645
646 >>> Counter('abbbc') - Counter('bccd')
647 Counter({'b': 2, 'a': 1})
648
649 """
650 if not isinstance(other, Counter):
651 return NotImplemented
652 result = Counter()
653 for elem in set(self) | set(other):
654 newcount = self[elem] - other[elem]
655 if newcount > 0:
656 result[elem] = newcount
657 return result
658
659 def __or__(self, other):
660 """Union is the maximum of value in either of the input counters.
661
662 >>> Counter('abbb') | Counter('bcc')
663 Counter({'b': 3, 'c': 2, 'a': 1})
664
665 """
666 if not isinstance(other, Counter):
667 return NotImplemented
668 _max = max
669 result = Counter()
670 for elem in set(self) | set(other):
671 newcount = _max(self[elem], other[elem])
672 if newcount > 0:
673 result[elem] = newcount
674 return result
675
676 def __and__(self, other):
677 """Intersection is the minimum of corresponding counts.
678
679 >>> Counter('abbb') & Counter('bcc')
680 Counter({'b': 1})
681
682 """
683 if not isinstance(other, Counter):
684 return NotImplemented
685 _min = min
686 result = Counter()
687 if len(self) < len(other):
688 self, other = other, self
689 for elem in filter(self.__contains__, other):
690 newcount = _min(self[elem], other[elem])
691 if newcount > 0:
692 result[elem] = newcount
693 return result
694
695 if sys.version_info[:2] < (2, 7):
696 OrderedDict = _OrderedDict
697 Counter = _Counter
698 else:
699 from collections import OrderedDict, Counter
700
701 if PY3:
702 def raise_with_traceback(exc, traceback=Ellipsis):
703 if traceback == Ellipsis:
704 _, _, traceback = sys.exc_info()
705 raise exc.with_traceback(traceback)
706 else:
707 # this version of raise is a syntax error in Python 3
708 exec("""
709 def raise_with_traceback(exc, traceback=Ellipsis):
710 if traceback == Ellipsis:
711 _, _, traceback = sys.exc_info()
712 raise exc, None, traceback
713 """)
714
715 raise_with_traceback.__doc__ = """Raise exception with existing traceback.
716 If traceback is not passed, uses sys.exc_info() to get traceback."""
717
718
719 # http://stackoverflow.com/questions/4126348
720 # Thanks to @martineau at SO
721
722 from dateutil import parser as _date_parser
723 import dateutil
724 if LooseVersion(dateutil.__version__) < '2.0':
725 @functools.wraps(_date_parser.parse)
726 def parse_date(timestr, *args, **kwargs):
727 timestr = bytes(timestr)
728 return _date_parser.parse(timestr, *args, **kwargs)
729 else:
730 parse_date = _date_parser.parse
731
732
733 class OrderedDefaultdict(OrderedDict):
734
735 def __init__(self, *args, **kwargs):
736 newdefault = None
737 newargs = ()
738 if args:
739 newdefault = args[0]
740 if not (newdefault is None or callable(newdefault)):
741 raise TypeError('first argument must be callable or None')
742 newargs = args[1:]
743 self.default_factory = newdefault
744 super(self.__class__, self).__init__(*newargs, **kwargs)
745
746 def __missing__(self, key):
747 if self.default_factory is None:
748 raise KeyError(key)
749 self[key] = value = self.default_factory()
750 return value
751
752 def __reduce__(self): # optional, for pickle support
753 args = self.default_factory if self.default_factory else tuple()
754 return type(self), args, None, None, list(self.items())
755
[end of pandas/compat/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
a617b0b60bc5b1a768b39b8ca4488b6399417ad4
|
API: add level kwarg for Series.any/.all
It appears that Series' `s.any` and `s.all` methods are missing the `level` kwarg, unlike their statistical counterparts like `s.sum`:
``` python
In [4]: s = pd.Series([0,1,2], index=[0,0,1])
In [5]: s.sum(level=0)
Out[5]:
0 1
1 2
dtype: int64
In [6]: s.prod(level=0)
Out[6]:
0 0
1 2
dtype: int64
In [7]: s.any(level=0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-1d8c43752bc9> in <module>()
----> 1 s.any(level=0)
/home/immerrr/sources/pandas/pandas/core/series.pyc in f(self, *args, **kwargs)
74 @Appender(func.__doc__)
75 def f(self, *args, **kwargs):
---> 76 result = func(self.values, *args, **kwargs)
77 if isinstance(result, (pa.Array, Series)) and result.ndim == 0:
78 # return NumPy type
TypeError: _any() got an unexpected keyword argument 'level'
In [8]: s.all(level=0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-bca0491001a6> in <module>()
----> 1 s.all(level=0)
/home/immerrr/sources/pandas/pandas/core/series.pyc in f(self, *args, **kwargs)
74 @Appender(func.__doc__)
75 def f(self, *args, **kwargs):
---> 76 result = func(self.values, *args, **kwargs)
77 if isinstance(result, (pa.Array, Series)) and result.ndim == 0:
78 # return NumPy type
TypeError: _all() got an unexpected keyword argument 'level'
```
DataFrames have these, and I think Series should too. There may be more reduction methods I'm not aware of that are also missing them...
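To make the request concrete: the `groupby(level=...)` form below already works and produces exactly the per-level reduction being asked for, while the direct `level=` call is the API this issue proposes (and which the patch below adds). This is just an illustrative snippet, not code from the test suite.

```python
import pandas as pd

s = pd.Series([False, True, False], index=[0, 0, 1])

# Workaround with existing functionality: reduce per index level via groupby.
print(s.groupby(level=0).any())   # index label 0 -> True, label 1 -> False

# Proposed API, mirroring s.sum(level=0) / s.prod(level=0):
# s.any(level=0)
```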
|
Hi, I'd like to take a crack at this one.
It looks like most aggregation functions are implemented in NDFrame and accept standardized arguments. All and any, however, are implemented separately in Series (via IndexOpsMixin) and DataFrame. The Series implementations forward to the corresponding numpy implementations, and so accept a different set of arguments.
Currently most of the aggregation functions accept something like:
axis, skipna, level, numeric_only, *kwargs
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sum.html
(there are minor differences between functions)
While any/all accept:
axis, out, keepdims
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.any.html
The request in this ticket is for any/all to accept the level argument. It seems like we might want to implement any/all the same way the other aggregation functions are implemented in NDFrame rather than as thin wrappers around the numpy implementations. Would it make sense to:
1) Make any/all support the union of arguments accepted by numpy and the standard aggregation functions.
or
2) Deprecate support for the numpy only arguments, and only support the standard aggregation arguments. (This means deprecating support for ‘out’ and ‘keepdims’.) There seems to be some precedent for this option, as for example Series.sum does not accept 'out' and 'keepdims' while ndarray.sum does. In a very cursory scan I didn’t see any cases where pandas internally depends on using the out and keepdims arguments on Series’ any/all.
@staple The fact that `any/all` just forward to the numpy accessors is purely historical. You can simply write them _like_ `sum/mean/etc.` (e.g. use a function generator) and accept the `level` argument. No need to deprecate anything (just accept `**kwargs` for compat). These can also be different from the numpy signatures (and should be).
`skipna` doesn't make any sense (these are boolean arrays after all).
I'm also not sure if `numeric_only` makes sense for them.
I noticed that in DataFrame's any / all, there is an implementation for parameters `skipna` and `numeric_only`. My guess is it will make sense to generalize this implementation into NDFrame, preserving the `skipna` and `numeric_only` arguments and thereby making them available for Series as well.
Additionally, we can continue to support Series' existing special arguments from ndarray.any/all via kwargs, which will allow named arguments to be handled the way they were before. But unnamed arguments, for example, `series.all(0, type(True), g, True)` will not behave the same. This might not be a big deal, but I wanted to at least check to see if an api change like this requires any special actions.
@staple
OK, start by moving `DataFrame.any/all` to `core/generic.py`; this exposes them generally (and I think they will simply work), but you will need to define them the way `.sum/.mean` etc. are defined (e.g. in `make_stat_function`; it's possible you will need to define a new function, say `make_stat_bool_function`, to handle this slightly differently). Try to conform to how `any/all` work now.
Then, in `core/base.py` you can remove the `_unbox` stuff and define `any/all` so they ONLY apply to `Index` (FYI, probably no tests for this), as the `Series` definition will be taken from `core/generic.py` (it is defined after `core/base.py`).
So this definitely needs cleanup, as it was never done properly originally.
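For readers unfamiliar with the function-generator pattern referred to above, here is a bare-bones sketch of the idea: a factory builds the bound reduction once, and `level=` is handled by delegating to a per-level groupby. The names and the reduction body are simplified stand-ins and do not mirror pandas internals; the actual change is in the patch below.

```python
import numpy as np

def make_logical_function(name, func):
    """Build an ``any``/``all``-style reduction with ``level`` support."""
    def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs):
        # skipna / bool_only handling is omitted in this sketch.
        if level is not None:
            grouped = self.groupby(level=level)
            return getattr(grouped, name)()   # e.g. grouped.any()
        return func(np.asarray(self), axis=axis)
    logical_func.__name__ = name
    return logical_func

# Bound onto the class the same way the generated stat functions are, e.g.:
# Series.any = make_logical_function('any', np.any)
# Series.all = make_logical_function('all', np.all)
```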
|
2014-10-13T17:39:48Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.15.2.txt b/doc/source/whatsnew/v0.15.2.txt
--- a/doc/source/whatsnew/v0.15.2.txt
+++ b/doc/source/whatsnew/v0.15.2.txt
@@ -22,6 +22,20 @@ API changes
- Bug in concat of Series with ``category`` dtype which were coercing to ``object``. (:issue:`8641`)
+- ``Series.all`` and ``Series.any`` now support the ``level`` and ``skipna`` parameters. ``Series.all``, ``Series.any``, ``Index.all``, and ``Index.any`` no longer support the ``out`` and ``keepdims`` parameters, which existed for compatibility with ndarray. Various index types no longer support the ``all`` and ``any`` aggregation functions. (:issue:`8302`):
+
+ .. ipython:: python
+
+ s = pd.Series([False, True, False], index=[0, 0, 1])
+ s.any(level=0)
+
+- ``Panel`` now supports the ``all`` and ``any`` aggregation functions. (:issue:`8302`):
+
+ .. ipython:: python
+
+ p = pd.Panel(np.random.rand(2, 5, 4) > 0.1)
+ p.all()
+
.. _whatsnew_0152.enhancements:
Enhancements
@@ -44,4 +58,4 @@ Experimental
Bug Fixes
~~~~~~~~~
- Bug in ``groupby`` signatures that didn't include *args or **kwargs (:issue:`8733`).
-- ``io.data.Options`` now raises ``RemoteDataError`` when no expiry dates are available from Yahoo (:issue:`8761`).
\ No newline at end of file
+- ``io.data.Options`` now raises ``RemoteDataError`` when no expiry dates are available from Yahoo (:issue:`8761`).
diff --git a/pandas/core/base.py b/pandas/core/base.py
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -268,18 +268,6 @@ def __unicode__(self):
quote_strings=True)
return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)
-def _unbox(func):
- @Appender(func.__doc__)
- def f(self, *args, **kwargs):
- result = func(self.values, *args, **kwargs)
- from pandas.core.index import Index
- if isinstance(result, (np.ndarray, com.ABCSeries, Index)) and result.ndim == 0:
- # return NumPy type
- return result.dtype.type(result.item())
- else: # pragma: no cover
- return result
- f.__name__ = func.__name__
- return f
class IndexOpsMixin(object):
""" common ops mixin to support a unified inteface / docs for Series / Index """
@@ -528,12 +516,6 @@ def duplicated(self, take_last=False):
from pandas.core.index import Index
return Index(duplicated)
- #----------------------------------------------------------------------
- # unbox reductions
-
- all = _unbox(np.ndarray.all)
- any = _unbox(np.ndarray.any)
-
#----------------------------------------------------------------------
# abstracts
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4133,68 +4133,6 @@ def _count_level(self, level, axis=0, numeric_only=False):
else:
return result
- def any(self, axis=None, bool_only=None, skipna=True, level=None,
- **kwargs):
- """
- Return whether any element is True over requested axis.
- %(na_action)s
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
- level : int or level name, default None
- If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a DataFrame
- bool_only : boolean, default None
- Only include boolean data.
-
- Returns
- -------
- any : Series (or DataFrame if level specified)
- """
- if axis is None:
- axis = self._stat_axis_number
- if level is not None:
- return self._agg_by_level('any', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanany, 'any', axis=axis, skipna=skipna,
- numeric_only=bool_only, filter_type='bool')
-
- def all(self, axis=None, bool_only=None, skipna=True, level=None,
- **kwargs):
- """
- Return whether all elements are True over requested axis.
- %(na_action)s
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
- level : int or level name, default None
- If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a DataFrame
- bool_only : boolean, default None
- Only include boolean data.
-
- Returns
- -------
- any : Series (or DataFrame if level specified)
- """
- if axis is None:
- axis = self._stat_axis_number
- if level is not None:
- return self._agg_by_level('all', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanall, 'all', axis=axis, skipna=skipna,
- numeric_only=bool_only, filter_type='bool')
-
def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
axis = self._get_axis_number(axis)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3888,6 +3888,7 @@ def _add_numeric_operations(cls):
])
name = (cls._constructor_sliced.__name__
if cls._AXIS_LEN > 1 else 'scalar')
+
_num_doc = """
%(desc)s
@@ -3905,6 +3906,27 @@ def _add_numeric_operations(cls):
Include only float, int, boolean data. If None, will attempt to use
everything, then use only numeric data
+Returns
+-------
+%(outname)s : """ + name + " or " + cls.__name__ + " (if level specified)\n"
+
+ _bool_doc = """
+
+%(desc)s
+
+Parameters
+----------
+axis : """ + axis_descr + """
+skipna : boolean, default True
+ Exclude NA/null values. If an entire row/column is NA, the result
+ will be NA
+level : int or level name, default None
+ If the axis is a MultiIndex (hierarchical), count along a
+ particular level, collapsing into a """ + name + """
+bool_only : boolean, default None
+ Include only boolean data. If None, will attempt to use everything,
+ then use only boolean data
+
Returns
-------
%(outname)s : """ + name + " or " + cls.__name__ + " (if level specified)\n"
@@ -3971,6 +3993,36 @@ def stat_func(self, axis=None, skipna=None, level=None,
want the *index* of the minimum, use ``idxmin``. This is the
equivalent of the ``numpy.ndarray`` method ``argmin``.""", nanops.nanmin)
+ def _make_logical_function(name, desc, f):
+
+ @Substitution(outname=name, desc=desc)
+ @Appender(_bool_doc)
+ def logical_func(self, axis=None, bool_only=None, skipna=None,
+ level=None, **kwargs):
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ if bool_only is not None:
+ raise NotImplementedError(
+ "Option bool_only is not implemented with option "
+ "level.")
+ return self._agg_by_level(name, axis=axis, level=level,
+ skipna=skipna)
+ return self._reduce(f, axis=axis, skipna=skipna,
+ numeric_only=bool_only, filter_type='bool',
+ name=name)
+ logical_func.__name__ = name
+ return logical_func
+
+ cls.any = _make_logical_function(
+ 'any', 'Return whether any element is True over requested axis',
+ nanops.nanany)
+ cls.all = _make_logical_function(
+ 'all', 'Return whether all elements are True over requested axis',
+ nanops.nanall)
+
@Substitution(outname='mad',
desc="Return the mean absolute deviation of the values "
"for the requested axis")
diff --git a/pandas/core/index.py b/pandas/core/index.py
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -14,7 +14,8 @@
import pandas.index as _index
from pandas.lib import Timestamp, Timedelta, is_datetime_array
from pandas.core.base import PandasObject, FrozenList, FrozenNDArray, IndexOpsMixin, _shared_docs
-from pandas.util.decorators import Appender, cache_readonly, deprecate
+from pandas.util.decorators import (Appender, Substitution, cache_readonly,
+ deprecate)
from pandas.core.common import isnull, array_equivalent
import pandas.core.common as com
from pandas.core.common import (_values_from_object, is_float, is_integer,
@@ -2088,12 +2089,13 @@ def _evaluate_with_datetime_like(self, other, op, opstr):
def _add_numeric_methods_disabled(cls):
""" add in numeric methods to disable """
- def _make_invalid_op(opstr):
+ def _make_invalid_op(name):
- def _invalid_op(self, other=None):
- raise TypeError("cannot perform {opstr} with this index type: {typ}".format(opstr=opstr,
- typ=type(self)))
- return _invalid_op
+ def invalid_op(self, other=None):
+ raise TypeError("cannot perform {name} with this index type: {typ}".format(name=name,
+ typ=type(self)))
+ invalid_op.__name__ = name
+ return invalid_op
cls.__mul__ = cls.__rmul__ = _make_invalid_op('__mul__')
cls.__floordiv__ = cls.__rfloordiv__ = _make_invalid_op('__floordiv__')
@@ -2178,8 +2180,62 @@ def _evaluate_numeric_unary(self):
cls.__abs__ = _make_evaluate_unary(lambda x: np.abs(x),'__abs__')
cls.__inv__ = _make_evaluate_unary(lambda x: -x,'__inv__')
+ @classmethod
+ def _add_logical_methods(cls):
+ """ add in logical methods """
+
+ _doc = """
+
+ %(desc)s
+
+ Parameters
+ ----------
+ All arguments to numpy.%(outname)s are accepted.
+
+ Returns
+ -------
+ %(outname)s : bool or array_like (if axis is specified)
+ A single element array_like may be converted to bool."""
+
+ def _make_logical_function(name, desc, f):
+
+ @Substitution(outname=name, desc=desc)
+ @Appender(_doc)
+ def logical_func(self, *args, **kwargs):
+ result = f(self.values)
+ if isinstance(result, (np.ndarray, com.ABCSeries, Index)) \
+ and result.ndim == 0:
+ # return NumPy type
+ return result.dtype.type(result.item())
+ else: # pragma: no cover
+ return result
+ logical_func.__name__ = name
+ return logical_func
+
+ cls.all = _make_logical_function(
+ 'all', 'Return whether all elements are True', np.all)
+ cls.any = _make_logical_function(
+ 'any', 'Return whether any element is True', np.any)
+
+ @classmethod
+ def _add_logical_methods_disabled(cls):
+ """ add in logical methods to disable """
+
+ def _make_invalid_op(name):
+
+ def invalid_op(self, other=None):
+ raise TypeError("cannot perform {name} with this index type: {typ}".format(name=name,
+ typ=type(self)))
+ invalid_op.__name__ = name
+ return invalid_op
+
+ cls.all = _make_invalid_op('all')
+ cls.any = _make_invalid_op('any')
+
Index._add_numeric_methods_disabled()
+Index._add_logical_methods()
+
class NumericIndex(Index):
"""
@@ -2291,7 +2347,11 @@ def equals(self, other):
def _wrap_joined_index(self, joined, other):
name = self.name if self.name == other.name else None
return Int64Index(joined, name=name)
+
+
Int64Index._add_numeric_methods()
+Int64Index._add_logical_methods()
+
class Float64Index(NumericIndex):
@@ -2483,7 +2543,10 @@ def isin(self, values, level=None):
self._validate_index_level(level)
return lib.ismember_nans(self._array_values(), value_set,
isnull(list(value_set)).any())
+
+
Float64Index._add_numeric_methods()
+Float64Index._add_logical_methods_disabled()
class MultiIndex(Index):
@@ -4436,7 +4499,11 @@ def isin(self, values, level=None):
return np.zeros(len(labs), dtype=np.bool_)
else:
return np.lib.arraysetops.in1d(labs, sought_labels)
+
+
MultiIndex._add_numeric_methods_disabled()
+MultiIndex._add_logical_methods_disabled()
+
# For utility purposes
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1665,9 +1665,13 @@ def to_julian_date(self):
self.microsecond/3600.0/1e+6 +
self.nanosecond/3600.0/1e+9
)/24.0)
+
+
DatetimeIndex._add_numeric_methods_disabled()
+DatetimeIndex._add_logical_methods_disabled()
DatetimeIndex._add_datetimelike_methods()
+
def _generate_regular_range(start, end, periods, offset):
if isinstance(offset, Tick):
stride = offset.nanos
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -1262,9 +1262,12 @@ def tz_localize(self, tz, infer_dst=False):
"""
raise NotImplementedError("Not yet implemented for PeriodIndex")
+
PeriodIndex._add_numeric_methods_disabled()
+PeriodIndex._add_logical_methods_disabled()
PeriodIndex._add_datetimelike_methods()
+
def _get_ordinal_range(start, end, periods, freq):
if com._count_not_none(start, end, periods) < 2:
raise ValueError('Must specify 2 of start, end, periods')
diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py
--- a/pandas/tseries/tdi.py
+++ b/pandas/tseries/tdi.py
@@ -890,9 +890,12 @@ def delete(self, loc):
return TimedeltaIndex(new_tds, name=self.name, freq=freq)
+
TimedeltaIndex._add_numeric_methods()
+TimedeltaIndex._add_logical_methods_disabled()
TimedeltaIndex._add_datetimelike_methods()
+
def _is_convertible_to_index(other):
""" return a boolean whether I can attempt conversion to a TimedeltaIndex """
if isinstance(other, TimedeltaIndex):
</patch>
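For context, a minimal usage sketch of the behaviour the diff above introduces, assuming a pandas checkout with this patch applied (the error wording comes from `_make_invalid_op` in the diff; the exact repr of the index class may differ):

```python
import pandas as pd

idx = pd.Index([True, True, False])   # a generic (object-dtype) Index gains .all/.any
print(idx.any())                      # True, returned as a NumPy bool scalar
print(idx.all())                      # False

# Index types wired up with _add_logical_methods_disabled raise instead:
try:
    pd.Float64Index([1.0, 2.0]).all()
except TypeError as exc:
    print(exc)   # cannot perform all with this index type: <class '...Float64Index'>
```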
|
[]
|
[]
| |||
numpy__numpy-12684
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UFunc benchmarks complaining lack of coverage of matmul and _arg
When running benchmarks, a warning appears (see https://github.com/numpy/numpy/pull/12666#issuecomment-451790069) that
```
[ 0.00%] ···· Missing ufunc '_arg'
Missing ufunc 'matmul'
```
This originates on
https://github.com/numpy/numpy/blob/master/benchmarks/benchmarks/bench_ufunc.py#L25
I think `matmul` may well be tested elsewhere, so it can be excluded here. `np.core.umath._arg.__doc__` suggests `_arg` is for testing purposes only; I am not sure why it is exported.
</issue>
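As background, `matmul` shows up in the benchmark's enumeration because NumPy 1.16 implements it as a generalized ufunc (see the `matmul` entry with signature `(n?,k),(k,m?)->(n?,m?)` in the generator below). A quick inspection sketch, assuming NumPy >= 1.16 before the fix that stops re-exporting `_arg`:

```python
import numpy as np

print(isinstance(np.matmul, np.ufunc))   # True on NumPy 1.16+
print(np.matmul.signature)               # (n?,k),(k,m?)->(n?,m?)

# _arg is only re-exported prior to the fix; per its docstring it is a
# testing-only helper, so the benchmark need not cover it.
print(np.core.umath._arg.__doc__)
```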
<code>
[start of README.md]
1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60">
2
3 [](
4 https://travis-ci.org/numpy/numpy)
5 [](
6 https://ci.appveyor.com/project/charris/numpy)
7 [](
8 https://dev.azure.com/numpy/numpy/_build/latest?definitionId=5)
9 [](
10 https://codecov.io/gh/numpy/numpy)
11
12 NumPy is the fundamental package needed for scientific computing with Python.
13
14 - **Website (including documentation):** https://www.numpy.org
15 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion
16 - **Source:** https://github.com/numpy/numpy
17 - **Bug reports:** https://github.com/numpy/numpy/issues
18
19 It provides:
20
21 - a powerful N-dimensional array object
22 - sophisticated (broadcasting) functions
23 - tools for integrating C/C++ and Fortran code
24 - useful linear algebra, Fourier transform, and random number capabilities
25
26 Testing:
27
28 - NumPy versions ≥ 1.15 require `pytest`
29 - NumPy versions < 1.15 require `nose`
30
31 Tests can then be run after installation with:
32
33 python -c 'import numpy; numpy.test()'
34
35 [](https://numfocus.org)
36
[end of README.md]
[start of numpy/core/code_generators/generate_umath.py]
1 from __future__ import division, print_function
2
3 import os
4 import re
5 import struct
6 import sys
7 import textwrap
8
9 sys.path.insert(0, os.path.dirname(__file__))
10 import ufunc_docstrings as docstrings
11 sys.path.pop(0)
12
13 Zero = "PyInt_FromLong(0)"
14 One = "PyInt_FromLong(1)"
15 True_ = "(Py_INCREF(Py_True), Py_True)"
16 False_ = "(Py_INCREF(Py_False), Py_False)"
17 None_ = object()
18 AllOnes = "PyInt_FromLong(-1)"
19 MinusInfinity = 'PyFloat_FromDouble(-NPY_INFINITY)'
20 ReorderableNone = "(Py_INCREF(Py_None), Py_None)"
21
22 # Sentinel value to specify using the full type description in the
23 # function name
24 class FullTypeDescr(object):
25 pass
26
27 class FuncNameSuffix(object):
28 """Stores the suffix to append when generating functions names.
29 """
30 def __init__(self, suffix):
31 self.suffix = suffix
32
33 class TypeDescription(object):
34 """Type signature for a ufunc.
35
36 Attributes
37 ----------
38 type : str
39 Character representing the nominal type.
40 func_data : str or None or FullTypeDescr or FuncNameSuffix, optional
41 The string representing the expression to insert into the data
42 array, if any.
43 in_ : str or None, optional
44 The typecode(s) of the inputs.
45 out : str or None, optional
46 The typecode(s) of the outputs.
47 astype : dict or None, optional
48 If astype['x'] is 'y', uses PyUFunc_x_x_As_y_y/PyUFunc_xx_x_As_yy_y
49 instead of PyUFunc_x_x/PyUFunc_xx_x.
50 simd: list
51 Available SIMD ufunc loops, dispatched at runtime in specified order
52 Currently only supported for simples types (see make_arrays)
53 """
54 def __init__(self, type, f=None, in_=None, out=None, astype=None, simd=None):
55 self.type = type
56 self.func_data = f
57 if astype is None:
58 astype = {}
59 self.astype_dict = astype
60 if in_ is not None:
61 in_ = in_.replace('P', type)
62 self.in_ = in_
63 if out is not None:
64 out = out.replace('P', type)
65 self.out = out
66 self.simd = simd
67
68 def finish_signature(self, nin, nout):
69 if self.in_ is None:
70 self.in_ = self.type * nin
71 assert len(self.in_) == nin
72 if self.out is None:
73 self.out = self.type * nout
74 assert len(self.out) == nout
75 self.astype = self.astype_dict.get(self.type, None)
76
77 _fdata_map = dict(e='npy_%sf', f='npy_%sf', d='npy_%s', g='npy_%sl',
78 F='nc_%sf', D='nc_%s', G='nc_%sl')
79 def build_func_data(types, f):
80 func_data = [_fdata_map.get(t, '%s') % (f,) for t in types]
81 return func_data
82
83 def TD(types, f=None, astype=None, in_=None, out=None, simd=None):
84 if f is not None:
85 if isinstance(f, str):
86 func_data = build_func_data(types, f)
87 elif len(f) != len(types):
88 raise ValueError("Number of types and f do not match")
89 else:
90 func_data = f
91 else:
92 func_data = (None,) * len(types)
93 if isinstance(in_, str):
94 in_ = (in_,) * len(types)
95 elif in_ is None:
96 in_ = (None,) * len(types)
97 elif len(in_) != len(types):
98 raise ValueError("Number of types and inputs do not match")
99 if isinstance(out, str):
100 out = (out,) * len(types)
101 elif out is None:
102 out = (None,) * len(types)
103 elif len(out) != len(types):
104 raise ValueError("Number of types and outputs do not match")
105 tds = []
106 for t, fd, i, o in zip(types, func_data, in_, out):
107 # [(simd-name, list of types)]
108 if simd is not None:
109 simdt = [k for k, v in simd if t in v]
110 else:
111 simdt = []
112 tds.append(TypeDescription(t, f=fd, in_=i, out=o, astype=astype, simd=simdt))
113 return tds
114
115 class Ufunc(object):
116 """Description of a ufunc.
117
118 Attributes
119 ----------
120 nin : number of input arguments
121 nout : number of output arguments
122 identity : identity element for a two-argument function
123 docstring : docstring for the ufunc
124 type_descriptions : list of TypeDescription objects
125 """
126 def __init__(self, nin, nout, identity, docstring, typereso,
127 *type_descriptions, **kwargs):
128 self.nin = nin
129 self.nout = nout
130 if identity is None:
131 identity = None_
132 self.identity = identity
133 self.docstring = docstring
134 self.typereso = typereso
135 self.type_descriptions = []
136 self.signature = kwargs.pop('signature', None)
137 for td in type_descriptions:
138 self.type_descriptions.extend(td)
139 for td in self.type_descriptions:
140 td.finish_signature(self.nin, self.nout)
141 if kwargs:
142 raise ValueError('unknown kwargs %r' % str(kwargs))
143
144 # String-handling utilities to avoid locale-dependence.
145
146 import string
147 if sys.version_info[0] < 3:
148 UPPER_TABLE = string.maketrans(string.ascii_lowercase,
149 string.ascii_uppercase)
150 else:
151 UPPER_TABLE = bytes.maketrans(bytes(string.ascii_lowercase, "ascii"),
152 bytes(string.ascii_uppercase, "ascii"))
153
154 def english_upper(s):
155 """ Apply English case rules to convert ASCII strings to all upper case.
156
157 This is an internal utility function to replace calls to str.upper() such
158 that we can avoid changing behavior with changing locales. In particular,
159 Turkish has distinct dotted and dotless variants of the Latin letter "I" in
160 both lowercase and uppercase. Thus, "i".upper() != "I" in a "tr" locale.
161
162 Parameters
163 ----------
164 s : str
165
166 Returns
167 -------
168 uppered : str
169
170 Examples
171 --------
172 >>> from numpy.lib.utils import english_upper
173 >>> s = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'
174 >>> english_upper(s)
175 'ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_'
176 >>> english_upper('')
177 ''
178 """
179 uppered = s.translate(UPPER_TABLE)
180 return uppered
181
182
183 #each entry in defdict is a Ufunc object.
184
185 #name: [string of chars for which it is defined,
186 # string of characters using func interface,
187 # tuple of strings giving funcs for data,
188 # (in, out), or (instr, outstr) giving the signature as character codes,
189 # identity,
190 # docstring,
191 # output specification (optional)
192 # ]
193
194 chartoname = {'?': 'bool',
195 'b': 'byte',
196 'B': 'ubyte',
197 'h': 'short',
198 'H': 'ushort',
199 'i': 'int',
200 'I': 'uint',
201 'l': 'long',
202 'L': 'ulong',
203 'q': 'longlong',
204 'Q': 'ulonglong',
205 'e': 'half',
206 'f': 'float',
207 'd': 'double',
208 'g': 'longdouble',
209 'F': 'cfloat',
210 'D': 'cdouble',
211 'G': 'clongdouble',
212 'M': 'datetime',
213 'm': 'timedelta',
214 'O': 'OBJECT',
215 # '.' is like 'O', but calls a method of the object instead
216 # of a function
217 'P': 'OBJECT',
218 }
219
220 all = '?bBhHiIlLqQefdgFDGOMm'
221 O = 'O'
222 P = 'P'
223 ints = 'bBhHiIlLqQ'
224 times = 'Mm'
225 timedeltaonly = 'm'
226 intsO = ints + O
227 bints = '?' + ints
228 bintsO = bints + O
229 flts = 'efdg'
230 fltsO = flts + O
231 fltsP = flts + P
232 cmplx = 'FDG'
233 cmplxO = cmplx + O
234 cmplxP = cmplx + P
235 inexact = flts + cmplx
236 inexactvec = 'fd'
237 noint = inexact+O
238 nointP = inexact+P
239 allP = bints+times+flts+cmplxP
240 nobool = all[1:]
241 noobj = all[:-3]+all[-2:]
242 nobool_or_obj = all[1:-3]+all[-2:]
243 nobool_or_datetime = all[1:-2]+all[-1:]
244 intflt = ints+flts
245 intfltcmplx = ints+flts+cmplx
246 nocmplx = bints+times+flts
247 nocmplxO = nocmplx+O
248 nocmplxP = nocmplx+P
249 notimes_or_obj = bints + inexact
250 nodatetime_or_obj = bints + inexact
251
252 # Find which code corresponds to int64.
253 int64 = ''
254 uint64 = ''
255 for code in 'bhilq':
256 if struct.calcsize(code) == 8:
257 int64 = code
258 uint64 = english_upper(code)
259 break
260
261 # This dictionary describes all the ufunc implementations, generating
262 # all the function names and their corresponding ufunc signatures. TD is
263 # an object which expands a list of character codes into an array of
264 # TypeDescriptions.
265 defdict = {
266 'add':
267 Ufunc(2, 1, Zero,
268 docstrings.get('numpy.core.umath.add'),
269 'PyUFunc_AdditionTypeResolver',
270 TD(notimes_or_obj, simd=[('avx2', ints)]),
271 [TypeDescription('M', FullTypeDescr, 'Mm', 'M'),
272 TypeDescription('m', FullTypeDescr, 'mm', 'm'),
273 TypeDescription('M', FullTypeDescr, 'mM', 'M'),
274 ],
275 TD(O, f='PyNumber_Add'),
276 ),
277 'subtract':
278 Ufunc(2, 1, None, # Zero is only a unit to the right, not the left
279 docstrings.get('numpy.core.umath.subtract'),
280 'PyUFunc_SubtractionTypeResolver',
281 TD(notimes_or_obj, simd=[('avx2', ints)]),
282 [TypeDescription('M', FullTypeDescr, 'Mm', 'M'),
283 TypeDescription('m', FullTypeDescr, 'mm', 'm'),
284 TypeDescription('M', FullTypeDescr, 'MM', 'm'),
285 ],
286 TD(O, f='PyNumber_Subtract'),
287 ),
288 'multiply':
289 Ufunc(2, 1, One,
290 docstrings.get('numpy.core.umath.multiply'),
291 'PyUFunc_MultiplicationTypeResolver',
292 TD(notimes_or_obj, simd=[('avx2', ints)]),
293 [TypeDescription('m', FullTypeDescr, 'mq', 'm'),
294 TypeDescription('m', FullTypeDescr, 'qm', 'm'),
295 TypeDescription('m', FullTypeDescr, 'md', 'm'),
296 TypeDescription('m', FullTypeDescr, 'dm', 'm'),
297 ],
298 TD(O, f='PyNumber_Multiply'),
299 ),
300 'divide':
301 Ufunc(2, 1, None, # One is only a unit to the right, not the left
302 docstrings.get('numpy.core.umath.divide'),
303 'PyUFunc_MixedDivisionTypeResolver',
304 TD(intfltcmplx),
305 [TypeDescription('m', FullTypeDescr, 'mq', 'm'),
306 TypeDescription('m', FullTypeDescr, 'md', 'm'),
307 TypeDescription('m', FullTypeDescr, 'mm', 'd'),
308 ],
309 TD(O, f='PyNumber_Divide'),
310 ),
311 'floor_divide':
312 Ufunc(2, 1, None, # One is only a unit to the right, not the left
313 docstrings.get('numpy.core.umath.floor_divide'),
314 'PyUFunc_DivisionTypeResolver',
315 TD(intfltcmplx),
316 [TypeDescription('m', FullTypeDescr, 'mq', 'm'),
317 TypeDescription('m', FullTypeDescr, 'md', 'm'),
318 TypeDescription('m', FullTypeDescr, 'mm', 'q'),
319 ],
320 TD(O, f='PyNumber_FloorDivide'),
321 ),
322 'true_divide':
323 Ufunc(2, 1, None, # One is only a unit to the right, not the left
324 docstrings.get('numpy.core.umath.true_divide'),
325 'PyUFunc_TrueDivisionTypeResolver',
326 TD(flts+cmplx),
327 [TypeDescription('m', FullTypeDescr, 'mq', 'm'),
328 TypeDescription('m', FullTypeDescr, 'md', 'm'),
329 TypeDescription('m', FullTypeDescr, 'mm', 'd'),
330 ],
331 TD(O, f='PyNumber_TrueDivide'),
332 ),
333 'conjugate':
334 Ufunc(1, 1, None,
335 docstrings.get('numpy.core.umath.conjugate'),
336 None,
337 TD(ints+flts+cmplx, simd=[('avx2', ints)]),
338 TD(P, f='conjugate'),
339 ),
340 'fmod':
341 Ufunc(2, 1, None,
342 docstrings.get('numpy.core.umath.fmod'),
343 None,
344 TD(ints),
345 TD(flts, f='fmod', astype={'e':'f'}),
346 TD(P, f='fmod'),
347 ),
348 'square':
349 Ufunc(1, 1, None,
350 docstrings.get('numpy.core.umath.square'),
351 None,
352 TD(ints+inexact, simd=[('avx2', ints)]),
353 TD(O, f='Py_square'),
354 ),
355 'reciprocal':
356 Ufunc(1, 1, None,
357 docstrings.get('numpy.core.umath.reciprocal'),
358 None,
359 TD(ints+inexact, simd=[('avx2', ints)]),
360 TD(O, f='Py_reciprocal'),
361 ),
362 # This is no longer used as numpy.ones_like, however it is
363 # still used by some internal calls.
364 '_ones_like':
365 Ufunc(1, 1, None,
366 docstrings.get('numpy.core.umath._ones_like'),
367 'PyUFunc_OnesLikeTypeResolver',
368 TD(noobj),
369 TD(O, f='Py_get_one'),
370 ),
371 'power':
372 Ufunc(2, 1, None,
373 docstrings.get('numpy.core.umath.power'),
374 None,
375 TD(ints),
376 TD(inexact, f='pow', astype={'e':'f'}),
377 TD(O, f='npy_ObjectPower'),
378 ),
379 'float_power':
380 Ufunc(2, 1, None,
381 docstrings.get('numpy.core.umath.float_power'),
382 None,
383 TD('dgDG', f='pow'),
384 ),
385 'absolute':
386 Ufunc(1, 1, None,
387 docstrings.get('numpy.core.umath.absolute'),
388 'PyUFunc_AbsoluteTypeResolver',
389 TD(bints+flts+timedeltaonly),
390 TD(cmplx, out=('f', 'd', 'g')),
391 TD(O, f='PyNumber_Absolute'),
392 ),
393 '_arg':
394 Ufunc(1, 1, None,
395 docstrings.get('numpy.core.umath._arg'),
396 None,
397 TD(cmplx, out=('f', 'd', 'g')),
398 ),
399 'negative':
400 Ufunc(1, 1, None,
401 docstrings.get('numpy.core.umath.negative'),
402 'PyUFunc_NegativeTypeResolver',
403 TD(bints+flts+timedeltaonly, simd=[('avx2', ints)]),
404 TD(cmplx, f='neg'),
405 TD(O, f='PyNumber_Negative'),
406 ),
407 'positive':
408 Ufunc(1, 1, None,
409 docstrings.get('numpy.core.umath.positive'),
410 'PyUFunc_SimpleUnaryOperationTypeResolver',
411 TD(ints+flts+timedeltaonly),
412 TD(cmplx, f='pos'),
413 TD(O, f='PyNumber_Positive'),
414 ),
415 'sign':
416 Ufunc(1, 1, None,
417 docstrings.get('numpy.core.umath.sign'),
418 'PyUFunc_SimpleUnaryOperationTypeResolver',
419 TD(nobool_or_datetime),
420 ),
421 'greater':
422 Ufunc(2, 1, None,
423 docstrings.get('numpy.core.umath.greater'),
424 'PyUFunc_SimpleBinaryComparisonTypeResolver',
425 TD(all, out='?', simd=[('avx2', ints)]),
426 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
427 ),
428 'greater_equal':
429 Ufunc(2, 1, None,
430 docstrings.get('numpy.core.umath.greater_equal'),
431 'PyUFunc_SimpleBinaryComparisonTypeResolver',
432 TD(all, out='?', simd=[('avx2', ints)]),
433 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
434 ),
435 'less':
436 Ufunc(2, 1, None,
437 docstrings.get('numpy.core.umath.less'),
438 'PyUFunc_SimpleBinaryComparisonTypeResolver',
439 TD(all, out='?', simd=[('avx2', ints)]),
440 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
441 ),
442 'less_equal':
443 Ufunc(2, 1, None,
444 docstrings.get('numpy.core.umath.less_equal'),
445 'PyUFunc_SimpleBinaryComparisonTypeResolver',
446 TD(all, out='?', simd=[('avx2', ints)]),
447 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
448 ),
449 'equal':
450 Ufunc(2, 1, None,
451 docstrings.get('numpy.core.umath.equal'),
452 'PyUFunc_SimpleBinaryComparisonTypeResolver',
453 TD(all, out='?', simd=[('avx2', ints)]),
454 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
455 ),
456 'not_equal':
457 Ufunc(2, 1, None,
458 docstrings.get('numpy.core.umath.not_equal'),
459 'PyUFunc_SimpleBinaryComparisonTypeResolver',
460 TD(all, out='?', simd=[('avx2', ints)]),
461 [TypeDescription('O', FullTypeDescr, 'OO', 'O')],
462 ),
463 'logical_and':
464 Ufunc(2, 1, True_,
465 docstrings.get('numpy.core.umath.logical_and'),
466 'PyUFunc_SimpleBinaryComparisonTypeResolver',
467 TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
468 TD(O, f='npy_ObjectLogicalAnd'),
469 ),
470 'logical_not':
471 Ufunc(1, 1, None,
472 docstrings.get('numpy.core.umath.logical_not'),
473 None,
474 TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
475 TD(O, f='npy_ObjectLogicalNot'),
476 ),
477 'logical_or':
478 Ufunc(2, 1, False_,
479 docstrings.get('numpy.core.umath.logical_or'),
480 'PyUFunc_SimpleBinaryComparisonTypeResolver',
481 TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
482 TD(O, f='npy_ObjectLogicalOr'),
483 ),
484 'logical_xor':
485 Ufunc(2, 1, False_,
486 docstrings.get('numpy.core.umath.logical_xor'),
487 'PyUFunc_SimpleBinaryComparisonTypeResolver',
488 TD(nodatetime_or_obj, out='?'),
489 TD(P, f='logical_xor'),
490 ),
491 'maximum':
492 Ufunc(2, 1, ReorderableNone,
493 docstrings.get('numpy.core.umath.maximum'),
494 'PyUFunc_SimpleBinaryOperationTypeResolver',
495 TD(noobj),
496 TD(O, f='npy_ObjectMax')
497 ),
498 'minimum':
499 Ufunc(2, 1, ReorderableNone,
500 docstrings.get('numpy.core.umath.minimum'),
501 'PyUFunc_SimpleBinaryOperationTypeResolver',
502 TD(noobj),
503 TD(O, f='npy_ObjectMin')
504 ),
505 'fmax':
506 Ufunc(2, 1, ReorderableNone,
507 docstrings.get('numpy.core.umath.fmax'),
508 'PyUFunc_SimpleBinaryOperationTypeResolver',
509 TD(noobj),
510 TD(O, f='npy_ObjectMax')
511 ),
512 'fmin':
513 Ufunc(2, 1, ReorderableNone,
514 docstrings.get('numpy.core.umath.fmin'),
515 'PyUFunc_SimpleBinaryOperationTypeResolver',
516 TD(noobj),
517 TD(O, f='npy_ObjectMin')
518 ),
519 'logaddexp':
520 Ufunc(2, 1, MinusInfinity,
521 docstrings.get('numpy.core.umath.logaddexp'),
522 None,
523 TD(flts, f="logaddexp", astype={'e':'f'})
524 ),
525 'logaddexp2':
526 Ufunc(2, 1, None,
527 docstrings.get('numpy.core.umath.logaddexp2'),
528 None,
529 TD(flts, f="logaddexp2", astype={'e':'f'})
530 ),
531 'bitwise_and':
532 Ufunc(2, 1, AllOnes,
533 docstrings.get('numpy.core.umath.bitwise_and'),
534 None,
535 TD(bints, simd=[('avx2', ints)]),
536 TD(O, f='PyNumber_And'),
537 ),
538 'bitwise_or':
539 Ufunc(2, 1, Zero,
540 docstrings.get('numpy.core.umath.bitwise_or'),
541 None,
542 TD(bints, simd=[('avx2', ints)]),
543 TD(O, f='PyNumber_Or'),
544 ),
545 'bitwise_xor':
546 Ufunc(2, 1, Zero,
547 docstrings.get('numpy.core.umath.bitwise_xor'),
548 None,
549 TD(bints, simd=[('avx2', ints)]),
550 TD(O, f='PyNumber_Xor'),
551 ),
552 'invert':
553 Ufunc(1, 1, None,
554 docstrings.get('numpy.core.umath.invert'),
555 None,
556 TD(bints, simd=[('avx2', ints)]),
557 TD(O, f='PyNumber_Invert'),
558 ),
559 'left_shift':
560 Ufunc(2, 1, None,
561 docstrings.get('numpy.core.umath.left_shift'),
562 None,
563 TD(ints, simd=[('avx2', ints)]),
564 TD(O, f='PyNumber_Lshift'),
565 ),
566 'right_shift':
567 Ufunc(2, 1, None,
568 docstrings.get('numpy.core.umath.right_shift'),
569 None,
570 TD(ints, simd=[('avx2', ints)]),
571 TD(O, f='PyNumber_Rshift'),
572 ),
573 'heaviside':
574 Ufunc(2, 1, None,
575 docstrings.get('numpy.core.umath.heaviside'),
576 None,
577 TD(flts, f='heaviside', astype={'e':'f'}),
578 ),
579 'degrees':
580 Ufunc(1, 1, None,
581 docstrings.get('numpy.core.umath.degrees'),
582 None,
583 TD(fltsP, f='degrees', astype={'e':'f'}),
584 ),
585 'rad2deg':
586 Ufunc(1, 1, None,
587 docstrings.get('numpy.core.umath.rad2deg'),
588 None,
589 TD(fltsP, f='rad2deg', astype={'e':'f'}),
590 ),
591 'radians':
592 Ufunc(1, 1, None,
593 docstrings.get('numpy.core.umath.radians'),
594 None,
595 TD(fltsP, f='radians', astype={'e':'f'}),
596 ),
597 'deg2rad':
598 Ufunc(1, 1, None,
599 docstrings.get('numpy.core.umath.deg2rad'),
600 None,
601 TD(fltsP, f='deg2rad', astype={'e':'f'}),
602 ),
603 'arccos':
604 Ufunc(1, 1, None,
605 docstrings.get('numpy.core.umath.arccos'),
606 None,
607 TD(inexact, f='acos', astype={'e':'f'}),
608 TD(P, f='arccos'),
609 ),
610 'arccosh':
611 Ufunc(1, 1, None,
612 docstrings.get('numpy.core.umath.arccosh'),
613 None,
614 TD(inexact, f='acosh', astype={'e':'f'}),
615 TD(P, f='arccosh'),
616 ),
617 'arcsin':
618 Ufunc(1, 1, None,
619 docstrings.get('numpy.core.umath.arcsin'),
620 None,
621 TD(inexact, f='asin', astype={'e':'f'}),
622 TD(P, f='arcsin'),
623 ),
624 'arcsinh':
625 Ufunc(1, 1, None,
626 docstrings.get('numpy.core.umath.arcsinh'),
627 None,
628 TD(inexact, f='asinh', astype={'e':'f'}),
629 TD(P, f='arcsinh'),
630 ),
631 'arctan':
632 Ufunc(1, 1, None,
633 docstrings.get('numpy.core.umath.arctan'),
634 None,
635 TD(inexact, f='atan', astype={'e':'f'}),
636 TD(P, f='arctan'),
637 ),
638 'arctanh':
639 Ufunc(1, 1, None,
640 docstrings.get('numpy.core.umath.arctanh'),
641 None,
642 TD(inexact, f='atanh', astype={'e':'f'}),
643 TD(P, f='arctanh'),
644 ),
645 'cos':
646 Ufunc(1, 1, None,
647 docstrings.get('numpy.core.umath.cos'),
648 None,
649 TD(inexact, f='cos', astype={'e':'f'}),
650 TD(P, f='cos'),
651 ),
652 'sin':
653 Ufunc(1, 1, None,
654 docstrings.get('numpy.core.umath.sin'),
655 None,
656 TD(inexact, f='sin', astype={'e':'f'}),
657 TD(P, f='sin'),
658 ),
659 'tan':
660 Ufunc(1, 1, None,
661 docstrings.get('numpy.core.umath.tan'),
662 None,
663 TD(inexact, f='tan', astype={'e':'f'}),
664 TD(P, f='tan'),
665 ),
666 'cosh':
667 Ufunc(1, 1, None,
668 docstrings.get('numpy.core.umath.cosh'),
669 None,
670 TD(inexact, f='cosh', astype={'e':'f'}),
671 TD(P, f='cosh'),
672 ),
673 'sinh':
674 Ufunc(1, 1, None,
675 docstrings.get('numpy.core.umath.sinh'),
676 None,
677 TD(inexact, f='sinh', astype={'e':'f'}),
678 TD(P, f='sinh'),
679 ),
680 'tanh':
681 Ufunc(1, 1, None,
682 docstrings.get('numpy.core.umath.tanh'),
683 None,
684 TD(inexact, f='tanh', astype={'e':'f'}),
685 TD(P, f='tanh'),
686 ),
687 'exp':
688 Ufunc(1, 1, None,
689 docstrings.get('numpy.core.umath.exp'),
690 None,
691 TD(inexact, f='exp', astype={'e':'f'}),
692 TD(P, f='exp'),
693 ),
694 'exp2':
695 Ufunc(1, 1, None,
696 docstrings.get('numpy.core.umath.exp2'),
697 None,
698 TD(inexact, f='exp2', astype={'e':'f'}),
699 TD(P, f='exp2'),
700 ),
701 'expm1':
702 Ufunc(1, 1, None,
703 docstrings.get('numpy.core.umath.expm1'),
704 None,
705 TD(inexact, f='expm1', astype={'e':'f'}),
706 TD(P, f='expm1'),
707 ),
708 'log':
709 Ufunc(1, 1, None,
710 docstrings.get('numpy.core.umath.log'),
711 None,
712 TD(inexact, f='log', astype={'e':'f'}),
713 TD(P, f='log'),
714 ),
715 'log2':
716 Ufunc(1, 1, None,
717 docstrings.get('numpy.core.umath.log2'),
718 None,
719 TD(inexact, f='log2', astype={'e':'f'}),
720 TD(P, f='log2'),
721 ),
722 'log10':
723 Ufunc(1, 1, None,
724 docstrings.get('numpy.core.umath.log10'),
725 None,
726 TD(inexact, f='log10', astype={'e':'f'}),
727 TD(P, f='log10'),
728 ),
729 'log1p':
730 Ufunc(1, 1, None,
731 docstrings.get('numpy.core.umath.log1p'),
732 None,
733 TD(inexact, f='log1p', astype={'e':'f'}),
734 TD(P, f='log1p'),
735 ),
736 'sqrt':
737 Ufunc(1, 1, None,
738 docstrings.get('numpy.core.umath.sqrt'),
739 None,
740 TD('e', f='sqrt', astype={'e':'f'}),
741 TD(inexactvec),
742 TD(inexact, f='sqrt', astype={'e':'f'}),
743 TD(P, f='sqrt'),
744 ),
745 'cbrt':
746 Ufunc(1, 1, None,
747 docstrings.get('numpy.core.umath.cbrt'),
748 None,
749 TD(flts, f='cbrt', astype={'e':'f'}),
750 TD(P, f='cbrt'),
751 ),
752 'ceil':
753 Ufunc(1, 1, None,
754 docstrings.get('numpy.core.umath.ceil'),
755 None,
756 TD(flts, f='ceil', astype={'e':'f'}),
757 TD(P, f='ceil'),
758 ),
759 'trunc':
760 Ufunc(1, 1, None,
761 docstrings.get('numpy.core.umath.trunc'),
762 None,
763 TD(flts, f='trunc', astype={'e':'f'}),
764 TD(P, f='trunc'),
765 ),
766 'fabs':
767 Ufunc(1, 1, None,
768 docstrings.get('numpy.core.umath.fabs'),
769 None,
770 TD(flts, f='fabs', astype={'e':'f'}),
771 TD(P, f='fabs'),
772 ),
773 'floor':
774 Ufunc(1, 1, None,
775 docstrings.get('numpy.core.umath.floor'),
776 None,
777 TD(flts, f='floor', astype={'e':'f'}),
778 TD(P, f='floor'),
779 ),
780 'rint':
781 Ufunc(1, 1, None,
782 docstrings.get('numpy.core.umath.rint'),
783 None,
784 TD(inexact, f='rint', astype={'e':'f'}),
785 TD(P, f='rint'),
786 ),
787 'arctan2':
788 Ufunc(2, 1, None,
789 docstrings.get('numpy.core.umath.arctan2'),
790 None,
791 TD(flts, f='atan2', astype={'e':'f'}),
792 TD(P, f='arctan2'),
793 ),
794 'remainder':
795 Ufunc(2, 1, None,
796 docstrings.get('numpy.core.umath.remainder'),
797 'PyUFunc_RemainderTypeResolver',
798 TD(intflt),
799 [TypeDescription('m', FullTypeDescr, 'mm', 'm')],
800 TD(O, f='PyNumber_Remainder'),
801 ),
802 'divmod':
803 Ufunc(2, 2, None,
804 docstrings.get('numpy.core.umath.divmod'),
805 None,
806 TD(intflt),
807 # TD(O, f='PyNumber_Divmod'), # gh-9730
808 ),
809 'hypot':
810 Ufunc(2, 1, Zero,
811 docstrings.get('numpy.core.umath.hypot'),
812 None,
813 TD(flts, f='hypot', astype={'e':'f'}),
814 TD(P, f='hypot'),
815 ),
816 'isnan':
817 Ufunc(1, 1, None,
818 docstrings.get('numpy.core.umath.isnan'),
819 None,
820 TD(inexact, out='?'),
821 ),
822 'isnat':
823 Ufunc(1, 1, None,
824 docstrings.get('numpy.core.umath.isnat'),
825 'PyUFunc_IsNaTTypeResolver',
826 TD(times, out='?'),
827 ),
828 'isinf':
829 Ufunc(1, 1, None,
830 docstrings.get('numpy.core.umath.isinf'),
831 None,
832 TD(inexact, out='?'),
833 ),
834 'isfinite':
835 Ufunc(1, 1, None,
836 docstrings.get('numpy.core.umath.isfinite'),
837 None,
838 TD(inexact, out='?'),
839 ),
840 'signbit':
841 Ufunc(1, 1, None,
842 docstrings.get('numpy.core.umath.signbit'),
843 None,
844 TD(flts, out='?'),
845 ),
846 'copysign':
847 Ufunc(2, 1, None,
848 docstrings.get('numpy.core.umath.copysign'),
849 None,
850 TD(flts),
851 ),
852 'nextafter':
853 Ufunc(2, 1, None,
854 docstrings.get('numpy.core.umath.nextafter'),
855 None,
856 TD(flts),
857 ),
858 'spacing':
859 Ufunc(1, 1, None,
860 docstrings.get('numpy.core.umath.spacing'),
861 None,
862 TD(flts),
863 ),
864 'modf':
865 Ufunc(1, 2, None,
866 docstrings.get('numpy.core.umath.modf'),
867 None,
868 TD(flts),
869 ),
870 'ldexp' :
871 Ufunc(2, 1, None,
872 docstrings.get('numpy.core.umath.ldexp'),
873 None,
874 [TypeDescription('e', None, 'ei', 'e'),
875 TypeDescription('f', None, 'fi', 'f'),
876 TypeDescription('e', FuncNameSuffix('long'), 'el', 'e'),
877 TypeDescription('f', FuncNameSuffix('long'), 'fl', 'f'),
878 TypeDescription('d', None, 'di', 'd'),
879 TypeDescription('d', FuncNameSuffix('long'), 'dl', 'd'),
880 TypeDescription('g', None, 'gi', 'g'),
881 TypeDescription('g', FuncNameSuffix('long'), 'gl', 'g'),
882 ],
883 ),
884 'frexp' :
885 Ufunc(1, 2, None,
886 docstrings.get('numpy.core.umath.frexp'),
887 None,
888 [TypeDescription('e', None, 'e', 'ei'),
889 TypeDescription('f', None, 'f', 'fi'),
890 TypeDescription('d', None, 'd', 'di'),
891 TypeDescription('g', None, 'g', 'gi'),
892 ],
893 ),
894 'gcd' :
895 Ufunc(2, 1, Zero,
896 docstrings.get('numpy.core.umath.gcd'),
897 "PyUFunc_SimpleBinaryOperationTypeResolver",
898 TD(ints),
899 TD('O', f='npy_ObjectGCD'),
900 ),
901 'lcm' :
902 Ufunc(2, 1, None,
903 docstrings.get('numpy.core.umath.lcm'),
904 "PyUFunc_SimpleBinaryOperationTypeResolver",
905 TD(ints),
906 TD('O', f='npy_ObjectLCM'),
907 ),
908 'matmul' :
909 Ufunc(2, 1, None,
910 docstrings.get('numpy.core.umath.matmul'),
911 "PyUFunc_SimpleBinaryOperationTypeResolver",
912 TD(notimes_or_obj),
913 signature='(n?,k),(k,m?)->(n?,m?)',
914 ),
915 }
916
917 if sys.version_info[0] >= 3:
918 # Will be aliased to true_divide in umathmodule.c.src:InitOtherOperators
919 del defdict['divide']
920
921 def indent(st, spaces):
922 indentation = ' '*spaces
923 indented = indentation + st.replace('\n', '\n'+indentation)
924 # trim off any trailing spaces
925 indented = re.sub(r' +$', r'', indented)
926 return indented
927
928 chartotype1 = {'e': 'e_e',
929 'f': 'f_f',
930 'd': 'd_d',
931 'g': 'g_g',
932 'F': 'F_F',
933 'D': 'D_D',
934 'G': 'G_G',
935 'O': 'O_O',
936 'P': 'O_O_method'}
937
938 chartotype2 = {'e': 'ee_e',
939 'f': 'ff_f',
940 'd': 'dd_d',
941 'g': 'gg_g',
942 'F': 'FF_F',
943 'D': 'DD_D',
944 'G': 'GG_G',
945 'O': 'OO_O',
946 'P': 'OO_O_method'}
947 #for each name
948 # 1) create functions, data, and signature
949 # 2) fill in functions and data in InitOperators
950 # 3) add function.
951
952 def make_arrays(funcdict):
953 # functions array contains an entry for every type implemented NULL
954 # should be placed where PyUfunc_ style function will be filled in
955 # later
956 code1list = []
957 code2list = []
958 names = sorted(funcdict.keys())
959 for name in names:
960 uf = funcdict[name]
961 funclist = []
962 datalist = []
963 siglist = []
964 k = 0
965 sub = 0
966
967 for t in uf.type_descriptions:
968 if t.func_data is FullTypeDescr:
969 tname = english_upper(chartoname[t.type])
970 datalist.append('(void *)NULL')
971 funclist.append(
972 '%s_%s_%s_%s' % (tname, t.in_, t.out, name))
973 elif isinstance(t.func_data, FuncNameSuffix):
974 datalist.append('(void *)NULL')
975 tname = english_upper(chartoname[t.type])
976 funclist.append(
977 '%s_%s_%s' % (tname, name, t.func_data.suffix))
978 elif t.func_data is None:
979 datalist.append('(void *)NULL')
980 tname = english_upper(chartoname[t.type])
981 funclist.append('%s_%s' % (tname, name))
982 if t.simd is not None:
983 for vt in t.simd:
984 code2list.append(textwrap.dedent("""\
985 #ifdef HAVE_ATTRIBUTE_TARGET_{ISA}
986 if (npy_cpu_supports("{isa}")) {{
987 {fname}_functions[{idx}] = {type}_{fname}_{isa};
988 }}
989 #endif
990 """).format(
991 ISA=vt.upper(), isa=vt,
992 fname=name, type=tname, idx=k
993 ))
994 else:
995 funclist.append('NULL')
996 if (uf.nin, uf.nout) == (2, 1):
997 thedict = chartotype2
998 elif (uf.nin, uf.nout) == (1, 1):
999 thedict = chartotype1
1000 else:
1001 raise ValueError("Could not handle {}[{}]".format(name, t.type))
1002
1003 astype = ''
1004 if not t.astype is None:
1005 astype = '_As_%s' % thedict[t.astype]
1006 astr = ('%s_functions[%d] = PyUFunc_%s%s;' %
1007 (name, k, thedict[t.type], astype))
1008 code2list.append(astr)
1009 if t.type == 'O':
1010 astr = ('%s_data[%d] = (void *) %s;' %
1011 (name, k, t.func_data))
1012 code2list.append(astr)
1013 datalist.append('(void *)NULL')
1014 elif t.type == 'P':
1015 datalist.append('(void *)"%s"' % t.func_data)
1016 else:
1017 astr = ('%s_data[%d] = (void *) %s;' %
1018 (name, k, t.func_data))
1019 code2list.append(astr)
1020 datalist.append('(void *)NULL')
1021 #datalist.append('(void *)%s' % t.func_data)
1022 sub += 1
1023
1024 for x in t.in_ + t.out:
1025 siglist.append('NPY_%s' % (english_upper(chartoname[x]),))
1026
1027 k += 1
1028
1029 funcnames = ', '.join(funclist)
1030 signames = ', '.join(siglist)
1031 datanames = ', '.join(datalist)
1032 code1list.append("static PyUFuncGenericFunction %s_functions[] = {%s};"
1033 % (name, funcnames))
1034 code1list.append("static void * %s_data[] = {%s};"
1035 % (name, datanames))
1036 code1list.append("static char %s_signatures[] = {%s};"
1037 % (name, signames))
1038 return "\n".join(code1list), "\n".join(code2list)
1039
1040 def make_ufuncs(funcdict):
1041 code3list = []
1042 names = sorted(funcdict.keys())
1043 for name in names:
1044 uf = funcdict[name]
1045 mlist = []
1046 docstring = textwrap.dedent(uf.docstring).strip()
1047 if sys.version_info[0] < 3:
1048 docstring = docstring.encode('string-escape')
1049 docstring = docstring.replace(r'"', r'\"')
1050 else:
1051 docstring = docstring.encode('unicode-escape').decode('ascii')
1052 docstring = docstring.replace(r'"', r'\"')
1053 # XXX: I don't understand why the following replace is not
1054 # necessary in the python 2 case.
1055 docstring = docstring.replace(r"'", r"\'")
1056 # Split the docstring because some compilers (like MS) do not like big
1057 # string literal in C code. We split at endlines because textwrap.wrap
1058 # do not play well with \n
1059 docstring = '\\n\"\"'.join(docstring.split(r"\n"))
1060 if uf.signature is None:
1061 sig = "NULL"
1062 else:
1063 sig = '"{}"'.format(uf.signature)
1064 fmt = textwrap.dedent("""\
1065 identity = {identity_expr};
1066 if ({has_identity} && identity == NULL) {{
1067 return -1;
1068 }}
1069 f = PyUFunc_FromFuncAndDataAndSignatureAndIdentity(
1070 {name}_functions, {name}_data, {name}_signatures, {nloops},
1071 {nin}, {nout}, {identity}, "{name}",
1072 "{doc}", 0, {sig}, identity
1073 );
1074 if ({has_identity}) {{
1075 Py_DECREF(identity);
1076 }}
1077 if (f == NULL) {{
1078 return -1;
1079 }}
1080 """)
1081 args = dict(
1082 name=name, nloops=len(uf.type_descriptions),
1083 nin=uf.nin, nout=uf.nout,
1084 has_identity='0' if uf.identity is None_ else '1',
1085 identity='PyUFunc_IdentityValue',
1086 identity_expr=uf.identity,
1087 doc=docstring,
1088 sig=sig,
1089 )
1090
1091 # Only PyUFunc_None means don't reorder - we pass this using the old
1092 # argument
1093 if uf.identity is None_:
1094 args['identity'] = 'PyUFunc_None'
1095 args['identity_expr'] = 'NULL'
1096
1097 mlist.append(fmt.format(**args))
1098 if uf.typereso is not None:
1099 mlist.append(
1100 r"((PyUFuncObject *)f)->type_resolver = &%s;" % uf.typereso)
1101 mlist.append(r"""PyDict_SetItemString(dictionary, "%s", f);""" % name)
1102 mlist.append(r"""Py_DECREF(f);""")
1103 code3list.append('\n'.join(mlist))
1104 return '\n'.join(code3list)
1105
1106
1107 def make_code(funcdict, filename):
1108 code1, code2 = make_arrays(funcdict)
1109 code3 = make_ufuncs(funcdict)
1110 code2 = indent(code2, 4)
1111 code3 = indent(code3, 4)
1112 code = textwrap.dedent(r"""
1113
1114 /** Warning this file is autogenerated!!!
1115
1116 Please make changes to the code generator program (%s)
1117 **/
1118 #include "cpuid.h"
1119 #include "ufunc_object.h"
1120 #include "ufunc_type_resolution.h"
1121 #include "loops.h"
1122 #include "matmul.h"
1123 %s
1124
1125 static int
1126 InitOperators(PyObject *dictionary) {
1127 PyObject *f, *identity;
1128
1129 %s
1130 %s
1131
1132 return 0;
1133 }
1134 """) % (filename, code1, code2, code3)
1135 return code
1136
1137
1138 if __name__ == "__main__":
1139 filename = __file__
1140 fid = open('__umath_generated.c', 'w')
1141 code = make_code(defdict, filename)
1142 fid.write(code)
1143 fid.close()
1144
[end of numpy/core/code_generators/generate_umath.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
10bf4c63c0a4c5bb06002158fea16af0d0d79085
|
UFunc benchmarks complaining lack of coverage of matmul and _arg
When running benchmarks, a warning appears (see https://github.com/numpy/numpy/pull/12666#issuecomment-451790069) that
```
[ 0.00%] ···· Missing ufunc '_arg'
Missing ufunc 'matmul'
```
This originates on
https://github.com/numpy/numpy/blob/master/benchmarks/benchmarks/bench_ufunc.py#L25
I think `matmul` may well be tested elsewhere, so it can be excluded here. `np.core.umath._arg.__doc__` suggests `_arg` is for testing purposes only; I am not sure why it is exported.
|
2019-01-07T09:40:19Z
|
<patch>
diff --git a/benchmarks/benchmarks/bench_ufunc.py b/benchmarks/benchmarks/bench_ufunc.py
--- a/benchmarks/benchmarks/bench_ufunc.py
+++ b/benchmarks/benchmarks/bench_ufunc.py
@@ -15,7 +15,7 @@
'isinf', 'isnan', 'isnat', 'lcm', 'ldexp', 'left_shift', 'less',
'less_equal', 'log', 'log10', 'log1p', 'log2', 'logaddexp',
'logaddexp2', 'logical_and', 'logical_not', 'logical_or',
- 'logical_xor', 'maximum', 'minimum', 'mod', 'modf', 'multiply',
+ 'logical_xor', 'matmul', 'maximum', 'minimum', 'mod', 'modf', 'multiply',
'negative', 'nextafter', 'not_equal', 'positive', 'power',
'rad2deg', 'radians', 'reciprocal', 'remainder', 'right_shift',
'rint', 'sign', 'signbit', 'sin', 'sinh', 'spacing', 'sqrt',
diff --git a/numpy/core/umath.py b/numpy/core/umath.py
--- a/numpy/core/umath.py
+++ b/numpy/core/umath.py
@@ -9,7 +9,7 @@
from . import _multiarray_umath
from numpy.core._multiarray_umath import *
from numpy.core._multiarray_umath import (
- _UFUNC_API, _add_newdoc_ufunc, _arg, _ones_like
+ _UFUNC_API, _add_newdoc_ufunc, _ones_like
)
__all__ = [
@@ -18,7 +18,7 @@
'FPE_DIVIDEBYZERO', 'FPE_INVALID', 'FPE_OVERFLOW', 'FPE_UNDERFLOW', 'NAN',
'NINF', 'NZERO', 'PINF', 'PZERO', 'SHIFT_DIVIDEBYZERO', 'SHIFT_INVALID',
'SHIFT_OVERFLOW', 'SHIFT_UNDERFLOW', 'UFUNC_BUFSIZE_DEFAULT',
- 'UFUNC_PYVALS_NAME', '_add_newdoc_ufunc', '_arg', 'absolute', 'add',
+ 'UFUNC_PYVALS_NAME', '_add_newdoc_ufunc', 'absolute', 'add',
'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctan2', 'arctanh',
'bitwise_and', 'bitwise_or', 'bitwise_xor', 'cbrt', 'ceil', 'conj',
'conjugate', 'copysign', 'cos', 'cosh', 'deg2rad', 'degrees', 'divide',
</patch>
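The patch above addresses both warnings: it adds 'matmul' to the benchmark's ufunc list and stops re-exporting the testing-only `_arg` from numpy.core.umath. A minimal reconstruction of the kind of coverage check that emits the warning (the real code lives in bench_ufunc.py and may differ in detail):

```python
import numpy as np

# Names the benchmark intends to cover (heavily abbreviated here).
covered = {'add', 'matmul', 'multiply', 'subtract'}

# Warn about any ufunc exported by numpy.core.umath that the list misses.
for name in dir(np.core.umath):
    obj = getattr(np.core.umath, name)
    if isinstance(obj, np.ufunc) and name not in covered:
        print("Missing ufunc %r" % name)
```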
|
[]
|
[]
| ||||
pandas-dev__pandas-3949
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow HDFStore to remain open when TableIterator is returned from read_hdf
Hi,
I'm using a TableIterator from the pandas.read_hdf function (with the keyword argument iterator=True), but I am unable to retrieve any data due to the error "ClosedNodeError: the node object is closed".
For instance:
```
pandas.DataFrame({'a':[1,2,3], 'b':[4,5,6]}).to_hdf("test.h5", "test", append=True)
it = pandas.read_hdf("test.h5","test",iterator=True)
iter(it).next()
Traceback (most recent call last):
File "<ipython-input-22-5634d86698ab>", line 1, in <module>
iter(it).next()
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 912, in __iter__
v = self.func(current, stop)
...
File "/usr/local/lib/python2.7/site-packages/tables/node.py", line 355, in _g_check_open
raise ClosedNodeError("the node object is closed")
ClosedNodeError: the node object is closed
```
I looked through the source code of pandas.io.pytables and found that in the
get_store function, store.close() is always run when read_hdf returns, even if
it returns a TableIterator. My assumption is that the store should remain open
in order for the TableIterator to work. Can you please let me know if this fix
is acceptable, or whether there is an easier way to do this?
Thanks,
Sean
</issue>
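Below is a minimal sketch of the behaviour the reporter is asking for, assuming the underlying HDFStore is kept open while an iterator is outstanding; the cleanup hook (closing on exhaustion versus an explicit close) is an assumption here, not a documented API:

```python
import pandas as pd

pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}).to_hdf("test.h5", "test", append=True)

# With the proposed change, the store backing the iterator stays open,
# so chunked reads no longer raise ClosedNodeError.
it = pd.read_hdf("test.h5", "test", iterator=True)
for chunk in it:
    print(chunk)

# Assumed cleanup: the store is released once iteration completes
# (or via an explicit close on the iterator/store).
```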
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 - `pytz <http://pytz.sourceforge.net/>`__
71 - Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75
76 - `numexpr <http://code.google.com/p/numexpr/>`__
77 - Needed to accelerate some expression evaluation operations
78 - Required by `PyTables`
79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__
80 - Needed to accelerate certain numerical operations
81
82 Optional dependencies
83 ~~~~~~~~~~~~~~~~~~~~~
84
85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher.
86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
89 - `statsmodels <http://statsmodels.sourceforge.net/>`__
90 - Needed for parts of :mod:`pandas.stats`
91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
92 - openpyxl version 1.6.1 or higher, for writing .xlsx files
93 - xlrd >= 0.9.0
94 - Needed for Excel I/O
95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
96 access.
97 - One of the following combinations of libraries is needed to use the
98 top-level :func:`~pandas.io.html.read_html` function:
99
100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
101 okay.)
102 - `BeautifulSoup4`_ and `lxml`_
103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_
104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>`
105 for reasons as to why you should probably **not** take this approach.
106
107 .. warning::
108
109 - if you install `BeautifulSoup4`_ you must install either
110 `lxml`_ or `html5lib`_ or both.
111 :func:`~pandas.io.html.read_html` will **not** work with *only*
112 `BeautifulSoup4`_ installed.
113 - You are highly encouraged to read :ref:`HTML reading gotchas
114 <html-gotchas>`. It explains issues surrounding the installation and
115 usage of the above three libraries
116 - You may need to install an older version of `BeautifulSoup4`_:
117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
118 32-bit Ubuntu/Debian
119 - Additionally, if you're using `Anaconda`_ you should definitely
120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
121
122 .. note::
123
124 - if you're on a system with ``apt-get`` you can do
125
126 .. code-block:: sh
127
128 sudo apt-get build-dep python-lxml
129
130 to get the necessary dependencies for installation of `lxml`_. This
131 will prevent further headaches down the line.
132
133
134 .. _html5lib: https://github.com/html5lib/html5lib-python
135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup
136 .. _lxml: http://lxml.de
137 .. _Anaconda: https://store.continuum.io/cshop/anaconda
138
139
140 Installation from sources
141 =========================
142
143 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
144 which can be installed from pypi::
145
146 pip install cython
147
148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
149
150 python setup.py install
151
152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
153
154 python setup.py develop
155
156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
157 (the optional ``-e`` option is for installing it in
158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
159
160 pip install -e .
161
162 On Windows, you will need to install MinGW and execute::
163
164 python setup.py build --compiler=mingw32
165 python setup.py install
166
167 See http://pandas.pydata.org/ for more information.
168
169 License
170 =======
171
172 BSD
173
174 Documentation
175 =============
176
177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
178
179 The Sphinx documentation should provide a good starting point for learning how
180 to use the library. Expect the docs to continue to expand as time goes on.
181
182 Background
183 ==========
184
185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
186 has been under active development since then.
187
188 Discussion and Development
189 ==========================
190
191 Since ``pandas`` development is related to a number of other scientific
192 Python projects, questions are welcome on the scipy-user mailing
193 list. Specialized discussions or design issues should take place on
194 the pystatsmodels mailing list / Google group, where
195 ``scikits.statsmodels`` and other libraries will also be discussed:
196
197 http://groups.google.com/group/pystatsmodels
198
199 .. _NumPy: http://numpy.scipy.org/
200
[end of README.rst]
[start of doc/sphinxext/ipython_directive.py]
1 # -*- coding: utf-8 -*-
2 """Sphinx directive to support embedded IPython code.
3
4 This directive allows pasting of entire interactive IPython sessions, prompts
5 and all, and their code will actually get re-executed at doc build time, with
6 all prompts renumbered sequentially. It also allows you to input code as a pure
7 python input by giving the argument python to the directive. The output looks
8 like an interactive ipython section.
9
10 To enable this directive, simply list it in your Sphinx ``conf.py`` file
11 (making sure the directory where you placed it is visible to sphinx, as is
12 needed for all Sphinx directives).
13
14 By default this directive assumes that your prompts are unchanged IPython ones,
15 but this can be customized. The configurable options that can be placed in
16 conf.py are
17
18 ipython_savefig_dir:
19 The directory in which to save the figures. This is relative to the
20 Sphinx source directory. The default is `html_static_path`.
21 ipython_rgxin:
22 The compiled regular expression to denote the start of IPython input
23 lines. The default is re.compile('In \[(\d+)\]:\s?(.*)\s*'). You
24 shouldn't need to change this.
25 ipython_rgxout:
26 The compiled regular expression to denote the start of IPython output
27 lines. The default is re.compile('Out\[(\d+)\]:\s?(.*)\s*'). You
28 shouldn't need to change this.
29 ipython_promptin:
30 The string to represent the IPython input prompt in the generated ReST.
31 The default is 'In [%d]:'. This expects that the line numbers are used
32 in the prompt.
33 ipython_promptout:
34
35 The string to represent the IPython prompt in the generated ReST. The
36 default is 'Out [%d]:'. This expects that the line numbers are used
37 in the prompt.
38
39 ToDo
40 ----
41
42 - Turn the ad-hoc test() function into a real test suite.
43 - Break up ipython-specific functionality from matplotlib stuff into better
44 separated code.
45
46 Authors
47 -------
48
49 - John D Hunter: original author.
50 - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
51 - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
52 - Skipper Seabold, refactoring, cleanups, pure python addition
53 """
54
55 #-----------------------------------------------------------------------------
56 # Imports
57 #-----------------------------------------------------------------------------
58
59 # Stdlib
60 import ast
61 import cStringIO
62 import os
63 import re
64 import sys
65 import tempfile
66
67 # Third-party
68 import matplotlib
69 from docutils.parsers.rst import directives
70 from docutils import nodes
71 from sphinx.util.compat import Directive
72
73 matplotlib.use('Agg')
74
75 # Our own
76 from IPython import Config, InteractiveShell
77 from IPython.core.profiledir import ProfileDir
78 from IPython.utils import io
79
80
81 #-----------------------------------------------------------------------------
82 # Globals
83 #-----------------------------------------------------------------------------
84 # for tokenizing blocks
85 COMMENT, INPUT, OUTPUT = range(3)
86
87 #-----------------------------------------------------------------------------
88 # Functions and class declarations
89 #-----------------------------------------------------------------------------
90 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
91 """
92 part is a string of ipython text, comprised of at most one
93 input, one ouput, comments, and blank lines. The block parser
94 parses the text into a list of::
95
96 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
97
98 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
99 data is, depending on the type of token::
100
101 COMMENT : the comment string
102
103 INPUT: the (DECORATOR, INPUT_LINE, REST) where
104 DECORATOR: the input decorator (or None)
105 INPUT_LINE: the input as string (possibly multi-line)
106 REST : any stdout generated by the input line (not OUTPUT)
107
108
109 OUTPUT: the output string, possibly multi-line
110 """
111
112 block = []
113 lines = part.split('\n')
114 N = len(lines)
115 i = 0
116 decorator = None
117 while 1:
118
119 if i==N:
120 # nothing left to parse -- the last line
121 break
122
123 line = lines[i]
124 i += 1
125 line_stripped = line.strip()
126 if line_stripped.startswith('#'):
127 block.append((COMMENT, line))
128 continue
129
130 if line_stripped.startswith('@'):
131 # we're assuming at most one decorator -- may need to
132 # rethink
133 decorator = line_stripped
134 continue
135
136 # does this look like an input line?
137 matchin = rgxin.match(line)
138 if matchin:
139 lineno, inputline = int(matchin.group(1)), matchin.group(2)
140
141 # the ....: continuation string
142 continuation = ' %s:'% ''.join(['.']*(len(str(lineno))+2))
143 Nc = len(continuation)
144 # input lines can continue on for more than one line, if
145 # we have a '\' line continuation char or a function call
146 # echo line 'print'. The input line can only be
147 # terminated by the end of the block or an output line, so
148 # we parse out the rest of the input line if it is
149 # multiline as well as any echo text
150
151 rest = []
152 while i<N:
153
154 # look ahead; if the next line is blank, or a comment, or
155 # an output line, we're done
156
157 nextline = lines[i]
158 matchout = rgxout.match(nextline)
159 #print "nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation))
160 if matchout or nextline.startswith('#'):
161 break
162 elif nextline.startswith(continuation):
163 inputline += '\n' + nextline[Nc:]
164 else:
165 rest.append(nextline)
166 i+= 1
167
168 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
169 continue
170
171 # if it looks like an output line grab all the text to the end
172 # of the block
173 matchout = rgxout.match(line)
174 if matchout:
175 lineno, output = int(matchout.group(1)), matchout.group(2)
176 if i<N-1:
177 output = '\n'.join([output] + lines[i:])
178
179 block.append((OUTPUT, output))
180 break
181
182 return block
183
184 class EmbeddedSphinxShell(object):
185 """An embedded IPython instance to run inside Sphinx"""
186
187 def __init__(self):
188
189 self.cout = cStringIO.StringIO()
190
191 # Create config object for IPython
192 config = Config()
193 config.Global.display_banner = False
194 config.Global.exec_lines = ['import numpy as np',
195 'from pylab import *'
196 ]
197 config.InteractiveShell.autocall = False
198 config.InteractiveShell.autoindent = False
199 config.InteractiveShell.colors = 'NoColor'
200 config.InteractiveShell.cache_size = 0
201
202 # create a profile so instance history isn't saved
203 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
204 profname = 'auto_profile_sphinx_build'
205 pdir = os.path.join(tmp_profile_dir,profname)
206 profile = ProfileDir.create_profile_dir(pdir)
207
208 # Create and initialize ipython, but don't start its mainloop
209 IP = InteractiveShell.instance(config=config, profile_dir=profile)
210
211 # io.stdout redirect must be done *after* instantiating InteractiveShell
212 io.stdout = self.cout
213 io.stderr = self.cout
214
215 # For debugging, so we can see normal output, use this:
216 #from IPython.utils.io import Tee
217 #io.stdout = Tee(self.cout, channel='stdout') # dbg
218 #io.stderr = Tee(self.cout, channel='stderr') # dbg
219
220 # Store a few parts of IPython we'll need.
221 self.IP = IP
222 self.user_ns = self.IP.user_ns
223 self.user_global_ns = self.IP.user_global_ns
224
225 self.input = ''
226 self.output = ''
227
228 self.is_verbatim = False
229 self.is_doctest = False
230 self.is_suppress = False
231
232 # on the first call to the savefig decorator, we'll import
233 # pyplot as plt so we can make a call to the plt.gcf().savefig
234 self._pyplot_imported = False
235
236 def clear_cout(self):
237 self.cout.seek(0)
238 self.cout.truncate(0)
239
240 def process_input_line(self, line, store_history=True):
241 """process the input, capturing stdout"""
242 #print "input='%s'"%self.input
243 stdout = sys.stdout
244 splitter = self.IP.input_splitter
245 try:
246 sys.stdout = self.cout
247 splitter.push(line)
248 more = splitter.push_accepts_more()
249 if not more:
250 source_raw = splitter.source_raw_reset()[1]
251 self.IP.run_cell(source_raw, store_history=store_history)
252 finally:
253 sys.stdout = stdout
254
255 def process_image(self, decorator):
256 """
257 # build out an image directive like
258 # .. image:: somefile.png
259 # :width 4in
260 #
261 # from an input like
262 # savefig somefile.png width=4in
263 """
264 savefig_dir = self.savefig_dir
265 source_dir = self.source_dir
266 saveargs = decorator.split(' ')
267 filename = saveargs[1]
268 # insert relative path to image file in source
269 outfile = os.path.relpath(os.path.join(savefig_dir,filename),
270 source_dir)
271
272 imagerows = ['.. image:: %s'%outfile]
273
274 for kwarg in saveargs[2:]:
275 arg, val = kwarg.split('=')
276 arg = arg.strip()
277 val = val.strip()
278 imagerows.append(' :%s: %s'%(arg, val))
279
280 image_file = os.path.basename(outfile) # only return file name
281 image_directive = '\n'.join(imagerows)
282 return image_file, image_directive
283
284
285 # Callbacks for each type of token
286 def process_input(self, data, input_prompt, lineno):
287 """Process data block for INPUT token."""
288 decorator, input, rest = data
289 image_file = None
290 image_directive = None
291 #print 'INPUT:', data # dbg
292 is_verbatim = decorator=='@verbatim' or self.is_verbatim
293 is_doctest = decorator=='@doctest' or self.is_doctest
294 is_suppress = decorator=='@suppress' or self.is_suppress
295 is_okexcept = decorator=='@okexcept' or self.is_okexcept
296 is_savefig = decorator is not None and \
297 decorator.startswith('@savefig')
298
299 input_lines = input.split('\n')
300
301 self.datacontent = data
302
303 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
304
305 if is_savefig:
306 image_file, image_directive = self.process_image(decorator)
307
308 ret = []
309 is_semicolon = False
310 store_history = True
311
312 for i, line in enumerate(input_lines):
313 if line.endswith(';'):
314 is_semicolon = True
315 if is_semicolon or is_suppress:
316 store_history = False
317
318 if i==0:
319 # process the first input line
320 if is_verbatim:
321 self.process_input_line('')
322 self.IP.execution_count += 1 # increment it anyway
323 else:
324 # only submit the line in non-verbatim mode
325 self.process_input_line(line, store_history=store_history)
326 formatted_line = '%s %s'%(input_prompt, line)
327 else:
328 # process a continuation line
329 if not is_verbatim:
330 self.process_input_line(line, store_history=store_history)
331
332 formatted_line = '%s%s'%(continuation, line)
333
334 if not is_suppress:
335 ret.append(formatted_line)
336
337 if not is_suppress:
338 if len(rest.strip()):
339 if is_verbatim:
340 # the "rest" is the standard output of the
341 # input, which needs to be added in
342 # verbatim mode
343 ret.append(rest)
344
345 self.cout.seek(0)
346 output = self.cout.read()
347 if not is_suppress and not is_semicolon:
348 ret.append(output.decode('utf-8'))
349
350 if not is_okexcept and "Traceback" in output:
351 sys.stdout.write(output)
352
353 self.cout.truncate(0)
354 return (ret, input_lines, output, is_doctest, image_file,
355 image_directive)
356 #print 'OUTPUT', output # dbg
357
358 def process_output(self, data, output_prompt,
359 input_lines, output, is_doctest, image_file):
360 """Process data block for OUTPUT token."""
361 if is_doctest:
362 submitted = data.strip()
363 found = output
364 if found is not None:
365 found = found.strip()
366
367 # XXX - fperez: in 0.11, 'output' never comes with the prompt
368 # in it, just the actual output text. So I think all this code
369 # can be nuked...
370
371 # the above comment does not appear to be accurate... (minrk)
372
373 ind = found.find(output_prompt)
374 if ind<0:
375 e='output prompt="%s" does not match out line=%s' % \
376 (output_prompt, found)
377 raise RuntimeError(e)
378 found = found[len(output_prompt):].strip()
379
380 if found!=submitted:
381 e = ('doctest failure for input_lines="%s" with '
382 'found_output="%s" and submitted output="%s"' %
383 (input_lines, found, submitted) )
384 raise RuntimeError(e)
385 #print 'doctest PASSED for input_lines="%s" with found_output="%s" and submitted output="%s"'%(input_lines, found, submitted)
386
387 def process_comment(self, data):
388 """Process data block for COMMENT token."""
389 if not self.is_suppress:
390 return [data]
391
392 def save_image(self, image_file):
393 """
394 Saves the image file to disk.
395 """
396 self.ensure_pyplot()
397 command = ('plt.gcf().savefig("%s", bbox_inches="tight", '
398 'dpi=100)' % image_file)
399 #print 'SAVEFIG', command # dbg
400 self.process_input_line('bookmark ipy_thisdir', store_history=False)
401 self.process_input_line('cd -b ipy_savedir', store_history=False)
402 self.process_input_line(command, store_history=False)
403 self.process_input_line('cd -b ipy_thisdir', store_history=False)
404 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
405 self.clear_cout()
406
407
408 def process_block(self, block):
409 """
410 process block from the block_parser and return a list of processed lines
411 """
412 ret = []
413 output = None
414 input_lines = None
415 lineno = self.IP.execution_count
416
417 input_prompt = self.promptin%lineno
418 output_prompt = self.promptout%lineno
419 image_file = None
420 image_directive = None
421
422 for token, data in block:
423 if token==COMMENT:
424 out_data = self.process_comment(data)
425 elif token==INPUT:
426 (out_data, input_lines, output, is_doctest, image_file,
427 image_directive) = \
428 self.process_input(data, input_prompt, lineno)
429 elif token==OUTPUT:
430 out_data = \
431 self.process_output(data, output_prompt,
432 input_lines, output, is_doctest,
433 image_file)
434 if out_data:
435 ret.extend(out_data)
436
437 # save the image files
438 if image_file is not None:
439 self.save_image(image_file)
440
441 return ret, image_directive
442
443 def ensure_pyplot(self):
444 if self._pyplot_imported:
445 return
446 self.process_input_line('import matplotlib.pyplot as plt',
447 store_history=False)
448
449 def process_pure_python(self, content):
450 """
451 content is a list of strings. it is unedited directive content
452
453 This runs it line by line in the InteractiveShell, prepends
454 prompts as needed capturing stderr and stdout, then returns
455 the content as a list as if it were ipython code
456 """
457 output = []
458 savefig = False # keep up with this to clear figure
459 multiline = False # to handle line continuation
460 fmtin = self.promptin
461
462 for lineno, line in enumerate(content):
463
464 line_stripped = line.strip()
465
466 if not len(line):
467 output.append(line) # preserve empty lines in output
468 continue
469
470 # handle decorators
471 if line_stripped.startswith('@'):
472 output.extend([line])
473 if 'savefig' in line:
474 savefig = True # and need to clear figure
475 continue
476
477 # handle comments
478 if line_stripped.startswith('#'):
479 output.extend([line])
480 continue
481
482 # deal with multilines
483 if not multiline: # not currently on a multiline
484
485 if line_stripped.endswith('\\'): # now we are
486 multiline = True
487 cont_len = len(str(lineno)) + 2
488 line_to_process = line.strip('\\')
489 output.extend([u"%s %s" % (fmtin%lineno,line)])
490 continue
491 else: # no we're still not
492 line_to_process = line.strip('\\')
493 else: # we are currently on a multiline
494 line_to_process += line.strip('\\')
495 if line_stripped.endswith('\\'): # and we still are
496 continuation = '.' * cont_len
497 output.extend([(u' %s: '+line_stripped) % continuation])
498 continue
499 # else go ahead and run this multiline then carry on
500
501 # get output of line
502 self.process_input_line(unicode(line_to_process.strip()),
503 store_history=False)
504 out_line = self.cout.getvalue()
505 self.clear_cout()
506
507 # clear current figure if plotted
508 if savefig:
509 self.ensure_pyplot()
510 self.process_input_line('plt.clf()', store_history=False)
511 self.clear_cout()
512 savefig = False
513
514 # line numbers don't actually matter, they're replaced later
515 if not multiline:
516 in_line = u"%s %s" % (fmtin%lineno,line)
517
518 output.extend([in_line])
519 else:
520 output.extend([(u' %s: '+line_stripped) % continuation])
521 multiline = False
522 if len(out_line):
523 output.extend([out_line])
524 output.extend([u''])
525
526 return output
527
528 def process_pure_python2(self, content):
529 """
530 content is a list of strings. it is unedited directive content
531
532 This runs it line by line in the InteractiveShell, prepends
533 prompts as needed capturing stderr and stdout, then returns
534 the content as a list as if it were ipython code
535 """
536 output = []
537 savefig = False # keep up with this to clear figure
538 multiline = False # to handle line continuation
539 multiline_start = None
540 fmtin = self.promptin
541
542 ct = 0
543
544 # nuke empty lines
545 content = [line for line in content if len(line.strip()) > 0]
546
547 for lineno, line in enumerate(content):
548
549 line_stripped = line.strip()
550 if not len(line):
551 output.append(line)
552 continue
553
554 # handle decorators
555 if line_stripped.startswith('@'):
556 output.extend([line])
557 if 'savefig' in line:
558 savefig = True # and need to clear figure
559 continue
560
561 # handle comments
562 if line_stripped.startswith('#'):
563 output.extend([line])
564 continue
565
566 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
567 if not multiline:
568 modified = u"%s %s" % (fmtin % ct, line_stripped)
569 output.append(modified)
570 ct += 1
571 try:
572 ast.parse(line_stripped)
573 output.append(u'')
574 except Exception:
575 multiline = True
576 multiline_start = lineno
577 else:
578 modified = u'%s %s' % (continuation, line)
579 output.append(modified)
580
581 try:
582 ast.parse('\n'.join(content[multiline_start:lineno+1]))
583
584 if (lineno < len(content) - 1 and
585 _count_indent(content[multiline_start]) <
586 _count_indent(content[lineno + 1])):
587
588 continue
589
590 output.extend([continuation, u''])
591 multiline = False
592 except Exception:
593 pass
594
595 continue
596
597 return output
598
599 def _count_indent(x):
600 import re
601 m = re.match('(\s+)(.*)', x)
602 if not m:
603 return 0
604 return len(m.group(1))
605
606 class IpythonDirective(Directive):
607
608 has_content = True
609 required_arguments = 0
610 optional_arguments = 4 # python, suppress, verbatim, doctest
611 final_argument_whitespace = True
612 option_spec = { 'python': directives.unchanged,
613 'suppress' : directives.flag,
614 'verbatim' : directives.flag,
615 'doctest' : directives.flag,
616 'okexcept' : directives.flag,
617 }
618
619 shell = EmbeddedSphinxShell()
620
621 def get_config_options(self):
622 # contains sphinx configuration variables
623 config = self.state.document.settings.env.config
624
625 # get config variables to set figure output directory
626 confdir = self.state.document.settings.env.app.confdir
627 savefig_dir = config.ipython_savefig_dir
628 source_dir = os.path.dirname(self.state.document.current_source)
629 if savefig_dir is None:
630 savefig_dir = config.html_static_path
631 if isinstance(savefig_dir, list):
632 savefig_dir = savefig_dir[0] # safe to assume only one path?
633 savefig_dir = os.path.join(confdir, savefig_dir)
634
635 # get regex and prompt stuff
636 rgxin = config.ipython_rgxin
637 rgxout = config.ipython_rgxout
638 promptin = config.ipython_promptin
639 promptout = config.ipython_promptout
640
641 return savefig_dir, source_dir, rgxin, rgxout, promptin, promptout
642
643 def setup(self):
644 # get config values
645 (savefig_dir, source_dir, rgxin,
646 rgxout, promptin, promptout) = self.get_config_options()
647
648 # and attach to shell so we don't have to pass them around
649 self.shell.rgxin = rgxin
650 self.shell.rgxout = rgxout
651 self.shell.promptin = promptin
652 self.shell.promptout = promptout
653 self.shell.savefig_dir = savefig_dir
654 self.shell.source_dir = source_dir
655
656 # setup bookmark for saving figures directory
657
658 self.shell.process_input_line('bookmark ipy_savedir %s'%savefig_dir,
659 store_history=False)
660 self.shell.clear_cout()
661
662 return rgxin, rgxout, promptin, promptout
663
664
665 def teardown(self):
666 # delete last bookmark
667 self.shell.process_input_line('bookmark -d ipy_savedir',
668 store_history=False)
669 self.shell.clear_cout()
670
671 def run(self):
672 debug = False
673
674 #TODO, any reason block_parser can't be a method of embeddable shell
675 # then we wouldn't have to carry these around
676 rgxin, rgxout, promptin, promptout = self.setup()
677
678 options = self.options
679 self.shell.is_suppress = 'suppress' in options
680 self.shell.is_doctest = 'doctest' in options
681 self.shell.is_verbatim = 'verbatim' in options
682 self.shell.is_okexcept = 'okexcept' in options
683 self.shell.current_content = self.content
684
685 # handle pure python code
686 if 'python' in self.arguments:
687 content = self.content
688 self.content = self.shell.process_pure_python2(content)
689
690 parts = '\n'.join(self.content).split('\n\n')
691
692 lines = ['.. code-block:: ipython','']
693 figures = []
694
695 for part in parts:
696
697 block = block_parser(part, rgxin, rgxout, promptin, promptout)
698
699 if len(block):
700 rows, figure = self.shell.process_block(block)
701 for row in rows:
702 # hack
703 # if row == '':
704 # continue
705
706 # lines.extend([' %s'% row.strip()])
707 lines.extend([' %s' % line
708 for line in re.split('[\n]+', row)])
709
710 if figure is not None:
711 figures.append(figure)
712
713 #text = '\n'.join(lines)
714 #figs = '\n'.join(figures)
715
716 for figure in figures:
717 lines.append('')
718 lines.extend(figure.split('\n'))
719 lines.append('')
720
721 #print lines
722 if len(lines)>2:
723 if debug:
724 print '\n'.join(lines)
725 else: #NOTE: this raises some errors, what's it for?
726 #print 'INSERTING %d lines'%len(lines)
727 self.state_machine.insert_input(
728 lines, self.state_machine.input_lines.source(0))
729
730 text = '\n'.join(lines)
731 txtnode = nodes.literal_block(text, text)
732 txtnode['language'] = 'ipython'
733 #imgnode = nodes.image(figs)
734
735 # cleanup
736 self.teardown()
737
738 return []#, imgnode]
739
740 # Enable as a proper Sphinx directive
741 def setup(app):
742 setup.app = app
743
744 app.add_directive('ipython', IpythonDirective)
745 app.add_config_value('ipython_savefig_dir', None, True)
746 app.add_config_value('ipython_rgxin',
747 re.compile('In \[(\d+)\]:\s?(.*)\s*'), True)
748 app.add_config_value('ipython_rgxout',
749 re.compile('Out\[(\d+)\]:\s?(.*)\s*'), True)
750 app.add_config_value('ipython_promptin', 'In [%d]:', True)
751 app.add_config_value('ipython_promptout', 'Out[%d]:', True)
752
753
754 # Simple smoke test, needs to be converted to a proper automatic test.
755 def test():
756
757 examples = [
758 r"""
759 In [9]: pwd
760 Out[9]: '/home/jdhunter/py4science/book'
761
762 In [10]: cd bookdata/
763 /home/jdhunter/py4science/book/bookdata
764
765 In [2]: from pylab import *
766
767 In [2]: ion()
768
769 In [3]: im = imread('stinkbug.png')
770
771 @savefig mystinkbug.png width=4in
772 In [4]: imshow(im)
773 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
774
775 """,
776 r"""
777
778 In [1]: x = 'hello world'
779
780 # string methods can be
781 # used to alter the string
782 @doctest
783 In [2]: x.upper()
784 Out[2]: 'HELLO WORLD'
785
786 @verbatim
787 In [3]: x.st<TAB>
788 x.startswith x.strip
789 """,
790 r"""
791
792 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
793 .....: &d=9&e=22&f=2009&g=d&a=1&b=8&c=2006&ignore=.csv'
794
795 In [131]: print url.split('&')
796 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
797
798 In [60]: import urllib
799
800 """,
801 r"""\
802
803 In [133]: import numpy.random
804
805 @suppress
806 In [134]: numpy.random.seed(2358)
807
808 @doctest
809 In [135]: numpy.random.rand(10,2)
810 Out[135]:
811 array([[ 0.64524308, 0.59943846],
812 [ 0.47102322, 0.8715456 ],
813 [ 0.29370834, 0.74776844],
814 [ 0.99539577, 0.1313423 ],
815 [ 0.16250302, 0.21103583],
816 [ 0.81626524, 0.1312433 ],
817 [ 0.67338089, 0.72302393],
818 [ 0.7566368 , 0.07033696],
819 [ 0.22591016, 0.77731835],
820 [ 0.0072729 , 0.34273127]])
821
822 """,
823
824 r"""
825 In [106]: print x
826 jdh
827
828 In [109]: for i in range(10):
830 .....: print i
831 .....:
832 .....:
833 0
834 1
835 2
836 3
837 4
838 5
839 6
840 7
841 8
842 9
843 """,
844
845 r"""
846
847 In [144]: from pylab import *
848
849 In [145]: ion()
850
851 # use a semicolon to suppress the output
852 @savefig test_hist.png width=4in
853 In [151]: hist(np.random.randn(10000), 100);
854
855
856 @savefig test_plot.png width=4in
857 In [151]: plot(np.random.randn(10000), 'o');
858 """,
859
860 r"""
861 # use a semicolon to suppress the output
862 In [151]: plt.clf()
863
864 @savefig plot_simple.png width=4in
865 In [151]: plot([1,2,3])
866
867 @savefig hist_simple.png width=4in
868 In [151]: hist(np.random.randn(10000), 100);
869
870 """,
871 r"""
872 # update the current fig
873 In [151]: ylabel('number')
874
875 In [152]: title('normal distribution')
876
877
878 @savefig hist_with_text.png
879 In [153]: grid(True)
880
881 """,
882 ]
883 # skip local-file depending first example:
884 examples = examples[1:]
885
886 #ipython_directive.DEBUG = True # dbg
887 #options = dict(suppress=True) # dbg
888 options = dict()
889 for example in examples:
890 content = example.split('\n')
891 ipython_directive('debug', arguments=None, options=options,
892 content=content, lineno=0,
893 content_offset=None, block_text=None,
894 state=None, state_machine=None,
895 )
896
897 # Run test suite as a script
898 if __name__=='__main__':
899 if not os.path.isdir('_static'):
900 os.mkdir('_static')
901 test()
902 print 'All OK? Check figures in _static/'
903
[end of doc/sphinxext/ipython_directive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
da14c6e857fd1fc7875cd552779a6063ec9e4ddc
|
allow HDFStore to remain open when TableIterator is returned from read_hdf
Hi,
When I use a TableIterator from the pandas.read_hdf function (with the keyword argument iterator=True), I am unable to retrieve any data due to the error "ClosedNodeError: the node object is closed".
For instance:
```
pandas.DataFrame({'a':[1,2,3], 'b':[4,5,6]}).to_hdf("test.h5", "test", append=True)
it = pandas.read_hdf("test.h5","test",iterator=True)
iter(it).next()
Traceback (most recent call last):
File "<ipython-input-22-5634d86698ab>", line 1, in <module>
iter(it).next()
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 912, in __iter__
v = self.func(current, stop)
...
File "/usr/local/lib/python2.7/site-packages/tables/node.py", line 355, in _g_check_open
raise ClosedNodeError("the node object is closed")
ClosedNodeError: the node object is closed
```
I looked through the source code of pandas.io.pytables and found that in the
get_store function, store.close() is always run when read_hdf returns, even if
it returns a TableIterator. My assumption is that the store should remain open in
order for the TableIterator to work. Can you please let me know if this fix is
acceptable, or is there an easier way to do this?
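For reference, the pattern I'm describing is roughly this (a simplified sketch of the current code path, not the verbatim pandas source):
```python
from pandas.io.pytables import get_store

def read_hdf_sketch(path_or_buf, key, **kwargs):
    # simplified: the real read_hdf dispatches on the type of path_or_buf
    with get_store(path_or_buf) as store:    # store.close() runs when the block exits...
        return store.select(key, **kwargs)   # ...even when select() returns a TableIterator
```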
Thanks,
Sean
|
it's only opened/closed automatically when you use `read_hdf`; otherwise use the store as normal. The example there uses the more verbose syntax: http://pandas.pydata.org/pandas-docs/dev/io.html#iterator
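The verbose form referred to there looks roughly like this (a sketch reusing the file/key names from the report above):
```python
import pandas as pd

store = pd.HDFStore("test.h5")
try:
    for chunk in store.select("test", chunksize=2):   # iterate over the open store
        print(chunk)
finally:
    store.close()   # the caller opened the store, so the caller closes it
```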
@seanyeh anything further? otherwise pls close
I do find it odd that it allows an option that doesn't work. Thanks anyway
where does the option not work? it works exactly how it's supposed to in `read_hdf`, which provides a context to open/close the file. When used with an already open store, it doesn't close it.
How else would you expect it to work?
@jreback what's the point of passing iterator=True if you can't iterate over the result?
It seems intuitive that this should work, right?
``` python
for x in pandas.read_hdf("test.h5", "test", iterator=True):
print x
```
but according to the example above, that would raise the closed node error.
Maybe it would make more sense to have TableIterator handle the cleanup if `read_hdf` is passed a path/string instead of an open store? (so, in `__iter__`, after control passes out of the while loop, do the cleanup that is necessary to close it up).
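Something along these lines, i.e. the iterator only closes a handle that `read_hdf` opened for it (a rough sketch of the idea, not the eventual implementation):
```python
class TableIterator(object):
    def __init__(self, store, func, nrows, chunksize, auto_close=False):
        self.store = store            # the HDFStore the chunks come from
        self.func = func              # callable returning the rows in [start, stop)
        self.nrows = nrows
        self.chunksize = chunksize
        self.auto_close = auto_close  # True only when read_hdf opened the store itself

    def __iter__(self):
        current = 0
        while current < self.nrows:
            stop = min(current + self.chunksize, self.nrows)
            yield self.func(current, stop)
            current = stop
        if self.auto_close:           # cleanup after the while loop, as suggested above
            self.store.close()
```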
@jtratner @seanyeh
originally you always had to open/close stores yourself; `read_hdf` does this for you. I suppose it could be enabled with iterator/chunksize support like above (in which case the context manager knows to close it, but only after iteration is done).
I guess if the context manager is passed an open handle then it shouldn't close it...
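i.e. roughly this dispatch inside `read_hdf` (a sketch only; `auto_close` is the proposed flag here, not an existing keyword at this point):
```python
from pandas.io.pytables import HDFStore

def read_hdf_sketch(path_or_buf, key, **kwargs):
    if isinstance(path_or_buf, basestring):
        store = HDFStore(path_or_buf)                        # read_hdf opened it
        return store.select(key, auto_close=True, **kwargs)  # iterator closes it when done
    # an already-open store was passed in: the caller keeps control of open/close
    return path_or_buf.select(key, **kwargs)
```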
+1 for enabling read_hdf(..., iterator=True).
Arguably, since pytables automatically closes the h5 file via atexit.register(close_open_files), we don't need to explicitly close it.
@adgaudio I disagree.
These files need explicit open/close, or use them with a context manager.
This is straightforward to address and will be fixed soon.
Relying on the system to close files is not good from a safety perspective nor good programming practice.
Yea, true. Thanks everyone - I'll look forward to using the patch :)
Agree with @jreback. atexit is fragile and there's no reason not to handle
this explicitly.
On a separate note - is it problematic to pass (or set) a reference to the
store to the TableIterator? It makes it much cleaner to handle things that way...
|
2013-06-18T22:05:48Z
|
<patch>
diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -101,6 +101,7 @@ pandas 0.11.1
to select with a Storer; these are invalid parameters at this time
- can now specify an ``encoding`` option to ``append/put``
to enable alternate encodings (GH3750_)
+ - enable support for ``iterator/chunksize`` with ``read_hdf``
- The repr() for (Multi)Index now obeys display.max_seq_items rather
then numpy threshold print options. (GH3426_, GH3466_)
- Added mangle_dupe_cols option to read_table/csv, allowing users
diff --git a/doc/source/io.rst b/doc/source/io.rst
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1925,6 +1925,18 @@ The default is 50,000 rows returned in a chunk.
for df in store.select('df', chunksize=3):
print df
+.. note::
+
+ .. versionadded:: 0.11.1
+
+ You can also use the iterator with ``read_hdf`` which will open, then
+ automatically close the store when finished iterating.
+
+ .. code-block:: python
+
+ for df in read_hdf('store.h5','df', chunksize=3):
+ print df
+
Note, that the chunksize keyword applies to the **returned** rows. So if you
are doing a query, then that set will be subdivided and returned in the
iterator. Keep in mind that if you do not pass a ``where`` selection criteria
diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt
--- a/doc/source/v0.11.1.txt
+++ b/doc/source/v0.11.1.txt
@@ -6,6 +6,11 @@ v0.11.1 (June ??, 2013)
This is a minor release from 0.11.0 and includes several new features and
enhancements along with a large number of bug fixes.
+Highlights include a consistent I/O API naming scheme, routines to read html,
+write multi-indexes to csv files, read & write STATA data files, read & write JSON format
+files, Python 3 support for ``HDFStore``, filtering of groupby expressions via ``filter``, and a
+revamped ``replace`` routine that accepts regular expressions.
+
API changes
~~~~~~~~~~~
@@ -148,8 +153,8 @@ API changes
``bs4`` + ``html5lib`` when lxml fails to parse. a list of parsers to try
until success is also valid
-Enhancements
-~~~~~~~~~~~~
+I/O Enhancements
+~~~~~~~~~~~~~~~~
- ``pd.read_html()`` can now parse HTML strings, files or urls and return
DataFrames, courtesy of @cpcloud. (GH3477_, GH3605_, GH3606_, GH3616_).
@@ -184,28 +189,6 @@ Enhancements
accessable via ``read_json`` top-level function for reading,
and ``to_json`` DataFrame method for writing, :ref:`See the docs<io.json>`
- - ``DataFrame.replace()`` now allows regular expressions on contained
- ``Series`` with object dtype. See the examples section in the regular docs
- :ref:`Replacing via String Expression <missing_data.replace_expression>`
-
- For example you can do
-
- .. ipython :: python
-
- df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
- df.replace(regex=r'\s*\.\s*', value=np.nan)
-
- to replace all occurrences of the string ``'.'`` with zero or more
- instances of surrounding whitespace with ``NaN``.
-
- Regular string replacement still works as expected. For example, you can do
-
- .. ipython :: python
-
- df.replace('.', np.nan)
-
- to replace all occurrences of the string ``'.'`` with ``NaN``.
-
- Multi-index column support for reading and writing csv format files
- The ``header`` option in ``read_csv`` now accepts a
@@ -225,19 +208,62 @@ Enhancements
with ``df.to_csv(..., index=False``), then any ``names`` on the columns index will
be *lost*.
+ .. ipython:: python
+
+ from pandas.util.testing import makeCustomDataframe as mkdf
+ df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
+ df.to_csv('mi.csv',tupleize_cols=False)
+ print open('mi.csv').read()
+ pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
+
+ .. ipython:: python
+ :suppress:
+
+ import os
+ os.remove('mi.csv')
+
+ - Support for ``HDFStore`` (via ``PyTables 3.0.0``) on Python3
+
+ - Iterator support via ``read_hdf`` that automatically opens and closes the
+ store when iteration is finished. This is only for *tables*
+
.. ipython:: python
- from pandas.util.testing import makeCustomDataframe as mkdf
- df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
- df.to_csv('mi.csv',tupleize_cols=False)
- print open('mi.csv').read()
- pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
+ path = 'store_iterator.h5'
+ DataFrame(randn(10,2)).to_hdf(path,'df',table=True)
+ for df in read_hdf(path,'df', chunksize=3):
+ print df
.. ipython:: python
- :suppress:
+ :suppress:
- import os
- os.remove('mi.csv')
+ import os
+ os.remove(path)
+
+Other Enhancements
+~~~~~~~~~~~~~~~~~~
+
+ - ``DataFrame.replace()`` now allows regular expressions on contained
+ ``Series`` with object dtype. See the examples section in the regular docs
+ :ref:`Replacing via String Expression <missing_data.replace_expression>`
+
+ For example you can do
+
+ .. ipython :: python
+
+ df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
+ df.replace(regex=r'\s*\.\s*', value=np.nan)
+
+ to replace all occurrences of the string ``'.'`` with zero or more
+ instances of surrounding whitespace with ``NaN``.
+
+ Regular string replacement still works as expected. For example, you can do
+
+ .. ipython :: python
+
+ df.replace('.', np.nan)
+
+ to replace all occurrences of the string ``'.'`` with ``NaN``.
- ``pd.melt()`` now accepts the optional parameters ``var_name`` and ``value_name``
to specify custom column names of the returned DataFrame.
@@ -261,8 +287,6 @@ Enhancements
pd.get_option('a.b')
pd.get_option('b.c')
- - Support for ``HDFStore`` (via ``PyTables 3.0.0``) on Python3
-
- The ``filter`` method for group objects returns a subset of the original
object. Suppose we want to take only elements that belong to groups with a
group sum greater than 2.
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -196,12 +196,27 @@ def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None, app
def read_hdf(path_or_buf, key, **kwargs):
""" read from the store, closeit if we opened it """
- f = lambda store: store.select(key, **kwargs)
+ f = lambda store, auto_close: store.select(key, auto_close=auto_close, **kwargs)
if isinstance(path_or_buf, basestring):
- with get_store(path_or_buf) as store:
- return f(store)
- f(path_or_buf)
+
+ # can't auto open/close if we are using an iterator
+ # so delegate to the iterator
+ store = HDFStore(path_or_buf)
+ try:
+ return f(store, True)
+ except:
+
+ # if there is an error, close the store
+ try:
+ store.close()
+ except:
+ pass
+
+ raise
+
+ # a passed store; user controls open/close
+ f(path_or_buf, False)
class HDFStore(object):
"""
@@ -405,7 +420,7 @@ def get(self, key):
raise KeyError('No object named %s in the file' % key)
return self._read_group(group)
- def select(self, key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, **kwargs):
+ def select(self, key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, auto_close=False, **kwargs):
"""
Retrieve pandas object stored in file, optionally based on where
criteria
@@ -419,6 +434,7 @@ def select(self, key, where=None, start=None, stop=None, columns=None, iterator=
columns : a list of columns that if not None, will limit the return columns
iterator : boolean, return an iterator, default False
chunksize : nrows to include in iteration, return an iterator
+ auto_close : boolean, should automatically close the store when finished, default is False
"""
group = self.get_node(key)
@@ -434,9 +450,11 @@ def func(_start, _stop):
return s.read(where=where, start=_start, stop=_stop, columns=columns, **kwargs)
if iterator or chunksize is not None:
- return TableIterator(func, nrows=s.nrows, start=start, stop=stop, chunksize=chunksize)
+ if not s.is_table:
+ raise TypeError("can only use an iterator or chunksize on a table")
+ return TableIterator(self, func, nrows=s.nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
- return TableIterator(func, nrows=s.nrows, start=start, stop=stop).get_values()
+ return TableIterator(self, func, nrows=s.nrows, start=start, stop=stop, auto_close=auto_close).get_values()
def select_as_coordinates(self, key, where=None, start=None, stop=None, **kwargs):
"""
@@ -473,7 +491,7 @@ def select_column(self, key, column, **kwargs):
"""
return self.get_storer(key).read_column(column = column, **kwargs)
- def select_as_multiple(self, keys, where=None, selector=None, columns=None, start=None, stop=None, iterator=False, chunksize=None, **kwargs):
+ def select_as_multiple(self, keys, where=None, selector=None, columns=None, start=None, stop=None, iterator=False, chunksize=None, auto_close=False, **kwargs):
""" Retrieve pandas objects from multiple tables
Parameters
@@ -541,9 +559,9 @@ def func(_start, _stop):
return concat(objs, axis=axis, verify_integrity=True)
if iterator or chunksize is not None:
- return TableIterator(func, nrows=nrows, start=start, stop=stop, chunksize=chunksize)
+ return TableIterator(self, func, nrows=nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
- return TableIterator(func, nrows=nrows, start=start, stop=stop).get_values()
+ return TableIterator(self, func, nrows=nrows, start=start, stop=stop, auto_close=auto_close).get_values()
def put(self, key, value, table=None, append=False, **kwargs):
@@ -916,16 +934,20 @@ class TableIterator(object):
Parameters
----------
- func : the function to get results
+ store : the reference store
+ func : the function to get results
nrows : the rows to iterate on
start : the passed start value (default is None)
- stop : the passed stop value (default is None)
+ stop : the passed stop value (default is None)
chunksize : the passed chunking valeu (default is 50000)
+ auto_close : boolean, automatically close the store at the end of iteration,
+ default is False
kwargs : the passed kwargs
"""
- def __init__(self, func, nrows, start=None, stop=None, chunksize=None):
- self.func = func
+ def __init__(self, store, func, nrows, start=None, stop=None, chunksize=None, auto_close=False):
+ self.store = store
+ self.func = func
self.nrows = nrows or 0
self.start = start or 0
@@ -937,6 +959,7 @@ def __init__(self, func, nrows, start=None, stop=None, chunksize=None):
chunksize = 100000
self.chunksize = chunksize
+ self.auto_close = auto_close
def __iter__(self):
current = self.start
@@ -950,9 +973,16 @@ def __iter__(self):
yield v
+ self.close()
+
+ def close(self):
+ if self.auto_close:
+ self.store.close()
+
def get_values(self):
- return self.func(self.start, self.stop)
-
+ results = self.func(self.start, self.stop)
+ self.close()
+ return results
class IndexCol(object):
""" an index column description class
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-18481
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: rolling.corr() produces wrong result with equal values
#### Code Sample, a copy-pastable example if possible
```python
s = pd.Series([1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,7,0,0,0])
pd.rolling_corr(s,s,6)
```
#### Problem description
rolling_corr is producing the wrong result:
```python
pd.rolling_corr(s,s,6)
```
```
0     NaN
1     NaN
2     NaN
3     NaN
4     NaN
5     1.0
6     1.0
7     1.0
8     1.0
9     0.0
10    0.0
11    0.0
12    0.0
13    0.0
14    0.0
15    0.0
16    0.0
17    0.0
18    0.0
19    0.0
20    0.0
21    0.0
22    0.0
23    0.0
24    0.0
25    0.0
26    1.0
27    1.0
28    1.0
29    1.0
30    1.0
31    1.0
32    1.0
33    1.0
```
This should produce NaNs instead of 0s for windows containing static (constant) data.
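For comparison, plain NumPy also treats the correlation of a constant window as undefined (just an illustration of the expected NaN, not pandas internals):
```python
import numpy as np

window = np.zeros(6)                       # a window of identical values
print(np.corrcoef(window, window)[0, 1])   # nan (0/0, since the variance is 0)
```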
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.1.35-pv-ts2
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: 3.0.7
pip: 9.0.1
setuptools: 36.4.0
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.19.0
xarray: None
IPython: 5.3.0
sphinx: 1.6.2
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 1.5.1
openpyxl: 2.4.8
xlrd: 1.0.0
xlwt: None
xlsxwriter: 0.9.8
lxml: None
bs4: 4.5.3
html5lib: 0.999
sqlalchemy: 1.1.10
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="https://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="https://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 https://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](https://pypi.python.org/pypi/pandas) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.7.0 or higher
164 - [python-dateutil](https://labix.org/python-dateutil): 1.5 or higher
165 - [pytz](https://pythonhosted.org/pytz)
166 - Needed for time zone support with ``pandas.date_range``
167
168 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
169 for recommended and optional dependencies.
170
171 ## Installation from sources
172 To install pandas from source you need Cython in addition to the normal
173 dependencies above. Cython can be installed from pypi:
174
175 ```sh
176 pip install cython
177 ```
178
179 In the `pandas` directory (same one where you found this file after
180 cloning the git repo), execute:
181
182 ```sh
183 python setup.py install
184 ```
185
186 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
187
188 ```sh
189 python setup.py develop
190 ```
191
192 Alternatively, you can use `pip` if you want all the dependencies pulled
193 in automatically (the `-e` option is for installing it in [development
194 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
195
196 ```sh
197 pip install -e .
198 ```
199
200 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
201
202 ## License
203 [BSD 3](LICENSE)
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
229
230 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
231
232 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
233
[end of README.md]
[start of asv_bench/benchmarks/rolling.py]
1 from .pandas_vb_common import *
2 import pandas as pd
3 import numpy as np
4
5
6 class DataframeRolling(object):
7 goal_time = 0.2
8
9 def setup(self):
10 self.N = 100000
11 self.Ns = 10000
12 self.df = pd.DataFrame({'a': np.random.random(self.N)})
13 self.dfs = pd.DataFrame({'a': np.random.random(self.Ns)})
14 self.wins = 10
15 self.winl = 1000
16
17 def time_rolling_quantile_0(self):
18 (self.df.rolling(self.wins).quantile(0.0))
19
20 def time_rolling_quantile_1(self):
21 (self.df.rolling(self.wins).quantile(1.0))
22
23 def time_rolling_quantile_median(self):
24 (self.df.rolling(self.wins).quantile(0.5))
25
26 def time_rolling_median(self):
27 (self.df.rolling(self.wins).median())
28
29 def time_rolling_mean(self):
30 (self.df.rolling(self.wins).mean())
31
32 def time_rolling_max(self):
33 (self.df.rolling(self.wins).max())
34
35 def time_rolling_min(self):
36 (self.df.rolling(self.wins).min())
37
38 def time_rolling_std(self):
39 (self.df.rolling(self.wins).std())
40
41 def time_rolling_count(self):
42 (self.df.rolling(self.wins).count())
43
44 def time_rolling_skew(self):
45 (self.df.rolling(self.wins).skew())
46
47 def time_rolling_kurt(self):
48 (self.df.rolling(self.wins).kurt())
49
50 def time_rolling_sum(self):
51 (self.df.rolling(self.wins).sum())
52
53 def time_rolling_corr(self):
54 (self.dfs.rolling(self.wins).corr())
55
56 def time_rolling_cov(self):
57 (self.dfs.rolling(self.wins).cov())
58
59 def time_rolling_quantile_0_l(self):
60 (self.df.rolling(self.winl).quantile(0.0))
61
62 def time_rolling_quantile_1_l(self):
63 (self.df.rolling(self.winl).quantile(1.0))
64
65 def time_rolling_quantile_median_l(self):
66 (self.df.rolling(self.winl).quantile(0.5))
67
68 def time_rolling_median_l(self):
69 (self.df.rolling(self.winl).median())
70
71 def time_rolling_mean_l(self):
72 (self.df.rolling(self.winl).mean())
73
74 def time_rolling_max_l(self):
75 (self.df.rolling(self.winl).max())
76
77 def time_rolling_min_l(self):
78 (self.df.rolling(self.winl).min())
79
80 def time_rolling_std_l(self):
81 (self.df.rolling(self.wins).std())
82
83 def time_rolling_count_l(self):
84 (self.df.rolling(self.wins).count())
85
86 def time_rolling_skew_l(self):
87 (self.df.rolling(self.wins).skew())
88
89 def time_rolling_kurt_l(self):
90 (self.df.rolling(self.wins).kurt())
91
92 def time_rolling_sum_l(self):
93 (self.df.rolling(self.wins).sum())
94
95
96 class SeriesRolling(object):
97 goal_time = 0.2
98
99 def setup(self):
100 self.N = 100000
101 self.Ns = 10000
102 self.df = pd.DataFrame({'a': np.random.random(self.N)})
103 self.dfs = pd.DataFrame({'a': np.random.random(self.Ns)})
104 self.sr = self.df.a
105 self.srs = self.dfs.a
106 self.wins = 10
107 self.winl = 1000
108
109 def time_rolling_quantile_0(self):
110 (self.sr.rolling(self.wins).quantile(0.0))
111
112 def time_rolling_quantile_1(self):
113 (self.sr.rolling(self.wins).quantile(1.0))
114
115 def time_rolling_quantile_median(self):
116 (self.sr.rolling(self.wins).quantile(0.5))
117
118 def time_rolling_median(self):
119 (self.sr.rolling(self.wins).median())
120
121 def time_rolling_mean(self):
122 (self.sr.rolling(self.wins).mean())
123
124 def time_rolling_max(self):
125 (self.sr.rolling(self.wins).max())
126
127 def time_rolling_min(self):
128 (self.sr.rolling(self.wins).min())
129
130 def time_rolling_std(self):
131 (self.sr.rolling(self.wins).std())
132
133 def time_rolling_count(self):
134 (self.sr.rolling(self.wins).count())
135
136 def time_rolling_skew(self):
137 (self.sr.rolling(self.wins).skew())
138
139 def time_rolling_kurt(self):
140 (self.sr.rolling(self.wins).kurt())
141
142 def time_rolling_sum(self):
143 (self.sr.rolling(self.wins).sum())
144
145 def time_rolling_corr(self):
146 (self.srs.rolling(self.wins).corr())
147
148 def time_rolling_cov(self):
149 (self.srs.rolling(self.wins).cov())
150
151 def time_rolling_quantile_0_l(self):
152 (self.sr.rolling(self.winl).quantile(0.0))
153
154 def time_rolling_quantile_1_l(self):
155 (self.sr.rolling(self.winl).quantile(1.0))
156
157 def time_rolling_quantile_median_l(self):
158 (self.sr.rolling(self.winl).quantile(0.5))
159
160 def time_rolling_median_l(self):
161 (self.sr.rolling(self.winl).median())
162
163 def time_rolling_mean_l(self):
164 (self.sr.rolling(self.winl).mean())
165
166 def time_rolling_max_l(self):
167 (self.sr.rolling(self.winl).max())
168
169 def time_rolling_min_l(self):
170 (self.sr.rolling(self.winl).min())
171
172 def time_rolling_std_l(self):
173 (self.sr.rolling(self.wins).std())
174
175 def time_rolling_count_l(self):
176 (self.sr.rolling(self.wins).count())
177
178 def time_rolling_skew_l(self):
179 (self.sr.rolling(self.wins).skew())
180
181 def time_rolling_kurt_l(self):
182 (self.sr.rolling(self.wins).kurt())
183
184 def time_rolling_sum_l(self):
185 (self.sr.rolling(self.wins).sum())
186
[end of asv_bench/benchmarks/rolling.py]
[start of pandas/plotting/_misc.py]
1 # being a bit too dynamic
2 # pylint: disable=E1101
3 from __future__ import division
4
5 import numpy as np
6
7 from pandas.util._decorators import deprecate_kwarg
8 from pandas.core.dtypes.missing import notna
9 from pandas.compat import range, lrange, lmap, zip
10 from pandas.io.formats.printing import pprint_thing
11
12
13 from pandas.plotting._style import _get_standard_colors
14 from pandas.plotting._tools import _subplots, _set_ticks_props
15
16
17 def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
18 diagonal='hist', marker='.', density_kwds=None,
19 hist_kwds=None, range_padding=0.05, **kwds):
20 """
21 Draw a matrix of scatter plots.
22
23 Parameters
24 ----------
25 frame : DataFrame
26 alpha : float, optional
27 amount of transparency applied
28 figsize : (float,float), optional
29 a tuple (width, height) in inches
30 ax : Matplotlib axis object, optional
31 grid : bool, optional
32 setting this to True will show the grid
33 diagonal : {'hist', 'kde'}
34 pick between 'kde' and 'hist' for
35 either Kernel Density Estimation or Histogram
36 plot in the diagonal
37 marker : str, optional
38 Matplotlib marker type, default '.'
39 hist_kwds : other plotting keyword arguments
40 To be passed to hist function
41 density_kwds : other plotting keyword arguments
42 To be passed to kernel density estimate plot
43 range_padding : float, optional
44 relative extension of axis range in x and y
45 with respect to (x_max - x_min) or (y_max - y_min),
46 default 0.05
47 kwds : other plotting keyword arguments
48 To be passed to scatter function
49
50 Examples
51 --------
52 >>> df = DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
53 >>> scatter_matrix(df, alpha=0.2)
54 """
55
56 df = frame._get_numeric_data()
57 n = df.columns.size
58 naxes = n * n
59 fig, axes = _subplots(naxes=naxes, figsize=figsize, ax=ax,
60 squeeze=False)
61
62 # no gaps between subplots
63 fig.subplots_adjust(wspace=0, hspace=0)
64
65 mask = notna(df)
66
67 marker = _get_marker_compat(marker)
68
69 hist_kwds = hist_kwds or {}
70 density_kwds = density_kwds or {}
71
72 # GH 14855
73 kwds.setdefault('edgecolors', 'none')
74
75 boundaries_list = []
76 for a in df.columns:
77 values = df[a].values[mask[a].values]
78 rmin_, rmax_ = np.min(values), np.max(values)
79 rdelta_ext = (rmax_ - rmin_) * range_padding / 2.
80 boundaries_list.append((rmin_ - rdelta_ext, rmax_ + rdelta_ext))
81
82 for i, a in zip(lrange(n), df.columns):
83 for j, b in zip(lrange(n), df.columns):
84 ax = axes[i, j]
85
86 if i == j:
87 values = df[a].values[mask[a].values]
88
89 # Deal with the diagonal by drawing a histogram there.
90 if diagonal == 'hist':
91 ax.hist(values, **hist_kwds)
92
93 elif diagonal in ('kde', 'density'):
94 from scipy.stats import gaussian_kde
95 y = values
96 gkde = gaussian_kde(y)
97 ind = np.linspace(y.min(), y.max(), 1000)
98 ax.plot(ind, gkde.evaluate(ind), **density_kwds)
99
100 ax.set_xlim(boundaries_list[i])
101
102 else:
103 common = (mask[a] & mask[b]).values
104
105 ax.scatter(df[b][common], df[a][common],
106 marker=marker, alpha=alpha, **kwds)
107
108 ax.set_xlim(boundaries_list[j])
109 ax.set_ylim(boundaries_list[i])
110
111 ax.set_xlabel(b)
112 ax.set_ylabel(a)
113
114 if j != 0:
115 ax.yaxis.set_visible(False)
116 if i != n - 1:
117 ax.xaxis.set_visible(False)
118
119 if len(df.columns) > 1:
120 lim1 = boundaries_list[0]
121 locs = axes[0][1].yaxis.get_majorticklocs()
122 locs = locs[(lim1[0] <= locs) & (locs <= lim1[1])]
123 adj = (locs - lim1[0]) / (lim1[1] - lim1[0])
124
125 lim0 = axes[0][0].get_ylim()
126 adj = adj * (lim0[1] - lim0[0]) + lim0[0]
127 axes[0][0].yaxis.set_ticks(adj)
128
129 if np.all(locs == locs.astype(int)):
130 # if all ticks are int
131 locs = locs.astype(int)
132 axes[0][0].yaxis.set_ticklabels(locs)
133
134 _set_ticks_props(axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
135
136 return axes
137
138
139 def _get_marker_compat(marker):
140 import matplotlib.lines as mlines
141 import matplotlib as mpl
142 if mpl.__version__ < '1.1.0' and marker == '.':
143 return 'o'
144 if marker not in mlines.lineMarkers:
145 return 'o'
146 return marker
147
148
149 def radviz(frame, class_column, ax=None, color=None, colormap=None, **kwds):
150 """RadViz - a multivariate data visualization algorithm
151
152 Parameters:
153 -----------
154 frame: DataFrame
155 class_column: str
156 Column name containing class names
157 ax: Matplotlib axis object, optional
158 color: list or tuple, optional
159 Colors to use for the different classes
160 colormap : str or matplotlib colormap object, default None
161 Colormap to select colors from. If string, load colormap with that name
162 from matplotlib.
163 kwds: keywords
164 Options to pass to matplotlib scatter plotting method
165
166 Returns:
167 --------
168 ax: Matplotlib axis object
169 """
170 import matplotlib.pyplot as plt
171 import matplotlib.patches as patches
172
173 def normalize(series):
174 a = min(series)
175 b = max(series)
176 return (series - a) / (b - a)
177
178 n = len(frame)
179 classes = frame[class_column].drop_duplicates()
180 class_col = frame[class_column]
181 df = frame.drop(class_column, axis=1).apply(normalize)
182
183 if ax is None:
184 ax = plt.gca(xlim=[-1, 1], ylim=[-1, 1])
185
186 to_plot = {}
187 colors = _get_standard_colors(num_colors=len(classes), colormap=colormap,
188 color_type='random', color=color)
189
190 for kls in classes:
191 to_plot[kls] = [[], []]
192
193 m = len(frame.columns) - 1
194 s = np.array([(np.cos(t), np.sin(t))
195 for t in [2.0 * np.pi * (i / float(m))
196 for i in range(m)]])
197
198 for i in range(n):
199 row = df.iloc[i].values
200 row_ = np.repeat(np.expand_dims(row, axis=1), 2, axis=1)
201 y = (s * row_).sum(axis=0) / row.sum()
202 kls = class_col.iat[i]
203 to_plot[kls][0].append(y[0])
204 to_plot[kls][1].append(y[1])
205
206 for i, kls in enumerate(classes):
207 ax.scatter(to_plot[kls][0], to_plot[kls][1], color=colors[i],
208 label=pprint_thing(kls), **kwds)
209 ax.legend()
210
211 ax.add_patch(patches.Circle((0.0, 0.0), radius=1.0, facecolor='none'))
212
213 for xy, name in zip(s, df.columns):
214
215 ax.add_patch(patches.Circle(xy, radius=0.025, facecolor='gray'))
216
217 if xy[0] < 0.0 and xy[1] < 0.0:
218 ax.text(xy[0] - 0.025, xy[1] - 0.025, name,
219 ha='right', va='top', size='small')
220 elif xy[0] < 0.0 and xy[1] >= 0.0:
221 ax.text(xy[0] - 0.025, xy[1] + 0.025, name,
222 ha='right', va='bottom', size='small')
223 elif xy[0] >= 0.0 and xy[1] < 0.0:
224 ax.text(xy[0] + 0.025, xy[1] - 0.025, name,
225 ha='left', va='top', size='small')
226 elif xy[0] >= 0.0 and xy[1] >= 0.0:
227 ax.text(xy[0] + 0.025, xy[1] + 0.025, name,
228 ha='left', va='bottom', size='small')
229
230 ax.axis('equal')
231 return ax
232
233
234 @deprecate_kwarg(old_arg_name='data', new_arg_name='frame')
235 def andrews_curves(frame, class_column, ax=None, samples=200, color=None,
236 colormap=None, **kwds):
237 """
238 Generates a matplotlib plot of Andrews curves, for visualising clusters of
239 multivariate data.
240
241 Andrews curves have the functional form:
242
243 f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) +
244 x_4 sin(2t) + x_5 cos(2t) + ...
245
246 Where x coefficients correspond to the values of each dimension and t is
247 linearly spaced between -pi and +pi. Each row of frame then corresponds to
248 a single curve.
249
250 Parameters:
251 -----------
252 frame : DataFrame
253 Data to be plotted, preferably normalized to (0.0, 1.0)
254 class_column : Name of the column containing class names
255 ax : matplotlib axes object, default None
256 samples : Number of points to plot in each curve
257 color: list or tuple, optional
258 Colors to use for the different classes
259 colormap : str or matplotlib colormap object, default None
260 Colormap to select colors from. If string, load colormap with that name
261 from matplotlib.
262 kwds: keywords
263 Options to pass to matplotlib plotting method
264
265 Returns:
266 --------
267 ax: Matplotlib axis object
268
269 """
270 from math import sqrt, pi
271 import matplotlib.pyplot as plt
272
273 def function(amplitudes):
274 def f(t):
275 x1 = amplitudes[0]
276 result = x1 / sqrt(2.0)
277
278 # Take the rest of the coefficients and resize them
279 # appropriately. Take a copy of amplitudes as otherwise numpy
280 # deletes the element from amplitudes itself.
281 coeffs = np.delete(np.copy(amplitudes), 0)
282 coeffs.resize(int((coeffs.size + 1) / 2), 2)
283
284 # Generate the harmonics and arguments for the sin and cos
285 # functions.
286 harmonics = np.arange(0, coeffs.shape[0]) + 1
287 trig_args = np.outer(harmonics, t)
288
289 result += np.sum(coeffs[:, 0, np.newaxis] * np.sin(trig_args) +
290 coeffs[:, 1, np.newaxis] * np.cos(trig_args),
291 axis=0)
292 return result
293 return f
294
295 n = len(frame)
296 class_col = frame[class_column]
297 classes = frame[class_column].drop_duplicates()
298 df = frame.drop(class_column, axis=1)
299 t = np.linspace(-pi, pi, samples)
300 used_legends = set([])
301
302 color_values = _get_standard_colors(num_colors=len(classes),
303 colormap=colormap, color_type='random',
304 color=color)
305 colors = dict(zip(classes, color_values))
306 if ax is None:
307 ax = plt.gca(xlim=(-pi, pi))
308 for i in range(n):
309 row = df.iloc[i].values
310 f = function(row)
311 y = f(t)
312 kls = class_col.iat[i]
313 label = pprint_thing(kls)
314 if label not in used_legends:
315 used_legends.add(label)
316 ax.plot(t, y, color=colors[kls], label=label, **kwds)
317 else:
318 ax.plot(t, y, color=colors[kls], **kwds)
319
320 ax.legend(loc='upper right')
321 ax.grid()
322 return ax
323
324
325 def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
326 """Bootstrap plot.
327
328 Parameters:
329 -----------
330 series: Time series
331 fig: matplotlib figure object, optional
332 size: number of data points to consider during each sampling
333 samples: number of times the bootstrap procedure is performed
334 kwds: optional keyword arguments for plotting commands, must be accepted
335 by both hist and plot
336
337 Returns:
338 --------
339 fig: matplotlib figure
340 """
341 import random
342 import matplotlib.pyplot as plt
343
344 # random.sample(ndarray, int) fails on python 3.3, sigh
345 data = list(series.values)
346 samplings = [random.sample(data, size) for _ in range(samples)]
347
348 means = np.array([np.mean(sampling) for sampling in samplings])
349 medians = np.array([np.median(sampling) for sampling in samplings])
350 midranges = np.array([(min(sampling) + max(sampling)) * 0.5
351 for sampling in samplings])
352 if fig is None:
353 fig = plt.figure()
354 x = lrange(samples)
355 axes = []
356 ax1 = fig.add_subplot(2, 3, 1)
357 ax1.set_xlabel("Sample")
358 axes.append(ax1)
359 ax1.plot(x, means, **kwds)
360 ax2 = fig.add_subplot(2, 3, 2)
361 ax2.set_xlabel("Sample")
362 axes.append(ax2)
363 ax2.plot(x, medians, **kwds)
364 ax3 = fig.add_subplot(2, 3, 3)
365 ax3.set_xlabel("Sample")
366 axes.append(ax3)
367 ax3.plot(x, midranges, **kwds)
368 ax4 = fig.add_subplot(2, 3, 4)
369 ax4.set_xlabel("Mean")
370 axes.append(ax4)
371 ax4.hist(means, **kwds)
372 ax5 = fig.add_subplot(2, 3, 5)
373 ax5.set_xlabel("Median")
374 axes.append(ax5)
375 ax5.hist(medians, **kwds)
376 ax6 = fig.add_subplot(2, 3, 6)
377 ax6.set_xlabel("Midrange")
378 axes.append(ax6)
379 ax6.hist(midranges, **kwds)
380 for axis in axes:
381 plt.setp(axis.get_xticklabels(), fontsize=8)
382 plt.setp(axis.get_yticklabels(), fontsize=8)
383 return fig
384
385
386 @deprecate_kwarg(old_arg_name='colors', new_arg_name='color')
387 @deprecate_kwarg(old_arg_name='data', new_arg_name='frame', stacklevel=3)
388 def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None,
389 use_columns=False, xticks=None, colormap=None,
390 axvlines=True, axvlines_kwds=None, sort_labels=False,
391 **kwds):
392 """Parallel coordinates plotting.
393
394 Parameters
395 ----------
396 frame: DataFrame
397 class_column: str
398 Column name containing class names
399 cols: list, optional
400 A list of column names to use
401 ax: matplotlib.axis, optional
402 matplotlib axis object
403 color: list or tuple, optional
404 Colors to use for the different classes
405 use_columns: bool, optional
406 If true, columns will be used as xticks
407 xticks: list or tuple, optional
408 A list of values to use for xticks
409 colormap: str or matplotlib colormap, default None
410 Colormap to use for line colors.
411 axvlines: bool, optional
412 If true, vertical lines will be added at each xtick
413 axvlines_kwds: keywords, optional
414 Options to be passed to axvline method for vertical lines
415 sort_labels: bool, False
416 Sort class_column labels, useful when assigning colors
417
418 .. versionadded:: 0.20.0
419
420 kwds: keywords
421 Options to pass to matplotlib plotting method
422
423 Returns
424 -------
425 ax: matplotlib axis object
426
427 Examples
428 --------
429 >>> from pandas import read_csv
430 >>> from pandas.tools.plotting import parallel_coordinates
431 >>> from matplotlib import pyplot as plt
432 >>> df = read_csv('https://raw.github.com/pandas-dev/pandas/master'
433 '/pandas/tests/data/iris.csv')
434 >>> parallel_coordinates(df, 'Name', color=('#556270',
435 '#4ECDC4', '#C7F464'))
436 >>> plt.show()
437 """
438 if axvlines_kwds is None:
439 axvlines_kwds = {'linewidth': 1, 'color': 'black'}
440 import matplotlib.pyplot as plt
441
442 n = len(frame)
443 classes = frame[class_column].drop_duplicates()
444 class_col = frame[class_column]
445
446 if cols is None:
447 df = frame.drop(class_column, axis=1)
448 else:
449 df = frame[cols]
450
451 used_legends = set([])
452
453 ncols = len(df.columns)
454
455 # determine values to use for xticks
456 if use_columns is True:
457 if not np.all(np.isreal(list(df.columns))):
458 raise ValueError('Columns must be numeric to be used as xticks')
459 x = df.columns
460 elif xticks is not None:
461 if not np.all(np.isreal(xticks)):
462 raise ValueError('xticks specified must be numeric')
463 elif len(xticks) != ncols:
464 raise ValueError('Length of xticks must match number of columns')
465 x = xticks
466 else:
467 x = lrange(ncols)
468
469 if ax is None:
470 ax = plt.gca()
471
472 color_values = _get_standard_colors(num_colors=len(classes),
473 colormap=colormap, color_type='random',
474 color=color)
475
476 if sort_labels:
477 classes = sorted(classes)
478 color_values = sorted(color_values)
479 colors = dict(zip(classes, color_values))
480
481 for i in range(n):
482 y = df.iloc[i].values
483 kls = class_col.iat[i]
484 label = pprint_thing(kls)
485 if label not in used_legends:
486 used_legends.add(label)
487 ax.plot(x, y, color=colors[kls], label=label, **kwds)
488 else:
489 ax.plot(x, y, color=colors[kls], **kwds)
490
491 if axvlines:
492 for i in x:
493 ax.axvline(i, **axvlines_kwds)
494
495 ax.set_xticks(x)
496 ax.set_xticklabels(df.columns)
497 ax.set_xlim(x[0], x[-1])
498 ax.legend(loc='upper right')
499 ax.grid()
500 return ax
501
502
503 def lag_plot(series, lag=1, ax=None, **kwds):
504 """Lag plot for time series.
505
506 Parameters:
507 -----------
508 series: Time series
509 lag: lag of the scatter plot, default 1
510 ax: Matplotlib axis object, optional
511 kwds: Matplotlib scatter method keyword arguments, optional
512
513 Returns:
514 --------
515 ax: Matplotlib axis object
516 """
517 import matplotlib.pyplot as plt
518
519 # workaround because `c='b'` is hardcoded in matplotlibs scatter method
520 kwds.setdefault('c', plt.rcParams['patch.facecolor'])
521
522 data = series.values
523 y1 = data[:-lag]
524 y2 = data[lag:]
525 if ax is None:
526 ax = plt.gca()
527 ax.set_xlabel("y(t)")
528 ax.set_ylabel("y(t + {lag})".format(lag=lag))
529 ax.scatter(y1, y2, **kwds)
530 return ax
531
532
533 def autocorrelation_plot(series, ax=None, **kwds):
534 """Autocorrelation plot for time series.
535
536 Parameters:
537 -----------
538 series: Time series
539 ax: Matplotlib axis object, optional
540 kwds : keywords
541 Options to pass to matplotlib plotting method
542
543 Returns:
544 -----------
545 ax: Matplotlib axis object
546 """
547 import matplotlib.pyplot as plt
548 n = len(series)
549 data = np.asarray(series)
550 if ax is None:
551 ax = plt.gca(xlim=(1, n), ylim=(-1.0, 1.0))
552 mean = np.mean(data)
553 c0 = np.sum((data - mean) ** 2) / float(n)
554
555 def r(h):
556 return ((data[:n - h] - mean) *
557 (data[h:] - mean)).sum() / float(n) / c0
558 x = np.arange(n) + 1
559 y = lmap(r, x)
560 z95 = 1.959963984540054
561 z99 = 2.5758293035489004
562 ax.axhline(y=z99 / np.sqrt(n), linestyle='--', color='grey')
563 ax.axhline(y=z95 / np.sqrt(n), color='grey')
564 ax.axhline(y=0.0, color='black')
565 ax.axhline(y=-z95 / np.sqrt(n), color='grey')
566 ax.axhline(y=-z99 / np.sqrt(n), linestyle='--', color='grey')
567 ax.set_xlabel("Lag")
568 ax.set_ylabel("Autocorrelation")
569 ax.plot(x, y, **kwds)
570 if 'label' in kwds:
571 ax.legend()
572 ax.grid()
573 return ax
574
[end of pandas/plotting/_misc.py]
[start of pandas/util/_print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7 import locale
8 import importlib
9
10
11 def get_sys_info():
12 "Returns system information as a dict"
13
14 blob = []
15
16 # get full commit hash
17 commit = None
18 if os.path.isdir(".git") and os.path.isdir("pandas"):
19 try:
20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
21 stdout=subprocess.PIPE,
22 stderr=subprocess.PIPE)
23 so, serr = pipe.communicate()
24 except:
25 pass
26 else:
27 if pipe.returncode == 0:
28 commit = so
29 try:
30 commit = so.decode('utf-8')
31 except ValueError:
32 pass
33 commit = commit.strip().strip('"')
34
35 blob.append(('commit', commit))
36
37 try:
38 (sysname, nodename, release,
39 version, machine, processor) = platform.uname()
40 blob.extend([
41 ("python", '.'.join(map(str, sys.version_info))),
42 ("python-bits", struct.calcsize("P") * 8),
43 ("OS", "{sysname}".format(sysname=sysname)),
44 ("OS-release", "{release}".format(release=release)),
45 # ("Version", "{version}".format(version=version)),
46 ("machine", "{machine}".format(machine=machine)),
47 ("processor", "{processor}".format(processor=processor)),
48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)),
49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))),
50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))),
51 ("LOCALE", '.'.join(map(str, locale.getlocale()))),
52 ])
53 except:
54 pass
55
56 return blob
57
58
59 def show_versions(as_json=False):
60 sys_info = get_sys_info()
61
62 deps = [
63 # (MODULE_NAME, f(mod) -> mod version)
64 ("pandas", lambda mod: mod.__version__),
65 ("pytest", lambda mod: mod.__version__),
66 ("pip", lambda mod: mod.__version__),
67 ("setuptools", lambda mod: mod.__version__),
68 ("Cython", lambda mod: mod.__version__),
69 ("numpy", lambda mod: mod.version.version),
70 ("scipy", lambda mod: mod.version.version),
71 ("pyarrow", lambda mod: mod.__version__),
72 ("xarray", lambda mod: mod.__version__),
73 ("IPython", lambda mod: mod.__version__),
74 ("sphinx", lambda mod: mod.__version__),
75 ("patsy", lambda mod: mod.__version__),
76 ("dateutil", lambda mod: mod.__version__),
77 ("pytz", lambda mod: mod.VERSION),
78 ("blosc", lambda mod: mod.__version__),
79 ("bottleneck", lambda mod: mod.__version__),
80 ("tables", lambda mod: mod.__version__),
81 ("numexpr", lambda mod: mod.__version__),
82 ("feather", lambda mod: mod.__version__),
83 ("matplotlib", lambda mod: mod.__version__),
84 ("openpyxl", lambda mod: mod.__version__),
85 ("xlrd", lambda mod: mod.__VERSION__),
86 ("xlwt", lambda mod: mod.__VERSION__),
87 ("xlsxwriter", lambda mod: mod.__version__),
88 ("lxml", lambda mod: mod.etree.__version__),
89 ("bs4", lambda mod: mod.__version__),
90 ("html5lib", lambda mod: mod.__version__),
91 ("sqlalchemy", lambda mod: mod.__version__),
92 ("pymysql", lambda mod: mod.__version__),
93 ("psycopg2", lambda mod: mod.__version__),
94 ("jinja2", lambda mod: mod.__version__),
95 ("s3fs", lambda mod: mod.__version__),
96 ("fastparquet", lambda mod: mod.__version__),
97 ("pandas_gbq", lambda mod: mod.__version__),
98 ("pandas_datareader", lambda mod: mod.__version__),
99 ]
100
101 deps_blob = list()
102 for (modname, ver_f) in deps:
103 try:
104 if modname in sys.modules:
105 mod = sys.modules[modname]
106 else:
107 mod = importlib.import_module(modname)
108 ver = ver_f(mod)
109 deps_blob.append((modname, ver))
110 except:
111 deps_blob.append((modname, None))
112
113 if (as_json):
114 try:
115 import json
116 except:
117 import simplejson as json
118
119 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
120
121 if as_json is True:
122 print(j)
123 else:
124 with codecs.open(as_json, "wb", encoding='utf8') as f:
125 json.dump(j, f, indent=2)
126
127 else:
128
129 print("\nINSTALLED VERSIONS")
130 print("------------------")
131
132 for k, stat in sys_info:
133 print("{k}: {stat}".format(k=k, stat=stat))
134
135 print("")
136 for k, stat in deps_blob:
137 print("{k}: {stat}".format(k=k, stat=stat))
138
139
140 def main():
141 from optparse import OptionParser
142 parser = OptionParser()
143 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
144 help="Save output as JSON into file, pass in "
145 "'-' to output to stdout")
146
147 (options, args) = parser.parse_args()
148
149 if options.json == "-":
150 options.json = True
151
152 show_versions(as_json=options.json)
153
154 return 0
155
156
157 if __name__ == "__main__":
158 sys.exit(main())
159
[end of pandas/util/_print_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
b69c1a26899b38adff8390236ee83ba36af0374e
|
BUG: rolling.corr() produces wrong result with equal values
#### Code Sample, a copy-pastable example if possible
```python
s = pd.Series([1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,7,0,0,0])
pd.rolling_corr(s,s,6)
```
#### Problem description
rolling_corr is producing the wrong result:
```python
pd.rolling_corr(s,s,6)
```

```
0     NaN
1     NaN
2     NaN
3     NaN
4     NaN
5     1.0
6     1.0
7     1.0
8     1.0
9     0.0
10    0.0
11    0.0
12    0.0
13    0.0
14    0.0
15    0.0
16    0.0
17    0.0
18    0.0
19    0.0
20    0.0
21    0.0
22    0.0
23    0.0
24    0.0
25    0.0
26    1.0
27    1.0
28    1.0
29    1.0
30    1.0
31    1.0
32    1.0
33    1.0
```
The result should contain NaN rather than 0.0 for windows with static data (indices 9-25 here, where the 6-element window holds only zeros): the correlation of a constant window is 0/0 and therefore undefined.
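A minimal sketch of that expectation, written against the newer `Series.rolling` API (the deprecated `pd.rolling_corr` used above is assumed to be backed by the same machinery; this is illustrative only, not part of the original report):

```python
import pandas as pd

s = pd.Series([1, 1, 2, 2] + [0] * 22 + [5, 0, 0, 0, 7, 0, 0, 0])

# The rolling correlation is cov / (std * std).  For the windows ending at
# positions 9..25 every value in the window is 0, so both std factors are
# exactly 0 in exact arithmetic and the quotient is 0/0, i.e. NaN.
cov = s.rolling(6).cov(s)
denom = s.rolling(6).std() * s.rolling(6).std()
print((cov / denom).iloc[9:26])         # mathematically NaN, not 0.0
print(s.rolling(6).corr(s).iloc[9:26])  # same expectation for corr()
```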
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.1.35-pv-ts2
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: 3.0.7
pip: 9.0.1
setuptools: 36.4.0
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.19.0
xarray: None
IPython: 5.3.0
sphinx: 1.6.2
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 1.5.1
openpyxl: 2.4.8
xlrd: 1.0.0
xlwt: None
xlsxwriter: 0.9.8
lxml: None
bs4: 4.5.3
html5lib: 0.999
sqlalchemy: 1.1.10
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
|
I think this is a very similar problem to https://github.com/pandas-dev/pandas/issues/18044, recently fixed in #18085. We are NaNing out values that are numerically very close to zero (e.g. denominator is std * std), but in this case we are missing it because they are not identically zero.
@byospe want to try a PR to fix?
I am working on this and the fix is almost done.
It seems that the numerical calculation matters:
https://github.com/pandas-dev/pandas/blob/master/pandas/core/window.py#L1064
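To make the numerical point concrete, here is a rough, purely illustrative sketch in plain Python (the helper below is hypothetical and is not the actual Cython kernel in `pandas/_libs/window.pyx`) of the incremental add/remove update behind the rolling variance. In floating point the running sum of squared deviations need not return to exactly 0.0 once the window holds only identical values, so a guard that only catches an identically zero `std * std` denominator can miss it:

```python
import numpy as np

def naive_rolling_var(values, win, ddof=1):
    """Illustrative rolling variance via incremental updates of a running
    mean and sum of squared deviations (ssqdm)."""
    out = np.full(len(values), np.nan)
    nobs, mean, ssqdm = 0, 0.0, 0.0
    for i, val in enumerate(values):
        # value entering the window
        nobs += 1
        delta = val - mean
        mean += delta / nobs
        ssqdm += delta * (val - mean)
        # value leaving the window
        if i >= win:
            old = values[i - win]
            nobs -= 1
            delta = old - mean
            mean -= delta / nobs
            ssqdm -= delta * (old - mean)
        if i >= win - 1:
            out[i] = ssqdm / (nobs - ddof)
    return out

vals = [1.0, 1.0, 2.0, 2.0] + [0.0] * 12
# Exact arithmetic gives 0.0 for the trailing, all-zero windows; the
# floating-point add/remove steps may instead leave a tiny residue.
print(naive_rolling_var(vals, 6))
```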
|
2017-11-25T13:37:59Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.21.1.txt b/doc/source/whatsnew/v0.21.1.txt
--- a/doc/source/whatsnew/v0.21.1.txt
+++ b/doc/source/whatsnew/v0.21.1.txt
@@ -103,7 +103,7 @@ Groupby/Resample/Rolling
- Bug in ``DataFrame.resample(...).apply(...)`` when there is a callable that returns different columns (:issue:`15169`)
- Bug in ``DataFrame.resample(...)`` when there is a time change (DST) and resampling frequecy is 12h or higher (:issue:`15549`)
- Bug in ``pd.DataFrameGroupBy.count()`` when counting over a datetimelike column (:issue:`13393`)
--
+- Bug in ``rolling.var`` where calculation is inaccurate with a zero-valued array (:issue:`18430`)
-
-
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -661,9 +661,11 @@ cdef inline void add_var(double val, double *nobs, double *mean_x,
if val == val:
nobs[0] = nobs[0] + 1
- delta = (val - mean_x[0])
+ # a part of Welford's method for the online variance-calculation
+ # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+ delta = val - mean_x[0]
mean_x[0] = mean_x[0] + delta / nobs[0]
- ssqdm_x[0] = ssqdm_x[0] + delta * (val - mean_x[0])
+ ssqdm_x[0] = ssqdm_x[0] + ((nobs[0] - 1) * delta ** 2) / nobs[0]
cdef inline void remove_var(double val, double *nobs, double *mean_x,
@@ -675,9 +677,11 @@ cdef inline void remove_var(double val, double *nobs, double *mean_x,
if val == val:
nobs[0] = nobs[0] - 1
if nobs[0]:
- delta = (val - mean_x[0])
+ # a part of Welford's method for the online variance-calculation
+ # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+ delta = val - mean_x[0]
mean_x[0] = mean_x[0] - delta / nobs[0]
- ssqdm_x[0] = ssqdm_x[0] - delta * (val - mean_x[0])
+ ssqdm_x[0] = ssqdm_x[0] - ((nobs[0] + 1) * delta ** 2) / nobs[0]
else:
mean_x[0] = 0
ssqdm_x[0] = 0
@@ -689,7 +693,7 @@ def roll_var(ndarray[double_t] input, int64_t win, int64_t minp,
Numerically stable implementation using Welford's method.
"""
cdef:
- double val, prev, mean_x = 0, ssqdm_x = 0, nobs = 0, delta
+ double val, prev, mean_x = 0, ssqdm_x = 0, nobs = 0, delta, mean_x_old
int64_t s, e
bint is_variable
Py_ssize_t i, j, N
@@ -749,6 +753,9 @@ def roll_var(ndarray[double_t] input, int64_t win, int64_t minp,
add_var(input[i], &nobs, &mean_x, &ssqdm_x)
output[i] = calc_var(minp, ddof, nobs, ssqdm_x)
+ # a part of Welford's method for the online variance-calculation
+ # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+
# After the first window, observations can both be added and
# removed
for i from win <= i < N:
@@ -760,10 +767,12 @@ def roll_var(ndarray[double_t] input, int64_t win, int64_t minp,
# Adding one observation and removing another one
delta = val - prev
- prev -= mean_x
+ mean_x_old = mean_x
+
mean_x += delta / nobs
- val -= mean_x
- ssqdm_x += (val + prev) * delta
+ ssqdm_x += ((nobs - 1) * val
+ + (nobs + 1) * prev
+ - 2 * nobs * mean_x_old) * delta / nobs
else:
add_var(val, &nobs, &mean_x, &ssqdm_x)
</patch>
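For readers less used to Cython, here is a plain-Python paraphrase of the Welford-style `add_var`/`remove_var` updates in the diff above (only a sketch of the algebra; the real kernel lives in `pandas/_libs/window.pyx`):

```python
def add_var(val, nobs, mean, ssqdm):
    """Welford update when `val` enters the window; returns the new state."""
    nobs += 1
    delta = val - mean
    mean += delta / nobs
    # algebraically equal to delta * (val - updated_mean); the squared form
    # keeps every increment non-negative by construction
    ssqdm += (nobs - 1) * delta ** 2 / nobs
    return nobs, mean, ssqdm

def remove_var(val, nobs, mean, ssqdm):
    """Welford downdate when `val` leaves the window; returns the new state."""
    nobs -= 1
    if nobs:
        delta = val - mean
        mean -= delta / nobs
        ssqdm -= (nobs + 1) * delta ** 2 / nobs
    else:
        mean, ssqdm = 0.0, 0.0
    return nobs, mean, ssqdm

# The variance of the current window is ssqdm / (nobs - ddof).
state = (0, 0.0, 0.0)
for v in (1.0, 1.0, 2.0, 2.0, 0.0, 0.0):
    state = add_var(v, *state)
nobs, mean, ssqdm = state
print(ssqdm / (nobs - 1))  # ~0.8, the sample variance of the window
```

Compared with updating `ssqdm` as `delta * (val - updated_mean)`, the squared form cannot round to a negative increment, which appears to be the point of the rewrite.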
|
[]
|
[]
| |||
Qiskit__qiskit-5162
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ALAP scheduling fails when a circuit with custom instruction is supplied
### What is the current behavior?
Transpiling with `scheduling_method="alap"` but without `backend` fails when a circuit with custom instructions is supplied.
This is due to an instruction name mismatch: it occurs when the `ALAPSchedule` pass calls `reverse_ops()`, which changes the instruction name. (So this error does not happen with the ASAP scheduler.)
### Steps to reproduce the problem
```
bell = QuantumCircuit(2, name="bell")
bell.h(0)
bell.cx(0, 1)
qc = QuantumCircuit(2)
qc.delay(500, 1)
qc.append(bell.to_instruction(), [0, 1])
scheduled = transpile(qc,
scheduling_method='alap',
instruction_durations=[('bell', [0, 1], 1000)])
==> qiskit.transpiler.exceptions.TranspilerError: 'Duration of bell_reverse on qubits [0, 1] is not found.'
```
### What is the expected behavior?
The above circuit should be scheduled successfully.
### Suggested solutions
We may have three options:
1. don't add "_reverse" to the name of the reversed instruction
2. patch the `instruction_durations` with the new `.._reverse` gate duration as well (a sketch of this workaround follows the issue text below)
3. rewrite the ALAP scheduler without using `reverse_ops()`
</issue>
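As a concrete illustration of suggested solution 2 above (a hedged workaround sketch, not part of the original report): registering a duration for the renamed `bell_reverse` instruction that `reverse_ops()` produces should let the reproduction schedule, at the cost of duplicating every custom-gate entry.

```python
from qiskit import QuantumCircuit, transpile

bell = QuantumCircuit(2, name="bell")
bell.h(0)
bell.cx(0, 1)

qc = QuantumCircuit(2)
qc.delay(500, 1)
qc.append(bell.to_instruction(), [0, 1])

# Workaround sketch: also provide the duration under the "<name>_reverse"
# key that the ALAP pass looks up after reversing the circuit.
scheduled = transpile(
    qc,
    scheduling_method='alap',
    instruction_durations=[('bell', [0, 1], 1000),
                           ('bell_reverse', [0, 1], 1000)],
)
```

Options 1 and 3 would remove the mismatch at its source instead of papering over it in the durations table.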
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> transpiled_qc = transpile(qc, backend_sim)
40 >>> result = backend_sim.run(assemble(transpiled_qc)).result()
41 >>> print(result.get_counts(qc))
42 ```
43
44 In this case, the output will be:
45
46 ```python
47 {'00': 513, '11': 511}
48 ```
49
50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
51 run the same program on a real quantum computer via IBMQ.
52
53 ### Executing your code on a real quantum chip
54
55 You can also use Qiskit to execute your code on a
56 **real quantum chip**.
57 In order to do so, you need to configure Qiskit for using the credentials in
58 your IBM Q account:
59
60 #### Configure your IBMQ credentials
61
62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
63
64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
65
66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
67
68 ```python
69 >>> from qiskit import IBMQ
70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
71 ```
72
73 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
74 Once they are stored, at any point in the future you can load and use them
75 in your program simply via:
76
77 ```python
78 >>> from qiskit import IBMQ
79 >>> IBMQ.load_account()
80 ```
81
82 Those who do not want to save their credentials to disk should use instead:
83
84 ```python
85 >>> from qiskit import IBMQ
86 >>> IBMQ.enable_account('MY_API_TOKEN')
87 ```
88
89 and the token will only be active for the session. For examples using Terra with real
90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
91 the levels.
92
93 ## Contribution Guidelines
94
95 If you'd like to contribute to Qiskit Terra, please take a look at our
96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
97
98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
102
103 ## Next Steps
104
105 Now you're set up and ready to check out some of the other examples from our
106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
107
108 ## Authors and Citation
109
110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
112
113 ## Changelog and Release Notes
114
115 The changelog for a particular release is dynamically generated and gets
116 written to the release page on Github for each release. For example, you can
117 find the page for the `0.9.0` release here:
118
119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
120
121 The changelog for the current release can be found in the releases tab:
122 
123 The changelog provides a quick overview of notable changes for a given
124 release.
125
126 Additionally, as part of each release detailed release notes are written to
127 document in detail what has changed as part of a release. This includes any
128 documentation on potential breaking changes on upgrade and new features.
129 For example, You can find the release notes for the `0.9.0` release in the
130 Qiskit documentation here:
131
132 https://qiskit.org/documentation/release_notes.html#terra-0-9
133
134 ## License
135
136 [Apache License 2.0](LICENSE.txt)
137
[end of README.md]
[start of qiskit/compiler/transpile.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Circuit transpile function"""
14 import logging
15 import warnings
16 from time import time
17 from typing import List, Union, Dict, Callable, Any, Optional, Tuple
18
19 from qiskit import user_config
20 from qiskit.circuit.quantumcircuit import QuantumCircuit
21 from qiskit.circuit.quantumregister import Qubit
22 from qiskit.converters import isinstanceint, isinstancelist, dag_to_circuit, circuit_to_dag
23 from qiskit.dagcircuit import DAGCircuit
24 from qiskit.providers import BaseBackend
25 from qiskit.providers.models import BackendProperties
26 from qiskit.providers.models.backendproperties import Gate
27 from qiskit.pulse import Schedule
28 from qiskit.tools.parallel import parallel_map
29 from qiskit.transpiler import Layout, CouplingMap, PropertySet, PassManager
30 from qiskit.transpiler.basepasses import BasePass
31 from qiskit.transpiler.exceptions import TranspilerError
32 from qiskit.transpiler.instruction_durations import InstructionDurationsType
33 from qiskit.transpiler.passes import ApplyLayout
34 from qiskit.transpiler.passmanager_config import PassManagerConfig
35 from qiskit.transpiler.preset_passmanagers import (level_0_pass_manager,
36 level_1_pass_manager,
37 level_2_pass_manager,
38 level_3_pass_manager)
39
40 LOG = logging.getLogger(__name__)
41
42
43 def transpile(circuits: Union[QuantumCircuit, List[QuantumCircuit]],
44 backend: Optional[BaseBackend] = None,
45 basis_gates: Optional[List[str]] = None,
46 coupling_map: Optional[Union[CouplingMap, List[List[int]]]] = None,
47 backend_properties: Optional[BackendProperties] = None,
48 initial_layout: Optional[Union[Layout, Dict, List]] = None,
49 layout_method: Optional[str] = None,
50 routing_method: Optional[str] = None,
51 translation_method: Optional[str] = None,
52 scheduling_method: Optional[str] = None,
53 instruction_durations: Optional[InstructionDurationsType] = None,
54 dt: Optional[float] = None,
55 seed_transpiler: Optional[int] = None,
56 optimization_level: Optional[int] = None,
57 pass_manager: Optional[PassManager] = None,
58 callback: Optional[Callable[[BasePass, DAGCircuit, float,
59 PropertySet, int], Any]] = None,
60 output_name: Optional[Union[str, List[str]]] = None) -> Union[QuantumCircuit,
61 List[QuantumCircuit]]:
62 """Transpile one or more circuits, according to some desired transpilation targets.
63
64 All arguments may be given as either a singleton or list. In case of a list,
65 the length must be equal to the number of circuits being transpiled.
66
67 Transpilation is done in parallel using multiprocessing.
68
69 Args:
70 circuits: Circuit(s) to transpile
71 backend: If set, transpiler options are automatically grabbed from
72 ``backend.configuration()`` and ``backend.properties()``.
73 If any other option is explicitly set (e.g., ``coupling_map``), it
74 will override the backend's.
75
76 .. note::
77
78 The backend arg is purely for convenience. The resulting
79 circuit may be run on any backend as long as it is compatible.
80 basis_gates: List of basis gate names to unroll to
81 (e.g: ``['u1', 'u2', 'u3', 'cx']``). If ``None``, do not unroll.
82 coupling_map: Coupling map (perhaps custom) to target in mapping.
83 Multiple formats are supported:
84
85 #. ``CouplingMap`` instance
86 #. List, must be given as an adjacency matrix, where each entry
87 specifies all two-qubit interactions supported by backend,
88 e.g: ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
89
90 backend_properties: properties returned by a backend, including information on gate
91 errors, readout errors, qubit coherence times, etc. Find a backend
92 that provides this information with: ``backend.properties()``
93 initial_layout: Initial position of virtual qubits on physical qubits.
94 If this layout makes the circuit compatible with the coupling_map
95 constraints, it will be used. The final layout is not guaranteed to be the same,
96 as the transpiler may permute qubits through swaps or other means.
97 Multiple formats are supported:
98
99 #. ``Layout`` instance
100 #. Dict
101 * virtual to physical::
102
103 {qr[0]: 0,
104 qr[1]: 3,
105 qr[2]: 5}
106
107 * physical to virtual::
108
109 {0: qr[0],
110 3: qr[1],
111 5: qr[2]}
112
113 #. List
114
115 * virtual to physical::
116
117 [0, 3, 5] # virtual qubits are ordered (in addition to named)
118
119 * physical to virtual::
120
121 [qr[0], None, None, qr[1], None, qr[2]]
122
123 layout_method: Name of layout selection pass ('trivial', 'dense', 'noise_adaptive', 'sabre')
124 Sometimes a perfect layout can be available in which case the layout_method
125 may not run.
126 routing_method: Name of routing pass ('basic', 'lookahead', 'stochastic', 'sabre')
127 translation_method: Name of translation pass ('unroller', 'translator', 'synthesis')
128 scheduling_method: Name of scheduling pass.
129 * ``'as_soon_as_possible'``: Schedule instructions greedily, as early as possible
130 on a qubit resource. alias: ``'asap'``)
131 * ``'as_late_as_possible'``: Schedule instructions late, i.e. keeping qubits
132 in the ground state when possible. (alias: ``'alap'``)
133 If ``None``, no scheduling will be done.
134 instruction_durations: Durations of instructions.
135 The gate lengths defined in ``backend.properties`` are used as default and
136 they are updated (overwritten) if this ``instruction_durations`` is specified.
137 The format of ``instruction_durations`` must be as follows.
138 The `instruction_durations` must be given as a list of tuples
139 [(instruction_name, qubits, duration, unit), ...].
140 | [('cx', [0, 1], 12.3, 'ns'), ('u3', [0], 4.56, 'ns')]
141 | [('cx', [0, 1], 1000), ('u3', [0], 300)]
142 If unit is omitted, the default is 'dt', which is a sample time depending on backend.
143 If the time unit is 'dt', the duration must be an integer.
144 dt: Backend sample time (resolution) in seconds.
145 If ``None`` (default), ``backend.configuration().dt`` is used.
146 seed_transpiler: Sets random seed for the stochastic parts of the transpiler
147 optimization_level: How much optimization to perform on the circuits.
148 Higher levels generate more optimized circuits,
149 at the expense of longer transpilation time.
150 * 0: no optimization
151 * 1: light optimization
152 * 2: heavy optimization
153 * 3: even heavier optimization
154 If ``None``, level 1 will be chosen as default.
155 pass_manager: The pass manager to use for a custom pipeline of transpiler passes.
156 If this arg is present, all other args will be ignored and the
157 pass manager will be used directly (Qiskit will not attempt to
158 auto-select a pass manager based on transpile options).
159 callback: A callback function that will be called after each
160 pass execution. The function will be called with 5 keyword
161 arguments,
162 | ``pass_``: the pass being run.
163 | ``dag``: the dag output of the pass.
164 | ``time``: the time to execute the pass.
165 | ``property_set``: the property set.
166 | ``count``: the index for the pass execution.
167 The exact arguments passed expose the internals of the pass manager,
168 and are subject to change as the pass manager internals change. If
169 you intend to reuse a callback function over multiple releases, be
170 sure to check that the arguments being passed are the same.
171 To use the callback feature, define a function that will
172 take in kwargs dict and access the variables. For example::
173
174 def callback_func(**kwargs):
175 pass_ = kwargs['pass_']
176 dag = kwargs['dag']
177 time = kwargs['time']
178 property_set = kwargs['property_set']
179 count = kwargs['count']
180 ...
181 transpile(circ, callback=callback_func)
182
183 output_name: A list with strings to identify the output circuits. The length of
184 the list should be exactly the length of the ``circuits`` parameter.
185
186 Returns:
187 The transpiled circuit(s).
188
189 Raises:
190 TranspilerError: in case of bad inputs to transpiler (like conflicting parameters)
191 or errors in passes
192 """
193 circuits = circuits if isinstance(circuits, list) else [circuits]
194
195 # transpiling schedules is not supported yet.
196 start_time = time()
197 if all(isinstance(c, Schedule) for c in circuits):
198 warnings.warn("Transpiling schedules is not supported yet.", UserWarning)
199 if len(circuits) == 1:
200 end_time = time()
201 _log_transpile_time(start_time, end_time)
202 return circuits[0]
203 end_time = time()
204 _log_transpile_time(start_time, end_time)
205 return circuits
206
207 if pass_manager is not None:
208 _check_conflicting_argument(optimization_level=optimization_level, basis_gates=basis_gates,
209 coupling_map=coupling_map, seed_transpiler=seed_transpiler,
210 backend_properties=backend_properties,
211 initial_layout=initial_layout, layout_method=layout_method,
212 routing_method=routing_method,
213 translation_method=translation_method,
214 backend=backend)
215
216 warnings.warn("The parameter pass_manager in transpile is being deprecated. "
217 "The preferred way to tranpile a circuit using a custom pass manager is"
218 " pass_manager.run(circuit)", DeprecationWarning, stacklevel=2)
219 return pass_manager.run(circuits, output_name=output_name, callback=callback)
220
221 if optimization_level is None:
222 # Take optimization level from the configuration or 1 as default.
223 config = user_config.get_config()
224 optimization_level = config.get('transpile_optimization_level', 1)
225
226 if scheduling_method is not None and backend is None and not instruction_durations:
227 warnings.warn("When scheduling circuits without backend,"
228 " 'instruction_durations' should be usually provided.",
229 UserWarning)
230
231 # Get transpile_args to configure the circuit transpilation job(s)
232 transpile_args = _parse_transpile_args(circuits, backend, basis_gates, coupling_map,
233 backend_properties, initial_layout,
234 layout_method, routing_method, translation_method,
235 scheduling_method, instruction_durations, dt,
236 seed_transpiler, optimization_level,
237 callback, output_name)
238
239 _check_circuits_coupling_map(circuits, transpile_args, backend)
240
241 # Transpile circuits in parallel
242 circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
243
244 if len(circuits) == 1:
245 end_time = time()
246 _log_transpile_time(start_time, end_time)
247 return circuits[0]
248 end_time = time()
249 _log_transpile_time(start_time, end_time)
250 return circuits
251
252
253 def _check_conflicting_argument(**kargs):
254 conflicting_args = [arg for arg, value in kargs.items() if value]
255 if conflicting_args:
256 raise TranspilerError("The parameters pass_manager conflicts with the following "
257 "parameter(s): {}.".format(', '.join(conflicting_args)))
258
259
260 def _check_circuits_coupling_map(circuits, transpile_args, backend):
261 # Check circuit width against number of qubits in coupling_map(s)
262 coupling_maps_list = list(config['pass_manager_config'].coupling_map for config in
263 transpile_args)
264 for circuit, parsed_coupling_map in zip(circuits, coupling_maps_list):
265 # If coupling_map is not None or num_qubits == 1
266 num_qubits = len(circuit.qubits)
267 max_qubits = None
268 if isinstance(parsed_coupling_map, CouplingMap):
269 max_qubits = parsed_coupling_map.size()
270
271 # If coupling_map is None, the limit might be in the backend (like in 1Q devices)
272 elif backend is not None and not backend.configuration().simulator:
273 max_qubits = backend.configuration().n_qubits
274
275 if max_qubits is not None and (num_qubits > max_qubits):
276 raise TranspilerError('Number of qubits ({}) '.format(num_qubits) +
277 'in {} '.format(circuit.name) +
278 'is greater than maximum ({}) '.format(max_qubits) +
279 'in the coupling_map')
280
281
282 def _log_transpile_time(start_time, end_time):
283 log_msg = "Total Transpile Time - %.5f (ms)" % ((end_time - start_time) * 1000)
284 LOG.info(log_msg)
285
286
287 def _transpile_circuit(circuit_config_tuple: Tuple[QuantumCircuit, Dict]) -> QuantumCircuit:
288 """Select a PassManager and run a single circuit through it.
289 Args:
290 circuit_config_tuple (tuple):
291 circuit (QuantumCircuit): circuit to transpile
292 transpile_config (dict): configuration dictating how to transpile. The
293 dictionary has the following format:
294 {'optimization_level': int,
295 'output_name': string,
296 'callback': callable,
297 'pass_manager_config': PassManagerConfig}
298 Returns:
299 The transpiled circuit
300 Raises:
301 TranspilerError: if transpile_config is not valid or transpilation incurs error
302 """
303 circuit, transpile_config = circuit_config_tuple
304
305 pass_manager_config = transpile_config['pass_manager_config']
306
307 if transpile_config['faulty_qubits_map']:
308 pass_manager_config.initial_layout = _remap_layout_faulty_backend(
309 pass_manager_config.initial_layout, transpile_config['faulty_qubits_map'])
310
311 # we choose an appropriate one based on desired optimization level
312 level = transpile_config['optimization_level']
313
314 if level == 0:
315 pass_manager = level_0_pass_manager(pass_manager_config)
316 elif level == 1:
317 pass_manager = level_1_pass_manager(pass_manager_config)
318 elif level == 2:
319 pass_manager = level_2_pass_manager(pass_manager_config)
320 elif level == 3:
321 pass_manager = level_3_pass_manager(pass_manager_config)
322 else:
323 raise TranspilerError("optimization_level can range from 0 to 3.")
324
325 if pass_manager_config.scheduling_method is not None:
326 if pass_manager_config.basis_gates:
327 if 'delay' not in pass_manager_config.basis_gates:
328 pass_manager_config.basis_gates.append('delay')
329 else:
330 pass_manager_config.basis_gates = ['delay']
331
332 result = pass_manager.run(circuit, callback=transpile_config['callback'],
333 output_name=transpile_config['output_name'])
334
335 if transpile_config['faulty_qubits_map']:
336 return _remap_circuit_faulty_backend(result, transpile_config['backend_num_qubits'],
337 pass_manager_config.backend_properties,
338 transpile_config['faulty_qubits_map'])
339
340 return result
341
342
343 def _remap_circuit_faulty_backend(circuit, num_qubits, backend_prop, faulty_qubits_map):
344 faulty_qubits = backend_prop.faulty_qubits() if backend_prop else []
345 disconnected_qubits = {k for k, v in faulty_qubits_map.items()
346 if v is None}.difference(faulty_qubits)
347 faulty_qubits_map_reverse = {v: k for k, v in faulty_qubits_map.items()}
348 if faulty_qubits:
349 faulty_qreg = circuit._create_qreg(len(faulty_qubits), 'faulty')
350 else:
351 faulty_qreg = []
352 if disconnected_qubits:
353 disconnected_qreg = circuit._create_qreg(len(disconnected_qubits), 'disconnected')
354 else:
355 disconnected_qreg = []
356
357 new_layout = Layout()
358 faulty_qubit = 0
359 disconnected_qubit = 0
360
361 for real_qubit in range(num_qubits):
362 if faulty_qubits_map[real_qubit] is not None:
363 new_layout[real_qubit] = circuit._layout[faulty_qubits_map[real_qubit]]
364 else:
365 if real_qubit in faulty_qubits:
366 new_layout[real_qubit] = faulty_qreg[faulty_qubit]
367 faulty_qubit += 1
368 else:
369 new_layout[real_qubit] = disconnected_qreg[disconnected_qubit]
370 disconnected_qubit += 1
371 physical_layout_dict = {}
372 for qubit in circuit.qubits:
373 physical_layout_dict[qubit] = faulty_qubits_map_reverse[qubit.index]
374 for qubit in faulty_qreg[:] + disconnected_qreg[:]:
375 physical_layout_dict[qubit] = new_layout[qubit]
376 dag_circuit = circuit_to_dag(circuit)
377 apply_layout_pass = ApplyLayout()
378 apply_layout_pass.property_set['layout'] = Layout(physical_layout_dict)
379 circuit = dag_to_circuit(apply_layout_pass.run(dag_circuit))
380 circuit._layout = new_layout
381 return circuit
382
383
384 def _remap_layout_faulty_backend(layout, faulty_qubits_map):
385 if layout is None:
386 return layout
387 new_layout = Layout()
388 for virtual, physical in layout.get_virtual_bits().items():
389 if faulty_qubits_map[physical] is None:
390 raise TranspilerError("The initial_layout parameter refers to faulty"
391 " or disconnected qubits")
392 new_layout[virtual] = faulty_qubits_map[physical]
393 return new_layout
394
395
396 def _parse_transpile_args(circuits, backend,
397 basis_gates, coupling_map, backend_properties,
398 initial_layout, layout_method, routing_method, translation_method,
399 scheduling_method, instruction_durations, dt,
400 seed_transpiler, optimization_level,
401 callback, output_name) -> List[Dict]:
402 """Resolve the various types of args allowed to the transpile() function through
403 duck typing, overriding args, etc. Refer to the transpile() docstring for details on
404 what types of inputs are allowed.
405
406 Here the args are resolved by converting them to standard instances, and prioritizing
407 them in case a transpile option is passed through multiple args (explicitly setting an
408 arg has more priority than the arg set by backend).
409
410 Returns:
411 list[dicts]: a list of transpile parameters.
412 """
413 if initial_layout is not None and layout_method is not None:
414 warnings.warn("initial_layout provided; layout_method is ignored.",
415 UserWarning)
416 # Each arg could be single or a list. If list, it must be the same size as
417 # number of circuits. If single, duplicate to create a list of that size.
418 num_circuits = len(circuits)
419
420 basis_gates = _parse_basis_gates(basis_gates, backend, circuits)
421 faulty_qubits_map = _parse_faulty_qubits_map(backend, num_circuits)
422 coupling_map = _parse_coupling_map(coupling_map, backend, num_circuits)
423 backend_properties = _parse_backend_properties(backend_properties, backend, num_circuits)
424 backend_num_qubits = _parse_backend_num_qubits(backend, num_circuits)
425 initial_layout = _parse_initial_layout(initial_layout, circuits)
426 layout_method = _parse_layout_method(layout_method, num_circuits)
427 routing_method = _parse_routing_method(routing_method, num_circuits)
428 translation_method = _parse_translation_method(translation_method, num_circuits)
429 durations = _parse_instruction_durations(backend, instruction_durations, dt,
430 scheduling_method, num_circuits)
431 scheduling_method = _parse_scheduling_method(scheduling_method, num_circuits)
432 seed_transpiler = _parse_seed_transpiler(seed_transpiler, num_circuits)
433 optimization_level = _parse_optimization_level(optimization_level, num_circuits)
434 output_name = _parse_output_name(output_name, circuits)
435 callback = _parse_callback(callback, num_circuits)
436
437 list_transpile_args = []
438 for args in zip(basis_gates, coupling_map, backend_properties, initial_layout,
439 layout_method, routing_method, translation_method, scheduling_method,
440 durations, seed_transpiler, optimization_level,
441 output_name, callback, backend_num_qubits, faulty_qubits_map):
442 transpile_args = {'pass_manager_config': PassManagerConfig(basis_gates=args[0],
443 coupling_map=args[1],
444 backend_properties=args[2],
445 initial_layout=args[3],
446 layout_method=args[4],
447 routing_method=args[5],
448 translation_method=args[6],
449 scheduling_method=args[7],
450 instruction_durations=args[8],
451 seed_transpiler=args[9]),
452 'optimization_level': args[10],
453 'output_name': args[11],
454 'callback': args[12],
455 'backend_num_qubits': args[13],
456 'faulty_qubits_map': args[14]}
457 list_transpile_args.append(transpile_args)
458
459 return list_transpile_args
460
461
462 def _create_faulty_qubits_map(backend):
463 """If the backend has faulty qubits, those should be excluded. A faulty_qubit_map is a map
464 from working qubit in the backend to dumnmy qubits that are consecutive and connected."""
465 faulty_qubits_map = None
466 if backend is not None:
467 if backend.properties():
468 faulty_qubits = backend.properties().faulty_qubits()
469 faulty_edges = [gates.qubits for gates in backend.properties().faulty_gates()]
470 else:
471 faulty_qubits = []
472 faulty_edges = []
473
474 if faulty_qubits or faulty_edges:
475 faulty_qubits_map = {}
476 configuration = backend.configuration()
477 full_coupling_map = configuration.coupling_map
478 functional_cm_list = [edge for edge in full_coupling_map
479 if (set(edge).isdisjoint(faulty_qubits) and
480 edge not in faulty_edges)]
481
482 connected_working_qubits = CouplingMap(functional_cm_list).largest_connected_component()
483 dummy_qubit_counter = 0
484 for qubit in range(configuration.n_qubits):
485 if qubit in connected_working_qubits:
486 faulty_qubits_map[qubit] = dummy_qubit_counter
487 dummy_qubit_counter += 1
488 else:
489 faulty_qubits_map[qubit] = None
490 return faulty_qubits_map
491
492
493 def _parse_basis_gates(basis_gates, backend, circuits):
494 # try getting basis_gates from user, else backend
495 if basis_gates is None:
496 if getattr(backend, 'configuration', None):
497 basis_gates = getattr(backend.configuration(), 'basis_gates', None)
498 # basis_gates could be None, or a list of basis, e.g. ['u3', 'cx']
499 if basis_gates is None or (isinstance(basis_gates, list) and
500 all(isinstance(i, str) for i in basis_gates)):
501 basis_gates = [basis_gates] * len(circuits)
502
503 return basis_gates
504
505
506 def _parse_coupling_map(coupling_map, backend, num_circuits):
507 # try getting coupling_map from user, else backend
508 if coupling_map is None:
509 if getattr(backend, 'configuration', None):
510 configuration = backend.configuration()
511 if hasattr(configuration, 'coupling_map') and configuration.coupling_map:
512 faulty_map = _create_faulty_qubits_map(backend)
513 if faulty_map:
514 coupling_map = CouplingMap()
515 for qubit1, qubit2 in configuration.coupling_map:
516 if faulty_map[qubit1] is not None and faulty_map[qubit2] is not None:
517 coupling_map.add_edge(faulty_map[qubit1], faulty_map[qubit2])
518 else:
519 coupling_map = CouplingMap(configuration.coupling_map)
520
521 # coupling_map could be None, or a list of lists, e.g. [[0, 1], [2, 1]]
522 if coupling_map is None or isinstance(coupling_map, CouplingMap):
523 coupling_map = [coupling_map] * num_circuits
524 elif isinstance(coupling_map, list) and all(isinstance(i, list) and len(i) == 2
525 for i in coupling_map):
526 coupling_map = [coupling_map] * num_circuits
527
528 coupling_map = [CouplingMap(cm) if isinstance(cm, list) else cm for cm in coupling_map]
529
530 return coupling_map
531
532
533 def _parse_backend_properties(backend_properties, backend, num_circuits):
534 # try getting backend_properties from user, else backend
535 if backend_properties is None:
536 if getattr(backend, 'properties', None):
537 backend_properties = backend.properties()
538 if backend_properties and \
539 (backend_properties.faulty_qubits() or backend_properties.faulty_gates()):
540 faulty_qubits = sorted(backend_properties.faulty_qubits(), reverse=True)
541 faulty_edges = [gates.qubits for gates in backend_properties.faulty_gates()]
542 # remove faulty qubits in backend_properties.qubits
543 for faulty_qubit in faulty_qubits:
544 del backend_properties.qubits[faulty_qubit]
545
546 gates = []
547 for gate in backend_properties.gates:
548 # remove gates using faulty edges or with faulty qubits (and remap the
549 # gates in terms of faulty_qubits_map)
550 faulty_qubits_map = _create_faulty_qubits_map(backend)
551 if any([faulty_qubits_map[qubits] is not None for qubits in gate.qubits]) or \
552 gate.qubits in faulty_edges:
553 continue
554 gate_dict = gate.to_dict()
555 replacement_gate = Gate.from_dict(gate_dict)
556 gate_dict['qubits'] = [faulty_qubits_map[qubit] for qubit in gate.qubits]
557 args = '_'.join([str(qubit) for qubit in gate_dict['qubits']])
558 gate_dict['name'] = "%s%s" % (gate_dict['gate'], args)
559 gates.append(replacement_gate)
560
561 backend_properties.gates = gates
562 if not isinstance(backend_properties, list):
563 backend_properties = [backend_properties] * num_circuits
564 return backend_properties
565
566
567 def _parse_backend_num_qubits(backend, num_circuits):
568 if backend is None:
569 return [None] * num_circuits
570 if not isinstance(backend, list):
571 return [backend.configuration().n_qubits] * num_circuits
572 backend_num_qubits = []
573 for a_backend in backend:
574 backend_num_qubits.append(a_backend.configuration().n_qubits)
575 return backend_num_qubits
576
577
578 def _parse_initial_layout(initial_layout, circuits):
579 # initial_layout could be None, or a list of ints, e.g. [0, 5, 14]
580 # or a list of tuples/None e.g. [qr[0], None, qr[1]] or a dict e.g. {qr[0]: 0}
581 def _layout_from_raw(initial_layout, circuit):
582 if initial_layout is None or isinstance(initial_layout, Layout):
583 return initial_layout
584 elif isinstancelist(initial_layout):
585 if all(isinstanceint(elem) for elem in initial_layout):
586 initial_layout = Layout.from_intlist(initial_layout, *circuit.qregs)
587 elif all(elem is None or isinstance(elem, Qubit) for elem in initial_layout):
588 initial_layout = Layout.from_qubit_list(initial_layout)
589 elif isinstance(initial_layout, dict):
590 initial_layout = Layout(initial_layout)
591 else:
592 raise TranspilerError("The initial_layout parameter could not be parsed")
593 return initial_layout
594
595 # multiple layouts?
596 if isinstance(initial_layout, list) and \
597 any(isinstance(i, (list, dict)) for i in initial_layout):
598 initial_layout = [_layout_from_raw(lo, circ) if isinstance(lo, (list, dict)) else lo
599 for lo, circ in zip(initial_layout, circuits)]
600 else:
601 # even if one layout, but multiple circuits, the layout needs to be adapted for each
602 initial_layout = [_layout_from_raw(initial_layout, circ) for circ in circuits]
603
604 if not isinstance(initial_layout, list):
605 initial_layout = [initial_layout] * len(circuits)
606
607 return initial_layout
608
609
610 def _parse_layout_method(layout_method, num_circuits):
611 if not isinstance(layout_method, list):
612 layout_method = [layout_method] * num_circuits
613 return layout_method
614
615
616 def _parse_routing_method(routing_method, num_circuits):
617 if not isinstance(routing_method, list):
618 routing_method = [routing_method] * num_circuits
619 return routing_method
620
621
622 def _parse_translation_method(translation_method, num_circuits):
623 if not isinstance(translation_method, list):
624 translation_method = [translation_method] * num_circuits
625 return translation_method
626
627
628 def _parse_scheduling_method(scheduling_method, num_circuits):
629 if not isinstance(scheduling_method, list):
630 scheduling_method = [scheduling_method] * num_circuits
631 return scheduling_method
632
633
634 def _parse_instruction_durations(backend, inst_durations, dt, scheduling_method, num_circuits):
635 durations = None
636 if scheduling_method is not None:
637 from qiskit.transpiler.instruction_durations import InstructionDurations
638 if backend:
639 durations = InstructionDurations.from_backend(backend).update(inst_durations, dt)
640 else:
641 durations = InstructionDurations(inst_durations, dt)
642
643 if not isinstance(durations, list):
644 durations = [durations] * num_circuits
645 return durations
646
647
648 def _parse_seed_transpiler(seed_transpiler, num_circuits):
649 if not isinstance(seed_transpiler, list):
650 seed_transpiler = [seed_transpiler] * num_circuits
651 return seed_transpiler
652
653
654 def _parse_optimization_level(optimization_level, num_circuits):
655 if not isinstance(optimization_level, list):
656 optimization_level = [optimization_level] * num_circuits
657 return optimization_level
658
659
660 def _parse_pass_manager(pass_manager, num_circuits):
661 if not isinstance(pass_manager, list):
662 pass_manager = [pass_manager] * num_circuits
663 return pass_manager
664
665
666 def _parse_callback(callback, num_circuits):
667 if not isinstance(callback, list):
668 callback = [callback] * num_circuits
669 return callback
670
671
672 def _parse_faulty_qubits_map(backend, num_circuits):
673 if backend is None:
674 return [None] * num_circuits
675 if not isinstance(backend, list):
676 return [_create_faulty_qubits_map(backend)] * num_circuits
677 faulty_qubits_map = []
678 for a_backend in backend:
679 faulty_qubits_map.append(_create_faulty_qubits_map(a_backend))
680 return faulty_qubits_map
681
682
683 def _parse_output_name(output_name, circuits):
684 # naming and returning circuits
685 # output_name could be either a string or a list
686 if output_name is not None:
687 if isinstance(output_name, str):
688 # single circuit
689 if len(circuits) == 1:
690 return [output_name]
691 # multiple circuits
692 else:
693 raise TranspilerError("Expected a list object of length equal " +
694 "to that of the number of circuits " +
695 "being transpiled")
696 elif isinstance(output_name, list):
697 if len(circuits) == len(output_name) and \
698 all(isinstance(name, str) for name in output_name):
699 return output_name
700 else:
701 raise TranspilerError("The length of output_name list "
702 "must be equal to the number of "
703 "transpiled circuits and the output_name "
704 "list should be strings.")
705 else:
706 raise TranspilerError("The parameter output_name should be a string or a"
707 "list of strings: %s was used." % type(output_name))
708 else:
709 return [circuit.name for circuit in circuits]
710
[end of qiskit/compiler/transpile.py]
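For illustration, here is a hedged usage sketch of the `output_name` behaviour implemented by `_parse_output_name` above. It is not part of the repository; the circuits and basis gates are arbitrary placeholders, and exact behaviour may vary across Qiskit versions.
```
from qiskit import QuantumCircuit
from qiskit.compiler import transpile

qc_a = QuantumCircuit(1, name='a')
qc_b = QuantumCircuit(1, name='b')

# A list of names renames each transpiled circuit; a single string together with
# several circuits raises a TranspilerError, as _parse_output_name above shows.
out = transpile([qc_a, qc_b], basis_gates=['u3', 'cx'], output_name=['first', 'second'])
print([circ.name for circ in out])  # expected: ['first', 'second']
```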
[start of qiskit/transpiler/preset_passmanagers/level0.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Pass manager for optimization level 0, providing no explicit optimization.
14
15 Level 0 pass manager: no explicit optimization other than mapping to backend.
16 """
17
18 from qiskit.transpiler.passmanager_config import PassManagerConfig
19 from qiskit.transpiler.passmanager import PassManager
20
21 from qiskit.transpiler.passes import Unroller
22 from qiskit.transpiler.passes import BasisTranslator
23 from qiskit.transpiler.passes import UnrollCustomDefinitions
24 from qiskit.transpiler.passes import Unroll3qOrMore
25 from qiskit.transpiler.passes import CheckMap
26 from qiskit.transpiler.passes import CXDirection
27 from qiskit.transpiler.passes import SetLayout
28 from qiskit.transpiler.passes import TrivialLayout
29 from qiskit.transpiler.passes import DenseLayout
30 from qiskit.transpiler.passes import NoiseAdaptiveLayout
31 from qiskit.transpiler.passes import SabreLayout
32 from qiskit.transpiler.passes import BarrierBeforeFinalMeasurements
33 from qiskit.transpiler.passes import BasicSwap
34 from qiskit.transpiler.passes import LookaheadSwap
35 from qiskit.transpiler.passes import StochasticSwap
36 from qiskit.transpiler.passes import SabreSwap
37 from qiskit.transpiler.passes import FullAncillaAllocation
38 from qiskit.transpiler.passes import EnlargeWithAncilla
39 from qiskit.transpiler.passes import ApplyLayout
40 from qiskit.transpiler.passes import CheckCXDirection
41 from qiskit.transpiler.passes import Collect2qBlocks
42 from qiskit.transpiler.passes import ConsolidateBlocks
43 from qiskit.transpiler.passes import UnitarySynthesis
44 from qiskit.transpiler.passes import TimeUnitAnalysis
45 from qiskit.transpiler.passes import ALAPSchedule
46 from qiskit.transpiler.passes import ASAPSchedule
47
48 from qiskit.transpiler import TranspilerError
49
50
51 def level_0_pass_manager(pass_manager_config: PassManagerConfig) -> PassManager:
52 """Level 0 pass manager: no explicit optimization other than mapping to backend.
53
54 This pass manager applies the user-given initial layout. If none is given, a trivial
55 layout consisting of mapping the i-th virtual qubit to the i-th physical qubit is used.
56 Any unused physical qubit is allocated as ancilla space.
57
58 The pass manager then unrolls the circuit to the desired basis, and transforms the
59 circuit to match the coupling map.
60
61 Note:
62 In simulators where ``coupling_map=None``, only the unrolling and
63 optimization stages are done.
64
65 Args:
66 pass_manager_config: configuration of the pass manager.
67
68 Returns:
69 a level 0 pass manager.
70
71 Raises:
72 TranspilerError: if the passmanager config is invalid.
73 """
74 basis_gates = pass_manager_config.basis_gates
75 coupling_map = pass_manager_config.coupling_map
76 initial_layout = pass_manager_config.initial_layout
77 layout_method = pass_manager_config.layout_method or 'trivial'
78 routing_method = pass_manager_config.routing_method or 'stochastic'
79 translation_method = pass_manager_config.translation_method or 'translator'
80 scheduling_method = pass_manager_config.scheduling_method
81 instruction_durations = pass_manager_config.instruction_durations
82 seed_transpiler = pass_manager_config.seed_transpiler
83 backend_properties = pass_manager_config.backend_properties
84
85 # 1. Choose an initial layout if not set by user (default: trivial layout)
86 _given_layout = SetLayout(initial_layout)
87
88 def _choose_layout_condition(property_set):
89 return not property_set['layout']
90
91 if layout_method == 'trivial':
92 _choose_layout = TrivialLayout(coupling_map)
93 elif layout_method == 'dense':
94 _choose_layout = DenseLayout(coupling_map, backend_properties)
95 elif layout_method == 'noise_adaptive':
96 _choose_layout = NoiseAdaptiveLayout(backend_properties)
97 elif layout_method == 'sabre':
98 _choose_layout = SabreLayout(coupling_map, max_iterations=1, seed=seed_transpiler)
99 else:
100 raise TranspilerError("Invalid layout method %s." % layout_method)
101
102 # 2. Extend dag/layout with ancillas using the full coupling map
103 _embed = [FullAncillaAllocation(coupling_map), EnlargeWithAncilla(), ApplyLayout()]
104
105 # 3. Decompose so only 1-qubit and 2-qubit gates remain
106 _unroll3q = Unroll3qOrMore()
107
108 # 4. Swap to fit the coupling map
109 _swap_check = CheckMap(coupling_map)
110
111 def _swap_condition(property_set):
112 return not property_set['is_swap_mapped']
113
114 _swap = [BarrierBeforeFinalMeasurements()]
115 if routing_method == 'basic':
116 _swap += [BasicSwap(coupling_map)]
117 elif routing_method == 'stochastic':
118 _swap += [StochasticSwap(coupling_map, trials=20, seed=seed_transpiler)]
119 elif routing_method == 'lookahead':
120 _swap += [LookaheadSwap(coupling_map, search_depth=2, search_width=2)]
121 elif routing_method == 'sabre':
122 _swap += [SabreSwap(coupling_map, heuristic='basic', seed=seed_transpiler)]
123 else:
124 raise TranspilerError("Invalid routing method %s." % routing_method)
125
126 # 5. Unroll to the basis
127 if translation_method == 'unroller':
128 _unroll = [Unroller(basis_gates)]
129 elif translation_method == 'translator':
130 from qiskit.circuit.equivalence_library import SessionEquivalenceLibrary as sel
131 _unroll = [UnrollCustomDefinitions(sel, basis_gates),
132 BasisTranslator(sel, basis_gates)]
133 elif translation_method == 'synthesis':
134 _unroll = [
135 Unroll3qOrMore(),
136 Collect2qBlocks(),
137 ConsolidateBlocks(basis_gates=basis_gates),
138 UnitarySynthesis(basis_gates),
139 ]
140 else:
141 raise TranspilerError("Invalid translation method %s." % translation_method)
142
143 # 6. Fix any bad CX directions
144 _direction_check = [CheckCXDirection(coupling_map)]
145
146 def _direction_condition(property_set):
147 return not property_set['is_direction_mapped']
148
149 _direction = [CXDirection(coupling_map)]
150
151 # 7. Schedule the circuit only when scheduling_method is supplied
152 if scheduling_method:
153 _scheduling = [TimeUnitAnalysis(instruction_durations)]
154 if scheduling_method in {'alap', 'as_late_as_possible'}:
155 _scheduling += [ALAPSchedule(instruction_durations)]
156 elif scheduling_method in {'asap', 'as_soon_as_possible'}:
157 _scheduling += [ASAPSchedule(instruction_durations)]
158 else:
159 raise TranspilerError("Invalid scheduling method %s." % scheduling_method)
160
161 # Build pass manager
162 pm0 = PassManager()
163 if coupling_map:
164 pm0.append(_given_layout)
165 pm0.append(_choose_layout, condition=_choose_layout_condition)
166 pm0.append(_embed)
167 pm0.append(_unroll3q)
168 pm0.append(_swap_check)
169 pm0.append(_swap, condition=_swap_condition)
170 pm0.append(_unroll)
171 if coupling_map and not coupling_map.is_symmetric:
172 pm0.append(_direction_check)
173 pm0.append(_direction, condition=_direction_condition)
174 if scheduling_method:
175 pm0.append(_scheduling)
176 return pm0
177
[end of qiskit/transpiler/preset_passmanagers/level0.py]
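For illustration, a hedged sketch (not part of the file above) of how this level-0 pass manager might be constructed and run directly. The device parameters are made up, and the exact output depends on the Qiskit version.
```
from qiskit import QuantumCircuit
from qiskit.transpiler import CouplingMap
from qiskit.transpiler.passmanager_config import PassManagerConfig
from qiskit.transpiler.preset_passmanagers.level0 import level_0_pass_manager

# A 3-qubit line device: cx(0, 2) is not directly available, so routing inserts swaps.
config = PassManagerConfig(basis_gates=['u3', 'cx'],
                           coupling_map=CouplingMap([[0, 1], [1, 2]]))
pm = level_0_pass_manager(config)

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 2)
print(pm.run(qc))
```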
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
54c8870623e28cb3f9f4eb92828fb8fd5f9dde49
|
ALAP scheduling fails when a circuit with a custom instruction is supplied
### What is the current behavior?
Transpiling with `scheduling_method="alap"` but without `backend` fails when a circuit with custom instructions is supplied.
This is due to an instruction name mismatch: it occurs because the `ALAPSchedule` pass calls `reverse_ops()`, which changes the instruction name. (So this error does not happen if we use the ASAP scheduler.)
### Steps to reproduce the problem
```
from qiskit import QuantumCircuit, transpile

bell = QuantumCircuit(2, name="bell")
bell.h(0)
bell.cx(0, 1)
qc = QuantumCircuit(2)
qc.delay(500, 1)
qc.append(bell.to_instruction(), [0, 1])
scheduled = transpile(qc,
scheduling_method='alap',
instruction_durations=[('bell', [0, 1], 1000)])
==> qiskit.transpiler.exceptions.TranspilerError: 'Duration of bell_reverse on qubits [0, 1] is not found.'
```
### What is the expected behavior?
The above circuit should be scheduled successfully.
### Suggested solutions
We may have three options:
1. don't add "_reverse" to the name of the reversed instruction
2. patch the `instruction_durations` with the new .._reverse gate duration as well (a user-side sketch of this follows below)
3. rewrite the ALAP scheduler without using `reverse_ops()`
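For illustration only, a hedged sketch of what option 2 could look like from the user side, assuming the reversed instruction keeps the same duration. This is a workaround sketch, not a proposed fix:
```
from qiskit import QuantumCircuit, transpile

bell = QuantumCircuit(2, name="bell")
bell.h(0)
bell.cx(0, 1)

qc = QuantumCircuit(2)
qc.delay(500, 1)
qc.append(bell.to_instruction(), [0, 1])

# Also register the duration under the renamed instruction that reverse_ops() produces.
durations = [('bell', [0, 1], 1000),
             ('bell_reverse', [0, 1], 1000)]
scheduled = transpile(qc, scheduling_method='alap', instruction_durations=durations)
```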
|
2020-10-01T03:37:31Z
|
<patch>
diff --git a/qiskit/transpiler/passes/scheduling/alap.py b/qiskit/transpiler/passes/scheduling/alap.py
--- a/qiskit/transpiler/passes/scheduling/alap.py
+++ b/qiskit/transpiler/passes/scheduling/alap.py
@@ -11,10 +11,13 @@
# that they have been altered from the originals.
"""ALAP Scheduling."""
+from collections import defaultdict
+from typing import List
+from qiskit.circuit.delay import Delay
+from qiskit.dagcircuit import DAGCircuit
from qiskit.transpiler.basepasses import TransformationPass
from qiskit.transpiler.exceptions import TranspilerError
-from qiskit.transpiler.passes.scheduling.asap import ASAPSchedule
class ALAPSchedule(TransformationPass):
@@ -27,7 +30,7 @@ def __init__(self, durations):
durations (InstructionDurations): Durations of instructions to be used in scheduling
"""
super().__init__()
- self._asap = ASAPSchedule(durations)
+ self.durations = durations
def run(self, dag, time_unit=None): # pylint: disable=arguments-differ
"""Run the ALAPSchedule pass on `dag`.
@@ -48,9 +51,42 @@ def run(self, dag, time_unit=None): # pylint: disable=arguments-differ
if not time_unit:
time_unit = self.property_set['time_unit']
- new_dag = dag.reverse_ops()
- new_dag = self._asap.run(new_dag, time_unit)
- new_dag = new_dag.reverse_ops()
+ new_dag = DAGCircuit()
+ for qreg in dag.qregs.values():
+ new_dag.add_qreg(qreg)
+ for creg in dag.cregs.values():
+ new_dag.add_creg(creg)
+
+ qubit_time_available = defaultdict(int)
+
+ def pad_with_delays(qubits: List[int], until, unit) -> None:
+ """Pad idle time-slots in ``qubits`` with delays in ``unit`` until ``until``."""
+ for q in qubits:
+ if qubit_time_available[q] < until:
+ idle_duration = until - qubit_time_available[q]
+ new_dag.apply_operation_front(Delay(idle_duration, unit), [q], [])
+
+ for node in reversed(list(dag.topological_op_nodes())):
+ start_time = max(qubit_time_available[q] for q in node.qargs)
+ pad_with_delays(node.qargs, until=start_time, unit=time_unit)
+
+ new_node = new_dag.apply_operation_front(node.op, node.qargs, node.cargs,
+ node.condition)
+ duration = self.durations.get(node.op, node.qargs, unit=time_unit)
+ # set duration for each instruction (tricky but necessary)
+ new_node.op.duration = duration
+ new_node.op.unit = time_unit
+
+ stop_time = start_time + duration
+ # update time table
+ for q in node.qargs:
+ qubit_time_available[q] = stop_time
+
+ working_qubits = qubit_time_available.keys()
+ circuit_duration = max(qubit_time_available[q] for q in working_qubits)
+ pad_with_delays(new_dag.qubits, until=circuit_duration, unit=time_unit)
new_dag.name = dag.name
+ new_dag.duration = circuit_duration
+ new_dag.unit = time_unit
return new_dag
diff --git a/qiskit/transpiler/passes/scheduling/asap.py b/qiskit/transpiler/passes/scheduling/asap.py
--- a/qiskit/transpiler/passes/scheduling/asap.py
+++ b/qiskit/transpiler/passes/scheduling/asap.py
@@ -50,6 +50,7 @@ def run(self, dag, time_unit=None): # pylint: disable=arguments-differ
if not time_unit:
time_unit = self.property_set['time_unit']
+
new_dag = DAGCircuit()
for qreg in dag.qregs.values():
new_dag.add_qreg(qreg)
</patch>
|
[]
|
[]
| ||||
ytdl-org__youtube-dl-588
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support individual comedycentral videos
Add support for individual videos - currently, only full episodes are supported. Test URL: http://www.colbertnation.com/the-colbert-report-videos/229765/june-08-2009/operation-iraqi-stephen---john-mccain
</issue>
<code>
[start of README.md]
1 % YOUTUBE-DL(1)
2
3 # NAME
4 youtube-dl
5
6 # SYNOPSIS
7 **youtube-dl** [OPTIONS] URL [URL...]
8
9 # DESCRIPTION
10 **youtube-dl** is a small command-line program to download videos from
11 YouTube.com and a few more sites. It requires the Python interpreter, version
12 2.x (x being at least 6), and it is not platform specific. It should work in
13 your Unix box, in Windows or in Mac OS X. It is released to the public domain,
14 which means you can modify it, redistribute it or use it however you like.
15
16 # OPTIONS
17 -h, --help print this help text and exit
18 --version print program version and exit
19 -U, --update update this program to latest version
20 -i, --ignore-errors continue on download errors
21 -r, --rate-limit LIMIT download rate limit (e.g. 50k or 44.6m)
22 -R, --retries RETRIES number of retries (default is 10)
23 --buffer-size SIZE size of download buffer (e.g. 1024 or 16k) (default
24 is 1024)
25 --no-resize-buffer do not automatically adjust the buffer size. By
26 default, the buffer size is automatically resized
27 from an initial value of SIZE.
28 --dump-user-agent display the current browser identification
29 --user-agent UA specify a custom user agent
30 --list-extractors List all supported extractors and the URLs they
31 would handle
32
33 ## Video Selection:
34 --playlist-start NUMBER playlist video to start at (default is 1)
35 --playlist-end NUMBER playlist video to end at (default is last)
36 --match-title REGEX download only matching titles (regex or caseless
37 sub-string)
38 --reject-title REGEX skip download for matching titles (regex or
39 caseless sub-string)
40 --max-downloads NUMBER Abort after downloading NUMBER files
41
42 ## Filesystem Options:
43 -t, --title use title in file name
44 --id use video ID in file name
45 -l, --literal [deprecated] alias of --title
46 -A, --auto-number number downloaded files starting from 00000
47 -o, --output TEMPLATE output filename template. Use %(title)s to get the
48 title, %(uploader)s for the uploader name,
49 %(autonumber)s to get an automatically incremented
50 number, %(ext)s for the filename extension,
51 %(upload_date)s for the upload date (YYYYMMDD),
52 %(extractor)s for the provider (youtube, metacafe,
53 etc), %(id)s for the video id and %% for a literal
54 percent. Use - to output to stdout. Can also be
55 used to download to a different directory, for
56 example with -o '/my/downloads/%(uploader)s/%(title
57 )s-%(id)s.%(ext)s' .
58 --restrict-filenames Restrict filenames to only ASCII characters, and
59 avoid "&" and spaces in filenames
60 -a, --batch-file FILE file containing URLs to download ('-' for stdin)
61 -w, --no-overwrites do not overwrite files
62 -c, --continue resume partially downloaded files
63 --no-continue do not resume partially downloaded files (restart
64 from beginning)
65 --cookies FILE file to read cookies from and dump cookie jar in
66 --no-part do not use .part files
67 --no-mtime do not use the Last-modified header to set the file
68 modification time
69 --write-description write video description to a .description file
70 --write-info-json write video metadata to a .info.json file
71
72 ## Verbosity / Simulation Options:
73 -q, --quiet activates quiet mode
74 -s, --simulate do not download the video and do not write anything
75 to disk
76 --skip-download do not download the video
77 -g, --get-url simulate, quiet but print URL
78 -e, --get-title simulate, quiet but print title
79 --get-thumbnail simulate, quiet but print thumbnail URL
80 --get-description simulate, quiet but print video description
81 --get-filename simulate, quiet but print output filename
82 --get-format simulate, quiet but print output format
83 --no-progress do not print progress bar
84 --console-title display progress in console titlebar
85 -v, --verbose print various debugging information
86
87 ## Video Format Options:
88 -f, --format FORMAT video format code
89 --all-formats download all available video formats
90 --prefer-free-formats prefer free video formats unless a specific one is
91 requested
92 --max-quality FORMAT highest quality format to download
93 -F, --list-formats list all available formats (currently youtube only)
94 --write-srt write video closed captions to a .srt file
95 (currently youtube only)
96 --srt-lang LANG language of the closed captions to download
97 (optional) use IETF language tags like 'en'
98
99 ## Authentication Options:
100 -u, --username USERNAME account username
101 -p, --password PASSWORD account password
102 -n, --netrc use .netrc authentication data
103
104 ## Post-processing Options:
105 -x, --extract-audio convert video files to audio-only files (requires
106 ffmpeg or avconv and ffprobe or avprobe)
107 --audio-format FORMAT "best", "aac", "vorbis", "mp3", "m4a", or "wav";
108 best by default
109 --audio-quality QUALITY ffmpeg/avconv audio quality specification, insert a
110 value between 0 (better) and 9 (worse) for VBR or a
111 specific bitrate like 128K (default 5)
112 -k, --keep-video keeps the video file on disk after the post-
113 processing; the video is erased by default
114
115 # CONFIGURATION
116
117 You can configure youtube-dl by placing default arguments (such as `--extract-audio --no-mtime` to always extract the audio and not copy the mtime) into `/etc/youtube-dl.conf` and/or `~/.config/youtube-dl.conf`.
118
119 # OUTPUT TEMPLATE
120
121 The `-o` option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences have the format `%(NAME)s`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a lowercase S. Allowed names are:
122
123 - `id`: The sequence will be replaced by the video identifier.
124 - `url`: The sequence will be replaced by the video URL.
125 - `uploader`: The sequence will be replaced by the nickname of the person who uploaded the video.
126 - `upload_date`: The sequence will be replaced by the upload date in YYYYMMDD format.
127 - `title`: The sequence will be replaced by the video title.
128 - `ext`: The sequence will be replaced by the appropriate extension (like flv or mp4).
129 - `epoch`: The sequence will be replaced by the Unix epoch when creating the file.
130 - `autonumber`: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
131
132 The current default template is `%(id)s.%(ext)s`, but that will be switched to `%(title)s-%(id)s.%(ext)s` (which can be requested with `-t` at the moment).
133
134 In some cases, you don't want special characters such as 中, spaces, or & in the filename, for example when transferring the downloaded filename to a Windows system or passing it through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
135
136 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
137 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
138 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
139 youtube-dl_test_video_.mp4 # A simple file name
140
141 # FAQ
142
143 ### Can you please put the -b option back?
144
145 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the -b option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the -f option and youtube-dl will try to download it.
146
147 ### I get HTTP error 402 when trying to download a video. What's this?
148
149 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering providing a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is pointing a web browser to the YouTube URL, solving the CAPTCHA, and restarting youtube-dl.
150
151 ### I have downloaded a video but how can I play it?
152
153 Once the video is fully downloaded, use any video player, such as [vlc](http://www.videolan.org) or [mplayer](http://www.mplayerhq.hu/).
154
155 ### The links provided by youtube-dl -g are not working anymore
156
157 The URLs youtube-dl outputs require the downloader to have the correct cookies. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used; use `--dump-user-agent` to see the one in use by youtube-dl.
158
159 ### ERROR: no fmt_url_map or conn information found in video info
160
161 YouTube switched to a new video info format in July 2011, which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
162
163 ### ERROR: unable to download video ###
164
165 YouTube has required an additional signature since September 2012, which is not supported by old versions of youtube-dl. You can update youtube-dl with `sudo youtube-dl --update`.
166
167 ### SyntaxError: Non-ASCII character ###
168
169 The error
170
171 File "youtube-dl", line 2
172 SyntaxError: Non-ASCII character '\x93' ...
173
174 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
175
176 ### What is this binary file? Where has the code gone?
177
178 Since June 2012 (#342) youtube-dl is packed as an executable zipfile; simply unzip it (you might need to rename it to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
179
180 ### The exe throws a *Runtime error from Visual C++*
181
182 To run the exe you need to install first the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/en-us/download/details.aspx?id=29).
183
184 # COPYRIGHT
185
186 youtube-dl is released into the public domain by the copyright holders.
187
188 This README file was originally written by Daniel Bolton (<https://github.com/dbbolton>) and is likewise released into the public domain.
189
190 # BUGS
191
192 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>
193
194 Please include:
195
196 * Your exact command line, like `youtube-dl -t "http://www.youtube.com/watch?v=uHlDtZ6Oc3s&feature=channel_video_title"`. A common mistake is not to escape the `&`. Putting URLs in quotes should solve this problem.
197 * The output of `youtube-dl --version`
198 * The output of `python --version`
199 * The name and version of your Operating System ("Ubuntu 11.04 x64" or "Windows 7 x64" is usually enough).
200
201 For discussions, join us in the irc channel #youtube-dl on freenode.
202
[end of README.md]
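As a hedged illustration of the OUTPUT TEMPLATE section above: the `%(NAME)s` sequences behave like ordinary Python %-formatting over a dictionary of video fields. The field values below are made-up placeholders modelled on the README examples.
```
template = '%(uploader)s/%(title)s-%(id)s.%(ext)s'
fields = {'uploader': 'someuser',
          'title': 'youtube-dl test video',
          'id': 'BaW_jenozKc',
          'ext': 'mp4'}
print(template % fields)  # someuser/youtube-dl test video-BaW_jenozKc.mp4
```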
[start of devscripts/transition_helper.py]
1 #!/usr/bin/env python
2
3 import sys, os
4
5 try:
6 import urllib.request as compat_urllib_request
7 except ImportError: # Python 2
8 import urllib2 as compat_urllib_request
9
10 sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
11 sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
12 sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
13
14 try:
15 raw_input()
16 except NameError: # Python 3
17 input()
18
19 filename = sys.argv[0]
20
21 API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
22 BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
23
24 if not os.access(filename, os.W_OK):
25 sys.exit('ERROR: no write permissions on %s' % filename)
26
27 try:
28 urlh = compat_urllib_request.urlopen(BIN_URL)
29 newcontent = urlh.read()
30 urlh.close()
31 except (IOError, OSError) as err:
32 sys.exit('ERROR: unable to download latest version')
33
34 try:
35 with open(filename, 'wb') as outf:
36 outf.write(newcontent)
37 except (IOError, OSError) as err:
38 sys.exit('ERROR: unable to overwrite current version')
39
40 sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
41
[end of devscripts/transition_helper.py]
[start of devscripts/transition_helper_exe/youtube-dl.py]
1 #!/usr/bin/env python
2
3 import sys, os
4 import urllib2
5
6 sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
7 sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
8 sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')
9
10 raw_input()
11
12 filename = sys.argv[0]
13
14 API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
15 EXE_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl.exe"
16
17 if not os.access(filename, os.W_OK):
18 sys.exit('ERROR: no write permissions on %s' % filename)
19
20 exe = os.path.abspath(filename)
21 directory = os.path.dirname(exe)
22 if not os.access(directory, os.W_OK):
23 sys.exit('ERROR: no write permissions on %s' % directory)
24
25 try:
26 urlh = urllib2.urlopen(EXE_URL)
27 newcontent = urlh.read()
28 urlh.close()
29 with open(exe + '.new', 'wb') as outf:
30 outf.write(newcontent)
31 except (IOError, OSError) as err:
32 sys.exit('ERROR: unable to download latest version')
33
34 try:
35 bat = os.path.join(directory, 'youtube-dl-updater.bat')
36 b = open(bat, 'w')
37 b.write("""
38 echo Updating youtube-dl...
39 ping 127.0.0.1 -n 5 -w 1000 > NUL
40 move /Y "%s.new" "%s"
41 del "%s"
42 \n""" %(exe, exe, bat))
43 b.close()
44
45 os.startfile(bat)
46 except (IOError, OSError) as err:
47 sys.exit('ERROR: unable to overwrite current version')
48
49 sys.stderr.write(u'Done! Now you can run youtube-dl.\n')
50
[end of devscripts/transition_helper_exe/youtube-dl.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from distutils.core import setup
6 import pkg_resources
7 import sys
8
9 try:
10 import py2exe
11 """This will create an exe that needs Microsoft Visual C++ 2008 Redistributable Package"""
12 except ImportError:
13 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
14 print("Cannot import py2exe", file=sys.stderr)
15 exit(1)
16
17 py2exe_options = {
18 "bundle_files": 1,
19 "compressed": 1,
20 "optimize": 2,
21 "dist_dir": '.',
22 "dll_excludes": ['w9xpopen.exe']
23 }
24 py2exe_console = [{
25 "script": "./youtube_dl/__main__.py",
26 "dest_base": "youtube-dl",
27 }]
28 py2exe_params = {
29 'console': py2exe_console,
30 'options': { "py2exe": py2exe_options },
31 'zipfile': None
32 }
33
34 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
35 params = py2exe_params
36 else:
37 params = {
38 'scripts': ['bin/youtube-dl'],
39 'data_files': [('etc/bash_completion.d', ['youtube-dl.bash-completion']), # Installing system-wide would require sudo...
40 ('share/doc/youtube_dl', ['README.txt']),
41 ('share/man/man1/', ['youtube-dl.1'])]
42 }
43
44 # Get the version from youtube_dl/version.py without importing the package
45 exec(compile(open('youtube_dl/version.py').read(), 'youtube_dl/version.py', 'exec'))
46
47 setup(
48 name = 'youtube_dl',
49 version = __version__,
50 description = 'YouTube video downloader',
51 long_description = 'Small command-line program to download videos from YouTube.com and other video sites.',
52 url = 'https://github.com/rg3/youtube-dl',
53 author = 'Ricardo Garcia',
54 maintainer = 'Philipp Hagemeister',
55 maintainer_email = '[email protected]',
56 packages = ['youtube_dl'],
57
58 # Provokes warning on most systems (why?!)
59 #test_suite = 'nose.collector',
60 #test_requires = ['nosetest'],
61
62 classifiers = [
63 "Topic :: Multimedia :: Video",
64 "Development Status :: 5 - Production/Stable",
65 "Environment :: Console",
66 "License :: Public Domain",
67 "Programming Language :: Python :: 2.6",
68 "Programming Language :: Python :: 2.7",
69 "Programming Language :: Python :: 3",
70 "Programming Language :: Python :: 3.3"
71 ],
72
73 **params
74 )
75
[end of setup.py]
[start of youtube_dl/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import with_statement
5 from __future__ import absolute_import
6
7 __authors__ = (
8 'Ricardo Garcia Gonzalez',
9 'Danny Colligan',
10 'Benjamin Johnson',
11 'Vasyl\' Vavrychuk',
12 'Witold Baryluk',
13 'Paweł Paprota',
14 'Gergely Imreh',
15 'Rogério Brito',
16 'Philipp Hagemeister',
17 'Sören Schulze',
18 'Kevin Ngo',
19 'Ori Avtalion',
20 'shizeeg',
21 'Filippo Valsorda',
22 'Christian Albrecht',
23 )
24
25 __license__ = 'Public Domain'
26
27 import getpass
28 import optparse
29 import os
30 import re
31 import shlex
32 import socket
33 import subprocess
34 import sys
35 import warnings
36
37 from .utils import *
38 from .version import __version__
39 from .FileDownloader import *
40 from .InfoExtractors import *
41 from .PostProcessor import *
42
43 def updateSelf(downloader, filename):
44 """Update the program file with the latest version from the repository"""
45
46 # TODO: at least, check https certificates
47
48 from zipimport import zipimporter
49
50 API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
51 BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"
52 EXE_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl.exe"
53
54 if hasattr(sys, "frozen"): # PY2EXE
55 if not os.access(filename, os.W_OK):
56 sys.exit('ERROR: no write permissions on %s' % filename)
57
58 downloader.to_screen(u'Updating to latest version...')
59
60 urla = compat_urllib_request.urlopen(API_URL)
61 download = filter(lambda x: x["name"] == "youtube-dl.exe", json.loads(urla.read()))
62 if not download:
63 downloader.to_screen(u'ERROR: can\'t find the current version. Please try again later.')
64 return
65 newversion = download[0]["description"].strip()
66 if newversion == __version__:
67 downloader.to_screen(u'youtube-dl is up-to-date (' + __version__ + ')')
68 return
69 urla.close()
70
71 exe = os.path.abspath(filename)
72 directory = os.path.dirname(exe)
73 if not os.access(directory, os.W_OK):
74 sys.exit('ERROR: no write permissions on %s' % directory)
75
76 try:
77 urlh = compat_urllib_request.urlopen(EXE_URL)
78 newcontent = urlh.read()
79 urlh.close()
80 with open(exe + '.new', 'wb') as outf:
81 outf.write(newcontent)
82 except (IOError, OSError) as err:
83 sys.exit('ERROR: unable to download latest version')
84
85 try:
86 bat = os.path.join(directory, 'youtube-dl-updater.bat')
87 b = open(bat, 'w')
88 b.write("""
89 echo Updating youtube-dl...
90 ping 127.0.0.1 -n 5 -w 1000 > NUL
91 move /Y "%s.new" "%s"
92 del "%s"
93 \n""" %(exe, exe, bat))
94 b.close()
95
96 os.startfile(bat)
97 except (IOError, OSError) as err:
98 sys.exit('ERROR: unable to overwrite current version')
99
100 elif isinstance(globals().get('__loader__'), zipimporter): # UNIX ZIP
101 if not os.access(filename, os.W_OK):
102 sys.exit('ERROR: no write permissions on %s' % filename)
103
104 downloader.to_screen(u'Updating to latest version...')
105
106 urla = compat_urllib_request.urlopen(API_URL)
107 download = [x for x in json.loads(urla.read().decode('utf8')) if x["name"] == "youtube-dl"]
108 if not download:
109 downloader.to_screen(u'ERROR: can\'t find the current version. Please try again later.')
110 return
111 newversion = download[0]["description"].strip()
112 if newversion == __version__:
113 downloader.to_screen(u'youtube-dl is up-to-date (' + __version__ + ')')
114 return
115 urla.close()
116
117 try:
118 urlh = compat_urllib_request.urlopen(BIN_URL)
119 newcontent = urlh.read()
120 urlh.close()
121 except (IOError, OSError) as err:
122 sys.exit('ERROR: unable to download latest version')
123
124 try:
125 with open(filename, 'wb') as outf:
126 outf.write(newcontent)
127 except (IOError, OSError) as err:
128 sys.exit('ERROR: unable to overwrite current version')
129
130 else:
131 downloader.to_screen(u'It looks like you installed youtube-dl with pip or setup.py. Please use that to update.')
132 return
133
134 downloader.to_screen(u'Updated youtube-dl. Restart youtube-dl to use the new version.')
135
136 def parseOpts():
137 def _readOptions(filename_bytes):
138 try:
139 optionf = open(filename_bytes)
140 except IOError:
141 return [] # silently skip if file is not present
142 try:
143 res = []
144 for l in optionf:
145 res += shlex.split(l, comments=True)
146 finally:
147 optionf.close()
148 return res
149
150 def _format_option_string(option):
151 ''' ('-o', '--option') -> -o, --format METAVAR'''
152
153 opts = []
154
155 if option._short_opts:
156 opts.append(option._short_opts[0])
157 if option._long_opts:
158 opts.append(option._long_opts[0])
159 if len(opts) > 1:
160 opts.insert(1, ', ')
161
162 if option.takes_value(): opts.append(' %s' % option.metavar)
163
164 return "".join(opts)
165
166 def _find_term_columns():
167 columns = os.environ.get('COLUMNS', None)
168 if columns:
169 return int(columns)
170
171 try:
172 sp = subprocess.Popen(['stty', 'size'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
173 out,err = sp.communicate()
174 return int(out.split()[1])
175 except:
176 pass
177 return None
178
179 max_width = 80
180 max_help_position = 80
181
182 # No need to wrap help messages if we're on a wide console
183 columns = _find_term_columns()
184 if columns: max_width = columns
185
186 fmt = optparse.IndentedHelpFormatter(width=max_width, max_help_position=max_help_position)
187 fmt.format_option_strings = _format_option_string
188
189 kw = {
190 'version' : __version__,
191 'formatter' : fmt,
192 'usage' : '%prog [options] url [url...]',
193 'conflict_handler' : 'resolve',
194 }
195
196 parser = optparse.OptionParser(**kw)
197
198 # option groups
199 general = optparse.OptionGroup(parser, 'General Options')
200 selection = optparse.OptionGroup(parser, 'Video Selection')
201 authentication = optparse.OptionGroup(parser, 'Authentication Options')
202 video_format = optparse.OptionGroup(parser, 'Video Format Options')
203 postproc = optparse.OptionGroup(parser, 'Post-processing Options')
204 filesystem = optparse.OptionGroup(parser, 'Filesystem Options')
205 verbosity = optparse.OptionGroup(parser, 'Verbosity / Simulation Options')
206
207 general.add_option('-h', '--help',
208 action='help', help='print this help text and exit')
209 general.add_option('-v', '--version',
210 action='version', help='print program version and exit')
211 general.add_option('-U', '--update',
212 action='store_true', dest='update_self', help='update this program to latest version')
213 general.add_option('-i', '--ignore-errors',
214 action='store_true', dest='ignoreerrors', help='continue on download errors', default=False)
215 general.add_option('-r', '--rate-limit',
216 dest='ratelimit', metavar='LIMIT', help='download rate limit (e.g. 50k or 44.6m)')
217 general.add_option('-R', '--retries',
218 dest='retries', metavar='RETRIES', help='number of retries (default is %default)', default=10)
219 general.add_option('--buffer-size',
220 dest='buffersize', metavar='SIZE', help='size of download buffer (e.g. 1024 or 16k) (default is %default)', default="1024")
221 general.add_option('--no-resize-buffer',
222 action='store_true', dest='noresizebuffer',
223 help='do not automatically adjust the buffer size. By default, the buffer size is automatically resized from an initial value of SIZE.', default=False)
224 general.add_option('--dump-user-agent',
225 action='store_true', dest='dump_user_agent',
226 help='display the current browser identification', default=False)
227 general.add_option('--user-agent',
228 dest='user_agent', help='specify a custom user agent', metavar='UA')
229 general.add_option('--list-extractors',
230 action='store_true', dest='list_extractors',
231 help='List all supported extractors and the URLs they would handle', default=False)
232 general.add_option('--test', action='store_true', dest='test', default=False, help=optparse.SUPPRESS_HELP)
233
234 selection.add_option('--playlist-start',
235 dest='playliststart', metavar='NUMBER', help='playlist video to start at (default is %default)', default=1)
236 selection.add_option('--playlist-end',
237 dest='playlistend', metavar='NUMBER', help='playlist video to end at (default is last)', default=-1)
238 selection.add_option('--match-title', dest='matchtitle', metavar='REGEX',help='download only matching titles (regex or caseless sub-string)')
239 selection.add_option('--reject-title', dest='rejecttitle', metavar='REGEX',help='skip download for matching titles (regex or caseless sub-string)')
240 selection.add_option('--max-downloads', metavar='NUMBER', dest='max_downloads', help='Abort after downloading NUMBER files', default=None)
241
242 authentication.add_option('-u', '--username',
243 dest='username', metavar='USERNAME', help='account username')
244 authentication.add_option('-p', '--password',
245 dest='password', metavar='PASSWORD', help='account password')
246 authentication.add_option('-n', '--netrc',
247 action='store_true', dest='usenetrc', help='use .netrc authentication data', default=False)
248
249
250 video_format.add_option('-f', '--format',
251 action='store', dest='format', metavar='FORMAT', help='video format code')
252 video_format.add_option('--all-formats',
253 action='store_const', dest='format', help='download all available video formats', const='all')
254 video_format.add_option('--prefer-free-formats',
255 action='store_true', dest='prefer_free_formats', default=False, help='prefer free video formats unless a specific one is requested')
256 video_format.add_option('--max-quality',
257 action='store', dest='format_limit', metavar='FORMAT', help='highest quality format to download')
258 video_format.add_option('-F', '--list-formats',
259 action='store_true', dest='listformats', help='list all available formats (currently youtube only)')
260 video_format.add_option('--write-srt',
261 action='store_true', dest='writesubtitles',
262 help='write video closed captions to a .srt file (currently youtube only)', default=False)
263 video_format.add_option('--srt-lang',
264 action='store', dest='subtitleslang', metavar='LANG',
265 help='language of the closed captions to download (optional) use IETF language tags like \'en\'')
266
267
268 verbosity.add_option('-q', '--quiet',
269 action='store_true', dest='quiet', help='activates quiet mode', default=False)
270 verbosity.add_option('-s', '--simulate',
271 action='store_true', dest='simulate', help='do not download the video and do not write anything to disk', default=False)
272 verbosity.add_option('--skip-download',
273 action='store_true', dest='skip_download', help='do not download the video', default=False)
274 verbosity.add_option('-g', '--get-url',
275 action='store_true', dest='geturl', help='simulate, quiet but print URL', default=False)
276 verbosity.add_option('-e', '--get-title',
277 action='store_true', dest='gettitle', help='simulate, quiet but print title', default=False)
278 verbosity.add_option('--get-thumbnail',
279 action='store_true', dest='getthumbnail',
280 help='simulate, quiet but print thumbnail URL', default=False)
281 verbosity.add_option('--get-description',
282 action='store_true', dest='getdescription',
283 help='simulate, quiet but print video description', default=False)
284 verbosity.add_option('--get-filename',
285 action='store_true', dest='getfilename',
286 help='simulate, quiet but print output filename', default=False)
287 verbosity.add_option('--get-format',
288 action='store_true', dest='getformat',
289 help='simulate, quiet but print output format', default=False)
290 verbosity.add_option('--no-progress',
291 action='store_true', dest='noprogress', help='do not print progress bar', default=False)
292 verbosity.add_option('--console-title',
293 action='store_true', dest='consoletitle',
294 help='display progress in console titlebar', default=False)
295 verbosity.add_option('-v', '--verbose',
296 action='store_true', dest='verbose', help='print various debugging information', default=False)
297
298
299 filesystem.add_option('-t', '--title',
300 action='store_true', dest='usetitle', help='use title in file name', default=False)
301 filesystem.add_option('--id',
302 action='store_true', dest='useid', help='use video ID in file name', default=False)
303 filesystem.add_option('-l', '--literal',
304 action='store_true', dest='usetitle', help='[deprecated] alias of --title', default=False)
305 filesystem.add_option('-A', '--auto-number',
306 action='store_true', dest='autonumber',
307 help='number downloaded files starting from 00000', default=False)
308 filesystem.add_option('-o', '--output',
309 dest='outtmpl', metavar='TEMPLATE', help='output filename template. Use %(title)s to get the title, %(uploader)s for the uploader name, %(autonumber)s to get an automatically incremented number, %(ext)s for the filename extension, %(upload_date)s for the upload date (YYYYMMDD), %(extractor)s for the provider (youtube, metacafe, etc), %(id)s for the video id and %% for a literal percent. Use - to output to stdout. Can also be used to download to a different directory, for example with -o \'/my/downloads/%(uploader)s/%(title)s-%(id)s.%(ext)s\' .')
310 filesystem.add_option('--restrict-filenames',
311 action='store_true', dest='restrictfilenames',
312 help='Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames', default=False)
313 filesystem.add_option('-a', '--batch-file',
314 dest='batchfile', metavar='FILE', help='file containing URLs to download (\'-\' for stdin)')
315 filesystem.add_option('-w', '--no-overwrites',
316 action='store_true', dest='nooverwrites', help='do not overwrite files', default=False)
317 filesystem.add_option('-c', '--continue',
318 action='store_true', dest='continue_dl', help='resume partially downloaded files', default=True)
319 filesystem.add_option('--no-continue',
320 action='store_false', dest='continue_dl',
321 help='do not resume partially downloaded files (restart from beginning)')
322 filesystem.add_option('--cookies',
323 dest='cookiefile', metavar='FILE', help='file to read cookies from and dump cookie jar in')
324 filesystem.add_option('--no-part',
325 action='store_true', dest='nopart', help='do not use .part files', default=False)
326 filesystem.add_option('--no-mtime',
327 action='store_false', dest='updatetime',
328 help='do not use the Last-modified header to set the file modification time', default=True)
329 filesystem.add_option('--write-description',
330 action='store_true', dest='writedescription',
331 help='write video description to a .description file', default=False)
332 filesystem.add_option('--write-info-json',
333 action='store_true', dest='writeinfojson',
334 help='write video metadata to a .info.json file', default=False)
335
336
337 postproc.add_option('-x', '--extract-audio', action='store_true', dest='extractaudio', default=False,
338 help='convert video files to audio-only files (requires ffmpeg or avconv and ffprobe or avprobe)')
339 postproc.add_option('--audio-format', metavar='FORMAT', dest='audioformat', default='best',
340 help='"best", "aac", "vorbis", "mp3", "m4a", or "wav"; best by default')
341 postproc.add_option('--audio-quality', metavar='QUALITY', dest='audioquality', default='5',
342 help='ffmpeg/avconv audio quality specification, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like 128K (default 5)')
343 postproc.add_option('-k', '--keep-video', action='store_true', dest='keepvideo', default=False,
344 help='keeps the video file on disk after the post-processing; the video is erased by default')
345
346
347 parser.add_option_group(general)
348 parser.add_option_group(selection)
349 parser.add_option_group(filesystem)
350 parser.add_option_group(verbosity)
351 parser.add_option_group(video_format)
352 parser.add_option_group(authentication)
353 parser.add_option_group(postproc)
354
355 xdg_config_home = os.environ.get('XDG_CONFIG_HOME')
356 if xdg_config_home:
357 userConf = os.path.join(xdg_config_home, 'youtube-dl.conf')
358 else:
359 userConf = os.path.join(os.path.expanduser('~'), '.config', 'youtube-dl.conf')
360 argv = _readOptions('/etc/youtube-dl.conf') + _readOptions(userConf) + sys.argv[1:]
361 opts, args = parser.parse_args(argv)
362
363 return parser, opts, args
364
365 def gen_extractors():
366 """ Return a list of an instance of every supported extractor.
367 The order does matter; the first extractor matched is the one handling the URL.
368 """
369 return [
370 YoutubePlaylistIE(),
371 YoutubeChannelIE(),
372 YoutubeUserIE(),
373 YoutubeSearchIE(),
374 YoutubeIE(),
375 MetacafeIE(),
376 DailymotionIE(),
377 GoogleIE(),
378 GoogleSearchIE(),
379 PhotobucketIE(),
380 YahooIE(),
381 YahooSearchIE(),
382 DepositFilesIE(),
383 FacebookIE(),
384 BlipTVUserIE(),
385 BlipTVIE(),
386 VimeoIE(),
387 MyVideoIE(),
388 ComedyCentralIE(),
389 EscapistIE(),
390 CollegeHumorIE(),
391 XVideosIE(),
392 SoundcloudIE(),
393 InfoQIE(),
394 MixcloudIE(),
395 StanfordOpenClassroomIE(),
396 MTVIE(),
397 YoukuIE(),
398 XNXXIE(),
399 GooglePlusIE(),
400 ArteTvIE(),
401 GenericIE()
402 ]
403
404 def _real_main():
405 parser, opts, args = parseOpts()
406
407 # Open appropriate CookieJar
408 if opts.cookiefile is None:
409 jar = compat_cookiejar.CookieJar()
410 else:
411 try:
412 jar = compat_cookiejar.MozillaCookieJar(opts.cookiefile)
413 if os.path.isfile(opts.cookiefile) and os.access(opts.cookiefile, os.R_OK):
414 jar.load()
415 except (IOError, OSError) as err:
416 sys.exit(u'ERROR: unable to open cookie file')
417 # Set user agent
418 if opts.user_agent is not None:
419 std_headers['User-Agent'] = opts.user_agent
420
421 # Dump user agent
422 if opts.dump_user_agent:
423 print(std_headers['User-Agent'])
424 sys.exit(0)
425
426 # Batch file verification
427 batchurls = []
428 if opts.batchfile is not None:
429 try:
430 if opts.batchfile == '-':
431 batchfd = sys.stdin
432 else:
433 batchfd = open(opts.batchfile, 'r')
434 batchurls = batchfd.readlines()
435 batchurls = [x.strip() for x in batchurls]
436 batchurls = [x for x in batchurls if len(x) > 0 and not re.search(r'^[#/;]', x)]
437 except IOError:
438 sys.exit(u'ERROR: batch file could not be read')
439 all_urls = batchurls + args
440 all_urls = [url.strip() for url in all_urls]
441
442 # General configuration
443 cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
444 proxy_handler = compat_urllib_request.ProxyHandler()
445 opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
446 compat_urllib_request.install_opener(opener)
447 socket.setdefaulttimeout(300) # 5 minutes should be enough (famous last words)
448
449 extractors = gen_extractors()
450
451 if opts.list_extractors:
452 for ie in extractors:
453 print(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else ''))
454 matchedUrls = filter(lambda url: ie.suitable(url), all_urls)
455 all_urls = filter(lambda url: url not in matchedUrls, all_urls)
456 for mu in matchedUrls:
457 print(u' ' + mu)
458 sys.exit(0)
459
460 # Conflicting, missing and erroneous options
461 if opts.usenetrc and (opts.username is not None or opts.password is not None):
462 parser.error(u'using .netrc conflicts with giving username/password')
463 if opts.password is not None and opts.username is None:
464 parser.error(u'account username missing')
465 if opts.outtmpl is not None and (opts.usetitle or opts.autonumber or opts.useid):
466 parser.error(u'using output template conflicts with using title, video ID or auto number')
467 if opts.usetitle and opts.useid:
468 parser.error(u'using title conflicts with using video ID')
469 if opts.username is not None and opts.password is None:
470 opts.password = getpass.getpass(u'Type account password and press return:')
471 if opts.ratelimit is not None:
472 numeric_limit = FileDownloader.parse_bytes(opts.ratelimit)
473 if numeric_limit is None:
474 parser.error(u'invalid rate limit specified')
475 opts.ratelimit = numeric_limit
476 if opts.retries is not None:
477 try:
478 opts.retries = int(opts.retries)
479 except (TypeError, ValueError) as err:
480 parser.error(u'invalid retry count specified')
481 if opts.buffersize is not None:
482 numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize)
483 if numeric_buffersize is None:
484 parser.error(u'invalid buffer size specified')
485 opts.buffersize = numeric_buffersize
486 try:
487 opts.playliststart = int(opts.playliststart)
488 if opts.playliststart <= 0:
489 raise ValueError(u'Playlist start must be positive')
490 except (TypeError, ValueError) as err:
491 parser.error(u'invalid playlist start number specified')
492 try:
493 opts.playlistend = int(opts.playlistend)
494 if opts.playlistend != -1 and (opts.playlistend <= 0 or opts.playlistend < opts.playliststart):
495 raise ValueError(u'Playlist end must be greater than playlist start')
496 except (TypeError, ValueError) as err:
497 parser.error(u'invalid playlist end number specified')
498 if opts.extractaudio:
499 if opts.audioformat not in ['best', 'aac', 'mp3', 'vorbis', 'm4a', 'wav']:
500 parser.error(u'invalid audio format specified')
501 if opts.audioquality:
502 opts.audioquality = opts.audioquality.strip('k').strip('K')
503 if not opts.audioquality.isdigit():
504 parser.error(u'invalid audio quality specified')
505
506 # File downloader
507 fd = FileDownloader({
508 'usenetrc': opts.usenetrc,
509 'username': opts.username,
510 'password': opts.password,
511 'quiet': (opts.quiet or opts.geturl or opts.gettitle or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat),
512 'forceurl': opts.geturl,
513 'forcetitle': opts.gettitle,
514 'forcethumbnail': opts.getthumbnail,
515 'forcedescription': opts.getdescription,
516 'forcefilename': opts.getfilename,
517 'forceformat': opts.getformat,
518 'simulate': opts.simulate,
519 'skip_download': (opts.skip_download or opts.simulate or opts.geturl or opts.gettitle or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat),
520 'format': opts.format,
521 'format_limit': opts.format_limit,
522 'listformats': opts.listformats,
523 'outtmpl': ((opts.outtmpl is not None and opts.outtmpl.decode(preferredencoding()))
524 or (opts.format == '-1' and opts.usetitle and u'%(title)s-%(id)s-%(format)s.%(ext)s')
525 or (opts.format == '-1' and u'%(id)s-%(format)s.%(ext)s')
526 or (opts.usetitle and opts.autonumber and u'%(autonumber)s-%(title)s-%(id)s.%(ext)s')
527 or (opts.usetitle and u'%(title)s-%(id)s.%(ext)s')
528 or (opts.useid and u'%(id)s.%(ext)s')
529 or (opts.autonumber and u'%(autonumber)s-%(id)s.%(ext)s')
530 or u'%(id)s.%(ext)s'),
531 'restrictfilenames': opts.restrictfilenames,
532 'ignoreerrors': opts.ignoreerrors,
533 'ratelimit': opts.ratelimit,
534 'nooverwrites': opts.nooverwrites,
535 'retries': opts.retries,
536 'buffersize': opts.buffersize,
537 'noresizebuffer': opts.noresizebuffer,
538 'continuedl': opts.continue_dl,
539 'noprogress': opts.noprogress,
540 'playliststart': opts.playliststart,
541 'playlistend': opts.playlistend,
542 'logtostderr': opts.outtmpl == '-',
543 'consoletitle': opts.consoletitle,
544 'nopart': opts.nopart,
545 'updatetime': opts.updatetime,
546 'writedescription': opts.writedescription,
547 'writeinfojson': opts.writeinfojson,
548 'writesubtitles': opts.writesubtitles,
549 'subtitleslang': opts.subtitleslang,
550 'matchtitle': opts.matchtitle,
551 'rejecttitle': opts.rejecttitle,
552 'max_downloads': opts.max_downloads,
553 'prefer_free_formats': opts.prefer_free_formats,
554 'verbose': opts.verbose,
555 'test': opts.test,
556 })
557
558 if opts.verbose:
559 fd.to_screen(u'[debug] Proxy map: ' + str(proxy_handler.proxies))
560
561 for extractor in extractors:
562 fd.add_info_extractor(extractor)
563
564 # PostProcessors
565 if opts.extractaudio:
566 fd.add_post_processor(FFmpegExtractAudioPP(preferredcodec=opts.audioformat, preferredquality=opts.audioquality, keepvideo=opts.keepvideo))
567
568 # Update version
569 if opts.update_self:
570 updateSelf(fd, sys.argv[0])
571
572 # Maybe do nothing
573 if len(all_urls) < 1:
574 if not opts.update_self:
575 parser.error(u'you must provide at least one URL')
576 else:
577 sys.exit()
578
579 try:
580 retcode = fd.download(all_urls)
581 except MaxDownloadsReached:
582 fd.to_screen(u'--max-download limit reached, aborting.')
583 retcode = 101
584
585 # Dump cookie jar if requested
586 if opts.cookiefile is not None:
587 try:
588 jar.save()
589 except (IOError, OSError) as err:
590 sys.exit(u'ERROR: unable to save cookie jar')
591
592 sys.exit(retcode)
593
594 def main():
595 try:
596 _real_main()
597 except DownloadError:
598 sys.exit(1)
599 except SameFileError:
600 sys.exit(u'ERROR: fixed output name but more than one file to download')
601 except KeyboardInterrupt:
602 sys.exit(u'\nERROR: Interrupted by user')
603
[end of youtube_dl/__init__.py]
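A hedged sketch (not part of the file above) of the ordering rule stated in the `gen_extractors` docstring: the first extractor whose `suitable()` accepts a URL is the one that handles it. The URL is the test URL from the issue.
```
from youtube_dl import gen_extractors

url = 'http://www.colbertnation.com/the-colbert-report-videos/229765/june-08-2009/operation-iraqi-stephen---john-mccain'
for ie in gen_extractors():
    if ie.suitable(url):
        print('handled by ' + ie.IE_NAME)
        break
```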
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
a4680a590f46b1cee44be04d7f285bbe7b6a8d63
|
Support individual comedycentral videos
Add support for individual videos - currently, only full episodes are supported. Test URL: http://www.colbertnation.com/the-colbert-report-videos/229765/june-08-2009/operation-iraqi-stephen---john-mccain
|
2012-12-12T03:45:54Z
|
<patch>
diff --git a/youtube_dl/InfoExtractors.py b/youtube_dl/InfoExtractors.py
--- a/youtube_dl/InfoExtractors.py
+++ b/youtube_dl/InfoExtractors.py
@@ -2397,7 +2397,19 @@ def _real_extract(self,url):
class ComedyCentralIE(InfoExtractor):
"""Information extractor for The Daily Show and Colbert Report """
- _VALID_URL = r'^(:(?P<shortname>tds|thedailyshow|cr|colbert|colbertnation|colbertreport))|(https?://)?(www\.)?(?P<showname>thedailyshow|colbertnation)\.com/full-episodes/(?P<episode>.*)$'
+ # urls can be abbreviations like :thedailyshow or :colbert
+ # urls for episodes like:
+ # or urls for clips like: http://www.thedailyshow.com/watch/mon-december-10-2012/any-given-gun-day
+ # or: http://www.colbertnation.com/the-colbert-report-videos/421667/november-29-2012/moon-shattering-news
+ # or: http://www.colbertnation.com/the-colbert-report-collections/422008/festival-of-lights/79524
+ _VALID_URL = r"""^(:(?P<shortname>tds|thedailyshow|cr|colbert|colbertnation|colbertreport)
+ |(https?://)?(www\.)?
+ (?P<showname>thedailyshow|colbertnation)\.com/
+ (full-episodes/(?P<episode>.*)|
+ (?P<clip>
+ (the-colbert-report-(videos|collections)/(?P<clipID>[0-9]+)/[^/]*/(?P<cntitle>.*?))
+ |(watch/(?P<date>[^/]*)/(?P<tdstitle>.*)))))
+ $"""
IE_NAME = u'comedycentral'
_available_formats = ['3500', '2200', '1700', '1200', '750', '400']
@@ -2419,6 +2431,10 @@ class ComedyCentralIE(InfoExtractor):
'400': '384x216',
}
+ def suitable(self, url):
+ """Receives a URL and returns True if suitable for this IE."""
+ return re.match(self._VALID_URL, url, re.VERBOSE) is not None
+
def report_extraction(self, episode_id):
self._downloader.to_screen(u'[comedycentral] %s: Extracting information' % episode_id)
@@ -2439,7 +2455,7 @@ def _print_formats(self, formats):
def _real_extract(self, url):
- mobj = re.match(self._VALID_URL, url)
+ mobj = re.match(self._VALID_URL, url, re.VERBOSE)
if mobj is None:
self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
return
@@ -2449,14 +2465,21 @@ def _real_extract(self, url):
url = u'http://www.thedailyshow.com/full-episodes/'
else:
url = u'http://www.colbertnation.com/full-episodes/'
- mobj = re.match(self._VALID_URL, url)
+ mobj = re.match(self._VALID_URL, url, re.VERBOSE)
assert mobj is not None
- dlNewest = not mobj.group('episode')
- if dlNewest:
- epTitle = mobj.group('showname')
+ if mobj.group('clip'):
+ if mobj.group('showname') == 'thedailyshow':
+ epTitle = mobj.group('tdstitle')
+ else:
+ epTitle = mobj.group('cntitle')
+ dlNewest = False
else:
- epTitle = mobj.group('episode')
+ dlNewest = not mobj.group('episode')
+ if dlNewest:
+ epTitle = mobj.group('showname')
+ else:
+ epTitle = mobj.group('episode')
req = compat_urllib_request.Request(url)
self.report_extraction(epTitle)
@@ -2468,7 +2491,7 @@ def _real_extract(self, url):
return
if dlNewest:
url = htmlHandle.geturl()
- mobj = re.match(self._VALID_URL, url)
+ mobj = re.match(self._VALID_URL, url, re.VERBOSE)
if mobj is None:
self._downloader.trouble(u'ERROR: Invalid redirected URL: ' + url)
return
@@ -2477,14 +2500,14 @@ def _real_extract(self, url):
return
epTitle = mobj.group('episode')
- mMovieParams = re.findall('(?:<param name="movie" value="|var url = ")(http://media.mtvnservices.com/([^"]*episode.*?:.*?))"', html)
+ mMovieParams = re.findall('(?:<param name="movie" value="|var url = ")(http://media.mtvnservices.com/([^"]*(?:episode|video).*?:.*?))"', html)
if len(mMovieParams) == 0:
# The Colbert Report embeds the information in a without
# a URL prefix; so extract the alternate reference
# and then add the URL prefix manually.
- altMovieParams = re.findall('data-mgid="([^"]*episode.*?:.*?)"', html)
+ altMovieParams = re.findall('data-mgid="([^"]*(?:episode|video).*?:.*?)"', html)
if len(altMovieParams) == 0:
self._downloader.trouble(u'ERROR: unable to find Flash URL in webpage ' + url)
return
</patch>
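A quick standalone check of the extended `_VALID_URL` pattern from the patch above against the clip URL quoted in the problem statement (the pattern and URL are copied from this row; nothing else is assumed):

```python
import re

# Clip-matching pattern introduced by the patch above, tried against the
# test URL from the problem statement.
_VALID_URL = r"""^(:(?P<shortname>tds|thedailyshow|cr|colbert|colbertnation|colbertreport)
                 |(https?://)?(www\.)?
                 (?P<showname>thedailyshow|colbertnation)\.com/
                 (full-episodes/(?P<episode>.*)|
                  (?P<clip>
                      (the-colbert-report-(videos|collections)/(?P<clipID>[0-9]+)/[^/]*/(?P<cntitle>.*?))
                      |(watch/(?P<date>[^/]*)/(?P<tdstitle>.*)))))
                 $"""

url = ('http://www.colbertnation.com/the-colbert-report-videos/229765/'
       'june-08-2009/operation-iraqi-stephen---john-mccain')
mobj = re.match(_VALID_URL, url, re.VERBOSE)
print(mobj.group('clipID'), mobj.group('cntitle'))
# -> 229765 operation-iraqi-stephen---john-mccain
```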
|
[]
|
[]
| ||||
open-mmlab__mmdetection-4467
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid link in readme of cityscape
Hi, Thank you very much for the amazing project!
I would like to report that the [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth) in the [readme](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes/README.md) in the cityscape is invalid.
Meanwhile, may I ask whether there are more pre-trained models on Cityscape? I would like to evaluate the models elaborately.
Thank you!
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="resources/mmdet-logo.png" width="600"/>
3 </div>
4
5 **News**: We released the technical report on [ArXiv](https://arxiv.org/abs/1906.07155).
6
7 Documentation: https://mmdetection.readthedocs.io/
8
9 ## Introduction
10
11 MMDetection is an open source object detection toolbox based on PyTorch. It is
12 a part of the OpenMMLab project developed by [Multimedia Laboratory, CUHK](http://mmlab.ie.cuhk.edu.hk/).
13
14 The master branch works with **PyTorch 1.3 to 1.6**.
15 The old v1.x branch works with PyTorch 1.1 to 1.4, but v2.0 is strongly recommended for faster speed, higher performance, better design and more friendly usage.
16
17 
18
19 ### Major features
20
21 - **Modular Design**
22
23 We decompose the detection framework into different components and one can easily construct a customized object detection framework by combining different modules.
24
25 - **Support of multiple frameworks out of box**
26
27 The toolbox directly supports popular and contemporary detection frameworks, *e.g.* Faster RCNN, Mask RCNN, RetinaNet, etc.
28
29 - **High efficiency**
30
31 All basic bbox and mask operations run on GPUs. The training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
32
33 - **State of the art**
34
35 The toolbox stems from the codebase developed by the *MMDet* team, who won [COCO Detection Challenge](http://cocodataset.org/#detection-leaderboard) in 2018, and we keep pushing it forward.
36
37 Apart from MMDetection, we also released a library [mmcv](https://github.com/open-mmlab/mmcv) for computer vision research, which is heavily depended on by this toolbox.
38
39 ## License
40
41 This project is released under the [Apache 2.0 license](LICENSE).
42
43 ## Changelog
44
45 v2.8.0 was released in 04/01/2021.
46 Please refer to [changelog.md](docs/changelog.md) for details and release history.
47 A comparison between v1.x and v2.0 codebases can be found in [compatibility.md](docs/compatibility.md).
48
49 ## Benchmark and model zoo
50
51 Results and models are available in the [model zoo](docs/model_zoo.md).
52
53 Supported backbones:
54
55 - [x] ResNet
56 - [x] ResNeXt
57 - [x] VGG
58 - [x] HRNet
59 - [x] RegNet
60 - [x] Res2Net
61 - [x] ResNeSt
62
63 Supported methods:
64
65 - [x] [RPN](configs/rpn)
66 - [x] [Fast R-CNN](configs/fast_rcnn)
67 - [x] [Faster R-CNN](configs/faster_rcnn)
68 - [x] [Mask R-CNN](configs/mask_rcnn)
69 - [x] [Cascade R-CNN](configs/cascade_rcnn)
70 - [x] [Cascade Mask R-CNN](configs/cascade_rcnn)
71 - [x] [SSD](configs/ssd)
72 - [x] [RetinaNet](configs/retinanet)
73 - [x] [GHM](configs/ghm)
74 - [x] [Mask Scoring R-CNN](configs/ms_rcnn)
75 - [x] [Double-Head R-CNN](configs/double_heads)
76 - [x] [Hybrid Task Cascade](configs/htc)
77 - [x] [Libra R-CNN](configs/libra_rcnn)
78 - [x] [Guided Anchoring](configs/guided_anchoring)
79 - [x] [FCOS](configs/fcos)
80 - [x] [RepPoints](configs/reppoints)
81 - [x] [Foveabox](configs/foveabox)
82 - [x] [FreeAnchor](configs/free_anchor)
83 - [x] [NAS-FPN](configs/nas_fpn)
84 - [x] [ATSS](configs/atss)
85 - [x] [FSAF](configs/fsaf)
86 - [x] [PAFPN](configs/pafpn)
87 - [x] [Dynamic R-CNN](configs/dynamic_rcnn)
88 - [x] [PointRend](configs/point_rend)
89 - [x] [CARAFE](configs/carafe/README.md)
90 - [x] [DCNv2](configs/dcn/README.md)
91 - [x] [Group Normalization](configs/gn/README.md)
92 - [x] [Weight Standardization](configs/gn+ws/README.md)
93 - [x] [OHEM](configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py)
94 - [x] [Soft-NMS](configs/faster_rcnn/faster_rcnn_r50_fpn_soft_nms_1x_coco.py)
95 - [x] [Generalized Attention](configs/empirical_attention/README.md)
96 - [x] [GCNet](configs/gcnet/README.md)
97 - [x] [Mixed Precision (FP16) Training](configs/fp16/README.md)
98 - [x] [InstaBoost](configs/instaboost/README.md)
99 - [x] [GRoIE](configs/groie/README.md)
100 - [x] [DetectoRS](configs/detectors/README.md)
101 - [x] [Generalized Focal Loss](configs/gfl/README.md)
102 - [x] [CornerNet](configs/cornernet/README.md)
103 - [x] [Side-Aware Boundary Localization](configs/sabl/README.md)
104 - [x] [YOLOv3](configs/yolo/README.md)
105 - [x] [PAA](configs/paa/README.md)
106 - [x] [YOLACT](configs/yolact/README.md)
107 - [x] [CentripetalNet](configs/centripetalnet/README.md)
108 - [x] [VFNet](configs/vfnet/README.md)
109 - [x] [DETR](configs/detr/README.md)
110 - [x] [CascadeRPN](configs/cascade_rpn/README.md)
111
112 Some other methods are also supported in [projects using MMDetection](./docs/projects.md).
113
114 ## Installation
115
116 Please refer to [get_started.md](docs/get_started.md) for installation.
117
118 ## Getting Started
119
120 Please see [get_started.md](docs/get_started.md) for the basic usage of MMDetection.
121 We provide [colab tutorial](demo/MMDet_Tutorial.ipynb), and full guidance for quick run [with existing dataset](docs/1_exist_data_model.md) and [with new dataset](docs/2_new_data_model.md) for beginners.
122 There are also tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), [customizing models](docs/tutorials/customize_models.md), [customizing runtime settings](docs/tutorials/customize_runtime.md) and [useful tools](docs/useful_tools.md).
123
124 Please refer to [FAQ](docs/faq.md) for frequently asked questions.
125
126 ## Contributing
127
128 We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
129
130 ## Acknowledgement
131
132 MMDetection is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
133 We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors.
134
135 ## Citation
136
137 If you use this toolbox or benchmark in your research, please cite this project.
138
139 ```
140 @article{mmdetection,
141 title = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
142 author = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
143 Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
144 Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
145 Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
146 Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
147 and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
148 journal= {arXiv preprint arXiv:1906.07155},
149 year={2019}
150 }
151 ```
152
153 ## Projects in OpenMMLab
154
155 - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
156 - [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
157 - [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
158 - [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
159 - [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
160 - [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
161 - [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
162 - [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
163 - [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
164
[end of README.md]
[start of configs/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes.py]
1 _base_ = [
2 '../_base_/models/faster_rcnn_r50_fpn.py',
3 '../_base_/datasets/cityscapes_detection.py',
4 '../_base_/default_runtime.py'
5 ]
6 model = dict(
7 pretrained=None,
8 roi_head=dict(
9 bbox_head=dict(
10 type='Shared2FCBBoxHead',
11 in_channels=256,
12 fc_out_channels=1024,
13 roi_feat_size=7,
14 num_classes=8,
15 bbox_coder=dict(
16 type='DeltaXYWHBBoxCoder',
17 target_means=[0., 0., 0., 0.],
18 target_stds=[0.1, 0.1, 0.2, 0.2]),
19 reg_class_agnostic=False,
20 loss_cls=dict(
21 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
22 loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))))
23 # optimizer
24 # lr is set for a batch size of 8
25 optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
26 optimizer_config = dict(grad_clip=None)
27 # learning policy
28 lr_config = dict(
29 policy='step',
30 warmup='linear',
31 warmup_iters=500,
32 warmup_ratio=0.001,
33 # [7] yields higher performance than [6]
34 step=[7])
35 total_epochs = 8 # actual epoch = 8 * 8 = 64
36 log_config = dict(interval=100)
37 # For better, more stable performance initialize from COCO
38 load_from = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth' # noqa
39
[end of configs/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes.py]
[start of configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py]
1 _base_ = [
2 '../_base_/models/mask_rcnn_r50_fpn.py',
3 '../_base_/datasets/cityscapes_instance.py', '../_base_/default_runtime.py'
4 ]
5 model = dict(
6 pretrained=None,
7 roi_head=dict(
8 bbox_head=dict(
9 type='Shared2FCBBoxHead',
10 in_channels=256,
11 fc_out_channels=1024,
12 roi_feat_size=7,
13 num_classes=8,
14 bbox_coder=dict(
15 type='DeltaXYWHBBoxCoder',
16 target_means=[0., 0., 0., 0.],
17 target_stds=[0.1, 0.1, 0.2, 0.2]),
18 reg_class_agnostic=False,
19 loss_cls=dict(
20 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
21 loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
22 mask_head=dict(
23 type='FCNMaskHead',
24 num_convs=4,
25 in_channels=256,
26 conv_out_channels=256,
27 num_classes=8,
28 loss_mask=dict(
29 type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))))
30 # optimizer
31 # lr is set for a batch size of 8
32 optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
33 optimizer_config = dict(grad_clip=None)
34 # learning policy
35 lr_config = dict(
36 policy='step',
37 warmup='linear',
38 warmup_iters=500,
39 warmup_ratio=0.001,
40 # [7] yields higher performance than [6]
41 step=[7])
42 total_epochs = 8 # actual epoch = 8 * 8 = 64
43 log_config = dict(interval=100)
44 # For better, more stable performance initialize from COCO
45 load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth' # noqa
46
[end of configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py]
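Both configs above note that `total_epochs = 8` amounts to 64 effective passes over the data. That factor of 8 comes from the base Cityscapes dataset config, which in this version of MMDetection wraps the train split in a `RepeatDataset`; the base config is not quoted here, so treat the following as an illustrative sketch with the remaining fields omitted:

```python
# Illustrative sketch of the repeat wrapper behind the "actual epoch = 8 * 8 = 64"
# comments above; values are representative, not quoted from the base config.
data = dict(
    train=dict(
        type='RepeatDataset',
        times=8,  # each training epoch traverses the Cityscapes train split 8 times
        dataset=dict(type='CityscapesDataset')))  # ann_file, pipeline, etc. omitted
```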
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import subprocess
15 import sys
16
17 sys.path.insert(0, os.path.abspath('..'))
18
19 # -- Project information -----------------------------------------------------
20
21 project = 'MMDetection'
22 copyright = '2018-2020, OpenMMLab'
23 author = 'MMDetection Authors'
24 version_file = '../mmdet/version.py'
25
26
27 def get_version():
28 with open(version_file, 'r') as f:
29 exec(compile(f.read(), version_file, 'exec'))
30 return locals()['__version__']
31
32
33 # The full version, including alpha/beta/rc tags
34 release = get_version()
35
36 # -- General configuration ---------------------------------------------------
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 'sphinx.ext.autodoc',
43 'sphinx.ext.napoleon',
44 'sphinx.ext.viewcode',
45 'recommonmark',
46 'sphinx_markdown_tables',
47 ]
48
49 autodoc_mock_imports = [
50 'matplotlib', 'pycocotools', 'terminaltables', 'mmdet.version', 'mmcv.ops'
51 ]
52
53 # Add any paths that contain templates here, relative to this directory.
54 templates_path = ['_templates']
55
56 # The suffix(es) of source filenames.
57 # You can specify multiple suffix as a list of string:
58 #
59 source_suffix = {
60 '.rst': 'restructuredtext',
61 '.md': 'markdown',
62 }
63
64 # The master toctree document.
65 master_doc = 'index'
66
67 # List of patterns, relative to source directory, that match files and
68 # directories to ignore when looking for source files.
69 # This pattern also affects html_static_path and html_extra_path.
70 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
71
72 # -- Options for HTML output -------------------------------------------------
73
74 # The theme to use for HTML and HTML Help pages. See the documentation for
75 # a list of builtin themes.
76 #
77 html_theme = 'sphinx_rtd_theme'
78
79 # Add any paths that contain custom static files (such as style sheets) here,
80 # relative to this directory. They are copied after the builtin static files,
81 # so a file named "default.css" will overwrite the builtin "default.css".
82 html_static_path = ['_static']
83
84
85 def builder_inited_handler(app):
86 subprocess.run(['./stat.py'])
87
88
89 def setup(app):
90 app.connect('builder-inited', builder_inited_handler)
91
[end of docs/conf.py]
[start of docs/stat.py]
1 #!/usr/bin/env python
2 import functools as func
3 import glob
4 import os.path as osp
5 import re
6
7 import numpy as np
8
9 url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/master/'
10
11 files = sorted(glob.glob('../configs/*/README.md'))
12
13 stats = []
14 titles = []
15 num_ckpts = 0
16
17 for f in files:
18 url = osp.dirname(f.replace('../', url_prefix))
19
20 with open(f, 'r') as content_file:
21 content = content_file.read()
22
23 title = content.split('\n')[0].replace('# ', '').strip()
24 ckpts = set(x.lower().strip()
25 for x in re.findall(r'\[model\]\((https?.*)\)', content))
26
27 if len(ckpts) == 0:
28 continue
29
30 _papertype = [x for x in re.findall(r'\[([A-Z]+)\]', content)]
31 assert len(_papertype) > 0
32 papertype = _papertype[0]
33
34 paper = set([(papertype, title)])
35
36 titles.append(title)
37 num_ckpts += len(ckpts)
38
39 statsmsg = f"""
40 \t* [{papertype}] [{title}]({url}) ({len(ckpts)} ckpts)
41 """
42 stats.append((paper, ckpts, statsmsg))
43
44 allpapers = func.reduce(lambda a, b: a.union(b), [p for p, _, _ in stats])
45 msglist = '\n'.join(x for _, _, x in stats)
46
47 papertypes, papercounts = np.unique([t for t, _ in allpapers],
48 return_counts=True)
49 countstr = '\n'.join(
50 [f' - {t}: {c}' for t, c in zip(papertypes, papercounts)])
51
52 modelzoo = f"""
53 # Model Zoo Statistics
54
55 * Number of papers: {len(set(titles))}
56 {countstr}
57
58 * Number of checkpoints: {num_ckpts}
59
60 {msglist}
61 """
62
63 with open('modelzoo_statistics.md', 'w') as f:
64 f.write(modelzoo)
65
[end of docs/stat.py]
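The script above counts checkpoints by regex-matching `[model](...)` links in each config README; the pattern only collects URLs, it does not verify them. A small illustration of that matching step, using the checkpoint URL reported in the issue as sample input:

```python
import re

# Sample README table cell; the link is the one referenced in the issue report.
line = ('| R-50 | [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/'
        'mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth) |')

# Same pattern as in docs/stat.py above.
ckpts = set(x.lower().strip()
            for x in re.findall(r'\[model\]\((https?.*)\)', line))
print(ckpts)
# {'http://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth'}
```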
[start of mmdet/datasets/cityscapes.py]
1 # Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa
2 # and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa
3
4 import glob
5 import os
6 import os.path as osp
7 import tempfile
8 from collections import OrderedDict
9
10 import mmcv
11 import numpy as np
12 import pycocotools.mask as maskUtils
13 from mmcv.utils import print_log
14
15 from .builder import DATASETS
16 from .coco import CocoDataset
17
18
19 @DATASETS.register_module()
20 class CityscapesDataset(CocoDataset):
21
22 CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
23 'bicycle')
24
25 def _filter_imgs(self, min_size=32):
26 """Filter images too small or without ground truths."""
27 valid_inds = []
28 # obtain images that contain annotation
29 ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values())
30 # obtain images that contain annotations of the required categories
31 ids_in_cat = set()
32 for i, class_id in enumerate(self.cat_ids):
33 ids_in_cat |= set(self.coco.cat_img_map[class_id])
34 # merge the image id sets of the two conditions and use the merged set
35 # to filter out images if self.filter_empty_gt=True
36 ids_in_cat &= ids_with_ann
37
38 valid_img_ids = []
39 for i, img_info in enumerate(self.data_infos):
40 img_id = img_info['id']
41 ann_ids = self.coco.getAnnIds(imgIds=[img_id])
42 ann_info = self.coco.loadAnns(ann_ids)
43 all_iscrowd = all([_['iscrowd'] for _ in ann_info])
44 if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat
45 or all_iscrowd):
46 continue
47 if min(img_info['width'], img_info['height']) >= min_size:
48 valid_inds.append(i)
49 valid_img_ids.append(img_id)
50 self.img_ids = valid_img_ids
51 return valid_inds
52
53 def _parse_ann_info(self, img_info, ann_info):
54 """Parse bbox and mask annotation.
55
56 Args:
57 img_info (dict): Image info of an image.
58 ann_info (list[dict]): Annotation info of an image.
59
60 Returns:
61 dict: A dict containing the following keys: bboxes, \
62 bboxes_ignore, labels, masks, seg_map. \
63 "masks" are already decoded into binary masks.
64 """
65 gt_bboxes = []
66 gt_labels = []
67 gt_bboxes_ignore = []
68 gt_masks_ann = []
69
70 for i, ann in enumerate(ann_info):
71 if ann.get('ignore', False):
72 continue
73 x1, y1, w, h = ann['bbox']
74 if ann['area'] <= 0 or w < 1 or h < 1:
75 continue
76 if ann['category_id'] not in self.cat_ids:
77 continue
78 bbox = [x1, y1, x1 + w, y1 + h]
79 if ann.get('iscrowd', False):
80 gt_bboxes_ignore.append(bbox)
81 else:
82 gt_bboxes.append(bbox)
83 gt_labels.append(self.cat2label[ann['category_id']])
84 gt_masks_ann.append(ann['segmentation'])
85
86 if gt_bboxes:
87 gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
88 gt_labels = np.array(gt_labels, dtype=np.int64)
89 else:
90 gt_bboxes = np.zeros((0, 4), dtype=np.float32)
91 gt_labels = np.array([], dtype=np.int64)
92
93 if gt_bboxes_ignore:
94 gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32)
95 else:
96 gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32)
97
98 ann = dict(
99 bboxes=gt_bboxes,
100 labels=gt_labels,
101 bboxes_ignore=gt_bboxes_ignore,
102 masks=gt_masks_ann,
103 seg_map=img_info['segm_file'])
104
105 return ann
106
107 def results2txt(self, results, outfile_prefix):
108 """Dump the detection results to a txt file.
109
110 Args:
111 results (list[list | tuple]): Testing results of the
112 dataset.
113 outfile_prefix (str): The filename prefix of the json files.
114 If the prefix is "somepath/xxx",
115 the txt files will be named "somepath/xxx.txt".
116
117 Returns:
118 list[str]: Result txt files which contains corresponding \
119 instance segmentation images.
120 """
121 try:
122 import cityscapesscripts.helpers.labels as CSLabels
123 except ImportError:
124             raise ImportError('Please run "pip install cityscapesscripts" to '
125 'install cityscapesscripts first.')
126 result_files = []
127 os.makedirs(outfile_prefix, exist_ok=True)
128 prog_bar = mmcv.ProgressBar(len(self))
129 for idx in range(len(self)):
130 result = results[idx]
131 filename = self.data_infos[idx]['filename']
132 basename = osp.splitext(osp.basename(filename))[0]
133 pred_txt = osp.join(outfile_prefix, basename + '_pred.txt')
134
135 bbox_result, segm_result = result
136 bboxes = np.vstack(bbox_result)
137 # segm results
138 if isinstance(segm_result, tuple):
139 # Some detectors use different scores for bbox and mask,
140 # like Mask Scoring R-CNN. Score of segm will be used instead
141 # of bbox score.
142 segms = mmcv.concat_list(segm_result[0])
143 mask_score = segm_result[1]
144 else:
145 # use bbox score for mask score
146 segms = mmcv.concat_list(segm_result)
147 mask_score = [bbox[-1] for bbox in bboxes]
148 labels = [
149 np.full(bbox.shape[0], i, dtype=np.int32)
150 for i, bbox in enumerate(bbox_result)
151 ]
152 labels = np.concatenate(labels)
153
154 assert len(bboxes) == len(segms) == len(labels)
155 num_instances = len(bboxes)
156 prog_bar.update()
157 with open(pred_txt, 'w') as fout:
158 for i in range(num_instances):
159 pred_class = labels[i]
160 classes = self.CLASSES[pred_class]
161 class_id = CSLabels.name2label[classes].id
162 score = mask_score[i]
163 mask = maskUtils.decode(segms[i]).astype(np.uint8)
164 png_filename = osp.join(outfile_prefix,
165 basename + f'_{i}_{classes}.png')
166 mmcv.imwrite(mask, png_filename)
167 fout.write(f'{osp.basename(png_filename)} {class_id} '
168 f'{score}\n')
169 result_files.append(pred_txt)
170
171 return result_files
172
173 def format_results(self, results, txtfile_prefix=None):
174 """Format the results to txt (standard format for Cityscapes
175 evaluation).
176
177 Args:
178 results (list): Testing results of the dataset.
179 txtfile_prefix (str | None): The prefix of txt files. It includes
180 the file path and the prefix of filename, e.g., "a/b/prefix".
181 If not specified, a temp file will be created. Default: None.
182
183 Returns:
184 tuple: (result_files, tmp_dir), result_files is a dict containing \
185 the json filepaths, tmp_dir is the temporal directory created \
186 for saving txt/png files when txtfile_prefix is not specified.
187 """
188 assert isinstance(results, list), 'results must be a list'
189 assert len(results) == len(self), (
190 'The length of results is not equal to the dataset len: {} != {}'.
191 format(len(results), len(self)))
192
193 assert isinstance(results, list), 'results must be a list'
194 assert len(results) == len(self), (
195 'The length of results is not equal to the dataset len: {} != {}'.
196 format(len(results), len(self)))
197
198 if txtfile_prefix is None:
199 tmp_dir = tempfile.TemporaryDirectory()
200 txtfile_prefix = osp.join(tmp_dir.name, 'results')
201 else:
202 tmp_dir = None
203 result_files = self.results2txt(results, txtfile_prefix)
204
205 return result_files, tmp_dir
206
207 def evaluate(self,
208 results,
209 metric='bbox',
210 logger=None,
211 outfile_prefix=None,
212 classwise=False,
213 proposal_nums=(100, 300, 1000),
214 iou_thrs=np.arange(0.5, 0.96, 0.05)):
215 """Evaluation in Cityscapes/COCO protocol.
216
217 Args:
218 results (list[list | tuple]): Testing results of the dataset.
219 metric (str | list[str]): Metrics to be evaluated. Options are
220 'bbox', 'segm', 'proposal', 'proposal_fast'.
221 logger (logging.Logger | str | None): Logger used for printing
222 related information during evaluation. Default: None.
223 outfile_prefix (str | None): The prefix of output file. It includes
224 the file path and the prefix of filename, e.g., "a/b/prefix".
225 If results are evaluated with COCO protocol, it would be the
226 prefix of output json file. For example, the metric is 'bbox'
227 and 'segm', then json files would be "a/b/prefix.bbox.json" and
228 "a/b/prefix.segm.json".
229 If results are evaluated with cityscapes protocol, it would be
230 the prefix of output txt/png files. The output files would be
231 png images under folder "a/b/prefix/xxx/" and the file name of
232 images would be written into a txt file
233 "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of
234 cityscapes. If not specified, a temp file will be created.
235 Default: None.
236             classwise (bool): Whether to evaluate the AP for each class.
237 proposal_nums (Sequence[int]): Proposal number used for evaluating
238 recalls, such as recall@100, recall@1000.
239 Default: (100, 300, 1000).
240 iou_thrs (Sequence[float]): IoU threshold used for evaluating
241 recalls. If set to a list, the average recall of all IoUs will
242 also be computed. Default: 0.5.
243
244 Returns:
245 dict[str, float]: COCO style evaluation metric or cityscapes mAP \
246 and AP@50.
247 """
248 eval_results = dict()
249
250 metrics = metric.copy() if isinstance(metric, list) else [metric]
251
252 if 'cityscapes' in metrics:
253 eval_results.update(
254 self._evaluate_cityscapes(results, outfile_prefix, logger))
255 metrics.remove('cityscapes')
256
257 # left metrics are all coco metric
258 if len(metrics) > 0:
259 # create CocoDataset with CityscapesDataset annotation
260 self_coco = CocoDataset(self.ann_file, self.pipeline.transforms,
261 None, self.data_root, self.img_prefix,
262 self.seg_prefix, self.proposal_file,
263 self.test_mode, self.filter_empty_gt)
264 # TODO: remove this in the future
265 # reload annotations of correct class
266 self_coco.CLASSES = self.CLASSES
267 self_coco.data_infos = self_coco.load_annotations(self.ann_file)
268 eval_results.update(
269 self_coco.evaluate(results, metrics, logger, outfile_prefix,
270 classwise, proposal_nums, iou_thrs))
271
272 return eval_results
273
274 def _evaluate_cityscapes(self, results, txtfile_prefix, logger):
275 """Evaluation in Cityscapes protocol.
276
277 Args:
278 results (list): Testing results of the dataset.
279 txtfile_prefix (str | None): The prefix of output txt file
280 logger (logging.Logger | str | None): Logger used for printing
281 related information during evaluation. Default: None.
282
283 Returns:
284 dict[str: float]: Cityscapes evaluation results, contains 'mAP' \
285 and 'AP@50'.
286 """
287
288 try:
289 import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa
290 except ImportError:
291             raise ImportError('Please run "pip install cityscapesscripts" to '
292 'install cityscapesscripts first.')
293 msg = 'Evaluating in Cityscapes style'
294 if logger is None:
295 msg = '\n' + msg
296 print_log(msg, logger=logger)
297
298 result_files, tmp_dir = self.format_results(results, txtfile_prefix)
299
300 if tmp_dir is None:
301 result_dir = osp.join(txtfile_prefix, 'results')
302 else:
303 result_dir = osp.join(tmp_dir.name, 'results')
304
305 eval_results = OrderedDict()
306 print_log(f'Evaluating results under {result_dir} ...', logger=logger)
307
308 # set global states in cityscapes evaluation API
309 CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..')
310 CSEval.args.predictionPath = os.path.abspath(result_dir)
311 CSEval.args.predictionWalk = None
312 CSEval.args.JSONOutput = False
313 CSEval.args.colorized = False
314 CSEval.args.gtInstancesFile = os.path.join(result_dir,
315 'gtInstances.json')
316 CSEval.args.groundTruthSearch = os.path.join(
317 self.img_prefix.replace('leftImg8bit', 'gtFine'),
318 '*/*_gtFine_instanceIds.png')
319
320 groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch)
321 assert len(groundTruthImgList), 'Cannot find ground truth images' \
322 f' in {CSEval.args.groundTruthSearch}.'
323 predictionImgList = []
324 for gt in groundTruthImgList:
325 predictionImgList.append(CSEval.getPrediction(gt, CSEval.args))
326 CSEval_results = CSEval.evaluateImgLists(predictionImgList,
327 groundTruthImgList,
328 CSEval.args)['averages']
329
330 eval_results['mAP'] = CSEval_results['allAp']
331 eval_results['AP@50'] = CSEval_results['allAp50%']
332 if tmp_dir is not None:
333 tmp_dir.cleanup()
334 return eval_results
335
[end of mmdet/datasets/cityscapes.py]
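The `format_results`/`evaluate` docstrings above describe how per-image results are dumped to txt/png files and then scored with either the Cityscapes or the COCO protocol. A minimal usage sketch, assuming a list `results` of `(bbox_result, segm_result)` tuples has already been produced by a tester, and with placeholder paths for the standard Cityscapes layout:

```python
from mmdet.datasets import CityscapesDataset

# Placeholder paths; adjust to the local Cityscapes layout.
dataset = CityscapesDataset(
    ann_file='data/cityscapes/annotations/instancesonly_filtered_gtFine_val.json',
    pipeline=[],
    img_prefix='data/cityscapes/leftImg8bit/val/')

# `results` must align 1:1 with the dataset (see format_results above).
metrics = dataset.evaluate(
    results,
    metric=['cityscapes', 'bbox'],
    outfile_prefix='work_dirs/cityscapes_eval/prefix')
print(metrics.get('mAP'), metrics.get('AP@50'))
```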
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 from setuptools import find_packages, setup
4
5 import torch
6 from torch.utils.cpp_extension import (BuildExtension, CppExtension,
7 CUDAExtension)
8
9
10 def readme():
11 with open('README.md', encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15
16 version_file = 'mmdet/version.py'
17
18
19 def get_version():
20 with open(version_file, 'r') as f:
21 exec(compile(f.read(), version_file, 'exec'))
22 return locals()['__version__']
23
24
25 def make_cuda_ext(name, module, sources, sources_cuda=[]):
26
27 define_macros = []
28 extra_compile_args = {'cxx': []}
29
30 if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1':
31 define_macros += [('WITH_CUDA', None)]
32 extension = CUDAExtension
33 extra_compile_args['nvcc'] = [
34 '-D__CUDA_NO_HALF_OPERATORS__',
35 '-D__CUDA_NO_HALF_CONVERSIONS__',
36 '-D__CUDA_NO_HALF2_OPERATORS__',
37 ]
38 sources += sources_cuda
39 else:
40 print(f'Compiling {name} without CUDA')
41 extension = CppExtension
42
43 return extension(
44 name=f'{module}.{name}',
45 sources=[os.path.join(*module.split('.'), p) for p in sources],
46 define_macros=define_macros,
47 extra_compile_args=extra_compile_args)
48
49
50 def parse_requirements(fname='requirements.txt', with_version=True):
51 """Parse the package dependencies listed in a requirements file but strips
52 specific versioning information.
53
54 Args:
55 fname (str): path to requirements file
56         with_version (bool, default=True): if True include version specs
57
58 Returns:
59 List[str]: list of requirements items
60
61 CommandLine:
62 python -c "import setup; print(setup.parse_requirements())"
63 """
64 import sys
65 from os.path import exists
66 import re
67 require_fpath = fname
68
69 def parse_line(line):
70 """Parse information from a line in a requirements text file."""
71 if line.startswith('-r '):
72 # Allow specifying requirements in other files
73 target = line.split(' ')[1]
74 for info in parse_require_file(target):
75 yield info
76 else:
77 info = {'line': line}
78 if line.startswith('-e '):
79 info['package'] = line.split('#egg=')[1]
80 elif '@git+' in line:
81 info['package'] = line
82 else:
83 # Remove versioning from the package
84 pat = '(' + '|'.join(['>=', '==', '>']) + ')'
85 parts = re.split(pat, line, maxsplit=1)
86 parts = [p.strip() for p in parts]
87
88 info['package'] = parts[0]
89 if len(parts) > 1:
90 op, rest = parts[1:]
91 if ';' in rest:
92 # Handle platform specific dependencies
93 # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies
94 version, platform_deps = map(str.strip,
95 rest.split(';'))
96 info['platform_deps'] = platform_deps
97 else:
98 version = rest # NOQA
99 info['version'] = (op, version)
100 yield info
101
102 def parse_require_file(fpath):
103 with open(fpath, 'r') as f:
104 for line in f.readlines():
105 line = line.strip()
106 if line and not line.startswith('#'):
107 for info in parse_line(line):
108 yield info
109
110 def gen_packages_items():
111 if exists(require_fpath):
112 for info in parse_require_file(require_fpath):
113 parts = [info['package']]
114 if with_version and 'version' in info:
115 parts.extend(info['version'])
116 if not sys.version.startswith('3.4'):
117 # apparently package_deps are broken in 3.4
118 platform_deps = info.get('platform_deps')
119 if platform_deps is not None:
120 parts.append(';' + platform_deps)
121 item = ''.join(parts)
122 yield item
123
124 packages = list(gen_packages_items())
125 return packages
126
127
128 if __name__ == '__main__':
129 setup(
130 name='mmdet',
131 version=get_version(),
132 description='OpenMMLab Detection Toolbox and Benchmark',
133 long_description=readme(),
134 long_description_content_type='text/markdown',
135 author='OpenMMLab',
136 author_email='[email protected]',
137 keywords='computer vision, object detection',
138 url='https://github.com/open-mmlab/mmdetection',
139 packages=find_packages(exclude=('configs', 'tools', 'demo')),
140 classifiers=[
141 'Development Status :: 5 - Production/Stable',
142 'License :: OSI Approved :: Apache Software License',
143 'Operating System :: OS Independent',
144 'Programming Language :: Python :: 3',
145 'Programming Language :: Python :: 3.6',
146 'Programming Language :: Python :: 3.7',
147 'Programming Language :: Python :: 3.8',
148 ],
149 license='Apache License 2.0',
150 setup_requires=parse_requirements('requirements/build.txt'),
151 tests_require=parse_requirements('requirements/tests.txt'),
152 install_requires=parse_requirements('requirements/runtime.txt'),
153 extras_require={
154 'all': parse_requirements('requirements.txt'),
155 'tests': parse_requirements('requirements/tests.txt'),
156 'build': parse_requirements('requirements/build.txt'),
157 'optional': parse_requirements('requirements/optional.txt'),
158 },
159 ext_modules=[],
160 cmdclass={'build_ext': BuildExtension},
161 zip_safe=False)
162
[end of setup.py]
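The `parse_requirements` docstring above already suggests importing `setup.py` to inspect its output. A small sketch of the version-stripping behaviour, with a made-up requirements file (run from the repository root so that `setup.py` is importable):

```python
# Sketch of parse_requirements behaviour; the requirements content is invented.
import pathlib
import tempfile

from setup import parse_requirements  # importable when run from the repo root

with tempfile.TemporaryDirectory() as tmp:
    req = pathlib.Path(tmp) / 'requirements.txt'
    req.write_text('mmcv>=1.2.4\nnumpy\n# a comment\n')
    print(parse_requirements(str(req), with_version=False))  # ['mmcv', 'numpy']
    print(parse_requirements(str(req), with_version=True))   # ['mmcv>=1.2.4', 'numpy']
```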
[start of tools/robustness_eval.py]
1 import os.path as osp
2 from argparse import ArgumentParser
3
4 import mmcv
5 import numpy as np
6
7
8 def print_coco_results(results):
9
10 def _print(result, ap=1, iouThr=None, areaRng='all', maxDets=100):
11 titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
12 typeStr = '(AP)' if ap == 1 else '(AR)'
13 iouStr = '0.50:0.95' \
14 if iouThr is None else f'{iouThr:0.2f}'
15 iStr = f' {titleStr:<18} {typeStr} @[ IoU={iouStr:<9} | '
16 iStr += f'area={areaRng:>6s} | maxDets={maxDets:>3d} ] = {result:0.3f}'
17 print(iStr)
18
19 stats = np.zeros((12, ))
20 stats[0] = _print(results[0], 1)
21 stats[1] = _print(results[1], 1, iouThr=.5)
22 stats[2] = _print(results[2], 1, iouThr=.75)
23 stats[3] = _print(results[3], 1, areaRng='small')
24 stats[4] = _print(results[4], 1, areaRng='medium')
25 stats[5] = _print(results[5], 1, areaRng='large')
26 stats[6] = _print(results[6], 0, maxDets=1)
27 stats[7] = _print(results[7], 0, maxDets=10)
28 stats[8] = _print(results[8], 0)
29 stats[9] = _print(results[9], 0, areaRng='small')
30 stats[10] = _print(results[10], 0, areaRng='medium')
31 stats[11] = _print(results[11], 0, areaRng='large')
32
33
34 def get_coco_style_results(filename,
35 task='bbox',
36 metric=None,
37 prints='mPC',
38 aggregate='benchmark'):
39
40 assert aggregate in ['benchmark', 'all']
41
42 if prints == 'all':
43 prints = ['P', 'mPC', 'rPC']
44 elif isinstance(prints, str):
45 prints = [prints]
46 for p in prints:
47 assert p in ['P', 'mPC', 'rPC']
48
49 if metric is None:
50 metrics = [
51 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
52 'ARs', 'ARm', 'ARl'
53 ]
54 elif isinstance(metric, list):
55 metrics = metric
56 else:
57 metrics = [metric]
58
59 for metric_name in metrics:
60 assert metric_name in [
61 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
62 'ARs', 'ARm', 'ARl'
63 ]
64
65 eval_output = mmcv.load(filename)
66
67 num_distortions = len(list(eval_output.keys()))
68 results = np.zeros((num_distortions, 6, len(metrics)), dtype='float32')
69
70 for corr_i, distortion in enumerate(eval_output):
71 for severity in eval_output[distortion]:
72 for metric_j, metric_name in enumerate(metrics):
73 mAP = eval_output[distortion][severity][task][metric_name]
74 results[corr_i, severity, metric_j] = mAP
75
76 P = results[0, 0, :]
77 if aggregate == 'benchmark':
78 mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
79 else:
80 mPC = np.mean(results[:, 1:, :], axis=(0, 1))
81 rPC = mPC / P
82
83 print(f'\nmodel: {osp.basename(filename)}')
84 if metric is None:
85 if 'P' in prints:
86 print(f'Performance on Clean Data [P] ({task})')
87 print_coco_results(P)
88 if 'mPC' in prints:
89 print(f'Mean Performance under Corruption [mPC] ({task})')
90 print_coco_results(mPC)
91 if 'rPC' in prints:
92             print(f'Relative Performance under Corruption [rPC] ({task})')
93 print_coco_results(rPC)
94 else:
95 if 'P' in prints:
96 print(f'Performance on Clean Data [P] ({task})')
97 for metric_i, metric_name in enumerate(metrics):
98 print(f'{metric_name:5} = {P[metric_i]:0.3f}')
99 if 'mPC' in prints:
100 print(f'Mean Performance under Corruption [mPC] ({task})')
101 for metric_i, metric_name in enumerate(metrics):
102 print(f'{metric_name:5} = {mPC[metric_i]:0.3f}')
103 if 'rPC' in prints:
104 print(f'Relative Performance under Corruption [rPC] ({task})')
105 for metric_i, metric_name in enumerate(metrics):
106 print(f'{metric_name:5} => {rPC[metric_i] * 100:0.1f} %')
107
108 return results
109
110
111 def get_voc_style_results(filename, prints='mPC', aggregate='benchmark'):
112
113 assert aggregate in ['benchmark', 'all']
114
115 if prints == 'all':
116 prints = ['P', 'mPC', 'rPC']
117 elif isinstance(prints, str):
118 prints = [prints]
119 for p in prints:
120 assert p in ['P', 'mPC', 'rPC']
121
122 eval_output = mmcv.load(filename)
123
124 num_distortions = len(list(eval_output.keys()))
125 results = np.zeros((num_distortions, 6, 20), dtype='float32')
126
127 for i, distortion in enumerate(eval_output):
128 for severity in eval_output[distortion]:
129 mAP = [
130 eval_output[distortion][severity][j]['ap']
131 for j in range(len(eval_output[distortion][severity]))
132 ]
133 results[i, severity, :] = mAP
134
135 P = results[0, 0, :]
136 if aggregate == 'benchmark':
137 mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
138 else:
139 mPC = np.mean(results[:, 1:, :], axis=(0, 1))
140 rPC = mPC / P
141
142 print(f'\nmodel: {osp.basename(filename)}')
143 if 'P' in prints:
144 print(f'Performance on Clean Data [P] in AP50 = {np.mean(P):0.3f}')
145 if 'mPC' in prints:
146 print('Mean Performance under Corruption [mPC] in AP50 = '
147 f'{np.mean(mPC):0.3f}')
148 if 'rPC' in prints:
149         print('Relative Performance under Corruption [rPC] in % = '
150 f'{np.mean(rPC) * 100:0.1f}')
151
152 return np.mean(results, axis=2, keepdims=True)
153
154
155 def get_results(filename,
156 dataset='coco',
157 task='bbox',
158 metric=None,
159 prints='mPC',
160 aggregate='benchmark'):
161 assert dataset in ['coco', 'voc', 'cityscapes']
162
163 if dataset in ['coco', 'cityscapes']:
164 results = get_coco_style_results(
165 filename,
166 task=task,
167 metric=metric,
168 prints=prints,
169 aggregate=aggregate)
170 elif dataset == 'voc':
171 if task != 'bbox':
172 print('Only bbox analysis is supported for Pascal VOC')
173 print('Will report bbox results\n')
174 if metric not in [None, ['AP'], ['AP50']]:
175 print('Only the AP50 metric is supported for Pascal VOC')
176 print('Will report AP50 metric\n')
177 results = get_voc_style_results(
178 filename, prints=prints, aggregate=aggregate)
179
180 return results
181
182
183 def get_distortions_from_file(filename):
184
185 eval_output = mmcv.load(filename)
186
187 return get_distortions_from_results(eval_output)
188
189
190 def get_distortions_from_results(eval_output):
191 distortions = []
192 for i, distortion in enumerate(eval_output):
193 distortions.append(distortion.replace('_', ' '))
194 return distortions
195
196
197 def main():
198 parser = ArgumentParser(description='Corruption Result Analysis')
199 parser.add_argument('filename', help='result file path')
200 parser.add_argument(
201 '--dataset',
202 type=str,
203 choices=['coco', 'voc', 'cityscapes'],
204 default='coco',
205 help='dataset type')
206 parser.add_argument(
207 '--task',
208 type=str,
209 nargs='+',
210 choices=['bbox', 'segm'],
211 default=['bbox'],
212 help='task to report')
213 parser.add_argument(
214 '--metric',
215 nargs='+',
216 choices=[
217 None, 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10',
218 'AR100', 'ARs', 'ARm', 'ARl'
219 ],
220 default=None,
221 help='metric to report')
222 parser.add_argument(
223 '--prints',
224 type=str,
225 nargs='+',
226 choices=['P', 'mPC', 'rPC'],
227 default='mPC',
228 help='corruption benchmark metric to print')
229 parser.add_argument(
230 '--aggregate',
231 type=str,
232 choices=['all', 'benchmark'],
233 default='benchmark',
234 help='aggregate all results or only those \
235 for benchmark corruptions')
236
237 args = parser.parse_args()
238
239 for task in args.task:
240 get_results(
241 args.filename,
242 dataset=args.dataset,
243 task=task,
244 metric=args.metric,
245 prints=args.prints,
246 aggregate=args.aggregate)
247
248
249 if __name__ == '__main__':
250 main()
251
[end of tools/robustness_eval.py]
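The aggregation above boils down to three numbers per metric: clean performance P (first distortion at severity 0), mean performance under corruption mPC (mean over severities 1-5 across the benchmark distortions), and relative performance under corruption rPC = mPC / P. A toy illustration with invented mAP values:

```python
import numpy as np

# Two distortions x six severities (0 = clean); the values are invented.
results = np.array([[0.40, 0.35, 0.30, 0.25, 0.20, 0.15],
                    [0.40, 0.33, 0.28, 0.22, 0.18, 0.12]])

P = results[0, 0]            # performance on clean data
mPC = results[:, 1:].mean()  # mean performance under corruption (severities 1-5)
rPC = mPC / P                # relative performance under corruption
print(f'P={P:.3f}  mPC={mPC:.3f}  rPC={rPC * 100:.1f}%')
# -> P=0.400  mPC=0.238  rPC=59.5%
```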
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
open-mmlab/mmdetection
|
2d2e5c6f42b6f4b8ab4f68985b1fd7823e3be505
|
Invalid link in readme of cityscape
Hi, Thank you very much for the amazing project!
I would like to report that the [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth) in the [readme](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes/README.md) in the cityscape is invalid.
Meanwhile, may I ask whether there are more pre-trained models on Cityscape? I would like to evaluate the models elaborately.
Thank you!
|
Hi @luzai, thanks for your comments, and we will train the model and provide the checkpoints soon.
|
2021-01-18T03:33:41Z
|
<patch>
diff --git a/mmdet/models/dense_heads/anchor_head.py b/mmdet/models/dense_heads/anchor_head.py
--- a/mmdet/models/dense_heads/anchor_head.py
+++ b/mmdet/models/dense_heads/anchor_head.py
@@ -23,7 +23,10 @@ class AnchorHead(BaseDenseHead, BBoxTestMixin):
anchor_generator (dict): Config dict for anchor generator
bbox_coder (dict): Config of bounding box coder.
reg_decoded_bbox (bool): If true, the regression loss would be
- applied on decoded bounding boxes. Default: False
+ applied directly on decoded bounding boxes, converting both
+ the predicted boxes and regression targets to absolute
+ coordinates format. Default False. It should be `True` when
+ using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
loss_cls (dict): Config of classification loss.
loss_bbox (dict): Config of localization loss.
train_cfg (dict): Training config of anchor head.
@@ -408,6 +411,9 @@ def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights,
bbox_weights = bbox_weights.reshape(-1, 4)
bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
if self.reg_decoded_bbox:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, it
+ # decodes the already encoded coordinates to absolute format.
anchors = anchors.reshape(-1, 4)
bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
loss_bbox = self.loss_bbox(
diff --git a/mmdet/models/dense_heads/cascade_rpn_head.py b/mmdet/models/dense_heads/cascade_rpn_head.py
--- a/mmdet/models/dense_heads/cascade_rpn_head.py
+++ b/mmdet/models/dense_heads/cascade_rpn_head.py
@@ -410,6 +410,9 @@ def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights,
bbox_weights = bbox_weights.reshape(-1, 4)
bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
if self.reg_decoded_bbox:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, it
+ # decodes the already encoded coordinates to absolute format.
anchors = anchors.reshape(-1, 4)
bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
loss_reg = self.loss_bbox(
diff --git a/mmdet/models/dense_heads/fsaf_head.py b/mmdet/models/dense_heads/fsaf_head.py
--- a/mmdet/models/dense_heads/fsaf_head.py
+++ b/mmdet/models/dense_heads/fsaf_head.py
@@ -118,6 +118,10 @@ def _get_targets_single(self,
pos_bbox_targets = self.bbox_coder.encode(
sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes)
else:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, both
+ # the predicted boxes and regression targets should be with
+ # absolute coordinate format.
pos_bbox_targets = sampling_result.pos_gt_bboxes
bbox_targets[pos_inds, :] = pos_bbox_targets
bbox_weights[pos_inds, :] = 1.0
diff --git a/mmdet/models/dense_heads/guided_anchor_head.py b/mmdet/models/dense_heads/guided_anchor_head.py
--- a/mmdet/models/dense_heads/guided_anchor_head.py
+++ b/mmdet/models/dense_heads/guided_anchor_head.py
@@ -75,6 +75,11 @@ class GuidedAnchorHead(AnchorHead):
square_anchor_generator (dict): Config dict for square generator
anchor_coder (dict): Config dict for anchor coder
bbox_coder (dict): Config dict for bbox coder
+ reg_decoded_bbox (bool): If true, the regression loss would be
+ applied directly on decoded bounding boxes, converting both
+ the predicted boxes and regression targets to absolute
+ coordinates format. Default False. It should be `True` when
+ using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
deform_groups: (int): Group number of DCN in
FeatureAdaption module.
loc_filter_thr (float): Threshold to filter out unconcerned regions.
diff --git a/mmdet/models/dense_heads/sabl_retina_head.py b/mmdet/models/dense_heads/sabl_retina_head.py
--- a/mmdet/models/dense_heads/sabl_retina_head.py
+++ b/mmdet/models/dense_heads/sabl_retina_head.py
@@ -33,8 +33,11 @@ class SABLRetinaHead(BaseDenseHead):
conv_cfg (dict): Config dict for ConvModule. Defaults to None.
norm_cfg (dict): Config dict for Norm Layer. Defaults to None.
bbox_coder (dict): Config dict for bbox coder.
- reg_decoded_bbox (bool): Whether to regress decoded bbox. \
- Defaults to False.
+ reg_decoded_bbox (bool): If true, the regression loss would be
+ applied directly on decoded bounding boxes, converting both
+ the predicted boxes and regression targets to absolute
+ coordinates format. Default False. It should be `True` when
+ using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
train_cfg (dict): Training config of SABLRetinaHead.
test_cfg (dict): Testing config of SABLRetinaHead.
loss_cls (dict): Config of classification loss.
diff --git a/mmdet/models/dense_heads/ssd_head.py b/mmdet/models/dense_heads/ssd_head.py
--- a/mmdet/models/dense_heads/ssd_head.py
+++ b/mmdet/models/dense_heads/ssd_head.py
@@ -23,7 +23,10 @@ class SSDHead(AnchorHead):
anchor_generator (dict): Config dict for anchor generator
bbox_coder (dict): Config of bounding box coder.
reg_decoded_bbox (bool): If true, the regression loss would be
- applied on decoded bounding boxes. Default: False
+ applied directly on decoded bounding boxes, converting both
+ the predicted boxes and regression targets to absolute
+ coordinates format. Default False. It should be `True` when
+ using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
train_cfg (dict): Training config of anchor head.
test_cfg (dict): Testing config of anchor head.
""" # noqa: W605
@@ -161,6 +164,9 @@ def loss_single(self, cls_score, bbox_pred, anchor, labels, label_weights,
loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples
if self.reg_decoded_bbox:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, it
+ # decodes the already encoded coordinates to absolute format.
bbox_pred = self.bbox_coder.decode(anchor, bbox_pred)
loss_bbox = smooth_l1_loss(
diff --git a/mmdet/models/dense_heads/yolact_head.py b/mmdet/models/dense_heads/yolact_head.py
--- a/mmdet/models/dense_heads/yolact_head.py
+++ b/mmdet/models/dense_heads/yolact_head.py
@@ -280,6 +280,9 @@ def loss_single_OHEM(self, cls_score, bbox_pred, anchors, labels,
loss_cls_neg = topk_loss_cls_neg.sum()
loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples
if self.reg_decoded_bbox:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, it
+ # decodes the already encoded coordinates to absolute format.
bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
loss_bbox = self.loss_bbox(
bbox_pred,
diff --git a/mmdet/models/roi_heads/bbox_heads/bbox_head.py b/mmdet/models/roi_heads/bbox_heads/bbox_head.py
--- a/mmdet/models/roi_heads/bbox_heads/bbox_head.py
+++ b/mmdet/models/roi_heads/bbox_heads/bbox_head.py
@@ -105,6 +105,10 @@ def _get_target_single(self, pos_bboxes, neg_bboxes, pos_gt_bboxes,
pos_bbox_targets = self.bbox_coder.encode(
pos_bboxes, pos_gt_bboxes)
else:
+ # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
+ # is applied directly on the decoded bounding boxes, both
+ # the predicted boxes and regression targets should be with
+ # absolute coordinate format.
pos_bbox_targets = pos_gt_bboxes
bbox_targets[:num_pos, :] = pos_bbox_targets
bbox_weights[:num_pos, :] = 1
@@ -166,6 +170,10 @@ def loss(self,
# do not perform bounding box regression for BG anymore.
if pos_inds.any():
if self.reg_decoded_bbox:
+ # When the regression loss (e.g. `IouLoss`,
+ # `GIouLoss`, `DIouLoss`) is applied directly on
+ # the decoded bounding boxes, it decodes the
+ # already encoded coordinates to absolute format.
bbox_pred = self.bbox_coder.decode(rois[:, 1:], bbox_pred)
if self.reg_class_agnostic:
pos_bbox_pred = bbox_pred.view(
</patch>
|
[]
|
[]
| |||
huggingface__transformers-8245
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytest Errors
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
ai) ubuntu@ip-10-0-1-82:~/transformers$ transformers-cli env
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers==3.4.0', 'console_scripts', 'transformers-cli')())
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 105, in load
module = import_module(match.group('module'))
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/__init__.py", line 135, in <module>
from .pipelines import (
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/pipelines.py", line 38, in <module>
from .tokenization_auto import AutoTokenizer
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_auto.py", line 210, in <module>
(XLMProphetNetConfig, (XLMProphetNetTokenizer, None)),
NameError: name 'XLMProphetNetTokenizer' is not defined
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.6.11
- PyTorch version (GPU?): 1.7.0 (no GPU)
- Tensorflow version (GPU?): 2.2.0 (no GPU)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
```
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
-->
## Information
## To reproduce
Steps to reproduce the behavior:
1. RUN_SLOW=1 pytest examples
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <br>
3 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
4 <br>
5 <p>
6 <p align="center">
7 <a href="https://circleci.com/gh/huggingface/transformers">
8 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
9 </a>
10 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
11 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
12 </a>
13 <a href="https://huggingface.co/transformers/index.html">
14 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online">
15 </a>
16 <a href="https://github.com/huggingface/transformers/releases">
17 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
18 </a>
19 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
20 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
21 </a>
22 </p>
23
24 <h3 align="center">
25 <p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0
26 </h3>
27
28 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation, etc in 100+ languages. Its aim is to make cutting-edge NLP easier to use for everyone.
29
30 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture can be used as a standalone and modified to enable quick research experiments.
31
32 🤗 Transformers is backed by the two most popular deep learning libraries, [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them, allowing you to train your models with one then load it for inference with the other.
33
34 ### Recent contributors
35 [](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/0)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/1)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/2)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/3)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/4)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/5)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/6)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/7)
36
37 ## Online demos
38
39 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer an [inference API](https://huggingface.co/pricing) to use those models.
40
41 Here are a few examples:
42 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
43 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
44 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
45 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
46 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
47 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
48 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
49
50 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
51
52 ## Quick tour
53
54 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model training. Here is how to quickly use a pipeline to classify positive versus negative texts
55
56 ```python
57 >>> from transformers import pipeline
58
59 # Allocate a pipeline for sentiment-analysis
60 >>> classifier = pipeline('sentiment-analysis')
61 >>> classifier('We are very happy to include pipeline into the transformers repository.')
62 [{'label': 'POSITIVE', 'score': 0.9978193640708923}]
63 ```
64
65 The second line of code downloads and caches the pretrained model used by the pipeline, while the third line evaluates it on the given text. Here the answer is "positive" with a confidence of 99.8%.
66
67 Here is another example of a pipeline, this one used to extract answers to a question from some context:
68
69 ``` python
70 >>> from transformers import pipeline
71
72 # Allocate a pipeline for question-answering
73 >>> question_answerer = pipeline('question-answering')
74 >>> question_answerer({
75 ... 'question': 'What is the name of the repository ?',
76 ... 'context': 'Pipeline have been included in the huggingface/transformers repository'
77 ... })
78 {'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
79
80 ```
81
82 On top of the answer, the pretrained model used here returned its confidence score, along with the start position and its end position in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/transformers/task_summary.html).
83
84 To download and use any of the pretrained models on your given task, you just need these three lines of code (PyTorch version):
85 ```python
86 >>> from transformers import AutoTokenizer, AutoModel
87
88 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
89 >>> model = AutoModel.from_pretrained("bert-base-uncased")
90
91 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
92 >>> outputs = model(**inputs)
93 ```
94 or for TensorFlow:
95 ```python
96 >>> from transformers import AutoTokenizer, TFAutoModel
97
98 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
99 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
100
101 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
102 >>> outputs = model(**inputs)
103 ```
104
105 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single text (or a list of texts), as we can see on the fourth line of both code examples. It will output a dictionary that you can directly pass to your model (which is done on the fifth line).
106
107 The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. For instance, [this tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
108
109 ## Why should I use transformers?
110
111 1. Easy-to-use state-of-the-art models:
112 - High performance on NLU and NLG tasks.
113 - Low barrier to entry for educators and practitioners.
114 - Few user-facing abstractions with just three classes to learn.
115 - A unified API for using all our pretrained models.
116
117 1. Lower compute costs, smaller carbon footprint:
118 - Researchers can share trained models instead of always retraining.
119 - Practitioners can reduce compute time and production costs.
120 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
121
122 1. Choose the right framework for every part of a model's lifetime:
123 - Train state-of-the-art models in 3 lines of code.
124 - Move a single model between TF2.0/PyTorch frameworks at will.
125 - Seamlessly pick the right framework for training, evaluation, production.
126
127 1. Easily customize a model or an example to your needs:
128 - Examples for each architecture to reproduce the results by the official authors of said architecture.
129   - Expose the models' internals as consistently as possible.
130 - Model files can be used independently of the library for quick experiments.
131
132 ## Why shouldn't I use transformers?
133
134 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving in additional abstractions/files.
135 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
136 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
137
138 ## Installation
139
140 This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for [examples](https://github.com/huggingface/transformers/tree/master/examples)) and TensorFlow 2.0.
141
142 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
143
144 First, create a virtual environment with the version of Python you're going to use and activate it.
145
146 Then, you will need to install one of, or both, TensorFlow 2.0 and PyTorch.
147 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and/or [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.
148
149 When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows:
150
151 ```bash
152 pip install transformers
153 ```
154
155 If you'd like to play with the examples, you must [install the library from source](https://huggingface.co/transformers/installation.html#installing-from-source).
156
157 ## Models architectures
158
159 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/transformers/model_summary.html) for a high-level summary of each them):
160
161 1. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
162 1. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
163 1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
164 1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
165 1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
166 1. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
167 1. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
168 1. **[DeBERTa](https://huggingface.co/transformers/model_doc/deberta.html)** (from Microsoft Research) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
169 1. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
170 1. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
171 1. **[DPR](https://huggingface.co/transformers/model_doc/dpr.html)** (from Facebook) released with the paper [Dense Passage Retrieval
172 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
173 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
174 1. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
175 1. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
176 1. **[Funnel Transformer](https://huggingface.co/transformers/model_doc/funnel.html)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
177 1. **[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
178 1. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
179 1. **[LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
180 1. **[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
181 1. **[LXMERT](https://huggingface.co/transformers/model_doc/lxmert.html)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
182 1. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
183 1. **[MBart](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
184 1. **[Pegasus](https://huggingface.co/transformers/model_doc/pegasus.html)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
185 1. **[ProphetNet](https://huggingface.co/transformers/model_doc/prophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
186 1. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
187 1. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
189 1. **[SqueezeBert](https://huggingface.co/transformers/model_doc/squeezebert.html)** released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
190 1. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
191 1. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
192 1. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
193 1. **[XLM-ProphetNet](https://huggingface.co/transformers/model_doc/xlmprophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
194 1. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
195 1. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
196 1. **[Other community models](https://huggingface.co/models)**, contributed by the [community](https://huggingface.co/users).
197 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
198
199 These implementations have been tested on several datasets (see the example scripts) and should match the performances of the original implementations. You can find more details on the performances in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
200
201
202 ## Learn more
203
204 | Section | Description |
205 |-|-|
206 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
207 | [Task summary](https://huggingface.co/transformers/task_summary.html) | Tasks supported by 🤗 Transformers |
208 | [Preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) | Using the `Tokenizer` class to prepare data for the models |
209 | [Training and fine-tuning](https://huggingface.co/transformers/training.html) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
210 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
211 | [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) | Upload and share your fine-tuned models with the community |
212 | [Migration](https://huggingface.co/transformers/migration.html) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
213
214 ## Citation
215
216 We now have a [paper](https://arxiv.org/abs/1910.03771) you can cite for the 🤗 Transformers library:
217 ```bibtex
218 @article{Wolf2019HuggingFacesTS,
219 title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
220 author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
221 journal={ArXiv},
222 year={2019},
223 volume={abs/1910.03771}
224 }
225 ```
226
[end of README.md]
[start of setup.py]
1 """
2 Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/master/setup.py
3
4 To create the package for pypi.
5
6 1. Change the version in __init__.py, setup.py as well as docs/source/conf.py. Remove the master from the links in
7 the new models of the README:
8 (https://huggingface.co/transformers/master/model_doc/ -> https://huggingface.co/transformers/model_doc/)
9 then run `make fix-copies` to fix the index of the documentation.
10
11 2. Unpin specific versions from setup.py that use a git install.
12
13 2. Commit these changes with the message: "Release: VERSION"
14
15 3. Add a tag in git to mark the release: "git tag VERSION -m'Adds tag VERSION for pypi' "
16 Push the tag to git: git push --tags origin master
17
18 4. Build both the sources and the wheel. Do not change anything in setup.py between
19 creating the wheel and the source distribution (obviously).
20
21 For the wheel, run: "python setup.py bdist_wheel" in the top level directory.
22 (this will build a wheel for the python version you use to build it).
23
24 For the sources, run: "python setup.py sdist"
25 You should now have a /dist directory with both .whl and .tar.gz source versions.
26
27 5. Check that everything looks correct by uploading the package to the pypi test server:
28
29 twine upload dist/* -r pypitest
30 (pypi suggest using twine as other methods upload files via plaintext.)
31 You may have to specify the repository url, use the following command then:
32 twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
33
34 Check that you can install it in a virtualenv by running:
35 pip install -i https://testpypi.python.org/pypi transformers
36
37 6. Upload the final version to actual pypi:
38 twine upload dist/* -r pypi
39
40 7. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory.
41
42 8. Add the release version to docs/source/_static/js/custom.js and .circleci/deploy.sh
43
44 9. Update README.md to redirect to correct documentation.
45 """
46
47 import os
48 import shutil
49 from pathlib import Path
50
51 from setuptools import find_packages, setup
52
53
54 # Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466
55 stale_egg_info = Path(__file__).parent / "transformers.egg-info"
56 if stale_egg_info.exists():
57 print(
58 (
59 "Warning: {} exists.\n\n"
60 "If you recently updated transformers to 3.0 or later, this is expected,\n"
61 "but it may prevent transformers from installing in editable mode.\n\n"
62 "This directory is automatically generated by Python's packaging tools.\n"
63 "I will remove it now.\n\n"
64 "See https://github.com/pypa/pip/issues/5466 for details.\n"
65 ).format(stale_egg_info)
66 )
67 shutil.rmtree(stale_egg_info)
68
69
70 extras = {}
71
72 extras["ja"] = ["fugashi>=1.0", "ipadic>=1.0.0,<2.0", "unidic_lite>=1.0.7", "unidic>=1.0.2"]
73 extras["sklearn"] = ["scikit-learn"]
74
75 # keras2onnx and onnxconverter-common version is specific through a commit until 1.7.0 lands on pypi
76 extras["tf"] = [
77 "tensorflow>=2.0",
78 "onnxconverter-common",
79 "keras2onnx"
80 # "onnxconverter-common @ git+git://github.com/microsoft/onnxconverter-common.git@f64ca15989b6dc95a1f3507ff6e4c395ba12dff5#egg=onnxconverter-common",
81 # "keras2onnx @ git+git://github.com/onnx/keras-onnx.git@cbdc75cb950b16db7f0a67be96a278f8d2953b48#egg=keras2onnx",
82 ]
83 extras["tf-cpu"] = [
84 "tensorflow-cpu>=2.0",
85 "onnxconverter-common",
86 "keras2onnx"
87 # "onnxconverter-common @ git+git://github.com/microsoft/onnxconverter-common.git@f64ca15989b6dc95a1f3507ff6e4c395ba12dff5#egg=onnxconverter-common",
88 # "keras2onnx @ git+git://github.com/onnx/keras-onnx.git@cbdc75cb950b16db7f0a67be96a278f8d2953b48#egg=keras2onnx",
89 ]
90 extras["torch"] = ["torch>=1.0"]
91
92 if os.name == "nt": # windows
93 extras["retrieval"] = ["datasets"] # faiss is not supported on windows
94 extras["flax"] = [] # jax is not supported on windows
95 else:
96 extras["retrieval"] = ["faiss-cpu", "datasets"]
97 extras["flax"] = ["jaxlib==0.1.55", "jax>=0.2.0", "flax==0.2.2"]
98
99 extras["tokenizers"] = ["tokenizers==0.9.2"]
100 extras["onnxruntime"] = ["onnxruntime>=1.4.0", "onnxruntime-tools>=1.4.2"]
101
102 extras["serving"] = ["pydantic", "uvicorn", "fastapi", "starlette"]
103
104 extras["sentencepiece"] = ["sentencepiece==0.1.91"]
105 extras["retrieval"] = ["faiss-cpu", "datasets"]
106 extras["testing"] = ["pytest", "pytest-xdist", "timeout-decorator", "parameterized", "psutil"] + extras["retrieval"]
107 # sphinx-rtd-theme==0.5.0 introduced big changes in the style.
108 extras["docs"] = ["recommonmark", "sphinx", "sphinx-markdown-tables", "sphinx-rtd-theme==0.4.3", "sphinx-copybutton"]
109 extras["quality"] = ["black >= 20.8b1", "isort >= 5.5.4", "flake8 >= 3.8.3"]
110
111
112 extras["all"] = extras["tf"] + extras["torch"] + extras["flax"] + extras["sentencepiece"] + extras["tokenizers"]
113
114 extras["dev"] = extras["all"] + extras["testing"] + extras["quality"] + extras["ja"] + extras["docs"] + extras["sklearn"]
115
116
117 setup(
118 name="transformers",
119 version="3.4.0",
120 author="Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors",
121 author_email="[email protected]",
122 description="State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch",
123 long_description=open("README.md", "r", encoding="utf-8").read(),
124 long_description_content_type="text/markdown",
125 keywords="NLP deep learning transformer pytorch tensorflow BERT GPT GPT-2 google openai CMU",
126 license="Apache",
127 url="https://github.com/huggingface/transformers",
128 package_dir={"": "src"},
129 packages=find_packages("src"),
130 install_requires=[
131 "numpy",
132 "tokenizers == 0.9.2",
133 # dataclasses for Python versions that don't have it
134 "dataclasses;python_version<'3.7'",
135 # utilities from PyPA to e.g. compare versions
136 "packaging",
137 # filesystem locks e.g. to prevent parallel downloads
138 "filelock",
139 # for downloading models over HTTPS
140 "requests",
141 # progress bars in model download and training scripts
142 "tqdm >= 4.27",
143 # for OpenAI GPT
144 "regex != 2019.12.17",
145 # for SentencePiece models
146 "sentencepiece == 0.1.91",
147 "protobuf",
148 # for XLM
149 "sacremoses",
150 ],
151 extras_require=extras,
152 entry_points={"console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"]},
153 python_requires=">=3.6.0",
154 classifiers=[
155 "Development Status :: 5 - Production/Stable",
156 "Intended Audience :: Developers",
157 "Intended Audience :: Education",
158 "Intended Audience :: Science/Research",
159 "License :: OSI Approved :: Apache Software License",
160 "Operating System :: OS Independent",
161 "Programming Language :: Python :: 3",
162 "Programming Language :: Python :: 3.6",
163 "Programming Language :: Python :: 3.7",
164 "Topic :: Scientific/Engineering :: Artificial Intelligence",
165 ],
166 )
167
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
e1b1b614b132b64e2bd7c3aaf7909d38956c8dc2
|
pytest Errors
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
(ai) ubuntu@ip-10-0-1-82:~/transformers$ transformers-cli env
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers==3.4.0', 'console_scripts', 'transformers-cli')())
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 105, in load
module = import_module(match.group('module'))
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/__init__.py", line 135, in <module>
from .pipelines import (
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/pipelines.py", line 38, in <module>
from .tokenization_auto import AutoTokenizer
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_auto.py", line 210, in <module>
(XLMProphetNetConfig, (XLMProphetNetTokenizer, None)),
NameError: name 'XLMProphetNetTokenizer' is not defined
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.6.11
- PyTorch version (GPU?): 1.7.0 (no GPU)
- Tensorflow version (GPU?): 2.2.0 (no GPU)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
```
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
-->
## Information
## To reproduce
Steps to reproduce the behavior:
1. RUN_SLOW=1 pytest examples
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
|
I got the same error while loading BERT tokeniser and model from torch hub
Hello! Do you mind pasting the result of `pip list` done in your environment? Thank you!
It’s an Anaconda virtual environment.
Python 3.6.11
$ pip list
Package Version Location
--------------------------------- ------------------- ----------------------------------------------------------
absl-py 0.11.0
aiohttp 3.7.2
appdirs 1.4.4
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
attrs 20.2.0
Automat 20.2.0
awscli 1.18.169
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
bcrypt 3.2.0
beautifulsoup4 4.9.3
bertopic 0.2.3
black 20.8b1
bleach 3.2.1
blinker 1.4
bokeh 2.2.3
boto 2.49.0
boto3 1.16.9
botocore 1.19.9
brotlipy 0.7.0
bz2file 0.98
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chainer 7.7.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.2.2
colorama 0.4.3
constantly 15.1.0
cryptography 3.2.1
cssselect 1.1.0
cycler 0.10.0
cymem 1.31.2
Cython 0.29.21
dataclasses 0.7
decorator 4.4.2
deepdist 0.1
defusedxml 0.6.0
dill 0.3.2
diskcache 4.0.0
docutils 0.15.2
entrypoints 0.3
feynman 2.0.0
filelock 3.0.12
findspark 1.3.0
Flask 1.1.2
flatbuffers 1.12
funcy 1.15
future 0.18.2
gast 0.3.3
gensim 3.8.3
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
googleapis-common-protos 1.52.0
grpcio 1.33.2
h5py 2.10.0
hdbscan 0.8.26
html5lib 1.1
hyperlink 20.0.1
hypothesis 5.41.0
idna 2.10
idna-ssl 1.1.0
importlib-metadata 2.0.0
incremental 17.5.0
iniconfig 1.1.1
ipykernel 5.3.4
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
itemadapter 0.1.1
itemloaders 1.0.3
itsdangerous 1.1.0
jedi 0.17.2
Jinja2 2.11.2
jmespath 0.10.0
joblib 0.17.0
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-contrib-core 0.3.3
jupyter-core 4.6.3
jupyter-nbextensions-configurator 0.4.1
jupyterlab 2.2.9
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.0
llvmlite 0.34.0
lxml 4.6.1
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.3.2
mistune 0.8.4
mnist 0.2.2
more-itertools 8.6.0
mpmath 1.1.0
MulticoreTSNE 0.1
multidict 4.7.5
murmurhash 0.26.4
mypy-extensions 0.4.3
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.1
nltk 3.4.4
notebook 6.1.4
numba 0.51.2
numexpr 2.7.1
numpy 1.19.2
oauthlib 3.0.1
olefile 0.46
opt-einsum 3.3.0
packaging 20.4
pandas 1.1.4
pandocfilters 1.4.2
parameterized 0.7.4
parsel 1.6.0
parso 0.7.1
pathspec 0.8.0
patsy 0.5.1
petastorm 0.7.6 /home/ubuntu/petastorm
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.0.1
pip 20.2.4
plac 1.0.0
pluggy 0.13.1
preshed 0.46.4
prometheus-client 0.8.0
promise 2.3
prompt-toolkit 3.0.8
Protego 0.1.16
protobuf 3.13.0
psutil 5.7.3
ptyprocess 0.6.0
py 1.9.0
py4j 0.10.9
pyarrow 2.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.7
pycparser 2.20
PyDispatcher 2.0.5
pydot 1.4.1
Pygments 2.7.2
PyHamcrest 2.0.2
PyJWT 1.7.1
pyLDAvis 2.1.2
pyOpenSSL 19.1.0
pyparsing 2.4.7
PyQt5 5.12.3
PyQt5-sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.17.3
PySocks 1.7.1
pyspark 3.0.1
pytest 6.1.2
python-dateutil 2.8.1
pytz 2020.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.7
QtPy 1.9.0
queuelib 1.5.0
regex 2020.10.28
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.4.1
s3transfer 0.3.3
sacremoses 0.0.43
scapy 2.4.4
scikit-learn 0.23.2
scipy 1.5.2
Scrapy 2.4.0
seaborn 0.11.0
semver 2.8.1
Send2Trash 1.5.0
sense2vec 0.6.0
sentence-transformers 0.3.6
sentencepiece 0.1.91
service-identity 18.1.0
setuptools 49.6.0.post20201009
six 1.15.0
sklearn 0.0
smart-open 1.6.0
sortedcontainers 2.2.2
soupsieve 2.0.1
spacy 0.101.0
sputnik 0.9.3
statsmodels 0.12.1
sympy 1.6.2
tensorboard 2.3.0
tensorboard-plugin-wit 1.7.0
tensorflow 2.2.0
tensorflow-datasets 1.2.0
tensorflow-estimator 2.2.0
tensorflow-metadata 0.14.0
tensorflow-probability 0.6.0
tensorflowonspark 1.4.1
termcolor 1.1.0
terminado 0.9.1
testpath 0.4.4
tfp-nightly 0.5.0.dev20190522
thinc 5.0.8
threadpoolctl 2.1.0
timeout-decorator 0.4.1
tokenizers 0.9.2
toml 0.10.1
torch 1.7.0
torchaudio 0.7.0a0+ac17b64
torchvision 0.8.1
tornado 6.1
tqdm 4.51.0
traitlets 4.3.3
transformers 3.1.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages
Twisted 20.3.0
twython 3.8.2
typed-ast 1.4.1
typing-extensions 3.7.4.3
umap-learn 0.4.6
urllib3 1.25.11
w3lib 1.22.0
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.35.1
widgetsnbextension 3.5.1
wordcloud 1.8.0
wrapt 1.12.1
yarl 1.6.2
zipp 3.4.0
zope.interface 5.1.2
> On Nov 2, 2020, at 7:33 AM, Lysandre Debut <[email protected]> wrote:
>
>
> Hello! Do you mind pasting the result of pip list done in your environment? Thank you!
It seems you have a conflict between your `transformers` version, as `transformers-cli env` returns v3.4.0, while your `pip list` returns v3.1.0?
Mea culpa! I sent you the pip list from my Mac.
Here’s the Ubuntu 20.04 LTS results
$ conda list transformers
# packages in environment at /home/ubuntu/anaconda2/envs/ai:
#
# Name Version Build Channel
sentence-transformers 0.3.6 pypi_0 pypi
transformers 3.4.0 dev_0 <develop>
(ai) ubuntu@ip-10-0-1-82:~/transformers$
$ pip list
Package Version Location
--------------------------------- ------------------- ---------------------------------------------------------------------------------------
absl-py 0.11.0
aiohttp 3.7.2
appdirs 1.4.4
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
attrs 20.2.0
Automat 20.2.0
awscli 1.18.169
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
bcrypt 3.2.0
beautifulsoup4 4.9.3
bertopic 0.2.3
black 20.8b1
bleach 3.2.1
blinker 1.4
bokeh 2.2.3
boto 2.49.0
boto3 1.16.9
botocore 1.19.9
brotlipy 0.7.0
bz2file 0.98
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chainer 7.7.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.2.2
colorama 0.4.3
constantly 15.1.0
cryptography 3.2.1
cssselect 1.1.0
cycler 0.10.0
cymem 1.31.2
Cython 0.29.21
dataclasses 0.7
decorator 4.4.2
deepdist 0.1
defusedxml 0.6.0
dill 0.3.2
diskcache 4.0.0
docutils 0.15.2
entrypoints 0.3
feynman 2.0.0
filelock 3.0.12
findspark 1.3.0
Flask 1.1.2
flatbuffers 1.12
funcy 1.15
future 0.18.2
gast 0.3.3
gensim 3.8.3
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
googleapis-common-protos 1.52.0
grpcio 1.33.2
h5py 2.10.0
hdbscan 0.8.26
html5lib 1.1
hyperlink 20.0.1
hypothesis 5.41.0
idna 2.10
idna-ssl 1.1.0
importlib-metadata 2.0.0
incremental 17.5.0
iniconfig 1.1.1
ipykernel 5.3.4
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
itemadapter 0.1.1
itemloaders 1.0.3
itsdangerous 1.1.0
jedi 0.17.2
Jinja2 2.11.2
jmespath 0.10.0
joblib 0.17.0
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-contrib-core 0.3.3
jupyter-core 4.6.3
jupyter-nbextensions-configurator 0.4.1
jupyterlab 2.2.9
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.0
llvmlite 0.34.0
lxml 4.6.1
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.3.2
mistune 0.8.4
mnist 0.2.2
more-itertools 8.6.0
mpmath 1.1.0
MulticoreTSNE 0.1
multidict 4.7.5
murmurhash 0.26.4
mypy-extensions 0.4.3
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.1
nltk 3.4.4
notebook 6.1.4
numba 0.51.2
numexpr 2.7.1
numpy 1.19.2
oauthlib 3.0.1
olefile 0.46
opt-einsum 3.3.0
packaging 20.4
pandas 1.1.4
pandocfilters 1.4.2
parameterized 0.7.4
parsel 1.6.0
parso 0.7.1
pathspec 0.8.0
patsy 0.5.1
petastorm 0.7.6 /home/ubuntu/petastorm
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.0.1
pip 20.2.4
plac 1.0.0
pluggy 0.13.1
preshed 0.46.4
prometheus-client 0.8.0
promise 2.3
prompt-toolkit 3.0.8
Protego 0.1.16
protobuf 3.13.0
psutil 5.7.3
ptyprocess 0.6.0
py 1.9.0
py4j 0.10.9
pyarrow 2.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.7
pycparser 2.20
PyDispatcher 2.0.5
pydot 1.4.1
Pygments 2.7.2
PyHamcrest 2.0.2
PyJWT 1.7.1
pyLDAvis 2.1.2
pyOpenSSL 19.1.0
pyparsing 2.4.7
PyQt5 5.12.3
PyQt5-sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.17.3
PySocks 1.7.1
pyspark 3.0.1
pytest 6.1.2
python-dateutil 2.8.1
pytz 2020.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.7
QtPy 1.9.0
queuelib 1.5.0
regex 2020.10.28
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.4.1
s3transfer 0.3.3
sacremoses 0.0.43
scapy 2.4.4
scikit-learn 0.23.2
scipy 1.5.2
Scrapy 2.4.0
seaborn 0.11.0
semver 2.8.1
Send2Trash 1.5.0
sense2vec 0.6.0
sentence-transformers 0.3.6
sentencepiece 0.1.91
service-identity 18.1.0
setuptools 49.6.0.post20201009
six 1.15.0
sklearn 0.0
smart-open 1.6.0
sortedcontainers 2.2.2
soupsieve 2.0.1
spacy 0.101.0
sputnik 0.9.3
statsmodels 0.12.1
sympy 1.6.2
tensorboard 2.3.0
tensorboard-plugin-wit 1.7.0
tensorflow 2.2.0
tensorflow-datasets 1.2.0
tensorflow-estimator 2.2.0
tensorflow-metadata 0.14.0
tensorflow-probability 0.6.0
tensorflowonspark 1.4.1
termcolor 1.1.0
terminado 0.9.1
testpath 0.4.4
tfp-nightly 0.5.0.dev20190522
thinc 5.0.8
threadpoolctl 2.1.0
timeout-decorator 0.4.1
tokenizers 0.9.2
toml 0.10.1
torch 1.7.0
torchaudio 0.7.0a0+ac17b64
torchvision 0.8.1
tornado 6.1
tqdm 4.51.0
traitlets 4.3.3
transformers 3.4.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg
Twisted 20.3.0
twython 3.8.2
typed-ast 1.4.1
typing-extensions 3.7.4.3
umap-learn 0.4.6
urllib3 1.25.11
w3lib 1.22.0
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.35.1
widgetsnbextension 3.5.1
wordcloud 1.8.0
wrapt 1.12.1
yarl 1.6.2
zipp 3.4.0
zope.interface 5.1.2
> On Nov 2, 2020, at 9:15 AM, Lysandre Debut <[email protected]> wrote:
>
>
> It seems you have a conflict between your transformers versions, as transformers-cli env returns v3.4.0, while your pip list returns v3.1.0?
>
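A minimal diagnostic sketch for this kind of version mismatch (not part of the original thread; it only assumes `transformers` is importable):

```python
# Hypothetical check, not from the issue: print the version and path of the
# transformers package that Python actually imports, to spot a stale egg or
# a second installation shadowing the one pip reports.
import transformers

print(transformers.__version__)  # version actually in use
print(transformers.__file__)     # filesystem location of the imported package
```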
|
2020-11-02T18:57:05Z
|
<patch>
diff --git a/src/transformers/tokenization_auto.py b/src/transformers/tokenization_auto.py
--- a/src/transformers/tokenization_auto.py
+++ b/src/transformers/tokenization_auto.py
@@ -113,6 +113,7 @@
T5Tokenizer = None
XLMRobertaTokenizer = None
XLNetTokenizer = None
+ XLMProphetNetTokenizer = None
if is_tokenizers_available():
from .tokenization_albert_fast import AlbertTokenizerFast
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-7019
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cumsum sums the groupby column
It shouldn't sum the grouped-by column (in fact, that column should become the index, if the groupby uses as_index=True).
```
In [13]: df = pd.DataFrame([[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9]], columns=['A', 'B', 'C'])
In [14]: g = df.groupby('A')
In [16]: g.cumsum()
Out[16]:
   A   B   C
0  1   2 NaN
1  2 NaN   9
2  3   4   9
[3 rows x 3 columns]
```
This is a consequence of the operation being dispatched. Should be fixed up for 0.14, possibly along with some other whitelisted groupby functions.
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, for writing .xlsx files
118 - xlrd >= 0.9.0
119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
120 - Alternative Excel writer.
121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
122 - Needed for `pandas.io.gbq`
123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
124 - One of the following combinations of libraries is needed to use the
125 top-level [`pandas.read_html`][read-html-docs] function:
126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
127 recent version of [html5lib][html5lib] is okay.)
128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
131 for reasons as to why you should probably **not** take this approach.
132
133 #### Notes about HTML parsing libraries
134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
135 either [lxml][lxml] or [html5lib][html5lib] or both.
136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
137 installed.
138 - You are strongly encouraged to read [HTML reading
139 gotchas][html-gotchas]. It explains issues surrounding the
140 installation and usage of the above three libraries.
141 - You may need to install an older version of
142 [BeautifulSoup4][BeautifulSoup4]:
143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
144 32-bit Ubuntu/Debian
145 - Additionally, if you're using [Anaconda][Anaconda] you should
146 definitely read [the gotchas about HTML parsing][html-gotchas]
147 libraries
148 - If you're on a system with `apt-get` you can do
149
150 ```sh
151 sudo apt-get build-dep python-lxml
152 ```
153
154 to get the necessary dependencies for installation of [lxml][lxml].
155 This will prevent further headaches down the line.
156
157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
159 [lxml]: http://lxml.de
160 [Anaconda]: https://store.continuum.io/cshop/anaconda
161 [NumPy]: http://numpy.scipy.org/
162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
164
165 ## Installation from sources
166 To install pandas from source you need Cython in addition to the normal
167 dependencies above. Cython can be installed from pypi:
168
169 ```sh
170 pip install cython
171 ```
172
173 In the `pandas` directory (same one where you found this file after
174 cloning the git repo), execute:
175
176 ```sh
177 python setup.py install
178 ```
179
180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
181
182 ```sh
183 python setup.py develop
184 ```
185
186 Alternatively, you can use `pip` if you want all the dependencies pulled
187 in automatically (the `-e` option is for installing it in [development
188 mode](http://www.pip-installer.org/en/latest/usage.html)):
189
190 ```sh
191 pip install -e .
192 ```
193
194 On Windows, you will need to install MinGW and execute:
195
196 ```sh
197 python setup.py build --compiler=mingw32
198 python setup.py install
199 ```
200
201 See http://pandas.pydata.org/ for more information.
202
203 ## License
204 BSD
205
206 ## Documentation
207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
208
209 The Sphinx documentation should provide a good starting point for learning how
210 to use the library. Expect the docs to continue to expand as time goes on.
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Discussion and Development
217 Since pandas development is related to a number of other scientific
218 Python projects, questions are welcome on the scipy-user mailing
219 list. Specialized discussions or design issues should take place on
220 the pystatsmodels mailing list / Google group, where
221 ``scikits.statsmodels`` and other libraries will also be discussed:
222
223 http://groups.google.com/group/pystatsmodels
224
[end of README.md]
[start of pandas/tools/pivot.py]
1 # pylint: disable=E1103
2
3 import warnings
4
5 from pandas import Series, DataFrame
6 from pandas.core.index import MultiIndex
7 from pandas.core.groupby import Grouper
8 from pandas.tools.merge import concat
9 from pandas.tools.util import cartesian_product
10 from pandas.compat import range, lrange, zip
11 from pandas.util.decorators import deprecate_kwarg
12 from pandas import compat
13 import pandas.core.common as com
14 import numpy as np
15
16 @deprecate_kwarg(old_arg_name='cols', new_arg_name='columns')
17 @deprecate_kwarg(old_arg_name='rows', new_arg_name='index')
18 def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
19 fill_value=None, margins=False, dropna=True):
20 """
21 Create a spreadsheet-style pivot table as a DataFrame. The levels in the
22 pivot table will be stored in MultiIndex objects (hierarchical indexes) on
23 the index and columns of the result DataFrame
24
25 Parameters
26 ----------
27 data : DataFrame
28 values : column to aggregate, optional
29 index : a column, Grouper, array which has the same length as data, or list of them.
30 Keys to group by on the pivot table index.
31 If an array is passed, it is being used as the same manner as column values.
32 columns : a column, Grouper, array which has the same length as data, or list of them.
33 Keys to group by on the pivot table column.
34 If an array is passed, it is being used as the same manner as column values.
35 aggfunc : function, default numpy.mean, or list of functions
36 If list of functions passed, the resulting pivot table will have
37 hierarchical columns whose top level are the function names (inferred
38 from the function objects themselves)
39 fill_value : scalar, default None
40 Value to replace missing values with
41 margins : boolean, default False
42 Add all row / columns (e.g. for subtotal / grand totals)
43 dropna : boolean, default True
44 Do not include columns whose entries are all NaN
45 rows : kwarg only alias of index [deprecated]
46 cols : kwarg only alias of columns [deprecated]
47
48 Examples
49 --------
50 >>> df
51 A B C D
52 0 foo one small 1
53 1 foo one large 2
54 2 foo one large 2
55 3 foo two small 3
56 4 foo two small 3
57 5 bar one large 4
58 6 bar one small 5
59 7 bar two small 6
60 8 bar two large 7
61
62 >>> table = pivot_table(df, values='D', index=['A', 'B'],
63 ... columns=['C'], aggfunc=np.sum)
64 >>> table
65 small large
66 foo one 1 4
67 two 6 NaN
68 bar one 5 4
69 two 6 7
70
71 Returns
72 -------
73 table : DataFrame
74 """
75 index = _convert_by(index)
76 columns = _convert_by(columns)
77
78 if isinstance(aggfunc, list):
79 pieces = []
80 keys = []
81 for func in aggfunc:
82 table = pivot_table(data, values=values, index=index, columns=columns,
83 fill_value=fill_value, aggfunc=func,
84 margins=margins)
85 pieces.append(table)
86 keys.append(func.__name__)
87 return concat(pieces, keys=keys, axis=1)
88
89 keys = index + columns
90
91 values_passed = values is not None
92 if values_passed:
93 if isinstance(values, (list, tuple)):
94 values_multi = True
95 else:
96 values_multi = False
97 values = [values]
98 else:
99 values = list(data.columns.drop(keys))
100
101 if values_passed:
102 to_filter = []
103 for x in keys + values:
104 if isinstance(x, Grouper):
105 x = x.key
106 try:
107 if x in data:
108 to_filter.append(x)
109 except TypeError:
110 pass
111 if len(to_filter) < len(data.columns):
112 data = data[to_filter]
113
114 grouped = data.groupby(keys)
115 agged = grouped.agg(aggfunc)
116
117 table = agged
118 if table.index.nlevels > 1:
119 to_unstack = [agged.index.names[i]
120 for i in range(len(index), len(keys))]
121 table = agged.unstack(to_unstack)
122
123 if not dropna:
124 try:
125 m = MultiIndex.from_arrays(cartesian_product(table.index.levels))
126 table = table.reindex_axis(m, axis=0)
127 except AttributeError:
128 pass # it's a single level
129
130 try:
131 m = MultiIndex.from_arrays(cartesian_product(table.columns.levels))
132 table = table.reindex_axis(m, axis=1)
133 except AttributeError:
134 pass # it's a single level or a series
135
136 if isinstance(table, DataFrame):
137 if isinstance(table.columns, MultiIndex):
138 table = table.sortlevel(axis=1)
139 else:
140 table = table.sort_index(axis=1)
141
142 if fill_value is not None:
143 table = table.fillna(value=fill_value, downcast='infer')
144
145 if margins:
146 table = _add_margins(table, data, values, rows=index,
147 cols=columns, aggfunc=aggfunc)
148
149 # discard the top level
150 if values_passed and not values_multi:
151 table = table[values[0]]
152
153 if len(index) == 0 and len(columns) > 0:
154 table = table.T
155
156 return table
157
158
159 DataFrame.pivot_table = pivot_table
160
161
162 def _add_margins(table, data, values, rows, cols, aggfunc):
163
164 grand_margin = _compute_grand_margin(data, values, aggfunc)
165
166 if not values and isinstance(table, Series):
167 # If there are no values and the table is a series, then there is only
168 # one column in the data. Compute grand margin and return it.
169 row_key = ('All',) + ('',) * (len(rows) - 1) if len(rows) > 1 else 'All'
170 return table.append(Series({row_key: grand_margin['All']}))
171
172 if values:
173 marginal_result_set = _generate_marginal_results(table, data, values, rows, cols, aggfunc, grand_margin)
174 if not isinstance(marginal_result_set, tuple):
175 return marginal_result_set
176 result, margin_keys, row_margin = marginal_result_set
177 else:
178 marginal_result_set = _generate_marginal_results_without_values(table, data, rows, cols, aggfunc)
179 if not isinstance(marginal_result_set, tuple):
180 return marginal_result_set
181 result, margin_keys, row_margin = marginal_result_set
182
183 key = ('All',) + ('',) * (len(rows) - 1) if len(rows) > 1 else 'All'
184
185 row_margin = row_margin.reindex(result.columns)
186 # populate grand margin
187 for k in margin_keys:
188 if isinstance(k, compat.string_types):
189 row_margin[k] = grand_margin[k]
190 else:
191 row_margin[k] = grand_margin[k[0]]
192
193 margin_dummy = DataFrame(row_margin, columns=[key]).T
194
195 row_names = result.index.names
196 result = result.append(margin_dummy)
197 result.index.names = row_names
198
199 return result
200
201
202 def _compute_grand_margin(data, values, aggfunc):
203
204 if values:
205 grand_margin = {}
206 for k, v in data[values].iteritems():
207 try:
208 if isinstance(aggfunc, compat.string_types):
209 grand_margin[k] = getattr(v, aggfunc)()
210 else:
211 grand_margin[k] = aggfunc(v)
212 except TypeError:
213 pass
214 return grand_margin
215 else:
216 return {'All': aggfunc(data.index)}
217
218
219 def _generate_marginal_results(table, data, values, rows, cols, aggfunc, grand_margin):
220 if len(cols) > 0:
221 # need to "interleave" the margins
222 table_pieces = []
223 margin_keys = []
224
225 def _all_key(key):
226 return (key, 'All') + ('',) * (len(cols) - 1)
227
228 if len(rows) > 0:
229 margin = data[rows + values].groupby(rows).agg(aggfunc)
230 cat_axis = 1
231 for key, piece in table.groupby(level=0, axis=cat_axis):
232 all_key = _all_key(key)
233 piece[all_key] = margin[key]
234 table_pieces.append(piece)
235 margin_keys.append(all_key)
236 else:
237 margin = grand_margin
238 cat_axis = 0
239 for key, piece in table.groupby(level=0, axis=cat_axis):
240 all_key = _all_key(key)
241 table_pieces.append(piece)
242 table_pieces.append(Series(margin[key], index=[all_key]))
243 margin_keys.append(all_key)
244
245 result = concat(table_pieces, axis=cat_axis)
246
247 if len(rows) == 0:
248 return result
249 else:
250 result = table
251 margin_keys = table.columns
252
253 if len(cols) > 0:
254 row_margin = data[cols + values].groupby(cols).agg(aggfunc)
255 row_margin = row_margin.stack()
256
257 # slight hack
258 new_order = [len(cols)] + lrange(len(cols))
259 row_margin.index = row_margin.index.reorder_levels(new_order)
260 else:
261 row_margin = Series(np.nan, index=result.columns)
262
263 return result, margin_keys, row_margin
264
265
266 def _generate_marginal_results_without_values(table, data, rows, cols, aggfunc):
267 if len(cols) > 0:
268 # need to "interleave" the margins
269 margin_keys = []
270
271 def _all_key():
272 if len(cols) == 1:
273 return 'All'
274 return ('All', ) + ('', ) * (len(cols) - 1)
275
276 if len(rows) > 0:
277 margin = data[rows].groupby(rows).apply(aggfunc)
278 all_key = _all_key()
279 table[all_key] = margin
280 result = table
281 margin_keys.append(all_key)
282
283 else:
284 margin = data.groupby(level=0, axis=0).apply(aggfunc)
285 all_key = _all_key()
286 table[all_key] = margin
287 result = table
288 margin_keys.append(all_key)
289 return result
290 else:
291 result = table
292 margin_keys = table.columns
293
294 if len(cols):
295 row_margin = data[cols].groupby(cols).apply(aggfunc)
296 else:
297 row_margin = Series(np.nan, index=result.columns)
298
299 return result, margin_keys, row_margin
300
301
302 def _convert_by(by):
303 if by is None:
304 by = []
305 elif (np.isscalar(by) or isinstance(by, (np.ndarray, Series, Grouper))
306 or hasattr(by, '__call__')):
307 by = [by]
308 else:
309 by = list(by)
310 return by
311
312 @deprecate_kwarg(old_arg_name='cols', new_arg_name='columns')
313 @deprecate_kwarg(old_arg_name='rows', new_arg_name='index')
314 def crosstab(index, columns, values=None, rownames=None, colnames=None,
315 aggfunc=None, margins=False, dropna=True):
316 """
317 Compute a simple cross-tabulation of two (or more) factors. By default
318 computes a frequency table of the factors unless an array of values and an
319 aggregation function are passed
320
321 Parameters
322 ----------
323 index : array-like, Series, or list of arrays/Series
324 Values to group by in the rows
325 columns : array-like, Series, or list of arrays/Series
326 Values to group by in the columns
327 values : array-like, optional
328 Array of values to aggregate according to the factors
329 aggfunc : function, optional
330 If no values array is passed, computes a frequency table
331 rownames : sequence, default None
332 If passed, must match number of row arrays passed
333 colnames : sequence, default None
334 If passed, must match number of column arrays passed
335 margins : boolean, default False
336 Add row/column margins (subtotals)
337 dropna : boolean, default True
338 Do not include columns whose entries are all NaN
339 rows : kwarg only alias of index [deprecated]
340 cols : kwarg only alias of columns [deprecated]
341
342 Notes
343 -----
344 Any Series passed will have their name attributes used unless row or column
345 names for the cross-tabulation are specified
346
347 Examples
348 --------
349 >>> a
350 array([foo, foo, foo, foo, bar, bar,
351 bar, bar, foo, foo, foo], dtype=object)
352 >>> b
353 array([one, one, one, two, one, one,
354 one, two, two, two, one], dtype=object)
355 >>> c
356 array([dull, dull, shiny, dull, dull, shiny,
357 shiny, dull, shiny, shiny, shiny], dtype=object)
358
359 >>> crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])
360 b one two
361 c dull shiny dull shiny
362 a
363 bar 1 2 1 0
364 foo 2 2 1 2
365
366 Returns
367 -------
368 crosstab : DataFrame
369 """
370
371 index = com._maybe_make_list(index)
372 columns = com._maybe_make_list(columns)
373
374 rownames = _get_names(index, rownames, prefix='row')
375 colnames = _get_names(columns, colnames, prefix='col')
376
377 data = {}
378 data.update(zip(rownames, index))
379 data.update(zip(colnames, columns))
380
381 if values is None:
382 df = DataFrame(data)
383 df['__dummy__'] = 0
384 table = df.pivot_table('__dummy__', index=rownames, columns=colnames,
385 aggfunc=len, margins=margins, dropna=dropna)
386 return table.fillna(0).astype(np.int64)
387 else:
388 data['__dummy__'] = values
389 df = DataFrame(data)
390 table = df.pivot_table('__dummy__', index=rownames, columns=colnames,
391 aggfunc=aggfunc, margins=margins, dropna=dropna)
392 return table
393
394
395 def _get_names(arrs, names, prefix='row'):
396 if names is None:
397 names = []
398 for i, arr in enumerate(arrs):
399 if isinstance(arr, Series) and arr.name is not None:
400 names.append(arr.name)
401 else:
402 names.append('%s_%d' % (prefix, i))
403 else:
404 if len(names) != len(arrs):
405 raise AssertionError('arrays and names must have the same length')
406 if not isinstance(names, list):
407 names = list(names)
408
409 return names
410
[end of pandas/tools/pivot.py]
[start of vb_suite/groupby.py]
1 from vbench.api import Benchmark
2 from datetime import datetime
3
4 common_setup = """from pandas_vb_common import *
5 """
6
7 setup = common_setup + """
8 N = 100000
9 ngroups = 100
10
11 def get_test_data(ngroups=100, n=N):
12 unique_groups = range(ngroups)
13 arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
14
15 if len(arr) < n:
16 arr = np.asarray(list(arr) + unique_groups[:n - len(arr)],
17 dtype=object)
18
19 random.shuffle(arr)
20 return arr
21
22 # aggregate multiple columns
23 df = DataFrame({'key1' : get_test_data(ngroups=ngroups),
24 'key2' : get_test_data(ngroups=ngroups),
25 'data1' : np.random.randn(N),
26 'data2' : np.random.randn(N)})
27 def f():
28 df.groupby(['key1', 'key2']).agg(lambda x: x.values.sum())
29
30 simple_series = Series(np.random.randn(N))
31 key1 = df['key1']
32 """
33
34 stmt1 = "df.groupby(['key1', 'key2'])['data1'].agg(lambda x: x.values.sum())"
35 groupby_multi_python = Benchmark(stmt1, setup,
36 start_date=datetime(2011, 7, 1))
37
38 stmt3 = "df.groupby(['key1', 'key2']).sum()"
39 groupby_multi_cython = Benchmark(stmt3, setup,
40 start_date=datetime(2011, 7, 1))
41
42 stmt = "df.groupby(['key1', 'key2'])['data1'].agg(np.std)"
43 groupby_multi_series_op = Benchmark(stmt, setup,
44 start_date=datetime(2011, 8, 1))
45
46 groupby_series_simple_cython = \
47 Benchmark('simple_series.groupby(key1).sum()', setup,
48 start_date=datetime(2011, 3, 1))
49
50
51 stmt4 = "df.groupby('key1').rank(pct=True)"
52 groupby_series_simple_cython = Benchmark(stmt4, setup,
53 start_date=datetime(2014, 1, 16))
54
55 #----------------------------------------------------------------------
56 # 2d grouping, aggregate many columns
57
58 setup = common_setup + """
59 labels = np.random.randint(0, 100, size=1000)
60 df = DataFrame(randn(1000, 1000))
61 """
62
63 groupby_frame_cython_many_columns = Benchmark(
64 'df.groupby(labels).sum()', setup,
65 start_date=datetime(2011, 8, 1),
66 logy=True)
67
68 #----------------------------------------------------------------------
69 # single key, long, integer key
70
71 setup = common_setup + """
72 data = np.random.randn(100000, 1)
73 labels = np.random.randint(0, 1000, size=100000)
74 df = DataFrame(data)
75 """
76
77 groupby_frame_singlekey_integer = \
78 Benchmark('df.groupby(labels).sum()', setup,
79 start_date=datetime(2011, 8, 1), logy=True)
80
81 #----------------------------------------------------------------------
82 # group with different functions per column
83
84 setup = common_setup + """
85 fac1 = np.array(['A', 'B', 'C'], dtype='O')
86 fac2 = np.array(['one', 'two'], dtype='O')
87
88 df = DataFrame({'key1': fac1.take(np.random.randint(0, 3, size=100000)),
89 'key2': fac2.take(np.random.randint(0, 2, size=100000)),
90 'value1' : np.random.randn(100000),
91 'value2' : np.random.randn(100000),
92 'value3' : np.random.randn(100000)})
93 """
94
95 groupby_multi_different_functions = \
96 Benchmark("""df.groupby(['key1', 'key2']).agg({'value1' : 'mean',
97 'value2' : 'var',
98 'value3' : 'sum'})""",
99 setup, start_date=datetime(2011, 9, 1))
100
101 groupby_multi_different_numpy_functions = \
102 Benchmark("""df.groupby(['key1', 'key2']).agg({'value1' : np.mean,
103 'value2' : np.var,
104 'value3' : np.sum})""",
105 setup, start_date=datetime(2011, 9, 1))
106
107 #----------------------------------------------------------------------
108 # size() speed
109
110 setup = common_setup + """
111 df = DataFrame({'key1': np.random.randint(0, 500, size=100000),
112 'key2': np.random.randint(0, 100, size=100000),
113 'value1' : np.random.randn(100000),
114 'value2' : np.random.randn(100000),
115 'value3' : np.random.randn(100000)})
116 """
117
118 groupby_multi_size = Benchmark("df.groupby(['key1', 'key2']).size()",
119 setup, start_date=datetime(2011, 10, 1))
120
121 #----------------------------------------------------------------------
122 # Series.value_counts
123
124 setup = common_setup + """
125 s = Series(np.random.randint(0, 1000, size=100000))
126 """
127
128 series_value_counts_int64 = Benchmark('s.value_counts()', setup,
129 start_date=datetime(2011, 10, 21))
130
131 # value_counts on lots of strings
132
133 setup = common_setup + """
134 K = 1000
135 N = 100000
136 uniques = np.array([rands(10) for x in xrange(K)], dtype='O')
137 s = Series(np.tile(uniques, N // K))
138 """
139
140 series_value_counts_strings = Benchmark('s.value_counts()', setup,
141 start_date=datetime(2011, 10, 21))
142
143 #----------------------------------------------------------------------
144 # pivot_table
145
146 setup = common_setup + """
147 fac1 = np.array(['A', 'B', 'C'], dtype='O')
148 fac2 = np.array(['one', 'two'], dtype='O')
149
150 ind1 = np.random.randint(0, 3, size=100000)
151 ind2 = np.random.randint(0, 2, size=100000)
152
153 df = DataFrame({'key1': fac1.take(ind1),
154 'key2': fac2.take(ind2),
155 'key3': fac2.take(ind2),
156 'value1' : np.random.randn(100000),
157 'value2' : np.random.randn(100000),
158 'value3' : np.random.randn(100000)})
159 """
160
161 stmt = "df.pivot_table(rows='key1', cols=['key2', 'key3'])"
162 groupby_pivot_table = Benchmark(stmt, setup, start_date=datetime(2011, 12, 15))
163
164
165 #----------------------------------------------------------------------
166 # dict return values
167
168 setup = common_setup + """
169 labels = np.arange(1000).repeat(10)
170 data = Series(randn(len(labels)))
171 f = lambda x: {'first': x.values[0], 'last': x.values[-1]}
172 """
173
174 groupby_apply_dict_return = Benchmark('data.groupby(labels).apply(f)',
175 setup, start_date=datetime(2011, 12, 15))
176
177 #----------------------------------------------------------------------
178 # First / last functions
179
180 setup = common_setup + """
181 labels = np.arange(10000).repeat(10)
182 data = Series(randn(len(labels)))
183 data[::3] = np.nan
184 data[1::3] = np.nan
185 data2 = Series(randn(len(labels)),dtype='float32')
186 data2[::3] = np.nan
187 data2[1::3] = np.nan
188 labels = labels.take(np.random.permutation(len(labels)))
189 """
190
191 groupby_first = Benchmark('data.groupby(labels).first()', setup,
192 start_date=datetime(2012, 5, 1))
193
194 groupby_first_float32 = Benchmark('data2.groupby(labels).first()', setup,
195 start_date=datetime(2013, 1, 1))
196
197 groupby_last = Benchmark('data.groupby(labels).last()', setup,
198 start_date=datetime(2012, 5, 1))
199
200 groupby_last_float32 = Benchmark('data2.groupby(labels).last()', setup,
201 start_date=datetime(2013, 1, 1))
202
203
204 #----------------------------------------------------------------------
205 # groupby_indices replacement, chop up Series
206
207 setup = common_setup + """
208 try:
209 rng = date_range('1/1/2000', '12/31/2005', freq='H')
210 year, month, day = rng.year, rng.month, rng.day
211 except:
212 rng = date_range('1/1/2000', '12/31/2000', offset=datetools.Hour())
213 year = rng.map(lambda x: x.year)
214 month = rng.map(lambda x: x.month)
215 day = rng.map(lambda x: x.day)
216
217 ts = Series(np.random.randn(len(rng)), index=rng)
218 """
219
220 groupby_indices = Benchmark('len(ts.groupby([year, month, day]))',
221 setup, start_date=datetime(2012, 1, 1))
222
223 #----------------------------------------------------------------------
224 # median
225
226 #----------------------------------------------------------------------
227 # single key, long, integer key
228
229 setup = common_setup + """
230 data = np.random.randn(100000, 2)
231 labels = np.random.randint(0, 1000, size=100000)
232 df = DataFrame(data)
233 """
234
235 groupby_frame_median = \
236 Benchmark('df.groupby(labels).median()', setup,
237 start_date=datetime(2011, 8, 1), logy=True)
238
239
240 setup = common_setup + """
241 data = np.random.randn(1000000, 2)
242 labels = np.random.randint(0, 1000, size=1000000)
243 df = DataFrame(data)
244 """
245
246 groupby_simple_compress_timing = \
247 Benchmark('df.groupby(labels).mean()', setup,
248 start_date=datetime(2011, 8, 1))
249
250
251 #----------------------------------------------------------------------
252 # DataFrame Apply overhead
253
254 setup = common_setup + """
255 N = 10000
256 labels = np.random.randint(0, 2000, size=N)
257 labels2 = np.random.randint(0, 3, size=N)
258 df = DataFrame({'key': labels,
259 'key2': labels2,
260 'value1': randn(N),
261 'value2': ['foo', 'bar', 'baz', 'qux'] * (N / 4)})
262 def f(g):
263 return 1
264 """
265
266 groupby_frame_apply_overhead = Benchmark("df.groupby('key').apply(f)", setup,
267 start_date=datetime(2011, 10, 1))
268
269 groupby_frame_apply = Benchmark("df.groupby(['key', 'key2']).apply(f)", setup,
270 start_date=datetime(2011, 10, 1))
271
272
273 #----------------------------------------------------------------------
274 # DataFrame nth
275
276 setup = common_setup + """
277 df = DataFrame(np.random.randint(1, 100, (10000, 2)))
278 """
279
280 # Not really a fair test as behaviour has changed!
281 groupby_frame_nth = Benchmark("df.groupby(0).nth(0)", setup,
282 start_date=datetime(2014, 3, 1))
283
284 groupby_series_nth = Benchmark("df[1].groupby(df[0]).nth(0)", setup,
285 start_date=datetime(2014, 3, 1))
286
287
288 #----------------------------------------------------------------------
289 # Sum booleans #2692
290
291 setup = common_setup + """
292 N = 500
293 df = DataFrame({'ii':range(N),'bb':[True for x in range(N)]})
294 """
295
296 groupby_sum_booleans = Benchmark("df.groupby('ii').sum()", setup)
297
298 #----------------------------------------------------------------------
299 # Transform testing
300
301 setup = common_setup + """
302 n_dates = 400
303 n_securities = 250
304 n_columns = 3
305 share_na = 0.1
306
307 dates = date_range('1997-12-31', periods=n_dates, freq='B')
308 dates = Index(map(lambda x: x.year * 10000 + x.month * 100 + x.day, dates))
309
310 secid_min = int('10000000', 16)
311 secid_max = int('F0000000', 16)
312 step = (secid_max - secid_min) // (n_securities - 1)
313 security_ids = map(lambda x: hex(x)[2:10].upper(), range(secid_min, secid_max + 1, step))
314
315 data_index = MultiIndex(levels=[dates.values, security_ids],
316 labels=[[i for i in xrange(n_dates) for _ in xrange(n_securities)], range(n_securities) * n_dates],
317 names=['date', 'security_id'])
318 n_data = len(data_index)
319
320 columns = Index(['factor{}'.format(i) for i in xrange(1, n_columns + 1)])
321
322 data = DataFrame(np.random.randn(n_data, n_columns), index=data_index, columns=columns)
323
324 step = int(n_data * share_na)
325 for column_index in xrange(n_columns):
326 index = column_index
327 while index < n_data:
328 data.set_value(data_index[index], columns[column_index], np.nan)
329 index += step
330
331 f_fillna = lambda x: x.fillna(method='pad')
332 """
333
334 groupby_transform = Benchmark("data.groupby(level='security_id').transform(f_fillna)", setup)
335
[end of vb_suite/groupby.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
bf3a9c6cb2a32815c3d3775eceb7ac32a6881700
|
cumsum sums the groupby column
It shouldn't sum the grouped-by column (in fact, that column should become the index, if the groupby uses as_index=True).
```
In [13]: df = pd.DataFrame([[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9]], columns=['A', 'B', 'C'])
In [14]: g = df.groupby('A')
In [16]: g.cumsum()
Out[16]:
   A   B   C
0  1   2 NaN
1  2 NaN   9
2  3   4   9
[3 rows x 3 columns]
```
This is a consequence of the operation being dispatched. Should be fixed up for 0.14, possibly along with some other whitelisted groupby functions.
|
What would be the expected output? Something like this?:
```
In [29]: g.cumsum()
Out[29]:
B C
A
0 1 2 NaN
1 NaN 9
2 3 4 9
```
And should it then also be the case for `cumcount` if `as_index=True`?
@jorisvandenbossche RE the index of cumcount, possibly yes, it should respect as_index... I think it's debatable whether this would ever be desired though... the main problem however is that it's slow (I don't think there is an efficient way to append an index to an index to make a MI) and this is the default. I had thought I had posted about this somewhere but can't find the issue...
I think so, though like I say I think we need to have a discussion about as_index (there are at least three different ways used in groupby atm)... I had a partially filled in issue about it from a week or so ago... :s will look at it again after the weekend and try to post it. It's kinda a mess and some conventions are of dubious value (e.g. that of head)
Yes, you did :-) Here: https://github.com/pydata/pandas/issues/4646#issuecomment-28942566
And indeed, for `cumcount` it may in some ways be more consistent to also return a MI, but I also think you mostly wouldn't want it.
I think we should add some UserWarnings in 0.14 about this kind of behaviour, linking to #5755.
@hayd you have anything in the works about this? push to 0.15 otherwise
I think I do, hope to get it in this week.
ping!
I think this is closed by #7000, maybe just add a test?
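A hedged sketch of the kind of regression test suggested above (the test name and exact assertion are assumptions, not the committed test):

```python
# Minimal regression-test sketch for the behaviour discussed in this thread:
# after the fix, the grouped-by column should no longer appear in the result.
import numpy as np
import pandas as pd


def test_cumsum_excludes_grouped_column():
    df = pd.DataFrame([[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9]],
                      columns=['A', 'B', 'C'])
    result = df.groupby('A').cumsum()
    assert list(result.columns) == ['B', 'C']
```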
|
2014-05-01T14:32:42Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -188,7 +188,7 @@ API Changes
validation warnings in :func:`read_csv`/:func:`read_table` (:issue:`6607`)
- Raise a ``TypeError`` when ``DataFrame`` is passed an iterator as the
``data`` argument (:issue:`5357`)
-- groupby will now not return the grouped column for non-cython functions (:issue:`5610`),
+- groupby will now not return the grouped column for non-cython functions (:issue:`5610`, :issue:`5614`),
as its already the index
Deprecations
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -124,7 +124,7 @@ API changes
g.nth(0, dropna='any') # similar to old behaviour
- groupby will now not return the grouped column for non-cython functions (:issue:`5610`),
+ groupby will now not return the grouped column for non-cython functions (:issue:`5610`, :issue:`5614`),
as its already the index
.. ipython:: python
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-17455
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MultiIndex is_monotonic_decreasing is incorrect
Or just isn't implemented properly yet?
This MultiIndex is identical on the second level, so its `is_monotonic_decreasing` should be the same as `get_level_values(0).is_monotonic_decreasing` (True)
```python
In [27]: idx = pd.MultiIndex([['baz', 'bar'], ['a', 'b']], labels=[[0, 1], [0, 0]])
In [28]: idx
Out[28]:
MultiIndex(levels=[['baz', 'bar'], ['a', 'b']],
labels=[[0, 1], [0, 0]])
In [29]: idx.is_monotonic_decreasing
Out[29]: False
In [30]: idx.get_level_values(0).is_monotonic_decreasing
Out[30]: True
```
They should both return True.
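A hedged illustration of the invariant the two properties should satisfy (this is not the pandas implementation; `labels=` is the keyword used by the pandas version in this report, spelled `codes=` in later versions):

```python
# Illustrative only: reversing the index swaps the two monotonicity
# properties, so after the fix both asserts should pass.
import pandas as pd

idx = pd.MultiIndex(levels=[['baz', 'bar'], ['a', 'b']], labels=[[0, 1], [0, 0]])

assert idx[::-1].is_monotonic_increasing  # reversed index is increasing
assert idx.is_monotonic_decreasing        # so the original should be decreasing
```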
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="https://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="https://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 https://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](https://pypi.python.org/pypi/pandas) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.7.0 or higher
164 - [python-dateutil](https://labix.org/python-dateutil): 1.5 or higher
165 - [pytz](https://pythonhosted.org/pytz)
166 - Needed for time zone support with ``pandas.date_range``
167
168 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
169 for recommended and optional dependencies.
170
171 ## Installation from sources
172 To install pandas from source you need Cython in addition to the normal
173 dependencies above. Cython can be installed from pypi:
174
175 ```sh
176 pip install cython
177 ```
178
179 In the `pandas` directory (same one where you found this file after
180 cloning the git repo), execute:
181
182 ```sh
183 python setup.py install
184 ```
185
186 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
187
188 ```sh
189 python setup.py develop
190 ```
191
192 Alternatively, you can use `pip` if you want all the dependencies pulled
193 in automatically (the `-e` option is for installing it in [development
194 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
195
196 ```sh
197 pip install -e .
198 ```
199
200 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
201
202 ## License
203 [BSD 3](LICENSE)
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
229
230 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
231
232 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
233
[end of README.md]
[start of pandas/core/tools/datetimes.py]
1 from datetime import datetime, timedelta, time
2 import numpy as np
3 from collections import MutableMapping
4
5 from pandas._libs import lib, tslib
6
7 from pandas.core.dtypes.common import (
8 _ensure_object,
9 is_datetime64_ns_dtype,
10 is_datetime64_dtype,
11 is_datetime64tz_dtype,
12 is_integer_dtype,
13 is_integer,
14 is_float,
15 is_list_like,
16 is_scalar,
17 is_numeric_dtype)
18 from pandas.core.dtypes.generic import (
19 ABCIndexClass, ABCSeries,
20 ABCDataFrame, ABCDateOffset)
21 from pandas.core.dtypes.missing import notna
22 from pandas.core import algorithms
23
24 import pandas.compat as compat
25
26 _DATEUTIL_LEXER_SPLIT = None
27 try:
28 # Since these are private methods from dateutil, it is safely imported
29 # here so in case this interface changes, pandas will just fallback
30 # to not using the functionality
31 from dateutil.parser import _timelex
32
33 if hasattr(_timelex, 'split'):
34 def _lexer_split_from_str(dt_str):
35 # The StringIO(str(_)) is for dateutil 2.2 compatibility
36 return _timelex.split(compat.StringIO(str(dt_str)))
37
38 _DATEUTIL_LEXER_SPLIT = _lexer_split_from_str
39 except (ImportError, AttributeError):
40 pass
41
42
43 def _infer_tzinfo(start, end):
44 def _infer(a, b):
45 tz = a.tzinfo
46 if b and b.tzinfo:
47 if not (tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo)):
48 raise AssertionError('Inputs must both have the same timezone,'
49 ' {timezone1} != {timezone2}'
50 .format(timezone1=tz, timezone2=b.tzinfo))
51 return tz
52
53 tz = None
54 if start is not None:
55 tz = _infer(start, end)
56 elif end is not None:
57 tz = _infer(end, start)
58 return tz
59
60
61 def _guess_datetime_format(dt_str, dayfirst=False,
62 dt_str_parse=compat.parse_date,
63 dt_str_split=_DATEUTIL_LEXER_SPLIT):
64 """
65 Guess the datetime format of a given datetime string.
66
67 Parameters
68 ----------
69 dt_str : string, datetime string to guess the format of
70 dayfirst : boolean, default False
71 If True parses dates with the day first, eg 20/01/2005
72 Warning: dayfirst=True is not strict, but will prefer to parse
73 with day first (this is a known bug).
74 dt_str_parse : function, defaults to `compat.parse_date` (dateutil)
75 This function should take in a datetime string and return
76 a `datetime.datetime` guess that the datetime string represents
77 dt_str_split : function, defaults to `_DATEUTIL_LEXER_SPLIT` (dateutil)
78 This function should take in a datetime string and return
79 a list of strings, the guess of the various specific parts
80 e.g. '2011/12/30' -> ['2011', '/', '12', '/', '30']
81
82 Returns
83 -------
84 ret : datetime format string (for `strftime` or `strptime`)
85 """
86 if dt_str_parse is None or dt_str_split is None:
87 return None
88
89 if not isinstance(dt_str, compat.string_types):
90 return None
91
92 day_attribute_and_format = (('day',), '%d', 2)
93
94 # attr name, format, padding (if any)
95 datetime_attrs_to_format = [
96 (('year', 'month', 'day'), '%Y%m%d', 0),
97 (('year',), '%Y', 0),
98 (('month',), '%B', 0),
99 (('month',), '%b', 0),
100 (('month',), '%m', 2),
101 day_attribute_and_format,
102 (('hour',), '%H', 2),
103 (('minute',), '%M', 2),
104 (('second',), '%S', 2),
105 (('microsecond',), '%f', 6),
106 (('second', 'microsecond'), '%S.%f', 0),
107 ]
108
109 if dayfirst:
110 datetime_attrs_to_format.remove(day_attribute_and_format)
111 datetime_attrs_to_format.insert(0, day_attribute_and_format)
112
113 try:
114 parsed_datetime = dt_str_parse(dt_str, dayfirst=dayfirst)
115 except:
116 # In case the datetime can't be parsed, its format cannot be guessed
117 return None
118
119 if parsed_datetime is None:
120 return None
121
122 try:
123 tokens = dt_str_split(dt_str)
124 except:
125 # In case the datetime string can't be split, its format cannot
126 # be guessed
127 return None
128
129 format_guess = [None] * len(tokens)
130 found_attrs = set()
131
132 for attrs, attr_format, padding in datetime_attrs_to_format:
133 # If a given attribute has been placed in the format string, skip
134 # over other formats for that same underlying attribute (IE, month
135 # can be represented in multiple different ways)
136 if set(attrs) & found_attrs:
137 continue
138
139 if all(getattr(parsed_datetime, attr) is not None for attr in attrs):
140 for i, token_format in enumerate(format_guess):
141 token_filled = tokens[i].zfill(padding)
142 if (token_format is None and
143 token_filled == parsed_datetime.strftime(attr_format)):
144 format_guess[i] = attr_format
145 tokens[i] = token_filled
146 found_attrs.update(attrs)
147 break
148
149 # Only consider it a valid guess if we have a year, month and day
150 if len(set(['year', 'month', 'day']) & found_attrs) != 3:
151 return None
152
153 output_format = []
154 for i, guess in enumerate(format_guess):
155 if guess is not None:
156 # Either fill in the format placeholder (like %Y)
157 output_format.append(guess)
158 else:
159 # Or just the token separate (IE, the dashes in "01-01-2013")
160 try:
161 # If the token is numeric, then we likely didn't parse it
162 # properly, so our guess is wrong
163 float(tokens[i])
164 return None
165 except ValueError:
166 pass
167
168 output_format.append(tokens[i])
169
170 guessed_format = ''.join(output_format)
171
172 # rebuild string, capturing any inferred padding
173 dt_str = ''.join(tokens)
174 if parsed_datetime.strftime(guessed_format) == dt_str:
175 return guessed_format
176
177
178 def _guess_datetime_format_for_array(arr, **kwargs):
179 # Try to guess the format based on the first non-NaN element
180 non_nan_elements = notna(arr).nonzero()[0]
181 if len(non_nan_elements):
182 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs)
183
184
185 def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
186 utc=None, box=True, format=None, exact=True,
187 unit=None, infer_datetime_format=False, origin='unix'):
188 """
189 Convert argument to datetime.
190
191 Parameters
192 ----------
193 arg : integer, float, string, datetime, list, tuple, 1-d array, Series
194
195 .. versionadded: 0.18.1
196
197 or DataFrame/dict-like
198
199 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
200
201 - If 'raise', then invalid parsing will raise an exception
202 - If 'coerce', then invalid parsing will be set as NaT
203 - If 'ignore', then invalid parsing will return the input
204 dayfirst : boolean, default False
205 Specify a date parse order if `arg` is str or its list-likes.
206 If True, parses dates with the day first, eg 10/11/12 is parsed as
207 2012-11-10.
208 Warning: dayfirst=True is not strict, but will prefer to parse
209 with day first (this is a known bug, based on dateutil behavior).
210 yearfirst : boolean, default False
211 Specify a date parse order if `arg` is str or its list-likes.
212
213 - If True parses dates with the year first, eg 10/11/12 is parsed as
214 2010-11-12.
215 - If both dayfirst and yearfirst are True, yearfirst is preceded (same
216 as dateutil).
217
218 Warning: yearfirst=True is not strict, but will prefer to parse
219 with year first (this is a known bug, based on dateutil behavior).
220
221 .. versionadded: 0.16.1
222
223 utc : boolean, default None
224 Return UTC DatetimeIndex if True (converting any tz-aware
225 datetime.datetime objects as well).
226 box : boolean, default True
227
228 - If True returns a DatetimeIndex
229 - If False returns ndarray of values.
230 format : string, default None
231 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse
232 all the way up to nanoseconds.
233 exact : boolean, True by default
234
235 - If True, require an exact format match.
236 - If False, allow the format to match anywhere in the target string.
237
238 unit : string, default 'ns'
239 unit of the arg (D,s,ms,us,ns) denote the unit, which is an
240 integer or float number. This will be based off the origin.
241 Example, with unit='ms' and origin='unix' (the default), this
242 would calculate the number of milliseconds to the unix epoch start.
243 infer_datetime_format : boolean, default False
244 If True and no `format` is given, attempt to infer the format of the
245 datetime strings, and if it can be inferred, switch to a faster
246 method of parsing them. In some cases this can increase the parsing
247 speed by ~5-10x.
248 origin : scalar, default is 'unix'
249 Define the reference date. The numeric values would be parsed as number
250 of units (defined by `unit`) since this reference date.
251
252 - If 'unix' (or POSIX) time; origin is set to 1970-01-01.
253 - If 'julian', unit must be 'D', and origin is set to beginning of
254 Julian Calendar. Julian day number 0 is assigned to the day starting
255 at noon on January 1, 4713 BC.
256 - If Timestamp convertible, origin is set to Timestamp identified by
257 origin.
258
259 .. versionadded: 0.20.0
260
261 Returns
262 -------
263 ret : datetime if parsing succeeded.
264 Return type depends on input:
265
266 - list-like: DatetimeIndex
267 - Series: Series of datetime64 dtype
268 - scalar: Timestamp
269
270 In case when it is not possible to return designated types (e.g. when
271 any element of input is before Timestamp.min or after Timestamp.max)
272 return will have datetime.datetime type (or corresponding array/Series).
273
274 Examples
275 --------
276
277 Assembling a datetime from multiple columns of a DataFrame. The keys can be
278 common abbreviations like ['year', 'month', 'day', 'minute', 'second',
279 'ms', 'us', 'ns'] or plurals of the same
280
281 >>> df = pd.DataFrame({'year': [2015, 2016],
282 'month': [2, 3],
283 'day': [4, 5]})
284 >>> pd.to_datetime(df)
285 0 2015-02-04
286 1 2016-03-05
287 dtype: datetime64[ns]
288
289 If a date does not meet the `timestamp limitations
290 <http://pandas.pydata.org/pandas-docs/stable/timeseries.html
291 #timeseries-timestamp-limits>`_, passing errors='ignore'
292 will return the original input instead of raising any exception.
293
294 Passing errors='coerce' will force an out-of-bounds date to NaT,
295 in addition to forcing non-dates (or non-parseable dates) to NaT.
296
297 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
298 datetime.datetime(1300, 1, 1, 0, 0)
299 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
300 NaT
301
302 Passing infer_datetime_format=True can often speed up parsing
303 if it's not exactly an ISO8601 format, but is in a regular format.
304
305 >>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000']*1000)
306
307 >>> s.head()
308 0 3/11/2000
309 1 3/12/2000
310 2 3/13/2000
311 3 3/11/2000
312 4 3/12/2000
313 dtype: object
314
315 >>> %timeit pd.to_datetime(s,infer_datetime_format=True)
316 100 loops, best of 3: 10.4 ms per loop
317
318 >>> %timeit pd.to_datetime(s,infer_datetime_format=False)
319 1 loop, best of 3: 471 ms per loop
320
321 Using a unix epoch time
322
323 >>> pd.to_datetime(1490195805, unit='s')
324 Timestamp('2017-03-22 15:16:45')
325 >>> pd.to_datetime(1490195805433502912, unit='ns')
326 Timestamp('2017-03-22 15:16:45.433502912')
327
328 .. warning:: For float arg, precision rounding might happen. To prevent
329 unexpected behavior use a fixed-width exact type.
330
331 Using a non-unix epoch origin
332
333 >>> pd.to_datetime([1, 2, 3], unit='D',
334 origin=pd.Timestamp('1960-01-01'))
335 0 1960-01-02
336 1 1960-01-03
337 2 1960-01-04
338
339 See also
340 --------
341 pandas.DataFrame.astype : Cast argument to a specified dtype.
342 pandas.to_timedelta : Convert argument to timedelta.
343 """
344 from pandas.core.indexes.datetimes import DatetimeIndex
345
346 tz = 'utc' if utc else None
347
348 def _convert_listlike(arg, box, format, name=None, tz=tz):
349
350 if isinstance(arg, (list, tuple)):
351 arg = np.array(arg, dtype='O')
352
353 # these are shortcutable
354 if is_datetime64tz_dtype(arg):
355 if not isinstance(arg, DatetimeIndex):
356 return DatetimeIndex(arg, tz=tz, name=name)
357 if utc:
358 arg = arg.tz_convert(None).tz_localize('UTC')
359 return arg
360
361 elif is_datetime64_ns_dtype(arg):
362 if box and not isinstance(arg, DatetimeIndex):
363 try:
364 return DatetimeIndex(arg, tz=tz, name=name)
365 except ValueError:
366 pass
367
368 return arg
369
370 elif unit is not None:
371 if format is not None:
372 raise ValueError("cannot specify both format and unit")
373 arg = getattr(arg, 'values', arg)
374 result = tslib.array_with_unit_to_datetime(arg, unit,
375 errors=errors)
376 if box:
377 if errors == 'ignore':
378 from pandas import Index
379 return Index(result)
380
381 return DatetimeIndex(result, tz=tz, name=name)
382 return result
383 elif getattr(arg, 'ndim', 1) > 1:
384 raise TypeError('arg must be a string, datetime, list, tuple, '
385 '1-d array, or Series')
386
387 arg = _ensure_object(arg)
388 require_iso8601 = False
389
390 if infer_datetime_format and format is None:
391 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst)
392
393 if format is not None:
394 # There is a special fast-path for iso8601 formatted
395 # datetime strings, so in those cases don't use the inferred
396 # format because this path makes process slower in this
397 # special case
398 format_is_iso8601 = _format_is_iso(format)
399 if format_is_iso8601:
400 require_iso8601 = not infer_datetime_format
401 format = None
402
403 try:
404 result = None
405
406 if format is not None:
407 # shortcut formatting here
408 if format == '%Y%m%d':
409 try:
410 result = _attempt_YYYYMMDD(arg, errors=errors)
411 except:
412 raise ValueError("cannot convert the input to "
413 "'%Y%m%d' date format")
414
415 # fallback
416 if result is None:
417 try:
418 result = tslib.array_strptime(arg, format, exact=exact,
419 errors=errors)
420 except tslib.OutOfBoundsDatetime:
421 if errors == 'raise':
422 raise
423 result = arg
424 except ValueError:
425 # if format was inferred, try falling back
426 # to array_to_datetime - terminate here
427 # for specified formats
428 if not infer_datetime_format:
429 if errors == 'raise':
430 raise
431 result = arg
432
433 if result is None and (format is None or infer_datetime_format):
434 result = tslib.array_to_datetime(
435 arg,
436 errors=errors,
437 utc=utc,
438 dayfirst=dayfirst,
439 yearfirst=yearfirst,
440 require_iso8601=require_iso8601
441 )
442
443 if is_datetime64_dtype(result) and box:
444 result = DatetimeIndex(result, tz=tz, name=name)
445 return result
446
447 except ValueError as e:
448 try:
449 values, tz = tslib.datetime_to_datetime64(arg)
450 return DatetimeIndex._simple_new(values, name=name, tz=tz)
451 except (ValueError, TypeError):
452 raise e
453
454 if arg is None:
455 return None
456
457 # handle origin
458 if origin == 'julian':
459
460 original = arg
461 j0 = tslib.Timestamp(0).to_julian_date()
462 if unit != 'D':
463 raise ValueError("unit must be 'D' for origin='julian'")
464 try:
465 arg = arg - j0
466 except:
467 raise ValueError("incompatible 'arg' type for given "
468 "'origin'='julian'")
469
470 # preemptively check this for a nice range
471 j_max = tslib.Timestamp.max.to_julian_date() - j0
472 j_min = tslib.Timestamp.min.to_julian_date() - j0
473 if np.any(arg > j_max) or np.any(arg < j_min):
474 raise tslib.OutOfBoundsDatetime(
475 "{original} is Out of Bounds for "
476 "origin='julian'".format(original=original))
477
478 elif origin not in ['unix', 'julian']:
479
480 # arg must be a numeric
481 original = arg
482 if not ((is_scalar(arg) and (is_integer(arg) or is_float(arg))) or
483 is_numeric_dtype(np.asarray(arg))):
484 raise ValueError(
485 "'{arg}' is not compatible with origin='{origin}'; "
486 "it must be numeric with a unit specified ".format(
487 arg=arg,
488 origin=origin))
489
490 # we are going to offset back to unix / epoch time
491 try:
492 offset = tslib.Timestamp(origin)
493 except tslib.OutOfBoundsDatetime:
494 raise tslib.OutOfBoundsDatetime(
495 "origin {origin} is Out of Bounds".format(origin=origin))
496 except ValueError:
497 raise ValueError("origin {origin} cannot be converted "
498 "to a Timestamp".format(origin=origin))
499
500 if offset.tz is not None:
501 raise ValueError(
502 "origin offset {} must be tz-naive".format(offset))
503 offset -= tslib.Timestamp(0)
504
505 # convert the offset to the unit of the arg
506 # this should be lossless in terms of precision
507 offset = offset // tslib.Timedelta(1, unit=unit)
508
509 # scalars & ndarray-like can handle the addition
510 if is_list_like(arg) and not isinstance(
511 arg, (ABCSeries, ABCIndexClass, np.ndarray)):
512 arg = np.asarray(arg)
513 arg = arg + offset
514
515 if isinstance(arg, tslib.Timestamp):
516 result = arg
517 elif isinstance(arg, ABCSeries):
518 from pandas import Series
519 values = _convert_listlike(arg._values, True, format)
520 result = Series(values, index=arg.index, name=arg.name)
521 elif isinstance(arg, (ABCDataFrame, MutableMapping)):
522 result = _assemble_from_unit_mappings(arg, errors=errors)
523 elif isinstance(arg, ABCIndexClass):
524 result = _convert_listlike(arg, box, format, name=arg.name)
525 elif is_list_like(arg):
526 result = _convert_listlike(arg, box, format)
527 else:
528 result = _convert_listlike(np.array([arg]), box, format)[0]
529
530 return result
531
532
533 # mappings for assembling units
534 _unit_map = {'year': 'year',
535 'years': 'year',
536 'month': 'month',
537 'months': 'month',
538 'day': 'day',
539 'days': 'day',
540 'hour': 'h',
541 'hours': 'h',
542 'minute': 'm',
543 'minutes': 'm',
544 'second': 's',
545 'seconds': 's',
546 'ms': 'ms',
547 'millisecond': 'ms',
548 'milliseconds': 'ms',
549 'us': 'us',
550 'microsecond': 'us',
551 'microseconds': 'us',
552 'ns': 'ns',
553 'nanosecond': 'ns',
554 'nanoseconds': 'ns'
555 }
556
557
558 def _assemble_from_unit_mappings(arg, errors):
559 """
560 assemble the unit specified fields from the arg (DataFrame)
561 Return a Series for actual parsing
562
563 Parameters
564 ----------
565 arg : DataFrame
566 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
567
568 - If 'raise', then invalid parsing will raise an exception
569 - If 'coerce', then invalid parsing will be set as NaT
570 - If 'ignore', then invalid parsing will return the input
571
572 Returns
573 -------
574 Series
575 """
576 from pandas import to_timedelta, to_numeric, DataFrame
577 arg = DataFrame(arg)
578 if not arg.columns.is_unique:
579 raise ValueError("cannot assemble with duplicate keys")
580
581 # replace passed unit with _unit_map
582 def f(value):
583 if value in _unit_map:
584 return _unit_map[value]
585
586 # m is case significant
587 if value.lower() in _unit_map:
588 return _unit_map[value.lower()]
589
590 return value
591
592 unit = {k: f(k) for k in arg.keys()}
593 unit_rev = {v: k for k, v in unit.items()}
594
595 # we require at least Ymd
596 required = ['year', 'month', 'day']
597 req = sorted(list(set(required) - set(unit_rev.keys())))
598 if len(req):
599 raise ValueError("to assemble mappings requires at least that "
600 "[year, month, day] be specified: [{required}] "
601 "is missing".format(required=','.join(req)))
602
603 # keys we don't recognize
604 excess = sorted(list(set(unit_rev.keys()) - set(_unit_map.values())))
605 if len(excess):
606 raise ValueError("extra keys have been passed "
607 "to the datetime assemblage: "
608 "[{excess}]".format(excess=','.join(excess)))
609
610 def coerce(values):
611 # we allow coercion if errors allows
612 values = to_numeric(values, errors=errors)
613
614 # prevent overflow in case of int8 or int16
615 if is_integer_dtype(values):
616 values = values.astype('int64', copy=False)
617 return values
618
619 values = (coerce(arg[unit_rev['year']]) * 10000 +
620 coerce(arg[unit_rev['month']]) * 100 +
621 coerce(arg[unit_rev['day']]))
622 try:
623 values = to_datetime(values, format='%Y%m%d', errors=errors)
624 except (TypeError, ValueError) as e:
625 raise ValueError("cannot assemble the "
626 "datetimes: {error}".format(error=e))
627
628 for u in ['h', 'm', 's', 'ms', 'us', 'ns']:
629 value = unit_rev.get(u)
630 if value is not None and value in arg:
631 try:
632 values += to_timedelta(coerce(arg[value]),
633 unit=u,
634 errors=errors)
635 except (TypeError, ValueError) as e:
636 raise ValueError("cannot assemble the datetimes [{value}]: "
637 "{error}".format(value=value, error=e))
638
639 return values
640
641
642 def _attempt_YYYYMMDD(arg, errors):
643 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
644 arg is passed in as an object dtype, but could really be ints/strings
645 with NaN-likes or floats (e.g. with nan)
646
647 Parameters
648 ----------
649 arg : passed value
650 errors : 'raise','ignore','coerce'
651 """
652
653 def calc(carg):
654 # calculate the actual result
655 carg = carg.astype(object)
656 parsed = lib.try_parse_year_month_day(carg / 10000,
657 carg / 100 % 100,
658 carg % 100)
659 return tslib.array_to_datetime(parsed, errors=errors)
660
661 def calc_with_mask(carg, mask):
662 result = np.empty(carg.shape, dtype='M8[ns]')
663 iresult = result.view('i8')
664 iresult[~mask] = tslib.iNaT
665 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)). \
666 astype('M8[ns]')
667 return result
668
669 # try intlike / strings that are ints
670 try:
671 return calc(arg.astype(np.int64))
672 except:
673 pass
674
675 # a float with actual np.nan
676 try:
677 carg = arg.astype(np.float64)
678 return calc_with_mask(carg, notna(carg))
679 except:
680 pass
681
682 # string with NaN-like
683 try:
684 mask = ~algorithms.isin(arg, list(tslib._nat_strings))
685 return calc_with_mask(arg, mask)
686 except:
687 pass
688
689 return None
690
691
692 def _format_is_iso(f):
693 """
694 Does format match the iso8601 set that can be handled by the C parser?
695 Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different
696 but must be consistent. Leading 0s in dates and times are optional.
697 """
698 iso_template = '%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S.%f'.format
699 excluded_formats = ['%Y%m%d', '%Y%m', '%Y']
700
701 for date_sep in [' ', '/', '\\', '-', '.', '']:
702 for time_sep in [' ', 'T']:
703 if (iso_template(date_sep=date_sep,
704 time_sep=time_sep
705 ).startswith(f) and f not in excluded_formats):
706 return True
707 return False
708
709
710 def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
711 """
712 Try hard to parse datetime string, leveraging dateutil plus some extra
713 goodies like quarter recognition.
714
715 Parameters
716 ----------
717 arg : compat.string_types
718 freq : str or DateOffset, default None
719 Helps with interpreting time string if supplied
720 dayfirst : bool, default None
721 If None uses default from print_config
722 yearfirst : bool, default None
723 If None uses default from print_config
724
725 Returns
726 -------
727 datetime, datetime/dateutil.parser._result, str
728 """
729 from pandas.core.config import get_option
730 if not isinstance(arg, compat.string_types):
731 return arg
732
733 if isinstance(freq, ABCDateOffset):
734 freq = freq.rule_code
735
736 if dayfirst is None:
737 dayfirst = get_option("display.date_dayfirst")
738 if yearfirst is None:
739 yearfirst = get_option("display.date_yearfirst")
740
741 return tslib.parse_datetime_string_with_reso(arg, freq=freq,
742 dayfirst=dayfirst,
743 yearfirst=yearfirst)
744
745
746 DateParseError = tslib.DateParseError
747 normalize_date = tslib.normalize_date
748
749 # Fixed time formats for time parsing
750 _time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p",
751 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"]
752
753
754 def _guess_time_format_for_array(arr):
755 # Try to guess the format based on the first non-NaN element
756 non_nan_elements = notna(arr).nonzero()[0]
757 if len(non_nan_elements):
758 element = arr[non_nan_elements[0]]
759 for time_format in _time_formats:
760 try:
761 datetime.strptime(element, time_format)
762 return time_format
763 except ValueError:
764 pass
765
766 return None
767
768
769 def to_time(arg, format=None, infer_time_format=False, errors='raise'):
770 """
771 Parse time strings to time objects using fixed strptime formats ("%H:%M",
772 "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p",
773 "%I%M%S%p")
774
775 Use infer_time_format if all the strings are in the same format to speed
776 up conversion.
777
778 Parameters
779 ----------
780 arg : string in time format, datetime.time, list, tuple, 1-d array, Series
781 format : str, default None
782 Format used to convert arg into a time object. If None, fixed formats
783 are used.
784 infer_time_format: bool, default False
785 Infer the time format based on the first non-NaN element. If all
786 strings are in the same format, this will speed up conversion.
787 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
788 - If 'raise', then invalid parsing will raise an exception
789 - If 'coerce', then invalid parsing will be set as None
790 - If 'ignore', then invalid parsing will return the input
791
792 Returns
793 -------
794 datetime.time
795 """
796 from pandas.core.series import Series
797
798 def _convert_listlike(arg, format):
799
800 if isinstance(arg, (list, tuple)):
801 arg = np.array(arg, dtype='O')
802
803 elif getattr(arg, 'ndim', 1) > 1:
804 raise TypeError('arg must be a string, datetime, list, tuple, '
805 '1-d array, or Series')
806
807 arg = _ensure_object(arg)
808
809 if infer_time_format and format is None:
810 format = _guess_time_format_for_array(arg)
811
812 times = []
813 if format is not None:
814 for element in arg:
815 try:
816 times.append(datetime.strptime(element, format).time())
817 except (ValueError, TypeError):
818 if errors == 'raise':
819 msg = ("Cannot convert {element} to a time with given "
820 "format {format}").format(element=element,
821 format=format)
822 raise ValueError(msg)
823 elif errors == 'ignore':
824 return arg
825 else:
826 times.append(None)
827 else:
828 formats = _time_formats[:]
829 format_found = False
830 for element in arg:
831 time_object = None
832 for time_format in formats:
833 try:
834 time_object = datetime.strptime(element,
835 time_format).time()
836 if not format_found:
837 # Put the found format in front
838 fmt = formats.pop(formats.index(time_format))
839 formats.insert(0, fmt)
840 format_found = True
841 break
842 except (ValueError, TypeError):
843 continue
844
845 if time_object is not None:
846 times.append(time_object)
847 elif errors == 'raise':
848 raise ValueError("Cannot convert arg {arg} to "
849 "a time".format(arg=arg))
850 elif errors == 'ignore':
851 return arg
852 else:
853 times.append(None)
854
855 return times
856
857 if arg is None:
858 return arg
859 elif isinstance(arg, time):
860 return arg
861 elif isinstance(arg, Series):
862 values = _convert_listlike(arg._values, format)
863 return Series(values, index=arg.index, name=arg.name)
864 elif isinstance(arg, ABCIndexClass):
865 return _convert_listlike(arg, format)
866 elif is_list_like(arg):
867 return _convert_listlike(arg, format)
868
869 return _convert_listlike(np.array([arg]), format)[0]
870
871
872 def format(dt):
873 """Returns date in YYYYMMDD format."""
874 return dt.strftime('%Y%m%d')
875
876
877 OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0)
878
879
880 def ole2datetime(oledt):
881 """function for converting excel date to normal date format"""
882 val = float(oledt)
883
884 # Excel has a bug where it thinks the date 2/29/1900 exists
885 # we just reject any date before 3/1/1900.
886 if val < 61:
887 msg = "Value is outside of acceptable range: {value}".format(value=val)
888 raise ValueError(msg)
889
890 return OLE_TIME_ZERO + timedelta(days=val)
891
[end of pandas/core/tools/datetimes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
6630c4eddf2762d519507304ad73de189a7e0c6c
|
MultiIndex is_monotonic_decreasing is incorrect
Or just isn't implemented properly yet?
This MultiIndex has identical values in the second level, so its `is_monotonic_decreasing` should be the same as `get_level_values(0).is_monotonic_decreasing` (True)
```python
In [27]: idx = pd.MultiIndex([['baz', 'bar'], ['a', 'b']], labels=[[0, 1], [0, 0]])
In [28]: idx
Out[28]:
MultiIndex(levels=[['baz', 'bar'], ['a', 'b']],
           labels=[[0, 1], [0, 0]])
In [29]: idx.is_monotonic_decreasing
Out[29]: False
In [30]: idx.get_level_values(0).is_monotonic_decreasing
Out[30]: True
```
They should both return True.
|
Only is_monotonic_increasing is implemented. It needs the same impl. (but it's not used anywhere ATM).
`is_monotonic_increasing` is based on `numpy.lexsort`, which doesn't seem to support sorting in descending order.
Can we define `is_monotonic_decreasing` in terms of increasing? Something like
```
def is_monotonic_decreasing(self):
    return not self.is_strictly_monotonic_increasing or idx.nunique() == 1
```
not sure if that will work. Looks like `nunique` may not be implemented on MI.
I don't think that will work in general. You can't get monotonic decreasing from monotonic increasing in all scenarios, unless there's an assumption that the data is monotonic/sorted in some regard. For example, a MultiIndex generated by `[(3, 3), (1, 1), (2, 2)]` is not monotonic decreasing, but will return `True`.
I think you can implement this similarly to `is_monotonic_increasing`. It's true that `numpy.lexsort` only seems to support ordering in ascending order, but you can get around this. My initial thought was to just reverse the output of `numpy.lexsort`, but that messes up the order if you have non-unique indices, e.g. `[(3, 3), (2, 2), (2, 2), (1, 1)]`. The fix would be to add a fake unique decreasing level to the input of `numpy.lexsort` to force uniqueness. Looking at the source code for `is_monotonic_increasing`, I think you'd just need to make changes along the lines of `values = [np.arange(len(idx) - 1, -1, -1)] + [existing comprehension]` followed by `sort_order = np.lexsort(values)[::-1]`. And in the `except` clause `pd.Index(idx.values).is_monotonic_decreasing`.
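Roughly, that suggestion as a standalone sketch (function and variable names here are assumed for illustration; `idx` is the MultiIndex in question, and this is not the final implementation):
```python
import numpy as np
import pandas as pd

def is_monotonic_decreasing(idx):
    # add a strictly decreasing tiebreaker as the least-significant lexsort key
    # so that runs of equal labels come out in reversed order, then check that
    # reversing the lexsort result yields an increasing permutation
    values = [np.arange(len(idx) - 1, -1, -1)] + [
        idx.get_level_values(i).values for i in reversed(range(idx.nlevels))
    ]
    try:
        sort_order = np.lexsort(values)[::-1]
        return pd.Index(sort_order).is_monotonic_increasing
    except TypeError:
        # mixed types that np.lexsort cannot handle
        return pd.Index(idx.values).is_monotonic_decreasing
```
On the index from the issue above this returns True, as expected.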
|
2017-09-06T22:46:15Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -114,7 +114,7 @@ Other Enhancements
- :func:`pd.read_sas()` now recognizes much more of the most frequently used date (datetime) formats in SAS7BDAT files (:issue:`15871`).
- :func:`DataFrame.items` and :func:`Series.items` is now present in both Python 2 and 3 and is lazy in all cases (:issue:`13918`, :issue:`17213`)
- :func:`Styler.where` has been implemented. It is as a convenience for :func:`Styler.applymap` and enables simple DataFrame styling on the Jupyter notebook (:issue:`17474`).
-
+- :func:`MultiIndex.is_monotonic_decreasing` has been implemented. Previously returned ``False`` in all cases. (:issue:`16554`)
.. _whatsnew_0210.api_breaking:
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -706,13 +706,14 @@ def is_monotonic_increasing(self):
# we have mixed types and np.lexsort is not happy
return Index(self.values).is_monotonic
- @property
+ @cache_readonly
def is_monotonic_decreasing(self):
"""
return if the index is monotonic decreasing (only equal or
decreasing) values.
"""
- return False
+ # monotonic decreasing if and only if reverse is monotonic increasing
+ return self[::-1].is_monotonic_increasing
@cache_readonly
def is_unique(self):
</patch>
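For reference, a quick check of the reversed-index rule used in this patch against the example from the issue (the index is built with `from_tuples` here so the snippet runs on any pandas version):
```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([('baz', 'a'), ('bar', 'a')])

# the fix: monotonic decreasing if and only if the reversed index
# is monotonic increasing
assert idx[::-1].is_monotonic_increasing
assert idx.get_level_values(0).is_monotonic_decreasing
```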
|
[]
|
[]
| |||
google__jax-735
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
lax_scan/lattice_join shape inconsistencies
For certain native scalar types, the newly implemented lattice_join code doesn't seem to be able to match the types. This is a follow-up of #650
``` python
def lattice_join(x, y):
  if x is None:
    return y
  elif y is None:
    return x
  elif isinstance(x, type(y)):
    return y.join(x)
  elif isinstance(y, type(x)):
    return x.join(y)
  else:
>   raise TypeError((x, y))
E   TypeError: (ShapedArray(int64[]), ())
```
Code to reproduce:
``` python
import unittest
import numpy as onp
import jax.numpy as np
import functools
import jax
from jax.config import config; config.update("jax_enable_x64", True)
from jax.experimental import optimizers
from jax.test_util import check_grads


def harmonic_bond(conf, params):
    return np.sum(conf * params)


class TestOptimizeGeometry(unittest.TestCase):

    def test_case(self):
        opt_init, opt_update, get_params = optimizers.sgd(5e-2)

        x0 = onp.array([0.5], dtype=onp.float64)
        params = onp.array([0.3], dtype=onp.float64)

        def minimize_structure(test_params):
            energy_fn = functools.partial(harmonic_bond, params=test_params)
            grad_fn = jax.jit(jax.grad(energy_fn, argnums=(0,)))
            opt_state = opt_init(x0)

            # use lax.scan, way faster compilation times.
            def apply_carry(carry, _):
                i, x = carry
                g = grad_fn(get_params(x))[0]
                new_state = opt_update(i, g, x)
                new_carry = (i+1, new_state)
                return new_carry, _

            carry_final, _ = jax.lax.scan(apply_carry, (np.array(0), opt_state), np.zeros((75, 0)))
            trip, opt_final = carry_final
            assert trip == 75
            return opt_final

        initial_params = 0.5
        minimize_structure(initial_params)

        def loss(test_params):
            opt_final = minimize_structure(test_params)
            return 1.0-opt_final

        loss_opt_init, loss_opt_update, loss_get_params = optimizers.sgd(5e-2)
        loss_grad_fn = jax.grad(loss, argnums=(0,))
        loss_opt_state = loss_opt_init(initial_params)
        loss_params = loss_get_params(loss_opt_state)
        loss_grad = loss_grad_fn(loss_params)[0]
```
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png" alt="logo"></img>
3 </div>
4
5 # JAX: Autograd and XLA [](https://travis-ci.org/google/jax)
6
7 [**Reference docs**](https://jax.readthedocs.io/en/latest/)
8 | [**Install guide**](#installation)
9 | [**Quickstart**](#quickstart-colab-in-the-cloud)
10
11 JAX is [Autograd](https://github.com/hips/autograd) and
12 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md),
13 brought together for high-performance machine learning research.
14
15 With its updated version of [Autograd](https://github.com/hips/autograd),
16 JAX can automatically differentiate native
17 Python and NumPy functions. It can differentiate through loops, branches,
18 recursion, and closures, and it can take derivatives of derivatives of
19 derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
20 via [`grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,
21 and the two can be composed arbitrarily to any order.
22
23 What’s new is that JAX uses
24 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md)
25 to compile and run your NumPy programs on GPUs and TPUs. Compilation happens
26 under the hood by default, with library calls getting just-in-time compiled and
27 executed. But JAX also lets you just-in-time compile your own Python functions
28 into XLA-optimized kernels using a one-function API,
29 [`jit`](#compilation-with-jit). Compilation and automatic differentiation can be
30 composed arbitrarily, so you can express sophisticated algorithms and get
31 maximal performance without leaving Python.
32
33 Dig a little deeper, and you'll see that JAX is really an extensible system for
34 [composable function transformations](#transformations). Both
35 [`grad`](#automatic-differentiation-with-grad) and [`jit`](#compilation-with-jit)
36 are instances of such transformations. Another is [`vmap`](#auto-vectorization-with-vmap)
37 for automatic vectorization, with more to come.
38
39 This is a research project, not an official Google product. Expect bugs and
40 [sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
41 Please help by trying it out, [reporting
42 bugs](https://github.com/google/jax/issues), and letting us know what you
43 think!
44
45 ```python
46 import jax.numpy as np
47 from jax import grad, jit, vmap
48 from functools import partial
49
50 def predict(params, inputs):
51 for W, b in params:
52 outputs = np.dot(inputs, W) + b
53 inputs = np.tanh(outputs)
54 return outputs
55
56 def logprob_fun(params, inputs, targets):
57 preds = predict(params, inputs)
58 return np.sum((preds - targets)**2)
59
60 grad_fun = jit(grad(logprob_fun)) # compiled gradient evaluation function
61 perex_grads = jit(vmap(grad_fun, in_axes=(None, 0, 0))) # fast per-example grads
62 ```
63
64 JAX started as a research project by [Matt Johnson](https://github.com/mattjj),
65 [Roy Frostig](https://github.com/froystig), [Dougal
66 Maclaurin](https://github.com/dougalm), and [Chris
67 Leary](https://github.com/learyg), and is now developed [in the
68 open](https://github.com/google/jax) by a growing number of
69 [contributors](#contributors).
70
71 ### Contents
72 * [Quickstart: Colab in the Cloud](#quickstart-colab-in-the-cloud)
73 * [Installation](#installation)
74 * [Running the tests](#running-the-tests)
75 * [Reference documentation](#reference-documentation)
76 * [A brief tour](#a-brief-tour)
77 * [What's supported](#whats-supported)
78 * [Transformations](#transformations)
79 * [Random numbers are different](#random-numbers-are-different)
80 * [Mini-libraries](#mini-libraries)
81 * [How it works](#how-it-works)
82 * [What we're working on](#what-were-working-on)
83 * [Current gotchas](#current-gotchas)
84
85 ## Quickstart: Colab in the Cloud
86 Jump right in using a notebook in your browser, connected to a Google Cloud GPU. Here are some starter notebooks:
87 - [The basics: NumPy on accelerators, `grad` for differentiation, `jit` for compilation, and `vmap` for vectorization](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb)
88 - [Training a Simple Neural Network, with PyTorch Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_and_data_loading.ipynb)
89 - [Training a Simple Neural Network, with TensorFlow Dataset Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_with_tfds_data.ipynb)
90
91 And for a deeper dive into JAX:
92 - [Common gotchas and sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb)
93 - [The Autodiff Cookbook, Part 1: easy and powerful automatic differentiation in JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/autodiff_cookbook.ipynb)
94 - [Directly using XLA in Python](https://colab.research.google.com/github/google/jax/blob/master/notebooks/XLA_in_Python.ipynb)
95 - [MAML Tutorial with JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/maml.ipynb).
96
97 ## Installation
98 JAX is written in pure Python, but it depends on XLA, which needs to be compiled
99 and installed as the `jaxlib` package. Use the following instructions to build
100 JAX from source or install a binary package with pip.
101
102 We support installing or building `jaxlib` on Linux and macOS platforms, but not
103 Windows. We're not currently working on Windows support, but contributions are
104 welcome (see [#438](https://github.com/google/jax/issues/438)).
105
106 ### Building JAX from source
107 First, obtain the JAX source code, and make sure `scipy` is installed.
108
109 ```bash
110 git clone https://github.com/google/jax
111 cd jax
112 pip install scipy
113 ```
114
115 If you are building on a Mac, make sure XCode and the XCode command line tools
116 are installed.
117
118 To build XLA with CUDA support, you can run
119
120 ```bash
121 python build/build.py --enable_cuda
122 pip install -e build # install jaxlib (includes XLA)
123 pip install -e . # install jax (pure Python)
124 ```
125
126 See `python build/build.py --help` for configuration options, including ways to
127 specify the paths to CUDA and CUDNN, which you must have installed. The build
128 also depends on NumPy, and a compiler toolchain corresponding to that of
129 Ubuntu 16.04 or newer.
130
131 To build XLA without CUDA GPU support (CPU only), drop the `--enable_cuda`:
132
133 ```bash
134 python build/build.py
135 pip install -e build # install jaxlib (includes XLA)
136 pip install -e . # install jax
137 ```
138
139 To upgrade to the latest version from GitHub, just run `git pull` from the JAX
140 repository root, and rebuild by running `build.py` if necessary. You shouldn't have
141 to reinstall because `pip install -e` sets up symbolic links from site-packages
142 into the repository.
143
144 ### pip installation
145
146 Installing XLA with prebuilt binaries via `pip` is still experimental,
147 especially with GPU support. Let us know on [the issue
148 tracker](https://github.com/google/jax/issues) if you run into any errors.
149
150 To install a CPU-only version, which might be useful for doing local
151 development on a laptop, you can run
152
153 ```bash
154 pip install --upgrade jax jaxlib # CPU-only version
155 ```
156
157 If you want to install JAX with both CPU and GPU support, using existing CUDA
158 and CUDNN7 installations on your machine (for example, preinstalled on your
159 cloud VM), you can run
160
161 ```bash
162 # install jaxlib
163 PYTHON_VERSION=cp27 # alternatives: cp27, cp35, cp36, cp37
164 CUDA_VERSION=cuda92 # alternatives: cuda90, cuda92, cuda100
165 PLATFORM=linux_x86_64 # alternatives: linux_x86_64
166 BASE_URL='https://storage.googleapis.com/jax-wheels'
167 pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.15-$PYTHON_VERSION-none-$PLATFORM.whl
168
169 pip install --upgrade jax # install jax
170 ```
171
172 The library package name must correspond to the version of the existing CUDA
173 installation you want to use, with `cuda100` for CUDA 10.0, `cuda92` for CUDA
174 9.2, and `cuda90` for CUDA 9.0. To find your CUDA and CUDNN versions, you can
175 run commands like these, depending on your CUDNN install path:
176
177 ```bash
178 nvcc --version
179 grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h # might need different path
180 ```
181
182 The Python version must match your Python interpreter. There are prebuilt wheels
183 for Python 2.7, 3.6, and 3.7; for anything else, you must build from source.
184
185
186 ## Running the tests
187
188 To run all the JAX tests, we recommend using `pytest-xdist`, which can run tests in
189 parallel. First, install `pytest-xdist` by running `pip install pytest-xdist`.
190 Then, from the repository root directory run
191
192 ```bash
193 pytest -n auto tests
194 ```
195
196 JAX generates test cases combinatorially, and you can control the number of
197 cases that are generated and checked for each test (default 10):
198
199 ```bash
200 JAX_NUM_GENERATED_CASES=100 pytest -n auto tests
201 ```
202
203 You can run a more specific set of tests using
204 [`pytest`](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests)'s
205 built-in selection mechanisms, or alternatively you can run a specific test
206 file directly to see more detailed information about the cases being run:
207
208 ```bash
209 python tests/lax_numpy_test.py --num_generated_cases=5
210 ```
211
212 ## Reference documentation
213
214 For details about the JAX API, see the
215 [reference documentation](https://jax.readthedocs.io/).
216
217 ## A brief tour
218
219 ```python
220 In [1]: import jax.numpy as np
221
222 In [2]: from jax import random
223
224 In [3]: key = random.PRNGKey(0)
225
226 In [4]: x = random.normal(key, (5000, 5000))
227
228 In [5]: print(np.dot(x, x.T) / 2) # fast!
229 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
230
231 In [6]: print(np.dot(x, x.T) / 2) # even faster!
232 # JIT-compiled code is cached and reused in the 2nd call
233 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
234 ```
235
236 What’s happening behind-the-scenes is that JAX is using XLA to just-in-time
237 (JIT) compile and execute these individual operations on the GPU. First the
238 `random.normal` call is compiled and the array referred to by `x` is generated
239 on the GPU. Next, each function called on `x` (namely `transpose`, `dot`, and
240 `divide`) is individually JIT-compiled and executed, each keeping its results on
241 the device.
242 It’s only when a value needs to be printed, plotted, saved, or passed into a raw
243 NumPy function that a read-only copy of the value is brought back to the host as
244 an ndarray and cached. The second call to `dot` is faster because the
245 JIT-compiled code is cached and reused, saving the compilation time.
246
247 The fun really starts when you use `grad` for automatic differentiation and
248 `jit` to compile your own functions end-to-end. Here’s a more complete toy
249 example:
250
251 ```python
252 from jax import grad, jit
253 import jax.numpy as np
254
255 def sigmoid(x):
256 return 0.5 * (np.tanh(x / 2.) + 1)
257
258 # Outputs probability of a label being true according to logistic model.
259 def logistic_predictions(weights, inputs):
260 return sigmoid(np.dot(inputs, weights))
261
262 # Training loss is the negative log-likelihood of the training labels.
263 def loss(weights, inputs, targets):
264 preds = logistic_predictions(weights, inputs)
265 label_logprobs = np.log(preds) * targets + np.log(1 - preds) * (1 - targets)
266 return -np.sum(label_logprobs)
267
268 # Build a toy dataset.
269 inputs = np.array([[0.52, 1.12, 0.77],
270 [0.88, -1.08, 0.15],
271 [0.52, 0.06, -1.30],
272 [0.74, -2.49, 1.39]])
273 targets = np.array([True, True, False, True])
274
275 # Define a compiled function that returns gradients of the training loss
276 training_gradient_fun = jit(grad(loss))
277
278 # Optimize weights using gradient descent.
279 weights = np.array([0.0, 0.0, 0.0])
280 print("Initial loss: {:0.2f}".format(loss(weights, inputs, targets)))
281 for i in range(100):
282 weights -= 0.1 * training_gradient_fun(weights, inputs, targets)
283
284 print("Trained loss: {:0.2f}".format(loss(weights, inputs, targets)))
285 ```
286
287 To see more, check out the [quickstart
288 notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb),
289 a [simple MNIST classifier
290 example](https://github.com/google/jax/blob/master/examples/mnist_classifier.py)
291 and the rest of the [JAX
292 examples](https://github.com/google/jax/blob/master/examples/).
293
294 ## What's supported
295
296 If you’re using JAX just as an accelerator-backed NumPy, without using `grad` or
297 `jit` in your code, then in principle there are no constraints, though some
298 NumPy functions haven’t been implemented yet. A list of supported functions can
299 be found in the [reference documentation](https://jax.readthedocs.io/).
300
301 Generally using `np.dot(A, B)` is
302 better than `A.dot(B)` because the former gives us more opportunities to run the
303 computation on the device. NumPy also does a lot of work to cast any array-like
304 function arguments to arrays, as in `np.sum([x, y])`, while `jax.numpy`
305 typically requires explicit casting of array arguments, like
306 `np.sum(np.array([x, y]))`.
307
308 For automatic differentiation with `grad`, JAX has the same restrictions
309 as [Autograd](https://github.com/hips/autograd). Specifically, differentiation
310 works with indexing (`x = A[i, j, :]`) but not indexed assignment (`A[i, j] =
311 x`) or indexed in-place updating (`A[i] += b`). You can use lists, tuples, and
312 dicts freely: JAX doesn't even see them. Using `np.dot(A, B)` rather than
313 `A.dot(B)` is required for automatic differentiation when `A` is a raw ndarray.
314
315 For compiling your own functions with `jit` there are a few more requirements.
316 Because `jit` aims to specialize Python functions only on shapes and dtypes
317 during tracing, rather than on concrete values, Python control flow that depends
318 on concrete values won’t be able to execute and will instead raise an error. If
319 you want compiled control flow, use structured control flow primitives like
320 lax.cond and lax.while_loop. Some indexing features, like slice-based indexing
321 `A[i:i+5]` for argument-dependent `i`, or boolean-based indexing `A[bool_ind]`
322 for argument-dependent `bool_ind`, produce abstract values of unknown shape and
323 are thus unsupported in `jit` functions.
324
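As a minimal sketch of those structured control flow primitives, here is `lax.while_loop` (signature `while_loop(cond_fun, body_fun, init_val)`), which stages the loop out for compilation instead of unrolling it in Python:

```python
import jax.numpy as np
from jax import jit, lax

@jit
def count_down(x):
    # cond_fun and body_fun both act on the loop carry value
    return lax.while_loop(lambda v: v > 0, lambda v: v - 1, x)

print(count_down(np.array(5)))  # 0
```
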
325 In general, JAX is intended to be used with a functional style of Python
326 programming. Functions passed to transformations like `grad` and `jit` are
327 expected to be free of side-effects. You can write print statements for
328 debugging but they may only be executed once if they're under a `jit` decorator.
329
330 > TLDR **Do use**
331 >
332 > * Functional programming
333 > * [Many](https://jax.readthedocs.io/en/latest/jax.numpy.html) of NumPy’s
334 > functions (help us add more!)
335 > * [Some](https://jax.readthedocs.io/en/latest/jax.scipy.html) SciPy functions
336 > * Indexing and slicing of arrays like `x = A[[5, 1, 7], :, 2:4]`
337 > * Explicit array creation from lists like `A = np.array([x, y])`
338 >
339 > **Don’t use**
340 >
341 > * Assignment into arrays like `A[0, 0] = x`
342 > * Implicit casting to arrays like `np.sum([x, y])` (use `np.sum(np.array([x,
343 > y]))` instead)
344 > * `A.dot(B)` method syntax for functions of more than one argument (use
345 > `np.dot(A, B)` instead)
346 > * Side-effects like mutation of arguments or mutation of global variables
347 > * The `out` argument of NumPy functions
348 > * Dtype casting like `np.float64(x)` (use `x.astype('float64')` or
349 > `x.astype(np.float64)` instead).
350 >
351 > **For jit functions, also don’t use**
352 >
353 > * Control flow based on dynamic values `if x > 0: ...`. Control flow based
354 > on shapes is fine: `if x.shape[0] > 2: ...` and `for subarr in array`.
355 > * Slicing `A[i:i+5]` for dynamic index `i` (use `lax.dynamic_slice` instead)
356 > or boolean indexing `A[bool_ind]` for traced values `bool_ind`.
357
358 You should get loud errors if your code violates any of these.
359
360 ## Transformations
361
362 At its core, JAX is an extensible system for transforming numerical functions.
363 We currently expose three important transformations: `grad`, `jit`, and `vmap`.
364
365 ### Automatic differentiation with grad
366
367 JAX has roughly the same API as [Autograd](https://github.com/hips/autograd).
368 The most popular function is `grad` for reverse-mode gradients:
369
370 ```python
371 from jax import grad
372 import jax.numpy as np
373
374 def tanh(x): # Define a function
375 y = np.exp(-2.0 * x)
376 return (1.0 - y) / (1.0 + y)
377
378 grad_tanh = grad(tanh) # Obtain its gradient function
379 print(grad_tanh(1.0)) # Evaluate it at x = 1.0
380 # prints 0.41997434161402603
381 ```
382
383 You can differentiate to any order with `grad`.
384
385 For more advanced autodiff, you can use `jax.vjp` for reverse-mode
386 vector-Jacobian products and `jax.jvp` for forward-mode Jacobian-vector
387 products. The two can be composed arbitrarily with one another, and with other
388 JAX transformations. Here's one way to compose
389 those to make a function that efficiently computes full Hessian matrices:
390
391 ```python
392 from jax import jit, jacfwd, jacrev
393 def hessian(fun):
394 return jit(jacfwd(jacrev(fun)))
395 ```
396
397 As with Autograd, you're free to use differentiation with Python control
398 structures:
399
400 ```python
401 def abs_val(x):
402 if x > 0:
403 return x
404 else:
405 return -x
406
407 abs_val_grad = grad(abs_val)
408 print(abs_val_grad(1.0)) # prints 1.0
409 print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)
410 ```
411
412 ### Compilation with jit
413
414 You can use XLA to compile your functions end-to-end with `jit`, used either as
415 an `@jit` decorator or as a higher-order function.
416
417 ```python
418 import jax.numpy as np
419 from jax import jit
420
421 def slow_f(x):
422 # Element-wise ops see a large benefit from fusion
423 return x * x + x * 2.0
424
425 x = np.ones((5000, 5000))
426 fast_f = jit(slow_f)
427 %timeit -n10 -r3 fast_f(x) # ~ 4.5 ms / loop on Titan X
428 %timeit -n10 -r3 slow_f(x) # ~ 14.5 ms / loop (also on GPU via JAX)
429 ```
430
431 You can mix `jit` and `grad` and any other JAX transformation however you like.
432
433 ### Auto-vectorization with vmap
434
435 `vmap` is the vectorizing map.
436 It has the familiar semantics of mapping a function along array axes, but
437 instead of keeping the loop on the outside, it pushes the loop down into a
438 function’s primitive operations for better performance.
439
440 Using `vmap` can save you from having to carry around batch dimensions in your
441 code. For example, consider this simple *unbatched* neural network prediction
442 function:
443
444 ```python
445 def predict(params, input_vec):
446 assert input_vec.ndim == 1
447 for W, b in params:
448 output_vec = np.dot(W, input_vec) + b # `input_vec` on the right-hand side!
449 input_vec = np.tanh(output_vec)
450 return output_vec
451 ```
452
453 We often instead write `np.dot(inputs, W)` to allow for a batch dimension on the
454 left side of `inputs`, but we’ve written this particular prediction function to
455 apply only to single input vectors. If we wanted to apply this function to a
456 batch of inputs at once, semantically we could just write
457
458 ```python
459 from functools import partial
460 predictions = np.stack(list(map(partial(predict, params), input_batch)))
461 ```
462
463 But pushing one example through the network at a time would be slow! It’s better
464 to vectorize the computation, so that at every layer we’re doing matrix-matrix
465 multiplies rather than matrix-vector multiplies.
466
467 The `vmap` function does that transformation for us. That is, if we write
468
469 ```python
470 from jax import vmap
471 predictions = vmap(partial(predict, params))(input_batch)
472 # or, alternatively
473 predictions = vmap(predict, in_axes=(None, 0))(params, input_batch)
474 ```
475
476 then the `vmap` function will push the outer loop inside the function, and our
477 machine will end up executing matrix-matrix multiplications exactly as if we’d
478 done the batching by hand.
479
480 It’s easy enough to manually batch a simple neural network without `vmap`, but
481 in other cases manual vectorization can be impractical or impossible. Take the
482 problem of efficiently computing per-example gradients: that is, for a fixed set
483 of parameters, we want to compute the gradient of our loss function evaluated
484 separately at each example in a batch. With `vmap`, it’s easy:
485
486 ```python
487 per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)
488 ```
489
490 Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other
491 JAX transformation! We use `vmap` with both forward- and reverse-mode automatic
492 differentiation for fast Jacobian and Hessian matrix calculations in
493 `jax.jacfwd`, `jax.jacrev`, and `jax.hessian`.
494
495
496 ## Random numbers are different
497
498 JAX needs a [functional pseudo-random number generator (PRNG) system](design_notes/prng.md) to provide
499 reproducible results invariant to compilation boundaries and backends, while
500 also maximizing performance by enabling vectorized generation and
501 parallelization across random calls. The `numpy.random` library doesn’t have
502 those properties. The `jax.random` library meets those needs: it’s functionally
503 pure, but it doesn’t require you to pass stateful random objects back out of
504 every function.
505
506 The `jax.random` library uses
507 [count-based PRNGs](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
508 and a functional array-oriented
509 [splitting model](http://publications.lib.chalmers.se/records/fulltext/183348/local_183348.pdf).
510 To generate random values, you call a function like `jax.random.normal` and give
511 it a PRNG key:
512
513 ```python
514 import jax.random as random
515
516 key = random.PRNGKey(0)
517 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
518 ```
519
520 If we make the same call again with the same key, we get the same values:
521
522 ```python
523 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
524 ```
525
526 The key never gets updated. So how do we get fresh random values? We use
527 `jax.random.split` to create new keys from existing ones. A common pattern is to
528 split off a new key for every function call that needs random values:
529
530 ```python
531 key = random.PRNGKey(0)
532
533 key, subkey = random.split(key)
534 print(random.normal(subkey, shape=(3,))) # [ 1.1378783 -1.22095478 -0.59153646]
535
536 key, subkey = random.split(key)
537 print(random.normal(subkey, shape=(3,))) # [-0.06607265 0.16676566 1.17800343]
538 ```
539
540 By splitting the PRNG key, not only do we avoid having to thread random states
541 back out of every function call, but also we can generate multiple random arrays
542 in parallel because we can avoid unnecessary sequential dependencies.
543
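For example, one common pattern (sketched here) is to split once into a batch of keys and `vmap` over them to draw independent samples in one shot:

```python
from jax import vmap
import jax.random as random

key = random.PRNGKey(0)
subkeys = random.split(key, 10)                       # ten independent keys
samples = vmap(lambda k: random.normal(k, (3,)))(subkeys)
print(samples.shape)                                  # (10, 3)
```
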
544 There's a gotcha here, which is that it's easy to unintentionally reuse a key
545 without splitting. We intend to add a check for this (a sort of dynamic linear
546 typing) but for now it's something to be careful about.
547
548 For more detailed information on the design and the reasoning behind it, see the
549 [PRNG design doc](design_notes/prng.md).
550
551
552 ## Mini-libraries
553
554 JAX provides some small, experimental libraries for machine learning. These
555 libraries are in part about providing tools and in part about serving as
556 examples for how to build such libraries using JAX. Each one is only a few
557 hundred lines of code, so take a look inside and adapt them as you need!
558
559 ### Neural-net building with Stax
560
561 **Stax** is a functional neural network building library. The basic idea is that
562 a single layer or an entire network can be modeled as an `(init_fun, apply_fun)`
563 pair. The `init_fun` is used to initialize network parameters and the
564 `apply_fun` takes parameters and inputs to produce outputs. There are
565 constructor functions for common basic pairs, like `Conv` and `Relu`, and these
566 pairs can be composed in series using `stax.serial` or in parallel using
567 `stax.parallel`.
568
569 Here’s an example:
570
571 ```python
572 import jax.numpy as np
573 from jax import random
574 from jax.experimental import stax
575 from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax
576
577 # Use stax to set up network initialization and evaluation functions
578 net_init, net_apply = stax.serial(
579 Conv(32, (3, 3), padding='SAME'), Relu,
580 Conv(64, (3, 3), padding='SAME'), Relu,
581 MaxPool((2, 2)), Flatten,
582 Dense(128), Relu,
583 Dense(10), LogSoftmax,
584 )
585
586 # Initialize parameters, not committing to a batch shape
587 rng = random.PRNGKey(0)
588 in_shape = (-1, 28, 28, 1)
589 out_shape, net_params = net_init(rng, in_shape)
590
591 # Apply network to dummy inputs
592 inputs = np.zeros((128, 28, 28, 1))
593 predictions = net_apply(net_params, inputs)
594 ```
595
596 ### First-order optimization
597
598 JAX has a minimal optimization library focused on stochastic first-order
599 optimizers. Every optimizer is modeled as an `(init_fun, update_fun,
600 get_params)` triple of functions. The `init_fun` is used to initialize the
601 optimizer state, which could include things like momentum variables, and the
602 `update_fun` accepts a gradient and an optimizer state to produce a new
603 optimizer state. The `get_params` function extracts the current iterate (i.e.
604 the current parameters) from the optimizer state. The parameters being optimized
605 can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can
606 store your parameters however you’d like.
607
608 Here’s an example, using `jit` to compile the whole update end-to-end:
609
610 ```python
611 from jax.experimental import optimizers
612 from jax import jit, grad
613
614 # Define a simple squared-error loss
615 def loss(params, batch):
616 inputs, targets = batch
617 predictions = net_apply(params, inputs)
618 return np.sum((predictions - targets)**2)
619
620 # Use optimizers to set optimizer initialization and update functions
621 opt_init, opt_update, get_params = optimizers.momentum(step_size=1e-3, mass=0.9)
622
623 # Define a compiled update step
624 @jit
625 def step(i, opt_state, batch):
626 params = get_params(opt_state)
627 g = grad(loss)(params, batch)
628 return opt_update(i, g, opt_state)
629
630 # Dummy input data stream
631 data_generator = ((np.zeros((128, 28, 28, 1)), np.zeros((128, 10)))
632 for _ in range(10))
633
634 # Optimize parameters in a loop
635 opt_state = opt_init(net_params)
636 for i in range(10):
637 opt_state = step(i, opt_state, next(data_generator))
638 net_params = get_params(opt_state)
639 ```
640
641 ## How it works
642
643 Programming in machine learning is about expressing and transforming functions.
644 Transformations include automatic differentiation, compilation for accelerators,
645 and automatic batching. High-level languages like Python are great for
646 expressing functions, but usually all we can do with them is apply them. We lose
647 access to their internal structure which would let us perform transformations.
648
649 JAX is a tool for specializing and translating high-level Python+NumPy functions
650 into a representation that can be transformed and then lifted back into a Python
651 function.
652
653 
654
655 JAX specializes Python functions by tracing. Tracing a function means monitoring
656 all the basic operations that are applied to its input to produce its output,
657 and recording these operations and the data-flow between them in a directed
658 acyclic graph (DAG). To perform tracing, JAX wraps primitive operations, like
659 basic numerical kernels, so that when they’re called they add themselves to a
660 list of operations performed along with their inputs and outputs. To keep track
661 of how data flows between these primitives, values being tracked are wrapped in
662 instances of the `Tracer` class.
663
664 When a Python function is provided to `grad` or `jit`, it’s wrapped for tracing
665 and returned. When the wrapped function is called, we abstract the concrete
666 arguments provided into instances of the `AbstractValue` class, box them for
667 tracing in instances of the `Tracer` class, and call the function on them.
668 Abstract arguments represent sets of possible values rather than specific
669 values: for example, `jit` abstracts ndarray arguments to abstract values that
670 represent all ndarrays with the same shape and dtype. In contrast, `grad`
671 abstracts ndarray arguments to represent an infinitesimal neighborhood of the
672 underlying
673 value. By tracing the Python function on these abstract values, we ensure that
674 it’s specialized enough so that it’s tractable to transform, and that it’s still
675 general enough so that the transformed result is useful, and possibly reusable.
676 These transformed functions are then lifted back into Python callables in a way
677 that allows them to be traced and transformed again as needed.
678
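As a small, self-contained sketch of that specialization behavior (assuming only a standard `jax` install), the Python-level `print` below runs while tracing, so it fires once per new shape/dtype signature rather than on every call:

```python
import jax.numpy as np
from jax import jit

def f(x):
  print("tracing f for", x)  # x is a Tracer boxing an abstract value here
  return np.sin(x) * x

fast_f = jit(f)
fast_f(np.ones(3))   # traces and compiles a float32[3] specialization
fast_f(np.zeros(3))  # same shape and dtype: reuses the compiled version, no print
fast_f(np.ones(4))   # new shape: traced and compiled again
```
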
679 The primitive functions that JAX traces are mostly in 1:1 correspondence with
680 [XLA HLO](https://www.tensorflow.org/xla/operation_semantics) and are defined
681 in [lax.py](https://github.com/google/jax/blob/master/jax/lax.py). This 1:1
682 correspondence makes most of the translations to XLA essentially trivial, and
683 ensures we only have a small set of primitives to cover for other
684 transformations like automatic differentiation. The [`jax.numpy`
685 layer](https://github.com/google/jax/blob/master/jax/numpy/) is written in pure
686 Python simply by expressing NumPy functions in terms of the LAX functions (and
687 other NumPy functions we’ve already written). That makes `jax.numpy` easy to
688 extend.
689
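For example (a small sketch using only the public `jax.numpy` and `jax.lax` modules), the NumPy-flavored call is a thin wrapper over the corresponding LAX primitive:

```python
import jax.numpy as np
from jax import lax

x = np.ones(3, np.float32)
np.add(x, 1)                         # NumPy-style wrapper: promotes the Python int and broadcasts
lax.add(x, np.ones(3, np.float32))   # underlying primitive: shapes and dtypes must already match
```
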
690 When you use `jax.numpy`, the underlying LAX primitives are `jit`-compiled
691 behind the scenes, allowing you to write unrestricted Python+Numpy code while
692 still executing each primitive operation on an accelerator.
693
694 But JAX can do more: instead of just compiling and dispatching to a fixed set of
695 individual primitives, you can use `jit` on larger and larger functions to be
696 end-to-end compiled and optimized. For example, instead of just compiling and
697 dispatching a convolution op, you can compile a whole network, or a whole
698 gradient evaluation and optimizer update step.
699
700 The tradeoff is that `jit` functions have to satisfy some additional
701 specialization requirements: since we want to compile traces that are
702 specialized on shapes and dtypes, but not specialized all the way to concrete
703 values, the Python code under a `jit` decorator must be applicable to abstract
704 values. If we try to evaluate `x > 0` on an abstract `x`, the result is an
705 abstract value representing the set `{True, False}`, and so a Python branch like
706 `if x > 0` will raise an error: it doesn’t know which way to go!
707 See [What’s supported](#whats-supported) for more
708 information about `jit` requirements.
709
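Here is a tiny sketch of that restriction (the function names are made up for illustration); `np.where` is one standard way to keep the selection inside the traced computation:

```python
import jax.numpy as np
from jax import jit

@jit
def relu_with_python_branch(x):
  if x > 0:  # fails under jit: x is abstract, so `x > 0` has no concrete True/False
    return x
  return 0.

@jit
def relu_with_traced_select(x):
  return np.where(x > 0, x, 0.)  # the selection is part of the traced computation

relu_with_traced_select(3.)      # works
# relu_with_python_branch(3.)    # raises an error at trace time
```
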
710 The good news about this tradeoff is that `jit` is opt-in: JAX libraries use
711 `jit` on individual operations and functions behind the scenes, allowing you to
712 write unrestricted Python+Numpy and still make use of a hardware accelerator.
713 But when you want to maximize performance, you can often use `jit` in your own
714 code to compile and end-to-end optimize much bigger functions.
715
716 ## What we're working on
717 1. Documentation!
718 2. Cloud TPU support
719 3. Multi-GPU and multi-TPU support
720 4. Full NumPy coverage and some SciPy coverage
721 5. Full coverage for vmap
722 6. Make everything faster
723 * Lowering the XLA function dispatch overhead
724 * Linear algebra routines (MKL on CPU, MAGMA on GPU)
725 7. `cond` and `while` primitives with efficient automatic differentiation
726
727 ## Current gotchas
728
729 For a survey of current gotchas, with examples and explanations, we highly
730 recommend reading the [Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
731
732 Some stand-out gotchas that might surprise NumPy users:
733 1. JAX enforces single-precision (32-bit, e.g. `float32`) values by default, and
734 to enable double-precision (64-bit, e.g. `float64`) one needs to set the
735 `jax_enable_x64` variable **at startup** (or set the environment variable
736 `JAX_ENABLE_X64=True`, see [the Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb#scrollTo=YTktlwTTMgFl))
737 2. Some of NumPy's dtype promotion semantics involving a mix of Python scalars
738 and NumPy types aren't preserved, namely `np.add(1, np.array([2],
739 np.float32)).dtype` is `float64` rather than `float32`.
740 3. In-place mutation of arrays isn't supported, though [there is an
741 alternative](https://jax.readthedocs.io/en/latest/jax.ops.html). Generally
742 JAX requires functional code.
743 4. PRNGs are different and can be awkward, though for [good
744 reasons](https://github.com/google/jax/blob/master/design_notes/prng.md), and
745 non-reuse (linearity) is not yet checked (see the short sketch below).
746 5. NumPy's nan semantics aren't preserved on some backends
747
748 See [the notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb) for much more information.
749
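As a concrete illustration of gotchas 1 and 4 above, here is a minimal sketch of enabling 64-bit mode at startup and of the explicit key-splitting pattern that avoids reusing a PRNG key:

```python
from jax.config import config
config.update("jax_enable_x64", True)  # must run before any computations

from jax import random

key = random.PRNGKey(0)
key, subkey = random.split(key)   # derive a fresh subkey instead of reusing `key`
x = random.normal(subkey, (3,))   # float64 because 64-bit mode is enabled
key, subkey = random.split(key)   # split again before the next draw
y = random.uniform(subkey, (3,))
```
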
750 ## Contributors
751
752 So far, JAX includes lots of help and [contributions](https://github.com/google/jax/graphs/contributors). In addition to the code contributions reflected on GitHub, JAX has benefitted substantially from the advice of
753 [Jamie Townsend](https://github.com/j-towns),
754 [Peter Hawkins](https://github.com/hawkinsp),
755 [Jonathan Ragan-Kelley](https://people.eecs.berkeley.edu/~jrk/),
756 [Alex Wiltschko](http://github.com/alexbw),
757 George Dahl,
758 [Stephan Hoyer](http://stephanhoyer.com/),
759 Sam Schoenholz,
760 [Eli Bendersky](https://github.com/eliben),
761 Zak Stone,
762 [Alexey Radul](https://github.com/axch),
763 Michael Isard,
764 Skye Wanderman-Milne,
765 and many others.
766
[end of README.md]
[start of examples/differentially_private_sgd.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 r"""JAX efficiently trains a differentially private conv net on MNIST.
16
17 This script contains a JAX implementation of Differentially Private Stochastic
18 Gradient Descent (https://arxiv.org/abs/1607.00133). DPSGD requires clipping
19 the per-example parameter gradients, which is non-trivial to implement
20 efficiently for convolutional neural networks. The JAX XLA compiler shines in
21 this setting by optimizing the minibatch-vectorized computation for
22 convolutional architectures. Train time takes a few seconds per epoch on a
23 commodity GPU.
24
25 This code depends on tensorflow_privacy (https://github.com/tensorflow/privacy)
26 Install instructions:
27 $ pip install tensorflow
28 $ git clone https://github.com/tensorflow/privacy
29 $ cd privacy
30 $ pip install .
31
32 The results match those in the reference TensorFlow baseline implementation:
33 https://github.com/tensorflow/privacy/tree/master/tutorials
34
35 Example invocations:
36 # this non-private baseline should get ~99% acc
37 python -m examples.differentially_private_sgd \
38 --dpsgd=False \
39 --learning_rate=.1 \
40 --epochs=20 \
41
42 # this private baseline should get ~95% acc
43 python -m examples.differentially_private_sgd \
44 --dpsgd=True \
45 --noise_multiplier=1.3 \
46 --l2_norm_clip=1.5 \
47 --epochs=15 \
48 --learning_rate=.25 \
49
50 # this private baseline should get ~96.6% acc
51 python -m examples.differentially_private_sgd \
52 --dpsgd=True \
53 --noise_multiplier=1.1 \
54 --l2_norm_clip=1.0 \
55 --epochs=60 \
56 --learning_rate=.15 \
57
58 # this private baseline should get ~97% acc
59 python -m examples.differentially_private_sgd \
60 --dpsgd=True \
61 --noise_multiplier=0.7 \
62 --l2_norm_clip=1.5 \
63 --epochs=45 \
64 --learning_rate=.25 \
65 """
66 from __future__ import absolute_import
67 from __future__ import division
68 from __future__ import print_function
69
70 import itertools
71 import time
72 import warnings
73
74 from absl import app
75 from absl import flags
76
77 from jax import grad
78 from jax import jit
79 from jax import partial
80 from jax import random
81 from jax import tree_util
82 from jax import vmap
83 from jax.experimental import optimizers
84 from jax.experimental import stax
85 from jax.lax import stop_gradient
86 import jax.numpy as np
87 from examples import datasets
88 import numpy.random as npr
89
90 # https://github.com/tensorflow/privacy
91 from privacy.analysis.rdp_accountant import compute_rdp
92 from privacy.analysis.rdp_accountant import get_privacy_spent
93
94 FLAGS = flags.FLAGS
95
96 flags.DEFINE_boolean(
97 'dpsgd', True, 'If True, train with DP-SGD. If False, '
98 'train with vanilla SGD.')
99 flags.DEFINE_float('learning_rate', .15, 'Learning rate for training')
100 flags.DEFINE_float('noise_multiplier', 1.1,
101 'Ratio of the standard deviation to the clipping norm')
102 flags.DEFINE_float('l2_norm_clip', 1.0, 'Clipping norm')
103 flags.DEFINE_integer('batch_size', 256, 'Batch size')
104 flags.DEFINE_integer('epochs', 60, 'Number of epochs')
105 flags.DEFINE_integer('seed', 0, 'Seed for jax PRNG')
106 flags.DEFINE_integer(
107 'microbatches', None, 'Number of microbatches '
108 '(must evenly divide batch_size)')
109 flags.DEFINE_string('model_dir', None, 'Model directory')
110
111
112 init_random_params, predict = stax.serial(
113 stax.Conv(16, (8, 8), padding='SAME', strides=(2, 2)),
114 stax.Relu,
115 stax.MaxPool((2, 2), (1, 1)),
116 stax.Conv(32, (4, 4), padding='VALID', strides=(2, 2)),
117 stax.Relu,
118 stax.MaxPool((2, 2), (1, 1)),
119 stax.Flatten,
120 stax.Dense(32),
121 stax.Relu,
122 stax.Dense(10),
123 )
124
125
126 def loss(params, batch):
127 inputs, targets = batch
128 logits = predict(params, inputs)
129 logits = stax.logsoftmax(logits) # log normalize
130 return -np.mean(np.sum(logits * targets, 1)) # cross entropy loss
131
132
133 def accuracy(params, batch):
134 inputs, targets = batch
135 target_class = np.argmax(targets, axis=1)
136 predicted_class = np.argmax(predict(params, inputs), axis=1)
137 return np.mean(predicted_class == target_class)
138
139
140 def private_grad(params, batch, rng, l2_norm_clip, noise_multiplier,
141 batch_size):
142 """Return differentially private gradients for params, evaluated on batch."""
143
144 def _clipped_grad(params, single_example_batch):
145 """Evaluate gradient for a single-example batch and clip its grad norm."""
146 grads = grad(loss)(params, single_example_batch)
147
148 nonempty_grads, tree_def = tree_util.tree_flatten(grads)
149 total_grad_norm = np.linalg.norm(
150 [np.linalg.norm(neg.ravel()) for neg in nonempty_grads])
151 divisor = stop_gradient(np.amax((total_grad_norm / l2_norm_clip, 1.)))
152 normalized_nonempty_grads = [g / divisor for g in nonempty_grads]
153 return tree_util.tree_unflatten(tree_def, normalized_nonempty_grads)
154
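  # vmap maps the single-example clipping function over the leading batch axis,
  # so per-example clipped gradients are computed in one vectorized call before
  # being summed, noised, and averaged below.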
155 px_clipped_grad_fn = vmap(partial(_clipped_grad, params))
156 std_dev = l2_norm_clip * noise_multiplier
157 noise_ = lambda n: n + std_dev * random.normal(rng, n.shape)
158 normalize_ = lambda n: n / float(batch_size)
159 tree_map = tree_util.tree_map
160 sum_ = lambda n: np.sum(n, 0) # aggregate
161 aggregated_clipped_grads = tree_map(sum_, px_clipped_grad_fn(batch))
162 noised_aggregated_clipped_grads = tree_map(noise_, aggregated_clipped_grads)
163 normalized_noised_aggregated_clipped_grads = (
164 tree_map(normalize_, noised_aggregated_clipped_grads)
165 )
166 return normalized_noised_aggregated_clipped_grads
167
168
169 def shape_as_image(images, labels, dummy_dim=False):
170 target_shape = (-1, 1, 28, 28, 1) if dummy_dim else (-1, 28, 28, 1)
171 return np.reshape(images, target_shape), labels
172
173
174 def compute_epsilon(steps, num_examples=60000, target_delta=1e-5):
175 if num_examples * target_delta > 1.:
176 warnings.warn('Your delta might be too high.')
177 q = FLAGS.batch_size / float(num_examples)
178   orders = list(np.linspace(1.1, 10.9, 99)) + list(range(11, 64))
179 rdp_const = compute_rdp(q, FLAGS.noise_multiplier, steps, orders)
180 eps, _, _ = get_privacy_spent(orders, rdp_const, target_delta=target_delta)
181 return eps
182
183
184 def main(_):
185
186 if FLAGS.microbatches:
187 raise NotImplementedError(
188 'Microbatches < batch size not currently supported'
189 )
190
191 train_images, train_labels, test_images, test_labels = datasets.mnist()
192 num_train = train_images.shape[0]
193 num_complete_batches, leftover = divmod(num_train, FLAGS.batch_size)
194 num_batches = num_complete_batches + bool(leftover)
195 key = random.PRNGKey(FLAGS.seed)
196
197 def data_stream():
198 rng = npr.RandomState(FLAGS.seed)
199 while True:
200 perm = rng.permutation(num_train)
201 for i in range(num_batches):
202 batch_idx = perm[i * FLAGS.batch_size:(i + 1) * FLAGS.batch_size]
203 yield train_images[batch_idx], train_labels[batch_idx]
204
205 batches = data_stream()
206
207 opt_init, opt_update, get_params = optimizers.sgd(FLAGS.learning_rate)
208
209 @jit
210 def update(_, i, opt_state, batch):
211 params = get_params(opt_state)
212 return opt_update(i, grad(loss)(params, batch), opt_state)
213
214 @jit
215 def private_update(rng, i, opt_state, batch):
216 params = get_params(opt_state)
217 rng = random.fold_in(rng, i) # get new key for new random numbers
218 return opt_update(
219 i,
220 private_grad(params, batch, rng, FLAGS.l2_norm_clip,
221 FLAGS.noise_multiplier, FLAGS.batch_size), opt_state)
222
223 _, init_params = init_random_params(key, (-1, 28, 28, 1))
224 opt_state = opt_init(init_params)
225 itercount = itertools.count()
226
227 steps_per_epoch = 60000 // FLAGS.batch_size
228 print('\nStarting training...')
229 for epoch in range(1, FLAGS.epochs + 1):
230 start_time = time.time()
231 # pylint: disable=no-value-for-parameter
232 for _ in range(num_batches):
233 if FLAGS.dpsgd:
234 opt_state = \
235 private_update(
236 key, next(itercount), opt_state,
237 shape_as_image(*next(batches), dummy_dim=True))
238 else:
239 opt_state = update(
240 key, next(itercount), opt_state, shape_as_image(*next(batches)))
241 # pylint: enable=no-value-for-parameter
242 epoch_time = time.time() - start_time
243 print('Epoch {} in {:0.2f} sec'.format(epoch, epoch_time))
244
245 # evaluate test accuracy
246 params = get_params(opt_state)
247 test_acc = accuracy(params, shape_as_image(test_images, test_labels))
248 test_loss = loss(params, shape_as_image(test_images, test_labels))
249 print('Test set loss, accuracy (%): ({:.2f}, {:.2f})'.format(
250 test_loss, 100 * test_acc))
251
252 # determine privacy loss so far
253 if FLAGS.dpsgd:
254 delta = 1e-5
255 num_examples = 60000
256 eps = compute_epsilon(epoch * steps_per_epoch, num_examples, delta)
257 print(
258 'For delta={:.0e}, the current epsilon is: {:.2f}'.format(delta, eps))
259 else:
260 print('Trained with vanilla non-private SGD optimizer')
261
262
263 if __name__ == '__main__':
264 app.run(main)
265
[end of examples/differentially_private_sgd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
google/jax
|
85881672158eeda9de2414776368d8d1ae5da599
|
lax_scan/lattice_join shape inconsistencies
For certain native scalar types, the newly implemented lattice_join code doesn't seem to be able to match the types. This is a follow-up of #650
``` python
def lattice_join(x, y):
  if x is None:
    return y
  elif y is None:
    return x
  elif isinstance(x, type(y)):
    return y.join(x)
  elif isinstance(y, type(x)):
    return x.join(y)
  else:
>   raise TypeError((x, y))
E   TypeError: (ShapedArray(int64[]), ())
```
Code to reproduce:
``` python
import unittest
import numpy as onp
import jax.numpy as np
import functools
import jax
from jax.config import config; config.update("jax_enable_x64", True)
from jax.experimental import optimizers
from jax.test_util import check_grads
def harmonic_bond(conf, params):
    return np.sum(conf * params)


class TestOptimizeGeometry(unittest.TestCase):

    def test_case(self):
        opt_init, opt_update, get_params = optimizers.sgd(5e-2)

        x0 = onp.array([0.5], dtype=onp.float64)
        params = onp.array([0.3], dtype=onp.float64)

        def minimize_structure(test_params):
            energy_fn = functools.partial(harmonic_bond, params=test_params)
            grad_fn = jax.jit(jax.grad(energy_fn, argnums=(0,)))
            opt_state = opt_init(x0)

            # use lax.scan, way faster compilation times.
            def apply_carry(carry, _):
                i, x = carry
                g = grad_fn(get_params(x))[0]
                new_state = opt_update(i, g, x)
                new_carry = (i+1, new_state)
                return new_carry, _

            carry_final, _ = jax.lax.scan(apply_carry, (np.array(0), opt_state), np.zeros((75, 0)))
            trip, opt_final = carry_final
            assert trip == 75
            return opt_final

        initial_params = 0.5
        minimize_structure(initial_params)

        def loss(test_params):
            opt_final = minimize_structure(test_params)
            return 1.0-opt_final

        loss_opt_init, loss_opt_update, loss_get_params = optimizers.sgd(5e-2)
        loss_grad_fn = jax.grad(loss, argnums=(0,))
        loss_opt_state = loss_opt_init(initial_params)
        loss_params = loss_get_params(loss_opt_state)
        loss_grad = loss_grad_fn(loss_params)[0]
```
|
I'm seeing this happen in yet another test case. I'll keep looking.
Thanks for raising this!
Minimal test cases are easier for us to make progress on. Any chance you can pare that down?
Sorry for being lazy/sloppy - I'll make more self-contained repros in the future. I've updated the original issue with a much smaller test case. I think the issue is that somewhere along the way the iteration count is being cast to a vanilla scalar (as opposed to a zero-sized jax IntArray[]).
|
2019-05-20T16:10:35Z
|
<patch>
diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -428,16 +428,25 @@ def _maybe_tracer_tuple_to_abstract_tuple(tup):
### scan
-def _convert_zeros(convert_symbolic, example, tangent):
- if tangent is ad.zero:
- if not convert_symbolic:
+def _convert_zeros(instantiate, example, tangent):
+ t = type(instantiate)
+ if t is bool:
+ if instantiate:
+ return ad.instantiate_zeros(example, tangent)
+ elif tangent is ad_util.zero:
return core.unit
else:
- return ad.zeros_like_jaxval(example)
- elif type(tangent) is ad.TangentTuple:
- return core.pack(map(_convert_zeros, convert_symbolic, example, tangent))
+ raise TypeError(tangent) # not clear if ever reachable
+ elif t is tuple:
+ if type(tangent) is ad.TangentTuple:
+ return core.pack(map(_convert_zeros, instantiate, example, tangent))
+ elif tangent is ad_util.zero:
+ zeros = [ad_util.zero] * len(instantiate)
+ return core.pack(map(_convert_zeros, instantiate, example, zeros))
+ else:
+ raise TypeError(tangent)
else:
- return tangent
+ raise TypeError(t)
def _demote_aval_rank(xs):
assert isinstance(xs, core.AbstractValue)
@@ -641,7 +650,7 @@ def _scan_partial_eval(trace, *tracers, **kwargs):
length = kwargs.pop('length')
forward = kwargs.pop('forward')
assert not kwargs
- in_pvs, in_consts = unzip2([t.pval for t in tracers])
+ in_pvs, _ = unzip2([t.pval for t in tracers])
sc_consts, sc_init, sc_xs = map(pe.unknown, in_pvs)
sc_carry = sc_init
@@ -819,7 +828,19 @@ def _make_typed_jaxpr(traceable, in_avals):
class FixedPointError(Exception): pass
+# We use a custom bind for scan just to add some error checks
+def scan_bind(consts, init, xs, forward, length, jaxpr):
+ if not core.skip_checks:
+ assert type(jaxpr.in_avals) is tuple
+ consts_aval, init_aval, xs_aval = jaxpr.in_avals
+ assert type(jaxpr.out_aval) is core.AbstractTuple
+ carry_aval, y_aval = jaxpr.out_aval
+ assert init_aval == carry_aval
+ return core.Primitive.bind(scan_p, consts, init, xs,
+ forward=forward, length=length, jaxpr=jaxpr)
+
scan_p = core.Primitive("scan")
+scan_p.def_custom_bind(scan_bind)
scan_p.def_impl(_scan_impl)
ad.primitive_jvps[scan_p] = _scan_jvp
ad.primitive_transposes[scan_p] = _scan_transpose
</patch>
|
[]
|
[]
| |||
pyca__cryptography-1865
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update openssl bindings to allow server side OCSP stapling
This is the server side for #1863
I could not find public docs, but I found example code in the OpenSSL sample server:
https://github.com/openssl/openssl/blob/master/apps/s_server.c#L2131
```
SSL_CTX_set_tlsext_status_cb(ctx, cert_status_cb);
# callback signature
static int cert_status_cb(SSL *s, void *arg);
# Set response in callback based on OCSP response
rspderlen = i2d_OCSP_RESPONSE(resp, &rspder);
SSL_set_tlsext_status_ocsp_resp(s, rspder, rspderlen);
```
</issue>
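
For orientation only (this is not part of the issue text and not the actual fix): the request amounts to exposing two OpenSSL macros through cffi. A hypothetical sketch of the declarations, written in the style of the MACROS strings used by the binding modules shown below, might look like this:

```python
# Hypothetical sketch: possible additions to the MACROS string in
# src/cryptography/hazmat/bindings/openssl/ssl.py. The names come from
# OpenSSL's ssl.h macros referenced in the issue; signatures are assumptions.
SERVER_OCSP_STAPLING_MACROS = """
long SSL_CTX_set_tlsext_status_cb(SSL_CTX *, int (*)(SSL *, void *));
long SSL_set_tlsext_status_ocsp_resp(SSL *, unsigned char *, int);
"""
```
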
<code>
[start of README.rst]
1 Cryptography
2 ============
3
4 .. image:: https://pypip.in/version/cryptography/badge.svg?style=flat
5 :target: https://pypi.python.org/pypi/cryptography/
6 :alt: Latest Version
7
8 .. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest
9 :target: https://cryptography.io
10 :alt: Latest Docs
11
12 .. image:: https://travis-ci.org/pyca/cryptography.svg?branch=master
13 :target: https://travis-ci.org/pyca/cryptography
14
15 .. image:: https://img.shields.io/coveralls/pyca/cryptography/master.svg
16 :target: https://coveralls.io/r/pyca/cryptography?branch=master
17
18
19 ``cryptography`` is a package which provides cryptographic recipes and
20 primitives to Python developers. Our goal is for it to be your "cryptographic
21 standard library". It supports Python 2.6-2.7, Python 3.3+, and PyPy.
22
23 ``cryptography`` includes both high level recipes, and low level interfaces to
24 common cryptographic algorithms such as symmetric ciphers, message digests and
25 key derivation functions. For example, to encrypt something with
26 ``cryptography``'s high level symmetric encryption recipe:
27
28 .. code-block:: pycon
29
30 >>> from cryptography.fernet import Fernet
31 >>> # Put this somewhere safe!
32 >>> key = Fernet.generate_key()
33 >>> f = Fernet(key)
34 >>> token = f.encrypt(b"A really secret message. Not for prying eyes.")
35 >>> token
36 '...'
37 >>> f.decrypt(token)
38 'A really secret message. Not for prying eyes.'
39
40 You can find more information in the `documentation`_.
41
42 Discussion
43 ~~~~~~~~~~
44
45 If you run into bugs, you can file them in our `issue tracker`_.
46
47 We maintain a `cryptography-dev`_ mailing list for development discussion.
48
49 You can also join ``#cryptography-dev`` on Freenode to ask questions or get
50 involved.
51
52
53 .. _`documentation`: https://cryptography.io/
54 .. _`issue tracker`: https://github.com/pyca/cryptography/issues
55 .. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev
56
[end of README.rst]
[start of setup.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import platform
9 import subprocess
10 import sys
11 from distutils.command.build import build
12
13 import pkg_resources
14
15 from setuptools import find_packages, setup
16 from setuptools.command.install import install
17 from setuptools.command.test import test
18
19
20 base_dir = os.path.dirname(__file__)
21 src_dir = os.path.join(base_dir, "src")
22
23 # When executing the setup.py, we need to be able to import ourselves, this
24 # means that we need to add the src/ directory to the sys.path.
25 sys.path.insert(0, src_dir)
26
27 about = {}
28 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
29 exec(f.read(), about)
30
31
32 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
33
34 requirements = [
35 "idna",
36 "pyasn1",
37 "six>=1.4.1",
38 "setuptools"
39 ]
40
41 if sys.version_info < (3, 4):
42 requirements.append("enum34")
43
44 if sys.version_info < (3, 3):
45 requirements.append("ipaddress")
46
47 if platform.python_implementation() != "PyPy":
48 requirements.append("cffi>=0.8")
49
50 # If you add a new dep here you probably need to add it in the tox.ini as well
51 test_requirements = [
52 "pytest",
53 "pretend",
54 "iso8601",
55 ]
56
57 # If there's no vectors locally that probably means we are in a tarball and
58 # need to go and get the matching vectors package from PyPi
59 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
60 test_requirements.append(VECTORS_DEPENDENCY)
61
62
63 def cc_is_available():
64 return sys.platform == "darwin" and list(map(
65 int, platform.mac_ver()[0].split("."))) >= [10, 8, 0]
66
67
68 backends = [
69 "openssl = cryptography.hazmat.backends.openssl:backend"
70 ]
71
72 if cc_is_available():
73 backends.append(
74 "commoncrypto = cryptography.hazmat.backends.commoncrypto:backend",
75 )
76
77
78 def get_ext_modules():
79 from cryptography.hazmat.bindings.commoncrypto.binding import (
80 Binding as CommonCryptoBinding
81 )
82 from cryptography.hazmat.bindings.openssl.binding import (
83 Binding as OpenSSLBinding
84 )
85 from cryptography.hazmat.primitives import constant_time, padding
86
87 ext_modules = [
88 OpenSSLBinding.ffi.verifier.get_extension(),
89 constant_time._ffi.verifier.get_extension(),
90 padding._ffi.verifier.get_extension()
91 ]
92 if cc_is_available():
93 ext_modules.append(CommonCryptoBinding.ffi.verifier.get_extension())
94 return ext_modules
95
96
97 class CFFIBuild(build):
98 """
99 This class exists, instead of just providing ``ext_modules=[...]`` directly
100 in ``setup()`` because importing cryptography requires we have several
101 packages installed first.
102
103 By doing the imports here we ensure that packages listed in
104 ``setup_requires`` are already installed.
105 """
106
107 def finalize_options(self):
108 self.distribution.ext_modules = get_ext_modules()
109 build.finalize_options(self)
110
111
112 class CFFIInstall(install):
113 """
114 As a consequence of CFFIBuild and it's late addition of ext_modules, we
115 need the equivalent for the ``install`` command to install into platlib
116 install-dir rather than purelib.
117 """
118
119 def finalize_options(self):
120 self.distribution.ext_modules = get_ext_modules()
121 install.finalize_options(self)
122
123
124 class PyTest(test):
125 def finalize_options(self):
126 test.finalize_options(self)
127 self.test_args = []
128 self.test_suite = True
129
130 # This means there's a vectors/ folder with the package in here.
131 # cd into it, install the vectors package and then refresh sys.path
132 if VECTORS_DEPENDENCY not in test_requirements:
133 subprocess.check_call(
134 [sys.executable, "setup.py", "install"], cwd="vectors"
135 )
136 pkg_resources.get_distribution("cryptography_vectors").activate()
137
138 def run_tests(self):
139 # Import here because in module scope the eggs are not loaded.
140 import pytest
141 test_args = [os.path.join(base_dir, "tests")]
142 errno = pytest.main(test_args)
143 sys.exit(errno)
144
145
146 def keywords_with_side_effects(argv):
147 """
148 Get a dictionary with setup keywords that (can) have side effects.
149
150 :param argv: A list of strings with command line arguments.
151 :returns: A dictionary with keyword arguments for the ``setup()`` function.
152
153 This setup.py script uses the setuptools 'setup_requires' feature because
154 this is required by the cffi package to compile extension modules. The
155 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
156 build process as a result of setup.py invocations that don't need the cffi
157 module to be built (setup.py serves the dual purpose of exposing package
158 metadata).
159
160 All of the options listed by ``python setup.py --help`` that print
161 information should be recognized here. The commands ``clean``,
162 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
163 Any combination of these options and commands is also supported.
164
165 This function was originally based on the `setup.py script`_ of SciPy (see
166 also the discussion in `pip issue #25`_).
167
168 .. _pip issue #25: https://github.com/pypa/pip/issues/25
169 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
170 """
171 no_setup_requires_arguments = (
172 '-h', '--help',
173 '-n', '--dry-run',
174 '-q', '--quiet',
175 '-v', '--verbose',
176 '-V', '--version',
177 '--author',
178 '--author-email',
179 '--classifiers',
180 '--contact',
181 '--contact-email',
182 '--description',
183 '--egg-base',
184 '--fullname',
185 '--help-commands',
186 '--keywords',
187 '--licence',
188 '--license',
189 '--long-description',
190 '--maintainer',
191 '--maintainer-email',
192 '--name',
193 '--no-user-cfg',
194 '--obsoletes',
195 '--platforms',
196 '--provides',
197 '--requires',
198 '--url',
199 'clean',
200 'egg_info',
201 'register',
202 'sdist',
203 'upload',
204 )
205
206 def is_short_option(argument):
207 """Check whether a command line argument is a short option."""
208 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
209
210 def expand_short_options(argument):
211 """Expand combined short options into canonical short options."""
212 return ('-' + char for char in argument[1:])
213
214 def argument_without_setup_requirements(argv, i):
215 """Check whether a command line argument needs setup requirements."""
216 if argv[i] in no_setup_requires_arguments:
217 # Simple case: An argument which is either an option or a command
218 # which doesn't need setup requirements.
219 return True
220 elif (is_short_option(argv[i]) and
221 all(option in no_setup_requires_arguments
222 for option in expand_short_options(argv[i]))):
223 # Not so simple case: Combined short options none of which need
224 # setup requirements.
225 return True
226 elif argv[i - 1:i] == ['--egg-base']:
227 # Tricky case: --egg-info takes an argument which should not make
228 # us use setup_requires (defeating the purpose of this code).
229 return True
230 else:
231 return False
232
233 if all(argument_without_setup_requirements(argv, i)
234 for i in range(1, len(argv))):
235 return {
236 "cmdclass": {
237 "build": DummyCFFIBuild,
238 "install": DummyCFFIInstall,
239 "test": DummyPyTest,
240 }
241 }
242 else:
243 return {
244 "setup_requires": requirements,
245 "cmdclass": {
246 "build": CFFIBuild,
247 "install": CFFIInstall,
248 "test": PyTest,
249 }
250 }
251
252
253 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
254 "while command line arguments implied a side effect "
255 "free command or option.")
256
257
258 class DummyCFFIBuild(build):
259 """
260 This class makes it very obvious when ``keywords_with_side_effects()`` has
261 incorrectly interpreted the command line arguments to ``setup.py build`` as
262 one of the 'side effect free' commands or options.
263 """
264
265 def run(self):
266 raise RuntimeError(setup_requires_error)
267
268
269 class DummyCFFIInstall(install):
270 """
271 This class makes it very obvious when ``keywords_with_side_effects()`` has
272 incorrectly interpreted the command line arguments to ``setup.py install``
273 as one of the 'side effect free' commands or options.
274 """
275
276 def run(self):
277 raise RuntimeError(setup_requires_error)
278
279
280 class DummyPyTest(test):
281 """
282 This class makes it very obvious when ``keywords_with_side_effects()`` has
283 incorrectly interpreted the command line arguments to ``setup.py test`` as
284 one of the 'side effect free' commands or options.
285 """
286
287 def run_tests(self):
288 raise RuntimeError(setup_requires_error)
289
290
291 with open(os.path.join(base_dir, "README.rst")) as f:
292 long_description = f.read()
293
294
295 setup(
296 name=about["__title__"],
297 version=about["__version__"],
298
299 description=about["__summary__"],
300 long_description=long_description,
301 license=about["__license__"],
302 url=about["__uri__"],
303
304 author=about["__author__"],
305 author_email=about["__email__"],
306
307 classifiers=[
308 "Intended Audience :: Developers",
309 "License :: OSI Approved :: Apache Software License",
310 "License :: OSI Approved :: BSD License",
311 "Natural Language :: English",
312 "Operating System :: MacOS :: MacOS X",
313 "Operating System :: POSIX",
314 "Operating System :: POSIX :: BSD",
315 "Operating System :: POSIX :: Linux",
316 "Operating System :: Microsoft :: Windows",
317 "Programming Language :: Python",
318 "Programming Language :: Python :: 2",
319 "Programming Language :: Python :: 2.6",
320 "Programming Language :: Python :: 2.7",
321 "Programming Language :: Python :: 3",
322 "Programming Language :: Python :: 3.3",
323 "Programming Language :: Python :: 3.4",
324 "Programming Language :: Python :: Implementation :: CPython",
325 "Programming Language :: Python :: Implementation :: PyPy",
326 "Topic :: Security :: Cryptography",
327 ],
328
329 package_dir={"": "src"},
330 packages=find_packages(where="src", exclude=["tests", "tests.*"]),
331 include_package_data=True,
332
333 install_requires=requirements,
334 tests_require=test_requirements,
335
336 # for cffi
337 zip_safe=False,
338 ext_package="cryptography",
339 entry_points={
340 "cryptography.backends": backends,
341 },
342 **keywords_with_side_effects(sys.argv)
343 )
344
[end of setup.py]
[start of src/cryptography/hazmat/backends/openssl/x509.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import datetime
17
18 import idna
19
20 from cryptography import utils, x509
21 from cryptography.exceptions import UnsupportedAlgorithm
22 from cryptography.hazmat.primitives import hashes
23
24
25 def _obj2txt(backend, obj):
26 # Set to 80 on the recommendation of
27 # https://www.openssl.org/docs/crypto/OBJ_nid2ln.html#return_values
28 buf_len = 80
29 buf = backend._ffi.new("char[]", buf_len)
30 res = backend._lib.OBJ_obj2txt(buf, buf_len, obj, 1)
31 assert res > 0
32 return backend._ffi.buffer(buf, res)[:].decode()
33
34
35 def _build_x509_name(backend, x509_name):
36 count = backend._lib.X509_NAME_entry_count(x509_name)
37 attributes = []
38 for x in range(count):
39 entry = backend._lib.X509_NAME_get_entry(x509_name, x)
40 obj = backend._lib.X509_NAME_ENTRY_get_object(entry)
41 assert obj != backend._ffi.NULL
42 data = backend._lib.X509_NAME_ENTRY_get_data(entry)
43 assert data != backend._ffi.NULL
44 buf = backend._ffi.new("unsigned char **")
45 res = backend._lib.ASN1_STRING_to_UTF8(buf, data)
46 assert res >= 0
47 assert buf[0] != backend._ffi.NULL
48 buf = backend._ffi.gc(
49 buf, lambda buffer: backend._lib.OPENSSL_free(buffer[0])
50 )
51 value = backend._ffi.buffer(buf[0], res)[:].decode('utf8')
52 oid = _obj2txt(backend, obj)
53 attributes.append(
54 x509.NameAttribute(
55 x509.ObjectIdentifier(oid), value
56 )
57 )
58
59 return x509.Name(attributes)
60
61
62 def _build_general_name(backend, gn):
63 if gn.type == backend._lib.GEN_DNS:
64 data = backend._ffi.buffer(gn.d.dNSName.data, gn.d.dNSName.length)[:]
65 return x509.DNSName(idna.decode(data))
66 elif gn.type == backend._lib.GEN_RID:
67 oid = _obj2txt(backend, gn.d.registeredID)
68 return x509.RegisteredID(x509.ObjectIdentifier(oid))
69 else:
70 # otherName, x400Address or ediPartyName
71 raise x509.UnsupportedGeneralNameType(
72 "{0} is not a supported type".format(
73 x509._GENERAL_NAMES.get(gn.type, gn.type)
74 ),
75 gn.type
76 )
77
78
79 @utils.register_interface(x509.Certificate)
80 class _Certificate(object):
81 def __init__(self, backend, x509):
82 self._backend = backend
83 self._x509 = x509
84
85 def fingerprint(self, algorithm):
86 h = hashes.Hash(algorithm, self._backend)
87 bio = self._backend._create_mem_bio()
88 res = self._backend._lib.i2d_X509_bio(
89 bio, self._x509
90 )
91 assert res == 1
92 der = self._backend._read_mem_bio(bio)
93 h.update(der)
94 return h.finalize()
95
96 @property
97 def version(self):
98 version = self._backend._lib.X509_get_version(self._x509)
99 if version == 0:
100 return x509.Version.v1
101 elif version == 2:
102 return x509.Version.v3
103 else:
104 raise x509.InvalidVersion(
105 "{0} is not a valid X509 version".format(version), version
106 )
107
108 @property
109 def serial(self):
110 asn1_int = self._backend._lib.X509_get_serialNumber(self._x509)
111 assert asn1_int != self._backend._ffi.NULL
112 bn = self._backend._lib.ASN1_INTEGER_to_BN(
113 asn1_int, self._backend._ffi.NULL
114 )
115 assert bn != self._backend._ffi.NULL
116 bn = self._backend._ffi.gc(bn, self._backend._lib.BN_free)
117 return self._backend._bn_to_int(bn)
118
119 def public_key(self):
120 pkey = self._backend._lib.X509_get_pubkey(self._x509)
121 assert pkey != self._backend._ffi.NULL
122 pkey = self._backend._ffi.gc(pkey, self._backend._lib.EVP_PKEY_free)
123
124 return self._backend._evp_pkey_to_public_key(pkey)
125
126 @property
127 def not_valid_before(self):
128 asn1_time = self._backend._lib.X509_get_notBefore(self._x509)
129 return self._parse_asn1_time(asn1_time)
130
131 @property
132 def not_valid_after(self):
133 asn1_time = self._backend._lib.X509_get_notAfter(self._x509)
134 return self._parse_asn1_time(asn1_time)
135
136 def _parse_asn1_time(self, asn1_time):
137 assert asn1_time != self._backend._ffi.NULL
138 generalized_time = self._backend._lib.ASN1_TIME_to_generalizedtime(
139 asn1_time, self._backend._ffi.NULL
140 )
141 assert generalized_time != self._backend._ffi.NULL
142 generalized_time = self._backend._ffi.gc(
143 generalized_time, self._backend._lib.ASN1_GENERALIZEDTIME_free
144 )
145 time = self._backend._ffi.string(
146 self._backend._lib.ASN1_STRING_data(
147 self._backend._ffi.cast("ASN1_STRING *", generalized_time)
148 )
149 ).decode("ascii")
150 return datetime.datetime.strptime(time, "%Y%m%d%H%M%SZ")
151
152 @property
153 def issuer(self):
154 issuer = self._backend._lib.X509_get_issuer_name(self._x509)
155 assert issuer != self._backend._ffi.NULL
156 return _build_x509_name(self._backend, issuer)
157
158 @property
159 def subject(self):
160 subject = self._backend._lib.X509_get_subject_name(self._x509)
161 assert subject != self._backend._ffi.NULL
162 return _build_x509_name(self._backend, subject)
163
164 @property
165 def signature_hash_algorithm(self):
166 oid = _obj2txt(self._backend, self._x509.sig_alg.algorithm)
167 try:
168 return x509._SIG_OIDS_TO_HASH[oid]
169 except KeyError:
170 raise UnsupportedAlgorithm(
171 "Signature algorithm OID:{0} not recognized".format(oid)
172 )
173
174 @property
175 def extensions(self):
176 extensions = []
177 seen_oids = set()
178 extcount = self._backend._lib.X509_get_ext_count(self._x509)
179 for i in range(0, extcount):
180 ext = self._backend._lib.X509_get_ext(self._x509, i)
181 assert ext != self._backend._ffi.NULL
182 crit = self._backend._lib.X509_EXTENSION_get_critical(ext)
183 critical = crit == 1
184 oid = x509.ObjectIdentifier(_obj2txt(self._backend, ext.object))
185 if oid in seen_oids:
186 raise x509.DuplicateExtension(
187 "Duplicate {0} extension found".format(oid), oid
188 )
189 elif oid == x509.OID_BASIC_CONSTRAINTS:
190 value = self._build_basic_constraints(ext)
191 elif oid == x509.OID_SUBJECT_KEY_IDENTIFIER:
192 value = self._build_subject_key_identifier(ext)
193 elif oid == x509.OID_KEY_USAGE:
194 value = self._build_key_usage(ext)
195 elif oid == x509.OID_SUBJECT_ALTERNATIVE_NAME:
196 value = self._build_subject_alt_name(ext)
197 elif critical:
198 raise x509.UnsupportedExtension(
199 "{0} is not currently supported".format(oid), oid
200 )
201 else:
202 # Unsupported non-critical extension, silently skipping for now
203 seen_oids.add(oid)
204 continue
205
206 seen_oids.add(oid)
207 extensions.append(x509.Extension(oid, critical, value))
208
209 return x509.Extensions(extensions)
210
211 def _build_basic_constraints(self, ext):
212 bc_st = self._backend._lib.X509V3_EXT_d2i(ext)
213 assert bc_st != self._backend._ffi.NULL
214 basic_constraints = self._backend._ffi.cast(
215 "BASIC_CONSTRAINTS *", bc_st
216 )
217 basic_constraints = self._backend._ffi.gc(
218 basic_constraints, self._backend._lib.BASIC_CONSTRAINTS_free
219 )
220 # The byte representation of an ASN.1 boolean true is \xff. OpenSSL
221 # chooses to just map this to its ordinal value, so true is 255 and
222 # false is 0.
223 ca = basic_constraints.ca == 255
224 if basic_constraints.pathlen == self._backend._ffi.NULL:
225 path_length = None
226 else:
227 bn = self._backend._lib.ASN1_INTEGER_to_BN(
228 basic_constraints.pathlen, self._backend._ffi.NULL
229 )
230 assert bn != self._backend._ffi.NULL
231 bn = self._backend._ffi.gc(bn, self._backend._lib.BN_free)
232 path_length = self._backend._bn_to_int(bn)
233
234 return x509.BasicConstraints(ca, path_length)
235
236 def _build_subject_key_identifier(self, ext):
237 asn1_string = self._backend._lib.X509V3_EXT_d2i(ext)
238 assert asn1_string != self._backend._ffi.NULL
239 asn1_string = self._backend._ffi.cast(
240 "ASN1_OCTET_STRING *", asn1_string
241 )
242 asn1_string = self._backend._ffi.gc(
243 asn1_string, self._backend._lib.ASN1_OCTET_STRING_free
244 )
245 return x509.SubjectKeyIdentifier(
246 self._backend._ffi.buffer(asn1_string.data, asn1_string.length)[:]
247 )
248
249 def _build_key_usage(self, ext):
250 bit_string = self._backend._lib.X509V3_EXT_d2i(ext)
251 assert bit_string != self._backend._ffi.NULL
252 bit_string = self._backend._ffi.cast("ASN1_BIT_STRING *", bit_string)
253 bit_string = self._backend._ffi.gc(
254 bit_string, self._backend._lib.ASN1_BIT_STRING_free
255 )
256 get_bit = self._backend._lib.ASN1_BIT_STRING_get_bit
257 digital_signature = get_bit(bit_string, 0) == 1
258 content_commitment = get_bit(bit_string, 1) == 1
259 key_encipherment = get_bit(bit_string, 2) == 1
260 data_encipherment = get_bit(bit_string, 3) == 1
261 key_agreement = get_bit(bit_string, 4) == 1
262 key_cert_sign = get_bit(bit_string, 5) == 1
263 crl_sign = get_bit(bit_string, 6) == 1
264 encipher_only = get_bit(bit_string, 7) == 1
265 decipher_only = get_bit(bit_string, 8) == 1
266 return x509.KeyUsage(
267 digital_signature,
268 content_commitment,
269 key_encipherment,
270 data_encipherment,
271 key_agreement,
272 key_cert_sign,
273 crl_sign,
274 encipher_only,
275 decipher_only
276 )
277
278 def _build_subject_alt_name(self, ext):
279 gns = self._backend._ffi.cast(
280 "GENERAL_NAMES *", self._backend._lib.X509V3_EXT_d2i(ext)
281 )
282 assert gns != self._backend._ffi.NULL
283 gns = self._backend._ffi.gc(gns, self._backend._lib.GENERAL_NAMES_free)
284 num = self._backend._lib.sk_GENERAL_NAME_num(gns)
285 general_names = []
286
287 for i in range(num):
288 gn = self._backend._lib.sk_GENERAL_NAME_value(gns, i)
289 assert gn != self._backend._ffi.NULL
290 value = _build_general_name(self._backend, gn)
291
292 general_names.append(value)
293
294 return x509.SubjectAlternativeName(general_names)
295
296
297 @utils.register_interface(x509.CertificateSigningRequest)
298 class _CertificateSigningRequest(object):
299 def __init__(self, backend, x509_req):
300 self._backend = backend
301 self._x509_req = x509_req
302
303 def public_key(self):
304 pkey = self._backend._lib.X509_REQ_get_pubkey(self._x509_req)
305 assert pkey != self._backend._ffi.NULL
306 pkey = self._backend._ffi.gc(pkey, self._backend._lib.EVP_PKEY_free)
307 return self._backend._evp_pkey_to_public_key(pkey)
308
309 @property
310 def subject(self):
311 subject = self._backend._lib.X509_REQ_get_subject_name(self._x509_req)
312 assert subject != self._backend._ffi.NULL
313 return _build_x509_name(self._backend, subject)
314
315 @property
316 def signature_hash_algorithm(self):
317 oid = _obj2txt(self._backend, self._x509_req.sig_alg.algorithm)
318 try:
319 return x509._SIG_OIDS_TO_HASH[oid]
320 except KeyError:
321 raise UnsupportedAlgorithm(
322 "Signature algorithm OID:{0} not recognized".format(oid)
323 )
324
[end of src/cryptography/hazmat/backends/openssl/x509.py]
[start of src/cryptography/hazmat/bindings/openssl/asn1.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 INCLUDES = """
8 #include <openssl/asn1.h>
9 """
10
11 TYPES = """
12 /*
13 * TODO: This typedef is wrong.
14 *
15 * This is due to limitations of cffi.
16 * See https://bitbucket.org/cffi/cffi/issue/69
17 *
18 * For another possible work-around (not used here because it involves more
19 * complicated use of the cffi API which falls outside the general pattern used
20 * by this package), see
21 * http://paste.pound-python.org/show/iJcTUMkKeBeS6yXpZWUU/
22 *
23 * The work-around used here is to just be sure to declare a type that is at
24 * least as large as the real type. Maciej explains:
25 *
26 * <fijal> I think you want to declare your value too large (e.g. long)
27 * <fijal> that way you'll never pass garbage
28 */
29 typedef intptr_t time_t;
30
31 typedef int ASN1_BOOLEAN;
32 typedef ... ASN1_INTEGER;
33
34 struct asn1_string_st {
35 int length;
36 int type;
37 unsigned char *data;
38 long flags;
39 };
40
41 typedef struct asn1_string_st ASN1_OCTET_STRING;
42 typedef struct asn1_string_st ASN1_IA5STRING;
43 typedef ... ASN1_BIT_STRING;
44 typedef ... ASN1_OBJECT;
45 typedef ... ASN1_STRING;
46 typedef ... ASN1_TYPE;
47 typedef ... ASN1_GENERALIZEDTIME;
48 typedef ... ASN1_ENUMERATED;
49 typedef ... ASN1_ITEM;
50 typedef ... ASN1_VALUE;
51
52 typedef struct {
53 ...;
54 } ASN1_TIME;
55 typedef ... ASN1_ITEM_EXP;
56
57 typedef ... ASN1_UTCTIME;
58
59 static const int V_ASN1_GENERALIZEDTIME;
60
61 static const int MBSTRING_FLAG;
62 static const int MBSTRING_ASC;
63 static const int MBSTRING_BMP;
64 static const int MBSTRING_UTF8;
65 static const int MBSTRING_UNIV;
66 """
67
68 FUNCTIONS = """
69 ASN1_OBJECT *ASN1_OBJECT_new(void);
70 void ASN1_OBJECT_free(ASN1_OBJECT *);
71
72 /* ASN1 OBJECT IDENTIFIER */
73 ASN1_OBJECT *d2i_ASN1_OBJECT(ASN1_OBJECT **, const unsigned char **, long);
74 int i2d_ASN1_OBJECT(ASN1_OBJECT *, unsigned char **);
75
76 /* ASN1 STRING */
77 ASN1_STRING *ASN1_STRING_new(void);
78 ASN1_STRING *ASN1_STRING_type_new(int);
79 void ASN1_STRING_free(ASN1_STRING *);
80 unsigned char *ASN1_STRING_data(ASN1_STRING *);
81 int ASN1_STRING_set(ASN1_STRING *, const void *, int);
82 int ASN1_STRING_type(ASN1_STRING *);
83 int ASN1_STRING_to_UTF8(unsigned char **, ASN1_STRING *);
84
85 /* ASN1 OCTET STRING */
86 ASN1_OCTET_STRING *ASN1_OCTET_STRING_new(void);
87 void ASN1_OCTET_STRING_free(ASN1_OCTET_STRING *);
88 int ASN1_OCTET_STRING_set(ASN1_OCTET_STRING *, const unsigned char *, int);
89
90 /* ASN1 INTEGER */
91 ASN1_INTEGER *ASN1_INTEGER_new(void);
92 void ASN1_INTEGER_free(ASN1_INTEGER *);
93 int ASN1_INTEGER_set(ASN1_INTEGER *, long);
94 int i2a_ASN1_INTEGER(BIO *, ASN1_INTEGER *);
95
96 /* ASN1 TIME */
97 ASN1_TIME *ASN1_TIME_new(void);
98 void ASN1_TIME_free(ASN1_TIME *);
99 ASN1_GENERALIZEDTIME *ASN1_TIME_to_generalizedtime(ASN1_TIME *,
100 ASN1_GENERALIZEDTIME **);
101
102 /* ASN1 UTCTIME */
103 ASN1_UTCTIME *ASN1_UTCTIME_new(void);
104 void ASN1_UTCTIME_free(ASN1_UTCTIME *);
105 int ASN1_UTCTIME_cmp_time_t(const ASN1_UTCTIME *, time_t);
106 ASN1_UTCTIME *ASN1_UTCTIME_set(ASN1_UTCTIME *, time_t);
107
108 /* ASN1 GENERALIZEDTIME */
109 int ASN1_GENERALIZEDTIME_set_string(ASN1_GENERALIZEDTIME *, const char *);
110 void ASN1_GENERALIZEDTIME_free(ASN1_GENERALIZEDTIME *);
111
112 /* ASN1 ENUMERATED */
113 ASN1_ENUMERATED *ASN1_ENUMERATED_new(void);
114 void ASN1_ENUMERATED_free(ASN1_ENUMERATED *);
115 int ASN1_ENUMERATED_set(ASN1_ENUMERATED *, long);
116
117 ASN1_VALUE *ASN1_item_d2i(ASN1_VALUE **, const unsigned char **, long,
118 const ASN1_ITEM *);
119 int ASN1_BIT_STRING_set_bit(ASN1_BIT_STRING *, int, int);
120 """
121
122 MACROS = """
123 void ASN1_BIT_STRING_free(ASN1_BIT_STRING *);
124 /* This is not a macro, but is const on some versions of OpenSSL */
125 int ASN1_BIT_STRING_get_bit(ASN1_BIT_STRING *, int);
126 ASN1_TIME *M_ASN1_TIME_dup(void *);
127 const ASN1_ITEM *ASN1_ITEM_ptr(ASN1_ITEM_EXP *);
128
129 /* These aren't macros these arguments are all const X on openssl > 1.0.x */
130
131 int ASN1_TIME_print(BIO *, ASN1_TIME *);
132 int ASN1_STRING_length(ASN1_STRING *);
133 ASN1_STRING *ASN1_STRING_dup(ASN1_STRING *);
134 int ASN1_STRING_cmp(ASN1_STRING *, ASN1_STRING *);
135 int ASN1_UTCTIME_print(BIO *, ASN1_UTCTIME *);
136
137 ASN1_OCTET_STRING *ASN1_OCTET_STRING_dup(ASN1_OCTET_STRING *);
138 int ASN1_OCTET_STRING_cmp(ASN1_OCTET_STRING *, ASN1_OCTET_STRING *);
139
140 ASN1_INTEGER *ASN1_INTEGER_dup(ASN1_INTEGER *);
141 int ASN1_INTEGER_cmp(ASN1_INTEGER *, ASN1_INTEGER *);
142 long ASN1_INTEGER_get(ASN1_INTEGER *);
143
144 BIGNUM *ASN1_INTEGER_to_BN(ASN1_INTEGER *, BIGNUM *);
145 ASN1_INTEGER *BN_to_ASN1_INTEGER(BIGNUM *, ASN1_INTEGER *);
146
147 /* These isn't a macro the arg is const on openssl 1.0.2+ */
148 int ASN1_GENERALIZEDTIME_check(ASN1_GENERALIZEDTIME *);
149 int ASN1_UTCTIME_check(ASN1_UTCTIME *);
150
151 /* Not a macro, const on openssl 1.0 */
152 int ASN1_STRING_set_default_mask_asc(char *);
153 """
154
155 CUSTOMIZATIONS = """
156 """
157
158 CONDITIONAL_NAMES = {}
159
[end of src/cryptography/hazmat/bindings/openssl/asn1.py]
[start of src/cryptography/hazmat/bindings/openssl/binding.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import sys
9 import threading
10
11 from cryptography.hazmat.bindings.utils import (
12 build_ffi_for_binding, load_library_for_binding,
13 )
14
15
16 _OSX_PRE_INCLUDE = """
17 #ifdef __APPLE__
18 #include <AvailabilityMacros.h>
19 #define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
20 DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
21 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
22 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
23 #endif
24 """
25
26 _OSX_POST_INCLUDE = """
27 #ifdef __APPLE__
28 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
29 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
30 __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
31 #endif
32 """
33
34
35 def _get_libraries(platform):
36 # OpenSSL goes by a different library name on different operating systems.
37 if platform != "win32":
38 # In some circumstances, the order in which these libs are
39 # specified on the linker command-line is significant;
40 # libssl must come before libcrypto
41 # (http://marc.info/?l=openssl-users&m=135361825921871)
42 return ["ssl", "crypto"]
43 else:
44 link_type = os.environ.get("PYCA_WINDOWS_LINK_TYPE", "static")
45 return _get_windows_libraries(link_type)
46
47
48 def _get_windows_libraries(link_type):
49 if link_type == "dynamic":
50 return ["libeay32", "ssleay32", "advapi32"]
51 elif link_type == "static" or link_type == "":
52 return ["libeay32mt", "ssleay32mt", "advapi32",
53 "crypt32", "gdi32", "user32", "ws2_32"]
54 else:
55 raise ValueError(
56 "PYCA_WINDOWS_LINK_TYPE must be 'static' or 'dynamic'"
57 )
58
59
60 class Binding(object):
61 """
62 OpenSSL API wrapper.
63 """
64 _module_prefix = "cryptography.hazmat.bindings.openssl."
65 _modules = [
66 "aes",
67 "asn1",
68 "bignum",
69 "bio",
70 "cmac",
71 "cms",
72 "conf",
73 "crypto",
74 "dh",
75 "dsa",
76 "ec",
77 "ecdh",
78 "ecdsa",
79 "engine",
80 "err",
81 "evp",
82 "hmac",
83 "nid",
84 "objects",
85 "opensslv",
86 "osrandom_engine",
87 "pem",
88 "pkcs7",
89 "pkcs12",
90 "rand",
91 "rsa",
92 "ssl",
93 "x509",
94 "x509name",
95 "x509v3",
96 "x509_vfy"
97 ]
98
99 _locks = None
100 _lock_cb_handle = None
101 _init_lock = threading.Lock()
102 _lock_init_lock = threading.Lock()
103
104 ffi = build_ffi_for_binding(
105 module_prefix=_module_prefix,
106 modules=_modules,
107 pre_include=_OSX_PRE_INCLUDE,
108 post_include=_OSX_POST_INCLUDE,
109 libraries=_get_libraries(sys.platform)
110 )
111 lib = None
112
113 def __init__(self):
114 self._ensure_ffi_initialized()
115
116 @classmethod
117 def _ensure_ffi_initialized(cls):
118 if cls.lib is not None:
119 return
120
121 with cls._init_lock:
122 if cls.lib is None:
123 cls.lib = load_library_for_binding(
124 cls.ffi,
125 cls._module_prefix,
126 cls._modules,
127 )
128
129 res = cls.lib.Cryptography_add_osrandom_engine()
130 assert res != 0
131
132 @classmethod
133 def init_static_locks(cls):
134 with cls._lock_init_lock:
135 cls._ensure_ffi_initialized()
136
137 if not cls._lock_cb_handle:
138 cls._lock_cb_handle = cls.ffi.callback(
139 "void(int, int, const char *, int)",
140 cls._lock_cb
141 )
142
143 # Use Python's implementation if available, importing _ssl triggers
144 # the setup for this.
145 __import__("_ssl")
146
147 if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:
148 return
149
150 # If nothing else has setup a locking callback already, we set up
151 # our own
152 num_locks = cls.lib.CRYPTO_num_locks()
153 cls._locks = [threading.Lock() for n in range(num_locks)]
154
155 cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)
156
157 @classmethod
158 def _lock_cb(cls, mode, n, file, line):
159 lock = cls._locks[n]
160
161 if mode & cls.lib.CRYPTO_LOCK:
162 lock.acquire()
163 elif mode & cls.lib.CRYPTO_UNLOCK:
164 lock.release()
165 else:
166 raise RuntimeError(
167 "Unknown lock mode {0}: lock={1}, file={2}, line={3}.".format(
168 mode, n, file, line
169 )
170 )
171
[end of src/cryptography/hazmat/bindings/openssl/binding.py]
[start of src/cryptography/hazmat/bindings/openssl/dsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 INCLUDES = """
8 #include <openssl/dsa.h>
9 """
10
11 TYPES = """
12 typedef struct dsa_st {
13 /* Prime number (public) */
14 BIGNUM *p;
15 /* Subprime (160-bit, q | p-1, public) */
16 BIGNUM *q;
17 /* Generator of subgroup (public) */
18 BIGNUM *g;
19 /* Private key x */
20 BIGNUM *priv_key;
21 /* Public key y = g^x */
22 BIGNUM *pub_key;
23 ...;
24 } DSA;
25 typedef struct {
26 BIGNUM *r;
27 BIGNUM *s;
28 } DSA_SIG;
29 """
30
31 FUNCTIONS = """
32 DSA *DSA_generate_parameters(int, unsigned char *, int, int *, unsigned long *,
33 void (*)(int, int, void *), void *);
34 int DSA_generate_key(DSA *);
35 DSA *DSA_new(void);
36 void DSA_free(DSA *);
37 DSA_SIG *DSA_SIG_new(void);
38 void DSA_SIG_free(DSA_SIG *);
39 int i2d_DSA_SIG(const DSA_SIG *, unsigned char **);
40 DSA_SIG *d2i_DSA_SIG(DSA_SIG **, const unsigned char **, long);
41 int DSA_size(const DSA *);
42 int DSA_sign(int, const unsigned char *, int, unsigned char *, unsigned int *,
43 DSA *);
44 int DSA_verify(int, const unsigned char *, int, const unsigned char *, int,
45 DSA *);
46 """
47
48 MACROS = """
49 int DSA_generate_parameters_ex(DSA *, int, unsigned char *, int,
50 int *, unsigned long *, BN_GENCB *);
51 """
52
53 CUSTOMIZATIONS = """
54 """
55
56 CONDITIONAL_NAMES = {}
57
[end of src/cryptography/hazmat/bindings/openssl/dsa.py]
[start of src/cryptography/hazmat/bindings/openssl/ecdsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 INCLUDES = """
8 #ifndef OPENSSL_NO_ECDSA
9 #include <openssl/ecdsa.h>
10 #endif
11 """
12
13 TYPES = """
14 static const int Cryptography_HAS_ECDSA;
15
16 typedef struct {
17 BIGNUM *r;
18 BIGNUM *s;
19 } ECDSA_SIG;
20
21 typedef ... CRYPTO_EX_new;
22 typedef ... CRYPTO_EX_dup;
23 typedef ... CRYPTO_EX_free;
24 """
25
26 FUNCTIONS = """
27 """
28
29 MACROS = """
30 ECDSA_SIG *ECDSA_SIG_new();
31 void ECDSA_SIG_free(ECDSA_SIG *);
32 int i2d_ECDSA_SIG(const ECDSA_SIG *, unsigned char **);
33 ECDSA_SIG *d2i_ECDSA_SIG(ECDSA_SIG **s, const unsigned char **, long);
34 ECDSA_SIG *ECDSA_do_sign(const unsigned char *, int, EC_KEY *);
35 ECDSA_SIG *ECDSA_do_sign_ex(const unsigned char *, int, const BIGNUM *,
36 const BIGNUM *, EC_KEY *);
37 int ECDSA_do_verify(const unsigned char *, int, const ECDSA_SIG *, EC_KEY *);
38 int ECDSA_sign_setup(EC_KEY *, BN_CTX *, BIGNUM **, BIGNUM **);
39 int ECDSA_sign(int, const unsigned char *, int, unsigned char *,
40 unsigned int *, EC_KEY *);
41 int ECDSA_sign_ex(int, const unsigned char *, int dgstlen, unsigned char *,
42 unsigned int *, const BIGNUM *, const BIGNUM *, EC_KEY *);
43 int ECDSA_verify(int, const unsigned char *, int, const unsigned char *, int,
44 EC_KEY *);
45 int ECDSA_size(const EC_KEY *);
46
47 const ECDSA_METHOD *ECDSA_OpenSSL();
48 void ECDSA_set_default_method(const ECDSA_METHOD *);
49 const ECDSA_METHOD *ECDSA_get_default_method();
50 int ECDSA_get_ex_new_index(long, void *, CRYPTO_EX_new *,
51 CRYPTO_EX_dup *, CRYPTO_EX_free *);
52 int ECDSA_set_method(EC_KEY *, const ECDSA_METHOD *);
53 int ECDSA_set_ex_data(EC_KEY *, int, void *);
54 void *ECDSA_get_ex_data(EC_KEY *, int);
55 """
56
57 CUSTOMIZATIONS = """
58 #ifdef OPENSSL_NO_ECDSA
59 static const long Cryptography_HAS_ECDSA = 0;
60
61 typedef struct {
62 BIGNUM *r;
63 BIGNUM *s;
64 } ECDSA_SIG;
65
66 ECDSA_SIG* (*ECDSA_SIG_new)() = NULL;
67 void (*ECDSA_SIG_free)(ECDSA_SIG *) = NULL;
68 int (*i2d_ECDSA_SIG)(const ECDSA_SIG *, unsigned char **) = NULL;
69 ECDSA_SIG* (*d2i_ECDSA_SIG)(ECDSA_SIG **s, const unsigned char **,
70 long) = NULL;
71 ECDSA_SIG* (*ECDSA_do_sign)(const unsigned char *, int, EC_KEY *eckey) = NULL;
72 ECDSA_SIG* (*ECDSA_do_sign_ex)(const unsigned char *, int, const BIGNUM *,
73 const BIGNUM *, EC_KEY *) = NULL;
74 int (*ECDSA_do_verify)(const unsigned char *, int, const ECDSA_SIG *,
75 EC_KEY *) = NULL;
76 int (*ECDSA_sign_setup)(EC_KEY *, BN_CTX *, BIGNUM **, BIGNUM **) = NULL;
77 int (*ECDSA_sign)(int, const unsigned char *, int, unsigned char *,
78 unsigned int *, EC_KEY *) = NULL;
79 int (*ECDSA_sign_ex)(int, const unsigned char *, int dgstlen, unsigned char *,
80 unsigned int *, const BIGNUM *, const BIGNUM *,
81 EC_KEY *) = NULL;
82 int (*ECDSA_verify)(int, const unsigned char *, int, const unsigned char *,
83 int, EC_KEY *) = NULL;
84 int (*ECDSA_size)(const EC_KEY *) = NULL;
85
86 const ECDSA_METHOD* (*ECDSA_OpenSSL)() = NULL;
87 void (*ECDSA_set_default_method)(const ECDSA_METHOD *) = NULL;
88 const ECDSA_METHOD* (*ECDSA_get_default_method)() = NULL;
89 int (*ECDSA_set_method)(EC_KEY *, const ECDSA_METHOD *) = NULL;
90 int (*ECDSA_get_ex_new_index)(long, void *, CRYPTO_EX_new *,
91 CRYPTO_EX_dup *, CRYPTO_EX_free *) = NULL;
92 int (*ECDSA_set_ex_data)(EC_KEY *, int, void *) = NULL;
93 void* (*ECDSA_get_ex_data)(EC_KEY *, int) = NULL;
94 #else
95 static const long Cryptography_HAS_ECDSA = 1;
96 #endif
97 """
98
99 CONDITIONAL_NAMES = {
100 "Cryptography_HAS_ECDSA": [
101 "ECDSA_SIG_new",
102 "ECDSA_SIG_free",
103 "i2d_ECDSA_SIG",
104 "d2i_ECDSA_SIG",
105 "ECDSA_do_sign",
106 "ECDSA_do_sign_ex",
107 "ECDSA_do_verify",
108 "ECDSA_sign_setup",
109 "ECDSA_sign",
110 "ECDSA_sign_ex",
111 "ECDSA_verify",
112 "ECDSA_size",
113 "ECDSA_OpenSSL",
114 "ECDSA_set_default_method",
115 "ECDSA_get_default_method",
116 "ECDSA_set_method",
117 "ECDSA_get_ex_new_index",
118 "ECDSA_set_ex_data",
119 "ECDSA_get_ex_data",
120 ],
121 }
122
[end of src/cryptography/hazmat/bindings/openssl/ecdsa.py]
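The `Cryptography_HAS_ECDSA` flag declared in `TYPES` and keyed in `CONDITIONAL_NAMES` above is what callers are expected to check before touching any ECDSA symbol. A short sketch of that guard, written as a hypothetical caller rather than library code:

```python
# Hypothetical caller: guards ECDSA use on the conditional flag defined above.
from cryptography.hazmat.bindings.openssl.binding import Binding

b = Binding()
if b.lib.Cryptography_HAS_ECDSA:
    sig = b.lib.ECDSA_SIG_new()  # declared in the MACROS block above
    b.lib.ECDSA_SIG_free(sig)
else:
    # Built with OPENSSL_NO_ECDSA: the names listed under CONDITIONAL_NAMES
    # are removed from the compiled lib and must not be referenced.
    print("ECDSA not available in this OpenSSL build")
```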
[start of tasks.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import os
9 import time
10
11 import invoke
12
13 import requests
14
15
16 JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder"
17
18
19 def wait_for_build_completed(session):
20 # Wait 20 seconds before actually checking if the build is complete, to
21 # ensure that it had time to really start.
22 time.sleep(20)
23 while True:
24 response = session.get(
25 "{0}/lastBuild/api/json/".format(JENKINS_URL),
26 headers={
27 "Accept": "application/json",
28 }
29 )
30 response.raise_for_status()
31 if not response.json()["building"]:
32 assert response.json()["result"] == "SUCCESS"
33 break
34 time.sleep(0.1)
35
36
37 def download_artifacts(session):
38 response = session.get(
39 "{0}/lastBuild/api/json/".format(JENKINS_URL),
40 headers={
41 "Accept": "application/json"
42 }
43 )
44 response.raise_for_status()
45 assert not response.json()["building"]
46 assert response.json()["result"] == "SUCCESS"
47
48 paths = []
49
50 for run in response.json()["runs"]:
51 response = session.get(
52 run["url"] + "api/json/",
53 headers={
54 "Accept": "application/json",
55 }
56 )
57 response.raise_for_status()
58 for artifact in response.json()["artifacts"]:
59 response = session.get(
60 "{0}artifact/{1}".format(run["url"], artifact["relativePath"])
61 )
62 out_path = os.path.join(
63 os.path.dirname(__file__),
64 "dist",
65 artifact["fileName"],
66 )
67 with open(out_path, "wb") as f:
68 f.write(response.content)
69 paths.append(out_path)
70 return paths
71
72
73 @invoke.task
74 def release(version):
75 """
76 ``version`` should be a string like '0.4' or '1.0'.
77 """
78 invoke.run("git tag -s {0} -m '{0} release'".format(version))
79 invoke.run("git push --tags")
80
81 invoke.run("python setup.py sdist")
82 invoke.run("cd vectors/ && python setup.py sdist bdist_wheel")
83
84 invoke.run(
85 "twine upload -s dist/cryptography-{0}* "
86 "vectors/dist/cryptography_vectors-{0}*".format(version)
87 )
88
89 session = requests.Session()
90
91 # This tells the CDN to delete the cached response for the URL. We do this
92 # so that the Jenkins builders will see the new sdist immediately when they
93 # go to build the wheels.
94 response = session.request(
95 "PURGE", "https://pypi.python.org/simple/cryptography/"
96 )
97 response.raise_for_status()
98
99 username = getpass.getpass("Input the GitHub/Jenkins username: ")
100 token = getpass.getpass("Input the Jenkins token: ")
101 response = session.post(
102 "{0}/build".format(JENKINS_URL),
103 auth=requests.auth.HTTPBasicAuth(
104 username, token
105 ),
106 params={
107 "cause": "Building wheels for {0}".format(version)
108 }
109 )
110 response.raise_for_status()
111 wait_for_build_completed(session)
112 paths = download_artifacts(session)
113 invoke.run("twine upload {0}".format(" ".join(paths)))
114
[end of tasks.py]
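The two helpers above are normally driven by the `release` task, but they only need a `requests.Session`. A hypothetical standalone driver (assuming a wheel build was already triggered on Jenkins) would look like this:

```python
# Hypothetical driver, not repository code: reuses the helpers from tasks.py.
import requests

from tasks import download_artifacts, wait_for_build_completed

session = requests.Session()
wait_for_build_completed(session)    # polls Jenkins until the last build ends
paths = download_artifacts(session)  # writes each wheel under ./dist
print("Downloaded:", paths)
```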
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pyca/cryptography
base_commit: 6abe5674f23e9173350385862ea2da5c4a43d3c4
problem_statement:
Update openssl bindings to allow server side OCSP stapling.
This is the server-side counterpart to #1863. I could not find public docs, but the OpenSSL sample server shows the relevant calls:
https://github.com/openssl/openssl/blob/master/apps/s_server.c#L2131
```
SSL_CTX_set_tlsext_status_cb(ctx, cert_status_cb);
# callback signature
static int cert_status_cb(SSL *s, void *arg);
# Set response in callback based on OCSP response
rspderlen = i2d_OCSP_RESPONSE(resp, &rspder);
SSL_set_tlsext_status_ocsp_resp(s, rspder, rspderlen);
```
created_at: 2015-04-23T13:01:35Z
patch:
<patch>
diff --git a/src/cryptography/hazmat/bindings/openssl/ssl.py b/src/cryptography/hazmat/bindings/openssl/ssl.py
--- a/src/cryptography/hazmat/bindings/openssl/ssl.py
+++ b/src/cryptography/hazmat/bindings/openssl/ssl.py
@@ -20,6 +20,8 @@
static const long Cryptography_HAS_TLSv1_2;
static const long Cryptography_HAS_SECURE_RENEGOTIATION;
static const long Cryptography_HAS_COMPRESSION;
+static const long Cryptography_HAS_TLSEXT_STATUS_REQ_CB;
+static const long Cryptography_HAS_STATUS_REQ_OCSP_RESP;
/* Internally invented symbol to tell us if SNI is supported */
static const long Cryptography_HAS_TLSEXT_HOSTNAME;
@@ -315,6 +317,12 @@
SSL_CTX *,
int (*)(const SSL *, int *, void *));
+/* These were added in OpenSSL 0.9.8h, but since version testing in OpenSSL
+ is fraught with peril thanks to OS distributions we check some constants
+ to determine if they are supported or not */
+long SSL_set_tlsext_status_ocsp_resp(SSL *, unsigned char *, int);
+long SSL_CTX_set_tlsext_status_cb(SSL_CTX *, int(*)(SSL *, void *));
+
long SSL_session_reused(SSL *);
/* The following were macros in 0.9.8e. Once we drop support for RHEL/CentOS 5
@@ -410,6 +418,20 @@
int (*)(const SSL *, int *, void *)) = NULL;
#endif
+#ifdef SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB
+static const long Cryptography_HAS_TLSEXT_STATUS_REQ_CB = 1;
+#else
+static const long Cryptography_HAS_TLSEXT_STATUS_REQ_CB = 0;
+long (*SSL_CTX_set_tlsext_status_cb)(SSL_CTX *, int(*)(SSL *, void *)) = NULL;
+#endif
+
+#ifdef SSL_CTRL_SET_TLSEXT_STATUS_REQ_OCSP_RESP
+static const long Cryptography_HAS_STATUS_REQ_OCSP_RESP = 1;
+#else
+static const long Cryptography_HAS_STATUS_REQ_OCSP_RESP = 0;
+long (*SSL_set_tlsext_status_ocsp_resp)(SSL *, unsigned char *, int) = NULL;
+#endif
+
#ifdef SSL_MODE_RELEASE_BUFFERS
static const long Cryptography_HAS_RELEASE_BUFFERS = 1;
#else
@@ -588,6 +610,14 @@
"SSL_CTX_set_tlsext_servername_callback",
],
+ "Cryptography_HAS_TLSEXT_STATUS_REQ_CB": [
+ "SSL_CTX_set_tlsext_status_cb",
+ ],
+
+ "Cryptography_HAS_STATUS_REQ_OCSP_RESP": [
+ "SSL_set_tlsext_status_ocsp_resp",
+ ],
+
"Cryptography_HAS_RELEASE_BUFFERS": [
"SSL_MODE_RELEASE_BUFFERS",
],
</patch>
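A hedged sketch of how the two functions added by this patch could be reached from Python through the `Binding` class. The callback body, the `ctx` handle, and the OCSP response buffer are placeholders, not actual library API.

```python
# Illustrative only: wires a server-side status callback through the bindings
# added above, guarded by the Cryptography_HAS_TLSEXT_STATUS_REQ_CB flag.
from cryptography.hazmat.bindings.openssl.binding import Binding

binding = Binding()
ffi, lib = binding.ffi, binding.lib

if lib.Cryptography_HAS_TLSEXT_STATUS_REQ_CB:
    @ffi.callback("int(SSL *, void *)")
    def cert_status_cb(ssl, arg):
        # A real server would DER-encode its OCSP response here and call
        # lib.SSL_set_tlsext_status_ocsp_resp(ssl, resp_buf, resp_len).
        return 0  # SSL_TLSEXT_ERR_OK

    # `ctx` would be an SSL_CTX* created elsewhere, e.g. via lib.SSL_CTX_new:
    # lib.SSL_CTX_set_tlsext_status_cb(ctx, cert_status_cb)
```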
FAIL_TO_PASS: []
PASS_TO_PASS: []

instance_id: mesonbuild__meson-9390
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support "cpp_std=c++20" for Windows Visual Studio backend
**Describe the bug**
Since [Visual Studio 2019 v16.11](https://docs.microsoft.com/en-us/cpp/build/reference/std-specify-language-standard-version?view=msvc-160#c-standards-support), MSVC supports the C++20 standard. However, when using "cpp_std=c++20", I get the following error: Value "c++20" (of type "string") for combo option "C++ language standard to use" is not one of the choices. Possible choices are (as string): "none", "c++11", "vc++11", "c++14", "c++latest", "vc++latest", "vc++14", "c++17", "vc++17".
**To Reproduce**
meson.build:
```
project(
'my_project',
'cpp',
default_options : ['cpp_std=c++latest']
)
```
**Expected behavior**
`meson builddir` should complete without error
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? No
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.): Windows 10
* what Python version are you using e.g. 3.8.0: 3.9.7
* what `meson --version`: 0.59.1
* what `ninja --version` if it's a Ninja build: 1.10.2
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="https://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
11 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
12 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
14
15 #### Dependencies
16
17 - [Python](https://python.org) (version 3.6 or newer)
18 - [Ninja](https://ninja-build.org) (version 1.8.2 or newer)
19
20 #### Installing from source
21
22 Meson is available on [PyPi](https://pypi.python.org/pypi/meson), so
23 it can be installed with `pip3 install meson`. The exact command to
24 type to install with `pip` can vary between systems, be sure to use
25 the Python 3 version of `pip`.
26
27 If you wish you can install it locally with the standard Python command:
28
29 ```console
30 python3 -m pip install meson
31 ```
32
33 For builds using Ninja, Ninja can be downloaded directly from Ninja
34 [GitHub release page](https://github.com/ninja-build/ninja/releases)
35 or via [PyPi](https://pypi.python.org/pypi/ninja)
36
37 ```console
38 python3 -m pip install ninja
39 ```
40
41 More on Installing Meson build can be found at the
42 [getting meson page](https://mesonbuild.com/Getting-meson.html).
43
44 #### Creating a standalone script
45
46 Meson can be run as a [Python zip
47 app](https://docs.python.org/3/library/zipapp.html). To generate the
48 executable run the following command:
49
50 ./packaging/create_zipapp.py --outfile meson.pyz --interpreter '/usr/bin/env python3' <source checkout>
51
52 #### Running
53
54 Meson requires that you have a source directory and a build directory
55 and that these two are different. In your source root must exist a
56 file called `meson.build`. To generate the build system run this
57 command:
58
59 `meson setup <source directory> <build directory>`
60
61 Depending on how you obtained Meson the command might also be called
62 `meson.py` instead of plain `meson`. In the rest of this document we
63 are going to use the latter form.
64
65 You can omit either of the two directories, and Meson will substitute
66 the current directory and autodetect what you mean. This allows you to
67 do things like this:
68
69 ```console
70 cd <source root>
71 meson setup builddir
72 ```
73
74 To compile, cd into your build directory and type `ninja`. To run unit
75 tests, type `ninja test`.
76
77 More on running Meson build system commands can be found at the
78 [running meson page](https://mesonbuild.com/Running-Meson.html)
79 or by typing `meson --help`.
80
81 #### Contributing
82
83 We love code contributions. See the [contribution
84 page](https://mesonbuild.com/Contributing.html) on the website for
85 details.
86
87
88 #### IRC
89
90 The channel to use is `#mesonbuild` either via Matrix ([web
91 interface][matrix_web]) or [OFTC IRC][oftc_irc].
92
93 [matrix_web]: https://app.element.io/#/room/#mesonbuild:matrix.org
94 [oftc_irc]: https://www.oftc.net/
95
96 #### Further info
97
98 More information about the Meson build system can be found at the
99 [project's home page](https://mesonbuild.com).
100
101 Meson is a registered trademark of ***Jussi Pakkanen***.
102
[end of README.md]
[start of mesonbuild/compilers/cpp.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import functools
17 import os.path
18 import typing as T
19
20 from .. import coredata
21 from .. import mlog
22 from ..mesonlib import MesonException, MachineChoice, version_compare, OptionKey
23
24 from .compilers import (
25 gnu_winlibs,
26 msvc_winlibs,
27 Compiler,
28 CompileCheckMode,
29 )
30 from .c_function_attributes import CXX_FUNC_ATTRIBUTES, C_FUNC_ATTRIBUTES
31 from .mixins.clike import CLikeCompiler
32 from .mixins.ccrx import CcrxCompiler
33 from .mixins.c2000 import C2000Compiler
34 from .mixins.arm import ArmCompiler, ArmclangCompiler
35 from .mixins.visualstudio import MSVCCompiler, ClangClCompiler
36 from .mixins.gnu import GnuCompiler
37 from .mixins.intel import IntelGnuLikeCompiler, IntelVisualStudioLikeCompiler
38 from .mixins.clang import ClangCompiler
39 from .mixins.elbrus import ElbrusCompiler
40 from .mixins.pgi import PGICompiler
41 from .mixins.emscripten import EmscriptenMixin
42
43 if T.TYPE_CHECKING:
44 from ..coredata import KeyedOptionDictType
45 from ..dependencies import Dependency
46 from ..envconfig import MachineInfo
47 from ..environment import Environment
48 from ..linkers import DynamicLinker
49 from ..programs import ExternalProgram
50 CompilerMixinBase = CLikeCompiler
51 else:
52 CompilerMixinBase = object
53
54
55 def non_msvc_eh_options(eh: str, args: T.List[str]) -> None:
56 if eh == 'none':
57 args.append('-fno-exceptions')
58 elif eh == 's' or eh == 'c':
59 mlog.warning('non-MSVC compilers do not support ' + eh + ' exception handling.' +
60 'You may want to set eh to \'default\'.')
61
62 class CPPCompiler(CLikeCompiler, Compiler):
63
64 @classmethod
65 def attribute_check_func(cls, name: str) -> str:
66 try:
67 return CXX_FUNC_ATTRIBUTES.get(name, C_FUNC_ATTRIBUTES[name])
68 except KeyError:
69 raise MesonException(f'Unknown function attribute "{name}"')
70
71 language = 'cpp'
72
73 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
74 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
75 linker: T.Optional['DynamicLinker'] = None,
76 full_version: T.Optional[str] = None):
77 # If a child ObjCPP class has already set it, don't set it ourselves
78 Compiler.__init__(self, exelist, version, for_machine, info,
79 is_cross=is_cross, linker=linker,
80 full_version=full_version)
81 CLikeCompiler.__init__(self, exe_wrapper)
82
83 @staticmethod
84 def get_display_language() -> str:
85 return 'C++'
86
87 def get_no_stdinc_args(self) -> T.List[str]:
88 return ['-nostdinc++']
89
90 def sanity_check(self, work_dir: str, environment: 'Environment') -> None:
91 code = 'class breakCCompiler;int main(void) { return 0; }\n'
92 return self._sanity_check_impl(work_dir, environment, 'sanitycheckcpp.cc', code)
93
94 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
95 # -fpermissive allows non-conforming code to compile which is necessary
96 # for many C++ checks. Particularly, the has_header_symbol check is
97 # too strict without this and always fails.
98 return super().get_compiler_check_args(mode) + ['-fpermissive']
99
100 def has_header_symbol(self, hname: str, symbol: str, prefix: str,
101 env: 'Environment', *,
102 extra_args: T.Union[None, T.List[str], T.Callable[[CompileCheckMode], T.List[str]]] = None,
103 dependencies: T.Optional[T.List['Dependency']] = None) -> T.Tuple[bool, bool]:
104 # Check if it's a C-like symbol
105 found, cached = super().has_header_symbol(hname, symbol, prefix, env,
106 extra_args=extra_args,
107 dependencies=dependencies)
108 if found:
109 return True, cached
110 # Check if it's a class or a template
111 if extra_args is None:
112 extra_args = []
113 t = f'''{prefix}
114 #include <{hname}>
115 using {symbol};
116 int main(void) {{ return 0; }}'''
117 return self.compiles(t, env, extra_args=extra_args,
118 dependencies=dependencies)
119
120 def _test_cpp_std_arg(self, cpp_std_value: str) -> bool:
121 # Test whether the compiler understands a -std=XY argument
122 assert cpp_std_value.startswith('-std=')
123
124 # This test does not use has_multi_arguments() for two reasons:
125 # 1. has_multi_arguments() requires an env argument, which the compiler
126 # object does not have at this point.
127 # 2. even if it did have an env object, that might contain another more
128 # recent -std= argument, which might lead to a cascaded failure.
129 CPP_TEST = 'int i = static_cast<int>(0);'
130 with self.compile(CPP_TEST, extra_args=[cpp_std_value], mode='compile') as p:
131 if p.returncode == 0:
132 mlog.debug(f'Compiler accepts {cpp_std_value}:', 'YES')
133 return True
134 else:
135 mlog.debug(f'Compiler accepts {cpp_std_value}:', 'NO')
136 return False
137
138 @functools.lru_cache()
139 def _find_best_cpp_std(self, cpp_std: str) -> str:
140 # The initial version mapping approach to make falling back
141 # from '-std=c++14' to '-std=c++1y' was too brittle. For instance,
142 # Apple's Clang uses a different versioning scheme to upstream LLVM,
143 # making the whole detection logic awfully brittle. Instead, let's
144 # just see if feeding GCC or Clang our '-std=' setting works, and
145 # if not, try the fallback argument.
146 CPP_FALLBACKS = {
147 'c++11': 'c++0x',
148 'gnu++11': 'gnu++0x',
149 'c++14': 'c++1y',
150 'gnu++14': 'gnu++1y',
151 'c++17': 'c++1z',
152 'gnu++17': 'gnu++1z',
153 'c++20': 'c++2a',
154 'gnu++20': 'gnu++2a',
155 }
156
157 # Currently, remapping is only supported for Clang, Elbrus and GCC
158 assert self.id in frozenset(['clang', 'lcc', 'gcc', 'emscripten'])
159
160 if cpp_std not in CPP_FALLBACKS:
161 # 'c++03' and 'c++98' don't have fallback types
162 return '-std=' + cpp_std
163
164 for i in (cpp_std, CPP_FALLBACKS[cpp_std]):
165 cpp_std_value = '-std=' + i
166 if self._test_cpp_std_arg(cpp_std_value):
167 return cpp_std_value
168
169 raise MesonException(f'C++ Compiler does not support -std={cpp_std}')
170
171 def get_options(self) -> 'KeyedOptionDictType':
172 opts = super().get_options()
173 key = OptionKey('std', machine=self.for_machine, lang=self.language)
174 opts.update({
175 key: coredata.UserComboOption(
176 'C++ language standard to use',
177 ['none'],
178 'none',
179 ),
180 })
181 return opts
182
183
184 class ClangCPPCompiler(ClangCompiler, CPPCompiler):
185 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
186 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
187 linker: T.Optional['DynamicLinker'] = None,
188 defines: T.Optional[T.Dict[str, str]] = None,
189 full_version: T.Optional[str] = None):
190 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
191 info, exe_wrapper, linker=linker, full_version=full_version)
192 ClangCompiler.__init__(self, defines)
193 default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']
194 self.warn_args = {'0': [],
195 '1': default_warn_args,
196 '2': default_warn_args + ['-Wextra'],
197 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
198
199 def get_options(self) -> 'KeyedOptionDictType':
200 opts = CPPCompiler.get_options(self)
201 key = OptionKey('key', machine=self.for_machine, lang=self.language)
202 opts.update({
203 key.evolve('eh'): coredata.UserComboOption(
204 'C++ exception handling type.',
205 ['none', 'default', 'a', 's', 'sc'],
206 'default',
207 ),
208 key.evolve('rtti'): coredata.UserBooleanOption('Enable RTTI', True),
209 })
210 opts[key.evolve('std')].choices = [
211 'none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'c++1z',
212 'c++2a', 'c++20', 'gnu++11', 'gnu++14', 'gnu++17', 'gnu++1z',
213 'gnu++2a', 'gnu++20',
214 ]
215 if self.info.is_windows() or self.info.is_cygwin():
216 opts.update({
217 key.evolve('winlibs'): coredata.UserArrayOption(
218 'Standard Win libraries to link against',
219 gnu_winlibs,
220 ),
221 })
222 return opts
223
224 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
225 args = []
226 key = OptionKey('std', machine=self.for_machine, lang=self.language)
227 std = options[key]
228 if std.value != 'none':
229 args.append(self._find_best_cpp_std(std.value))
230
231 non_msvc_eh_options(options[key.evolve('eh')].value, args)
232
233 if not options[key.evolve('rtti')].value:
234 args.append('-fno-rtti')
235
236 return args
237
238 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
239 if self.info.is_windows() or self.info.is_cygwin():
240 # without a typedict mypy can't understand this.
241 key = OptionKey('winlibs', machine=self.for_machine, lang=self.language)
242 libs = options[key].value.copy()
243 assert isinstance(libs, list)
244 for l in libs:
245 assert isinstance(l, str)
246 return libs
247 return []
248
249 def language_stdlib_only_link_flags(self, env: 'Environment') -> T.List[str]:
250 # We need to apply the search prefix here, as these link arguments may
251 # be passed to a different compiler with a different set of default
252 # search paths, such as when using Clang for C/C++ and gfortran for
253 # fortran,
254 search_dir = self._get_search_dirs(env)
255 search_dirs: T.List[str] = []
256 if search_dir is not None:
257 for d in search_dir.split()[-1][len('libraries: ='):].split(':'):
258 search_dirs.append(f'-L{d}')
259 return search_dirs + ['-lstdc++']
260
261
262 class AppleClangCPPCompiler(ClangCPPCompiler):
263 def language_stdlib_only_link_flags(self, env: 'Environment') -> T.List[str]:
264 # We need to apply the search prefix here, as these link arguments may
265 # be passed to a different compiler with a different set of default
266 # search paths, such as when using Clang for C/C++ and gfortran for
267 # fortran,
268 search_dir = self._get_search_dirs(env)
269 search_dirs: T.List[str] = []
270 if search_dir is not None:
271 for d in search_dir.split()[-1][len('libraries: ='):].split(':'):
272 search_dirs.append(f'-L{d}')
273 return search_dirs + ['-lc++']
274
275
276 class EmscriptenCPPCompiler(EmscriptenMixin, ClangCPPCompiler):
277 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
278 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
279 linker: T.Optional['DynamicLinker'] = None,
280 defines: T.Optional[T.Dict[str, str]] = None,
281 full_version: T.Optional[str] = None):
282 if not is_cross:
283 raise MesonException('Emscripten compiler can only be used for cross compilation.')
284 ClangCPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
285 info, exe_wrapper=exe_wrapper, linker=linker,
286 defines=defines, full_version=full_version)
287 self.id = 'emscripten'
288
289 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
290 args = []
291 key = OptionKey('std', machine=self.for_machine, lang=self.language)
292 std = options[key]
293 if std.value != 'none':
294 args.append(self._find_best_cpp_std(std.value))
295 return args
296
297
298 class ArmclangCPPCompiler(ArmclangCompiler, CPPCompiler):
299 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
300 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
301 linker: T.Optional['DynamicLinker'] = None,
302 full_version: T.Optional[str] = None):
303 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
304 info, exe_wrapper, linker=linker, full_version=full_version)
305 ArmclangCompiler.__init__(self)
306 default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']
307 self.warn_args = {'0': [],
308 '1': default_warn_args,
309 '2': default_warn_args + ['-Wextra'],
310 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
311
312 def get_options(self) -> 'KeyedOptionDictType':
313 opts = CPPCompiler.get_options(self)
314 key = OptionKey('std', machine=self.for_machine, lang=self.language)
315 opts.update({
316 key.evolve('eh'): coredata.UserComboOption(
317 'C++ exception handling type.',
318 ['none', 'default', 'a', 's', 'sc'],
319 'default',
320 ),
321 })
322 opts[key].choices = [
323 'none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'gnu++98',
324 'gnu++03', 'gnu++11', 'gnu++14', 'gnu++17',
325 ]
326 return opts
327
328 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
329 args = []
330 key = OptionKey('std', machine=self.for_machine, lang=self.language)
331 std = options[key]
332 if std.value != 'none':
333 args.append('-std=' + std.value)
334
335 non_msvc_eh_options(options[key.evolve('eh')].value, args)
336
337 return args
338
339 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
340 return []
341
342
343 class GnuCPPCompiler(GnuCompiler, CPPCompiler):
344 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
345 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
346 linker: T.Optional['DynamicLinker'] = None,
347 defines: T.Optional[T.Dict[str, str]] = None,
348 full_version: T.Optional[str] = None):
349 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
350 info, exe_wrapper, linker=linker, full_version=full_version)
351 GnuCompiler.__init__(self, defines)
352 default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']
353 self.warn_args = {'0': [],
354 '1': default_warn_args,
355 '2': default_warn_args + ['-Wextra'],
356 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
357
358 def get_options(self) -> 'KeyedOptionDictType':
359 key = OptionKey('std', machine=self.for_machine, lang=self.language)
360 opts = CPPCompiler.get_options(self)
361 opts.update({
362 key.evolve('eh'): coredata.UserComboOption(
363 'C++ exception handling type.',
364 ['none', 'default', 'a', 's', 'sc'],
365 'default',
366 ),
367 key.evolve('rtti'): coredata.UserBooleanOption('Enable RTTI', True),
368 key.evolve('debugstl'): coredata.UserBooleanOption(
369 'STL debug mode',
370 False,
371 )
372 })
373 opts[key].choices = [
374 'none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'c++1z',
375 'c++2a', 'c++20', 'gnu++03', 'gnu++11', 'gnu++14', 'gnu++17',
376 'gnu++1z', 'gnu++2a', 'gnu++20',
377 ]
378 if self.info.is_windows() or self.info.is_cygwin():
379 opts.update({
380 key.evolve('winlibs'): coredata.UserArrayOption(
381 'Standard Win libraries to link against',
382 gnu_winlibs,
383 ),
384 })
385 return opts
386
387 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
388 args = []
389 key = OptionKey('std', machine=self.for_machine, lang=self.language)
390 std = options[key]
391 if std.value != 'none':
392 args.append(self._find_best_cpp_std(std.value))
393
394 non_msvc_eh_options(options[key.evolve('eh')].value, args)
395
396 if not options[key.evolve('rtti')].value:
397 args.append('-fno-rtti')
398
399 if options[key.evolve('debugstl')].value:
400 args.append('-D_GLIBCXX_DEBUG=1')
401 return args
402
403 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
404 if self.info.is_windows() or self.info.is_cygwin():
405 # without a typedict mypy can't understand this.
406 key = OptionKey('winlibs', machine=self.for_machine, lang=self.language)
407 libs = options[key].value.copy()
408 assert isinstance(libs, list)
409 for l in libs:
410 assert isinstance(l, str)
411 return libs
412 return []
413
414 def get_pch_use_args(self, pch_dir: str, header: str) -> T.List[str]:
415 return ['-fpch-preprocess', '-include', os.path.basename(header)]
416
417 def language_stdlib_only_link_flags(self, env: 'Environment') -> T.List[str]:
418 # We need to apply the search prefix here, as these link arguments may
419 # be passed to a different compiler with a different set of default
420 # search paths, such as when using Clang for C/C++ and gfortran for
421 # fortran,
422 search_dir = self._get_search_dirs(env)
423 search_dirs: T.List[str] = []
424 if search_dir is not None:
425 for d in search_dir.split()[-1][len('libraries: ='):].split(':'):
426 search_dirs.append(f'-L{d}')
427 return ['-lstdc++']
428
429
430 class PGICPPCompiler(PGICompiler, CPPCompiler):
431 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
432 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
433 linker: T.Optional['DynamicLinker'] = None,
434 full_version: T.Optional[str] = None):
435 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
436 info, exe_wrapper, linker=linker, full_version=full_version)
437 PGICompiler.__init__(self)
438
439
440 class NvidiaHPC_CPPCompiler(PGICompiler, CPPCompiler):
441 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
442 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
443 linker: T.Optional['DynamicLinker'] = None,
444 full_version: T.Optional[str] = None):
445 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
446 info, exe_wrapper, linker=linker, full_version=full_version)
447 PGICompiler.__init__(self)
448
449 self.id = 'nvidia_hpc'
450
451
452 class ElbrusCPPCompiler(ElbrusCompiler, CPPCompiler):
453 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
454 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
455 linker: T.Optional['DynamicLinker'] = None,
456 defines: T.Optional[T.Dict[str, str]] = None,
457 full_version: T.Optional[str] = None):
458 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
459 info, exe_wrapper, linker=linker, full_version=full_version)
460 ElbrusCompiler.__init__(self)
461
462 def get_options(self) -> 'KeyedOptionDictType':
463 opts = CPPCompiler.get_options(self)
464
465 cpp_stds = ['none', 'c++98', 'gnu++98']
466 if version_compare(self.version, '>=1.20.00'):
467 cpp_stds += ['c++03', 'c++0x', 'c++11', 'gnu++03', 'gnu++0x', 'gnu++11']
468 if version_compare(self.version, '>=1.21.00') and version_compare(self.version, '<1.22.00'):
469 cpp_stds += ['c++14', 'gnu++14', 'c++1y', 'gnu++1y']
470 if version_compare(self.version, '>=1.22.00'):
471 cpp_stds += ['c++14', 'gnu++14']
472 if version_compare(self.version, '>=1.23.00'):
473 cpp_stds += ['c++1y', 'gnu++1y']
474 if version_compare(self.version, '>=1.24.00'):
475 cpp_stds += ['c++1z', 'c++17', 'gnu++1z', 'gnu++17']
476 if version_compare(self.version, '>=1.25.00'):
477 cpp_stds += ['c++2a', 'gnu++2a']
478 if version_compare(self.version, '>=1.26.00'):
479 cpp_stds += ['c++20', 'gnu++20']
480
481 key = OptionKey('std', machine=self.for_machine, lang=self.language)
482 opts.update({
483 key.evolve('eh'): coredata.UserComboOption(
484 'C++ exception handling type.',
485 ['none', 'default', 'a', 's', 'sc'],
486 'default',
487 ),
488 key.evolve('debugstl'): coredata.UserBooleanOption(
489 'STL debug mode',
490 False,
491 ),
492 })
493 opts[key].choices = cpp_stds
494 return opts
495
496 # Elbrus C++ compiler does not have lchmod, but there is only linker warning, not compiler error.
497 # So we should explicitly fail at this case.
498 def has_function(self, funcname: str, prefix: str, env: 'Environment', *,
499 extra_args: T.Optional[T.List[str]] = None,
500 dependencies: T.Optional[T.List['Dependency']] = None) -> T.Tuple[bool, bool]:
501 if funcname == 'lchmod':
502 return False, False
503 else:
504 return super().has_function(funcname, prefix, env,
505 extra_args=extra_args,
506 dependencies=dependencies)
507
508 # Elbrus C++ compiler does not support RTTI, so don't check for it.
509 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
510 args = []
511 key = OptionKey('std', machine=self.for_machine, lang=self.language)
512 std = options[key]
513 if std.value != 'none':
514 args.append(self._find_best_cpp_std(std.value))
515
516 non_msvc_eh_options(options[key.evolve('eh')].value, args)
517
518 if options[key.evolve('debugstl')].value:
519 args.append('-D_GLIBCXX_DEBUG=1')
520 return args
521
522
523 class IntelCPPCompiler(IntelGnuLikeCompiler, CPPCompiler):
524 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
525 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
526 linker: T.Optional['DynamicLinker'] = None,
527 full_version: T.Optional[str] = None):
528 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
529 info, exe_wrapper, linker=linker, full_version=full_version)
530 IntelGnuLikeCompiler.__init__(self)
531 self.lang_header = 'c++-header'
532 default_warn_args = ['-Wall', '-w3', '-diag-disable:remark',
533 '-Wpch-messages', '-Wnon-virtual-dtor']
534 self.warn_args = {'0': [],
535 '1': default_warn_args,
536 '2': default_warn_args + ['-Wextra'],
537 '3': default_warn_args + ['-Wextra']}
538
539 def get_options(self) -> 'KeyedOptionDictType':
540 opts = CPPCompiler.get_options(self)
541 # Every Unix compiler under the sun seems to accept -std=c++03,
542 # with the exception of ICC. Instead of preventing the user from
543 # globally requesting C++03, we transparently remap it to C++98
544 c_stds = ['c++98', 'c++03']
545 g_stds = ['gnu++98', 'gnu++03']
546 if version_compare(self.version, '>=15.0.0'):
547 c_stds += ['c++11', 'c++14']
548 g_stds += ['gnu++11']
549 if version_compare(self.version, '>=16.0.0'):
550 c_stds += ['c++17']
551 if version_compare(self.version, '>=17.0.0'):
552 g_stds += ['gnu++14']
553 if version_compare(self.version, '>=19.1.0'):
554 c_stds += ['c++2a']
555 g_stds += ['gnu++2a']
556
557 key = OptionKey('std', machine=self.for_machine, lang=self.language)
558 opts.update({
559 key.evolve('eh'): coredata.UserComboOption(
560 'C++ exception handling type.',
561 ['none', 'default', 'a', 's', 'sc'],
562 'default',
563 ),
564 key.evolve('rtti'): coredata.UserBooleanOption('Enable RTTI', True),
565 key.evolve('debugstl'): coredata.UserBooleanOption('STL debug mode', False),
566 })
567 opts[key].choices = ['none'] + c_stds + g_stds
568 return opts
569
570 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
571 args = []
572 key = OptionKey('std', machine=self.for_machine, lang=self.language)
573 std = options[key]
574 if std.value != 'none':
575 remap_cpp03 = {
576 'c++03': 'c++98',
577 'gnu++03': 'gnu++98'
578 }
579 args.append('-std=' + remap_cpp03.get(std.value, std.value))
580 if options[key.evolve('eh')].value == 'none':
581 args.append('-fno-exceptions')
582 if not options[key.evolve('rtti')].value:
583 args.append('-fno-rtti')
584 if options[key.evolve('debugstl')].value:
585 args.append('-D_GLIBCXX_DEBUG=1')
586 return args
587
588 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
589 return []
590
591
592 class VisualStudioLikeCPPCompilerMixin(CompilerMixinBase):
593
594 """Mixin for C++ specific method overrides in MSVC-like compilers."""
595
596 VC_VERSION_MAP = {
597 'none': (True, None),
598 'vc++11': (True, 11),
599 'vc++14': (True, 14),
600 'vc++17': (True, 17),
601 'vc++latest': (True, "latest"),
602 'c++11': (False, 11),
603 'c++14': (False, 14),
604 'c++17': (False, 17),
605 'c++latest': (False, "latest"),
606 }
607
608 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
609 # need a typeddict for this
610 key = OptionKey('winlibs', machine=self.for_machine, lang=self.language)
611 return T.cast(T.List[str], options[key].value[:])
612
613 def _get_options_impl(self, opts: 'KeyedOptionDictType', cpp_stds: T.List[str]) -> 'KeyedOptionDictType':
614 key = OptionKey('std', machine=self.for_machine, lang=self.language)
615 opts.update({
616 key.evolve('eh'): coredata.UserComboOption(
617 'C++ exception handling type.',
618 ['none', 'default', 'a', 's', 'sc'],
619 'default',
620 ),
621 key.evolve('rtti'): coredata.UserBooleanOption('Enable RTTI', True),
622 key.evolve('winlibs'): coredata.UserArrayOption(
623 'Windows libs to link against.',
624 msvc_winlibs,
625 ),
626 })
627 opts[key.evolve('std')].choices = cpp_stds
628 return opts
629
630 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
631 args = []
632 key = OptionKey('std', machine=self.for_machine, lang=self.language)
633
634 eh = options[key.evolve('eh')]
635 if eh.value == 'default':
636 args.append('/EHsc')
637 elif eh.value == 'none':
638 args.append('/EHs-c-')
639 else:
640 args.append('/EH' + eh.value)
641
642 if not options[key.evolve('rtti')].value:
643 args.append('/GR-')
644
645 permissive, ver = self.VC_VERSION_MAP[options[key].value]
646
647 if ver is not None:
648 args.append(f'/std:c++{ver}')
649
650 if not permissive:
651 args.append('/permissive-')
652
653 return args
654
655 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
656 # XXX: this is a hack because so much GnuLike stuff is in the base CPPCompiler class.
657 return Compiler.get_compiler_check_args(self, mode)
658
659
660 class CPP11AsCPP14Mixin(CompilerMixinBase):
661
662 """Mixin class for VisualStudio and ClangCl to replace C++11 std with C++14.
663
664 This is a limitation of Clang and MSVC that ICL doesn't share.
665 """
666
667 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
668 # Note: there is no explicit flag for supporting C++11; we attempt to do the best we can
669 # which means setting the C++ standard version to C++14, in compilers that support it
670 # (i.e., after VS2015U3)
671 # if one is using anything before that point, one cannot set the standard.
672 key = OptionKey('std', machine=self.for_machine, lang=self.language)
673 if options[key].value in {'vc++11', 'c++11'}:
674 mlog.warning(self.id, 'does not support C++11;',
675 'attempting best effort; setting the standard to C++14', once=True)
676 # Don't mutate anything we're going to change, we need to use
677 # deepcopy since we're messing with members, and we can't simply
678 # copy the members because the option proxy doesn't support it.
679 options = copy.deepcopy(options)
680 if options[key].value == 'vc++11':
681 options[key].value = 'vc++14'
682 else:
683 options[key].value = 'c++14'
684 return super().get_option_compile_args(options)
685
686
687 class VisualStudioCPPCompiler(CPP11AsCPP14Mixin, VisualStudioLikeCPPCompilerMixin, MSVCCompiler, CPPCompiler):
688 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
689 is_cross: bool, info: 'MachineInfo', target: str,
690 exe_wrapper: T.Optional['ExternalProgram'] = None,
691 linker: T.Optional['DynamicLinker'] = None,
692 full_version: T.Optional[str] = None):
693 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
694 info, exe_wrapper, linker=linker, full_version=full_version)
695 MSVCCompiler.__init__(self, target)
696 self.id = 'msvc'
697
698 def get_options(self) -> 'KeyedOptionDictType':
699 cpp_stds = ['none', 'c++11', 'vc++11']
700 # Visual Studio 2015 and later
701 if version_compare(self.version, '>=19'):
702 cpp_stds.extend(['c++14', 'c++latest', 'vc++latest'])
703 # Visual Studio 2017 and later
704 if version_compare(self.version, '>=19.11'):
705 cpp_stds.extend(['vc++14', 'c++17', 'vc++17'])
706 return self._get_options_impl(super().get_options(), cpp_stds)
707
708 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
709 key = OptionKey('std', machine=self.for_machine, lang=self.language)
710 if options[key].value != 'none' and version_compare(self.version, '<19.00.24210'):
711 mlog.warning('This version of MSVC does not support cpp_std arguments')
712 options = copy.copy(options)
713 options[key].value = 'none'
714
715 args = super().get_option_compile_args(options)
716
717 if version_compare(self.version, '<19.11'):
718 try:
719 i = args.index('/permissive-')
720 except ValueError:
721 return args
722 del args[i]
723 return args
724
725 class ClangClCPPCompiler(CPP11AsCPP14Mixin, VisualStudioLikeCPPCompilerMixin, ClangClCompiler, CPPCompiler):
726 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
727 is_cross: bool, info: 'MachineInfo', target: str,
728 exe_wrapper: T.Optional['ExternalProgram'] = None,
729 linker: T.Optional['DynamicLinker'] = None,
730 full_version: T.Optional[str] = None):
731 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
732 info, exe_wrapper, linker=linker, full_version=full_version)
733 ClangClCompiler.__init__(self, target)
734 self.id = 'clang-cl'
735
736 def get_options(self) -> 'KeyedOptionDictType':
737 cpp_stds = ['none', 'c++11', 'vc++11', 'c++14', 'vc++14', 'c++17', 'vc++17', 'c++latest']
738 return self._get_options_impl(super().get_options(), cpp_stds)
739
740
741 class IntelClCPPCompiler(VisualStudioLikeCPPCompilerMixin, IntelVisualStudioLikeCompiler, CPPCompiler):
742
743 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
744 is_cross: bool, info: 'MachineInfo', target: str,
745 exe_wrapper: T.Optional['ExternalProgram'] = None,
746 linker: T.Optional['DynamicLinker'] = None,
747 full_version: T.Optional[str] = None):
748 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
749 info, exe_wrapper, linker=linker, full_version=full_version)
750 IntelVisualStudioLikeCompiler.__init__(self, target)
751
752 def get_options(self) -> 'KeyedOptionDictType':
753 # This has only been tested with version 19.0,
754 cpp_stds = ['none', 'c++11', 'vc++11', 'c++14', 'vc++14', 'c++17', 'vc++17', 'c++latest']
755 return self._get_options_impl(super().get_options(), cpp_stds)
756
757 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
758 # XXX: this is a hack because so much GnuLike stuff is in the base CPPCompiler class.
759 return IntelVisualStudioLikeCompiler.get_compiler_check_args(self, mode)
760
761
762 class ArmCPPCompiler(ArmCompiler, CPPCompiler):
763 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
764 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
765 linker: T.Optional['DynamicLinker'] = None,
766 full_version: T.Optional[str] = None):
767 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
768 info, exe_wrapper, linker=linker, full_version=full_version)
769 ArmCompiler.__init__(self)
770
771 def get_options(self) -> 'KeyedOptionDictType':
772 opts = CPPCompiler.get_options(self)
773 key = OptionKey('std', machine=self.for_machine, lang=self.language)
774 opts[key].choices = ['none', 'c++03', 'c++11']
775 return opts
776
777 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
778 args = []
779 key = OptionKey('std', machine=self.for_machine, lang=self.language)
780 std = options[key]
781 if std.value == 'c++11':
782 args.append('--cpp11')
783 elif std.value == 'c++03':
784 args.append('--cpp')
785 return args
786
787 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
788 return []
789
790 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
791 return []
792
793
794 class CcrxCPPCompiler(CcrxCompiler, CPPCompiler):
795 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
796 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
797 linker: T.Optional['DynamicLinker'] = None,
798 full_version: T.Optional[str] = None):
799 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
800 info, exe_wrapper, linker=linker, full_version=full_version)
801 CcrxCompiler.__init__(self)
802
803 # Override CCompiler.get_always_args
804 def get_always_args(self) -> T.List[str]:
805 return ['-nologo', '-lang=cpp']
806
807 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
808 return []
809
810 def get_compile_only_args(self) -> T.List[str]:
811 return []
812
813 def get_output_args(self, target: str) -> T.List[str]:
814 return ['-output=obj=%s' % target]
815
816 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
817 return []
818
819 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
820 return []
821
822 class C2000CPPCompiler(C2000Compiler, CPPCompiler):
823 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice, is_cross: bool,
824 info: 'MachineInfo', exe_wrapper: T.Optional['ExternalProgram'] = None,
825 linker: T.Optional['DynamicLinker'] = None,
826 full_version: T.Optional[str] = None):
827 CPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
828 info, exe_wrapper, linker=linker, full_version=full_version)
829 C2000Compiler.__init__(self)
830
831 def get_options(self) -> 'KeyedOptionDictType':
832 opts = CPPCompiler.get_options(self)
833 key = OptionKey('std', machine=self.for_machine, lang=self.language)
834 opts[key].choices = ['none', 'c++03']
835 return opts
836
837 def get_always_args(self) -> T.List[str]:
838 return ['-nologo', '-lang=cpp']
839
840 def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
841 return []
842
843 def get_compile_only_args(self) -> T.List[str]:
844 return []
845
846 def get_output_args(self, target: str) -> T.List[str]:
847 return ['-output=obj=%s' % target]
848
849 def get_option_link_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
850 return []
851
852 def get_compiler_check_args(self, mode: CompileCheckMode) -> T.List[str]:
853 return []
854
[end of mesonbuild/compilers/cpp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: mesonbuild/meson
base_commit: cf587f39eac997219a47acbc8f5a6f38c5d25b23
problem_statement:
Support "cpp_std=c++20" for Windows Visual Studio backend
**Describe the bug**
Since [Visual Studio 2019 v16.11](https://docs.microsoft.com/en-us/cpp/build/reference/std-specify-language-standard-version?view=msvc-160#c-standards-support), MSVC supports the C++20 standard. However, when using "cpp_std=c++20", I get the following error: Value "c++20" (of type "string") for combo option "C++ language standard to use" is not one of the choices. Possible choices are (as string): "none", "c++11", "vc++11", "c++14", "c++latest", "vc++latest", "vc++14", "c++17", "vc++17".
**To Reproduce**
meson.build:
```
project(
'my_project',
'cpp',
default_options : ['cpp_std=c++latest']
)
```
**Expected behavior**
`meson builddir` should complete without error
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? No
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.): Windows 10
* what Python version are you using e.g. 3.8.0: 3.9.7
* what `meson --version`: 0.59.1
* what `ninja --version` if it's a Ninja build: 1.10.2
created_at: 2021-10-12T06:19:48Z
patch:
<patch>
diff --git a/mesonbuild/compilers/cpp.py b/mesonbuild/compilers/cpp.py
--- a/mesonbuild/compilers/cpp.py
+++ b/mesonbuild/compilers/cpp.py
@@ -598,10 +598,12 @@ class VisualStudioLikeCPPCompilerMixin(CompilerMixinBase):
'vc++11': (True, 11),
'vc++14': (True, 14),
'vc++17': (True, 17),
+ 'vc++20': (True, 20),
'vc++latest': (True, "latest"),
'c++11': (False, 11),
'c++14': (False, 14),
'c++17': (False, 17),
+ 'c++20': (False, 20),
'c++latest': (False, "latest"),
}
@@ -703,6 +705,8 @@ def get_options(self) -> 'KeyedOptionDictType':
# Visual Studio 2017 and later
if version_compare(self.version, '>=19.11'):
cpp_stds.extend(['vc++14', 'c++17', 'vc++17'])
+ if version_compare(self.version, '>=19.29'):
+ cpp_stds.extend(['c++20', 'vc++20'])
return self._get_options_impl(super().get_options(), cpp_stds)
def get_option_compile_args(self, options: 'KeyedOptionDictType') -> T.List[str]:
</patch>
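To make the effect of the patch concrete, here is a standalone illustration (plain Python, not meson code) of the `/std:` flags the patched `VC_VERSION_MAP` logic yields for the newly accepted values:

```python
# Mirrors the permissive/version handling in VisualStudioLikeCPPCompilerMixin,
# restricted to the two entries the patch adds.
VC_VERSION_MAP = {
    'vc++20': (True, 20),   # added by the patch
    'c++20': (False, 20),   # added by the patch
}

def msvc_std_args(value):
    permissive, ver = VC_VERSION_MAP[value]
    args = [f'/std:c++{ver}']
    if not permissive:
        args.append('/permissive-')
    return args

print(msvc_std_args('c++20'))   # ['/std:c++20', '/permissive-']
print(msvc_std_args('vc++20'))  # ['/std:c++20']
```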
FAIL_TO_PASS: []
PASS_TO_PASS: []

instance_id: docker__compose-6388
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker Compose doesn't work when SSH connection used to remote Docker Engine
## Description of the issue
Just trying out the new SSH connection introduced in Docker 18.09, I noticed an error when attempting to run `docker-compose up` whilst targeting a remote Docker Engine instance.
The error message below indicates that Compose isn't aware of the SSH protocol for this purpose:
```
docker.errors.DockerException: Invalid bind address protocol: ssh://xfoxy.secinternal.local
[486] Failed to execute script docker-compose
```
## Context information (for bug reports)
**Output of `docker-compose version`**
```
docker-compose version 1.23.1, build b02f1306
docker-py version: 3.5.0
CPython version: 3.6.7
OpenSSL version: OpenSSL 1.1.0f 25 May 2017
```
**Output of `docker version`**
```
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:16:44 2018
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker-compose config`**
(Make sure to add the relevant `-f` and other flags)
```
networks:
testnet: {}
services:
dradis:
image: raesene/dradis
networks:
testnet: null
ports:
- 3000/tcp
volumes:
- data:/data:rw
sectest:
image: raesene/sectest
networks:
testnet: null
ports:
- 22/tcp
volumes:
- data:/data:rw
version: '3.0'
volumes:
data: {}
```
## Steps to reproduce the issue
1. Configure a Docker client (18.09) to connect to a remote Docker engine instance via SSH
2. Run `docker-compose up` in a directory with a docker-compose.yml file.
3. Error occurs.
### Observed result
Error occurs
### Expected result
Docker compose contacts the remote docker engine instance to create the containers.
### Stacktrace / full error message
```
Traceback (most recent call last):
File "bin/docker-compose", line 6, in <module>
File "compose/cli/main.py", line 71, in main
File "compose/cli/main.py", line 124, in perform_command
File "compose/cli/command.py", line 42, in project_from_options
File "compose/cli/command.py", line 123, in get_project
File "compose/cli/command.py", line 94, in get_client
File "compose/cli/docker_client.py", line 127, in docker_client
File "site-packages/docker/api/client.py", line 118, in __init__
File "site-packages/docker/utils/utils.py", line 256, in parse_host
docker.errors.DockerException: Invalid bind address protocol: ssh://xfoxy.secinternal.local
[486] Failed to execute script docker-compose
```
## Additional information
Client is WSL (Ubuntu 18.04); server is Ubuntu 18.04 running Docker 18.09.
</issue>
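The traceback points at `docker.utils.parse_host`, which rejects any URL scheme it does not recognise. The snippet below is only an illustration of that kind of whitelist check, not docker-py's actual source; the underlying fix is for the client library to accept `ssh` alongside the existing schemes.

```python
# Illustrative only, not docker-py source: shows why ssh:// addresses are
# rejected until 'ssh' is part of the accepted protocol set.
SUPPORTED_PROTOS = ('tcp', 'unix', 'npipe', 'ssh')  # 'ssh' is the addition

def check_host(addr):
    proto = addr.split('://', 1)[0] if '://' in addr else 'tcp'
    if proto not in SUPPORTED_PROTOS:
        raise ValueError(f'Invalid bind address protocol: {addr}')
    return addr

print(check_host('ssh://xfoxy.secinternal.local'))  # accepted once listed
```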
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20 they can be run together in an isolated environment.
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 version: '2'
26
27 services:
28 web:
29 build: .
30 ports:
31 - "5000:5000"
32 volumes:
33 - .:/code
34 redis:
35 image: redis
36
37 For more information about the Compose file, see the
38 [Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
39
40 Compose has commands for managing the whole lifecycle of your application:
41
42 * Start, stop and rebuild services
43 * View the status of running services
44 * Stream the log output of running services
45 * Run a one-off command on a service
46
47 Installation and documentation
48 ------------------------------
49
50 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
51 - Code repository for Compose is on [GitHub](https://github.com/docker/compose).
52 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
53
54 Contributing
55 ------------
56
57 [](https://jenkins.dockerproject.org/job/docker/job/compose/job/master/)
58
59 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
60
61 Releasing
62 ---------
63
64 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
65
[end of README.md]
[start of compose/cli/errors.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import contextlib
5 import logging
6 import socket
7 from distutils.spawn import find_executable
8 from textwrap import dedent
9
10 from docker.errors import APIError
11 from requests.exceptions import ConnectionError as RequestsConnectionError
12 from requests.exceptions import ReadTimeout
13 from requests.exceptions import SSLError
14 from requests.packages.urllib3.exceptions import ReadTimeoutError
15
16 from ..const import API_VERSION_TO_ENGINE_VERSION
17 from .utils import binarystr_to_unicode
18 from .utils import is_docker_for_mac_installed
19 from .utils import is_mac
20 from .utils import is_ubuntu
21 from .utils import is_windows
22
23
24 log = logging.getLogger(__name__)
25
26
27 class UserError(Exception):
28
29 def __init__(self, msg):
30 self.msg = dedent(msg).strip()
31
32 def __unicode__(self):
33 return self.msg
34
35 __str__ = __unicode__
36
37
38 class ConnectionError(Exception):
39 pass
40
41
42 @contextlib.contextmanager
43 def handle_connection_errors(client):
44 try:
45 yield
46 except SSLError as e:
47 log.error('SSL error: %s' % e)
48 raise ConnectionError()
49 except RequestsConnectionError as e:
50 if e.args and isinstance(e.args[0], ReadTimeoutError):
51 log_timeout_error(client.timeout)
52 raise ConnectionError()
53 exit_with_error(get_conn_error_message(client.base_url))
54 except APIError as e:
55 log_api_error(e, client.api_version)
56 raise ConnectionError()
57 except (ReadTimeout, socket.timeout):
58 log_timeout_error(client.timeout)
59 raise ConnectionError()
60 except Exception as e:
61 if is_windows():
62 import pywintypes
63 if isinstance(e, pywintypes.error):
64 log_windows_pipe_error(e)
65 raise ConnectionError()
66 raise
67
68
69 def log_windows_pipe_error(exc):
70 if exc.winerror == 2:
71 log.error("Couldn't connect to Docker daemon. You might need to start Docker for Windows.")
72 elif exc.winerror == 232: # https://github.com/docker/compose/issues/5005
73 log.error(
74 "The current Compose file version is not compatible with your engine version. "
75 "Please upgrade your Compose file to a more recent version, or set "
76 "a COMPOSE_API_VERSION in your environment."
77 )
78 else:
79 log.error(
80 "Windows named pipe error: {} (code: {})".format(
81 binarystr_to_unicode(exc.strerror), exc.winerror
82 )
83 )
84
85
86 def log_timeout_error(timeout):
87 log.error(
88 "An HTTP request took too long to complete. Retry with --verbose to "
89 "obtain debug information.\n"
90 "If you encounter this issue regularly because of slow network "
91 "conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher "
92 "value (current value: %s)." % timeout)
93
94
95 def log_api_error(e, client_version):
96 explanation = binarystr_to_unicode(e.explanation)
97
98 if 'client is newer than server' not in explanation:
99 log.error(explanation)
100 return
101
102 version = API_VERSION_TO_ENGINE_VERSION.get(client_version)
103 if not version:
104 # They've set a custom API version
105 log.error(explanation)
106 return
107
108 log.error(
109 "The Docker Engine version is less than the minimum required by "
110 "Compose. Your current project requires a Docker Engine of "
111 "version {version} or greater.".format(version=version)
112 )
113
114
115 def exit_with_error(msg):
116 log.error(dedent(msg).strip())
117 raise ConnectionError()
118
119
120 def get_conn_error_message(url):
121 try:
122 if find_executable('docker') is None:
123 return docker_not_found_msg("Couldn't connect to Docker daemon.")
124 if is_docker_for_mac_installed():
125 return conn_error_docker_for_mac
126 if find_executable('docker-machine') is not None:
127 return conn_error_docker_machine
128 except UnicodeDecodeError:
129 # https://github.com/docker/compose/issues/5442
130 # Ignore the error and print the generic message instead.
131 pass
132 return conn_error_generic.format(url=url)
133
134
135 def docker_not_found_msg(problem):
136 return "{} You might need to install Docker:\n\n{}".format(
137 problem, docker_install_url())
138
139
140 def docker_install_url():
141 if is_mac():
142 return docker_install_url_mac
143 elif is_ubuntu():
144 return docker_install_url_ubuntu
145 elif is_windows():
146 return docker_install_url_windows
147 else:
148 return docker_install_url_generic
149
150
151 docker_install_url_mac = "https://docs.docker.com/engine/installation/mac/"
152 docker_install_url_ubuntu = "https://docs.docker.com/engine/installation/ubuntulinux/"
153 docker_install_url_windows = "https://docs.docker.com/engine/installation/windows/"
154 docker_install_url_generic = "https://docs.docker.com/engine/installation/"
155
156
157 conn_error_docker_machine = """
158 Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
159 """
160
161 conn_error_docker_for_mac = """
162 Couldn't connect to Docker daemon. You might need to start Docker for Mac.
163 """
164
165
166 conn_error_generic = """
167 Couldn't connect to Docker daemon at {url} - is it running?
168
169 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
170 """
171
[end of compose/cli/errors.py]
[start of compose/cli/utils.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import unicode_literals
4
5 import math
6 import os
7 import platform
8 import ssl
9 import subprocess
10 import sys
11
12 import docker
13 import six
14
15 import compose
16 from ..const import IS_WINDOWS_PLATFORM
17
18 # WindowsError is not defined on non-win32 platforms. Avoid runtime errors by
19 # defining it as OSError (its parent class) if missing.
20 try:
21 WindowsError
22 except NameError:
23 WindowsError = OSError
24
25
26 def yesno(prompt, default=None):
27 """
28 Prompt the user for a yes or no.
29
30 Can optionally specify a default value, which will only be
31 used if they enter a blank line.
32
33 Unrecognised input (anything other than "y", "n", "yes",
34 "no" or "") will return None.
35 """
36 answer = input(prompt).strip().lower()
37
38 if answer == "y" or answer == "yes":
39 return True
40 elif answer == "n" or answer == "no":
41 return False
42 elif answer == "":
43 return default
44 else:
45 return None
46
47
48 def input(prompt):
49 """
50 Version of input (raw_input in Python 2) which forces a flush of sys.stdout
51 to avoid problems where the prompt fails to appear due to line buffering
52 """
53 sys.stdout.write(prompt)
54 sys.stdout.flush()
55 return sys.stdin.readline().rstrip('\n')
56
57
58 def call_silently(*args, **kwargs):
59 """
60 Like subprocess.call(), but redirects stdout and stderr to /dev/null.
61 """
62 with open(os.devnull, 'w') as shutup:
63 try:
64 return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs)
65 except WindowsError:
66 # On Windows, subprocess.call() can still raise exceptions. Normalize
67 # to POSIXy behaviour by returning a nonzero exit code.
68 return 1
69
70
71 def is_mac():
72 return platform.system() == 'Darwin'
73
74
75 def is_ubuntu():
76 return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'
77
78
79 def is_windows():
80 return IS_WINDOWS_PLATFORM
81
82
83 def get_version_info(scope):
84 versioninfo = 'docker-compose version {}, build {}'.format(
85 compose.__version__,
86 get_build_version())
87
88 if scope == 'compose':
89 return versioninfo
90 if scope == 'full':
91 return (
92 "{}\n"
93 "docker-py version: {}\n"
94 "{} version: {}\n"
95 "OpenSSL version: {}"
96 ).format(
97 versioninfo,
98 docker.version,
99 platform.python_implementation(),
100 platform.python_version(),
101 ssl.OPENSSL_VERSION)
102
103 raise ValueError("{} is not a valid version scope".format(scope))
104
105
106 def get_build_version():
107 filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
108 if not os.path.exists(filename):
109 return 'unknown'
110
111 with open(filename) as fh:
112 return fh.read().strip()
113
114
115 def is_docker_for_mac_installed():
116 return is_mac() and os.path.isdir('/Applications/Docker.app')
117
118
119 def generate_user_agent():
120 parts = [
121 "docker-compose/{}".format(compose.__version__),
122 "docker-py/{}".format(docker.__version__),
123 ]
124 try:
125 p_system = platform.system()
126 p_release = platform.release()
127 except IOError:
128 pass
129 else:
130 parts.append("{}/{}".format(p_system, p_release))
131 return " ".join(parts)
132
133
134 def human_readable_file_size(size):
135 suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
136 order = int(math.log(size, 2) / 10) if size else 0
137 if order >= len(suffixes):
138 order = len(suffixes) - 1
139
140 return '{0:.3g} {1}'.format(
141 size / float(1 << (order * 10)),
142 suffixes[order]
143 )
144
145
146 def binarystr_to_unicode(s):
147 if not isinstance(s, six.binary_type):
148 return s
149
150 if IS_WINDOWS_PLATFORM:
151 try:
152 return s.decode('windows-1250')
153 except UnicodeDecodeError:
154 pass
155 return s.decode('utf-8', 'replace')
156
[end of compose/cli/utils.py]
[start of compose/config/validation.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import json
5 import logging
6 import os
7 import re
8 import sys
9
10 import six
11 from docker.utils.ports import split_port
12 from jsonschema import Draft4Validator
13 from jsonschema import FormatChecker
14 from jsonschema import RefResolver
15 from jsonschema import ValidationError
16
17 from ..const import COMPOSEFILE_V1 as V1
18 from ..const import NANOCPUS_SCALE
19 from .errors import ConfigurationError
20 from .errors import VERSION_EXPLANATION
21 from .sort_services import get_service_name_from_network_mode
22
23
24 log = logging.getLogger(__name__)
25
26
27 DOCKER_CONFIG_HINTS = {
28 'cpu_share': 'cpu_shares',
29 'add_host': 'extra_hosts',
30 'hosts': 'extra_hosts',
31 'extra_host': 'extra_hosts',
32 'device': 'devices',
33 'link': 'links',
34 'memory_swap': 'memswap_limit',
35 'port': 'ports',
36 'privilege': 'privileged',
37 'priviliged': 'privileged',
38 'privilige': 'privileged',
39 'volume': 'volumes',
40 'workdir': 'working_dir',
41 }
42
43
44 VALID_NAME_CHARS = r'[a-zA-Z0-9\._\-]'
45 VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'
46
47 VALID_IPV4_SEG = r'(\d{1,2}|1\d{2}|2[0-4]\d|25[0-5])'
48 VALID_IPV4_ADDR = r"({IPV4_SEG}\.){{3}}{IPV4_SEG}".format(IPV4_SEG=VALID_IPV4_SEG)
49 VALID_REGEX_IPV4_CIDR = r"^{IPV4_ADDR}/(\d|[1-2]\d|3[0-2])$".format(IPV4_ADDR=VALID_IPV4_ADDR)
50
51 VALID_IPV6_SEG = r'[0-9a-fA-F]{1,4}'
52 VALID_REGEX_IPV6_CIDR = "".join(r"""
53 ^
54 (
55 (({IPV6_SEG}:){{7}}{IPV6_SEG})|
56 (({IPV6_SEG}:){{1,7}}:)|
57 (({IPV6_SEG}:){{1,6}}(:{IPV6_SEG}){{1,1}})|
58 (({IPV6_SEG}:){{1,5}}(:{IPV6_SEG}){{1,2}})|
59 (({IPV6_SEG}:){{1,4}}(:{IPV6_SEG}){{1,3}})|
60 (({IPV6_SEG}:){{1,3}}(:{IPV6_SEG}){{1,4}})|
61 (({IPV6_SEG}:){{1,2}}(:{IPV6_SEG}){{1,5}})|
62 (({IPV6_SEG}:){{1,1}}(:{IPV6_SEG}){{1,6}})|
63 (:((:{IPV6_SEG}){{1,7}}|:))|
64 (fe80:(:{IPV6_SEG}){{0,4}}%[0-9a-zA-Z]{{1,}})|
65 (::(ffff(:0{{1,4}}){{0,1}}:){{0,1}}{IPV4_ADDR})|
66 (({IPV6_SEG}:){{1,4}}:{IPV4_ADDR})
67 )
68 /(\d|[1-9]\d|1[0-1]\d|12[0-8])
69 $
70 """.format(IPV6_SEG=VALID_IPV6_SEG, IPV4_ADDR=VALID_IPV4_ADDR).split())
71
72
73 @FormatChecker.cls_checks(format="ports", raises=ValidationError)
74 def format_ports(instance):
75 try:
76 split_port(instance)
77 except ValueError as e:
78 raise ValidationError(six.text_type(e))
79 return True
80
81
82 @FormatChecker.cls_checks(format="expose", raises=ValidationError)
83 def format_expose(instance):
84 if isinstance(instance, six.string_types):
85 if not re.match(VALID_EXPOSE_FORMAT, instance):
86 raise ValidationError(
87 "should be of the format 'PORT[/PROTOCOL]'")
88
89 return True
90
91
92 @FormatChecker.cls_checks("subnet_ip_address", raises=ValidationError)
93 def format_subnet_ip_address(instance):
94 if isinstance(instance, six.string_types):
95 if not re.match(VALID_REGEX_IPV4_CIDR, instance) and \
96 not re.match(VALID_REGEX_IPV6_CIDR, instance):
97 raise ValidationError("should use the CIDR format")
98
99 return True
100
101
102 def match_named_volumes(service_dict, project_volumes):
103 service_volumes = service_dict.get('volumes', [])
104 for volume_spec in service_volumes:
105 if volume_spec.is_named_volume and volume_spec.external not in project_volumes:
106 raise ConfigurationError(
107 'Named volume "{0}" is used in service "{1}" but no'
108 ' declaration was found in the volumes section.'.format(
109 volume_spec.repr(), service_dict.get('name')
110 )
111 )
112
113
114 def python_type_to_yaml_type(type_):
115 type_name = type(type_).__name__
116 return {
117 'dict': 'mapping',
118 'list': 'array',
119 'int': 'number',
120 'float': 'number',
121 'bool': 'boolean',
122 'unicode': 'string',
123 'str': 'string',
124 'bytes': 'string',
125 }.get(type_name, type_name)
126
127
128 def validate_config_section(filename, config, section):
129 """Validate the structure of a configuration section. This must be done
130 before interpolation so it's separate from schema validation.
131 """
132 if not isinstance(config, dict):
133 raise ConfigurationError(
134 "In file '{filename}', {section} must be a mapping, not "
135 "{type}.".format(
136 filename=filename,
137 section=section,
138 type=anglicize_json_type(python_type_to_yaml_type(config))))
139
140 for key, value in config.items():
141 if not isinstance(key, six.string_types):
142 raise ConfigurationError(
143 "In file '{filename}', the {section} name {name} must be a "
144 "quoted string, i.e. '{name}'.".format(
145 filename=filename,
146 section=section,
147 name=key))
148
149 if not isinstance(value, (dict, type(None))):
150 raise ConfigurationError(
151 "In file '{filename}', {section} '{name}' must be a mapping not "
152 "{type}.".format(
153 filename=filename,
154 section=section,
155 name=key,
156 type=anglicize_json_type(python_type_to_yaml_type(value))))
157
158
159 def validate_top_level_object(config_file):
160 if not isinstance(config_file.config, dict):
161 raise ConfigurationError(
162 "Top level object in '{}' needs to be an object not '{}'.".format(
163 config_file.filename,
164 type(config_file.config)))
165
166
167 def validate_ulimits(service_config):
168 ulimit_config = service_config.config.get('ulimits', {})
169 for limit_name, soft_hard_values in six.iteritems(ulimit_config):
170 if isinstance(soft_hard_values, dict):
171 if not soft_hard_values['soft'] <= soft_hard_values['hard']:
172 raise ConfigurationError(
173 "Service '{s.name}' has invalid ulimit '{ulimit}'. "
174 "'soft' value can not be greater than 'hard' value ".format(
175 s=service_config,
176 ulimit=ulimit_config))
177
178
179 def validate_extends_file_path(service_name, extends_options, filename):
180 """
181 The service to be extended must either be defined in the config key 'file',
182 or within 'filename'.
183 """
184 error_prefix = "Invalid 'extends' configuration for %s:" % service_name
185
186 if 'file' not in extends_options and filename is None:
187 raise ConfigurationError(
188 "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix
189 )
190
191
192 def validate_network_mode(service_config, service_names):
193 network_mode = service_config.config.get('network_mode')
194 if not network_mode:
195 return
196
197 if 'networks' in service_config.config:
198 raise ConfigurationError("'network_mode' and 'networks' cannot be combined")
199
200 dependency = get_service_name_from_network_mode(network_mode)
201 if not dependency:
202 return
203
204 if dependency not in service_names:
205 raise ConfigurationError(
206 "Service '{s.name}' uses the network stack of service '{dep}' which "
207 "is undefined.".format(s=service_config, dep=dependency))
208
209
210 def validate_pid_mode(service_config, service_names):
211 pid_mode = service_config.config.get('pid')
212 if not pid_mode:
213 return
214
215 dependency = get_service_name_from_network_mode(pid_mode)
216 if not dependency:
217 return
218 if dependency not in service_names:
219 raise ConfigurationError(
220 "Service '{s.name}' uses the PID namespace of service '{dep}' which "
221 "is undefined.".format(s=service_config, dep=dependency)
222 )
223
224
225 def validate_links(service_config, service_names):
226 for link in service_config.config.get('links', []):
227 if link.split(':')[0] not in service_names:
228 raise ConfigurationError(
229 "Service '{s.name}' has a link to service '{link}' which is "
230 "undefined.".format(s=service_config, link=link))
231
232
233 def validate_depends_on(service_config, service_names):
234 deps = service_config.config.get('depends_on', {})
235 for dependency in deps.keys():
236 if dependency not in service_names:
237 raise ConfigurationError(
238 "Service '{s.name}' depends on service '{dep}' which is "
239 "undefined.".format(s=service_config, dep=dependency)
240 )
241
242
243 def get_unsupported_config_msg(path, error_key):
244 msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
245 if error_key in DOCKER_CONFIG_HINTS:
246 msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key])
247 return msg
248
249
250 def anglicize_json_type(json_type):
251 if json_type.startswith(('a', 'e', 'i', 'o', 'u')):
252 return 'an ' + json_type
253 return 'a ' + json_type
254
255
256 def is_service_dict_schema(schema_id):
257 return schema_id in ('config_schema_v1.json', '#/properties/services')
258
259
260 def handle_error_for_schema_with_id(error, path):
261 schema_id = error.schema['id']
262
263 if is_service_dict_schema(schema_id) and error.validator == 'additionalProperties':
264 return "Invalid service name '{}' - only {} characters are allowed".format(
265 # The service_name is one of the keys in the json object
266 [i for i in list(error.instance) if not i or any(filter(
267 lambda c: not re.match(VALID_NAME_CHARS, c), i
268 ))][0],
269 VALID_NAME_CHARS
270 )
271
272 if error.validator == 'additionalProperties':
273 if schema_id == '#/definitions/service':
274 invalid_config_key = parse_key_from_error_msg(error)
275 return get_unsupported_config_msg(path, invalid_config_key)
276
277 if schema_id.startswith('config_schema_v'):
278 invalid_config_key = parse_key_from_error_msg(error)
279 return ('Invalid top-level property "{key}". Valid top-level '
280 'sections for this Compose file are: {properties}, and '
281 'extensions starting with "x-".\n\n{explanation}').format(
282 key=invalid_config_key,
283 properties=', '.join(error.schema['properties'].keys()),
284 explanation=VERSION_EXPLANATION
285 )
286
287 if not error.path:
288 return '{}\n\n{}'.format(error.message, VERSION_EXPLANATION)
289
290
291 def handle_generic_error(error, path):
292 msg_format = None
293 error_msg = error.message
294
295 if error.validator == 'oneOf':
296 msg_format = "{path} {msg}"
297 config_key, error_msg = _parse_oneof_validator(error)
298 if config_key:
299 path.append(config_key)
300
301 elif error.validator == 'type':
302 msg_format = "{path} contains an invalid type, it should be {msg}"
303 error_msg = _parse_valid_types_from_validator(error.validator_value)
304
305 elif error.validator == 'required':
306 error_msg = ", ".join(error.validator_value)
307 msg_format = "{path} is invalid, {msg} is required."
308
309 elif error.validator == 'dependencies':
310 config_key = list(error.validator_value.keys())[0]
311 required_keys = ",".join(error.validator_value[config_key])
312
313 msg_format = "{path} is invalid: {msg}"
314 path.append(config_key)
315 error_msg = "when defining '{}' you must set '{}' as well".format(
316 config_key,
317 required_keys)
318
319 elif error.cause:
320 error_msg = six.text_type(error.cause)
321 msg_format = "{path} is invalid: {msg}"
322
323 elif error.path:
324 msg_format = "{path} value {msg}"
325
326 if msg_format:
327 return msg_format.format(path=path_string(path), msg=error_msg)
328
329 return error.message
330
331
332 def parse_key_from_error_msg(error):
333 try:
334 return error.message.split("'")[1]
335 except IndexError:
336 return error.message.split('(')[1].split(' ')[0].strip("'")
337
338
339 def path_string(path):
340 return ".".join(c for c in path if isinstance(c, six.string_types))
341
342
343 def _parse_valid_types_from_validator(validator):
344 """A validator value can be either an array of valid types or a string of
345 a valid type. Parse the valid types and prefix with the correct article.
346 """
347 if not isinstance(validator, list):
348 return anglicize_json_type(validator)
349
350 if len(validator) == 1:
351 return anglicize_json_type(validator[0])
352
353 return "{}, or {}".format(
354 ", ".join([anglicize_json_type(validator[0])] + validator[1:-1]),
355 anglicize_json_type(validator[-1]))
356
357
358 def _parse_oneof_validator(error):
359 """oneOf has multiple schemas, so we need to reason about which schema, sub
360 schema or constraint the validation is failing on.
361 Inspecting the context value of a ValidationError gives us information about
362 which sub schema failed and which kind of error it is.
363 """
364 types = []
365 for context in error.context:
366 if context.validator == 'oneOf':
367 _, error_msg = _parse_oneof_validator(context)
368 return path_string(context.path), error_msg
369
370 if context.validator == 'required':
371 return (None, context.message)
372
373 if context.validator == 'additionalProperties':
374 invalid_config_key = parse_key_from_error_msg(context)
375 return (None, "contains unsupported option: '{}'".format(invalid_config_key))
376
377 if context.validator == 'uniqueItems':
378 return (
379 path_string(context.path) if context.path else None,
380 "contains non-unique items, please remove duplicates from {}".format(
381 context.instance),
382 )
383
384 if context.path:
385 return (
386 path_string(context.path),
387 "contains {}, which is an invalid type, it should be {}".format(
388 json.dumps(context.instance),
389 _parse_valid_types_from_validator(context.validator_value)),
390 )
391
392 if context.validator == 'type':
393 types.append(context.validator_value)
394
395 valid_types = _parse_valid_types_from_validator(types)
396 return (None, "contains an invalid type, it should be {}".format(valid_types))
397
398
399 def process_service_constraint_errors(error, service_name, version):
400 if version == V1:
401 if 'image' in error.instance and 'build' in error.instance:
402 return (
403 "Service {} has both an image and build path specified. "
404 "A service can either be built to image or use an existing "
405 "image, not both.".format(service_name))
406
407 if 'image' in error.instance and 'dockerfile' in error.instance:
408 return (
409 "Service {} has both an image and alternate Dockerfile. "
410 "A service can either be built to image or use an existing "
411 "image, not both.".format(service_name))
412
413 if 'image' not in error.instance and 'build' not in error.instance:
414 return (
415 "Service {} has neither an image nor a build context specified. "
416 "At least one must be provided.".format(service_name))
417
418
419 def process_config_schema_errors(error):
420 path = list(error.path)
421
422 if 'id' in error.schema:
423 error_msg = handle_error_for_schema_with_id(error, path)
424 if error_msg:
425 return error_msg
426
427 return handle_generic_error(error, path)
428
429
430 def validate_against_config_schema(config_file):
431 schema = load_jsonschema(config_file)
432 format_checker = FormatChecker(["ports", "expose", "subnet_ip_address"])
433 validator = Draft4Validator(
434 schema,
435 resolver=RefResolver(get_resolver_path(), schema),
436 format_checker=format_checker)
437 handle_errors(
438 validator.iter_errors(config_file.config),
439 process_config_schema_errors,
440 config_file.filename)
441
442
443 def validate_service_constraints(config, service_name, config_file):
444 def handler(errors):
445 return process_service_constraint_errors(
446 errors, service_name, config_file.version)
447
448 schema = load_jsonschema(config_file)
449 validator = Draft4Validator(schema['definitions']['constraints']['service'])
450 handle_errors(validator.iter_errors(config), handler, None)
451
452
453 def validate_cpu(service_config):
454 cpus = service_config.config.get('cpus')
455 if not cpus:
456 return
457 nano_cpus = cpus * NANOCPUS_SCALE
458 if isinstance(nano_cpus, float) and not nano_cpus.is_integer():
459 raise ConfigurationError(
460 "cpus must have nine or less digits after decimal point")
461
462
463 def get_schema_path():
464 return os.path.dirname(os.path.abspath(__file__))
465
466
467 def load_jsonschema(config_file):
468 filename = os.path.join(
469 get_schema_path(),
470 "config_schema_v{0}.json".format(config_file.version))
471
472 if not os.path.exists(filename):
473 raise ConfigurationError(
474 'Version in "{}" is unsupported. {}'
475 .format(config_file.filename, VERSION_EXPLANATION))
476
477 with open(filename, "r") as fh:
478 return json.load(fh)
479
480
481 def get_resolver_path():
482 schema_path = get_schema_path()
483 if sys.platform == "win32":
484 scheme = "///"
485 # TODO: why is this necessary?
486 schema_path = schema_path.replace('\\', '/')
487 else:
488 scheme = "//"
489 return "file:{}{}/".format(scheme, schema_path)
490
491
492 def handle_errors(errors, format_error_func, filename):
493 """jsonschema returns an error tree full of information to explain what has
494 gone wrong. Process each error and pull out relevant information and re-write
495 helpful error messages that are relevant.
496 """
497 errors = list(sorted(errors, key=str))
498 if not errors:
499 return
500
501 error_msg = '\n'.join(format_error_func(error) for error in errors)
502 raise ConfigurationError(
503 "The Compose file{file_msg} is invalid because:\n{error_msg}".format(
504 file_msg=" '{}'".format(filename) if filename else "",
505 error_msg=error_msg))
506
507
508 def validate_healthcheck(service_config):
509 healthcheck = service_config.config.get('healthcheck', {})
510
511 if 'test' in healthcheck and isinstance(healthcheck['test'], list):
512 if len(healthcheck['test']) == 0:
513 raise ConfigurationError(
514 'Service "{}" defines an invalid healthcheck: '
515 '"test" is an empty list'
516 .format(service_config.name))
517
518 # when disable is true config.py::process_healthcheck adds "test: ['NONE']" to service_config
519 elif healthcheck['test'][0] == 'NONE' and len(healthcheck) > 1:
520 raise ConfigurationError(
521 'Service "{}" defines an invalid healthcheck: '
522 '"disable: true" cannot be combined with other options'
523 .format(service_config.name))
524
525 elif healthcheck['test'][0] not in ('NONE', 'CMD', 'CMD-SHELL'):
526 raise ConfigurationError(
527 'Service "{}" defines an invalid healthcheck: '
528 'when "test" is a list the first item must be either NONE, CMD or CMD-SHELL'
529 .format(service_config.name))
530
[end of compose/config/validation.py]
[start of script/release/release.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import argparse
6 import os
7 import shutil
8 import sys
9 import time
10 from distutils.core import run_setup
11
12 import pypandoc
13 from jinja2 import Template
14 from release.bintray import BintrayAPI
15 from release.const import BINTRAY_ORG
16 from release.const import NAME
17 from release.const import REPO_ROOT
18 from release.downloader import BinaryDownloader
19 from release.images import ImageManager
20 from release.pypi import check_pypirc
21 from release.pypi import pypi_upload
22 from release.repository import delete_assets
23 from release.repository import get_contributors
24 from release.repository import Repository
25 from release.repository import upload_assets
26 from release.utils import branch_name
27 from release.utils import compatibility_matrix
28 from release.utils import read_release_notes_from_changelog
29 from release.utils import ScriptError
30 from release.utils import update_init_py_version
31 from release.utils import update_run_sh_version
32 from release.utils import yesno
33
34
35 def create_initial_branch(repository, args):
36 release_branch = repository.create_release_branch(args.release, args.base)
37 if args.base and args.cherries:
38 print('Detected patch version.')
39 cherries = input('Indicate (space-separated) PR numbers to cherry-pick then press Enter:\n')
40 repository.cherry_pick_prs(release_branch, cherries.split())
41
42 return create_bump_commit(repository, release_branch, args.bintray_user, args.bintray_org)
43
44
45 def create_bump_commit(repository, release_branch, bintray_user, bintray_org):
46 with release_branch.config_reader() as cfg:
47 release = cfg.get('release')
48 print('Updating version info in __init__.py and run.sh')
49 update_run_sh_version(release)
50 update_init_py_version(release)
51
52 input('Please add the release notes to the CHANGELOG.md file, then press Enter to continue.')
53 proceed = None
54 while not proceed:
55 print(repository.diff())
56 proceed = yesno('Are these changes ok? y/N ', default=False)
57
58 if repository.diff():
59 repository.create_bump_commit(release_branch, release)
60 repository.push_branch_to_remote(release_branch)
61
62 bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], bintray_user)
63 if not bintray_api.repository_exists(bintray_org, release_branch.name):
64 print('Creating data repository {} on bintray'.format(release_branch.name))
65 bintray_api.create_repository(bintray_org, release_branch.name, 'generic')
66 else:
67 print('Bintray repository {} already exists. Skipping'.format(release_branch.name))
68
69
70 def monitor_pr_status(pr_data):
71 print('Waiting for CI to complete...')
72 last_commit = pr_data.get_commits().reversed[0]
73 while True:
74 status = last_commit.get_combined_status()
75 if status.state == 'pending' or status.state == 'failure':
76 summary = {
77 'pending': 0,
78 'success': 0,
79 'failure': 0,
80 'error': 0,
81 }
82 for detail in status.statuses:
83 if detail.context == 'dco-signed':
84 # dco-signed check breaks on merge remote-tracking ; ignore it
85 continue
86 if detail.state in summary:
87 summary[detail.state] += 1
88 print(
89 '{pending} pending, {success} successes, {failure} failures, '
90 '{error} errors'.format(**summary)
91 )
92 if summary['failure'] > 0 or summary['error'] > 0:
93 raise ScriptError('CI failures detected!')
94 elif summary['pending'] == 0 and summary['success'] > 0:
95 # This check assumes at least 1 non-DCO CI check to avoid race conditions.
96 # If testing on a repo without CI, use --skip-ci-check to avoid looping eternally
97 return True
98 time.sleep(30)
99 elif status.state == 'success':
100 print('{} successes: all clear!'.format(status.total_count))
101 return True
102
103
104 def check_pr_mergeable(pr_data):
105 if pr_data.mergeable is False:
106 # mergeable can also be null, in which case the warning would be a false positive.
107 print(
108 'WARNING!! PR #{} can not currently be merged. You will need to '
109 'resolve the conflicts manually before finalizing the release.'.format(pr_data.number)
110 )
111
112 return pr_data.mergeable is True
113
114
115 def create_release_draft(repository, version, pr_data, files):
116 print('Creating Github release draft')
117 with open(os.path.join(os.path.dirname(__file__), 'release.md.tmpl'), 'r') as f:
118 template = Template(f.read())
119 print('Rendering release notes based on template')
120 release_notes = template.render(
121 version=version,
122 compat_matrix=compatibility_matrix(),
123 integrity=files,
124 contributors=get_contributors(pr_data),
125 changelog=read_release_notes_from_changelog(),
126 )
127 gh_release = repository.create_release(
128 version, release_notes, draft=True, prerelease='-rc' in version,
129 target_commitish='release'
130 )
131 print('Release draft initialized')
132 return gh_release
133
134
135 def print_final_instructions(args):
136 print(
137 "You're almost done! Please verify that everything is in order and "
138 "you are ready to make the release public, then run the following "
139 "command:\n{exe} -b {user} finalize {version}".format(
140 exe='./script/release/release.sh', user=args.bintray_user, version=args.release
141 )
142 )
143
144
145 def distclean():
146 print('Running distclean...')
147 dirs = [
148 os.path.join(REPO_ROOT, 'build'), os.path.join(REPO_ROOT, 'dist'),
149 os.path.join(REPO_ROOT, 'docker-compose.egg-info')
150 ]
151 files = []
152 for base, dirnames, fnames in os.walk(REPO_ROOT):
153 for fname in fnames:
154 path = os.path.normpath(os.path.join(base, fname))
155 if fname.endswith('.pyc'):
156 files.append(path)
157 elif fname.startswith('.coverage.'):
158 files.append(path)
159 for dirname in dirnames:
160 path = os.path.normpath(os.path.join(base, dirname))
161 if dirname == '__pycache__':
162 dirs.append(path)
163 elif dirname == '.coverage-binfiles':
164 dirs.append(path)
165
166 for file in files:
167 os.unlink(file)
168
169 for folder in dirs:
170 shutil.rmtree(folder, ignore_errors=True)
171
172
173 def resume(args):
174 try:
175 distclean()
176 repository = Repository(REPO_ROOT, args.repo)
177 br_name = branch_name(args.release)
178 if not repository.branch_exists(br_name):
179 raise ScriptError('No local branch exists for this release.')
180 gh_release = repository.find_release(args.release)
181 if gh_release and not gh_release.draft:
182 print('WARNING!! Found non-draft (public) release for this version!')
183 proceed = yesno(
184 'Are you sure you wish to proceed? Modifying an already '
185 'released version is dangerous! y/N ', default=False
186 )
187 if proceed.lower() is not True:
188 raise ScriptError('Aborting release')
189
190 release_branch = repository.checkout_branch(br_name)
191 if args.cherries:
192 cherries = input('Indicate (space-separated) PR numbers to cherry-pick then press Enter:\n')
193 repository.cherry_pick_prs(release_branch, cherries.split())
194
195 create_bump_commit(repository, release_branch, args.bintray_user, args.bintray_org)
196 pr_data = repository.find_release_pr(args.release)
197 if not pr_data:
198 pr_data = repository.create_release_pull_request(args.release)
199 check_pr_mergeable(pr_data)
200 if not args.skip_ci:
201 monitor_pr_status(pr_data)
202 downloader = BinaryDownloader(args.destination)
203 files = downloader.download_all(args.release)
204 if not gh_release:
205 gh_release = create_release_draft(repository, args.release, pr_data, files)
206 delete_assets(gh_release)
207 upload_assets(gh_release, files)
208 img_manager = ImageManager(args.release)
209 img_manager.build_images(repository, files)
210 except ScriptError as e:
211 print(e)
212 return 1
213
214 print_final_instructions(args)
215 return 0
216
217
218 def cancel(args):
219 try:
220 repository = Repository(REPO_ROOT, args.repo)
221 repository.close_release_pr(args.release)
222 repository.remove_release(args.release)
223 repository.remove_bump_branch(args.release)
224 bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], args.bintray_user)
225 print('Removing Bintray data repository for {}'.format(args.release))
226 bintray_api.delete_repository(args.bintray_org, branch_name(args.release))
227 distclean()
228 except ScriptError as e:
229 print(e)
230 return 1
231 print('Release cancellation complete.')
232 return 0
233
234
235 def start(args):
236 distclean()
237 try:
238 repository = Repository(REPO_ROOT, args.repo)
239 create_initial_branch(repository, args)
240 pr_data = repository.create_release_pull_request(args.release)
241 check_pr_mergeable(pr_data)
242 if not args.skip_ci:
243 monitor_pr_status(pr_data)
244 downloader = BinaryDownloader(args.destination)
245 files = downloader.download_all(args.release)
246 gh_release = create_release_draft(repository, args.release, pr_data, files)
247 upload_assets(gh_release, files)
248 img_manager = ImageManager(args.release)
249 img_manager.build_images(repository, files)
250 except ScriptError as e:
251 print(e)
252 return 1
253
254 print_final_instructions(args)
255 return 0
256
257
258 def finalize(args):
259 distclean()
260 try:
261 check_pypirc()
262 repository = Repository(REPO_ROOT, args.repo)
263 img_manager = ImageManager(args.release)
264 pr_data = repository.find_release_pr(args.release)
265 if not pr_data:
266 raise ScriptError('No PR found for {}'.format(args.release))
267 if not check_pr_mergeable(pr_data):
268 raise ScriptError('Can not finalize release with an unmergeable PR')
269 if not img_manager.check_images():
270 raise ScriptError('Missing release image')
271 br_name = branch_name(args.release)
272 if not repository.branch_exists(br_name):
273 raise ScriptError('No local branch exists for this release.')
274 gh_release = repository.find_release(args.release)
275 if not gh_release:
276 raise ScriptError('No Github release draft for this version')
277
278 repository.checkout_branch(br_name)
279
280 pypandoc.convert_file(
281 os.path.join(REPO_ROOT, 'README.md'), 'rst', outputfile=os.path.join(REPO_ROOT, 'README.rst')
282 )
283 run_setup(os.path.join(REPO_ROOT, 'setup.py'), script_args=['sdist', 'bdist_wheel'])
284
285 merge_status = pr_data.merge()
286 if not merge_status.merged and not args.finalize_resume:
287 raise ScriptError(
288 'Unable to merge PR #{}: {}'.format(pr_data.number, merge_status.message)
289 )
290
291 pypi_upload(args)
292
293 img_manager.push_images()
294 repository.publish_release(gh_release)
295 except ScriptError as e:
296 print(e)
297 return 1
298
299 return 0
300
301
302 ACTIONS = [
303 'start',
304 'cancel',
305 'resume',
306 'finalize',
307 ]
308
309 EPILOG = '''Example uses:
310 * Start a new feature release (includes all changes currently in master)
311 release.sh -b user start 1.23.0
312 * Start a new patch release
313 release.sh -b user --patch 1.21.0 start 1.21.1
314 * Cancel / rollback an existing release draft
315 release.sh -b user cancel 1.23.0
316 * Restart a previously aborted patch release
317 release.sh -b user -p 1.21.0 resume 1.21.1
318 '''
319
320
321 def main():
322 if 'GITHUB_TOKEN' not in os.environ:
323 print('GITHUB_TOKEN environment variable must be set')
324 return 1
325
326 if 'BINTRAY_TOKEN' not in os.environ:
327 print('BINTRAY_TOKEN environment variable must be set')
328 return 1
329
330 parser = argparse.ArgumentParser(
331 description='Orchestrate a new release of docker/compose. This tool assumes that you have '
332 'obtained a Github API token and Bintray API key and set the GITHUB_TOKEN and '
333 'BINTRAY_TOKEN environment variables accordingly.',
334 epilog=EPILOG, formatter_class=argparse.RawTextHelpFormatter)
335 parser.add_argument(
336 'action', choices=ACTIONS, help='The action to be performed for this release'
337 )
338 parser.add_argument('release', help='Release number, e.g. 1.9.0-rc1, 2.1.1')
339 parser.add_argument(
340 '--patch', '-p', dest='base',
341 help='Which version is being patched by this release'
342 )
343 parser.add_argument(
344 '--repo', '-r', dest='repo', default=NAME,
345 help='Start a release for the given repo (default: {})'.format(NAME)
346 )
347 parser.add_argument(
348 '-b', dest='bintray_user', required=True, metavar='USER',
349 help='Username associated with the Bintray API key'
350 )
351 parser.add_argument(
352 '--bintray-org', dest='bintray_org', metavar='ORG', default=BINTRAY_ORG,
353 help='Organization name on bintray where the data repository will be created.'
354 )
355 parser.add_argument(
356 '--destination', '-o', metavar='DIR', default='binaries',
357 help='Directory where release binaries will be downloaded relative to the project root'
358 )
359 parser.add_argument(
360 '--no-cherries', '-C', dest='cherries', action='store_false',
361 help='If set, the program will not prompt the user for PR numbers to cherry-pick'
362 )
363 parser.add_argument(
364 '--skip-ci-checks', dest='skip_ci', action='store_true',
365 help='If set, the program will not wait for CI jobs to complete'
366 )
367 parser.add_argument(
368 '--finalize-resume', dest='finalize_resume', action='store_true',
369 help='If set, finalize will continue through steps that have already been completed.'
370 )
371 args = parser.parse_args()
372
373 if args.action == 'start':
374 return start(args)
375 elif args.action == 'resume':
376 return resume(args)
377 elif args.action == 'cancel':
378 return cancel(args)
379 elif args.action == 'finalize':
380 return finalize(args)
381
382 print('Unexpected action "{}"'.format(args.action), file=sys.stderr)
383 return 1
384
385
386 if __name__ == '__main__':
387 sys.exit(main())
388
[end of script/release/release.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
docker/compose
|
cfa5d02b52b9bff3a07aae8eca407d03da375c69
|
Docker Compose doesn't work when SSH connection used to remote Docker Engine
## Description of the issue
While trying out the new SSH connection support introduced in Docker 18.09, I noticed an error when attempting to run `docker-compose up` against a remote Docker Engine instance.
The error message below appears to indicate that Compose isn't aware of the SSH protocol for this purpose:
```
docker.errors.DockerException: Invalid bind address protocol: ssh://xfoxy.secinternal.local
[486] Failed to execute script docker-compose
```
## Context information (for bug reports)
**Output of `docker-compose version`**
```
docker-compose version 1.23.1, build b02f1306
docker-py version: 3.5.0
CPython version: 3.6.7
OpenSSL version: OpenSSL 1.1.0f 25 May 2017
```
**Output of `docker version`**
```
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:16:44 2018
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker-compose config`**
(Make sure to add the relevant `-f` and other flags)
```
networks:
testnet: {}
services:
dradis:
image: raesene/dradis
networks:
testnet: null
ports:
- 3000/tcp
volumes:
- data:/data:rw
sectest:
image: raesene/sectest
networks:
testnet: null
ports:
- 22/tcp
volumes:
- data:/data:rw
version: '3.0'
volumes:
data: {}
```
## Steps to reproduce the issue
1. Configure a Docker client (18.09) to connect to a remote Docker engine instance via SSH
2. Run `docker-compose up` in a directory with a docker-compose.yml file.
3. Error occurs.
### Observed result
Error occurs.
### Expected result
Docker Compose contacts the remote Docker Engine instance to create the containers.
### Stacktrace / full error message
```
Traceback (most recent call last):
File "bin/docker-compose", line 6, in <module>
File "compose/cli/main.py", line 71, in main
File "compose/cli/main.py", line 124, in perform_command
File "compose/cli/command.py", line 42, in project_from_options
File "compose/cli/command.py", line 123, in get_project
File "compose/cli/command.py", line 94, in get_client
File "compose/cli/docker_client.py", line 127, in docker_client
File "site-packages/docker/api/client.py", line 118, in __init__
File "site-packages/docker/utils/utils.py", line 256, in parse_host
docker.errors.DockerException: Invalid bind address protocol: ssh://xfoxy.secinternal.local
[486] Failed to execute script docker-compose
```
## Additional information
Client is WSL (Ubuntu 18.04); the server is Ubuntu 18.04 running Docker 18.09.
|
Support for the SSH protocol will be added in the next version of Compose. https://github.com/docker/docker-py/issues/2159
Cool Thanks for the info. :)
|
2018-12-01T00:25:22Z
|
<patch>
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,7 @@ def find_version(*file_paths):
'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.21',
'texttable >= 0.9.0, < 0.10',
'websocket-client >= 0.32.0, < 1.0',
- 'docker >= 3.6.0, < 4.0',
+ 'docker[ssh] >= 3.6.0, < 4.0',
'dockerpty >= 0.4.1, < 0.5',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
</patch>
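For context on why the one-line dependency change above is enough: requesting docker-py's `ssh` extra pulls in the dependencies for its `ssh://` transport, so the scheme that `parse_host` previously rejected becomes usable as a client base URL. A hedged sketch follows; the user and host are illustrative, and it assumes a docker-py release that actually ships the `ssh` extra and transport (per the docker-py issue linked in the discussion above):

```python
# Sketch only: with `pip install "docker[ssh]"` on a docker-py release that
# implements the ssh:// transport, an SSH DOCKER_HOST can back a client.
import docker

client = docker.APIClient(base_url="ssh://user@xfoxy.secinternal.local")
print(client.version()) # round-trips to the remote engine over SSH
```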
|
[]
|
[]
| |||
ipython__ipython-3558
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Install Sphinx extensions
For what seems like a long time now, matplotlib and IPython have both included `ipython_console_highlighting.py` and `ipython_directive.py`. (I think I wrote the former and John Hunter wrote the latter, originally.)
Over time, these extensions have diverged. I think the IPython ones are better maintained as they actually track changes to IPython etc., and logically, I think these belong in IPython and not in matplotlib.
Unfortunately for third-party users of these packages, IPython does not install the Sphinx extensions, but merely keeps them in the source tree. This has encouraged a lot of people to rely on the copies in matplotlib (which are installed). Any chance IPython could install them too? Then we could properly deprecate the copies in matplotlib.
</issue>
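For context, the situation described above shows up in downstream Sphinx configurations: because only matplotlib installs its copies of the two extensions, third-party projects tend to point their `conf.py` at the matplotlib module paths rather than at IPython's better-maintained sources. A hedged illustration follows; the matplotlib paths are the historical installed ones, while the IPython paths are hypothetical here, sketching what an installed package might look like rather than an existing module at the time of this issue:

```python
# Illustrative Sphinx conf.py excerpt (not taken from either project's docs).
# What many third-party projects do today, because matplotlib installs these:
extensions = [
    "matplotlib.sphinxext.ipython_console_highlighting",
    "matplotlib.sphinxext.ipython_directive",
]

# What they could switch to if IPython installed its copies as a package
# (hypothetical module paths at the time this issue was filed):
# extensions = [
#     "IPython.sphinxext.ipython_console_highlighting",
#     "IPython.sphinxext.ipython_directive",
# ]
```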
<code>
[start of README.rst]
1 ===========================================
2 IPython: Productive Interactive Computing
3 ===========================================
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation is available on `our website
9 <http://ipython.org/documentation.html>`_; if you downloaded a built source
10 distribution the ``docs/source`` directory contains the plaintext version of
11 these manuals. If you have Sphinx installed, you can build them by typing
12 ``cd docs; make html`` for local browsing.
13
14
15 Dependencies and supported Python versions
16 ==========================================
17
18 For full details, see the installation section of the manual. The basic parts
19 of IPython only need the Python standard library, but much of its more advanced
20 functionality requires extra packages.
21
22 Officially, IPython requires Python version 2.6, 2.7, or 3.1 and above.
23
24
25 Instant running
26 ===============
27
28 You can run IPython from this directory without even installing it system-wide
29 by typing at the terminal::
30
31 $ python -m IPython
32
33
34 Development installation
35 ========================
36
37 If you want to hack on certain parts, e.g. the IPython notebook, in a clean
38 environment (such as a virtualenv) you can use ``pip`` to grab the necessary
39 dependencies quickly::
40
41 $ pip install -e .[notebook]
42
43 This installs the necessary packages and symlinks IPython into your current
44 environment so that you can work on your local repo copy and run it from anywhere::
45
46 $ ipython notebook
47
48 The same process applies for other parts, such as the qtconsole (the
49 ``extras_require`` attribute in the setup.py file lists all the possibilities).
50
51 Git Hooks and Submodules
52 ************************
53
54 IPython now uses git submodules to ship its javascript dependencies.
55 If you run IPython from git master, you may need to update submodules once in a while with::
56
57 $ git submodule update
58
59 or::
60
61 $ python setup.py submodule
62
63 We have some git hooks for helping keep your submodules always in sync,
64 see our ``git-hooks`` directory for more info.
65
[end of README.rst]
[start of IPython/core/shellapp.py]
1 # encoding: utf-8
2 """
3 A mixin for :class:`~IPython.core.application.Application` classes that
4 launch InteractiveShell instances, load extensions, etc.
5
6 Authors
7 -------
8
9 * Min Ragan-Kelley
10 """
11
12 #-----------------------------------------------------------------------------
13 # Copyright (C) 2008-2011 The IPython Development Team
14 #
15 # Distributed under the terms of the BSD License. The full license is in
16 # the file COPYING, distributed as part of this software.
17 #-----------------------------------------------------------------------------
18
19 #-----------------------------------------------------------------------------
20 # Imports
21 #-----------------------------------------------------------------------------
22
23 from __future__ import absolute_import
24
25 import glob
26 import os
27 import sys
28
29 from IPython.config.application import boolean_flag
30 from IPython.config.configurable import Configurable
31 from IPython.config.loader import Config
32 from IPython.core import pylabtools
33 from IPython.utils import py3compat
34 from IPython.utils.contexts import preserve_keys
35 from IPython.utils.path import filefind
36 from IPython.utils.traitlets import (
37 Unicode, Instance, List, Bool, CaselessStrEnum
38 )
39
40 #-----------------------------------------------------------------------------
41 # Aliases and Flags
42 #-----------------------------------------------------------------------------
43
44 shell_flags = {}
45
46 addflag = lambda *args: shell_flags.update(boolean_flag(*args))
47 addflag('autoindent', 'InteractiveShell.autoindent',
48 'Turn on autoindenting.', 'Turn off autoindenting.'
49 )
50 addflag('automagic', 'InteractiveShell.automagic',
51 """Turn on the auto calling of magic commands. Type %%magic at the
52 IPython prompt for more information.""",
53 'Turn off the auto calling of magic commands.'
54 )
55 addflag('pdb', 'InteractiveShell.pdb',
56 "Enable auto calling the pdb debugger after every exception.",
57 "Disable auto calling the pdb debugger after every exception."
58 )
59 # pydb flag doesn't do any config, as core.debugger switches on import,
60 # which is before parsing. This just allows the flag to be passed.
61 shell_flags.update(dict(
62 pydb = ({},
63 """Use the third party 'pydb' package as debugger, instead of pdb.
64 Requires that pydb is installed."""
65 )
66 ))
67 addflag('pprint', 'PlainTextFormatter.pprint',
68 "Enable auto pretty printing of results.",
69     "Disable auto pretty printing of results."
70 )
71 addflag('color-info', 'InteractiveShell.color_info',
72 """IPython can display information about objects via a set of func-
73 tions, and optionally can use colors for this, syntax highlighting
74 source code and various other elements. However, because this
75 information is passed through a pager (like 'less') and many pagers get
76 confused with color codes, this option is off by default. You can test
77 it and turn it on permanently in your ipython_config.py file if it
78 works for you. Test it and turn it on permanently if it works with
79 your system. The magic function %%color_info allows you to toggle this
80 interactively for testing.""",
81 "Disable using colors for info related things."
82 )
83 addflag('deep-reload', 'InteractiveShell.deep_reload',
84 """Enable deep (recursive) reloading by default. IPython can use the
85 deep_reload module which reloads changes in modules recursively (it
86 replaces the reload() function, so you don't need to change anything to
87 use it). deep_reload() forces a full reload of modules whose code may
88 have changed, which the default reload() function does not. When
89 deep_reload is off, IPython will use the normal reload(), but
90 deep_reload will still be available as dreload(). This feature is off
91 by default [which means that you have both normal reload() and
92 dreload()].""",
93 "Disable deep (recursive) reloading by default."
94 )
95 nosep_config = Config()
96 nosep_config.InteractiveShell.separate_in = ''
97 nosep_config.InteractiveShell.separate_out = ''
98 nosep_config.InteractiveShell.separate_out2 = ''
99
100 shell_flags['nosep']=(nosep_config, "Eliminate all spacing between prompts.")
101 shell_flags['pylab'] = (
102 {'InteractiveShellApp' : {'pylab' : 'auto'}},
103 """Pre-load matplotlib and numpy for interactive use with
104 the default matplotlib backend."""
105 )
106
107 # it's possible we don't want short aliases for *all* of these:
108 shell_aliases = dict(
109 autocall='InteractiveShell.autocall',
110 colors='InteractiveShell.colors',
111 logfile='InteractiveShell.logfile',
112 logappend='InteractiveShell.logappend',
113 c='InteractiveShellApp.code_to_run',
114 m='InteractiveShellApp.module_to_run',
115 ext='InteractiveShellApp.extra_extension',
116 gui='InteractiveShellApp.gui',
117 pylab='InteractiveShellApp.pylab',
118 )
119 shell_aliases['cache-size'] = 'InteractiveShell.cache_size'
120
121 #-----------------------------------------------------------------------------
122 # Main classes and functions
123 #-----------------------------------------------------------------------------
124
125 class InteractiveShellApp(Configurable):
126 """A Mixin for applications that start InteractiveShell instances.
127
128 Provides configurables for loading extensions and executing files
129 as part of configuring a Shell environment.
130
131 The following methods should be called by the :meth:`initialize` method
132 of the subclass:
133
134 - :meth:`init_path`
135 - :meth:`init_shell` (to be implemented by the subclass)
136 - :meth:`init_gui_pylab`
137 - :meth:`init_extensions`
138 - :meth:`init_code`
139 """
140 extensions = List(Unicode, config=True,
141 help="A list of dotted module names of IPython extensions to load."
142 )
143 extra_extension = Unicode('', config=True,
144 help="dotted module name of an IPython extension to load."
145 )
146 def _extra_extension_changed(self, name, old, new):
147 if new:
148 # add to self.extensions
149 self.extensions.append(new)
150
151 # Extensions that are always loaded (not configurable)
152 default_extensions = List(Unicode, [u'storemagic'], config=False)
153
154 exec_files = List(Unicode, config=True,
155 help="""List of files to run at IPython startup."""
156 )
157 file_to_run = Unicode('', config=True,
158 help="""A file to be run""")
159
160 exec_lines = List(Unicode, config=True,
161 help="""lines of code to run at IPython startup."""
162 )
163 code_to_run = Unicode('', config=True,
164 help="Execute the given command string."
165 )
166 module_to_run = Unicode('', config=True,
167 help="Run the module as a script."
168 )
169 gui = CaselessStrEnum(('qt', 'wx', 'gtk', 'glut', 'pyglet', 'osx'), config=True,
170 help="Enable GUI event loop integration ('qt', 'wx', 'gtk', 'glut', 'pyglet', 'osx')."
171 )
172 pylab = CaselessStrEnum(['tk', 'qt', 'wx', 'gtk', 'osx', 'inline', 'auto'],
173 config=True,
174 help="""Pre-load matplotlib and numpy for interactive use,
175 selecting a particular matplotlib backend and loop integration.
176 """
177 )
178 pylab_import_all = Bool(True, config=True,
179 help="""If true, an 'import *' is done from numpy and pylab,
180 when using pylab"""
181 )
182 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC')
183
184 def init_path(self):
185 """Add current working directory, '', to sys.path"""
186 if sys.path[0] != '':
187 sys.path.insert(0, '')
188
189 def init_shell(self):
190 raise NotImplementedError("Override in subclasses")
191
192 def init_gui_pylab(self):
193 """Enable GUI event loop integration, taking pylab into account."""
194 if self.gui or self.pylab:
195 shell = self.shell
196 try:
197 if self.pylab:
198 gui, backend = pylabtools.find_gui_and_backend(self.pylab)
199 self.log.info("Enabling GUI event loop integration, "
200 "toolkit=%s, pylab=%s" % (gui, self.pylab))
201 shell.enable_pylab(gui, import_all=self.pylab_import_all, welcome_message=True)
202 else:
203 self.log.info("Enabling GUI event loop integration, "
204 "toolkit=%s" % self.gui)
205 shell.enable_gui(self.gui)
206 except ImportError:
207 self.log.warn("pylab mode doesn't work as matplotlib could not be found." + \
208 "\nIs it installed on the system?")
209 self.shell.showtraceback()
210 except Exception:
211 self.log.warn("GUI event loop or pylab initialization failed")
212 self.shell.showtraceback()
213
214 def init_extensions(self):
215 """Load all IPython extensions in IPythonApp.extensions.
216
217 This uses the :meth:`ExtensionManager.load_extensions` to load all
218 the extensions listed in ``self.extensions``.
219 """
220 try:
221 self.log.debug("Loading IPython extensions...")
222 extensions = self.default_extensions + self.extensions
223 for ext in extensions:
224 try:
225 self.log.info("Loading IPython extension: %s" % ext)
226 self.shell.extension_manager.load_extension(ext)
227 except:
228 self.log.warn("Error in loading extension: %s" % ext +
229 "\nCheck your config files in %s" % self.profile_dir.location
230 )
231 self.shell.showtraceback()
232 except:
233 self.log.warn("Unknown error in loading extensions:")
234 self.shell.showtraceback()
235
236 def init_code(self):
237 """run the pre-flight code, specified via exec_lines"""
238 self._run_startup_files()
239 self._run_exec_lines()
240 self._run_exec_files()
241 self._run_cmd_line_code()
242 self._run_module()
243
244         # flush output, so it won't be attached to the first cell
245 sys.stdout.flush()
246 sys.stderr.flush()
247
248 # Hide variables defined here from %who etc.
249 self.shell.user_ns_hidden.update(self.shell.user_ns)
250
251 def _run_exec_lines(self):
252 """Run lines of code in IPythonApp.exec_lines in the user's namespace."""
253 if not self.exec_lines:
254 return
255 try:
256 self.log.debug("Running code from IPythonApp.exec_lines...")
257 for line in self.exec_lines:
258 try:
259 self.log.info("Running code in user namespace: %s" %
260 line)
261 self.shell.run_cell(line, store_history=False)
262 except:
263 self.log.warn("Error in executing line in user "
264 "namespace: %s" % line)
265 self.shell.showtraceback()
266 except:
267 self.log.warn("Unknown error in handling IPythonApp.exec_lines:")
268 self.shell.showtraceback()
269
270 def _exec_file(self, fname):
271 try:
272 full_filename = filefind(fname, [u'.', self.ipython_dir])
273 except IOError as e:
274 self.log.warn("File not found: %r"%fname)
275 return
276 # Make sure that the running script gets a proper sys.argv as if it
277 # were run from a system shell.
278 save_argv = sys.argv
279 sys.argv = [full_filename] + self.extra_args[1:]
280 # protect sys.argv from potential unicode strings on Python 2:
281 if not py3compat.PY3:
282 sys.argv = [ py3compat.cast_bytes(a) for a in sys.argv ]
283 try:
284 if os.path.isfile(full_filename):
285 self.log.info("Running file in user namespace: %s" %
286 full_filename)
287 # Ensure that __file__ is always defined to match Python
288 # behavior.
289 with preserve_keys(self.shell.user_ns, '__file__'):
290 self.shell.user_ns['__file__'] = fname
291 if full_filename.endswith('.ipy'):
292 self.shell.safe_execfile_ipy(full_filename)
293 else:
294 # default to python, even without extension
295 self.shell.safe_execfile(full_filename,
296 self.shell.user_ns)
297 finally:
298 sys.argv = save_argv
299
300 def _run_startup_files(self):
301 """Run files from profile startup directory"""
302 startup_dir = self.profile_dir.startup_dir
303 startup_files = glob.glob(os.path.join(startup_dir, '*.py'))
304 startup_files += glob.glob(os.path.join(startup_dir, '*.ipy'))
305 if not startup_files:
306 return
307
308 self.log.debug("Running startup files from %s...", startup_dir)
309 try:
310 for fname in sorted(startup_files):
311 self._exec_file(fname)
312 except:
313 self.log.warn("Unknown error in handling startup files:")
314 self.shell.showtraceback()
315
316 def _run_exec_files(self):
317 """Run files from IPythonApp.exec_files"""
318 if not self.exec_files:
319 return
320
321 self.log.debug("Running files in IPythonApp.exec_files...")
322 try:
323 for fname in self.exec_files:
324 self._exec_file(fname)
325 except:
326 self.log.warn("Unknown error in handling IPythonApp.exec_files:")
327 self.shell.showtraceback()
328
329 def _run_cmd_line_code(self):
330 """Run code or file specified at the command-line"""
331 if self.code_to_run:
332 line = self.code_to_run
333 try:
334 self.log.info("Running code given at command line (c=): %s" %
335 line)
336 self.shell.run_cell(line, store_history=False)
337 except:
338 self.log.warn("Error in executing line in user namespace: %s" %
339 line)
340 self.shell.showtraceback()
341
342 # Like Python itself, ignore the second if the first of these is present
343 elif self.file_to_run:
344 fname = self.file_to_run
345 try:
346 self._exec_file(fname)
347 except:
348 self.log.warn("Error in executing file in user namespace: %s" %
349 fname)
350 self.shell.showtraceback()
351
352 def _run_module(self):
353 """Run module specified at the command-line."""
354 if self.module_to_run:
355 # Make sure that the module gets a proper sys.argv as if it were
356 # run using `python -m`.
357 save_argv = sys.argv
358 sys.argv = [sys.executable] + self.extra_args
359 try:
360 self.shell.safe_run_module(self.module_to_run,
361 self.shell.user_ns)
362 finally:
363 sys.argv = save_argv
364
[end of IPython/core/shellapp.py]
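
The `InteractiveShellApp` docstring above spells out the initialization contract for subclasses. As a quick illustration, here is a minimal, hypothetical sketch of that contract (IPython's real applications such as `TerminalIPythonApp` additionally inherit from `BaseIPythonApplication`, which supplies `log`, `profile_dir` and configuration loading, so this is not a drop-in replacement for them):

```python
# Hypothetical sketch only: shows the call order prescribed by the
# InteractiveShellApp docstring, not a real IPython application class.
from IPython.core.interactiveshell import InteractiveShell
from IPython.core.shellapp import InteractiveShellApp


class MinimalShellApp(InteractiveShellApp):
    def init_shell(self):
        # The mixin leaves this to subclasses: create the shell instance.
        self.shell = InteractiveShell.instance(config=self.config)

    def initialize(self):
        # Order taken from the class docstring above.
        self.init_path()        # put '' at the front of sys.path
        self.init_shell()       # create self.shell
        self.init_gui_pylab()   # optional GUI / pylab event loop integration
        self.init_extensions()  # 'storemagic' plus any configured extensions
        self.init_code()        # startup files, exec_lines, -c / -m handling
```
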
[start of docs/sphinxext/ipython_directive.py]
1 # -*- coding: utf-8 -*-
2 """Sphinx directive to support embedded IPython code.
3
4 This directive allows pasting of entire interactive IPython sessions, prompts
5 and all, and their code will actually get re-executed at doc build time, with
6 all prompts renumbered sequentially. It also allows you to input code as a pure
7 python input by giving the argument python to the directive. The output looks
8 like an interactive ipython section.
9
10 To enable this directive, simply list it in your Sphinx ``conf.py`` file
11 (making sure the directory where you placed it is visible to sphinx, as is
12 needed for all Sphinx directives).
13
14 By default this directive assumes that your prompts are unchanged IPython ones,
15 but this can be customized. The configurable options that can be placed in
16 conf.py are
17
18 ipython_savefig_dir:
19 The directory in which to save the figures. This is relative to the
20 Sphinx source directory. The default is `html_static_path`.
21 ipython_rgxin:
22 The compiled regular expression to denote the start of IPython input
23 lines. The default is re.compile('In \[(\d+)\]:\s?(.*)\s*'). You
24 shouldn't need to change this.
25 ipython_rgxout:
26 The compiled regular expression to denote the start of IPython output
27 lines. The default is re.compile('Out\[(\d+)\]:\s?(.*)\s*'). You
28 shouldn't need to change this.
29 ipython_promptin:
30 The string to represent the IPython input prompt in the generated ReST.
31 The default is 'In [%d]:'. This expects that the line numbers are used
32 in the prompt.
33 ipython_promptout:
34
35     The string to represent the IPython output prompt in the generated ReST.
36     The default is 'Out[%d]:'. This expects that the line numbers are used
37 in the prompt.
38
39 ToDo
40 ----
41
42 - Turn the ad-hoc test() function into a real test suite.
43 - Break up ipython-specific functionality from matplotlib stuff into better
44 separated code.
45
46 Authors
47 -------
48
49 - John D Hunter: original author.
50 - Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
51 - Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
52 - Skipper Seabold, refactoring, cleanups, pure python addition
53 """
54
55 #-----------------------------------------------------------------------------
56 # Imports
57 #-----------------------------------------------------------------------------
58
59 # Stdlib
60 import cStringIO
61 import os
62 import re
63 import sys
64 import tempfile
65 import ast
66
67 # To keep compatibility with various python versions
68 try:
69 from hashlib import md5
70 except ImportError:
71 from md5 import md5
72
73 # Third-party
74 import matplotlib
75 import sphinx
76 from docutils.parsers.rst import directives
77 from docutils import nodes
78 from sphinx.util.compat import Directive
79
80 matplotlib.use('Agg')
81
82 # Our own
83 from IPython import Config, InteractiveShell
84 from IPython.core.profiledir import ProfileDir
85 from IPython.utils import io
86
87 #-----------------------------------------------------------------------------
88 # Globals
89 #-----------------------------------------------------------------------------
90 # for tokenizing blocks
91 COMMENT, INPUT, OUTPUT = range(3)
92
93 #-----------------------------------------------------------------------------
94 # Functions and class declarations
95 #-----------------------------------------------------------------------------
96 def block_parser(part, rgxin, rgxout, fmtin, fmtout):
97 """
98 part is a string of ipython text, comprised of at most one
99     input, one output, comments, and blank lines. The block parser
100 parses the text into a list of::
101
102 blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
103
104 where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
105 data is, depending on the type of token::
106
107 COMMENT : the comment string
108
109 INPUT: the (DECORATOR, INPUT_LINE, REST) where
110 DECORATOR: the input decorator (or None)
111 INPUT_LINE: the input as string (possibly multi-line)
112 REST : any stdout generated by the input line (not OUTPUT)
113
114
115 OUTPUT: the output string, possibly multi-line
116 """
117
118 block = []
119 lines = part.split('\n')
120 N = len(lines)
121 i = 0
122 decorator = None
123 while 1:
124
125 if i==N:
126 # nothing left to parse -- the last line
127 break
128
129 line = lines[i]
130 i += 1
131 line_stripped = line.strip()
132 if line_stripped.startswith('#'):
133 block.append((COMMENT, line))
134 continue
135
136 if line_stripped.startswith('@'):
137 # we're assuming at most one decorator -- may need to
138 # rethink
139 decorator = line_stripped
140 continue
141
142 # does this look like an input line?
143 matchin = rgxin.match(line)
144 if matchin:
145 lineno, inputline = int(matchin.group(1)), matchin.group(2)
146
147 # the ....: continuation string
148 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
149 Nc = len(continuation)
150 # input lines can continue on for more than one line, if
151 # we have a '\' line continuation char or a function call
152 # echo line 'print'. The input line can only be
153 # terminated by the end of the block or an output line, so
154 # we parse out the rest of the input line if it is
155 # multiline as well as any echo text
156
157 rest = []
158 while i<N:
159
160 # look ahead; if the next line is blank, or a comment, or
161 # an output line, we're done
162
163 nextline = lines[i]
164 matchout = rgxout.match(nextline)
165 #print "nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation))
166 if matchout or nextline.startswith('#'):
167 break
168 elif nextline.startswith(continuation):
169 inputline += '\n' + nextline[Nc:]
170 else:
171 rest.append(nextline)
172 i+= 1
173
174 block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
175 continue
176
177 # if it looks like an output line grab all the text to the end
178 # of the block
179 matchout = rgxout.match(line)
180 if matchout:
181 lineno, output = int(matchout.group(1)), matchout.group(2)
182 if i<N-1:
183 output = '\n'.join([output] + lines[i:])
184
185 block.append((OUTPUT, output))
186 break
187
188 return block
189
190 class EmbeddedSphinxShell(object):
191 """An embedded IPython instance to run inside Sphinx"""
192
193 def __init__(self):
194
195 self.cout = cStringIO.StringIO()
196
197
198 # Create config object for IPython
199 config = Config()
200 config.Global.display_banner = False
201 config.Global.exec_lines = ['import numpy as np',
202 'from pylab import *'
203 ]
204 config.InteractiveShell.autocall = False
205 config.InteractiveShell.autoindent = False
206 config.InteractiveShell.colors = 'NoColor'
207
208 # create a profile so instance history isn't saved
209 tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
210 profname = 'auto_profile_sphinx_build'
211 pdir = os.path.join(tmp_profile_dir,profname)
212 profile = ProfileDir.create_profile_dir(pdir)
213
214 # Create and initialize ipython, but don't start its mainloop
215 IP = InteractiveShell.instance(config=config, profile_dir=profile)
216 # io.stdout redirect must be done *after* instantiating InteractiveShell
217 io.stdout = self.cout
218 io.stderr = self.cout
219
220 # For debugging, so we can see normal output, use this:
221 #from IPython.utils.io import Tee
222 #io.stdout = Tee(self.cout, channel='stdout') # dbg
223 #io.stderr = Tee(self.cout, channel='stderr') # dbg
224
225 # Store a few parts of IPython we'll need.
226 self.IP = IP
227 self.user_ns = self.IP.user_ns
228 self.user_global_ns = self.IP.user_global_ns
229
230 self.input = ''
231 self.output = ''
232
233 self.is_verbatim = False
234 self.is_doctest = False
235 self.is_suppress = False
236
237 # on the first call to the savefig decorator, we'll import
238 # pyplot as plt so we can make a call to the plt.gcf().savefig
239 self._pyplot_imported = False
240
241 def clear_cout(self):
242 self.cout.seek(0)
243 self.cout.truncate(0)
244
245 def process_input_line(self, line, store_history=True):
246 """process the input, capturing stdout"""
247 #print "input='%s'"%self.input
248 stdout = sys.stdout
249 splitter = self.IP.input_splitter
250 try:
251 sys.stdout = self.cout
252 splitter.push(line)
253 more = splitter.push_accepts_more()
254 if not more:
255 source_raw = splitter.source_raw_reset()[1]
256 self.IP.run_cell(source_raw, store_history=store_history)
257 finally:
258 sys.stdout = stdout
259
260 def process_image(self, decorator):
261 """
262 # build out an image directive like
263 # .. image:: somefile.png
264 # :width 4in
265 #
266 # from an input like
267 # savefig somefile.png width=4in
268 """
269 savefig_dir = self.savefig_dir
270 source_dir = self.source_dir
271 saveargs = decorator.split(' ')
272 filename = saveargs[1]
273 # insert relative path to image file in source
274 outfile = os.path.relpath(os.path.join(savefig_dir,filename),
275 source_dir)
276
277 imagerows = ['.. image:: %s'%outfile]
278
279 for kwarg in saveargs[2:]:
280 arg, val = kwarg.split('=')
281 arg = arg.strip()
282 val = val.strip()
283 imagerows.append(' :%s: %s'%(arg, val))
284
285 image_file = os.path.basename(outfile) # only return file name
286 image_directive = '\n'.join(imagerows)
287 return image_file, image_directive
288
289
290 # Callbacks for each type of token
291 def process_input(self, data, input_prompt, lineno):
292 """Process data block for INPUT token."""
293 decorator, input, rest = data
294 image_file = None
295 image_directive = None
296 #print 'INPUT:', data # dbg
297 is_verbatim = decorator=='@verbatim' or self.is_verbatim
298 is_doctest = decorator=='@doctest' or self.is_doctest
299 is_suppress = decorator=='@suppress' or self.is_suppress
300 is_savefig = decorator is not None and \
301 decorator.startswith('@savefig')
302
303 input_lines = input.split('\n')
304 if len(input_lines) > 1:
305 if input_lines[-1] != "":
306 input_lines.append('') # make sure there's a blank line
307 # so splitter buffer gets reset
308
309 continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
310 Nc = len(continuation)
311
312 if is_savefig:
313 image_file, image_directive = self.process_image(decorator)
314
315 ret = []
316 is_semicolon = False
317
318 for i, line in enumerate(input_lines):
319 if line.endswith(';'):
320 is_semicolon = True
321
322 if i==0:
323 # process the first input line
324 if is_verbatim:
325 self.process_input_line('')
326 self.IP.execution_count += 1 # increment it anyway
327 else:
328 # only submit the line in non-verbatim mode
329 self.process_input_line(line, store_history=True)
330 formatted_line = '%s %s'%(input_prompt, line)
331 else:
332 # process a continuation line
333 if not is_verbatim:
334 self.process_input_line(line, store_history=True)
335
336 formatted_line = '%s %s'%(continuation, line)
337
338 if not is_suppress:
339 ret.append(formatted_line)
340
341 if not is_suppress and len(rest.strip()) and is_verbatim:
342 # the "rest" is the standard output of the
343 # input, which needs to be added in
344 # verbatim mode
345 ret.append(rest)
346
347 self.cout.seek(0)
348 output = self.cout.read()
349 if not is_suppress and not is_semicolon:
350 ret.append(output)
351 elif is_semicolon: # get spacing right
352 ret.append('')
353
354 self.cout.truncate(0)
355 return (ret, input_lines, output, is_doctest, image_file,
356 image_directive)
357 #print 'OUTPUT', output # dbg
358
359 def process_output(self, data, output_prompt,
360 input_lines, output, is_doctest, image_file):
361 """Process data block for OUTPUT token."""
362 if is_doctest:
363 submitted = data.strip()
364 found = output
365 if found is not None:
366 found = found.strip()
367
368 # XXX - fperez: in 0.11, 'output' never comes with the prompt
369 # in it, just the actual output text. So I think all this code
370 # can be nuked...
371
372 # the above comment does not appear to be accurate... (minrk)
373
374 ind = found.find(output_prompt)
375 if ind<0:
376 e='output prompt="%s" does not match out line=%s' % \
377 (output_prompt, found)
378 raise RuntimeError(e)
379 found = found[len(output_prompt):].strip()
380
381 if found!=submitted:
382 e = ('doctest failure for input_lines="%s" with '
383 'found_output="%s" and submitted output="%s"' %
384 (input_lines, found, submitted) )
385 raise RuntimeError(e)
386 #print 'doctest PASSED for input_lines="%s" with found_output="%s" and submitted output="%s"'%(input_lines, found, submitted)
387
388 def process_comment(self, data):
389 """Process data fPblock for COMMENT token."""
390 if not self.is_suppress:
391 return [data]
392
393 def save_image(self, image_file):
394 """
395 Saves the image file to disk.
396 """
397 self.ensure_pyplot()
398 command = 'plt.gcf().savefig("%s")'%image_file
399 #print 'SAVEFIG', command # dbg
400 self.process_input_line('bookmark ipy_thisdir', store_history=False)
401 self.process_input_line('cd -b ipy_savedir', store_history=False)
402 self.process_input_line(command, store_history=False)
403 self.process_input_line('cd -b ipy_thisdir', store_history=False)
404 self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
405 self.clear_cout()
406
407
408 def process_block(self, block):
409 """
410 process block from the block_parser and return a list of processed lines
411 """
412 ret = []
413 output = None
414 input_lines = None
415 lineno = self.IP.execution_count
416
417 input_prompt = self.promptin%lineno
418 output_prompt = self.promptout%lineno
419 image_file = None
420 image_directive = None
421
422 for token, data in block:
423 if token==COMMENT:
424 out_data = self.process_comment(data)
425 elif token==INPUT:
426 (out_data, input_lines, output, is_doctest, image_file,
427 image_directive) = \
428 self.process_input(data, input_prompt, lineno)
429 elif token==OUTPUT:
430 out_data = \
431 self.process_output(data, output_prompt,
432 input_lines, output, is_doctest,
433 image_file)
434 if out_data:
435 ret.extend(out_data)
436
437 # save the image files
438 if image_file is not None:
439 self.save_image(image_file)
440
441 return ret, image_directive
442
443 def ensure_pyplot(self):
444 if self._pyplot_imported:
445 return
446 self.process_input_line('import matplotlib.pyplot as plt',
447 store_history=False)
448
449 def process_pure_python(self, content):
450 """
451         content is a list of strings. It is unedited directive content.
452
453 This runs it line by line in the InteractiveShell, prepends
454 prompts as needed capturing stderr and stdout, then returns
455 the content as a list as if it were ipython code
456 """
457 output = []
458 savefig = False # keep up with this to clear figure
459 multiline = False # to handle line continuation
460 multiline_start = None
461 fmtin = self.promptin
462
463 ct = 0
464
465 for lineno, line in enumerate(content):
466
467 line_stripped = line.strip()
468 if not len(line):
469 output.append(line)
470 continue
471
472 # handle decorators
473 if line_stripped.startswith('@'):
474 output.extend([line])
475 if 'savefig' in line:
476 savefig = True # and need to clear figure
477 continue
478
479 # handle comments
480 if line_stripped.startswith('#'):
481 output.extend([line])
482 continue
483
484 # deal with lines checking for multiline
485 continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
486 if not multiline:
487 modified = u"%s %s" % (fmtin % ct, line_stripped)
488 output.append(modified)
489 ct += 1
490 try:
491 ast.parse(line_stripped)
492 output.append(u'')
493 except Exception: # on a multiline
494 multiline = True
495 multiline_start = lineno
496 else: # still on a multiline
497 modified = u'%s %s' % (continuation, line)
498 output.append(modified)
499
500 # if the next line is indented, it should be part of multiline
501 if len(content) > lineno + 1:
502 nextline = content[lineno + 1]
503 if len(nextline) - len(nextline.lstrip()) > 3:
504 continue
505 try:
506 mod = ast.parse(
507 '\n'.join(content[multiline_start:lineno+1]))
508 if isinstance(mod.body[0], ast.FunctionDef):
509 # check to see if we have the whole function
510 for element in mod.body[0].body:
511 if isinstance(element, ast.Return):
512 multiline = False
513 else:
514 output.append(u'')
515 multiline = False
516 except Exception:
517 pass
518
519 if savefig: # clear figure if plotted
520 self.ensure_pyplot()
521 self.process_input_line('plt.clf()', store_history=False)
522 self.clear_cout()
523 savefig = False
524
525 return output
526
527 class IpythonDirective(Directive):
528
529 has_content = True
530 required_arguments = 0
531 optional_arguments = 4 # python, suppress, verbatim, doctest
532     final_argument_whitespace = True
533 option_spec = { 'python': directives.unchanged,
534 'suppress' : directives.flag,
535 'verbatim' : directives.flag,
536 'doctest' : directives.flag,
537 }
538
539 shell = EmbeddedSphinxShell()
540
541 seen_docs = set()
542
543 def get_config_options(self):
544 # contains sphinx configuration variables
545 config = self.state.document.settings.env.config
546
547 # get config variables to set figure output directory
548 confdir = self.state.document.settings.env.app.confdir
549 savefig_dir = config.ipython_savefig_dir
550 source_dir = os.path.dirname(self.state.document.current_source)
551 if savefig_dir is None:
552 savefig_dir = config.html_static_path
553 if isinstance(savefig_dir, list):
554 savefig_dir = savefig_dir[0] # safe to assume only one path?
555 savefig_dir = os.path.join(confdir, savefig_dir)
556
557 # get regex and prompt stuff
558 rgxin = config.ipython_rgxin
559 rgxout = config.ipython_rgxout
560 promptin = config.ipython_promptin
561 promptout = config.ipython_promptout
562
563 return savefig_dir, source_dir, rgxin, rgxout, promptin, promptout
564
565 def setup(self):
566 # reset the execution count if we haven't processed this doc
567 #NOTE: this may be borked if there are multiple seen_doc tmp files
568 #check time stamp?
569
570
571 if not self.state.document.current_source in self.seen_docs:
572 self.shell.IP.history_manager.reset()
573 self.shell.IP.execution_count = 1
574 self.seen_docs.add(self.state.document.current_source)
575
576
577
578 # get config values
579 (savefig_dir, source_dir, rgxin,
580 rgxout, promptin, promptout) = self.get_config_options()
581
582 # and attach to shell so we don't have to pass them around
583 self.shell.rgxin = rgxin
584 self.shell.rgxout = rgxout
585 self.shell.promptin = promptin
586 self.shell.promptout = promptout
587 self.shell.savefig_dir = savefig_dir
588 self.shell.source_dir = source_dir
589
590 # setup bookmark for saving figures directory
591
592 self.shell.process_input_line('bookmark ipy_savedir %s'%savefig_dir,
593 store_history=False)
594 self.shell.clear_cout()
595
596 return rgxin, rgxout, promptin, promptout
597
598
599 def teardown(self):
600 # delete last bookmark
601 self.shell.process_input_line('bookmark -d ipy_savedir',
602 store_history=False)
603 self.shell.clear_cout()
604
605 def run(self):
606 debug = False
607
608 #TODO, any reason block_parser can't be a method of embeddable shell
609 # then we wouldn't have to carry these around
610 rgxin, rgxout, promptin, promptout = self.setup()
611
612 options = self.options
613 self.shell.is_suppress = 'suppress' in options
614 self.shell.is_doctest = 'doctest' in options
615 self.shell.is_verbatim = 'verbatim' in options
616
617
618 # handle pure python code
619 if 'python' in self.arguments:
620 content = self.content
621 self.content = self.shell.process_pure_python(content)
622
623 parts = '\n'.join(self.content).split('\n\n')
624
625 lines = ['.. code-block:: ipython','']
626 figures = []
627
628 for part in parts:
629
630 block = block_parser(part, rgxin, rgxout, promptin, promptout)
631
632 if len(block):
633 rows, figure = self.shell.process_block(block)
634 for row in rows:
635 lines.extend([' %s'%line for line in row.split('\n')])
636
637 if figure is not None:
638 figures.append(figure)
639
640 #text = '\n'.join(lines)
641 #figs = '\n'.join(figures)
642
643 for figure in figures:
644 lines.append('')
645 lines.extend(figure.split('\n'))
646 lines.append('')
647
648 #print lines
649 if len(lines)>2:
650 if debug:
651 print '\n'.join(lines)
652 else: #NOTE: this raises some errors, what's it for?
653 #print 'INSERTING %d lines'%len(lines)
654 self.state_machine.insert_input(
655 lines, self.state_machine.input_lines.source(0))
656
657 text = '\n'.join(lines)
658 txtnode = nodes.literal_block(text, text)
659 txtnode['language'] = 'ipython'
660 #imgnode = nodes.image(figs)
661
662 # cleanup
663 self.teardown()
664
665 return []#, imgnode]
666
667 # Enable as a proper Sphinx directive
668 def setup(app):
669 setup.app = app
670
671 app.add_directive('ipython', IpythonDirective)
672 app.add_config_value('ipython_savefig_dir', None, True)
673 app.add_config_value('ipython_rgxin',
674 re.compile('In \[(\d+)\]:\s?(.*)\s*'), True)
675 app.add_config_value('ipython_rgxout',
676 re.compile('Out\[(\d+)\]:\s?(.*)\s*'), True)
677 app.add_config_value('ipython_promptin', 'In [%d]:', True)
678 app.add_config_value('ipython_promptout', 'Out[%d]:', True)
679
680
681 # Simple smoke test, needs to be converted to a proper automatic test.
682 def test():
683
684 examples = [
685 r"""
686 In [9]: pwd
687 Out[9]: '/home/jdhunter/py4science/book'
688
689 In [10]: cd bookdata/
690 /home/jdhunter/py4science/book/bookdata
691
692 In [2]: from pylab import *
693
694 In [2]: ion()
695
696 In [3]: im = imread('stinkbug.png')
697
698 @savefig mystinkbug.png width=4in
699 In [4]: imshow(im)
700 Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
701
702 """,
703 r"""
704
705 In [1]: x = 'hello world'
706
707 # string methods can be
708 # used to alter the string
709 @doctest
710 In [2]: x.upper()
711 Out[2]: 'HELLO WORLD'
712
713 @verbatim
714 In [3]: x.st<TAB>
715 x.startswith x.strip
716 """,
717 r"""
718
719 In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
720 .....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
721
722 In [131]: print url.split('&')
723 ['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
724
725 In [60]: import urllib
726
727 """,
728 r"""\
729
730 In [133]: import numpy.random
731
732 @suppress
733 In [134]: numpy.random.seed(2358)
734
735 @doctest
736 In [135]: numpy.random.rand(10,2)
737 Out[135]:
738 array([[ 0.64524308, 0.59943846],
739 [ 0.47102322, 0.8715456 ],
740 [ 0.29370834, 0.74776844],
741 [ 0.99539577, 0.1313423 ],
742 [ 0.16250302, 0.21103583],
743 [ 0.81626524, 0.1312433 ],
744 [ 0.67338089, 0.72302393],
745 [ 0.7566368 , 0.07033696],
746 [ 0.22591016, 0.77731835],
747 [ 0.0072729 , 0.34273127]])
748
749 """,
750
751 r"""
752 In [106]: print x
753 jdh
754
755 In [109]: for i in range(10):
756 .....: print i
757 .....:
758 .....:
759 0
760 1
761 2
762 3
763 4
764 5
765 6
766 7
767 8
768 9
769 """,
770
771 r"""
772
773 In [144]: from pylab import *
774
775 In [145]: ion()
776
777 # use a semicolon to suppress the output
778 @savefig test_hist.png width=4in
779 In [151]: hist(np.random.randn(10000), 100);
780
781
782 @savefig test_plot.png width=4in
783 In [151]: plot(np.random.randn(10000), 'o');
784 """,
785
786 r"""
787 # use a semicolon to suppress the output
788 In [151]: plt.clf()
789
790 @savefig plot_simple.png width=4in
791 In [151]: plot([1,2,3])
792
793 @savefig hist_simple.png width=4in
794 In [151]: hist(np.random.randn(10000), 100);
795
796 """,
797 r"""
798 # update the current fig
799 In [151]: ylabel('number')
800
801 In [152]: title('normal distribution')
802
803
804 @savefig hist_with_text.png
805 In [153]: grid(True)
806
807 """,
808 ]
809 # skip local-file depending first example:
810 examples = examples[1:]
811
812 #ipython_directive.DEBUG = True # dbg
813 #options = dict(suppress=True) # dbg
814 options = dict()
815 for example in examples:
816 content = example.split('\n')
817 ipython_directive('debug', arguments=None, options=options,
818 content=content, lineno=0,
819 content_offset=None, block_text=None,
820 state=None, state_machine=None,
821 )
822
823 # Run test suite as a script
824 if __name__=='__main__':
825 if not os.path.isdir('_static'):
826 os.mkdir('_static')
827 test()
828 print 'All OK? Check figures in _static/'
829
[end of docs/sphinxext/ipython_directive.py]
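
Tying the module docstring together with the `setup(app)` registration above, a project would enable the directive with a `conf.py` fragment along these lines (an illustrative sketch; the path juggling and the explicit option values simply restate the documented defaults):

```python
# docs/source/conf.py (illustrative sketch)
import os
import re
import sys

# Make docs/sphinxext/ importable so Sphinx can find the extension modules.
sys.path.insert(0, os.path.abspath('../sphinxext'))

extensions = [
    'ipython_console_highlighting',
    'ipython_directive',
]

# These restate the defaults registered by setup(app); override as needed.
ipython_savefig_dir = None   # falls back to html_static_path
ipython_rgxin = re.compile(r'In \[(\d+)\]:\s?(.*)\s*')
ipython_rgxout = re.compile(r'Out\[(\d+)\]:\s?(.*)\s*')
ipython_promptin = 'In [%d]:'
ipython_promptout = 'Out[%d]:'
```

In the reST sources the directive is then written as a `.. ipython::` block whose body contains `In [N]:` prompts, as in the examples embedded in `test()` above.
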
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
b60405b63c655dc5596962874309172c6851d1cd
|
Install Sphinx extensions
For what seems like a long time now, matplotlib and IPython have both included `ipython_console_highlighting.py` and `ipython_directive.py`. (I think I wrote the former and John Hunter wrote the latter, originally).
Over time, these extensions have diverged. I think the IPython ones are better maintained as they actually track changes to IPython etc., and logically, I think these belong in IPython and not in matplotlib.
Unfortunately for third-party users of these packages, IPython does not install the sphinx extensions, but merely keeps them in the source tree. This has encouraged a lot of people to rely on the copies in matplotlib (which are installed). Any chance IPython could install them too, so that we could properly deprecate the copies in matplotlib?
|
I can't think of a reason that we shouldn't be doing that. Move them to `IPython.sphinxext`?
EDIT: I overstated the case by accidentally comparing the wrong branches -- there don't seem to be any meaningful differences between the files in matplotlib and IPython. My thesis still stands (I think) that these more logically should live in IPython.
|
2013-07-06T04:25:25Z
|
<patch>
diff --git a/docs/sphinxext/ipython_console_highlighting.py b/IPython/sphinxext/ipython_console_highlighting.py
similarity index 100%
rename from docs/sphinxext/ipython_console_highlighting.py
rename to IPython/sphinxext/ipython_console_highlighting.py
diff --git a/docs/sphinxext/ipython_directive.py b/IPython/sphinxext/ipython_directive.py
similarity index 100%
rename from docs/sphinxext/ipython_directive.py
rename to IPython/sphinxext/ipython_directive.py
diff --git a/docs/autogen_api.py b/docs/autogen_api.py
--- a/docs/autogen_api.py
+++ b/docs/autogen_api.py
@@ -63,4 +63,4 @@
docwriter.write_index(outdir, 'gen.rst',
relative_to = pjoin('source','api')
)
- print '%d files written' % len(docwriter.written_modules)
+ print ('%d files written' % len(docwriter.written_modules))
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -30,9 +30,9 @@
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('../sphinxext'))
-# Import support for ipython console session syntax highlighting (lives
-# in the sphinxext directory defined above)
-import ipython_console_highlighting
+# Import support for ipython console session syntax highlighting
+# (lives IPython's sphinxext subpackage)
+from IPython.sphinxext import ipython_console_highlighting
# We load the ipython release info into a dict by explicit execution
iprelease = {}
@@ -50,8 +50,8 @@
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.inheritance_diagram',
- 'ipython_console_highlighting',
- 'ipython_directive',
+ 'IPython.sphinxext.ipython_console_highlighting',
+ 'IPython.sphinxext.ipython_directive',
'numpydoc', # to preprocess docstrings
'github', # for easy GitHub links
]
@@ -61,7 +61,7 @@
extensions.remove('matplotlib.sphinxext.only_directives')
extensions.remove('matplotlib.sphinxext.mathmpl')
extensions.remove('matplotlib.sphinxext.plot_directive')
- extensions.remove('ipython_directive')
+ extensions.remove('IPython.sphinxext.ipython_directive')
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/docs/sphinxext/apigen.py b/docs/sphinxext/apigen.py
--- a/docs/sphinxext/apigen.py
+++ b/docs/sphinxext/apigen.py
@@ -17,6 +17,8 @@
PyMVPA project, which we've adapted for NIPY use. PyMVPA is an MIT-licensed
project."""
+from __future__ import print_function
+
# Stdlib imports
import ast
import os
@@ -210,7 +212,7 @@ def generate_api_doc(self, uri):
# get the names of all classes and functions
functions, classes = self._parse_module(uri)
if not len(functions) and not len(classes):
- print 'WARNING: Empty -',uri # dbg
+ print ('WARNING: Empty -', uri) # dbg
return ''
# Make a shorter version of the uri that omits the package name for
</patch>
|
[]
|
[]
| |||
pypa__pip-2766
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version self check should not warn for post releases
Post releases are explicitly designed to just fix small errors that won't affect the code itself, things like doc updates. However if we release a post release then the pip version self check will tell everyone to go download it, even though using it isn't really all that important.
Ideally this should just ignore post releases.
</issue>
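
In other words, the self-check should stay quiet when the newest release on PyPI differs from the installed version only by a post-release segment. The following is a rough sketch of such a comparison, not pip's actual fix; it assumes `Version.is_postrelease` and `Version.base_version` are available, which may not be true of the `packaging` copy vendored at this commit:

```python
from pip._vendor.packaging.version import parse


def worth_warning(installed, newest):
    """Return True if the user should be nagged to upgrade."""
    installed, newest = parse(installed), parse(newest)
    if newest <= installed:
        return False
    # A pure post release (e.g. 6.1.1 -> 6.1.1.post1) repackages the same
    # code (doc fixes and the like), so it is not worth a warning.
    if newest.is_postrelease and newest.base_version == installed.base_version:
        return False
    return True


# worth_warning("6.1.1", "6.1.1.post1") -> False
# worth_warning("6.1.1", "7.0.0")       -> True
```
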
<code>
[start of README.rst]
1 pip
2 ===
3
4 The `PyPA recommended
5 <https://python-packaging-user-guide.readthedocs.org/en/latest/current.html>`_
6 tool for installing Python packages.
7
8 * `Installation <https://pip.pypa.io/en/stable/installing.html>`_
9 * `Documentation <https://pip.pypa.io/>`_
10 * `Changelog <https://pip.pypa.io/en/stable/news.html>`_
11 * `Github Page <https://github.com/pypa/pip>`_
12 * `Issue Tracking <https://github.com/pypa/pip/issues>`_
13 * `User mailing list <http://groups.google.com/group/python-virtualenv>`_
14 * `Dev mailing list <http://groups.google.com/group/pypa-dev>`_
15 * User IRC: #pypa on Freenode.
16 * Dev IRC: #pypa-dev on Freenode.
17
18
19 .. image:: https://pypip.in/v/pip/badge.png
20 :target: https://pypi.python.org/pypi/pip
21
22 .. image:: https://secure.travis-ci.org/pypa/pip.png?branch=develop
23 :target: http://travis-ci.org/pypa/pip
24
[end of README.rst]
[start of pip/_vendor/distlib/index.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) 2013 Vinay Sajip.
4 # Licensed to the Python Software Foundation under a contributor agreement.
5 # See LICENSE.txt and CONTRIBUTORS.txt.
6 #
7 import hashlib
8 import logging
9 import os
10 import shutil
11 import subprocess
12 import tempfile
13 try:
14 from threading import Thread
15 except ImportError:
16 from dummy_threading import Thread
17
18 from . import DistlibException
19 from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr,
20 urlparse, build_opener, string_types)
21 from .util import cached_property, zip_dir, ServerProxy
22
23 logger = logging.getLogger(__name__)
24
25 DEFAULT_INDEX = 'https://pypi.python.org/pypi'
26 DEFAULT_REALM = 'pypi'
27
28 class PackageIndex(object):
29 """
30 This class represents a package index compatible with PyPI, the Python
31 Package Index.
32 """
33
34 boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$'
35
36 def __init__(self, url=None):
37 """
38 Initialise an instance.
39
40 :param url: The URL of the index. If not specified, the URL for PyPI is
41 used.
42 """
43 self.url = url or DEFAULT_INDEX
44 self.read_configuration()
45 scheme, netloc, path, params, query, frag = urlparse(self.url)
46 if params or query or frag or scheme not in ('http', 'https'):
47 raise DistlibException('invalid repository: %s' % self.url)
48 self.password_handler = None
49 self.ssl_verifier = None
50 self.gpg = None
51 self.gpg_home = None
52 self.rpc_proxy = None
53 with open(os.devnull, 'w') as sink:
54 for s in ('gpg2', 'gpg'):
55 try:
56 rc = subprocess.check_call([s, '--version'], stdout=sink,
57 stderr=sink)
58 if rc == 0:
59 self.gpg = s
60 break
61 except OSError:
62 pass
63
64 def _get_pypirc_command(self):
65 """
66 Get the distutils command for interacting with PyPI configurations.
67 :return: the command.
68 """
69 from distutils.core import Distribution
70 from distutils.config import PyPIRCCommand
71 d = Distribution()
72 return PyPIRCCommand(d)
73
74 def read_configuration(self):
75 """
76 Read the PyPI access configuration as supported by distutils, getting
77         PyPI to do the actual work. This populates ``username``, ``password``,
78 ``realm`` and ``url`` attributes from the configuration.
79 """
80 # get distutils to do the work
81 c = self._get_pypirc_command()
82 c.repository = self.url
83 cfg = c._read_pypirc()
84 self.username = cfg.get('username')
85 self.password = cfg.get('password')
86 self.realm = cfg.get('realm', 'pypi')
87 self.url = cfg.get('repository', self.url)
88
89 def save_configuration(self):
90 """
91 Save the PyPI access configuration. You must have set ``username`` and
92 ``password`` attributes before calling this method.
93
94 Again, distutils is used to do the actual work.
95 """
96 self.check_credentials()
97 # get distutils to do the work
98 c = self._get_pypirc_command()
99 c._store_pypirc(self.username, self.password)
100
101 def check_credentials(self):
102 """
103 Check that ``username`` and ``password`` have been set, and raise an
104 exception if not.
105 """
106 if self.username is None or self.password is None:
107 raise DistlibException('username and password must be set')
108 pm = HTTPPasswordMgr()
109 _, netloc, _, _, _, _ = urlparse(self.url)
110 pm.add_password(self.realm, netloc, self.username, self.password)
111 self.password_handler = HTTPBasicAuthHandler(pm)
112
113 def register(self, metadata):
114 """
115 Register a distribution on PyPI, using the provided metadata.
116
117 :param metadata: A :class:`Metadata` instance defining at least a name
118 and version number for the distribution to be
119 registered.
120 :return: The HTTP response received from PyPI upon submission of the
121 request.
122 """
123 self.check_credentials()
124 metadata.validate()
125 d = metadata.todict()
126 d[':action'] = 'verify'
127 request = self.encode_request(d.items(), [])
128 response = self.send_request(request)
129 d[':action'] = 'submit'
130 request = self.encode_request(d.items(), [])
131 return self.send_request(request)
132
133 def _reader(self, name, stream, outbuf):
134 """
135         Thread runner for reading lines of text from a subprocess into a buffer.
136
137 :param name: The logical name of the stream (used for logging only).
138         :param stream: The stream to read from. This will typically be a pipe
139 connected to the output stream of a subprocess.
140 :param outbuf: The list to append the read lines to.
141 """
142 while True:
143 s = stream.readline()
144 if not s:
145 break
146 s = s.decode('utf-8').rstrip()
147 outbuf.append(s)
148 logger.debug('%s: %s' % (name, s))
149 stream.close()
150
151 def get_sign_command(self, filename, signer, sign_password,
152 keystore=None):
153 """
154 Return a suitable command for signing a file.
155
156 :param filename: The pathname to the file to be signed.
157 :param signer: The identifier of the signer of the file.
158 :param sign_password: The passphrase for the signer's
159 private key used for signing.
160 :param keystore: The path to a directory which contains the keys
161 used in verification. If not specified, the
162 instance's ``gpg_home`` attribute is used instead.
163 :return: The signing command as a list suitable to be
164 passed to :class:`subprocess.Popen`.
165 """
166 cmd = [self.gpg, '--status-fd', '2', '--no-tty']
167 if keystore is None:
168 keystore = self.gpg_home
169 if keystore:
170 cmd.extend(['--homedir', keystore])
171 if sign_password is not None:
172 cmd.extend(['--batch', '--passphrase-fd', '0'])
173 td = tempfile.mkdtemp()
174 sf = os.path.join(td, os.path.basename(filename) + '.asc')
175 cmd.extend(['--detach-sign', '--armor', '--local-user',
176 signer, '--output', sf, filename])
177 logger.debug('invoking: %s', ' '.join(cmd))
178 return cmd, sf
179
180 def run_command(self, cmd, input_data=None):
181 """
182         Run a command in a child process, passing it any input data specified.
183
184 :param cmd: The command to run.
185 :param input_data: If specified, this must be a byte string containing
186 data to be sent to the child process.
187 :return: A tuple consisting of the subprocess' exit code, a list of
188 lines read from the subprocess' ``stdout``, and a list of
189 lines read from the subprocess' ``stderr``.
190 """
191 kwargs = {
192 'stdout': subprocess.PIPE,
193 'stderr': subprocess.PIPE,
194 }
195 if input_data is not None:
196 kwargs['stdin'] = subprocess.PIPE
197 stdout = []
198 stderr = []
199 p = subprocess.Popen(cmd, **kwargs)
200 # We don't use communicate() here because we may need to
201 # get clever with interacting with the command
202 t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout))
203 t1.start()
204 t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr))
205 t2.start()
206 if input_data is not None:
207 p.stdin.write(input_data)
208 p.stdin.close()
209
210 p.wait()
211 t1.join()
212 t2.join()
213 return p.returncode, stdout, stderr
214
215 def sign_file(self, filename, signer, sign_password, keystore=None):
216 """
217 Sign a file.
218
219 :param filename: The pathname to the file to be signed.
220 :param signer: The identifier of the signer of the file.
221 :param sign_password: The passphrase for the signer's
222 private key used for signing.
223 :param keystore: The path to a directory which contains the keys
224 used in signing. If not specified, the instance's
225 ``gpg_home`` attribute is used instead.
226 :return: The absolute pathname of the file where the signature is
227 stored.
228 """
229 cmd, sig_file = self.get_sign_command(filename, signer, sign_password,
230 keystore)
231 rc, stdout, stderr = self.run_command(cmd,
232 sign_password.encode('utf-8'))
233 if rc != 0:
234 raise DistlibException('sign command failed with error '
235 'code %s' % rc)
236 return sig_file
237
238 def upload_file(self, metadata, filename, signer=None, sign_password=None,
239 filetype='sdist', pyversion='source', keystore=None):
240 """
241 Upload a release file to the index.
242
243 :param metadata: A :class:`Metadata` instance defining at least a name
244 and version number for the file to be uploaded.
245 :param filename: The pathname of the file to be uploaded.
246 :param signer: The identifier of the signer of the file.
247 :param sign_password: The passphrase for the signer's
248 private key used for signing.
249 :param filetype: The type of the file being uploaded. This is the
250 distutils command which produced that file, e.g.
251 ``sdist`` or ``bdist_wheel``.
252 :param pyversion: The version of Python which the release relates
253 to. For code compatible with any Python, this would
254 be ``source``, otherwise it would be e.g. ``3.2``.
255 :param keystore: The path to a directory which contains the keys
256 used in signing. If not specified, the instance's
257 ``gpg_home`` attribute is used instead.
258 :return: The HTTP response received from PyPI upon submission of the
259 request.
260 """
261 self.check_credentials()
262 if not os.path.exists(filename):
263 raise DistlibException('not found: %s' % filename)
264 metadata.validate()
265 d = metadata.todict()
266 sig_file = None
267 if signer:
268 if not self.gpg:
269 logger.warning('no signing program available - not signed')
270 else:
271 sig_file = self.sign_file(filename, signer, sign_password,
272 keystore)
273 with open(filename, 'rb') as f:
274 file_data = f.read()
275 md5_digest = hashlib.md5(file_data).hexdigest()
276 sha256_digest = hashlib.sha256(file_data).hexdigest()
277 d.update({
278 ':action': 'file_upload',
279 'protcol_version': '1',
280 'filetype': filetype,
281 'pyversion': pyversion,
282 'md5_digest': md5_digest,
283 'sha256_digest': sha256_digest,
284 })
285 files = [('content', os.path.basename(filename), file_data)]
286 if sig_file:
287 with open(sig_file, 'rb') as f:
288 sig_data = f.read()
289 files.append(('gpg_signature', os.path.basename(sig_file),
290 sig_data))
291 shutil.rmtree(os.path.dirname(sig_file))
292 request = self.encode_request(d.items(), files)
293 return self.send_request(request)
294
295 def upload_documentation(self, metadata, doc_dir):
296 """
297 Upload documentation to the index.
298
299 :param metadata: A :class:`Metadata` instance defining at least a name
300 and version number for the documentation to be
301 uploaded.
302 :param doc_dir: The pathname of the directory which contains the
303 documentation. This should be the directory that
304 contains the ``index.html`` for the documentation.
305 :return: The HTTP response received from PyPI upon submission of the
306 request.
307 """
308 self.check_credentials()
309 if not os.path.isdir(doc_dir):
310 raise DistlibException('not a directory: %r' % doc_dir)
311 fn = os.path.join(doc_dir, 'index.html')
312 if not os.path.exists(fn):
313 raise DistlibException('not found: %r' % fn)
314 metadata.validate()
315 name, version = metadata.name, metadata.version
316 zip_data = zip_dir(doc_dir).getvalue()
317 fields = [(':action', 'doc_upload'),
318 ('name', name), ('version', version)]
319 files = [('content', name, zip_data)]
320 request = self.encode_request(fields, files)
321 return self.send_request(request)
322
323 def get_verify_command(self, signature_filename, data_filename,
324 keystore=None):
325 """
326 Return a suitable command for verifying a file.
327
328 :param signature_filename: The pathname to the file containing the
329 signature.
330 :param data_filename: The pathname to the file containing the
331 signed data.
332 :param keystore: The path to a directory which contains the keys
333 used in verification. If not specified, the
334 instance's ``gpg_home`` attribute is used instead.
335 :return: The verifying command as a list suitable to be
336 passed to :class:`subprocess.Popen`.
337 """
338 cmd = [self.gpg, '--status-fd', '2', '--no-tty']
339 if keystore is None:
340 keystore = self.gpg_home
341 if keystore:
342 cmd.extend(['--homedir', keystore])
343 cmd.extend(['--verify', signature_filename, data_filename])
344 logger.debug('invoking: %s', ' '.join(cmd))
345 return cmd
346
347 def verify_signature(self, signature_filename, data_filename,
348 keystore=None):
349 """
350 Verify a signature for a file.
351
352 :param signature_filename: The pathname to the file containing the
353 signature.
354 :param data_filename: The pathname to the file containing the
355 signed data.
356 :param keystore: The path to a directory which contains the keys
357 used in verification. If not specified, the
358 instance's ``gpg_home`` attribute is used instead.
359 :return: True if the signature was verified, else False.
360 """
361 if not self.gpg:
362 raise DistlibException('verification unavailable because gpg '
363 'unavailable')
364 cmd = self.get_verify_command(signature_filename, data_filename,
365 keystore)
366 rc, stdout, stderr = self.run_command(cmd)
367 if rc not in (0, 1):
368 raise DistlibException('verify command failed with error '
369 'code %s' % rc)
370 return rc == 0
371
372 def download_file(self, url, destfile, digest=None, reporthook=None):
373 """
374 This is a convenience method for downloading a file from an URL.
375 Normally, this will be a file from the index, though currently
376 no check is made for this (i.e. a file can be downloaded from
377 anywhere).
378
379 The method is just like the :func:`urlretrieve` function in the
380 standard library, except that it allows digest computation to be
381 done during download and checking that the downloaded data
382         matches any expected value.
383
384 :param url: The URL of the file to be downloaded (assumed to be
385 available via an HTTP GET request).
386 :param destfile: The pathname where the downloaded file is to be
387 saved.
388 :param digest: If specified, this must be a (hasher, value)
389 tuple, where hasher is the algorithm used (e.g.
390 ``'md5'``) and ``value`` is the expected value.
391 :param reporthook: The same as for :func:`urlretrieve` in the
392 standard library.
393 """
394 if digest is None:
395 digester = None
396 logger.debug('No digest specified')
397 else:
398 if isinstance(digest, (list, tuple)):
399 hasher, digest = digest
400 else:
401 hasher = 'md5'
402 digester = getattr(hashlib, hasher)()
403 logger.debug('Digest specified: %s' % digest)
404 # The following code is equivalent to urlretrieve.
405 # We need to do it this way so that we can compute the
406 # digest of the file as we go.
407 with open(destfile, 'wb') as dfp:
408 # addinfourl is not a context manager on 2.x
409 # so we have to use try/finally
410 sfp = self.send_request(Request(url))
411 try:
412 headers = sfp.info()
413 blocksize = 8192
414 size = -1
415 read = 0
416 blocknum = 0
417 if "content-length" in headers:
418 size = int(headers["Content-Length"])
419 if reporthook:
420 reporthook(blocknum, blocksize, size)
421 while True:
422 block = sfp.read(blocksize)
423 if not block:
424 break
425 read += len(block)
426 dfp.write(block)
427 if digester:
428 digester.update(block)
429 blocknum += 1
430 if reporthook:
431 reporthook(blocknum, blocksize, size)
432 finally:
433 sfp.close()
434
435 # check that we got the whole file, if we can
436 if size >= 0 and read < size:
437 raise DistlibException(
438 'retrieval incomplete: got only %d out of %d bytes'
439 % (read, size))
440 # if we have a digest, it must match.
441 if digester:
442 actual = digester.hexdigest()
443 if digest != actual:
444 raise DistlibException('%s digest mismatch for %s: expected '
445 '%s, got %s' % (hasher, destfile,
446 digest, actual))
447 logger.debug('Digest verified: %s', digest)
448
449 def send_request(self, req):
450 """
451 Send a standard library :class:`Request` to PyPI and return its
452 response.
453
454 :param req: The request to send.
455 :return: The HTTP response from PyPI (a standard library HTTPResponse).
456 """
457 handlers = []
458 if self.password_handler:
459 handlers.append(self.password_handler)
460 if self.ssl_verifier:
461 handlers.append(self.ssl_verifier)
462 opener = build_opener(*handlers)
463 return opener.open(req)
464
465 def encode_request(self, fields, files):
466 """
467 Encode fields and files for posting to an HTTP server.
468
469 :param fields: The fields to send as a list of (fieldname, value)
470 tuples.
471 :param files: The files to send as a list of (fieldname, filename,
472 file_bytes) tuple.
473 """
474 # Adapted from packaging, which in turn was adapted from
475 # http://code.activestate.com/recipes/146306
476
477 parts = []
478 boundary = self.boundary
479 for k, values in fields:
480 if not isinstance(values, (list, tuple)):
481 values = [values]
482
483 for v in values:
484 parts.extend((
485 b'--' + boundary,
486 ('Content-Disposition: form-data; name="%s"' %
487 k).encode('utf-8'),
488 b'',
489 v.encode('utf-8')))
490 for key, filename, value in files:
491 parts.extend((
492 b'--' + boundary,
493 ('Content-Disposition: form-data; name="%s"; filename="%s"' %
494 (key, filename)).encode('utf-8'),
495 b'',
496 value))
497
498 parts.extend((b'--' + boundary + b'--', b''))
499
500 body = b'\r\n'.join(parts)
501 ct = b'multipart/form-data; boundary=' + boundary
502 headers = {
503 'Content-type': ct,
504 'Content-length': str(len(body))
505 }
506 return Request(self.url, body, headers)
507
508 def search(self, terms, operator=None):
509 if isinstance(terms, string_types):
510 terms = {'name': terms}
511 if self.rpc_proxy is None:
512 self.rpc_proxy = ServerProxy(self.url, timeout=3.0)
513 return self.rpc_proxy.search(terms, operator or 'and')
514
[end of pip/_vendor/distlib/index.py]
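
For orientation, the `PackageIndex` class above is a small stand-alone PyPI client; a hypothetical usage sketch of its download path looks like this (the URL and digest are placeholders, not real values):

```python
from pip._vendor.distlib.index import PackageIndex

index = PackageIndex()  # defaults to https://pypi.python.org/pypi
index.download_file(
    "https://example.invalid/packages/example-1.0.tar.gz",  # placeholder URL
    "example-1.0.tar.gz",
    # Optional (hasher, expected_hexdigest) pair, as described in the docstring.
    digest=("sha256", "0" * 64),
)
```

`register()` and `upload_file()` follow the same pattern but require `username` and `password` to be set first (see `check_credentials()`).
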
[start of pip/_vendor/packaging/specifiers.py]
1 # Copyright 2014 Donald Stufft
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function
15
16 import abc
17 import functools
18 import itertools
19 import re
20
21 from ._compat import string_types, with_metaclass
22 from .version import Version, LegacyVersion, parse
23
24
25 class InvalidSpecifier(ValueError):
26 """
27 An invalid specifier was found, users should refer to PEP 440.
28 """
29
30
31 class BaseSpecifier(with_metaclass(abc.ABCMeta, object)):
32
33 @abc.abstractmethod
34 def __str__(self):
35 """
36 Returns the str representation of this Specifier like object. This
37 should be representative of the Specifier itself.
38 """
39
40 @abc.abstractmethod
41 def __hash__(self):
42 """
43 Returns a hash value for this Specifier like object.
44 """
45
46 @abc.abstractmethod
47 def __eq__(self, other):
48 """
49 Returns a boolean representing whether or not the two Specifier like
50 objects are equal.
51 """
52
53 @abc.abstractmethod
54 def __ne__(self, other):
55 """
56 Returns a boolean representing whether or not the two Specifier like
57 objects are not equal.
58 """
59
60 @abc.abstractproperty
61 def prereleases(self):
62 """
63 Returns whether or not pre-releases as a whole are allowed by this
64 specifier.
65 """
66
67 @prereleases.setter
68 def prereleases(self, value):
69 """
70 Sets whether or not pre-releases as a whole are allowed by this
71 specifier.
72 """
73
74 @abc.abstractmethod
75 def contains(self, item, prereleases=None):
76 """
77 Determines if the given item is contained within this specifier.
78 """
79
80 @abc.abstractmethod
81 def filter(self, iterable, prereleases=None):
82 """
83 Takes an iterable of items and filters them so that only items which
84 are contained within this specifier are allowed in it.
85 """
86
87
88 class _IndividualSpecifier(BaseSpecifier):
89
90 _operators = {}
91
92 def __init__(self, spec="", prereleases=None):
93 match = self._regex.search(spec)
94 if not match:
95 raise InvalidSpecifier("Invalid specifier: '{0}'".format(spec))
96
97 self._spec = (
98 match.group("operator").strip(),
99 match.group("version").strip(),
100 )
101
102 # Store whether or not this Specifier should accept prereleases
103 self._prereleases = prereleases
104
105 def __repr__(self):
106 pre = (
107 ", prereleases={0!r}".format(self.prereleases)
108 if self._prereleases is not None
109 else ""
110 )
111
112 return "<{0}({1!r}{2})>".format(
113 self.__class__.__name__,
114 str(self),
115 pre,
116 )
117
118 def __str__(self):
119 return "{0}{1}".format(*self._spec)
120
121 def __hash__(self):
122 return hash(self._spec)
123
124 def __eq__(self, other):
125 if isinstance(other, string_types):
126 try:
127 other = self.__class__(other)
128 except InvalidSpecifier:
129 return NotImplemented
130 elif not isinstance(other, self.__class__):
131 return NotImplemented
132
133 return self._spec == other._spec
134
135 def __ne__(self, other):
136 if isinstance(other, string_types):
137 try:
138 other = self.__class__(other)
139 except InvalidSpecifier:
140 return NotImplemented
141 elif not isinstance(other, self.__class__):
142 return NotImplemented
143
144 return self._spec != other._spec
145
146 def _get_operator(self, op):
147 return getattr(self, "_compare_{0}".format(self._operators[op]))
148
149 def _coerce_version(self, version):
150 if not isinstance(version, (LegacyVersion, Version)):
151 version = parse(version)
152 return version
153
154 @property
155 def prereleases(self):
156 return self._prereleases
157
158 @prereleases.setter
159 def prereleases(self, value):
160 self._prereleases = value
161
162 def contains(self, item, prereleases=None):
163 # Determine if prereleases are to be allowed or not.
164 if prereleases is None:
165 prereleases = self.prereleases
166
167 # Normalize item to a Version or LegacyVersion, this allows us to have
168         # a shortcut for ``"2.0" in Specifier(">=2")``
169 item = self._coerce_version(item)
170
171 # Determine if we should be supporting prereleases in this specifier
172         # or not. If we do not support prereleases, then we can short circuit
173         # the logic if this version is a prerelease.
174 if item.is_prerelease and not prereleases:
175 return False
176
177 # Actually do the comparison to determine if this item is contained
178 # within this Specifier or not.
179 return self._get_operator(self._spec[0])(item, self._spec[1])
180
181 def filter(self, iterable, prereleases=None):
182 yielded = False
183 found_prereleases = []
184
185 kw = {"prereleases": prereleases if prereleases is not None else True}
186
187 # Attempt to iterate over all the values in the iterable and if any of
188 # them match, yield them.
189 for version in iterable:
190 parsed_version = self._coerce_version(version)
191
192 if self.contains(parsed_version, **kw):
193 # If our version is a prerelease, and we were not set to allow
194                 # prereleases, then we'll store it for later in case nothing
195 # else matches this specifier.
196 if (parsed_version.is_prerelease
197 and not (prereleases or self.prereleases)):
198 found_prereleases.append(version)
199 # Either this is not a prerelease, or we should have been
200                 # accepting prereleases from the beginning.
201 else:
202 yielded = True
203 yield version
204
205 # Now that we've iterated over everything, determine if we've yielded
206 # any values, and if we have not and we have any prereleases stored up
207 # then we will go ahead and yield the prereleases.
208 if not yielded and found_prereleases:
209 for version in found_prereleases:
210 yield version
211
212
213 class LegacySpecifier(_IndividualSpecifier):
214
215 _regex = re.compile(
216 r"""
217 ^
218 \s*
219 (?P<operator>(==|!=|<=|>=|<|>))
220 \s*
221 (?P<version>
222 [^\s]* # We just match everything, except for whitespace since this
223 # is a "legacy" specifier and the version string can be just
224 # about anything.
225 )
226 \s*
227 $
228 """,
229 re.VERBOSE | re.IGNORECASE,
230 )
231
232 _operators = {
233 "==": "equal",
234 "!=": "not_equal",
235 "<=": "less_than_equal",
236 ">=": "greater_than_equal",
237 "<": "less_than",
238 ">": "greater_than",
239 }
240
241 def _coerce_version(self, version):
242 if not isinstance(version, LegacyVersion):
243 version = LegacyVersion(str(version))
244 return version
245
246 def _compare_equal(self, prospective, spec):
247 return prospective == self._coerce_version(spec)
248
249 def _compare_not_equal(self, prospective, spec):
250 return prospective != self._coerce_version(spec)
251
252 def _compare_less_than_equal(self, prospective, spec):
253 return prospective <= self._coerce_version(spec)
254
255 def _compare_greater_than_equal(self, prospective, spec):
256 return prospective >= self._coerce_version(spec)
257
258 def _compare_less_than(self, prospective, spec):
259 return prospective < self._coerce_version(spec)
260
261 def _compare_greater_than(self, prospective, spec):
262 return prospective > self._coerce_version(spec)
263
264
265 def _require_version_compare(fn):
266 @functools.wraps(fn)
267 def wrapped(self, prospective, spec):
268 if not isinstance(prospective, Version):
269 return False
270 return fn(self, prospective, spec)
271 return wrapped
272
273
274 class Specifier(_IndividualSpecifier):
275
276 _regex = re.compile(
277 r"""
278 ^
279 \s*
280 (?P<operator>(~=|==|!=|<=|>=|<|>|===))
281 (?P<version>
282 (?:
283 # The identity operators allow for an escape hatch that will
284 # do an exact string match of the version you wish to install.
285 # This will not be parsed by PEP 440 and we cannot determine
286 # any semantic meaning from it. This operator is discouraged
287 # but included entirely as an escape hatch.
288 (?<====) # Only match for the identity operator
289 \s*
290 [^\s]* # We just match everything, except for whitespace
291 # since we are only testing for strict identity.
292 )
293 |
294 (?:
295 # The (non)equality operators allow for wild card and local
296 # versions to be specified so we have to define these two
297 # operators separately to enable that.
298 (?<===|!=) # Only match for equals and not equals
299
300 \s*
301 v?
302 (?:[0-9]+!)? # epoch
303 [0-9]+(?:\.[0-9]+)* # release
304 (?: # pre release
305 [-_\.]?
306 (a|b|c|rc|alpha|beta|pre|preview)
307 [-_\.]?
308 [0-9]*
309 )?
310 (?: # post release
311 (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
312 )?
313
314 # You cannot use a wild card and a dev or local version
315 # together so group them with a | and make them optional.
316 (?:
317 (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
318 (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
319 |
320 \.\* # Wild card syntax of .*
321 )?
322 )
323 |
324 (?:
325 # The compatible operator requires at least two digits in the
326 # release segment.
327 (?<=~=) # Only match for the compatible operator
328
329 \s*
330 v?
331 (?:[0-9]+!)? # epoch
332 [0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *)
333 (?: # pre release
334 [-_\.]?
335 (a|b|c|rc|alpha|beta|pre|preview)
336 [-_\.]?
337 [0-9]*
338 )?
339 (?: # post release
340 (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
341 )?
342 (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
343 )
344 |
345 (?:
346 # All other operators only allow a sub set of what the
347 # (non)equality operators do. Specifically they do not allow
348 # local versions to be specified nor do they allow the prefix
349 # matching wild cards.
350 (?<!==|!=|~=) # We have special cases for these
351 # operators so we want to make sure they
352 # don't match here.
353
354 \s*
355 v?
356 (?:[0-9]+!)? # epoch
357 [0-9]+(?:\.[0-9]+)* # release
358 (?: # pre release
359 [-_\.]?
360 (a|b|c|rc|alpha|beta|pre|preview)
361 [-_\.]?
362 [0-9]*
363 )?
364 (?: # post release
365 (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
366 )?
367 (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
368 )
369 )
370 \s*
371 $
372 """,
373 re.VERBOSE | re.IGNORECASE,
374 )
375
376 _operators = {
377 "~=": "compatible",
378 "==": "equal",
379 "!=": "not_equal",
380 "<=": "less_than_equal",
381 ">=": "greater_than_equal",
382 "<": "less_than",
383 ">": "greater_than",
384 "===": "arbitrary",
385 }
386
387 @_require_version_compare
388 def _compare_compatible(self, prospective, spec):
389 # Compatible releases have an equivalent combination of >= and ==. That
390 # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
391 # implement this in terms of the other specifiers instead of
392 # implementing it ourselves. The only thing we need to do is construct
393 # the other specifiers.
394
395 # We want everything but the last item in the version, but we want to
396 # ignore post and dev releases and we want to treat the pre-release as
397         # its own separate segment.
398 prefix = ".".join(
399 list(
400 itertools.takewhile(
401 lambda x: (not x.startswith("post")
402 and not x.startswith("dev")),
403 _version_split(spec),
404 )
405 )[:-1]
406 )
407
408 # Add the prefix notation to the end of our string
409 prefix += ".*"
410
411 return (self._get_operator(">=")(prospective, spec)
412 and self._get_operator("==")(prospective, prefix))
413
414 @_require_version_compare
415 def _compare_equal(self, prospective, spec):
416 # We need special logic to handle prefix matching
417 if spec.endswith(".*"):
418 # Split the spec out by dots, and pretend that there is an implicit
419 # dot in between a release segment and a pre-release segment.
420 spec = _version_split(spec[:-2]) # Remove the trailing .*
421
422 # Split the prospective version out by dots, and pretend that there
423 # is an implicit dot in between a release segment and a pre-release
424 # segment.
425 prospective = _version_split(str(prospective))
426
427 # Shorten the prospective version to be the same length as the spec
428 # so that we can determine if the specifier is a prefix of the
429 # prospective version or not.
430 prospective = prospective[:len(spec)]
431
432 # Pad out our two sides with zeros so that they both equal the same
433 # length.
434 spec, prospective = _pad_version(spec, prospective)
435 else:
436 # Convert our spec string into a Version
437 spec = Version(spec)
438
439 # If the specifier does not have a local segment, then we want to
440 # act as if the prospective version also does not have a local
441 # segment.
442 if not spec.local:
443 prospective = Version(prospective.public)
444
445 return prospective == spec
446
447 @_require_version_compare
448 def _compare_not_equal(self, prospective, spec):
449 return not self._compare_equal(prospective, spec)
450
451 @_require_version_compare
452 def _compare_less_than_equal(self, prospective, spec):
453 return prospective <= Version(spec)
454
455 @_require_version_compare
456 def _compare_greater_than_equal(self, prospective, spec):
457 return prospective >= Version(spec)
458
459 @_require_version_compare
460 def _compare_less_than(self, prospective, spec):
461 # Convert our spec to a Version instance, since we'll want to work with
462 # it as a version.
463 spec = Version(spec)
464
465 # Check to see if the prospective version is less than the spec
466 # version. If it's not we can short circuit and just return False now
467 # instead of doing extra unneeded work.
468 if not prospective < spec:
469 return False
470
471 # This special case is here so that, unless the specifier itself
472         # includes a pre-release version, we do not accept pre-release
473 # versions for the version mentioned in the specifier (e.g. <3.1 should
474 # not match 3.1.dev0, but should match 3.0.dev0).
475 if not spec.is_prerelease and prospective.is_prerelease:
476 if Version(prospective.base_version) == Version(spec.base_version):
477 return False
478
479 # If we've gotten to here, it means that prospective version is both
480 # less than the spec version *and* it's not a pre-release of the same
481 # version in the spec.
482 return True
483
484 @_require_version_compare
485 def _compare_greater_than(self, prospective, spec):
486 # Convert our spec to a Version instance, since we'll want to work with
487 # it as a version.
488 spec = Version(spec)
489
490 # Check to see if the prospective version is greater than the spec
491 # version. If it's not we can short circuit and just return False now
492 # instead of doing extra unneeded work.
493 if not prospective > spec:
494 return False
495
496 # This special case is here so that, unless the specifier itself
497         # includes a post-release version, we do not accept
498 # post-release versions for the version mentioned in the specifier
499 # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0).
500 if not spec.is_postrelease and prospective.is_postrelease:
501 if Version(prospective.base_version) == Version(spec.base_version):
502 return False
503
504 # Ensure that we do not allow a local version of the version mentioned
505         # in the specifier, which is technically greater than, to match.
506 if prospective.local is not None:
507 if Version(prospective.base_version) == Version(spec.base_version):
508 return False
509
510 # If we've gotten to here, it means that prospective version is both
511 # greater than the spec version *and* it's not a pre-release of the
512 # same version in the spec.
513 return True
514
515 def _compare_arbitrary(self, prospective, spec):
516 return str(prospective).lower() == str(spec).lower()
517
518 @property
519 def prereleases(self):
520 # If there is an explicit prereleases set for this, then we'll just
521 # blindly use that.
522 if self._prereleases is not None:
523 return self._prereleases
524
525 # Look at all of our specifiers and determine if they are inclusive
526         # operators, and if they are, whether they are including an explicit
527 # prerelease.
528 operator, version = self._spec
529 if operator in ["==", ">=", "<=", "~="]:
530 # The == specifier can include a trailing .*, if it does we
531             # want to remove it before parsing.
532 if operator == "==" and version.endswith(".*"):
533 version = version[:-2]
534
535             # Parse the version, and if it is a pre-release then this
536 # specifier allows pre-releases.
537 if parse(version).is_prerelease:
538 return True
539
540 return False
541
542 @prereleases.setter
543 def prereleases(self, value):
544 self._prereleases = value
545
546
547 _prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")
548
549
550 def _version_split(version):
551 result = []
552 for item in version.split("."):
553 match = _prefix_regex.search(item)
554 if match:
555 result.extend(match.groups())
556 else:
557 result.append(item)
558 return result
559
560
561 def _pad_version(left, right):
562 left_split, right_split = [], []
563
564 # Get the release segment of our versions
565 left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
566 right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))
567
568 # Get the rest of our versions
569 left_split.append(left[len(left_split):])
570 right_split.append(left[len(right_split):])
571
572 # Insert our padding
573 left_split.insert(
574 1,
575 ["0"] * max(0, len(right_split[0]) - len(left_split[0])),
576 )
577 right_split.insert(
578 1,
579 ["0"] * max(0, len(left_split[0]) - len(right_split[0])),
580 )
581
582 return (
583 list(itertools.chain(*left_split)),
584 list(itertools.chain(*right_split)),
585 )
586
587
588 class SpecifierSet(BaseSpecifier):
589
590 def __init__(self, specifiers="", prereleases=None):
591         # Split on , to break each individual specifier into its own item, and
592 # strip each item to remove leading/trailing whitespace.
593 specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]
594
595         # Parse each individual specifier, attempting first to make it a
596 # Specifier and falling back to a LegacySpecifier.
597 parsed = set()
598 for specifier in specifiers:
599 try:
600 parsed.add(Specifier(specifier))
601 except InvalidSpecifier:
602 parsed.add(LegacySpecifier(specifier))
603
604 # Turn our parsed specifiers into a frozen set and save them for later.
605 self._specs = frozenset(parsed)
606
607 # Store our prereleases value so we can use it later to determine if
608 # we accept prereleases or not.
609 self._prereleases = prereleases
610
611 def __repr__(self):
612 pre = (
613 ", prereleases={0!r}".format(self.prereleases)
614 if self._prereleases is not None
615 else ""
616 )
617
618 return "<SpecifierSet({0!r}{1})>".format(str(self), pre)
619
620 def __str__(self):
621 return ",".join(sorted(str(s) for s in self._specs))
622
623 def __hash__(self):
624 return hash(self._specs)
625
626 def __and__(self, other):
627 if isinstance(other, string_types):
628 other = SpecifierSet(other)
629 elif not isinstance(other, SpecifierSet):
630 return NotImplemented
631
632 specifier = SpecifierSet()
633 specifier._specs = frozenset(self._specs | other._specs)
634
635 if self._prereleases is None and other._prereleases is not None:
636 specifier._prereleases = other._prereleases
637 elif self._prereleases is not None and other._prereleases is None:
638 specifier._prereleases = self._prereleases
639 elif self._prereleases == other._prereleases:
640 specifier._prereleases = self._prereleases
641 else:
642 raise ValueError(
643 "Cannot combine SpecifierSets with True and False prerelease "
644 "overrides."
645 )
646
647 return specifier
648
649 def __eq__(self, other):
650 if isinstance(other, string_types):
651 other = SpecifierSet(other)
652 elif isinstance(other, _IndividualSpecifier):
653 other = SpecifierSet(str(other))
654 elif not isinstance(other, SpecifierSet):
655 return NotImplemented
656
657 return self._specs == other._specs
658
659 def __ne__(self, other):
660 if isinstance(other, string_types):
661 other = SpecifierSet(other)
662 elif isinstance(other, _IndividualSpecifier):
663 other = SpecifierSet(str(other))
664 elif not isinstance(other, SpecifierSet):
665 return NotImplemented
666
667 return self._specs != other._specs
668
669 @property
670 def prereleases(self):
671 # If we have been given an explicit prerelease modifier, then we'll
672 # pass that through here.
673 if self._prereleases is not None:
674 return self._prereleases
675
676 # If we don't have any specifiers, and we don't have a forced value,
677 # then we'll just return None since we don't know if this should have
678 # pre-releases or not.
679 if not self._specs:
680 return None
681
682 # Otherwise we'll see if any of the given specifiers accept
683 # prereleases, if any of them do we'll return True, otherwise False.
684 return any(s.prereleases for s in self._specs)
685
686 @prereleases.setter
687 def prereleases(self, value):
688 self._prereleases = value
689
690 def contains(self, item, prereleases=None):
691 # Ensure that our item is a Version or LegacyVersion instance.
692 if not isinstance(item, (LegacyVersion, Version)):
693 item = parse(item)
694
695 # Determine if we're forcing a prerelease or not, if we're not forcing
696 # one for this particular filter call, then we'll use whatever the
697 # SpecifierSet thinks for whether or not we should support prereleases.
698 if prereleases is None:
699 prereleases = self.prereleases
700
701 # We can determine if we're going to allow pre-releases by looking to
702 # see if any of the underlying items supports them. If none of them do
703 # and this item is a pre-release then we do not allow it and we can
704 # short circuit that here.
705 # Note: This means that 1.0.dev1 would not be contained in something
706         # like >=1.0.devabc however it would be in >=1.0.devabc,>0.0.dev0
707 if not prereleases and item.is_prerelease:
708 return False
709
710 # We simply dispatch to the underlying specs here to make sure that the
711 # given version is contained within all of them.
712 # Note: This use of all() here means that an empty set of specifiers
713 # will always return True, this is an explicit design decision.
714 return all(
715 s.contains(item, prereleases=prereleases)
716 for s in self._specs
717 )
718
719 def filter(self, iterable, prereleases=None):
720 # Determine if we're forcing a prerelease or not, if we're not forcing
721 # one for this particular filter call, then we'll use whatever the
722 # SpecifierSet thinks for whether or not we should support prereleases.
723 if prereleases is None:
724 prereleases = self.prereleases
725
726 # If we have any specifiers, then we want to wrap our iterable in the
727 # filter method for each one, this will act as a logical AND amongst
728 # each specifier.
729 if self._specs:
730 for spec in self._specs:
731 iterable = spec.filter(iterable, prereleases=bool(prereleases))
732 return iterable
733 # If we do not have any specifiers, then we need to have a rough filter
734 # which will filter out any pre-releases, unless there are no final
735 # releases, and which will filter out LegacyVersion in general.
736 else:
737 filtered = []
738 found_prereleases = []
739
740 for item in iterable:
741                 # Ensure that we have some kind of Version class for this item.
742 if not isinstance(item, (LegacyVersion, Version)):
743 parsed_version = parse(item)
744 else:
745 parsed_version = item
746
747 # Filter out any item which is parsed as a LegacyVersion
748 if isinstance(parsed_version, LegacyVersion):
749 continue
750
751 # Store any item which is a pre-release for later unless we've
752 # already found a final version or we are accepting prereleases
753 if parsed_version.is_prerelease and not prereleases:
754 if not filtered:
755 found_prereleases.append(item)
756 else:
757 filtered.append(item)
758
759 # If we've found no items except for pre-releases, then we'll go
760 # ahead and use the pre-releases
761 if not filtered and found_prereleases and prereleases is None:
762 return found_prereleases
763
764 return filtered
765
[end of pip/_vendor/packaging/specifiers.py]
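For illustration, a minimal usage sketch of the specifier API this module implements, assuming the standalone `packaging` distribution (which exposes the same `SpecifierSet` interface as this vendored copy):

```python
# Illustrative only: assumes the standalone `packaging` package, whose
# SpecifierSet mirrors the vendored module shown above.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=2.2")   # compatible release: behaves like >=2.2,==2.*
print(spec.contains("2.3"))    # True
print(spec.contains("3.0"))    # False -- fails the ==2.* half
print(list(spec.filter(["2.1", "2.2", "2.5", "3.0"])))  # ['2.2', '2.5']

# Pre-releases are excluded unless explicitly allowed.
print(spec.contains("2.3rc1"))                    # False
print(spec.contains("2.3rc1", prereleases=True))  # True
```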
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
43ac83fb1b83e8f5ad773418491eae376bda210d
|
Version self check should not warn for post releases
Post releases are explicitly designed to just fix small errors that won't affect the code itself, things like doc updates. However if we release a post release then the pip version self check will tell everyone to go download it, even though using it isn't really all that important.
Ideally this should just ignore post releases.
|
2015-05-09T17:25:16Z
|
<patch>
diff --git a/pip/utils/outdated.py b/pip/utils/outdated.py
--- a/pip/utils/outdated.py
+++ b/pip/utils/outdated.py
@@ -7,7 +7,7 @@
import sys
from pip._vendor import lockfile
-from pip._vendor import pkg_resources
+from pip._vendor.packaging import version as packaging_version
from pip.compat import total_seconds
from pip.index import PyPI
@@ -122,15 +122,23 @@ def pip_version_check(session):
headers={"Accept": "application/json"},
)
resp.raise_for_status()
- pypi_version = resp.json()["info"]["version"]
+ pypi_version = [
+ v for v in sorted(
+ list(resp.json()["releases"]),
+ key=packaging_version.parse,
+ )
+ if not packaging_version.parse(v).is_prerelease
+ ][-1]
# save that we've performed a check
state.save(pypi_version, current_time)
- pip_version = pkg_resources.parse_version(pip.__version__)
+ pip_version = packaging_version.parse(pip.__version__)
+ remote_version = packaging_version.parse(pypi_version)
# Determine if our pypi_version is older
- if pip_version < pkg_resources.parse_version(pypi_version):
+ if (pip_version < remote_version and
+ pip_version.base_version != remote_version.base_version):
logger.warning(
"You are using pip version %s, however version %s is "
"available.\nYou should consider upgrading via the "
</patch>
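The patch works because a post release shares its `base_version` with the release it annotates, so the added `base_version` comparison keeps the upgrade warning quiet for post releases. A minimal sketch, assuming the standalone `packaging` library (the same `parse`/`base_version` API the patch uses):

```python
from packaging import version as packaging_version

current = packaging_version.parse("7.0.0")       # hypothetical installed pip
remote = packaging_version.parse("7.0.0.post1")  # hypothetical post release on PyPI

print(remote > current)                             # True: post releases sort as newer
print(remote.base_version == current.base_version)  # True: but the base release is the same

# The patched check warns only when both conditions hold, so post releases stay silent.
should_warn = (current < remote
               and current.base_version != remote.base_version)
print(should_warn)  # False
```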
|
[]
|
[]
| ||||
huggingface__transformers-8845
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inconsistent PreTrainedTokenizerBase.pad argument default value & docstring
The docstring states the argument `padding` has a default of `False` but its default is `True`
docstring:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470
arg:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472
This causes issues when using `DataCollatorForLanguageModeling` with an already padded dataset as it resets the attention mask.
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <br>
3 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
4 <br>
5 <p>
6 <p align="center">
7 <a href="https://circleci.com/gh/huggingface/transformers">
8 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
9 </a>
10 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
11 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
12 </a>
13 <a href="https://huggingface.co/transformers/index.html">
14 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online">
15 </a>
16 <a href="https://github.com/huggingface/transformers/releases">
17 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
18 </a>
19 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md">
20 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
21 </a>
22 </p>
23
24 <h3 align="center">
25 <p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0
26 </h3>
27
28 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation, etc in 100+ languages. Its aim is to make cutting-edge NLP easier to use for everyone.
29
30 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture can be used as a standalone and modified to enable quick research experiments.
31
32 🤗 Transformers is backed by the two most popular deep learning libraries, [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them, allowing you to train your models with one then load it for inference with the other.
33
34 ### Recent contributors
35 [](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/0)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/1)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/2)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/3)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/4)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/5)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/6)[](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/7)
36
37 ## Online demos
38
39 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer an [inference API](https://huggingface.co/pricing) to use those models.
40
41 Here are a few examples:
42 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
43 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
44 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
45 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
46 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
47 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
48 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
49
50 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
51
52 ## Quick tour
53
54 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model training. Here is how to quickly use a pipeline to classify positive versus negative texts
55
56 ```python
57 >>> from transformers import pipeline
58
59 # Allocate a pipeline for sentiment-analysis
60 >>> classifier = pipeline('sentiment-analysis')
61 >>> classifier('We are very happy to include pipeline into the transformers repository.')
62 [{'label': 'POSITIVE', 'score': 0.9978193640708923}]
63 ```
64
65 The second line of code downloads and caches the pretrained model used by the pipeline, the third line evaluates it on the given text. Here the answer is "positive" with a confidence of 99.8%.
66
67 Here is another example, using a pipeline to extract question answers from some context:
68
69 ``` python
70 >>> from transformers import pipeline
71
72 # Allocate a pipeline for question-answering
73 >>> question_answerer = pipeline('question-answering')
74 >>> question_answerer({
75 ... 'question': 'What is the name of the repository ?',
76 ... 'context': 'Pipeline have been included in the huggingface/transformers repository'
77 ... })
78 {'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
79
80 ```
81
82 On top of the answer, the pretrained model used here returned its confidence score, along with the start position and its end position in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/transformers/task_summary.html).
83
84 To download and use any of the pretrained models on your given task, you just need to use those three lines of code (PyTorch version):
85 ```python
86 >>> from transformers import AutoTokenizer, AutoModel
87
88 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
89 >>> model = AutoModel.from_pretrained("bert-base-uncased")
90
91 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
92 >>> outputs = model(**inputs)
93 ```
94 or for TensorFlow:
95 ```python
96 >>> from transformers import AutoTokenizer, TFAutoModel
97
98 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
99 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
100
101 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
102 >>> outputs = model(**inputs)
103 ```
104
105 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on one text (or a list of texts), as we can see on the fourth line of both code examples. It will output a dictionary you can directly pass to your model (which is done on the fifth line).
106
107 The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. For instance, [this tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model in a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
108
109 ## Why should I use transformers?
110
111 1. Easy-to-use state-of-the-art models:
112 - High performance on NLU and NLG tasks.
113 - Low barrier to entry for educators and practitioners.
114 - Few user-facing abstractions with just three classes to learn.
115 - A unified API for using all our pretrained models.
116
117 1. Lower compute costs, smaller carbon footprint:
118 - Researchers can share trained models instead of always retraining.
119 - Practitioners can reduce compute time and production costs.
120 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
121
122 1. Choose the right framework for every part of a model's lifetime:
123 - Train state-of-the-art models in 3 lines of code.
124 - Move a single model between TF2.0/PyTorch frameworks at will.
125 - Seamlessly pick the right framework for training, evaluation, production.
126
127 1. Easily customize a model or an example to your needs:
128 - Examples for each architecture to reproduce the results by the official authors of said architecture.
129 - Expose the models internal as consistently as possible.
130 - Model files can be used independently of the library for quick experiments.
131
132 ## Why shouldn't I use transformers?
133
134 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving in additional abstractions/files.
135 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
136 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
137
138 ## Installation
139
140 This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for [examples](https://github.com/huggingface/transformers/tree/master/examples)) and TensorFlow 2.0.
141
142 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
143
144 First, create a virtual environment with the version of Python you're going to use and activate it.
145
146 Then, you will need to install one of, or both, TensorFlow 2.0 and PyTorch.
147 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and/or [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.
148
149 When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows:
150
151 ```bash
152 pip install transformers
153 ```
154
155 If you'd like to play with the examples, you must [install the library from source](https://huggingface.co/transformers/installation.html#installing-from-source).
156
157 ## Models architectures
158
159 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/transformers/model_summary.html) for a high-level summary of each them):
160
161 1. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
162 1. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
163 1. **[BARThez](https://huggingface.co/transformers/model_doc/barthez.html)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
164 1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
165 1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
166 1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
167 1. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
168 1. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
169 1. **[DeBERTa](https://huggingface.co/transformers/model_doc/deberta.html)** (from Microsoft Research) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
170 1. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
171 1. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
172 1. **[DPR](https://huggingface.co/transformers/model_doc/dpr.html)** (from Facebook) released with the paper [Dense Passage Retrieval
173 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
174 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
175 1. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
176 1. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
177 1. **[Funnel Transformer](https://huggingface.co/transformers/model_doc/funnel.html)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
178 1. **[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
179 1. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
180 1. **[LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
181 1. **[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
182 1. **[LXMERT](https://huggingface.co/transformers/model_doc/lxmert.html)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
183 1. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
184 1. **[MBart](https://huggingface.co/transformers/model_doc/mbart.html)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
185 1. **[MT5](https://huggingface.co/transformers/model_doc/mt5.html)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
186 1. **[Pegasus](https://huggingface.co/transformers/model_doc/pegasus.html)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777)> by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
187 1. **[ProphetNet](https://huggingface.co/transformers/model_doc/prophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
188 1. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
189 1. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
190 ultilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
191 1. **[SqueezeBert](https://huggingface.co/transformers/model_doc/squeezebert.html)** released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
192 1. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
193 1. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
194 1. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
195 1. **[XLM-ProphetNet](https://huggingface.co/transformers/model_doc/xlmprophetnet.html)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
196 1. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
197 1. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
198 1. **[Other community models](https://huggingface.co/models)**, contributed by the [community](https://huggingface.co/users).
199 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedbacks before starting your PR.
200
201 These implementations have been tested on several datasets (see the example scripts) and should match the performances of the original implementations. You can find more details on the performances in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
202
203
204 ## Learn more
205
206 | Section | Description |
207 |-|-|
208 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
209 | [Task summary](https://huggingface.co/transformers/task_summary.html) | Tasks supported by 🤗 Transformers |
210 | [Preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) | Using the `Tokenizer` class to prepare data for the models |
211 | [Training and fine-tuning](https://huggingface.co/transformers/training.html) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
212 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
213 | [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) | Upload and share your fine-tuned models with the community |
214 | [Migration](https://huggingface.co/transformers/migration.html) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
215
216 ## Citation
217
218 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
219 ```bibtex
220 @inproceedings{wolf-etal-2020-transformers,
221 title = "Transformers: State-of-the-Art Natural Language Processing",
222 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
223 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
224 month = oct,
225 year = "2020",
226 address = "Online",
227 publisher = "Association for Computational Linguistics",
228 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
229 pages = "38--45"
230 }
231 ```
232
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
610cb106a216cfb99d840648b576f9502189e4d1
|
Inconsistent PreTrainedTokenizerBase.pad argument default value & docstring
The docstring states the argument `padding` has a default of `False` but its default is `True`
docstring:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470
arg:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472
This causes issues when using `DataCollatorForLanguageModeling` with an already padded dataset as it resets the attention mask.
|
Seems to have been added in this commit:
https://github.com/huggingface/transformers/commit/f3065abdb8805f5beaed9ff1e92ce874e655f5c9#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1R2146-R2168
As part of merge https://github.com/huggingface/transformers/pull/6110
Hi, indeed! The docs should be changed to reflect the method signature. Do you want to open a PR?
|
2020-11-30T07:27:13Z
|
<patch>
diff --git a/src/transformers/tokenization_utils_base.py b/src/transformers/tokenization_utils_base.py
--- a/src/transformers/tokenization_utils_base.py
+++ b/src/transformers/tokenization_utils_base.py
@@ -2604,7 +2604,7 @@ def pad(
Instead of :obj:`List[int]` you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors),
see the note above for the return type.
- padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`False`):
+ padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
</patch>
|
[]
|
[]
| |||
docker__compose-1705
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_get_legacy_containers_iter() TypeError: argument of type 'NoneType' is not iterable
I just upgraded from 1.1.0 to 1.3.2, and after dealing with the /tmp directory being mounted as noexec (issue #1339), I ran into another issue that I couldn't find in the backlog:
```
$ docker-compose up -d
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 32, in main
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 34, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 66, in perform_command
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 471, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.project", line 230, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 398, in remove_duplicate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 405, in duplicate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 112, in containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 56, in check_for_legacy_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 138, in get_legacy_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 152, in _get_legacy_containers_iter
TypeError: argument of type 'NoneType' is not iterable
```
Downgrading to 1.3.1 seems to alleviate this behavior.
</issue>
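For context, the failure mode in the traceback above reduces to a one-liner (a hedged sketch; the label key shown is an assumed value of the `LABEL_VERSION` constant, used for illustration only):
```python
# Assumed shape of a container listing entry whose Labels field is reported as null:
container = {"Labels": None}
LABEL_VERSION = "com.docker.compose.version"  # assumed constant value, for illustration only
LABEL_VERSION in container["Labels"]          # TypeError: argument of type 'NoneType' is not iterable
```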
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 *(Previously known as Fig)*
4
5 Compose is a tool for defining and running multi-container applications with
6 Docker. With Compose, you define a multi-container application in a single
7 file, then spin your application up in a single command which does everything
8 that needs to be done to get it running.
9
10 Compose is great for development environments, staging servers, and CI. We don't
11 recommend that you use it in production yet.
12
13 Using Compose is basically a three-step process.
14
15 1. Define your app's environment with a `Dockerfile` so it can be
16 reproduced anywhere.
17 2. Define the services that make up your app in `docker-compose.yml` so
18    they can be run together in an isolated environment.
19 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
20
21 A `docker-compose.yml` looks like this:
22
23 web:
24 build: .
25 ports:
26 - "5000:5000"
27 volumes:
28 - .:/code
29 links:
30 - redis
31 redis:
32 image: redis
33
34 Compose has commands for managing the whole lifecycle of your application:
35
36 * Start, stop and rebuild services
37 * View the status of running services
38 * Stream the log output of running services
39 * Run a one-off command on a service
40
41 Installation and documentation
42 ------------------------------
43
44 - Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
45 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
46
47 Contributing
48 ------------
49
50 [](http://jenkins.dockerproject.org/job/Compose%20Master/)
51
52 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
53
54 Releasing
55 ---------
56
57 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/RELEASE_PROCESS.md).
[end of README.md]
[start of compose/cli/colors.py]
1 from __future__ import unicode_literals
2 NAMES = [
3 'grey',
4 'red',
5 'green',
6 'yellow',
7 'blue',
8 'magenta',
9 'cyan',
10 'white'
11 ]
12
13
14 def get_pairs():
15 for i, name in enumerate(NAMES):
16 yield(name, str(30 + i))
17 yield('intense_' + name, str(30 + i) + ';1')
18
19
20 def ansi(code):
21 return '\033[{0}m'.format(code)
22
23
24 def ansi_color(code, s):
25 return '{0}{1}{2}'.format(ansi(code), s, ansi(0))
26
27
28 def make_color_fn(code):
29 return lambda s: ansi_color(code, s)
30
31
32 for (name, code) in get_pairs():
33 globals()[name] = make_color_fn(code)
34
35
36 def rainbow():
37 cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue',
38 'intense_cyan', 'intense_yellow', 'intense_green',
39 'intense_magenta', 'intense_red', 'intense_blue']
40
41 for c in cs:
42 yield globals()[c]
43
[end of compose/cli/colors.py]
[start of compose/cli/command.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 from requests.exceptions import ConnectionError, SSLError
4 import logging
5 import os
6 import re
7 import six
8
9 from .. import config
10 from ..project import Project
11 from ..service import ConfigError
12 from .docopt_command import DocoptCommand
13 from .utils import call_silently, is_mac, is_ubuntu
14 from .docker_client import docker_client
15 from . import verbose_proxy
16 from . import errors
17 from .. import __version__
18
19 log = logging.getLogger(__name__)
20
21
22 class Command(DocoptCommand):
23 base_dir = '.'
24
25 def dispatch(self, *args, **kwargs):
26 try:
27 super(Command, self).dispatch(*args, **kwargs)
28 except SSLError as e:
29 raise errors.UserError('SSL error: %s' % e)
30 except ConnectionError:
31 if call_silently(['which', 'docker']) != 0:
32 if is_mac():
33 raise errors.DockerNotFoundMac()
34 elif is_ubuntu():
35 raise errors.DockerNotFoundUbuntu()
36 else:
37 raise errors.DockerNotFoundGeneric()
38 elif call_silently(['which', 'boot2docker']) == 0:
39 raise errors.ConnectionErrorBoot2Docker()
40 else:
41 raise errors.ConnectionErrorGeneric(self.get_client().base_url)
42
43 def perform_command(self, options, handler, command_options):
44 if options['COMMAND'] in ('help', 'version'):
45 # Skip looking up the compose file.
46 handler(None, command_options)
47 return
48
49 if 'FIG_FILE' in os.environ:
50 log.warn('The FIG_FILE environment variable is deprecated.')
51 log.warn('Please use COMPOSE_FILE instead.')
52
53 explicit_config_path = options.get('--file') or os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE')
54 project = self.get_project(
55 explicit_config_path,
56 project_name=options.get('--project-name'),
57 verbose=options.get('--verbose'))
58
59 handler(project, command_options)
60
61 def get_client(self, verbose=False):
62 client = docker_client()
63 if verbose:
64 version_info = six.iteritems(client.version())
65 log.info("Compose version %s", __version__)
66 log.info("Docker base_url: %s", client.base_url)
67 log.info("Docker version: %s",
68 ", ".join("%s=%s" % item for item in version_info))
69 return verbose_proxy.VerboseProxy('docker', client)
70 return client
71
72 def get_project(self, config_path=None, project_name=None, verbose=False):
73 config_details = config.find(self.base_dir, config_path)
74
75 try:
76 return Project.from_dicts(
77 self.get_project_name(config_details.working_dir, project_name),
78 config.load(config_details),
79 self.get_client(verbose=verbose))
80 except ConfigError as e:
81 raise errors.UserError(six.text_type(e))
82
83 def get_project_name(self, working_dir, project_name=None):
84 def normalize_name(name):
85 return re.sub(r'[^a-z0-9]', '', name.lower())
86
87 if 'FIG_PROJECT_NAME' in os.environ:
88 log.warn('The FIG_PROJECT_NAME environment variable is deprecated.')
89 log.warn('Please use COMPOSE_PROJECT_NAME instead.')
90
91 project_name = (
92 project_name or
93 os.environ.get('COMPOSE_PROJECT_NAME') or
94 os.environ.get('FIG_PROJECT_NAME'))
95 if project_name is not None:
96 return normalize_name(project_name)
97
98 project = os.path.basename(os.path.abspath(working_dir))
99 if project:
100 return normalize_name(project)
101
102 return 'default'
103
[end of compose/cli/command.py]
[start of compose/cli/docker_client.py]
1 from docker import Client
2 from docker import tls
3 import ssl
4 import os
5
6
7 def docker_client():
8 """
9 Returns a docker-py client configured using environment variables
10 according to the same logic as the official Docker client.
11 """
12 cert_path = os.environ.get('DOCKER_CERT_PATH', '')
13 if cert_path == '':
14 cert_path = os.path.join(os.environ.get('HOME', ''), '.docker')
15
16 base_url = os.environ.get('DOCKER_HOST')
17 tls_config = None
18
19 if os.environ.get('DOCKER_TLS_VERIFY', '') != '':
20 parts = base_url.split('://', 1)
21 base_url = '%s://%s' % ('https', parts[1])
22
23 client_cert = (os.path.join(cert_path, 'cert.pem'), os.path.join(cert_path, 'key.pem'))
24 ca_cert = os.path.join(cert_path, 'ca.pem')
25
26 tls_config = tls.TLSConfig(
27 ssl_version=ssl.PROTOCOL_TLSv1,
28 verify=True,
29 assert_hostname=False,
30 client_cert=client_cert,
31 ca_cert=ca_cert,
32 )
33
34 timeout = int(os.environ.get('DOCKER_CLIENT_TIMEOUT', 60))
35 return Client(base_url=base_url, tls=tls_config, version='1.18', timeout=timeout)
36
[end of compose/cli/docker_client.py]
[start of compose/cli/log_printer.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 import sys
4
5 from itertools import cycle
6
7 from .multiplexer import Multiplexer, STOP
8 from . import colors
9 from .utils import split_buffer
10
11
12 class LogPrinter(object):
13 def __init__(self, containers, attach_params=None, output=sys.stdout, monochrome=False):
14 self.containers = containers
15 self.attach_params = attach_params or {}
16 self.prefix_width = self._calculate_prefix_width(containers)
17 self.generators = self._make_log_generators(monochrome)
18 self.output = output
19
20 def run(self):
21 mux = Multiplexer(self.generators)
22 for line in mux.loop():
23 self.output.write(line)
24
25 def _calculate_prefix_width(self, containers):
26 """
27 Calculate the maximum width of container names so we can make the log
28 prefixes line up like so:
29
30 db_1 | Listening
31 web_1 | Listening
32 """
33 prefix_width = 0
34 for container in containers:
35 prefix_width = max(prefix_width, len(container.name_without_project))
36 return prefix_width
37
38 def _make_log_generators(self, monochrome):
39 color_fns = cycle(colors.rainbow())
40 generators = []
41
42 def no_color(text):
43 return text
44
45 for container in self.containers:
46 if monochrome:
47 color_fn = no_color
48 else:
49 color_fn = next(color_fns)
50 generators.append(self._make_log_generator(container, color_fn))
51
52 return generators
53
54 def _make_log_generator(self, container, color_fn):
55 prefix = color_fn(self._generate_prefix(container)).encode('utf-8')
56 # Attach to container before log printer starts running
57 line_generator = split_buffer(self._attach(container), '\n')
58
59 for line in line_generator:
60 yield prefix + line
61
62 exit_code = container.wait()
63 yield color_fn("%s exited with code %s\n" % (container.name, exit_code))
64 yield STOP
65
66 def _generate_prefix(self, container):
67 """
68 Generate the prefix for a log line without colour
69 """
70 name = container.name_without_project
71 padding = ' ' * (self.prefix_width - len(name))
72 return ''.join([name, padding, ' | '])
73
74 def _attach(self, container):
75 params = {
76 'stdout': True,
77 'stderr': True,
78 'stream': True,
79 }
80 params.update(self.attach_params)
81 params = dict((name, 1 if value else 0) for (name, value) in list(params.items()))
82 return container.attach(**params)
83
[end of compose/cli/log_printer.py]
[start of compose/cli/main.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3 from inspect import getdoc
4 from operator import attrgetter
5 import logging
6 import re
7 import signal
8 import sys
9
10 from docker.errors import APIError
11 import dockerpty
12
13 from .. import __version__
14 from .. import legacy
15 from ..const import DEFAULT_TIMEOUT
16 from ..project import NoSuchService, ConfigurationError
17 from ..service import BuildError, NeedsBuildError
18 from ..config import parse_environment
19 from .command import Command
20 from .docopt_command import NoSuchCommand
21 from .errors import UserError
22 from .formatter import Formatter
23 from .log_printer import LogPrinter
24 from .utils import yesno, get_version_info
25
26 log = logging.getLogger(__name__)
27
28
29 def main():
30 setup_logging()
31 try:
32 command = TopLevelCommand()
33 command.sys_dispatch()
34 except KeyboardInterrupt:
35 log.error("\nAborting.")
36 sys.exit(1)
37 except (UserError, NoSuchService, ConfigurationError, legacy.LegacyError) as e:
38 log.error(e.msg)
39 sys.exit(1)
40 except NoSuchCommand as e:
41 log.error("No such command: %s", e.command)
42 log.error("")
43 log.error("\n".join(parse_doc_section("commands:", getdoc(e.supercommand))))
44 sys.exit(1)
45 except APIError as e:
46 log.error(e.explanation)
47 sys.exit(1)
48 except BuildError as e:
49 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
50 sys.exit(1)
51 except NeedsBuildError as e:
52 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
53 sys.exit(1)
54
55
56 def setup_logging():
57 console_handler = logging.StreamHandler(sys.stderr)
58 console_handler.setFormatter(logging.Formatter())
59 console_handler.setLevel(logging.INFO)
60 root_logger = logging.getLogger()
61 root_logger.addHandler(console_handler)
62 root_logger.setLevel(logging.DEBUG)
63
64 # Disable requests logging
65 logging.getLogger("requests").propagate = False
66
67
68 # stolen from docopt master
69 def parse_doc_section(name, source):
70 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
71 re.IGNORECASE | re.MULTILINE)
72 return [s.strip() for s in pattern.findall(source)]
73
74
75 class TopLevelCommand(Command):
76 """Define and run multi-container applications with Docker.
77
78 Usage:
79 docker-compose [options] [COMMAND] [ARGS...]
80 docker-compose -h|--help
81
82 Options:
83 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
84 -p, --project-name NAME Specify an alternate project name (default: directory name)
85 --verbose Show more output
86 -v, --version Print version and exit
87
88 Commands:
89 build Build or rebuild services
90 help Get help on a command
91 kill Kill containers
92 logs View output from containers
93 port Print the public port for a port binding
94 ps List containers
95 pull Pulls service images
96 restart Restart services
97 rm Remove stopped containers
98 run Run a one-off command
99 scale Set number of containers for a service
100 start Start services
101 stop Stop services
102 up Create and start containers
103 migrate-to-labels Recreate containers to add labels
104 version Show the Docker-Compose version information
105
106 """
107 def docopt_options(self):
108 options = super(TopLevelCommand, self).docopt_options()
109 options['version'] = get_version_info('compose')
110 return options
111
112 def build(self, project, options):
113 """
114 Build or rebuild services.
115
116 Services are built once and then tagged as `project_service`,
117 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
118 contents of its build directory, you can run `docker-compose build` to rebuild it.
119
120 Usage: build [options] [SERVICE...]
121
122 Options:
123 --no-cache Do not use cache when building the image.
124 """
125 no_cache = bool(options.get('--no-cache', False))
126 project.build(service_names=options['SERVICE'], no_cache=no_cache)
127
128 def help(self, project, options):
129 """
130 Get help on a command.
131
132 Usage: help COMMAND
133 """
134 handler = self.get_handler(options['COMMAND'])
135 raise SystemExit(getdoc(handler))
136
137 def kill(self, project, options):
138 """
139 Force stop service containers.
140
141 Usage: kill [options] [SERVICE...]
142
143 Options:
144 -s SIGNAL SIGNAL to send to the container.
145 Default signal is SIGKILL.
146 """
147 signal = options.get('-s', 'SIGKILL')
148
149 project.kill(service_names=options['SERVICE'], signal=signal)
150
151 def logs(self, project, options):
152 """
153 View output from containers.
154
155 Usage: logs [options] [SERVICE...]
156
157 Options:
158 --no-color Produce monochrome output.
159 """
160 containers = project.containers(service_names=options['SERVICE'], stopped=True)
161
162 monochrome = options['--no-color']
163 print("Attaching to", list_containers(containers))
164 LogPrinter(containers, attach_params={'logs': True}, monochrome=monochrome).run()
165
166 def port(self, project, options):
167 """
168 Print the public port for a port binding.
169
170 Usage: port [options] SERVICE PRIVATE_PORT
171
172 Options:
173 --protocol=proto tcp or udp [default: tcp]
174 --index=index index of the container if there are multiple
175 instances of a service [default: 1]
176 """
177 index = int(options.get('--index'))
178 service = project.get_service(options['SERVICE'])
179 try:
180 container = service.get_container(number=index)
181 except ValueError as e:
182 raise UserError(str(e))
183 print(container.get_local_port(
184 options['PRIVATE_PORT'],
185 protocol=options.get('--protocol') or 'tcp') or '')
186
187 def ps(self, project, options):
188 """
189 List containers.
190
191 Usage: ps [options] [SERVICE...]
192
193 Options:
194 -q Only display IDs
195 """
196 containers = sorted(
197 project.containers(service_names=options['SERVICE'], stopped=True) +
198 project.containers(service_names=options['SERVICE'], one_off=True),
199 key=attrgetter('name'))
200
201 if options['-q']:
202 for container in containers:
203 print(container.id)
204 else:
205 headers = [
206 'Name',
207 'Command',
208 'State',
209 'Ports',
210 ]
211 rows = []
212 for container in containers:
213 command = container.human_readable_command
214 if len(command) > 30:
215 command = '%s ...' % command[:26]
216 rows.append([
217 container.name,
218 command,
219 container.human_readable_state,
220 container.human_readable_ports,
221 ])
222 print(Formatter().table(headers, rows))
223
224 def pull(self, project, options):
225 """
226 Pulls images for services.
227
228 Usage: pull [options] [SERVICE...]
229
230 Options:
231 --allow-insecure-ssl Allow insecure connections to the docker
232 registry
233 """
234 insecure_registry = options['--allow-insecure-ssl']
235 project.pull(
236 service_names=options['SERVICE'],
237 insecure_registry=insecure_registry
238 )
239
240 def rm(self, project, options):
241 """
242 Remove stopped service containers.
243
244 Usage: rm [options] [SERVICE...]
245
246 Options:
247 -f, --force Don't ask to confirm removal
248 -v Remove volumes associated with containers
249 """
250 all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
251 stopped_containers = [c for c in all_containers if not c.is_running]
252
253 if len(stopped_containers) > 0:
254 print("Going to remove", list_containers(stopped_containers))
255 if options.get('--force') \
256 or yesno("Are you sure? [yN] ", default=False):
257 project.remove_stopped(
258 service_names=options['SERVICE'],
259 v=options.get('-v', False)
260 )
261 else:
262 print("No stopped containers")
263
264 def run(self, project, options):
265 """
266 Run a one-off command on a service.
267
268 For example:
269
270 $ docker-compose run web python manage.py shell
271
272 By default, linked services will be started, unless they are already
273 running. If you do not want to start linked services, use
274 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
275
276 Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
277
278 Options:
279 --allow-insecure-ssl Allow insecure connections to the docker
280 registry
281 -d Detached mode: Run container in the background, print
282 new container name.
283 --entrypoint CMD Override the entrypoint of the image.
284 -e KEY=VAL Set an environment variable (can be used multiple times)
285 -u, --user="" Run as specified username or uid
286 --no-deps Don't start linked services.
287 --rm Remove container after run. Ignored in detached mode.
288 --service-ports Run command with the service's ports enabled and mapped
289 to the host.
290 -T Disable pseudo-tty allocation. By default `docker-compose run`
291 allocates a TTY.
292 """
293 service = project.get_service(options['SERVICE'])
294
295 insecure_registry = options['--allow-insecure-ssl']
296
297 if not options['--no-deps']:
298 deps = service.get_linked_names()
299
300 if len(deps) > 0:
301 project.up(
302 service_names=deps,
303 start_deps=True,
304 allow_recreate=False,
305 insecure_registry=insecure_registry,
306 )
307
308 tty = True
309 if options['-d'] or options['-T'] or not sys.stdin.isatty():
310 tty = False
311
312 if options['COMMAND']:
313 command = [options['COMMAND']] + options['ARGS']
314 else:
315 command = service.options.get('command')
316
317 container_options = {
318 'command': command,
319 'tty': tty,
320 'stdin_open': not options['-d'],
321 'detach': options['-d'],
322 }
323
324 if options['-e']:
325 container_options['environment'] = parse_environment(options['-e'])
326
327 if options['--entrypoint']:
328 container_options['entrypoint'] = options.get('--entrypoint')
329
330 if options['--rm']:
331 container_options['restart'] = None
332
333 if options['--user']:
334 container_options['user'] = options.get('--user')
335
336 if not options['--service-ports']:
337 container_options['ports'] = []
338
339 try:
340 container = service.create_container(
341 quiet=True,
342 one_off=True,
343 insecure_registry=insecure_registry,
344 **container_options
345 )
346 except APIError as e:
347 legacy.check_for_legacy_containers(
348 project.client,
349 project.name,
350 [service.name],
351 allow_one_off=False,
352 )
353
354 raise e
355
356 if options['-d']:
357 service.start_container(container)
358 print(container.name)
359 else:
360 dockerpty.start(project.client, container.id, interactive=not options['-T'])
361 exit_code = container.wait()
362 if options['--rm']:
363 project.client.remove_container(container.id)
364 sys.exit(exit_code)
365
366 def scale(self, project, options):
367 """
368 Set number of containers to run for a service.
369
370 Numbers are specified in the form `service=num` as arguments.
371 For example:
372
373 $ docker-compose scale web=2 worker=3
374
375 Usage: scale [SERVICE=NUM...]
376 """
377 for s in options['SERVICE=NUM']:
378 if '=' not in s:
379 raise UserError('Arguments to scale should be in the form service=num')
380 service_name, num = s.split('=', 1)
381 try:
382 num = int(num)
383 except ValueError:
384 raise UserError('Number of containers for service "%s" is not a '
385 'number' % service_name)
386 project.get_service(service_name).scale(num)
387
388 def start(self, project, options):
389 """
390 Start existing containers.
391
392 Usage: start [SERVICE...]
393 """
394 project.start(service_names=options['SERVICE'])
395
396 def stop(self, project, options):
397 """
398 Stop running containers without removing them.
399
400 They can be started again with `docker-compose start`.
401
402 Usage: stop [options] [SERVICE...]
403
404 Options:
405 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
406 (default: 10)
407 """
408 timeout = float(options.get('--timeout') or DEFAULT_TIMEOUT)
409 project.stop(service_names=options['SERVICE'], timeout=timeout)
410
411 def restart(self, project, options):
412 """
413 Restart running containers.
414
415 Usage: restart [options] [SERVICE...]
416
417 Options:
418 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
419 (default: 10)
420 """
421 timeout = float(options.get('--timeout') or DEFAULT_TIMEOUT)
422 project.restart(service_names=options['SERVICE'], timeout=timeout)
423
424 def up(self, project, options):
425 """
426 Build, (re)create, start and attach to containers for a service.
427
428 By default, `docker-compose up` will aggregate the output of each container, and
429 when it exits, all containers will be stopped. If you run `docker-compose up -d`,
430 it'll start the containers in the background and leave them running.
431
432 If there are existing containers for a service, `docker-compose up` will stop
433 and recreate them (preserving mounted volumes with volumes-from),
434 so that changes in `docker-compose.yml` are picked up. If you do not want existing
435 containers to be recreated, `docker-compose up --no-recreate` will re-use existing
436 containers.
437
438 Usage: up [options] [SERVICE...]
439
440 Options:
441 --allow-insecure-ssl Allow insecure connections to the docker
442 registry
443 -d Detached mode: Run containers in the background,
444 print new container names.
445 --no-color Produce monochrome output.
446 --no-deps Don't start linked services.
447 --x-smart-recreate Only recreate containers whose configuration or
448 image needs to be updated. (EXPERIMENTAL)
449 --no-recreate If containers already exist, don't recreate them.
450 --no-build Don't build an image, even if it's missing
451 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
452 when attached or when containers are already
453 running. (default: 10)
454 """
455 insecure_registry = options['--allow-insecure-ssl']
456 detached = options['-d']
457
458 monochrome = options['--no-color']
459
460 start_deps = not options['--no-deps']
461 allow_recreate = not options['--no-recreate']
462 smart_recreate = options['--x-smart-recreate']
463 service_names = options['SERVICE']
464 timeout = float(options.get('--timeout') or DEFAULT_TIMEOUT)
465
466 to_attach = project.up(
467 service_names=service_names,
468 start_deps=start_deps,
469 allow_recreate=allow_recreate,
470 smart_recreate=smart_recreate,
471 insecure_registry=insecure_registry,
472 do_build=not options['--no-build'],
473 timeout=timeout
474 )
475
476 if not detached:
477 print("Attaching to", list_containers(to_attach))
478 log_printer = LogPrinter(to_attach, attach_params={"logs": True}, monochrome=monochrome)
479
480 try:
481 log_printer.run()
482 finally:
483 def handler(signal, frame):
484 project.kill(service_names=service_names)
485 sys.exit(0)
486 signal.signal(signal.SIGINT, handler)
487
488 print("Gracefully stopping... (press Ctrl+C again to force)")
489 project.stop(service_names=service_names, timeout=timeout)
490
491 def migrate_to_labels(self, project, _options):
492 """
493 Recreate containers to add labels
494
495 If you're coming from Compose 1.2 or earlier, you'll need to remove or
496 migrate your existing containers after upgrading Compose. This is
497 because, as of version 1.3, Compose uses Docker labels to keep track
498 of containers, and so they need to be recreated with labels added.
499
500 If Compose detects containers that were created without labels, it
501 will refuse to run so that you don't end up with two sets of them. If
502 you want to keep using your existing containers (for example, because
503 they have data volumes you want to preserve) you can migrate them with
504 the following command:
505
506 docker-compose migrate-to-labels
507
508 Alternatively, if you're not worried about keeping them, you can
509 remove them - Compose will just create new ones.
510
511 docker rm -f myapp_web_1 myapp_db_1 ...
512
513 Usage: migrate-to-labels
514 """
515 legacy.migrate_project_to_labels(project)
516
517 def version(self, project, options):
518 """
519         Show version information
520
521 Usage: version [--short]
522
523 Options:
524 --short Shows only Compose's version number.
525 """
526 if options['--short']:
527 print(__version__)
528 else:
529 print(get_version_info('full'))
530
531
532 def list_containers(containers):
533 return ", ".join(c.name for c in containers)
534
[end of compose/cli/main.py]
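A hedged, standalone sketch of how `parse_doc_section()` above extracts a section from a docstring (the function body is copied from the module so the demo needs no CLI dependencies; the sample docstring is hypothetical):
```python
import re

def parse_doc_section(name, source):  # copied from compose/cli/main.py above for a standalone demo
    pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
                         re.IGNORECASE | re.MULTILINE)
    return [s.strip() for s in pattern.findall(source)]

doc = """Usage:
  docker-compose [options] [COMMAND] [ARGS...]

Commands:
  build    Build or rebuild services
  up       Create and start containers
"""
print(parse_doc_section("commands:", doc))  # one entry containing the Commands: block
```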
[start of compose/config.py]
1 import logging
2 import os
3 import sys
4 import yaml
5 from collections import namedtuple
6
7 import six
8
9 from compose.cli.utils import find_candidates_in_parent_dirs
10
11
12 DOCKER_CONFIG_KEYS = [
13 'cap_add',
14 'cap_drop',
15 'cpu_shares',
16 'cpuset',
17 'command',
18 'detach',
19 'devices',
20 'dns',
21 'dns_search',
22 'domainname',
23 'entrypoint',
24 'env_file',
25 'environment',
26 'extra_hosts',
27 'hostname',
28 'image',
29 'labels',
30 'links',
31 'mac_address',
32 'mem_limit',
33 'memswap_limit',
34 'net',
35 'log_driver',
36 'log_opt',
37 'pid',
38 'ports',
39 'privileged',
40 'read_only',
41 'restart',
42 'security_opt',
43 'stdin_open',
44 'tty',
45 'user',
46 'volumes',
47 'volumes_from',
48 'working_dir',
49 ]
50
51 ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
52 'build',
53 'dockerfile',
54 'expose',
55 'external_links',
56 'name',
57 ]
58
59 DOCKER_CONFIG_HINTS = {
60 'cpu_share': 'cpu_shares',
61 'add_host': 'extra_hosts',
62 'hosts': 'extra_hosts',
63 'extra_host': 'extra_hosts',
64 'device': 'devices',
65 'link': 'links',
66 'memory_swap': 'memswap_limit',
67 'port': 'ports',
68 'privilege': 'privileged',
69 'priviliged': 'privileged',
70 'privilige': 'privileged',
71 'volume': 'volumes',
72 'workdir': 'working_dir',
73 }
74
75
76 SUPPORTED_FILENAMES = [
77 'docker-compose.yml',
78 'docker-compose.yaml',
79 'fig.yml',
80 'fig.yaml',
81 ]
82
83
84 log = logging.getLogger(__name__)
85
86
87 ConfigDetails = namedtuple('ConfigDetails', 'config working_dir filename')
88
89
90 def find(base_dir, filename):
91 if filename == '-':
92 return ConfigDetails(yaml.safe_load(sys.stdin), os.getcwd(), None)
93
94 if filename:
95 filename = os.path.join(base_dir, filename)
96 else:
97 filename = get_config_path(base_dir)
98 return ConfigDetails(load_yaml(filename), os.path.dirname(filename), filename)
99
100
101 def get_config_path(base_dir):
102 (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)
103
104 if len(candidates) == 0:
105 raise ComposeFileNotFound(SUPPORTED_FILENAMES)
106
107 winner = candidates[0]
108
109 if len(candidates) > 1:
110 log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
111 log.warn("Using %s\n", winner)
112
113 if winner == 'docker-compose.yaml':
114 log.warn("Please be aware that .yml is the expected extension "
115 "in most cases, and using .yaml can cause compatibility "
116 "issues in future.\n")
117
118 if winner.startswith("fig."):
119 log.warn("%s is deprecated and will not be supported in future. "
120 "Please rename your config file to docker-compose.yml\n" % winner)
121
122 return os.path.join(path, winner)
123
124
125 def load(config_details):
126 dictionary, working_dir, filename = config_details
127 service_dicts = []
128
129 for service_name, service_dict in list(dictionary.items()):
130 if not isinstance(service_dict, dict):
131 raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name)
132 loader = ServiceLoader(working_dir=working_dir, filename=filename)
133 service_dict = loader.make_service_dict(service_name, service_dict)
134 validate_paths(service_dict)
135 service_dicts.append(service_dict)
136
137 return service_dicts
138
139
140 class ServiceLoader(object):
141 def __init__(self, working_dir, filename=None, already_seen=None):
142 self.working_dir = os.path.abspath(working_dir)
143 if filename:
144 self.filename = os.path.abspath(filename)
145 else:
146 self.filename = filename
147 self.already_seen = already_seen or []
148
149 def detect_cycle(self, name):
150 if self.signature(name) in self.already_seen:
151 raise CircularReference(self.already_seen + [self.signature(name)])
152
153 def make_service_dict(self, name, service_dict):
154 service_dict = service_dict.copy()
155 service_dict['name'] = name
156 service_dict = resolve_environment(service_dict, working_dir=self.working_dir)
157 service_dict = self.resolve_extends(service_dict)
158 return process_container_options(service_dict, working_dir=self.working_dir)
159
160 def resolve_extends(self, service_dict):
161 if 'extends' not in service_dict:
162 return service_dict
163
164 extends_options = self.validate_extends_options(service_dict['name'], service_dict['extends'])
165
166 if self.working_dir is None:
167 raise Exception("No working_dir passed to ServiceLoader()")
168
169 if 'file' in extends_options:
170 extends_from_filename = extends_options['file']
171 other_config_path = expand_path(self.working_dir, extends_from_filename)
172 else:
173 other_config_path = self.filename
174
175 other_working_dir = os.path.dirname(other_config_path)
176 other_already_seen = self.already_seen + [self.signature(service_dict['name'])]
177 other_loader = ServiceLoader(
178 working_dir=other_working_dir,
179 filename=other_config_path,
180 already_seen=other_already_seen,
181 )
182
183 other_config = load_yaml(other_config_path)
184 other_service_dict = other_config[extends_options['service']]
185 other_loader.detect_cycle(extends_options['service'])
186 other_service_dict = other_loader.make_service_dict(
187 service_dict['name'],
188 other_service_dict,
189 )
190 validate_extended_service_dict(
191 other_service_dict,
192 filename=other_config_path,
193 service=extends_options['service'],
194 )
195
196 return merge_service_dicts(other_service_dict, service_dict)
197
198 def signature(self, name):
199 return (self.filename, name)
200
201 def validate_extends_options(self, service_name, extends_options):
202 error_prefix = "Invalid 'extends' configuration for %s:" % service_name
203
204 if not isinstance(extends_options, dict):
205 raise ConfigurationError("%s must be a dictionary" % error_prefix)
206
207 if 'service' not in extends_options:
208 raise ConfigurationError(
209 "%s you need to specify a service, e.g. 'service: web'" % error_prefix
210 )
211
212 if 'file' not in extends_options and self.filename is None:
213 raise ConfigurationError(
214 "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix
215 )
216
217 for k, _ in extends_options.items():
218 if k not in ['file', 'service']:
219 raise ConfigurationError(
220 "%s unsupported configuration option '%s'" % (error_prefix, k)
221 )
222
223 return extends_options
224
225
226 def validate_extended_service_dict(service_dict, filename, service):
227 error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)
228
229 if 'links' in service_dict:
230 raise ConfigurationError("%s services with 'links' cannot be extended" % error_prefix)
231
232 if 'volumes_from' in service_dict:
233 raise ConfigurationError("%s services with 'volumes_from' cannot be extended" % error_prefix)
234
235 if 'net' in service_dict:
236 if get_service_name_from_net(service_dict['net']) is not None:
237 raise ConfigurationError("%s services with 'net: container' cannot be extended" % error_prefix)
238
239
240 def process_container_options(service_dict, working_dir=None):
241 for k in service_dict:
242 if k not in ALLOWED_KEYS:
243 msg = "Unsupported config option for %s service: '%s'" % (service_dict['name'], k)
244 if k in DOCKER_CONFIG_HINTS:
245 msg += " (did you mean '%s'?)" % DOCKER_CONFIG_HINTS[k]
246 raise ConfigurationError(msg)
247
248 service_dict = service_dict.copy()
249
250 if 'memswap_limit' in service_dict and 'mem_limit' not in service_dict:
251 raise ConfigurationError("Invalid 'memswap_limit' configuration for %s service: when defining 'memswap_limit' you must set 'mem_limit' as well" % service_dict['name'])
252
253 if 'volumes' in service_dict:
254 service_dict['volumes'] = resolve_volume_paths(service_dict['volumes'], working_dir=working_dir)
255
256 if 'build' in service_dict:
257 service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir)
258
259 if 'labels' in service_dict:
260 service_dict['labels'] = parse_labels(service_dict['labels'])
261
262 return service_dict
263
264
265 def merge_service_dicts(base, override):
266 d = base.copy()
267
268 if 'environment' in base or 'environment' in override:
269 d['environment'] = merge_environment(
270 base.get('environment'),
271 override.get('environment'),
272 )
273
274 path_mapping_keys = ['volumes', 'devices']
275
276 for key in path_mapping_keys:
277 if key in base or key in override:
278 d[key] = merge_path_mappings(
279 base.get(key),
280 override.get(key),
281 )
282
283 if 'labels' in base or 'labels' in override:
284 d['labels'] = merge_labels(
285 base.get('labels'),
286 override.get('labels'),
287 )
288
289 if 'image' in override and 'build' in d:
290 del d['build']
291
292 if 'build' in override and 'image' in d:
293 del d['image']
294
295 list_keys = ['ports', 'expose', 'external_links']
296
297 for key in list_keys:
298 if key in base or key in override:
299 d[key] = base.get(key, []) + override.get(key, [])
300
301 list_or_string_keys = ['dns', 'dns_search']
302
303 for key in list_or_string_keys:
304 if key in base or key in override:
305 d[key] = to_list(base.get(key)) + to_list(override.get(key))
306
307 already_merged_keys = ['environment', 'labels'] + path_mapping_keys + list_keys + list_or_string_keys
308
309 for k in set(ALLOWED_KEYS) - set(already_merged_keys):
310 if k in override:
311 d[k] = override[k]
312
313 return d
314
315
316 def merge_environment(base, override):
317 env = parse_environment(base)
318 env.update(parse_environment(override))
319 return env
320
321
322 def parse_links(links):
323 return dict(parse_link(l) for l in links)
324
325
326 def parse_link(link):
327 if ':' in link:
328 source, alias = link.split(':', 1)
329 return (alias, source)
330 else:
331 return (link, link)
332
333
334 def get_env_files(options, working_dir=None):
335 if 'env_file' not in options:
336 return {}
337
338 if working_dir is None:
339 raise Exception("No working_dir passed to get_env_files()")
340
341 env_files = options.get('env_file', [])
342 if not isinstance(env_files, list):
343 env_files = [env_files]
344
345 return [expand_path(working_dir, path) for path in env_files]
346
347
348 def resolve_environment(service_dict, working_dir=None):
349 service_dict = service_dict.copy()
350
351 if 'environment' not in service_dict and 'env_file' not in service_dict:
352 return service_dict
353
354 env = {}
355
356 if 'env_file' in service_dict:
357 for f in get_env_files(service_dict, working_dir=working_dir):
358 env.update(env_vars_from_file(f))
359 del service_dict['env_file']
360
361 env.update(parse_environment(service_dict.get('environment')))
362 env = dict(resolve_env_var(k, v) for k, v in six.iteritems(env))
363
364 service_dict['environment'] = env
365 return service_dict
366
367
368 def parse_environment(environment):
369 if not environment:
370 return {}
371
372 if isinstance(environment, list):
373 return dict(split_env(e) for e in environment)
374
375 if isinstance(environment, dict):
376 return environment
377
378 raise ConfigurationError(
379 "environment \"%s\" must be a list or mapping," %
380 environment
381 )
382
383
384 def split_env(env):
385 if '=' in env:
386 return env.split('=', 1)
387 else:
388 return env, None
389
390
391 def resolve_env_var(key, val):
392 if val is not None:
393 return key, val
394 elif key in os.environ:
395 return key, os.environ[key]
396 else:
397 return key, ''
398
399
400 def env_vars_from_file(filename):
401 """
402 Read in a line delimited file of environment variables.
403 """
404 if not os.path.exists(filename):
405 raise ConfigurationError("Couldn't find env file: %s" % filename)
406 env = {}
407 for line in open(filename, 'r'):
408 line = line.strip()
409 if line and not line.startswith('#'):
410 k, v = split_env(line)
411 env[k] = v
412 return env
413
414
415 def resolve_volume_paths(volumes, working_dir=None):
416 if working_dir is None:
417 raise Exception("No working_dir passed to resolve_volume_paths()")
418
419 return [resolve_volume_path(v, working_dir) for v in volumes]
420
421
422 def resolve_volume_path(volume, working_dir):
423 container_path, host_path = split_path_mapping(volume)
424 container_path = os.path.expanduser(os.path.expandvars(container_path))
425 if host_path is not None:
426 host_path = os.path.expanduser(os.path.expandvars(host_path))
427 return "%s:%s" % (expand_path(working_dir, host_path), container_path)
428 else:
429 return container_path
430
431
432 def resolve_build_path(build_path, working_dir=None):
433 if working_dir is None:
434 raise Exception("No working_dir passed to resolve_build_path")
435 return expand_path(working_dir, build_path)
436
437
438 def validate_paths(service_dict):
439 if 'build' in service_dict:
440 build_path = service_dict['build']
441 if not os.path.exists(build_path) or not os.access(build_path, os.R_OK):
442 raise ConfigurationError("build path %s either does not exist or is not accessible." % build_path)
443
444
445 def merge_path_mappings(base, override):
446 d = dict_from_path_mappings(base)
447 d.update(dict_from_path_mappings(override))
448 return path_mappings_from_dict(d)
449
450
451 def dict_from_path_mappings(path_mappings):
452 if path_mappings:
453 return dict(split_path_mapping(v) for v in path_mappings)
454 else:
455 return {}
456
457
458 def path_mappings_from_dict(d):
459 return [join_path_mapping(v) for v in d.items()]
460
461
462 def split_path_mapping(string):
463 if ':' in string:
464 (host, container) = string.split(':', 1)
465 return (container, host)
466 else:
467 return (string, None)
468
469
470 def join_path_mapping(pair):
471 (container, host) = pair
472 if host is None:
473 return container
474 else:
475 return ":".join((host, container))
476
477
478 def merge_labels(base, override):
479 labels = parse_labels(base)
480 labels.update(parse_labels(override))
481 return labels
482
483
484 def parse_labels(labels):
485 if not labels:
486 return {}
487
488 if isinstance(labels, list):
489 return dict(split_label(e) for e in labels)
490
491 if isinstance(labels, dict):
492 return labels
493
494 raise ConfigurationError(
495 "labels \"%s\" must be a list or mapping" %
496 labels
497 )
498
499
500 def split_label(label):
501 if '=' in label:
502 return label.split('=', 1)
503 else:
504 return label, ''
505
506
507 def expand_path(working_dir, path):
508 return os.path.abspath(os.path.join(working_dir, path))
509
510
511 def to_list(value):
512 if value is None:
513 return []
514 elif isinstance(value, six.string_types):
515 return [value]
516 else:
517 return value
518
519
520 def get_service_name_from_net(net_config):
521 if not net_config:
522 return
523
524 if not net_config.startswith('container:'):
525 return
526
527 _, net_name = net_config.split(':', 1)
528 return net_name
529
530
531 def load_yaml(filename):
532 try:
533 with open(filename, 'r') as fh:
534 return yaml.safe_load(fh)
535 except IOError as e:
536 raise ConfigurationError(six.text_type(e))
537
538
539 class ConfigurationError(Exception):
540 def __init__(self, msg):
541 self.msg = msg
542
543 def __str__(self):
544 return self.msg
545
546
547 class CircularReference(ConfigurationError):
548 def __init__(self, trail):
549 self.trail = trail
550
551 @property
552 def msg(self):
553 lines = [
554 "{} in {}".format(service_name, filename)
555 for (filename, service_name) in self.trail
556 ]
557 return "Circular reference:\n {}".format("\n extends ".join(lines))
558
559
560 class ComposeFileNotFound(ConfigurationError):
561 def __init__(self, supported_filenames):
562 super(ComposeFileNotFound, self).__init__("""
563 Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
564
565 Supported filenames: %s
566 """ % ", ".join(supported_filenames))
567
[end of compose/config.py]
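A small, hedged sketch exercising two of the helpers defined above (values chosen purely for illustration; assumes the package and its dependencies are importable):
```python
from compose.config import merge_service_dicts, parse_environment

base = {"image": "redis", "environment": ["FOO=1"], "ports": ["6379"]}
override = {"environment": {"BAR": "2"}, "ports": ["6380"]}

merged = merge_service_dicts(base, override)
print(merged["environment"])            # {'FOO': '1', 'BAR': '2'} -- the override wins on conflicts
print(merged["ports"])                  # ['6379', '6380']         -- list keys are concatenated
print(parse_environment(["A=1", "B"]))  # {'A': '1', 'B': None}    -- bare names resolve later from os.environ
```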
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import unicode_literals
4 from __future__ import absolute_import
5 from setuptools import setup, find_packages
6 import codecs
7 import os
8 import re
9 import sys
10
11
12 def read(*parts):
13 path = os.path.join(os.path.dirname(__file__), *parts)
14 with codecs.open(path, encoding='utf-8') as fobj:
15 return fobj.read()
16
17
18 def find_version(*file_paths):
19 version_file = read(*file_paths)
20 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
21 version_file, re.M)
22 if version_match:
23 return version_match.group(1)
24 raise RuntimeError("Unable to find version string.")
25
26
27 install_requires = [
28 'docopt >= 0.6.1, < 0.7',
29 'PyYAML >= 3.10, < 4',
30 'requests >= 2.6.1, < 2.7',
31 'texttable >= 0.8.1, < 0.9',
32 'websocket-client >= 0.11.0, < 1.0',
33 'docker-py >= 1.3.0, < 1.4',
34 'dockerpty >= 0.3.4, < 0.4',
35 'six >= 1.3.0, < 2',
36 'futures >= 3.0.3',
37 ]
38
39
40 tests_require = [
41 'mock >= 1.0.1',
42 'nose',
43 'pyinstaller',
44 'flake8',
45 ]
46
47
48 if sys.version_info < (2, 7):
49 tests_require.append('unittest2')
50
51
52 setup(
53 name='docker-compose',
54 version=find_version("compose", "__init__.py"),
55 description='Multi-container orchestration for Docker',
56 url='https://www.docker.com/',
57 author='Docker, Inc.',
58 license='Apache License 2.0',
59 packages=find_packages(exclude=['tests.*', 'tests']),
60 include_package_data=True,
61 test_suite='nose.collector',
62 install_requires=install_requires,
63 tests_require=tests_require,
64 entry_points="""
65 [console_scripts]
66 docker-compose=compose.cli.main:main
67 """,
68 )
69
[end of setup.py]
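For reference, the regular expression used by `find_version()` above behaves like this (the file contents below are hypothetical):
```python
import re

version_file = '__version__ = "1.3.2"\n'  # hypothetical contents of compose/__init__.py
match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
print(match.group(1))  # 1.3.2
```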
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
docker/compose
|
5a462305552de3272d9b54f25644f1e1de39f174
|
_get_legacy_containers_iter() TypeError: argument of type 'NoneType' is not iterable
I just upgraded from 1.1.0 to 1.3.2, and after dealing with the /tmp directory being mounted as noexec (issue #1339), I ran into another issue that I couldn't find in the backlog:
```
$ docker-compose up -d
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 32, in main
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 34, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 66, in perform_command
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 471, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.project", line 230, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 398, in remove_duplicate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 405, in duplicate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 112, in containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 56, in check_for_legacy_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 138, in get_legacy_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.legacy", line 152, in _get_legacy_containers_iter
TypeError: argument of type 'NoneType' is not iterable
```
Downgrading to 1.3.1 seems to alleviate this behavior.
|
Which docker version are you using? In docker 1.6 "no labels" is an empty mapping, but this error seems like it's returning `null` instead of the empty mapping.
My docker version:
```
Docker version 1.6.2, build 7c8fca2
```
I suspect this pull is related: #1643
Can confirm the same behaviour - `Docker version 1.6.2, build 7c8fca2`.
Upgraded from docker-compose 1.1.0 to 1.3.2 via `pip install -U docker-compose` and I get the following result from `docker-compose migrate-to-labels`:
```
Traceback (most recent call last):
File "/usr/local/bin/docker-compose", line 9, in <module>
load_entry_point('docker-compose==1.3.2', 'console_scripts', 'docker-compose')()
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 32, in main
command.sys_dispatch()
File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 21, in sys_dispatch
self.dispatch(sys.argv[1:], None)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 34, in dispatch
super(Command, self).dispatch(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 24, in dispatch
self.perform_command(*self.parse(argv, global_options))
File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 66, in perform_command
handler(project, command_options)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 515, in migrate_to_labels
legacy.migrate_project_to_labels(project)
File "/usr/local/lib/python2.7/dist-packages/compose/legacy.py", line 121, in migrate_project_to_labels
one_off=False,
File "/usr/local/lib/python2.7/dist-packages/compose/legacy.py", line 138, in get_legacy_containers
one_off=one_off,
File "/usr/local/lib/python2.7/dist-packages/compose/legacy.py", line 152, in _get_legacy_containers_iter
if LABEL_VERSION in container['Labels']:
TypeError: argument of type 'NoneType' is not iterable
```
Suggest simply prefixing the erroneous conditional on the last frame of that stacktrace (compose/legacy.py:152) with `container['Labels'] and`. If I get a chance later today I'll submit a PR myself, but I'm happy for someone to beat me to the punch!
Seeing this with `Docker version 1.7.0, build 0baf609`, so it's not just a docker-1.6 issue.
Can also confirm this same behavior with `Docker version 1.7.0, build 0baf609` and `Docker version 1.6.1, build 97cd073` with `docker-compose version: 1.3.2`. After reverting back to `docker-compose version: 1.2.0`, I no longer see the issue.
I think the easy fix is to change `container['Labels']` to `container.get('Labels', {})`, but I'm still not sure how this is happening, since it seems like the default empty value should always be the empty mapping `{}` not None.
Maybe this behaviour differs based on the version of docker that was used to create the container? Do you happen to know if the old containers were created with a version of docker < 1.6 ?
A paste of the `docker inspect <container name>` for one of the old containers would be really helpful for debugging this.
I'm seeing this and I think I had a set of containers that I upgraded to labels with the `docker-compose migrate-to-labels` command.
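A quick, hedged sketch of the difference between the two guards suggested in this thread (the key can exist with a null value, so a plain `dict.get` default is not enough on its own; the label key is illustrative):
```python
LABEL_VERSION = "com.docker.compose.version"  # illustrative label key
container = {"Labels": None}                  # key present, value null, as in the tracebacks above

print(container.get("Labels", {}))                       # None -- the default applies only to a missing key
print(LABEL_VERSION in (container.get("Labels") or {}))  # False -- `or {}` also covers present-but-null
```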
|
2015-07-15T16:14:53Z
|
<patch>
diff --git a/compose/legacy.py b/compose/legacy.py
--- a/compose/legacy.py
+++ b/compose/legacy.py
@@ -149,7 +149,7 @@ def _get_legacy_containers_iter(
for service in services:
for container in containers:
- if LABEL_VERSION in container['Labels']:
+ if LABEL_VERSION in (container.get('Labels') or {}):
continue
name = get_container_name(container)
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-37672
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Loc changes dtype when condition is completely False
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
df = pd.DataFrame({
"a": ["a"],
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - df["c"]
print(df)
```
This converts ``b`` to float and returns
```
a b c
0 a 1.0 1
```
#### Problem description
#### Expected Output
I would expect this to return an integer.
```
a b c
0 a 1 1
```
Interestingly, this is returned if we drop the column ``a``.
```
df = pd.DataFrame({
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - df["c"]
print(df)
```
or if we use a plain integer instead of ``df["c"]``.
```
df = pd.DataFrame({
"a": ["a"],
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - 1
print(df)
```
Both return ``1`` instead of ``1.0`` for column ``b``.
Edit: The condition [False] is just a simplification. This also happens with a real boolean condition that happens to evaluate to all False for the given input data.
#### Output of ``pd.show_versions()``
<details>
master
</details>
</issue>
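A hedged, self-contained check of the reported behaviour (the same frame as in the report, just printing the dtypes):
```python
import pandas as pd

df = pd.DataFrame({"a": ["a"], "b": [1], "c": [1]})
df.loc[[False], ["b"]] = 10 - df["c"]
print(df.dtypes)  # reported: "b" becomes float64 after the all-False assignment; expected: int64
```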
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://pandas.pydata.org/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
14 [](https://codecov.io/gh/pandas-dev/pandas)
15 [](https://pepy.tech/project/pandas)
16 [](https://gitter.im/pydata/pandas)
17 [](https://numfocus.org)
18 [](https://github.com/psf/black)
19 [](https://pycqa.github.io/isort/)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 Package Index (PyPI)](https://pypi.org/project/pandas) and on [Conda](https://docs.conda.io/en/latest/).
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays](https://www.numpy.org)
104 - [python-dateutil - Provides powerful extensions to the standard datetime module](https://dateutil.readthedocs.io/en/stable/index.html)
105 - [pytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations](https://github.com/stub42/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need [Cython](https://cython.org/) in addition to the normal
111 dependencies above. Cython can be installed from PyPI:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/cli/pip_install/#install-editable):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on GitHub in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/core/shared_docs.py]
1 from __future__ import annotations
2
3 _shared_docs: dict[str, str] = {}
4
5 _shared_docs[
6 "aggregate"
7 ] = """
8 Aggregate using one or more operations over the specified axis.
9
10 Parameters
11 ----------
12 func : function, str, list or dict
13 Function to use for aggregating the data. If a function, must either
14 work when passed a {klass} or when passed to {klass}.apply.
15
16 Accepted combinations are:
17
18 - function
19 - string function name
20 - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
21 - dict of axis labels -> functions, function names or list of such.
22 {axis}
23 *args
24 Positional arguments to pass to `func`.
25 **kwargs
26 Keyword arguments to pass to `func`.
27
28 Returns
29 -------
30 scalar, Series or DataFrame
31
32 The return can be:
33
34 * scalar : when Series.agg is called with single function
35 * Series : when DataFrame.agg is called with a single function
36 * DataFrame : when DataFrame.agg is called with several functions
37
38 Return scalar, Series or DataFrame.
39 {see_also}
40 Notes
41 -----
42 `agg` is an alias for `aggregate`. Use the alias.
43
44 Functions that mutate the passed object can produce unexpected
45 behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
46 for more details.
47
48 A passed user-defined-function will be passed a Series for evaluation.
49 {examples}"""
50
51 _shared_docs[
52 "compare"
53 ] = """
54 Compare to another {klass} and show the differences.
55
56 .. versionadded:: 1.1.0
57
58 Parameters
59 ----------
60 other : {klass}
61 Object to compare with.
62
63 align_axis : {{0 or 'index', 1 or 'columns'}}, default 1
64 Determine which axis to align the comparison on.
65
66 * 0, or 'index' : Resulting differences are stacked vertically
67 with rows drawn alternately from self and other.
68 * 1, or 'columns' : Resulting differences are aligned horizontally
69 with columns drawn alternately from self and other.
70
71 keep_shape : bool, default False
72 If true, all rows and columns are kept.
73 Otherwise, only the ones with different values are kept.
74
75 keep_equal : bool, default False
76 If true, the result keeps values that are equal.
77 Otherwise, equal values are shown as NaNs.
78 """
79
80 _shared_docs[
81 "groupby"
82 ] = """
83 Group %(klass)s using a mapper or by a Series of columns.
84
85 A groupby operation involves some combination of splitting the
86 object, applying a function, and combining the results. This can be
87 used to group large amounts of data and compute operations on these
88 groups.
89
90 Parameters
91 ----------
92 by : mapping, function, label, or list of labels
93 Used to determine the groups for the groupby.
94 If ``by`` is a function, it's called on each value of the object's
95 index. If a dict or Series is passed, the Series or dict VALUES
96 will be used to determine the groups (the Series' values are first
97 aligned; see ``.align()`` method). If a list or ndarray of length
98 equal to the selected axis is passed (see the `groupby user guide
99 <https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#splitting-an-object-into-groups>`),
100 the values are used as-is to determine the groups. A label or list
101 of labels may be passed to group by the columns in ``self``.
102 Notice that a tuple is interpreted as a (single) key.
103 axis : {0 or 'index', 1 or 'columns'}, default 0
104 Split along rows (0) or columns (1).
105 level : int, level name, or sequence of such, default None
106 If the axis is a MultiIndex (hierarchical), group by a particular
107 level or levels.
108 as_index : bool, default True
109 For aggregated output, return object with group labels as the
110 index. Only relevant for DataFrame input. as_index=False is
111 effectively "SQL-style" grouped output.
112 sort : bool, default True
113 Sort group keys. Get better performance by turning this off.
114 Note this does not influence the order of observations within each
115 group. Groupby preserves the order of rows within each group.
116 group_keys : bool, default True
117 When calling apply, add group keys to index to identify pieces.
118 squeeze : bool, default False
119 Reduce the dimensionality of the return type if possible,
120 otherwise return a consistent type.
121
122 .. deprecated:: 1.1.0
123
124 observed : bool, default False
125 This only applies if any of the groupers are Categoricals.
126 If True: only show observed values for categorical groupers.
127 If False: show all values for categorical groupers.
128 dropna : bool, default True
129 If True, and if group keys contain NA values, NA values together
130 with row/column will be dropped.
131 If False, NA values will also be treated as the key in groups.
132
133 .. versionadded:: 1.1.0
134
135 Returns
136 -------
137 %(klass)sGroupBy
138 Returns a groupby object that contains information about the groups.
139
140 See Also
141 --------
142 resample : Convenience method for frequency conversion and resampling
143 of time series.
144
145 Notes
146 -----
147 See the `user guide
148 <https://pandas.pydata.org/pandas-docs/stable/groupby.html>`__ for more
149 detailed usage and examples, including splitting an object into groups,
150 iterating through groups, selecting a group, aggregation, and more.
151 """
152
153 _shared_docs[
154 "melt"
155 ] = """
156 Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
157
158 This function is useful to massage a DataFrame into a format where one
159 or more columns are identifier variables (`id_vars`), while all other
160 columns, considered measured variables (`value_vars`), are "unpivoted" to
161 the row axis, leaving just two non-identifier columns, 'variable' and
162 'value'.
163
164 Parameters
165 ----------
166 id_vars : tuple, list, or ndarray, optional
167 Column(s) to use as identifier variables.
168 value_vars : tuple, list, or ndarray, optional
169 Column(s) to unpivot. If not specified, uses all columns that
170 are not set as `id_vars`.
171 var_name : scalar
172 Name to use for the 'variable' column. If None it uses
173 ``frame.columns.name`` or 'variable'.
174 value_name : scalar, default 'value'
175 Name to use for the 'value' column.
176 col_level : int or str, optional
177 If columns are a MultiIndex then use this level to melt.
178 ignore_index : bool, default True
179 If True, original index is ignored. If False, the original index is retained.
180 Index labels will be repeated as necessary.
181
182 .. versionadded:: 1.1.0
183
184 Returns
185 -------
186 DataFrame
187 Unpivoted DataFrame.
188
189 See Also
190 --------
191 %(other)s : Identical method.
192 pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
193 DataFrame.pivot : Return reshaped DataFrame organized
194 by given index / column values.
195 DataFrame.explode : Explode a DataFrame from list-like
196 columns to long format.
197
198 Examples
199 --------
200 >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
201 ... 'B': {0: 1, 1: 3, 2: 5},
202 ... 'C': {0: 2, 1: 4, 2: 6}})
203 >>> df
204 A B C
205 0 a 1 2
206 1 b 3 4
207 2 c 5 6
208
209 >>> %(caller)sid_vars=['A'], value_vars=['B'])
210 A variable value
211 0 a B 1
212 1 b B 3
213 2 c B 5
214
215 >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
216 A variable value
217 0 a B 1
218 1 b B 3
219 2 c B 5
220 3 a C 2
221 4 b C 4
222 5 c C 6
223
224 The names of 'variable' and 'value' columns can be customized:
225
226 >>> %(caller)sid_vars=['A'], value_vars=['B'],
227 ... var_name='myVarname', value_name='myValname')
228 A myVarname myValname
229 0 a B 1
230 1 b B 3
231 2 c B 5
232
233 Original index values can be kept around:
234
235 >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
236 A variable value
237 0 a B 1
238 1 b B 3
239 2 c B 5
240 0 a C 2
241 1 b C 4
242 2 c C 6
243
244 If you have multi-index columns:
245
246 >>> df.columns = [list('ABC'), list('DEF')]
247 >>> df
248 A B C
249 D E F
250 0 a 1 2
251 1 b 3 4
252 2 c 5 6
253
254 >>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
255 A variable value
256 0 a B 1
257 1 b B 3
258 2 c B 5
259
260 >>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
261 (A, D) variable_0 variable_1 value
262 0 a B E 1
263 1 b B E 3
264 2 c B E 5
265 """
266
267 _shared_docs[
268 "transform"
269 ] = """
270 Call ``func`` on self producing a {klass} with transformed values.
271
272 Produced {klass} will have same axis length as self.
273
274 Parameters
275 ----------
276 func : function, str, list-like or dict-like
277 Function to use for transforming the data. If a function, must either
278 work when passed a {klass} or when passed to {klass}.apply. If func
279 is both list-like and dict-like, dict-like behavior takes precedence.
280
281 Accepted combinations are:
282
283 - function
284 - string function name
285 - list-like of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
286 - dict-like of axis labels -> functions, function names or list-like of such.
287 {axis}
288 *args
289 Positional arguments to pass to `func`.
290 **kwargs
291 Keyword arguments to pass to `func`.
292
293 Returns
294 -------
295 {klass}
296 A {klass} that must have the same length as self.
297
298 Raises
299 ------
300 ValueError : If the returned {klass} has a different length than self.
301
302 See Also
303 --------
304 {klass}.agg : Only perform aggregating type operations.
305 {klass}.apply : Invoke function on a {klass}.
306
307 Notes
308 -----
309 Functions that mutate the passed object can produce unexpected
310 behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
311 for more details.
312
313 Examples
314 --------
315 >>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})
316 >>> df
317 A B
318 0 0 1
319 1 1 2
320 2 2 3
321 >>> df.transform(lambda x: x + 1)
322 A B
323 0 1 2
324 1 2 3
325 2 3 4
326
327 Even though the resulting {klass} must have the same length as the
328 input {klass}, it is possible to provide several input functions:
329
330 >>> s = pd.Series(range(3))
331 >>> s
332 0 0
333 1 1
334 2 2
335 dtype: int64
336 >>> s.transform([np.sqrt, np.exp])
337 sqrt exp
338 0 0.000000 1.000000
339 1 1.000000 2.718282
340 2 1.414214 7.389056
341
342 You can call transform on a GroupBy object:
343
344 >>> df = pd.DataFrame({{
345 ... "Date": [
346 ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
347 ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
348 ... "Data": [5, 8, 6, 1, 50, 100, 60, 120],
349 ... }})
350 >>> df
351 Date Data
352 0 2015-05-08 5
353 1 2015-05-07 8
354 2 2015-05-06 6
355 3 2015-05-05 1
356 4 2015-05-08 50
357 5 2015-05-07 100
358 6 2015-05-06 60
359 7 2015-05-05 120
360 >>> df.groupby('Date')['Data'].transform('sum')
361 0 55
362 1 108
363 2 66
364 3 121
365 4 55
366 5 108
367 6 66
368 7 121
369 Name: Data, dtype: int64
370
371 >>> df = pd.DataFrame({{
372 ... "c": [1, 1, 1, 2, 2, 2, 2],
373 ... "type": ["m", "n", "o", "m", "m", "n", "n"]
374 ... }})
375 >>> df
376 c type
377 0 1 m
378 1 1 n
379 2 1 o
380 3 2 m
381 4 2 m
382 5 2 n
383 6 2 n
384 >>> df['size'] = df.groupby('c')['type'].transform(len)
385 >>> df
386 c type size
387 0 1 m 3
388 1 1 n 3
389 2 1 o 3
390 3 2 m 4
391 4 2 m 4
392 5 2 n 4
393 6 2 n 4
394 """
395
396 _shared_docs[
397 "storage_options"
398 ] = """storage_options : dict, optional
399 Extra options that make sense for a particular storage connection, e.g.
400 host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
401 are forwarded to ``urllib`` as header options. For other URLs (e.g.
402 starting with "s3://", and "gcs://") the key-value pairs are forwarded to
403 ``fsspec``. Please see ``fsspec`` and ``urllib`` for more details."""
404
405 _shared_docs[
406 "replace"
407 ] = """
408 Replace values given in `to_replace` with `value`.
409
410 Values of the {klass} are replaced with other values dynamically.
411 {replace_iloc}
412
413 Parameters
414 ----------
415 to_replace : str, regex, list, dict, Series, int, float, or None
416 How to find the values that will be replaced.
417
418 * numeric, str or regex:
419
420 - numeric: numeric values equal to `to_replace` will be
421 replaced with `value`
422 - str: string exactly matching `to_replace` will be replaced
423 with `value`
424 - regex: regexs matching `to_replace` will be replaced with
425 `value`
426
427 * list of str, regex, or numeric:
428
429 - First, if `to_replace` and `value` are both lists, they
430 **must** be the same length.
431 - Second, if ``regex=True`` then all of the strings in **both**
432 lists will be interpreted as regexs otherwise they will match
433 directly. This doesn't matter much for `value` since there
434 are only a few possible substitution regexes you can use.
435 - str, regex and numeric rules apply as above.
436
437 * dict:
438
439 - Dicts can be used to specify different replacement values
440 for different existing values. For example,
441 ``{{'a': 'b', 'y': 'z'}}`` replaces the value 'a' with 'b' and
442 'y' with 'z'. To use a dict in this way the `value`
443 parameter should be `None`.
444 - For a DataFrame a dict can specify that different values
445 should be replaced in different columns. For example,
446 ``{{'a': 1, 'b': 'z'}}`` looks for the value 1 in column 'a'
447 and the value 'z' in column 'b' and replaces these values
448 with whatever is specified in `value`. The `value` parameter
449 should not be ``None`` in this case. You can treat this as a
450 special case of passing two lists except that you are
451 specifying the column to search in.
452 - For a DataFrame nested dictionaries, e.g.,
453 ``{{'a': {{'b': np.nan}}}}``, are read as follows: look in column
454 'a' for the value 'b' and replace it with NaN. The `value`
455 parameter should be ``None`` to use a nested dict in this
456 way. You can nest regular expressions as well. Note that
457 column names (the top-level dictionary keys in a nested
458 dictionary) **cannot** be regular expressions.
459
460 * None:
461
462 - This means that the `regex` argument must be a string,
463 compiled regular expression, or list, dict, ndarray or
464 Series of such elements. If `value` is also ``None`` then
465 this **must** be a nested dictionary or Series.
466
467 See the examples section for examples of each of these.
468 value : scalar, dict, list, str, regex, default None
469 Value to replace any values matching `to_replace` with.
470 For a DataFrame a dict of values can be used to specify which
471 value to use for each column (columns not in the dict will not be
472 filled). Regular expressions, strings and lists or dicts of such
473 objects are also allowed.
474 {inplace}
475 limit : int, default None
476 Maximum size gap to forward or backward fill.
477 regex : bool or same types as `to_replace`, default False
478 Whether to interpret `to_replace` and/or `value` as regular
479 expressions. If this is ``True`` then `to_replace` *must* be a
480 string. Alternatively, this could be a regular expression or a
481 list, dict, or array of regular expressions in which case
482 `to_replace` must be ``None``.
483 method : {{'pad', 'ffill', 'bfill', `None`}}
484 The method to use when for replacement, when `to_replace` is a
485 scalar, list or tuple and `value` is ``None``.
486
487 .. versionchanged:: 0.23.0
488 Added to DataFrame.
489
490 Returns
491 -------
492 {klass}
493 Object after replacement.
494
495 Raises
496 ------
497 AssertionError
498 * If `regex` is not a ``bool`` and `to_replace` is not
499 ``None``.
500
501 TypeError
502 * If `to_replace` is not a scalar, array-like, ``dict``, or ``None``
503 * If `to_replace` is a ``dict`` and `value` is not a ``list``,
504 ``dict``, ``ndarray``, or ``Series``
505 * If `to_replace` is ``None`` and `regex` is not compilable
506 into a regular expression or is a list, dict, ndarray, or
507 Series.
508 * When replacing multiple ``bool`` or ``datetime64`` objects and
509 the arguments to `to_replace` does not match the type of the
510 value being replaced
511
512 ValueError
513 * If a ``list`` or an ``ndarray`` is passed to `to_replace` and
514 `value` but they are not the same length.
515
516 See Also
517 --------
518 {klass}.fillna : Fill NA values.
519 {klass}.where : Replace values based on boolean condition.
520 Series.str.replace : Simple string replacement.
521
522 Notes
523 -----
524 * Regex substitution is performed under the hood with ``re.sub``. The
525 rules for substitution for ``re.sub`` are the same.
526 * Regular expressions will only substitute on strings, meaning you
527 cannot provide, for example, a regular expression matching floating
528 point numbers and expect the columns in your frame that have a
529 numeric dtype to be matched. However, if those floating point
530 numbers *are* strings, then you can do this.
531 * This method has *a lot* of options. You are encouraged to experiment
532 and play with this method to gain intuition about how it works.
533 * When dict is used as the `to_replace` value, it is like
534 key(s) in the dict are the to_replace part and
535 value(s) in the dict are the value parameter.
536
537 Examples
538 --------
539
540 **Scalar `to_replace` and `value`**
541
542 >>> s = pd.Series([1, 2, 3, 4, 5])
543 >>> s.replace(1, 5)
544 0 5
545 1 2
546 2 3
547 3 4
548 4 5
549 dtype: int64
550
551 >>> df = pd.DataFrame({{'A': [0, 1, 2, 3, 4],
552 ... 'B': [5, 6, 7, 8, 9],
553 ... 'C': ['a', 'b', 'c', 'd', 'e']}})
554 >>> df.replace(0, 5)
555 A B C
556 0 5 5 a
557 1 1 6 b
558 2 2 7 c
559 3 3 8 d
560 4 4 9 e
561
562 **List-like `to_replace`**
563
564 >>> df.replace([0, 1, 2, 3], 4)
565 A B C
566 0 4 5 a
567 1 4 6 b
568 2 4 7 c
569 3 4 8 d
570 4 4 9 e
571
572 >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
573 A B C
574 0 4 5 a
575 1 3 6 b
576 2 2 7 c
577 3 1 8 d
578 4 4 9 e
579
580 >>> s.replace([1, 2], method='bfill')
581 0 3
582 1 3
583 2 3
584 3 4
585 4 5
586 dtype: int64
587
588 **dict-like `to_replace`**
589
590 >>> df.replace({{0: 10, 1: 100}})
591 A B C
592 0 10 5 a
593 1 100 6 b
594 2 2 7 c
595 3 3 8 d
596 4 4 9 e
597
598 >>> df.replace({{'A': 0, 'B': 5}}, 100)
599 A B C
600 0 100 100 a
601 1 1 6 b
602 2 2 7 c
603 3 3 8 d
604 4 4 9 e
605
606 >>> df.replace({{'A': {{0: 100, 4: 400}}}})
607 A B C
608 0 100 5 a
609 1 1 6 b
610 2 2 7 c
611 3 3 8 d
612 4 400 9 e
613
614 **Regular expression `to_replace`**
615
616 >>> df = pd.DataFrame({{'A': ['bat', 'foo', 'bait'],
617 ... 'B': ['abc', 'bar', 'xyz']}})
618 >>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
619 A B
620 0 new abc
621 1 foo new
622 2 bait xyz
623
624 >>> df.replace({{'A': r'^ba.$'}}, {{'A': 'new'}}, regex=True)
625 A B
626 0 new abc
627 1 foo bar
628 2 bait xyz
629
630 >>> df.replace(regex=r'^ba.$', value='new')
631 A B
632 0 new abc
633 1 foo new
634 2 bait xyz
635
636 >>> df.replace(regex={{r'^ba.$': 'new', 'foo': 'xyz'}})
637 A B
638 0 new abc
639 1 xyz new
640 2 bait xyz
641
642 >>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
643 A B
644 0 new abc
645 1 new new
646 2 bait xyz
647
648 Compare the behavior of ``s.replace({{'a': None}})`` and
649 ``s.replace('a', None)`` to understand the peculiarities
650 of the `to_replace` parameter:
651
652 >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
653
654 When one uses a dict as the `to_replace` value, it is like the
655 value(s) in the dict are equal to the `value` parameter.
656 ``s.replace({{'a': None}})`` is equivalent to
657 ``s.replace(to_replace={{'a': None}}, value=None, method=None)``:
658
659 >>> s.replace({{'a': None}})
660 0 10
661 1 None
662 2 None
663 3 b
664 4 None
665 dtype: object
666
667 When ``value=None`` and `to_replace` is a scalar, list or
668 tuple, `replace` uses the method parameter (default 'pad') to do the
669 replacement. So this is why the 'a' values are being replaced by 10
670 in rows 1 and 2 and 'b' in row 4 in this case.
671 The command ``s.replace('a', None)`` is actually equivalent to
672 ``s.replace(to_replace='a', value=None, method='pad')``:
673
674 >>> s.replace('a', None)
675 0 10
676 1 10
677 2 10
678 3 b
679 4 b
680 dtype: object
681 """
682
[end of pandas/core/shared_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
bc17343f934a33dc231c8c74be95d8365537c376
|
BUG: Loc changes dtype when condition is completely False
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
df = pd.DataFrame({
"a": ["a"],
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - df["c"]
print(df)
```
This converts ``b`` to float and returns
```
a b c
0 a 1.0 1
```
#### Problem description
Assigning with a boolean indexer that selects no rows should be a no-op; in particular, it should not silently change the dtype of column ``b`` from integer to float.
#### Expected Output
Would expect that this returns an integer.
```
a b c
0 a 1 1
```
Interestingly, this is returned if we drop the column ``a``.
```python
df = pd.DataFrame({
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - df["c"]
print(df)
```
or if we use an integer instead of ``df["c"]``.
```python
df = pd.DataFrame({
"a": ["a"],
"b": [1],
"c": [1]
})
df.loc[[False], ["b"]] = 10 - 1
print(df)
```
In both cases column ``b`` keeps the integer ``1`` instead of becoming ``1.0``.
Edit: The condition ``[False]`` is just a simplification. This also happens with a real condition that, depending on the input data, sometimes evaluates to all ``False``.
#### Output of ``pd.show_versions()``
<details>
master
</details>
|
2020-11-06T20:49:53Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -697,6 +697,7 @@ Indexing
- Bug in :meth:`DataFrame.loc.__getitem__` incorrectly raising ``KeyError`` when selecting a single column with a boolean key (:issue:`44322`).
- Bug in setting :meth:`DataFrame.iloc` with a single ``ExtensionDtype`` column and setting 2D values e.g. ``df.iloc[:] = df.values`` incorrectly raising (:issue:`44514`)
- Bug in indexing on columns with ``loc`` or ``iloc`` using a slice with a negative step with ``ExtensionDtype`` columns incorrectly raising (:issue:`44551`)
+- Bug in :meth:`DataFrame.loc.__setitem__` changing dtype when indexer was completely ``False`` (:issue:`37550`)
- Bug in :meth:`IntervalIndex.get_indexer_non_unique` returning boolean mask instead of array of integers for a non unique and non monotonic index (:issue:`44084`)
- Bug in :meth:`IntervalIndex.get_indexer_non_unique` not handling targets of ``dtype`` 'object' with NaNs correctly (:issue:`44482`)
-
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2058,6 +2058,8 @@ def ravel(i):
# we have a frame, with multiple indexers on both axes; and a
# series, so need to broadcast (see GH5206)
if sum_aligners == self.ndim and all(is_sequence(_) for _ in indexer):
+ if is_empty_indexer(indexer[0], ser._values):
+ return ser._values.copy()
ser = ser.reindex(obj.axes[0][indexer[0]], copy=True)._values
# single indexer
</patch>
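As a quick sanity check of the fix above (a hypothetical verification snippet, not part of the patch), the reproducer from the issue should now keep the integer dtype:
```python
import pandas as pd

df = pd.DataFrame({"a": ["a"], "b": [1], "c": [1]})
df.loc[[False], ["b"]] = 10 - df["c"]

# With the empty-indexer early return, no reindex-driven upcast happens,
# so column "b" should keep its integer dtype.
assert df["b"].dtype == df["c"].dtype
print(df)
```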
|
[]
|
[]
| ||||
apache__airflow-9531
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support airflowignore for plugins
Hello,
Airflow has a mechanism, based on a .airflowignore file, to ignore files before they are automatically loaded.
A .airflowignore file specifies the directories or files in DAG_FOLDER that Airflow should intentionally ignore.
> For example, you can prepare a .airflowignore file with contents
> ```
> project_a
> tenant_[\d]
> ```
> Then files like project_a_dag_1.py, TESTING_project_a.py, tenant_1.py, project_a/dag_1.py, and tenant_1/dag_1.py in your DAG_FOLDER would be ignored (If a directory’s name matches any of the patterns, this directory and all its subfolders would not be scanned by Airflow at all. This improves efficiency of DAG finding).
More information: https://airflow.readthedocs.io/en/latest/concepts.html?highlight=airflowignore
It would be helpful to make a similar feature available for plugins; this would improve the efficiency of plugin discovery.
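For illustration, here is a minimal sketch of how the plugins manager could skip ignored files — this is not Airflow's actual code, just the idea, with made-up helper names that mirror the DAG-folder behaviour:
```python
import os
import re


def _read_ignore_patterns(folder):
    """Collect regex patterns from a .airflowignore file in ``folder`` (sketch)."""
    ignore_file = os.path.join(folder, ".airflowignore")
    if not os.path.isfile(ignore_file):
        return []
    with open(ignore_file) as f:
        return [line.strip() for line in f if line.strip() and not line.startswith("#")]


def find_plugin_files(plugins_folder):
    """Yield .py files under ``plugins_folder`` that match no ignore pattern."""
    patterns = _read_ignore_patterns(plugins_folder)
    for root, _, files in os.walk(plugins_folder):
        for name in files:
            if not name.endswith(".py"):
                continue
            rel_path = os.path.relpath(os.path.join(root, name), plugins_folder)
            if any(re.findall(pattern, rel_path) for pattern in patterns):
                continue  # ignored, just like DAG files matched by .airflowignore
            yield os.path.join(root, name)
```
With the example patterns above, files such as tenant_1_plugin.py or project_a/hooks.py would simply never be imported by the plugins manager.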
If anyone is interested in this, I am willing to provide all the necessary tips and information.
Are you wondering how to start contributing to this project? Start by reading our [contributor guide](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst)
Cheers
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19 # Apache Airflow
20
21 [](https://badge.fury.io/py/apache-airflow)
22 
23 [](https://codecov.io/github/apache/airflow?branch=master)
24 [](https://airflow.readthedocs.io/en/latest/?badge=latest)
25 [](http://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27
28 [](https://twitter.com/ApacheAirflow)
29 [](https://apache-airflow-slack.herokuapp.com/)
30
31 [Apache Airflow](https://airflow.apache.org/docs/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor
32 workflows.
33
34 When workflows are defined as code, they become more maintainable,
35 versionable, testable, and collaborative.
36
37 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
38
39 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
40 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
41 **Table of contents**
42
43 - [Requirements](#requirements)
44 - [Getting started](#getting-started)
45 - [Installing from PyPI](#installing-from-pypi)
46 - [Beyond the Horizon](#beyond-the-horizon)
47 - [Principles](#principles)
48 - [User Interface](#user-interface)
49 - [Backport packages](#backport-packages)
50 - [Contributing](#contributing)
51 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
52 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
53 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
54 - [Links](#links)
55
56 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
57
58 ## Requirements
59
60 Apache Airflow is tested with:
61
62 ### Master version (2.0.0dev)
63
64 * Python versions: 3.6, 3.7, 3.8
65 * Postgres DB: 9.6, 10
66 * MySQL DB: 5.7
67 * Sqlite - latest stable (it is used mainly for development purpose)
68 * Kubernetes - 1.16.2, 1.17.0
69
70 ### Stable version (1.10.10)
71
72 * Python versions: 2.7, 3.5, 3.6, 3.7
73 * Postgres DB: 9.6, 10
74 * MySQL DB: 5.6, 5.7
75 * Sqlite - latest stable (it is used mainly for development purpose)
76 * Kubernetes - 1.16.2, 1.17.0
77
78 ### Additional notes on Python version requirements
79
80 * Stable version [requires](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3 when using Python 3
81 * Stable version is currently incompatible with Python 3.8 due to [a known compatibility issue](https://github.com/Tinche/cattrs/issues/77) with a dependent library
82
83 ## Getting started
84
85 Please visit the Airflow Platform documentation (latest **stable** release) for help with [installing Airflow](https://airflow.apache.org/installation.html), getting a [quick start](https://airflow.apache.org/start.html), or a more complete [tutorial](https://airflow.apache.org/tutorial.html).
86
87 Documentation of GitHub master (latest development branch): [ReadTheDocs Documentation](https://airflow.readthedocs.io/en/latest/)
88
89 For further information, please visit the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Home).
90
91 Official container (Docker) images for Apache Airflow are described in [IMAGES.rst](IMAGES.rst).
92
93 ## Installing from PyPI
94
95 Airflow is published as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky
96 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open and
97 applications usually pin them, but we should do neither and both at the same time. We decided to keep
98 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
99 if needed. This means that from time to time plain `pip install apache-airflow` will not work or will
100 produce unusable Airflow installation.
101
102 In order to have repeatable installation, however, starting from **Airflow 1.10.10** we also keep a set of
103 "known-to-be-working" requirement files in the `requirements` folder. Those "known-to-be-working"
104 requirements are per major/minor python version (3.6/3.7/3.8). You can use them as constraint files
105 when installing Airflow from PyPI. Note that you have to specify correct Airflow version and python versions
106 in the URL.
107
108 1. Installing just airflow:
109
110 ```bash
111 pip install apache-airflow==1.10.10 \
112 --constraint https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt
113 ```
114
115 2. Installing with extras (for example postgres,gcp)
116 ```bash
117 pip install apache-airflow[postgres,gcp]==1.10.10 \
118 --constraint https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt
119 ```
120
121 ## Beyond the Horizon
122
123 Airflow **is not** a data streaming solution. Tasks do not move data from
124 one to the other (though tasks can exchange metadata!). Airflow is not
125 in the [Spark Streaming](http://spark.apache.org/streaming/)
126 or [Storm](https://storm.apache.org/) space, it is more comparable to
127 [Oozie](http://oozie.apache.org/) or
128 [Azkaban](https://azkaban.github.io/).
129
130 Workflows are expected to be mostly static or slowly changing. You can think
131 of the structure of the tasks in your workflow as slightly more dynamic
132 than a database structure would be. Airflow workflows are expected to look
133 similar from a run to the next, this allows for clarity around
134 unit of work and continuity.
135
136 ## Principles
137
138 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
139 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
140 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
141 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
142
143 ## User Interface
144
145 - **DAGs**: Overview of all DAGs in your environment.
146
147 
148
149 - **Tree View**: Tree representation of a DAG that spans across time.
150
151 
152
153 - **Graph View**: Visualization of a DAG's dependencies and their current status for a specific run.
154
155 
156
157 - **Task Duration**: Total time spent on different tasks over time.
158
159 
160
161 - **Gantt View**: Duration and overlap of a DAG.
162
163 
164
165 - **Code View**: Quick way to view source code of a DAG.
166
167 
168
169
170 ## Backport packages
171
172 ### Context: Airflow 2.0 operators, hooks, and secrets
173
174 Currently, stable Apache Airflow versions are from the 1.10.* series.
175 We are working on the future, major version of Airflow from the 2.0.* series.
176 It is going to be released in 2020. However, the exact time of release depends on many factors and is
177 not yet confirmed.
178
179 We have already a lot of changes in the operators, transfers, hooks, sensors, secrets for many external
180 systems, but they are not used nor tested widely because they are part of the master/2.0 release.
181
182 In the Airflow 2.0 - following AIP-21 "change in import paths" all the non-core interfaces to external
183 systems of Apache Airflow have been moved to the "airflow.providers" package.
184
185 Thanks to that and automated backport effort we took, the operators from Airflow 2.0
186 can be used in Airflow 1.10 as separately installable packages, with the constraint that
187 those packages can only be used in python3.6+ environment.
188
189 ### Installing Airflow 2.0 operators in Airflow 1.10
190
191 We released backport packages that can be installed for older Airflow versions.
192 Those backport packages are going to be released more frequently that main Airflow 1.10.& releases.
193
194 You will not have to upgrade your Airflow version to use those packages. You can find those packages in the
195 [PyPI](https://pypi.org/search/?q=apache-airflow-backport-providers&o=) and install them separately for each
196 provider.
197
198 Those packages are available now and can be used in the latest Airflow 1.10* version. Most of those
199 packages are also installable and usable in most Airflow 1.10.* releases but there is no extensive testing
200 done beyond the latest released version, so you might expect more problems in earlier Airflow versions.
201
202 ### An easier migration path to 2.0
203
204 With backported providers package users can migrate their DAGs to the new providers package incrementally
205 and once they convert to the new operators/sensors/hooks they can seamlessly migrate their
206 environments to Airflow 2.0. The nice thing about providers backport packages is that you can use
207 both old and new classes at the same time - even in the same DAG. So your migration can be gradual and smooth.
208 Note that in Airflow 2.0 old classes raise deprecation warning and redirect to the new classes wherever
209 it is possible. In some rare cases the new operators will not be fully backwards compatible - you will find
210 information about those cases in [UPDATING.md](UPDATING.md) where we explained all such cases. Switching
211 early to the Airflow 2.0 operators while still running Airflow 1.10 will make your migration much easier.
212
213 More information about the status and releases of the back-ported packages are available
214 at [Backported providers package page](https://cwiki.apache.org/confluence/display/AIRFLOW/Backported+providers+packages+for+Airflow+1.10.*+series)
215
216
217 ### Installing backport packages
218
219 Note that the backport packages might require extra dependencies. Pip installs the required dependencies
220 automatically when it installs the backport package, but there are sometimes cross-dependencies between the
221 backport packages. For example `google` package has cross-dependency with `amazon` package to allow
222 transfers between those two cloud providers. You might need to install those packages in case you use
223 cross-dependent packages. The easiest way to install them is to use "extras" when installing the package,
224 for example the below will install both `google` and `amazon` backport packages:
225
226 ```bash
227 pip install apache-airflow-backport-providers-google[amazon]
228 ```
229
230 This is all documented in the PyPI description of the packages
231 as well as in the README.md file available for each provider package. For example for google package
232 you can find the readme in [README.md](airflow/providers/google/README.md). You will also find there
233 the summary of both - new classes and moved classes as well as requirement information.
234
235 ### Troubleshooting installing backport packages
236
237 Backport providers only work when they are installed in the same namespace as the 'apache-airflow' 1.10
238 package. This is majority of cases when you simply run `pip install` - it installs all packages
239 in the same folder (usually in `/usr/local/lib/pythonX.Y/site-packages`). But when you install
240 the `apache-airflow` and `apache-airflow-backport-package-*` using different methods (for example using
241 `pip install -e .` or `pip install --user` they might be installed in different namespaces.
242 If that's the case, the provider packages will not be importable (the error in such case is
243 `ModuleNotFoundError: No module named 'airflow.providers'`).
244
245 If you experience the problem, you can easily fix it by creating symbolic link
246 in your installed "airflow" folder to the "providers" folder where you installed your backport packages.
247 If you installed it with `-e`, this link should be created in your airflow
248 sources, if you installed it with the `--user` flag it should be from the
249 `~/.local/lib/pythonX.Y/site-packages/airflow/` folder,
250
251
252 ## Contributing
253
254 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst).
255
256
257 ## Who uses Apache Airflow?
258
259 As the Apache Airflow community grows, we'd like to keep track of who is using
260 the platform. Please send a PR with your company name and @githubhandle
261 if you may.
262
263 Currently **officially** using Airflow:
264
265 1. [4G Capital](http://www.4g-capital.com/) [[@posei](https://github.com/posei)]
266 1. [6play](https://www.6play.fr) [[@lemourA](https://github.com/lemoura), [@achaussende](https://github.com/achaussende), [@d-nguyen](https://github.com/d-nguyen), [@julien-gm](https://github.com/julien-gm)]
267 1. [8fit](https://8fit.com/) [[@nicor88](https://github.com/nicor88), [@frnzska](https://github.com/frnzska)]
268 1. [90 Seconds](https://90seconds.tv/) [[@aaronmak](https://github.com/aaronmak)]
269 1. [99](https://99taxis.com) [[@fbenevides](https://github.com/fbenevides), [@gustavoamigo](https://github.com/gustavoamigo) & [@mmmaia](https://github.com/mmmaia)]
270 1. [AdBOOST](https://www.adboost.sk) [[AdBOOST](https://github.com/AdBOOST)]
271 1. [Adobe](https://www.adobe.com/) [[@mishikaSingh](https://github.com/mishikaSingh), [@ramandumcs](https://github.com/ramandumcs), [@vardancse](https://github.com/vardancse)]
272 1. [Agari](https://github.com/agaridata) [[@r39132](https://github.com/r39132)]
273 1. [Agoda](https://agoda.com) [[@akki](https://github.com/akki)]
274 1. [Airbnb](http://airbnb.io/) [[@mistercrunch](https://github.com/mistercrunch), [@artwr](https://github.com/artwr)]
275 1. [AirDNA](https://www.airdna.co)
276 1. [Airfinity](https://www.airfinity.com) [[@sibowyer](https://github.com/sibowyer)]
277 1. [Airtel](https://www.airtel.in/) [[@harishbisht](https://github.com/harishbisht)]
278 1. [Akamas](https://akamas.io) [[@GiovanniPaoloGibilisco](https://github.com/GiovanniPaoloGibilisco), [@lucacavazzana](https://github.com/lucacavazzana)]
279 1. [Alan](https://alan.eu) [[@charles-go](https://github.com/charles-go)]
280 1. [allegro.pl](http://allegro.tech/) [[@kretes](https://github.com/kretes)]
281 1. [AloPeyk](https://alopeyk.com) [[@blcksrx](https://github.com/blcksrx), [@AloPeyk](https://github.com/AloPeyk)]
282 1. [AltX](https://www.getaltx.com/about) [[@pedromduarte](https://github.com/pedromduarte)]
283 1. [AMPATH](https://www.ampathkenya.org/)[[@AMPATH](https://github.com/AMPATH), [@fatmali](https://github.com/fatmali)]
284 1. [Apigee](https://apigee.com) [[@btallman](https://github.com/btallman)]
285 1. [ARGO Labs](http://www.argolabs.org) [[@California Data Collaborative](https://github.com/California-Data-Collaborative)]
286 1. [ARMEDANGELS](https://www.armedangels.de) [[@swiffer](https://github.com/swiffer)]
287 1. [Arquivei](https://www.arquivei.com.br/) [[@arquivei](https://github.com/arquivei)]
288 1. [Arrive](https://www.arrive.com/)
289 1. [Asana](https://asana.com/) [[@chang](https://github.com/chang), [@dima-asana](https://github.com/dima-asana), [@jdavidheiser](https://github.com/jdavidheiser), [@ricardoandresrojas](https://github.com/ricardoandresrojas)]
290 1. [Astronomer](http://www.astronomer.io) [[@schnie](https://github.com/schnie), [@ashb](https://github.com/ashb), [@kaxil](https://github.com/kaxil), [@dimberman](https://github.com/dimberman), [@andriisoldatenko](https://github.com/andriisoldatenko), [@ryw](https://github.com/ryw), [@andrewhharmon](https://github.com/andrewhharmon)]
291 1. [Auth0](https://auth0.com) [[@sicarul](https://github.com/sicarul)]
292 1. [Automattic](https://automattic.com/) [[@anandnalya](https://github.com/anandnalya), [@bperson](https://github.com/bperson), [@khrol](https://github.com/Khrol), [@xyu](https://github.com/xyu)]
293 1. [Away](https://awaytravel.com) [[@trunsky](https://github.com/trunsky)]
294 1. [Azri Solutions](http://www.azrisolutions.com/) [[@userimack](https://github.com/userimack)]
295 1. [Bagelcode](https://site.bagelcode.com/)
296 1. [BalanceHero](http://truebalance.io/) [[@swalloow](https://github.com/swalloow)]
297 1. [Banco de Formaturas](https://www.bancodeformaturas.com.br) [[@guiligan](https://github.com/guiligan)]
298 1. [BandwidthX](http://www.bandwidthx.com) [[@dineshdsharma](https://github.com/dineshdsharma)]
299 1. [Basetis](http://www.basetis.com)
300 1. [BBM](https://www.bbm.com/)
301 1. [Beamly](https://www.beamly.com/) [[@christopheralcock](https://github.com/christopheralcock)]
302 1. [Beeswax](https://beeswax.com/)
303 1. [Bellhops](https://github.com/bellhops)
304 1. [BelugaDB](https://belugadb.com) [[@fabio-nukui](https://github.com/fabio-nukui) & [@joao-sallaberry](http://github.com/joao-sallaberry) & [@lucianoviola](https://github.com/lucianoviola) & [@tmatuki](https://github.com/tmatuki)]
305 1. [Betterment](https://www.betterment.com/) [[@betterment](https://github.com/Betterment)]
306 1. [Bexs Bank](https://www.bexs.com.br/en) [[@felipefb](https://github.com/felipefb) & [@ilarsen](https://github.com/ishvann)]
307 1. [BigQuant](https://bigquant.com/) [[@bigquant](https://github.com/bigquant)]
308 1. [Birdz by Veolia](https://www.birdz.com/en/) [[@benjamingrenier](https://github.com/benjamingrenier)]
309 1. [BlaBlaCar](https://www.blablacar.com) [[@puckel](https://github.com/puckel) & [@wmorin](https://github.com/wmorin)]
310 1. [Blacklane](https://www.blacklane.com) [[@serkef](https://github.com/serkef)]
311 1. [Bloc](https://www.bloc.io) [[@dpaola2](https://github.com/dpaola2)]
312 1. [Bloomberg](https://www.techatbloomberg.com) [[@dimberman](https://github.com/dimberman)]
313 1. [Blue Yonder](http://www.blue-yonder.com) [[@blue-yonder](https://github.com/blue-yonder)]
314 1. [BlueApron](https://www.blueapron.com) [[@jasonjho](https://github.com/jasonjho) & [@matthewdavidhauser](https://github.com/matthewdavidhauser)]
315 1. [Bluecore](https://www.bluecore.com) [[@JLDLaughlin](https://github.com/JLDLaughlin)]
316 1. [Bluekiri](https://bluekiri.com) [[@Bluekiri](https://github.com/bluekiri)]
317 1. [Boda Telecom Suite - CE](https://github.com/bodastage/bts-ce) [[@erssebaggala](https://github.com/erssebaggala), [@bodastage](https://github.com/bodastage)]
318 1. [Bodastage Solutions](http://bodastage.com) [[@erssebaggala](https://github.com/erssebaggala), [@bodastage](https://github.com/bodastage)]
319 1. [Bombora Inc](https://bombora.com/) [[@jeffkpayne](https://github.com/jeffkpayne), [@pakelley](https://github.com/pakelley), [@dNavalta](https://github.com/dNavalta), [@austynh](https://github.com/austynh), [@TheOriginalAlex](https://github.com/TheOriginalAlex)]
320 1. [Bonial International GmbH](https://www.bonial.com/)
321 1. [Bonnier Broadcasting](http://www.bonnierbroadcasting.com) [[@wileeam](https://github.com/wileeam)]
322 1. [BounceX](http://www.bouncex.com) [[@JoshFerge](https://github.com/JoshFerge), [@hudsonrio](https://github.com/hudsonrio), [@ronniekritou](https://github.com/ronniekritou)]
323 1. [Braintree](https://www.braintreepayments.com) [[@coopergillan](https://github.com/coopergillan), [@curiousjazz77](https://github.com/curiousjazz77), [@raymondberg](https://github.com/raymondberg)]
324 1. [Branch](https://branch.io) [[@sdebarshi](https://github.com/sdebarshi), [@dmitrig01](https://github.com/dmitrig01)]
325 1. [Caesars Entertainment](https://www.caesars.com)
326 1. [California Data Collaborative](https://github.com/California-Data-Collaborative) powered by [ARGO Labs](http://www.argolabs.org)
327 1. [Capital One](https://www.capitalone.com) [[@anoopengineer](https://github.com/anoopengineer)]
328 1. [Carbonite](https://www.carbonite.com) [[@ajbosco](https://github.com/ajbosco)]
329 1. [CarLabs](https://www.carlabs.ai/) [[@sganz](https://github.com/sganz) & [@odannyc](https://github.com/odannyc)]
330 1. [CAVA](https://www.cava.com) [[@minh5](http://github.com/minh5) & [@patchus](http://github.com/patchus)]
331 1. [Celect](http://www.celect.com) [[@superdosh](https://github.com/superdosh) & [@chadcelect](https://github.com/chadcelect)]
332 1. [Censys](https://censys.io) [[@zakird](https://github.com/zakird), [@dadrian](https://github.com/dadrian), & [@andrewsardone](https://github.com/andrewsardone)]
333 1. [Change.org](https://www.change.org) [[@change](https://github.com/change), [@vijaykramesh](https://github.com/vijaykramesh)]
334 1. [Chartboost](https://www.chartboost.com) [[@cgelman](https://github.com/cgelman) & [@dclubb](https://github.com/dclubb)]
335 1. [Checkr](https://checkr.com) [[@tongboh](https://github.com/tongboh)]
336 1. [Children's Hospital of Philadelphia Division of Genomic Diagnostics](http://www.chop.edu/centers-programs/division-genomic-diagnostics) [[@genomics-geek](https://github.com/genomics-geek/)]
337 1. [Cinimex DataLab](http://cinimex.ru) [[@kdubovikov](https://github.com/kdubovikov)]
338 1. [City of San Diego](http://sandiego.gov) [[@MrMaksimize](https://github.com/mrmaksimize), [@andrell81](https://github.com/andrell81) & [@arnaudvedy](https://github.com/arnaudvedy)]
339 1. [City of Toronto](https://www.toronto.ca/) [[@CityofToronto](https://github.com/CityofToronto), [@radumas](https://github.com/radumas)]
340 1. [ciValue](https://civalue.com/) [[@chencivalue](https://github.com/chencivalue), [@YoavGaudin](https://github.com/YoavGaudin), [@saleem-boshnak](https://github.com/saleem-boshnak)]
341 1. [Civey](https://civey.com/) [[@WesleyBatista](https://github.com/WesleyBatista)]
342 1. [Clairvoyant](https://clairvoyantsoft.com) [[@shekharv](https://github.com/shekharv)]
343 1. [Classmethod, Inc.](https://classmethod.jp/) [[@shoito](https://github.com/shoito)]
344 1. [Cleartax](https://cleartax.in/) [[@anks](https://github.com/anks) & [@codebuff](https://github.com/codebuff)]
345 1. [Clover Health](https://www.cloverhealth.com) [[@gwax](https://github.com/gwax) & [@vansivallab](https://github.com/vansivallab)]
346 1. [Colgate-Palmolive](https://www.colgatepalmolive.com/) [[@fhoda](https://github.com/fhoda)]
347 1. [Collectivehealth Inc.](https://www.collectivehealth.com) [[@retornam](https://github.com/retornam)]
348 1. [Compass](https://www.compass.com) [[@wdhorton](https://github.com/wdhorton)]
349 1. [ConnectWise](https://www.connectwise.com/) [[@jacobeturpin](https://github.com/jacobeturpin)]
350 1. [ContaAzul](https://www.contaazul.com) [[@bern4rdelli](https://github.com/bern4rdelli), [@renanleme](https://github.com/renanleme) & [@sabino](https://github.com/sabino)]
351 1. [Cotap](https://github.com/cotap/) [[@maraca](https://github.com/maraca) & [@richardchew](https://github.com/richardchew)]
352 1. [Craig@Work](https://www.craigatwork.com)
353 1. [Crealytics](https://crealytics.com)
354 1. [Credit Karma](https://www.creditkarma.com/) [[@preete-dixit-ck](https://github.com/preete-dixit-ck) & [@harish-gaggar-ck](https://github.com/harish-gaggar-ck) & [@greg-finley-ck](https://github.com/greg-finley-ck)]
355 1. [Creditas](https://www.creditas.com.br) [[@dcassiano](https://github.com/dcassiano)]
356 1. [CreditCards.com](https://www.creditcards.com/)[[@vmAggies](https://github.com/vmAggies) & [@jay-wallaby](https://github.com/jay-wallaby)]
357 1. [Cryptalizer.com](https://www.cryptalizer.com/)
358 1. [Custom Ink](https://www.customink.com/) [[@david-dalisay](https://github.com/david-dalisay), [@dmartin11](https://github.com/dmartin11) & [@mpeteuil](https://github.com/mpeteuil)]
359 1. [Cyscale](https://cyscale.com) [[@ocical](https://github.com/ocical)]
360 1. [Dailymotion](http://www.dailymotion.com/fr) [[@germaintanguy](https://github.com/germaintanguy) & [@hc](https://github.com/hc)]
361 1. [Danamica](https://www.danamica.dk) [[@testvinder](https://github.com/testvinder)]
362 1. [Data Reply](https://www.datareply.co.uk/) [[@kaxil](https://github.com/kaxil)]
363 1. [DataCamp](https://datacamp.com/) [[@dgrtwo](https://github.com/dgrtwo)]
364 1. [DataFox](https://www.datafox.com/) [[@sudowork](https://github.com/sudowork)]
365 1. [Dentsu Inc.](http://www.dentsu.com/) [[@bryan831](https://github.com/bryan831) & [@loozhengyuan](https://github.com/loozhengyuan)]
366 1. [Deseret Digital Media](http://deseretdigital.com/) [[@formigone](https://github.com/formigone)]
367 1. [Digital First Media](http://www.digitalfirstmedia.com/) [[@duffn](https://github.com/duffn) & [@mschmo](https://github.com/mschmo) & [@seanmuth](https://github.com/seanmuth)]
368 1. [DigitalOcean](https://digitalocean.com/) [[@ajbosco](https://github.com/ajbosco)]
369 1. [Digitas Pixelpark](https://www.digitaspixelpark.com/) [[@feluelle](https://github.com/feluelle)]
370 1. [DoorDash](https://www.doordash.com/)
371 1. [Dotmodus](http://dotmodus.com) [[@dannylee12](https://github.com/dannylee12)]
372 1. [Drivy](https://www.drivy.com) [[@AntoineAugusti](https://github.com/AntoineAugusti)]
373 1. [Easy Taxi](http://www.easytaxi.com/) [[@caique-lima](https://github.com/caique-lima) & [@diraol](https://github.com/diraol)]
374 1. [EllisDon](http://www.ellisdon.com/) [[@d2kalra](https://github.com/d2kalra) & [@zbasama](https://github.com/zbasama)]
375 1. [Endesa](https://www.endesa.com) [[@drexpp](https://github.com/drexpp)]
376 1. [Enigma](https://www.enigma.com) [[@hydrosquall](https://github.com/hydrosquall)]
377 1. [Datamaran](https://www.datamaran.com) [[@valexharo](https://github.com/valexharo)]
378 1. [Etsy](https://www.etsy.com) [[@mchalek](https://github.com/mchalek)]
379 1. [evo.company](https://evo.company/) [[@orhideous](https://github.com/orhideous)]
380 1. [Experity (formerly DocuTAP)](https://www.experityhealth.com/) [[@cloneluke](https://github.com/cloneluke) & [@tobyjoliver](https://github.com/tobyjoliver)]
381 1. [Fathom Health](https://www.fathomhealth.co/)
382 1. [Firestone Inventing](https://www.hsmap.com/) [[@zihengCat](https://github.com/zihengCat)]
383 1. [Flipp](https://www.flipp.com) [[@sethwilsonwishabi](https://github.com/sethwilsonwishabi)]
384 1. [Format](https://www.format.com) [[@format](https://github.com/4ormat) & [@jasonicarter](https://github.com/jasonicarter)]
385 1. [FreeNow](https://free-now.com) [[@freenowtech](https://github.com/freenowtech)]
386 1. [FreshBooks](https://github.com/freshbooks) [[@DinoCow](https://github.com/DinoCow)]
387 1. [Freshworks](https://www.freshworks.com/) [[@shaikshakeel](https://github.com/shaikshakeel)]
388 1. [FullContact](https://github.com/fullcontact)
389 1. [Fuller, Inc.](https://en.fuller-inc.com/) [[@wutali](https://github.com/wutali) & [@sh-tech](https://github.com/sh-tech)]
390 1. [Fundera](https://fundera.com) [[@andyxhadji](https://github.com/andyxhadji)]
391 1. [G Adventures](https://gadventures.com) [[@chchtv11](https://github.com/chchtv11), [@tgumbley](https://github.com/tgumbley), [@tomwross](https://github.com/tomwross)]
392 1. [GameWisp](https://gamewisp.com) [[@tjbiii](https://github.com/TJBIII) & [@theryanwalls](https://github.com/theryanwalls)]
393 1. [Geekie](https://www.geekie.com.br) [[@wolney](https://github.com/wolney)]
394 1. [GeneCards](https://www.genecards.org) [[@oferze](https://github.com/oferze)]
395 1. [Gentner Lab](http://github.com/gentnerlab) [[@neuromusic](https://github.com/neuromusic)]
396 1. [Get Simpl](https://getsimpl.com/) [[@rootcss](https://github.com/rootcss)]
397 1. [GitLab](https://about.gitlab.com/) [[@tayloramurphy](https://gitlab.com/tayloramurphy) & [@m_walker](https://gitlab.com/m_walker)]
398 1. [Glassdoor](https://github.com/Glassdoor) [[@syvineckruyk](https://github.com/syvineckruyk) & [@sid88in](https://github.com/sid88in)]
399 1. [Global Fashion Group](http://global-fashion-group.com) [[@GFG](https://github.com/GFG)]
400 1. [GoDataDriven](https://godatadriven.com/) [[@BasPH](https://github.com/basph), [@danielvdende](https://github.com/danielvdende), [@ffinfo](https://github.com/ffinfo), [@Fokko](https://github.com/Fokko), [@gglanzani](https://github.com/gglanzani), [@hgrif](https://github.com/hgrif), [@jrderuiter](https://github.com/jrderuiter), [@NielsZeilemaker](https://github.com/NielsZeilemaker)]
401 1. [Gojek](https://gojek.com/) [[@gojek](https://github.com/gojek)]
402 1. [GovTech GDS](https://gds-gov.tech) [[@chrissng](https://github.com/chrissng) & [@datagovsg](https://github.com/datagovsg)]
403 1. [Grab](https://www.grab.com/sg/) [[@calvintran](https://github.com/canhtran)]
404 1. [Gradeup](https://gradeup.co) [[@gradeup](https://github.com/gradeup)]
405 1. [Grand Rounds](https://www.grandrounds.com/) [[@richddr](https://github.com/richddr), [@timz1290](https://github.com/timz1290), [@wenever](https://github.com/@wenever), & [@runongirlrunon](https://github.com/runongirlrunon)]
406 1. [Greytip](https://www.greytip.com) [[@greytip](https://github.com/greytip)]
407 1. [Groupalia](http://es.groupalia.com) [[@jesusfcr](https://github.com/jesusfcr)]
408 1. [Groupon](https://groupon.com) [[@stevencasey](https://github.com/stevencasey)]
409 1. [Growbots](https://www.growbots.com/)[[@exploy](https://github.com/exploy)]
410 1. [GrowthSimple](https://growthsimple.ai/)
411 1. [GSN Games](https://www.gsngames.com)
412 1. [Gusto](https://gusto.com) [[@frankhsu](https://github.com/frankhsu)]
413 1. [Handshake](https://joinhandshake.com/) [[@mhickman](https://github.com/mhickman)]
414 1. [Handy](http://www.handy.com/careers/73115?gh_jid=73115&gh_src=o5qcxn) [[@marcintustin](https://github.com/marcintustin) / [@mtustin-handy](https://github.com/mtustin-handy)]
415 1. [happn](https://www.happn.com) [[@pcorbel](https://github.com/pcorbel)]
416 1. [HAVAN](https://www.havan.com.br) [[@botbiz](https://github.com/botbiz)]
417 1. [HBC Digital](http://tech.hbc.com) [[@tmccartan](https://github.com/tmccartan) & [@dmateusp](https://github.com/dmateusp)]
418 1. [HBO](http://www.hbo.com/)[[@yiwang](https://github.com/yiwang)]
419 1. [Healthjump](http://www.healthjump.com/) [[@miscbits](https://github.com/miscbits)]
420 1. [HelloFresh](https://www.hellofresh.com) [[@tammymendt](https://github.com/tammymendt) & [@davidsbatista](https://github.com/davidsbatista) & [@iuriinedostup](https://github.com/iuriinedostup)]
421 1. [Hipages](https://www.hipages.com.au/) [[@arihantsurana](https://github.com/arihantsurana)]
422 1. [Holimetrix](http://holimetrix.com/) [[@thibault-ketterer](https://github.com/thibault-ketterer)]
423 1. [HomeToGo](https://www.hometogo.com/) [[@HomeToGo](https://github.com/hometogo), [@AurimasGr](https://github.com/AurimasGr)]
424 1. [Hootsuite](https://github.com/hootsuite)
425 1. [Hostnfly](https://www.hostnfly.com/) [[@CyrilLeMat](https://github.com/CyrilLeMat) & [@pierrechopin](https://github.com/pierrechopin) & [@alexisrosuel](https://github.com/alexisrosuel)]
426 1. [HotelQuickly](https://github.com/HotelQuickly) [[@zinuzoid](https://github.com/zinuzoid)]
427 1. [Huq Industries](https://huq.io) [[@huqindustries](https://github.com/huq-industries), [@alepuccetti](https://github.com/alepuccetti), [@turbomerl](https://github.com/turbomerl)]
428 1. [Iflix](https://piay.iflix.com) [[@ChaturvediSulabh](https://github.com/ChaturvediSulabh)]
429 1. [IFTTT](https://www.ifttt.com/) [[@apurvajoshi](https://github.com/apurvajoshi)]
430 1. [iHeartRadio](http://www.iheart.com/)[[@yiwang](https://github.com/yiwang)]
431 1. [imgix](https://www.imgix.com/) [[@dclubb](https://github.com/dclubb)]
432 1. [ING](http://www.ing.com/)
433 1. [Instacart 🥕](http://www.instacart.com/) [[@arp1t](https://github.com/arp1t) & [@code-sauce](https://github.com/code-sauce) & [@jasonlew](https://github.com/jasonlew) & [@j4p3](https://github.com/j4p3) & [@lubert](https://github.com/lubert) & [@mmontagna](https://github.com/mmontagna) & [@RyanAD](https://github.com/RyanAD) &[@zzadeh](https://github.com/zzadeh)]
434 1. [Intercom](http://www.intercom.com/) [[@fox](https://github.com/fox) & [@paulvic](https://github.com/paulvic)]
435 1. [Interia](http://www.interia.pl)
436 1. [Investorise](https://investorise.com/) [[@svenvarkel](https://github.com/svenvarkel)]
437 1. [iS2.co](https://www.is2.co) [[@iS2co](https://github.com/iS2co)]
438 1. [Jampp](https://github.com/jampp)
439 1. [Jeitto](https://www.jeitto.com.br) [[@BrennerPablo](https://github.com/BrennerPablo) & [@ds-mauri](https://github.com/ds-mauri)]
440 1. [Jetlore](http://www.jetlore.com/) [[@bderose](https://github.com/bderose)]
441 1. [JobTeaser](https://www.jobteaser.com) [[@stefani75](https://github.com/stefani75) & [@knil-sama](https://github.com/knil-sama)]
442 1. [JULO](https://www.julo.co.id/) [[@sepam](https://github.com/sepam) & [@tenapril](https://github.com/tenapril) & [@verzqy](https://github.com/verzqy)]
443 1. [Kalibrr](https://www.kalibrr.com/) [[@charlesverdad](https://github.com/charlesverdad)]
444 1. [Kargo](https://kargo.com) [[@chaithra-yenikapati](https://github.com/chaithra-yenikapati), [@akarsh3007](https://github.com/akarsh3007) & [@dineshanchan](https://github.com/dineshanchan)]
445 1. [Karmic](https://karmiclabs.com) [[@hyw](https://github.com/hyw)]
446 1. [King](https://king.com) [[@nathadfield](https://github.com/nathadfield)]
447 1. [King Abdullah Petroleum Studies and Research Center(KAPSARC)](https://github.com/kapsarc) [[@saianupkumarp](https://github.com/saianupkumarp)]
448 1. [Kiwi.com](https://kiwi.com/) [[@underyx](https://github.com/underyx)]
449 1. [Kogan.com](https://github.com/kogan) [[@geeknam](https://github.com/geeknam)]
450 1. [Korbit](https://www.korbit.co.kr/) [[@jensenity](https://github.com/jensenity)]
451 1. [KPN B.V.](https://www.kpn.com/) [[@biyanisuraj](https://github.com/biyanisuraj) & [@gmic](https://github.com/gmic)]
452 1. [Kroton Educacional](http://www.kroton.com.br/)
453 1. [Lemann Foundation](http://fundacaolemann.org.br) [[@fernandosjp](https://github.com/fernandosjp)]
454 1. [LeMans Corporation](https://www.parts-unlimited.com/) [[@alloydwhitlock](https://github.com/alloydwhitlock)] & [[@tinyrye](https://github.com/tinyrye)]
455 1. [LendUp](https://www.lendup.com/) [[@lendup](https://github.com/lendup)]
456 1. [LetsBonus](http://www.letsbonus.com) [[@jesusfcr](https://github.com/jesusfcr) & [@OpringaoDoTurno](https://github.com/OpringaoDoTurno)]
457 1. [Liberty Global](https://www.libertyglobal.com/) [[@LibertyGlobal](https://github.com/LibertyGlobal/)]
458 1. [liligo](http://liligo.com/) [[@tromika](https://github.com/tromika)]
459 1. [LingoChamp](http://www.liulishuo.com/) [[@haitaoyao](https://github.com/haitaoyao)]
460 1. [Logitravel Group](https://www.logitravel.com/)
461 1. [Los Angeles Times](http://www.latimes.com/) [[@standyro](https://github.com/standyro)]
462 1. [LokSuvidha](http://loksuvidha.com/) [[@saurabhwahile](https://github.com/saurabhwahile)]
463 1. [Lucid](http://luc.id) [[@jbrownlucid](https://github.com/jbrownlucid) & [@kkourtchikov](https://github.com/kkourtchikov)]
464 1. [Lumos Labs](https://www.lumosity.com/) [[@rfroetscher](https://github.com/rfroetscher/) & [@zzztimbo](https://github.com/zzztimbo/)]
465 1. [Lyft](https://www.lyft.com/) [[@feng-tao](https://github.com/feng-tao), [@milton0825](https://github.com/milton0825), [@astahlman](https://github.com/astahlman),
466 [@youngyjd](https://github.com/youngyjd), [@ArgentFalcon](https://github.com/ArgentFalcon)]
467 1. [M4U](https://www.m4u.com.br/) [[@msantino](https://github.com/msantino)]
468 1. [Madrone](http://madroneco.com/) [[@mbreining](https://github.com/mbreining) & [@scotthb](https://github.com/scotthb)]
469 1. [Markovian](https://markovian.com/) [[@al-xv](https://github.com/al-xv), [@skogsbaeck](https://github.com/skogsbaeck), [@waltherg](https://github.com/waltherg)]
470 1. [Mercadoni](https://www.mercadoni.com.co) [[@demorenoc](https://github.com/demorenoc)]
471 1. [Mercari](http://www.mercari.com/) [[@yu-iskw](https://github.com/yu-iskw)]
472 1. [MFG Labs](https://github.com/MfgLabs)
473 1. [MiNODES](https://www.minodes.com) [[@dice89](https://github.com/dice89), [@diazcelsa](https://github.com/diazcelsa)]
474 1. [Modernizing Medicine](https://www.modmed.com/)[[@kehv1n](https://github.com/kehv1n), [@dalupus](https://github.com/dalupus)]
475 1. [Movember](https://movember.com)
476 1. [Multiply](https://www.multiply.com) [[@nrhvyc](https://github.com/nrhvyc)]
477 1. [National Bank of Canada](https://nbc.ca) [[@brilhana](https://github.com/brilhana)]
478 1. [Neoway](https://www.neoway.com.br/) [[@neowaylabs](https://github.com/orgs/NeowayLabs/people)]
479 1. [Nerdwallet](https://www.nerdwallet.com)
480 1. [New Relic](https://www.newrelic.com) [[@marcweil](https://github.com/marcweil)]
481 1. [Newzoo](https://www.newzoo.com) [[@newzoo-nexus](https://github.com/newzoo-nexus)]
482 1. [NEXT Trucking](https://www.nexttrucking.com/) [[@earthmancash2](https://github.com/earthmancash2), [@kppullin](https://github.com/kppullin)]
483 1. [Nextdoor](https://nextdoor.com) [[@SivaPandeti](https://github.com/SivaPandeti), [@zshapiro](https://github.com/zshapiro) & [@jthomas123](https://github.com/jthomas123)]
484 1. [Nine](https://nine.com.au) [[@TheZepto](https://github.com/TheZepto)]
485 1. [OdysseyPrime](https://www.goprime.io/) [[@davideberdin](https://github.com/davideberdin)]
486 1. [OfferUp](https://offerupnow.com)
487 1. [OneFineStay](https://www.onefinestay.com) [[@slangwald](https://github.com/slangwald)]
488 1. [Open Knowledge International](https://okfn.org) [@vitorbaptista](https://github.com/vitorbaptista)
489 1. [Optum](https://www.optum.com/) - [UnitedHealthGroup](https://www.unitedhealthgroup.com/) [[@fhoda](https://github.com/fhoda), [@ianstanton](https://github.com/ianstanton), [@nilaybhatt](https://github.com/NilayBhatt),[@hiteshrd](https://github.com/hiteshrd)]
490 1. [OrangeBank](https://www.orangebank.fr/) [[@HamzaBoukraa](https://github.com/HamzaBoukraa)]
491 1. [Outcome Health](https://www.outcomehealth.com/) [[@mikethoun](https://github.com/mikethoun), [@rolandotribo](https://github.com/rolandotribo)]
492 1. [Overstock](https://www.github.com/overstock) [[@mhousley](https://github.com/mhousley) & [@mct0006](https://github.com/mct0006)]
493 1. [OVH](https://www.ovh.com) [[@ncrocfer](https://github.com/ncrocfer) & [@anthonyolea](https://github.com/anthonyolea)]
494 1. [Pagar.me](https://pagar.me/) [[@pagarme](https://github.com/pagarme)]
495 1. [Palo Alto Networks](https://www.paloaltonetworks.com/) [[@PaloAltoNetworks](https://github.com/PaloAltoNetworks)]
496 1. [Pandora Media](https://www.pandora.com/) [[@Acehaidrey](https://github.com/Acehaidrey) & [@wolfier](https://github.com/wolfier)]
497 1. [Paraná Banco](https://paranabanco.com.br/) [[@lopesdiego12](https://github.com/lopesdiego12/)]
498 1. [PayFit](https://payfit.com) [[@pcorbel](https://github.com/pcorbel)]
499 1. [PAYMILL](https://www.paymill.com/) [[@paymill](https://github.com/paymill) & [@matthiashuschle](https://github.com/matthiashuschle)]
500 1. [PayPal](https://www.paypal.com/) [[@r39132](https://github.com/r39132) & [@jhsenjaliya](https://github.com/jhsenjaliya)]
501 1. [Pecan](https://www.pecan.ai) [[@ohadmata](https://github.com/ohadmata)]
502 1. [Pernod-Ricard](https://www.pernod-ricard.com/) [[@romain-nio](https://github.com/romain-nio)]
503 1. [Plaid](https://www.plaid.com/) [[@plaid](https://github.com/plaid), [@AustinBGibbons](https://github.com/AustinBGibbons) & [@jeeyoungk](https://github.com/jeeyoungk)]
504 1. [Playbuzz](https://www.playbuzz.com/) [[@clintonboys](https://github.com/clintonboys) & [@dbn](https://github.com/dbn)]
505 1. [PMC](https://pmc.com/) [[@andrewm4894](https://github.com/andrewm4894)]
506 1. [Polidea](https://www.polidea.com/) [[@potiuk](https://github.com/potiuk), [@mschickensoup](https://github.com/mschickensoup), [@mik-laj](https://github.com/mik-laj), [@turbaszek](https://github.com/turbaszek), [@michalslowikowski00](https://github.com/michalslowikowski00), [@olchas](https://github.com/olchas)]
507 1. [Poshmark](https://www.poshmark.com)
508 1. [Postmates](http://www.postmates.com) [[@syeoryn](https://github.com/syeoryn)]
509 1. [Premise](http://www.premise.com) [[@jmccallum-premise](https://github.com/jmccallum-premise)]
510 1. [Promofarma](https://www.promofarma.com/) [[@JavierLopezT](https://github.com/JavierLopezT)]
511 1. [Pronto Tools](http://www.prontotools.io/) [[@zkan](https://github.com/zkan) & [@mesodiar](https://github.com/mesodiar)]
512 1. [proton.ai](https://proton.ai/) [[@prmsolutions](https://github.com/prmsolutions)]
513 1. [PubNub](https://pubnub.com) [[@jzucker2](https://github.com/jzucker2)]
514 1. [PXYData](https://www.pxydata.com) [[@patchus](http://github.com/patchus)]
515 1. [Qplum](https://qplum.co) [[@manti](https://github.com/manti)]
516 1. [Quantopian](https://www.quantopian.com/) [[@eronarn](http://github.com/eronarn)]
517 1. [Qubole](https://qubole.com) [[@msumit](https://github.com/msumit)]
518 1. [QuintoAndar](https://quintoandar.com.br) [[@quintoandar](https://github.com/quintoandar)]
519 1. [Quizlet](https://quizlet.com) [[@quizlet](https://github.com/quizlet)]
520 1. [Quora](https://www.quora.com/)
521 1. [Qoala](https://www.qoala.id) [[@gnomeria](https://github.com/gnomeria), [@qoala-engineering](https://github.com/qoala-engineering)]
522 1. [Rakuten](https://www.rakuten.com)
523 1. [Raízen](https://www.raizen.com.br/) [[@rudlac](https://github.com/rudlac) & [@guifneves](https://github.com/guifneves)]
524 1. [Rapido](https://rapido.bike/) [[@ChethanUK](https://github.com/ChethanUK)]
525 1. [REA Group](https://www.rea-group.com/)
526 1. [Reddit](https://www.reddit.com/) [[@reddit](https://github.com/reddit/)]
527 1. [Reverb](https://reverb.com)[[@reverbdotcom](https://github.com/reverbdotcom)]
528 1. [Revolut](https://www.revolut.com/) [[@sztanko](https://github.com/sztanko) & [@nautilus28](https://github.com/nautilus28)]
529 1. [Robinhood](https://robinhood.com) [[@vineet-rh](https://github.com/vineet-rh)]
530 1. [RushOwl](https://www.rushowl.sg) [[@songyanho](https://github.com/songyanho)]
531 1. [Scaleway](https://scaleway.com) [[@kdeldycke](https://github.com/kdeldycke)]
532 1. [Seasoned](https://www.seasoned.co/) [[@joshuacano](https://github.com/joshuacano)] & [[@mmyers](https://github.com/mmyers5)] & [[@tjward](https://github.com/tjward)]
533 1. [Secret Escapes](https://www.secretescapes.com) [[@secretescapes](https://github.com/secretescapes)]
534 1. [Semantics3](https://www.semantics3.com) [[@abishekk92](https://github.com/abishekk92)]
535 1. [Sense360](https://github.com/Sense360) [[@kamilmroczek](https://github.com/KamilMroczek)]
536 1. [Sentry.io](https://www.sentry.io) [[@tiopi](https://github.com/tiopi)]
537 1. [ShopBack](https://www.shopback.sg/) [[@shopback](https://github.com/shopback)]
538 1. [Shopkick](https://shopkick.com/) [[@shopkick](https://github.com/shopkick)]
539 1. [Sidecar](https://hello.getsidecar.com/) [[@getsidecar](https://github.com/getsidecar)]
540 1. [SimilarWeb](https://www.similarweb.com/) [[@similarweb](https://github.com/similarweb)]
541 1. [Simply Business](https://www.simplybusiness.com/) [[@simplybusiness](https://github.com/simplybusiness)]
542 1. [Skyscanner](https://www.skyscanner.net/) [[@skyscanner](https://github.com/Skyscanner)]
543 1. [SmartNews](https://www.smartnews.com/) [[@takus](https://github.com/takus)]
544 1. [SnapTravel](https://www.snaptravel.com/)
545 1. [SocialCops](https://www.socialcops.com/) [[@vinayak-mehta](https://github.com/vinayak-mehta) & [@sharky93](https://github.com/sharky93)]
546 1. [Société générale](https://www.societegenerale.fr/) [[@medmrgh](https://github.com/medmrgh) & [@s83](https://github.com/s83)]
547 1. [Spotahome](https://www.spotahome.com/) [[@spotahome](https://github.com/spotahome)]
548 1. [SpotHero](https://github.com/spothero) [[@benjigoldberg](https://github.com/benjigoldberg)]
549 1. [Spotify](https://github.com/spotify) [[@znichols](https://github.com/znichols)]
550 1. [Square](https://squareup.com/)
551 1. [Stackspace](https://beta.stackspace.io/)
552 1. [StoneCo](https://www.stone.co) [[@lgwacker](https://github.com/lgwacker)]
553 1. [Strava](https://strava.com) [[@strava](https://github.com/strava), [@dhuang](https://github.com/dhuang) & [@liamstewart](https://github.com/liamstewart)]
554 1. [Stripe](https://stripe.com) [[@jbalogh](https://github.com/jbalogh)]
555 1. [Strongmind](https://www.strongmind.com) [[@tomchapin](https://github.com/tomchapin) & [@wongstein](https://github.com/wongstein)]
556 1. [Surfline](https://www.surfline.com/) [[@jawang35](https://github.com/jawang35)]
557 1. [T2 Systems](http://t2systems.com) [[@unclaimedpants](https://github.com/unclaimedpants)]
558 1. [Tails.com](https://tails.com/) [[@alanmcruickshank](https://github.com/alanmcruickshank)]
559 1. [TEK](https://www.tek.fi/en) [[@telac](https://github.com/telac)]
560 1. [Telefonica Innovation Alpha](https://www.alpha.company/) [[@Alpha-Health](https://github.com/Alpha-health)]
561 1. [Telia Company](https://www.teliacompany.com/en)
562 1. [Ternary Data](https://ternarydata.com/) [[@mhousley](https://github.com/mhousley), [@JoeReis](https://github.com/JoeReis)]
563 1. [Tesla](https://www.tesla.com/) [[@thoralf-gutierrez](https://github.com/thoralf-gutierrez)]
564 1. [The Home Depot](https://www.homedepot.com/)[[@apekshithr](https://github.com/apekshithr)]
565 1. [THE ICONIC](https://www.theiconic.com.au/) [[@revathijay](https://github.com/revathijay)] [[@ilikedata](https://github.com/ilikedata)]
566 1. [Thinking Machines](https://thinkingmachin.es) [[@marksteve](https://github.com/marksteve)]
567 1. [Thinknear](https://www.thinknear.com/) [[@d3cay1](https://github.com/d3cay1), [@ccson](https://github.com/ccson), & [@ababian](https://github.com/ababian)]
568 1. [ThoughtWorks](https://www.thoughtworks.com/) [[@sann3](https://github.com/sann3)]
569 1. [Thumbtack](https://www.thumbtack.com/) [[@natekupp](https://github.com/natekupp)]
570 1. [Tictail](https://tictail.com/)
571 1. [Tile](https://tile.com/) [[@ranjanmanish](https://github.com/ranjanmanish)]
572 1. [Tinder](https://tinder.com/) [[@kbendick](https://github.com/kbendick)]
573 1. [Tink](https://tink.com/) [[@tink-ab](https://github.com/tink-ab)]
574 1. [TokenAnalyst](https://github.com/tokenanalyst) [[@simonohanlon101](https://github.com/simonohanlon101), [@ankitchiplunkar](https://github.com/ankitchiplunkar), [@sidshekhar](https://github.com/sidshekhar), [@sp6pe](https://github.com/sp6pe)]
575 1. [Tokopedia](https://www.tokopedia.com/) [[@topedmaria](https://github.com/topedmaria)]
576 1. [Trocafone](https://www.trocafone.com/) [[@idontdomath](https://github.com/idontdomath) & [@gseva](https://github.com/gseva) & [@ordonezf](https://github.com/ordonezf) & [@PalmaLeandro](https://github.com/PalmaLeandro)]
577 1. [TruFactor](https://trufactor.io/) [[@gholmes](https://github.com/gholmes) & [@angadsingh](https://github.com/angadsingh/)]
578 1. [Twine Labs](https://www.twinelabs.com/) [[@ivorpeles](https://github.com/ivorpeles)]
579 1. [Twitter](https://www.twitter.com/) [[@aoen](https://github.com/aoen)]
580 1. [Ubisoft](https://www.ubisoft.com/) [[@Walkoss](https://github.com/Walkoss)]
581 1. [Udacity](https://www.udacity.com/) [[@dandikunited](https://github.com/DandikUnited), [@simon-uc](https://github.com/simon-uc)]
582 1. [United Airlines](https://www.united.com/) [[@ilopezfr](https://github.com/ilopezfr)]
583 1. [Upsight](https://www.upsight.com)
584 1. [VeeR VR](https://veer.tv) [[@pishilong](https://github.com/pishilong)]
585 1. [Veikkaus](https://www.veikkaus.fi) [[@hixus](https://github.com/hixus)]
586 1. [Vente-Exclusive.com](http://www.vente-exclusive.com/) [[@alexvanboxel](https://github.com/alexvanboxel)]
587 1. [Vevo](https://www.vevo.com/) [[@csetiawan](https://github.com/csetiawan) & [@jerrygillespie](https://github.com/jerrygillespie)]
588 1. [Vidio](https://www.vidio.com/)
589 1. [Ville de Montréal](http://ville.montreal.qc.ca/) [[@VilledeMontreal](https://github.com/VilledeMontreal/)]
590 1. [Vnomics](https://github.com/vnomics) [[@lpalum](https://github.com/lpalum)]
591 1. [Walmart Labs](https://www.walmartlabs.com) [[@bharathpalaksha](https://github.com/bharathpalaksha), [@vipul007ravi](https://github.com/vipul007ravi)]
592 1. [Waze](https://www.waze.com) [[@waze](https://github.com/wazeHQ)]
593 1. [WePay](http://www.wepay.com) [[@criccomini](https://github.com/criccomini) & [@mtagle](https://github.com/mtagle)]
594 1. [WeTransfer](https://github.com/WeTransfer) [[@coredipper](https://github.com/coredipper) & [@higee](https://github.com/higee) & [@azclub](https://github.com/azclub)]
595 1. [Whistle Labs](http://www.whistle.com) [[@ananya77041](https://github.com/ananya77041)]
596 1. [Wildlifestudios](https://wildlifestudios.com/)
597 1. [WiseBanyan](https://wisebanyan.com/)
598 1. [Wooga](https://www.wooga.com/)
599 1. [WorldRemit](https://www.worldremit.com/) [[@boittega](https://github.com/boittega)]
600 1. [Wrike](https://www.wrike.com) [[@eliseealex](https://github.com/eliseealex) & [teoretic6](https://github.com/Teoretic6)]
601 1. [Xero](https://www.xero.com/) [[@yan9yu](https://github.com/yan9yu) & [adamantnz](https://github.com/adamantnz/)]
602 1. [Xoom](https://www.xoom.com/)
603 1. [Yahoo!](https://www.yahoo.com/)
604 1. [Yieldr](https://www.yieldr.com/) [[@ggeorgiadis](https://github.com/ggeorgiadis)]
605 1. [Zapier](https://www.zapier.com) [[@drknexus](https://github.com/drknexus) & [@statwonk](https://github.com/statwonk)]
606 1. [Zego](https://www.zego.com/) [[@ruimffl](https://github.com/ruimffl), [@james-welly](https://github.com/james-welly), [@ken-payne](https://github.com/ken-payne)]
607 1. [Zendesk](https://www.github.com/zendesk)
608 1. [Zenly](https://zen.ly) [[@cerisier](https://github.com/cerisier) & [@jbdalido](https://github.com/jbdalido)]
609 1. [Zerodha](https://zerodha.com/) [[@johnnybravo-xyz](https://github.com/johnnybravo-xyz)]
610 1. [Zymergen](https://www.zymergen.com/)
611 1. [Zynga](https://www.zynga.com)
612
613 ## Who Maintains Apache Airflow?
614
615 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
616 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
617 are responsible for reviewing and merging PRs as well as steering conversation around new feature requests.
618 If you would like to become a maintainer, please review the Apache Airflow
619 [committer requirements](https://cwiki.apache.org/confluence/display/AIRFLOW/Committers).
620
621 ## Can I use the Apache Airflow logo in my presentation?
622
623 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up-to-date logos are found in [this repo](/docs/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
624
625 ## Links
626
627 - [Documentation](https://airflow.apache.org/docs/stable/)
628 - [Chat](https://apache-airflow-slack.herokuapp.com/)
629 - [More](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Links)
630
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
5e4b801b32eeda79b59ff3cc8a3a503f57f5a509
|
Support airflowignore for plugins
Hello,
Airflow has a mechanism, based on a .airflowignore file, to ignore files before they are automatically loaded.
A .airflowignore file specifies the directories or files in DAG_FOLDER that Airflow should intentionally ignore.
> For example, you can prepare a .airflowignore file with contents
> ```
> project_a
> tenant_[\d]
> ```
> Then files like project_a_dag_1.py, TESTING_project_a.py, tenant_1.py, project_a/dag_1.py, and tenant_1/dag_1.py in your DAG_FOLDER would be ignored (If a directory’s name matches any of the patterns, this directory and all its subfolders would not be scanned by Airflow at all. This improves efficiency of DAG finding).
More information: https://airflow.readthedocs.io/en/latest/concepts.html?highlight=airflowignore
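To make the matching concrete, here is a minimal, hypothetical sketch of how such `.airflowignore` regex patterns can be applied while walking a folder. This is not Airflow's actual implementation (it only handles a single top-level ignore file, and the function name and comment handling are assumptions for the example):

```python
import os
import re


def iter_non_ignored_files(base_dir, ignore_file_name=".airflowignore"):
    """Yield paths under base_dir whose relative path matches no ignore pattern."""
    patterns = []
    ignore_path = os.path.join(base_dir, ignore_file_name)
    if os.path.isfile(ignore_path):
        with open(ignore_path) as f:
            # One regex per non-empty, non-comment line
            patterns = [re.compile(line.strip()) for line in f
                        if line.strip() and not line.lstrip().startswith("#")]

    for root, _, files in os.walk(base_dir, followlinks=True):
        for name in files:
            if name == ignore_file_name:
                continue
            rel_path = os.path.relpath(os.path.join(root, name), base_dir)
            # Skip anything whose relative path matches an ignore pattern
            if any(p.search(rel_path) for p in patterns):
                continue
            yield os.path.join(root, name)
```

With the example patterns above (`project_a` and `tenant_[\d]`), files such as `project_a/dag_1.py` or `tenant_1.py` would be skipped before Airflow ever imports them.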
It would be helpful to make a similar feature available for plugins. This would improve the efficiency of plugin discovery.
If anyone is interested in this, I am willing to provide all the necessary tips and information.
Are you wondering how to start contributing to this project? Start by reading our [contributor guide](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst)
Cheers
|
@mik-laj
I am interested in this task. Would you mind if I do it?
I plan to add the ignore handling code to:
https://github.com/apache/airflow/blob/c703ce20c836e9f797203ed233fe6e3d51d8cbd4/airflow/plugins_manager.py#L159-L162
@j-y-matsubara This seems like a good place. I assigned you to this ticket. :-)
|
2020-06-26T10:56:50Z
|
<patch>
diff --git a/airflow/plugins_manager.py b/airflow/plugins_manager.py
--- a/airflow/plugins_manager.py
+++ b/airflow/plugins_manager.py
@@ -29,7 +29,8 @@
import pkg_resources
-from airflow import settings
+from airflow import settings # type: ignore
+from airflow.utils.file import find_path_from_directory # type: ignore
log = logging.getLogger(__name__)
@@ -158,40 +159,37 @@ def load_entrypoint_plugins():
def load_plugins_from_plugin_directory():
"""
- Load and register Airflow Plugins from plugins directory.
+ Load and register Airflow Plugins from plugins directory
"""
global import_errors # pylint: disable=global-statement
global plugins # pylint: disable=global-statement
log.debug("Loading plugins from directory: %s", settings.PLUGINS_FOLDER)
- # Crawl through the plugins folder to find AirflowPlugin derivatives
- for root, _, files in os.walk(settings.PLUGINS_FOLDER, followlinks=True): # noqa # pylint: disable=too-many-nested-blocks
- for f in files:
- filepath = os.path.join(root, f)
- try:
- if not os.path.isfile(filepath):
- continue
- mod_name, file_ext = os.path.splitext(
- os.path.split(filepath)[-1])
- if file_ext != '.py':
- continue
-
- log.debug('Importing plugin module %s', filepath)
-
- loader = importlib.machinery.SourceFileLoader(mod_name, filepath)
- spec = importlib.util.spec_from_loader(mod_name, loader)
- mod = importlib.util.module_from_spec(spec)
- sys.modules[spec.name] = mod
- loader.exec_module(mod)
- for mod_attr_value in list(mod.__dict__.values()):
- if is_valid_plugin(mod_attr_value):
- plugin_instance = mod_attr_value()
- plugins.append(plugin_instance)
- except Exception as e: # pylint: disable=broad-except
- log.exception(e)
- path = filepath or str(f)
- log.error('Failed to import plugin %s', path)
- import_errors[path] = str(e)
+ for file_path in find_path_from_directory(
+ settings.PLUGINS_FOLDER, ".airflowignore"):
+
+ if not os.path.isfile(file_path):
+ continue
+ mod_name, file_ext = os.path.splitext(os.path.split(file_path)[-1])
+ if file_ext != '.py':
+ continue
+
+ try:
+ loader = importlib.machinery.SourceFileLoader(mod_name, file_path)
+ spec = importlib.util.spec_from_loader(mod_name, loader)
+ mod = importlib.util.module_from_spec(spec)
+ sys.modules[spec.name] = mod
+ loader.exec_module(mod)
+ log.debug('Importing plugin module %s', file_path)
+
+ for mod_attr_value in (m for m in mod.__dict__.values() if is_valid_plugin(m)):
+ plugin_instance = mod_attr_value()
+ plugins.append(plugin_instance)
+
+ except Exception as e: # pylint: disable=broad-except
+ log.exception(e)
+ log.error('Failed to import plugin %s', file_path)
+ import_errors[file_path] = str(e)
# pylint: disable=protected-access
diff --git a/airflow/utils/file.py b/airflow/utils/file.py
--- a/airflow/utils/file.py
+++ b/airflow/utils/file.py
@@ -20,9 +20,9 @@
import os
import re
import zipfile
-from typing import Dict, List, Optional, Pattern
+from typing import Dict, Generator, List, Optional, Pattern
-from airflow.configuration import conf
+from airflow.configuration import conf # type: ignore
log = logging.getLogger(__name__)
@@ -90,6 +90,47 @@ def open_maybe_zipped(fileloc, mode='r'):
return io.open(fileloc, mode=mode)
+def find_path_from_directory(
+ base_dir_path: str,
+ ignore_file_name: str) -> Generator[str, None, None]:
+ """
+ Search the file and return the path of the file that should not be ignored.
+ :param base_dir_path: the base path to be searched for.
+ :param ignore_file_name: the file name in which specifies a regular expression pattern is written.
+
+ :return : file path not to be ignored.
+ """
+
+ patterns_by_dir: Dict[str, List[Pattern[str]]] = {}
+
+ for root, dirs, files in os.walk(str(base_dir_path), followlinks=True):
+ patterns: List[Pattern[str]] = patterns_by_dir.get(root, [])
+
+ ignore_file_path = os.path.join(root, ignore_file_name)
+ if os.path.isfile(ignore_file_path):
+ with open(ignore_file_path, 'r') as file:
+ lines_no_comments = [re.sub(r"\s*#.*", "", line) for line in file.read().split("\n")]
+ patterns += [re.compile(line) for line in lines_no_comments if line]
+ patterns = list(set(patterns))
+
+ dirs[:] = [
+ subdir
+ for subdir in dirs
+ if not any(p.search(
+ os.path.join(os.path.relpath(root, str(base_dir_path)), subdir)) for p in patterns)
+ ]
+
+ patterns_by_dir = {os.path.join(root, sd): patterns.copy() for sd in dirs}
+
+ for file in files: # type: ignore
+ if file == ignore_file_name:
+ continue
+ file_path = os.path.join(root, str(file))
+ if any(re.findall(p, file_path) for p in patterns):
+ continue
+ yield str(file_path)
+
+
def list_py_file_paths(directory: str,
safe_mode: bool = conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE', fallback=True),
include_examples: Optional[bool] = None):
@@ -116,59 +157,32 @@ def list_py_file_paths(directory: str,
elif os.path.isfile(directory):
return [directory]
elif os.path.isdir(directory):
- patterns_by_dir: Dict[str, List[Pattern[str]]] = {}
- for root, dirs, files in os.walk(directory, followlinks=True):
- patterns: List[Pattern[str]] = patterns_by_dir.get(root, [])
- ignore_file = os.path.join(root, '.airflowignore')
- if os.path.isfile(ignore_file):
- with open(ignore_file, 'r') as file:
- # If we have new patterns create a copy so we don't change
- # the previous list (which would affect other subdirs)
- lines_no_comments = [COMMENT_PATTERN.sub("", line) for line in file.read().split("\n")]
- patterns += [re.compile(line) for line in lines_no_comments if line]
-
- # If we can ignore any subdirs entirely we should - fewer paths
- # to walk is better. We have to modify the ``dirs`` array in
- # place for this to affect os.walk
- dirs[:] = [
- subdir
- for subdir in dirs
- if not any(p.search(os.path.join(root, subdir)) for p in patterns)
- ]
-
- # We want patterns defined in a parent folder's .airflowignore to
- # apply to subdirs too
- for subdir in dirs:
- patterns_by_dir[os.path.join(root, subdir)] = patterns.copy()
-
- find_dag_file_paths(file_paths, files, patterns, root, safe_mode)
+ find_dag_file_paths(directory, file_paths, safe_mode)
if include_examples:
- from airflow import example_dags
+ from airflow import example_dags # type: ignore
example_dag_folder = example_dags.__path__[0] # type: ignore
file_paths.extend(list_py_file_paths(example_dag_folder, safe_mode, False))
return file_paths
-def find_dag_file_paths(file_paths, files, patterns, root, safe_mode):
+def find_dag_file_paths(directory: str, file_paths: list, safe_mode: bool):
"""Finds file paths of all DAG files."""
- for f in files:
+
+ for file_path in find_path_from_directory(
+ directory, ".airflowignore"):
# noinspection PyBroadException
try:
- file_path = os.path.join(root, f)
if not os.path.isfile(file_path):
continue
_, file_ext = os.path.splitext(os.path.split(file_path)[-1])
if file_ext != '.py' and not zipfile.is_zipfile(file_path):
continue
- if any([re.findall(p, file_path) for p in patterns]):
- continue
-
if not might_contain_dag(file_path, safe_mode):
continue
file_paths.append(file_path)
except Exception: # pylint: disable=broad-except
- log.exception("Error while examining %s", f)
+ log.exception("Error while examining %s", file_path)
COMMENT_PATTERN = re.compile(r"\s*#.*")
</patch>
|
[]
|
[]
| |||
conan-io__conan-4991
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Opt-in skip of the broken symlinks check when packaging
We need to package directories that contain broken symlinks. We are trying to package a Yocto SDK and we didn't manage to fix or remove the broken links, because the SDK stops working. So we would need something like `CONAN_SKIP_BROKEN_SYMLINKS_CHECK` to skip the check in `manifest.py` line 39
</issue>
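For illustration, a minimal sketch of what the requested opt-out could look like. This is an assumption, not Conan's actual code: the helper name `_check_not_broken_symlink` is hypothetical and the environment variable is the one proposed in the issue; only `get_env` and `ConanException` are existing Conan utilities.

```python
import os

from conans.errors import ConanException
from conans.util.env_reader import get_env


def _check_not_broken_symlink(file_path):
    """Hypothetical helper: fail on broken symlinks unless the user opts out."""
    # Opt-in escape hatch proposed in this issue (assumed flag name)
    if get_env("CONAN_SKIP_BROKEN_SYMLINKS_CHECK", False):
        return
    # A broken symlink: the link exists but its target does not
    if os.path.islink(file_path) and not os.path.exists(file_path):
        raise ConanException("The file is a broken symlink, verify that "
                             "you are packaging the needed files: %s" % file_path)
```

A guard like this could be called for each file gathered while building the package manifest, so that setting `CONAN_SKIP_BROKEN_SYMLINKS_CHECK=1` lets broken links through instead of aborting the packaging step.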
<code>
[start of README.rst]
1 |Logo|
2
3 Conan
4 =====
5
6 A distributed, open-source (MIT), C/C++ package manager.
7
8 +------------------------+-------------------------+-------------------------+
9 | **master** | **develop** | **Coverage** |
10 +========================+=========================+=========================+
11 | |Build Status Master| | |Build Status Develop| | |Develop coverage| |
12 +------------------------+-------------------------+-------------------------+
13
14
15 - Homepage: https://conan.io/
16 - Docs: https://docs.conan.io/en/latest/
17 - Slack: https://cpplang.now.sh/ (#conan channel)
18 - Twitter: https://twitter.com/conan_io
19
20
21 Setup
22 =====
23
24 Please read https://docs.conan.io/en/latest/installation.html
25
26 From binaries
27 -------------
28
29 We have installers for `most platforms here <http://conan.io>`__ but you
30 can run **conan** from sources if you want.
31
32 From pip
33 --------
34
35 Conan is compatible with Python 2 and Python 3.
36
37 - Install pip following `pip docs`_.
38 - Install conan:
39
40 .. code-block:: bash
41
42 $ pip install conan
43
44 You can also use `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions:
45
46
47 .. code-block:: bash
48
49 $ pip install --index-url https://test.pypi.org/simple/ conan
50
51
52 From Homebrew (OS X)
53 --------------------
54
55 - Install Homebrew following `brew homepage`_.
56
57 .. code-block:: bash
58
59 $ brew update
60 $ brew install conan
61
62 From source
63 -----------
64
65 You can run the **conan** client and server on Windows, macOS, and Linux.
66
67 - **Install pip following** `pip docs`_.
68
69 - **Clone conan repository:**
70
71 .. code-block:: bash
72
73 $ git clone https://github.com/conan-io/conan.git
74
75 - **Install in editable mode**
76
77 .. code-block:: bash
78
79 $ cd conan && sudo pip install -e .
80
81 If you are on Windows, using ``sudo`` is not required.
82
83 - **You are ready, try to run conan:**
84
85 .. code-block::
86
87 $ conan --help
88
89 Consumer commands
90 install Installs the requirements specified in a conanfile (.py or .txt).
91 config Manages configuration. Edits the conan.conf or installs config files.
92 get Gets a file or list a directory of a given reference or package.
93 info Gets information about the dependency graph of a recipe.
94 search Searches package recipes and binaries in the local cache or in a remote.
95 Creator commands
96 new Creates a new package recipe template with a 'conanfile.py'.
97 create Builds a binary package for recipe (conanfile.py) located in current dir.
98 upload Uploads a recipe and binary packages to a remote.
99 export Copies the recipe (conanfile.py & associated files) to your local cache.
100 export-pkg Exports a recipe & creates a package with given files calling 'package'.
101 test Test a package, consuming it with a conanfile recipe with a test() method.
102 Package development commands
103 source Calls your local conanfile.py 'source()' method.
104 build Calls your local conanfile.py 'build()' method.
105 package Calls your local conanfile.py 'package()' method.
106 Misc commands
107 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
108 remote Manages the remote list and the package recipes associated to a remote.
109 user Authenticates against a remote with user/pass, caching the auth token.
110 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
111 copy Copies conan recipes and packages to another user/channel.
112 remove Removes packages or binaries matching pattern from local cache or remote.
113 alias Creates and exports an 'alias recipe'.
114 download Downloads recipe and binaries to the local cache, without using settings.
115
116 Conan commands. Type "conan <command> -h" for help
117
118 Contributing to the project
119 ===========================
120
121 Feedback and contributions are always welcome in this project.
122 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_.
123
124 Running the tests
125 =================
126
127 Using tox
128 ---------
129
130 .. code-block:: bash
131
132 $ tox
133
134 It will install the needed requirements and launch ``nose``, skipping some heavy and slow tests.
135 If you want to run the full test suite:
136
137 .. code-block:: bash
138
139 $ tox -e full
140
141 Without tox
142 -----------
143
144 **Install python requirements**
145
146 .. code-block:: bash
147
148 $ pip install -r conans/requirements.txt
149 $ pip install -r conans/requirements_server.txt
150 $ pip install -r conans/requirements_dev.txt
151
152
153 Only in OSX:
154
155 .. code-block:: bash
156
157 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
158
159
160 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
161 commands using ``sudo``.
162
163 Before you can run the tests, you need to set a few environment variables.
164
165 .. code-block:: bash
166
167 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
168
169 On Windows it would be (while being in the conan root directory):
170
171 .. code-block:: bash
172
173 $ set PYTHONPATH=.
174
175 Ensure that your ``cmake`` is version 2.8 or later. You can check the
176 version with the following command:
177
178 .. code-block:: bash
179
180 $ cmake --version
181
182 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
183 operating system and your requirements.
184
185 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
186
187 .. code-block:: bash
188
189 $ export CONAN_COMPILER=gcc
190 $ export CONAN_COMPILER_VERSION=4.8
191
192 These should work for OS X:
193
194 .. code-block:: bash
195
196 $ export CONAN_COMPILER=clang
197 $ export CONAN_COMPILER_VERSION=3.5
198
199 Finally, there are some tests that use conan to package Go-lang
200 libraries, so you might **need to install go-lang** on your computer and
201 add it to the PATH.
202
203 You can run the actual tests like this:
204
205 .. code-block:: bash
206
207 $ nosetests .
208
209
210 There are a couple of test attributes defined, such as ``slow`` or ``golang``, that you can use
211 to filter the tests and skip executing them:
212
213 .. code-block:: bash
214
215 $ nosetests . -a !golang
216
217 A few minutes later it should print ``OK``:
218
219 .. code-block:: bash
220
221 ............................................................................................
222 ----------------------------------------------------------------------
223 Ran 146 tests in 50.993s
224
225 OK
226
227 To run specific tests, you can also specify the test name, for example:
228
229 .. code-block:: bash
230
231 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
232
233 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
234
235 License
236 -------
237
238 `MIT LICENSE <./LICENSE.md>`__
239
240 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
242
243 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
244 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
245
246 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
247 :target: https://codecov.io/gh/conan-io/conan/branch/master
248
249 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
250 :target: https://codecov.io/gh/conan-io/conan/branch/develop
251
252 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
253 :height: 50px
254 :width: 50 px
255 :alt: Conan develop coverage
256
257 .. |Logo| image:: https://conan.io/img/jfrog_conan_logo.png
258
259
260 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
261
262 .. _`brew homepage`: http://brew.sh/
263
[end of README.rst]
[start of conans/client/installer.py]
1 import os
2 import shutil
3 import time
4
5 from conans.client import tools
6 from conans.client.file_copier import report_copied_files
7 from conans.client.generators import TXTGenerator, write_generators
8 from conans.client.graph.graph import BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_MISSING, \
9 BINARY_SKIP, BINARY_UPDATE, BINARY_EDITABLE
10 from conans.client.importer import remove_imports, run_imports
11 from conans.client.packager import create_package
12 from conans.client.recorder.action_recorder import INSTALL_ERROR_BUILDING, INSTALL_ERROR_MISSING, \
13 INSTALL_ERROR_MISSING_BUILD_FOLDER
14 from conans.client.source import complete_recipe_sources, config_source
15 from conans.client.tools.env import pythonpath
16 from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod,
17 conanfile_exception_formatter)
18 from conans.model.build_info import CppInfo
19 from conans.model.conan_file import get_env_context_manager
20 from conans.model.editable_layout import EditableLayout
21 from conans.model.env_info import EnvInfo
22 from conans.model.manifest import FileTreeManifest
23 from conans.model.ref import PackageReference
24 from conans.model.user_info import UserInfo
25 from conans.paths import BUILD_INFO, CONANINFO, RUN_LOG_NAME
26 from conans.util.env_reader import get_env
27 from conans.util.files import (clean_dirty, is_dirty, make_read_only, mkdir, rmdir, save, set_dirty)
28 from conans.util.log import logger
29 from conans.util.tracer import log_package_built, log_package_got_from_local_cache
30
31
32 def build_id(conan_file):
33 if hasattr(conan_file, "build_id"):
34 # construct new ConanInfo
35 build_id_info = conan_file.info.copy()
36 conan_file.info_build = build_id_info
37 # effectively call the user function to change the package values
38 with conanfile_exception_formatter(str(conan_file), "build_id"):
39 conan_file.build_id()
40 # compute modified ID
41 return build_id_info.package_id()
42 return None
43
44
45 class _PackageBuilder(object):
46 def __init__(self, cache, output, hook_manager, remote_manager):
47 self._cache = cache
48 self._output = output
49 self._hook_manager = hook_manager
50 self._remote_manager = remote_manager
51
52 def _get_build_folder(self, conanfile, package_layout, pref, keep_build, recorder):
53 # Build folder can use a different package_ID if build_id() is defined.
54 # This function decides if the build folder should be re-used (not build again)
55 # and returns the build folder
56 new_id = build_id(conanfile)
57 build_pref = PackageReference(pref.ref, new_id) if new_id else pref
58 build_folder = package_layout.build(build_pref)
59
60 if is_dirty(build_folder):
61 self._output.warn("Build folder is dirty, removing it: %s" % build_folder)
62 rmdir(build_folder)
63
64 # Decide if the build folder should be kept
65 skip_build = conanfile.develop and keep_build
66 if skip_build:
67 self._output.info("Won't be built as specified by --keep-build")
68 if not os.path.exists(build_folder):
69 msg = "--keep-build specified, but build folder not found"
70 recorder.package_install_error(pref, INSTALL_ERROR_MISSING_BUILD_FOLDER,
71 msg, remote_name=None)
72 raise ConanException(msg)
73 elif build_pref != pref and os.path.exists(build_folder) and hasattr(conanfile, "build_id"):
74 self._output.info("Won't be built, using previous build folder as defined in build_id()")
75 skip_build = True
76
77 return build_folder, skip_build
78
79 def _prepare_sources(self, conanfile, pref, package_layout, conanfile_path, source_folder,
80 build_folder, package_folder, remotes):
81 export_folder = package_layout.export()
82 export_source_folder = package_layout.export_sources()
83
84 complete_recipe_sources(self._remote_manager, self._cache, conanfile, pref.ref, remotes)
85 try:
86 rmdir(build_folder)
87 rmdir(package_folder)
88 except OSError as e:
89 raise ConanException("%s\n\nCouldn't remove folder, might be busy or open\n"
90 "Close any app using it, and retry" % str(e))
91
92 config_source(export_folder, export_source_folder, source_folder,
93 conanfile, self._output, conanfile_path, pref.ref,
94 self._hook_manager, self._cache)
95
96 if not getattr(conanfile, 'no_copy_source', False):
97 self._output.info('Copying sources to build folder')
98 try:
99 shutil.copytree(source_folder, build_folder, symlinks=True)
100 except Exception as e:
101 msg = str(e)
102 if "206" in msg: # System error shutil.Error 206: Filename or extension too long
103 msg += "\nUse short_paths=True if paths too long"
104 raise ConanException("%s\nError copying sources to build folder" % msg)
105 logger.debug("BUILD: Copied to %s", build_folder)
106 logger.debug("BUILD: Files copied %s", ",".join(os.listdir(build_folder)))
107
108 def _build(self, conanfile, pref, build_folder):
109 # Read generators from conanfile and generate the needed files
110 logger.info("GENERATORS: Writing generators")
111 write_generators(conanfile, build_folder, self._output)
112
113 # Build step might need DLLs, binaries as protoc to generate source files
114 # So execute imports() before build, storing the list of copied_files
115 copied_files = run_imports(conanfile, build_folder)
116
117 try:
118 self._hook_manager.execute("pre_build", conanfile=conanfile,
119 reference=pref.ref, package_id=pref.id)
120 logger.debug("Call conanfile.build() with files in build folder: %s",
121 os.listdir(build_folder))
122 self._output.highlight("Calling build()")
123 with conanfile_exception_formatter(str(conanfile), "build"):
124 conanfile.build()
125
126 self._output.success("Package '%s' built" % pref.id)
127 self._output.info("Build folder %s" % build_folder)
128 self._hook_manager.execute("post_build", conanfile=conanfile,
129 reference=pref.ref, package_id=pref.id)
130 except Exception as exc:
131 self._output.writeln("")
132 self._output.error("Package '%s' build failed" % pref.id)
133 self._output.warn("Build folder %s" % build_folder)
134 if isinstance(exc, ConanExceptionInUserConanfileMethod):
135 raise exc
136 raise ConanException(exc)
137 finally:
138 # Now remove all files that were imported with imports()
139 remove_imports(conanfile, copied_files, self._output)
140
141 def _package(self, conanfile, pref, package_layout, conanfile_path, build_folder,
142 package_folder):
143 # FIXME: Is weak to assign here the recipe_hash
144 manifest = package_layout.recipe_manifest()
145 conanfile.info.recipe_hash = manifest.summary_hash
146
147 # Creating ***info.txt files
148 save(os.path.join(build_folder, CONANINFO), conanfile.info.dumps())
149 self._output.info("Generated %s" % CONANINFO)
150 save(os.path.join(build_folder, BUILD_INFO), TXTGenerator(conanfile).content)
151 self._output.info("Generated %s" % BUILD_INFO)
152
153 package_id = pref.id
154 # Do the actual copy, call the conanfile.package() method
155 with get_env_context_manager(conanfile):
156 # Could be source or build depends no_copy_source
157 source_folder = conanfile.source_folder
158 install_folder = build_folder # While installing, the infos goes to build folder
159 create_package(conanfile, package_id, source_folder, build_folder,
160 package_folder, install_folder, self._hook_manager,
161 conanfile_path, pref.ref)
162
163 # Update package metadata
164 package_hash = package_layout.package_summary_hash(pref)
165 self._output.info("Created package revision %s" % package_hash)
166 with package_layout.update_metadata() as metadata:
167 metadata.packages[package_id].revision = package_hash
168 metadata.packages[package_id].recipe_revision = pref.ref.revision
169
170 if get_env("CONAN_READ_ONLY_CACHE", False):
171 make_read_only(package_folder)
172 # FIXME: Conan 2.0 Clear the registry entry (package ref)
173 return package_hash
174
175 def build_package(self, node, keep_build, recorder, remotes):
176 t1 = time.time()
177
178 conanfile = node.conanfile
179 pref = node.pref
180
181 package_layout = self._cache.package_layout(pref.ref, conanfile.short_paths)
182 source_folder = package_layout.source()
183 conanfile_path = package_layout.conanfile()
184 package_folder = package_layout.package(pref)
185
186 build_folder, skip_build = self._get_build_folder(conanfile, package_layout,
187 pref, keep_build, recorder)
188 # PREPARE SOURCES
189 if not skip_build:
190 with package_layout.conanfile_write_lock(self._output):
191 set_dirty(build_folder)
192 self._prepare_sources(conanfile, pref, package_layout, conanfile_path, source_folder,
193 build_folder, package_folder, remotes)
194
195 # BUILD & PACKAGE
196 with package_layout.conanfile_read_lock(self._output):
197 mkdir(build_folder)
198 os.chdir(build_folder)
199 self._output.info('Building your package in %s' % build_folder)
200 try:
201 if getattr(conanfile, 'no_copy_source', False):
202 conanfile.source_folder = source_folder
203 else:
204 conanfile.source_folder = build_folder
205
206 if not skip_build:
207 with get_env_context_manager(conanfile):
208 conanfile.build_folder = build_folder
209 conanfile.package_folder = package_folder
210 # In local cache, install folder always is build_folder
211 conanfile.install_folder = build_folder
212 self._build(conanfile, pref, build_folder)
213 clean_dirty(build_folder)
214
215 prev = self._package(conanfile, pref, package_layout, conanfile_path, build_folder,
216 package_folder)
217 node.prev = prev
218 log_file = os.path.join(build_folder, RUN_LOG_NAME)
219 log_file = log_file if os.path.exists(log_file) else None
220 log_package_built(pref, time.time() - t1, log_file)
221 recorder.package_built(pref)
222 except ConanException as exc:
223 recorder.package_install_error(pref, INSTALL_ERROR_BUILDING,
224 str(exc), remote_name=None)
225 raise exc
226
227 return node.pref
228
229
230 def _handle_system_requirements(conan_file, pref, cache, out):
231 """ check first the system_reqs/system_requirements.txt existence, if not existing
232 check package/sha1/
233
234 Used after remote package retrieving and before package building
235 """
236 if "system_requirements" not in type(conan_file).__dict__:
237 return
238
239 system_reqs_path = cache.system_reqs(pref.ref)
240 system_reqs_package_path = cache.system_reqs_package(pref)
241 if os.path.exists(system_reqs_path) or os.path.exists(system_reqs_package_path):
242 return
243
244 ret = call_system_requirements(conan_file, out)
245
246 try:
247 ret = str(ret or "")
248 except Exception:
249 out.warn("System requirements didn't return a string")
250 ret = ""
251 if getattr(conan_file, "global_system_requirements", None):
252 save(system_reqs_path, ret)
253 else:
254 save(system_reqs_package_path, ret)
255
256
257 def call_system_requirements(conanfile, output):
258 try:
259 return conanfile.system_requirements()
260 except Exception as e:
261 output.error("while executing system_requirements(): %s" % str(e))
262 raise ConanException("Error in system requirements")
263
264
265 def raise_package_not_found_error(conan_file, ref, package_id, dependencies, out, recorder):
266 settings_text = ", ".join(conan_file.info.full_settings.dumps().splitlines())
267 options_text = ", ".join(conan_file.info.full_options.dumps().splitlines())
268 dependencies_text = ', '.join(dependencies)
269
270 msg = '''Can't find a '%s' package for the specified settings, options and dependencies:
271 - Settings: %s
272 - Options: %s
273 - Dependencies: %s
274 - Package ID: %s
275 ''' % (ref, settings_text, options_text, dependencies_text, package_id)
276 out.warn(msg)
277 recorder.package_install_error(PackageReference(ref, package_id), INSTALL_ERROR_MISSING, msg)
278 raise ConanException('''Missing prebuilt package for '%s'
279 Try to build it from sources with "--build %s"
280 Or read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package"
281 ''' % (ref, ref.name))
282
283
284 class BinaryInstaller(object):
285     """ Main class responsible for retrieving binary packages, or building them from source
286     locally when they are not found in the remotes
287     """
288 def __init__(self, cache, output, remote_manager, recorder, hook_manager):
289 self._cache = cache
290 self._out = output
291 self._remote_manager = remote_manager
292 self._recorder = recorder
293 self._hook_manager = hook_manager
294
295 def install(self, deps_graph, remotes, keep_build=False, graph_info=None):
296 # order by levels and separate the root node (ref=None) from the rest
297 nodes_by_level = deps_graph.by_levels()
298 root_level = nodes_by_level.pop()
299 root_node = root_level[0]
300 # Get the nodes in order and if we have to build them
301 self._build(nodes_by_level, keep_build, root_node, graph_info, remotes)
302
303 def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes):
304 processed_package_refs = set()
305 for level in nodes_by_level:
306 for node in level:
307 ref, conan_file = node.ref, node.conanfile
308 output = conan_file.output
309 package_id = node.package_id
310 if node.binary == BINARY_MISSING:
311 dependencies = [str(dep.dst) for dep in node.dependencies]
312 raise_package_not_found_error(conan_file, ref, package_id, dependencies,
313 out=output, recorder=self._recorder)
314
315 self._propagate_info(node)
316 if node.binary == BINARY_EDITABLE:
317 self._handle_node_editable(node, graph_info)
318 else:
319 if node.binary == BINARY_SKIP: # Privates not necessary
320 continue
321 assert ref.revision is not None, "Installer should receive RREV always"
322 _handle_system_requirements(conan_file, node.pref, self._cache, output)
323 self._handle_node_cache(node, keep_build, processed_package_refs, remotes)
324
325 # Finally, propagate information to root node (ref=None)
326 self._propagate_info(root_node)
327
328 def _node_concurrently_installed(self, node, package_folder):
329 if node.binary == BINARY_DOWNLOAD and os.path.exists(package_folder):
330 return True
331 elif node.binary == BINARY_UPDATE:
332 read_manifest = FileTreeManifest.load(package_folder)
333 if node.update_manifest == read_manifest:
334 return True
335
336 def _handle_node_editable(self, node, graph_info):
337 # Get source of information
338 package_layout = self._cache.package_layout(node.ref)
339 base_path = package_layout.base_folder()
340 self._call_package_info(node.conanfile, package_folder=base_path)
341
342 node.conanfile.cpp_info.filter_empty = False
343 # Try with package-provided file
344 editable_cpp_info = package_layout.editable_cpp_info()
345 if editable_cpp_info:
346 editable_cpp_info.apply_to(node.ref,
347 node.conanfile.cpp_info,
348 settings=node.conanfile.settings,
349 options=node.conanfile.options)
350
351 build_folder = editable_cpp_info.folder(node.ref, EditableLayout.BUILD_FOLDER,
352 settings=node.conanfile.settings,
353 options=node.conanfile.options)
354 if build_folder is not None:
355 build_folder = os.path.join(base_path, build_folder)
356 output = node.conanfile.output
357 write_generators(node.conanfile, build_folder, output)
358 save(os.path.join(build_folder, CONANINFO), node.conanfile.info.dumps())
359 output.info("Generated %s" % CONANINFO)
360 graph_info.save(build_folder)
361 output.info("Generated graphinfo")
362 save(os.path.join(build_folder, BUILD_INFO), TXTGenerator(node.conanfile).content)
363 output.info("Generated %s" % BUILD_INFO)
364 # Build step might need DLLs, binaries as protoc to generate source files
365 # So execute imports() before build, storing the list of copied_files
366 copied_files = run_imports(node.conanfile, build_folder)
367 report_copied_files(copied_files, output)
368
369 def _handle_node_cache(self, node, keep_build, processed_package_references, remotes):
370 pref = node.pref
371 assert pref.id, "Package-ID without value"
372
373 conan_file = node.conanfile
374 output = conan_file.output
375 package_folder = self._cache.package(pref, conan_file.short_paths)
376
377 with self._cache.package_lock(pref):
378 if pref not in processed_package_references:
379 processed_package_references.add(pref)
380 if node.binary == BINARY_BUILD:
381 assert node.prev is None, "PREV for %s to be built should be None" % str(pref)
382 set_dirty(package_folder)
383 pref = self._build_package(node, pref, output, keep_build, remotes)
384 clean_dirty(package_folder)
385 assert node.prev is not None, "PREV for %s to be built is None" % str(pref)
386 assert pref.revision is not None, "PREV for %s to be built is None" % str(pref)
387 elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD):
388 assert node.prev, "PREV for %s is None" % str(pref)
389 if not self._node_concurrently_installed(node, package_folder):
390 set_dirty(package_folder)
391 assert pref.revision is not None, "Installer should receive #PREV always"
392 self._remote_manager.get_package(pref, package_folder,
393 node.binary_remote, output,
394 self._recorder)
395 output.info("Downloaded package revision %s" % pref.revision)
396 with self._cache.package_layout(pref.ref).update_metadata() as metadata:
397 metadata.packages[pref.id].remote = node.binary_remote.name
398 clean_dirty(package_folder)
399 else:
400 output.success('Download skipped. Probable concurrent download')
401 log_package_got_from_local_cache(pref)
402 self._recorder.package_fetched_from_cache(pref)
403 elif node.binary == BINARY_CACHE:
404 assert node.prev, "PREV for %s is None" % str(pref)
405 output.success('Already installed!')
406 log_package_got_from_local_cache(pref)
407 self._recorder.package_fetched_from_cache(pref)
408
409 # Call the info method
410 self._call_package_info(conan_file, package_folder)
411 self._recorder.package_cpp_info(pref, conan_file.cpp_info)
412
413 def _build_package(self, node, pref, output, keep_build, remotes):
414 conanfile = node.conanfile
415 assert pref.id, "Package-ID without value"
416
417 # It is necessary to complete the sources of python requires, which might be used
418 for python_require in conanfile.python_requires:
419 assert python_require.ref.revision is not None, \
420 "Installer should receive python_require.ref always"
421 complete_recipe_sources(self._remote_manager, self._cache,
422 conanfile, python_require.ref, remotes)
423
424 builder = _PackageBuilder(self._cache, output, self._hook_manager, self._remote_manager)
425 pref = builder.build_package(node, keep_build, self._recorder, remotes)
426 return pref
427
428 @staticmethod
429 def _propagate_info(node):
430 # Get deps_cpp_info from upstream nodes
431 node_order = [n for n in node.public_closure if n.binary != BINARY_SKIP]
432 # List sort is stable, will keep the original order of the closure, but prioritize levels
433 conan_file = node.conanfile
434 for n in node_order:
435 if n.build_require:
436 conan_file.output.info("Applying build-requirement: %s" % str(n.ref))
437 conan_file.deps_cpp_info.update(n.conanfile.cpp_info, n.ref.name)
438 conan_file.deps_env_info.update(n.conanfile.env_info, n.ref.name)
439 conan_file.deps_user_info[n.ref.name] = n.conanfile.user_info
440
441             # Update the info, but filter out the package values that do not apply to the
442             # subtree of this current node and its dependencies.
443 subtree_libnames = [node.ref.name for node in node_order]
444 for package_name, env_vars in conan_file._conan_env_values.data.items():
445 for name, value in env_vars.items():
446 if not package_name or package_name in subtree_libnames or \
447 package_name == conan_file.name:
448 conan_file.info.env_values.add(name, value, package_name)
449
450 @staticmethod
451 def _call_package_info(conanfile, package_folder):
452 conanfile.cpp_info = CppInfo(package_folder)
453 conanfile.cpp_info.version = conanfile.version
454 conanfile.cpp_info.description = conanfile.description
455 conanfile.env_info = EnvInfo()
456 conanfile.user_info = UserInfo()
457
458 # Get deps_cpp_info from upstream nodes
459 public_deps = [name for name, req in conanfile.requires.items() if not req.private]
460 conanfile.cpp_info.public_deps = public_deps
461 # Once the node is build, execute package info, so it has access to the
462 # package folder and artifacts
463 with pythonpath(conanfile): # Minimal pythonpath, not the whole context, make it 50% slower
464 with tools.chdir(package_folder):
465 with conanfile_exception_formatter(str(conanfile), "package_info"):
466 conanfile.package_folder = package_folder
467 conanfile.source_folder = None
468 conanfile.build_folder = None
469 conanfile.install_folder = None
470 conanfile.package_info()
471
[end of conans/client/installer.py]
[start of conans/client/source.py]
1 import os
2 import shutil
3
4 import six
5
6 from conans.client import tools
7 from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod,
8 conanfile_exception_formatter)
9 from conans.model.conan_file import get_env_context_manager
10 from conans.model.scm import SCM, get_scm_data
11 from conans.paths import CONANFILE, CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME, EXPORT_TGZ_NAME
12 from conans.util.files import (clean_dirty, is_dirty, load, mkdir, rmdir, set_dirty, walk)
13
14
15 def complete_recipe_sources(remote_manager, cache, conanfile, ref, remotes):
16     """ The "exports_sources" sources are not retrieved unless they are necessary to build. On some
17     occasions, Conan needs to get them too, e.g. when uploading to a server, to keep the recipes
18     complete
19 """
20 sources_folder = cache.export_sources(ref, conanfile.short_paths)
21 if os.path.exists(sources_folder):
22 return None
23
24 if conanfile.exports_sources is None:
25 mkdir(sources_folder)
26 return None
27
28     # If no path to the sources exists, we have a problem; at least an empty folder
29     # should be there
30 current_remote = cache.package_layout(ref).load_metadata().recipe.remote
31 if current_remote:
32 current_remote = remotes[current_remote]
33 if not current_remote:
34 raise ConanException("Error while trying to get recipe sources for %s. "
35 "No remote defined" % str(ref))
36
37 export_path = cache.export(ref)
38 remote_manager.get_recipe_sources(ref, export_path, sources_folder, current_remote)
39
40
41 def merge_directories(src, dst, excluded=None, symlinks=True):
42 src = os.path.normpath(src)
43 dst = os.path.normpath(dst)
44 excluded = excluded or []
45 excluded = [os.path.normpath(entry) for entry in excluded]
46
47 def is_excluded(origin_path):
48 if origin_path == dst:
49 return True
50 rel_path = os.path.normpath(os.path.relpath(origin_path, src))
51 if rel_path in excluded:
52 return True
53 return False
54
55 for src_dir, dirs, files in walk(src, followlinks=True):
56 if is_excluded(src_dir):
57 dirs[:] = []
58 continue
59
60         # Overwriting the dirs prevents walk from descending into them
61 files[:] = [d for d in files if not is_excluded(os.path.join(src_dir, d))]
62
63 dst_dir = os.path.normpath(os.path.join(dst, os.path.relpath(src_dir, src)))
64 if not os.path.exists(dst_dir):
65 os.makedirs(dst_dir)
66 for file_ in files:
67 src_file = os.path.join(src_dir, file_)
68 dst_file = os.path.join(dst_dir, file_)
69 if os.path.islink(src_file) and symlinks:
70 linkto = os.readlink(src_file)
71 os.symlink(linkto, dst_file)
72 else:
73 shutil.copy2(src_file, dst_file)
74
75
76 def config_source_local(src_folder, conanfile, conanfile_path, hook_manager):
77 """ Entry point for the "conan source" command.
78 """
79 conanfile_folder = os.path.dirname(conanfile_path)
80 _run_source(conanfile, conanfile_path, src_folder, hook_manager, reference=None,
81 cache=None, export_folder=None, export_source_folder=None,
82 local_sources_path=conanfile_folder)
83
84
85 def config_source(export_folder, export_source_folder, src_folder, conanfile, output,
86 conanfile_path, reference, hook_manager, cache):
87 """ Implements the sources configuration when a package is going to be built in the
88 local cache.
89 """
90
91 def remove_source(raise_error=True):
92 output.warn("This can take a while for big packages")
93 try:
94 rmdir(src_folder)
95 except BaseException as e_rm:
96 set_dirty(src_folder)
97 msg = str(e_rm)
98 if six.PY2:
99 msg = str(e_rm).decode("latin1") # Windows prints some chars in latin1
100 output.error("Unable to remove source folder %s\n%s" % (src_folder, msg))
101 output.warn("**** Please delete it manually ****")
102 if raise_error or isinstance(e_rm, KeyboardInterrupt):
103 raise ConanException("Unable to remove source folder")
104
105 sources_pointer = cache.scm_folder(reference)
106 local_sources_path = load(sources_pointer) if os.path.exists(sources_pointer) else None
107 if is_dirty(src_folder):
108 output.warn("Trying to remove corrupted source folder")
109 remove_source()
110 elif conanfile.build_policy_always:
111 output.warn("Detected build_policy 'always', trying to remove source folder")
112 remove_source()
113 elif local_sources_path and os.path.exists(local_sources_path):
114 output.warn("Detected 'scm' auto in conanfile, trying to remove source folder")
115 remove_source()
116
117 if not os.path.exists(src_folder): # No source folder, need to get it
118 set_dirty(src_folder)
119 mkdir(src_folder)
120 _run_source(conanfile, conanfile_path, src_folder, hook_manager, reference,
121 cache, export_folder, export_source_folder, local_sources_path)
122 clean_dirty(src_folder) # Everything went well, remove DIRTY flag
123
124
125 def _run_source(conanfile, conanfile_path, src_folder, hook_manager, reference,
126 cache, export_folder, export_source_folder, local_sources_path):
127 """Execute the source core functionality, both for local cache and user space, in order:
128 - Calling pre_source hook
129 - Getting sources from SCM
130 - Getting sources from exported folders in the local cache
131 - Clean potential TGZ and other files in the local cache
132 - Executing the recipe source() method
133 - Calling post_source hook
134 """
135 conanfile.source_folder = src_folder
136 conanfile.build_folder = None
137 conanfile.package_folder = None
138 with tools.chdir(src_folder):
139 try:
140 with get_env_context_manager(conanfile):
141 hook_manager.execute("pre_source", conanfile=conanfile,
142 conanfile_path=conanfile_path,
143 reference=reference)
144 output = conanfile.output
145 output.info('Configuring sources in %s' % src_folder)
146 _run_scm(conanfile, src_folder, local_sources_path, output, cache=cache)
147
148 if cache:
149 _get_sources_from_exports(src_folder, export_folder, export_source_folder)
150 _clean_source_folder(src_folder)
151 with conanfile_exception_formatter(conanfile.display_name, "source"):
152 conanfile.source()
153
154 hook_manager.execute("post_source", conanfile=conanfile,
155 conanfile_path=conanfile_path,
156 reference=reference)
157 except ConanExceptionInUserConanfileMethod:
158 raise
159 except Exception as e:
160 raise ConanException(e)
161
162
163 def _get_sources_from_exports(src_folder, export_folder, export_source_folder):
164 # so self exported files have precedence over python_requires ones
165 merge_directories(export_folder, src_folder)
166 # Now move the export-sources to the right location
167 merge_directories(export_source_folder, src_folder)
168
169
170 def _clean_source_folder(folder):
171 for f in (EXPORT_TGZ_NAME, EXPORT_SOURCES_TGZ_NAME, CONANFILE+"c",
172 CONANFILE+"o", CONANFILE, CONAN_MANIFEST):
173 try:
174 os.remove(os.path.join(folder, f))
175 except OSError:
176 pass
177 try:
178 shutil.rmtree(os.path.join(folder, "__pycache__"))
179 except OSError:
180 pass
181
182
183 def _run_scm(conanfile, src_folder, local_sources_path, output, cache):
184 scm_data = get_scm_data(conanfile)
185 if not scm_data:
186 return
187
188 dest_dir = os.path.normpath(os.path.join(src_folder, scm_data.subfolder))
189 if cache:
190         # When in cache, capturing the sources from user space is done only if the path exists
191 if not local_sources_path or not os.path.exists(local_sources_path):
192 local_sources_path = None
193 else:
194 # In user space, if revision="auto", then copy
195 if scm_data.capture_origin or scm_data.capture_revision: # FIXME: or clause?
196 scm = SCM(scm_data, local_sources_path, output)
197 scm_url = scm_data.url if scm_data.url != "auto" else \
198 scm.get_qualified_remote_url(remove_credentials=True)
199
200 src_path = scm.get_local_path_to_url(url=scm_url)
201 if src_path:
202 local_sources_path = src_path
203 else:
204 local_sources_path = None
205
206 if local_sources_path and conanfile.develop:
207 excluded = SCM(scm_data, local_sources_path, output).excluded_files
208 output.info("Getting sources from folder: %s" % local_sources_path)
209 merge_directories(local_sources_path, dest_dir, excluded=excluded)
210 else:
211 output.info("Getting sources from url: '%s'" % scm_data.url)
212 scm = SCM(scm_data, dest_dir, output)
213 scm.checkout()
214
215 if cache:
216         # This is a bit weird. Why should we remove files after an SCM checkout? Maybe revisit for Conan 2.0
217 _clean_source_folder(dest_dir)
218
[end of conans/client/source.py]
[start of conans/model/manifest.py]
1 import calendar
2 import datetime
3 import os
4 import time
5
6 from conans.errors import ConanException
7 from conans.paths import CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME, EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME
8 from conans.util.files import load, md5, md5sum, save, walk
9
10
11 def discarded_file(filename):
12 """
13 # The __conan pattern is to be prepared for the future, in case we want to manage our
14 own files that shouldn't be uploaded
15 """
16 return filename == ".DS_Store" or filename.endswith(".pyc") or \
17 filename.endswith(".pyo") or filename == "__pycache__" or \
18 filename.startswith("__conan")
19
20
21 def gather_files(folder):
22 file_dict = {}
23 symlinks = {}
24 for root, dirs, files in walk(folder):
25 dirs[:] = [d for d in dirs if d != "__pycache__"] # Avoid recursing pycache
26 for d in dirs:
27 abs_path = os.path.join(root, d)
28 if os.path.islink(abs_path):
29 rel_path = abs_path[len(folder) + 1:].replace("\\", "/")
30 symlinks[rel_path] = os.readlink(abs_path)
31 for f in files:
32 if discarded_file(f):
33 continue
34 abs_path = os.path.join(root, f)
35 rel_path = abs_path[len(folder) + 1:].replace("\\", "/")
36 if os.path.exists(abs_path):
37 file_dict[rel_path] = abs_path
38 else:
39 raise ConanException("The file is a broken symlink, verify that "
40 "you are packaging the needed destination files: '%s'"
41 % abs_path)
42
43 return file_dict, symlinks
44
45
46 class FileTreeManifest(object):
47
48 def __init__(self, the_time, file_sums):
49 """file_sums is a dict with filepaths and md5's: {filepath/to/file.txt: md5}"""
50 self.time = the_time
51 self.file_sums = file_sums
52
53 def files(self):
54 return self.file_sums.keys()
55
56 @property
57 def summary_hash(self):
58 s = ["%s: %s" % (f, fmd5) for f, fmd5 in sorted(self.file_sums.items())]
59 s.append("")
60 return md5("\n".join(s))
61
62 @property
63 def time_str(self):
64 return datetime.datetime.fromtimestamp(int(self.time)).strftime('%Y-%m-%d %H:%M:%S')
65
66 @staticmethod
67 def loads(text):
68         """ Parses a string representation, generated with __repr__ of a
69         FileTreeManifest
70 """
71 tokens = text.split("\n")
72 the_time = int(tokens[0])
73 file_sums = {}
74 for md5line in tokens[1:]:
75 if md5line:
76 filename, file_md5 = md5line.split(": ")
77 if not discarded_file(filename):
78 file_sums[filename] = file_md5
79 return FileTreeManifest(the_time, file_sums)
80
81 @staticmethod
82 def load(folder):
83 text = load(os.path.join(folder, CONAN_MANIFEST))
84 return FileTreeManifest.loads(text)
85
86 def __repr__(self):
87 ret = ["%s" % self.time]
88 for file_path, file_md5 in sorted(self.file_sums.items()):
89 ret.append("%s: %s" % (file_path, file_md5))
90 ret.append("")
91 content = "\n".join(ret)
92 return content
93
94 def __str__(self):
95 dt = datetime.datetime.utcfromtimestamp(self.time).strftime('%Y-%m-%d %H:%M:%S')
96 ret = ["Time: %s" % dt]
97 for file_path, file_md5 in sorted(self.file_sums.items()):
98 ret.append("%s, MD5: %s" % (file_path, file_md5))
99 ret.append("")
100 content = "\n".join(ret)
101 return content
102
103 def save(self, folder, filename=CONAN_MANIFEST):
104 path = os.path.join(folder, filename)
105 save(path, repr(self))
106
107 @classmethod
108 def create(cls, folder, exports_sources_folder=None):
109         """ Walks a folder and creates a FileTreeManifest for it, reading file contents
110 from disk, and capturing current time
111 """
112 files, _ = gather_files(folder)
113 for f in (PACKAGE_TGZ_NAME, EXPORT_TGZ_NAME, CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME):
114 files.pop(f, None)
115
116 file_dict = {}
117 for name, filepath in files.items():
118 file_dict[name] = md5sum(filepath)
119
120 if exports_sources_folder:
121 export_files, _ = gather_files(exports_sources_folder)
122 for name, filepath in export_files.items():
123 file_dict["export_source/%s" % name] = md5sum(filepath)
124
125 date = calendar.timegm(time.gmtime())
126
127 return cls(date, file_dict)
128
129 def __eq__(self, other):
130         """ Two manifests are equal if their file_sums are equal
131 """
132 return self.file_sums == other.file_sums
133
134 def __ne__(self, other):
135 return not self.__eq__(other)
136
137 def difference(self, other):
138 result = {}
139 for f, h in self.file_sums.items():
140 h2 = other.file_sums.get(f)
141 if h != h2:
142 result[f] = h, h2
143 for f, h in other.file_sums.items():
144 h2 = self.file_sums.get(f)
145 if h != h2:
146 result[f] = h2, h
147 return result
148
[end of conans/model/manifest.py]
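For reference, here is a minimal sketch (not part of the repository files above) showing how `gather_files` currently reacts to a dangling symlink, which is the behaviour the request further below asks to make optional. The paths and file names are illustrative, and the snippet assumes a POSIX filesystem where `os.symlink` is available:

```python
# Sketch only: reproduce the current broken-symlink behaviour of gather_files().
import os
import tempfile

from conans.errors import ConanException
from conans.model.manifest import gather_files

folder = tempfile.mkdtemp()
# Create a symlink whose target does not exist inside the folder to be packaged
os.symlink(os.path.join(folder, "does-not-exist"), os.path.join(folder, "broken"))

try:
    gather_files(folder)
except ConanException as e:
    # Raised by the os.path.exists() check in gather_files() above
    print("packaging would fail:", e)
```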
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conan-io/conan
|
44dbdef0ffe44d520e1c401b74b9c01c787ff45a
|
Opt-in to skip the broken-symlinks check when packaging
We need to package directories that contain broken symlinks. We are trying to package a Yocto SDK and did not manage to fix or remove the symlinks, because the SDK stops working. So we would need something like `CONAN_SKIP_BROKEN_SYMLINKS_CHECK` to disable the check in `manifest.py` line 39.
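A minimal sketch of the shape such an opt-out could take; the environment variable name comes from the request above and is not an existing Conan option, and the hypothetical `check_packaged_file` helper simply mirrors how `installer.py` above already gates behaviour on `get_env("CONAN_READ_ONLY_CACHE", False)`:

```python
# Sketch only: gate the broken-symlink error behind an opt-out environment variable.
import os

from conans.errors import ConanException
from conans.util.env_reader import get_env


def check_packaged_file(abs_path):
    """Hypothetical helper: raise on broken symlinks unless the user opts out."""
    if os.path.exists(abs_path):
        return
    if get_env("CONAN_SKIP_BROKEN_SYMLINKS_CHECK", False):
        return  # e.g. Yocto SDKs that intentionally ship dangling symlinks
    raise ConanException("The file is a broken symlink, verify that you are "
                         "packaging the needed destination files: '%s'" % abs_path)
```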
|
2019-04-17T09:03:58Z
|
<patch>
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -104,6 +104,7 @@
# use_always_short_paths = False # environment CONAN_USE_ALWAYS_SHORT_PATHS
# skip_vs_projects_upgrade = False # environment CONAN_SKIP_VS_PROJECTS_UPGRADE
# non_interactive = False # environment CONAN_NON_INTERACTIVE
+# skip_broken_symlinks_check = False # environment CONAN_SKIP_BROKEN_SYMLINKS_CHECK
# conan_make_program = make # environment CONAN_MAKE_PROGRAM (overrides the make program used in AutoToolsBuildEnvironment.make)
# conan_cmake_program = cmake # environment CONAN_CMAKE_PROGRAM (overrides the make program used in CMake.cmake_program)
@@ -174,6 +175,7 @@ def env_vars(self):
"CONAN_PRINT_RUN_COMMANDS": self._env_c("log.print_run_commands", "CONAN_PRINT_RUN_COMMANDS", "False"),
"CONAN_COMPRESSION_LEVEL": self._env_c("general.compression_level", "CONAN_COMPRESSION_LEVEL", "9"),
"CONAN_NON_INTERACTIVE": self._env_c("general.non_interactive", "CONAN_NON_INTERACTIVE", "False"),
+ "CONAN_SKIP_BROKEN_SYMLINKS_CHECK": self._env_c("general.skip_broken_symlinks_check", "CONAN_SKIP_BROKEN_SYMLINKS_CHECK", "False"),
"CONAN_PYLINTRC": self._env_c("general.pylintrc", "CONAN_PYLINTRC", None),
"CONAN_CACHE_NO_LOCKS": self._env_c("general.cache_no_locks", "CONAN_CACHE_NO_LOCKS", "False"),
"CONAN_PYLINT_WERR": self._env_c("general.pylint_werr", "CONAN_PYLINT_WERR", None),
diff --git a/conans/model/manifest.py b/conans/model/manifest.py
--- a/conans/model/manifest.py
+++ b/conans/model/manifest.py
@@ -5,6 +5,7 @@
from conans.errors import ConanException
from conans.paths import CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME, EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME
+from conans.util.env_reader import get_env
from conans.util.files import load, md5, md5sum, save, walk
@@ -36,9 +37,10 @@ def gather_files(folder):
if os.path.exists(abs_path):
file_dict[rel_path] = abs_path
else:
- raise ConanException("The file is a broken symlink, verify that "
- "you are packaging the needed destination files: '%s'"
- % abs_path)
+ if not get_env("CONAN_SKIP_BROKEN_SYMLINKS_CHECK", False):
+ raise ConanException("The file is a broken symlink, verify that "
+ "you are packaging the needed destination files: '%s'"
+ % abs_path)
return file_dict, symlinks
diff --git a/conans/util/files.py b/conans/util/files.py
--- a/conans/util/files.py
+++ b/conans/util/files.py
@@ -73,7 +73,10 @@ def touch(fname, times=None):
def touch_folder(folder):
for dirname, _, filenames in walk(folder):
for fname in filenames:
- os.utime(os.path.join(dirname, fname), None)
+ try:
+ os.utime(os.path.join(dirname, fname), None)
+ except Exception:
+ pass
def normalize(text):
</patch>
|
[]
|
[]
| ||||
Lightning-AI__lightning-964
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
logger is None, hence doesn't have `experiment` or any other functionality, in a LightningModule
## 🐛 Bug
When trying to use the logging abilities of Lightning, I hit a wall: the default and TensorBoard loggers both seem to stay uninitialized when calling ```trainer.fit(model)```, resulting in crashes every time I try to log something.
### To Reproduce
Create a LightningModule as follows:
```
class SimpleRegressor(pl.LightningModule):
...
```
Use the logger anywhere to get this kind of stacktrace:
```
d:\Documents\projects\MetaWatch\MetaWatch\notebooks\audio-video-interest\simple_regressor.py in configure_optimizers(self)
105 #see https://pytorch-lightning.readthedocs.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers
106 # REQUIRED
--> 107 self.logger.experiment.add_hparams({'hidden_layer_size':self.hidden_layer_size,
108 'linear_layer_size':self.linear_layer_size,
109 'lstm_layers':self.lstm_layers})
AttributeError: 'NoneType' object has no attribute 'experiment'
```
#### Code sample
```
import pytorch_lightning as pl
class SimpleRegressor(pl.LightningModule):
def __init__(self, cuda=False):
super(SimpleRegressor, self).__init__()
self.logger.experiment.add_hparams({'hidden_layer_size':1})
```
### Expected behavior
To log as described in the documentation.
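For contrast, a minimal sketch of where the logger is normally usable: hooks that run after `trainer.fit(model)` has attached it. The module below is illustrative rather than the actual model from the report, and the `add_scalar` call assumes a TensorBoard-style `experiment` (i.e. a `SummaryWriter`-backed logger):
```python
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl


class SimpleRegressor(pl.LightningModule):

    def __init__(self):
        super(SimpleRegressor, self).__init__()
        # NOTE: self.logger is still None here; the Trainer only attaches it in fit()
        self.l1 = torch.nn.Linear(8, 1)

    def forward(self, x):
        return self.l1(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.forward(x), y)
        # By the time training_step runs, trainer.fit(model) has attached the logger,
        # so self.logger.experiment is usable here (unlike in __init__ above)
        self.logger.experiment.add_scalar('train_loss', loss.item(), batch_idx)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)

    @pl.data_loader
    def train_dataloader(self):
        x, y = torch.randn(64, 8), torch.randn(64, 1)
        return DataLoader(TensorDataset(x, y), batch_size=16)
```
With this layout, `Trainer().fit(SimpleRegressor())` logs without hitting the `NoneType` error shown above.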
### Environment
```
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 970
Nvidia driver version: 441.12
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.0
[pip3] tinynumpy==1.2.1
[pip3] torch==1.4.0
[pip3] torchvision==0.4.1
[conda] Could not collect
```
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.**
8
9
10 [](https://badge.fury.io/py/pytorch-lightning)
11 [](https://pepy.tech/project/pytorch-lightning)
12 [](https://github.com/PytorchLightning/pytorch-lightning/tree/master/tests#running-coverage)
13 [](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
14
15 [](https://pytorch-lightning.readthedocs.io/en/latest/)
16 [](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ)
17 [](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
18 [](https://shields.io/)
19
20 <!--
21 removed until codecov badge isn't empy. likely a config error showing nothing on master.
22 [](https://codecov.io/gh/Borda/pytorch-lightning)
23 -->
24 </div>
25
26 ---
27 ## Continuous Integration
28 <center>
29
30 | System / PyTorch Version | 1.1 | 1.2 | 1.3 | 1.4 |
31 | :---: | :---: | :---: | :---: | :---: |
32 | Linux py3.6 | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) |
33 | Linux py3.7 |  | <center>—</center> | <center>—</center> |  |
34 | OSX py3.6 |  | <center>—</center> | <center>—</center> |  |
35 | OSX py3.7 |  | <center>—</center> | <center>—</center> |  |
36 | Windows py3.6 |  | <center>—</center> | <center>—</center> |  |
37 | Windows py3.7 |  | <center>—</center> | <center>—</center> |  |
38
39 </center>
40
41 Simple installation from PyPI
42 ```bash
43 pip install pytorch-lightning
44 ```
45
46 ## Docs
47 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
48 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/)
49 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/)
50
51
52 ## Demo
53 [Copy and run this COLAB!](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
54
55 ## What is it?
56 Lightning is a very lightweight wrapper on PyTorch that decouples the science code from the engineering code. It's more of a style-guide than a framework. By refactoring your code, we can automate most of the non-research code.
57
58 To use Lightning, simply refactor your research code into the [LightningModule](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) format (the science) and Lightning will automate the rest (the engineering). Lightning guarantees tested, correct, modern best practices for the automated parts.
59
60 - If you are a researcher, Lightning is infinitely flexible, you can modify everything down to the way .backward is called or distributed is set up.
61 - If you are a scientist or production team, lightning is very simple to use with best practice defaults.
62
63 ## What does lightning control for me?
64
65 Everything in Blue!
66 This is how lightning separates the science (red) from the engineering (blue).
67
68 
69
70 ## How much effort is it to convert?
71 You're probably tired of switching frameworks at this point. But it is a very quick process to refactor into the Lightning format (ie: hours). [Check out this tutorial](https://towardsdatascience.com/how-to-refactor-your-pytorch-code-to-get-these-42-benefits-of-pytorch-lighting-6fdd0dc97538)
72
73 ## Starting a new project?
74 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
75
76 ## Why do I want to use lightning?
77 Every research project starts the same, a model, a training loop, validation loop, etc. As your research advances, you're likely to need distributed training, 16-bit precision, checkpointing, gradient accumulation, etc.
78
79 Lightning sets up all the boilerplate state-of-the-art training for you so you can focus on the research.
80
81 ---
82
83 ## README Table of Contents
84 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it)
85 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me)
86 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard)
87 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable)
88 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples)
89 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials)
90 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help)
91 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md)
92 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge)
93 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles)
94 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team)
95 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq)
96
97 ---
98
99 ## How do I do use it?
100 Think about Lightning as refactoring your research code instead of using a new framework. The research code goes into a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) which you fit using a Trainer.
101
102 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... It can ALSO define a simple classifier such as the example below.
103
104 To use lightning do 2 things:
105 1. [Define a LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html)
106 **WARNING:** This syntax is for version 0.5.0+ where abbreviations were removed.
107 ```python
108 import os
109
110 import torch
111 from torch.nn import functional as F
112 from torch.utils.data import DataLoader
113 from torchvision.datasets import MNIST
114 from torchvision import transforms
115
116 import pytorch_lightning as pl
117
118 class CoolSystem(pl.LightningModule):
119
120 def __init__(self):
121 super(CoolSystem, self).__init__()
122 # not the best model...
123 self.l1 = torch.nn.Linear(28 * 28, 10)
124
125 def forward(self, x):
126 return torch.relu(self.l1(x.view(x.size(0), -1)))
127
128 def training_step(self, batch, batch_idx):
129 # REQUIRED
130 x, y = batch
131 y_hat = self.forward(x)
132 loss = F.cross_entropy(y_hat, y)
133 tensorboard_logs = {'train_loss': loss}
134 return {'loss': loss, 'log': tensorboard_logs}
135
136 def validation_step(self, batch, batch_idx):
137 # OPTIONAL
138 x, y = batch
139 y_hat = self.forward(x)
140 return {'val_loss': F.cross_entropy(y_hat, y)}
141
142 def validation_end(self, outputs):
143 # OPTIONAL
144 avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
145 tensorboard_logs = {'val_loss': avg_loss}
146 return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
147
148 def test_step(self, batch, batch_idx):
149 # OPTIONAL
150 x, y = batch
151 y_hat = self.forward(x)
152 return {'test_loss': F.cross_entropy(y_hat, y)}
153
154 def test_end(self, outputs):
155 # OPTIONAL
156 avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
157 tensorboard_logs = {'test_loss': avg_loss}
158 return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
159
160 def configure_optimizers(self):
161 # REQUIRED
162 # can return multiple optimizers and learning_rate schedulers
163 # (LBFGS it is automatically supported, no need for closure function)
164 return torch.optim.Adam(self.parameters(), lr=0.02)
165
166 @pl.data_loader
167 def train_dataloader(self):
168 # REQUIRED
169 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
170
171 @pl.data_loader
172 def val_dataloader(self):
173 # OPTIONAL
174 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
175
176 @pl.data_loader
177 def test_dataloader(self):
178 # OPTIONAL
179 return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
180 ```
181 2. Fit with a [trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)
182 ```python
183 from pytorch_lightning import Trainer
184
185 model = CoolSystem()
186
187 # most basic trainer, uses good defaults
188 trainer = Trainer()
189 trainer.fit(model)
190 ```
191
192 Trainer sets up a tensorboard logger, early stopping and checkpointing by default (you can modify all of them or
193 use something other than tensorboard).
194
195 Here are more advanced examples
196 ```python
197 # train on cpu using only 10% of the data (for demo purposes)
198 trainer = Trainer(max_epochs=1, train_percent_check=0.1)
199
200 # train on 4 gpus (lightning chooses GPUs for you)
201 # trainer = Trainer(max_epochs=1, gpus=4, distributed_backend='ddp')
202
203 # train on 4 gpus (you choose GPUs)
204 # trainer = Trainer(max_epochs=1, gpus=[0, 1, 3, 7], distributed_backend='ddp')
205
206 # train on 32 gpus across 4 nodes (make sure to submit appropriate SLURM job)
207 # trainer = Trainer(max_epochs=1, gpus=8, num_gpu_nodes=4, distributed_backend='ddp')
208
209 # train (1 epoch only here for demo)
210 trainer.fit(model)
211
212 # view tensorboard logs
213 logging.info(f'View tensorboard logs by running\ntensorboard --logdir {os.getcwd()}')
214 logging.info('and going to http://localhost:6006 on your browser')
215 ```
216
217 When you're all done you can even run the test set separately.
218 ```python
219 trainer.test()
220 ```
221
222 **Could be as complex as seq-2-seq + attention**
223
224 ```python
225 # define what happens for training here
226 def training_step(self, batch, batch_idx):
227 x, y = batch
228
229 # define your own forward and loss calculation
230 hidden_states = self.encoder(x)
231
232 # even as complex as a seq-2-seq + attn model
233 # (this is just a toy, non-working example to illustrate)
234 start_token = '<SOS>'
235 last_hidden = torch.zeros(...)
236 loss = 0
237 for step in range(max_seq_len):
238 attn_context = self.attention_nn(hidden_states, start_token)
239 pred = self.decoder(start_token, attn_context, last_hidden)
240 last_hidden = pred
241 pred = self.predict_nn(pred)
242 loss += self.loss(last_hidden, y[step])
243
244 #toy example as well
245 loss = loss / max_seq_len
246 return {'loss': loss}
247 ```
248
249 **Or as basic as CNN image classification**
250
251 ```python
252 # define what happens for validation here
253 def validation_step(self, batch, batch_idx):
254 x, y = batch
255
256 # or as basic as a CNN classification
257 out = self.forward(x)
258 loss = my_loss(out, y)
259 return {'loss': loss}
260 ```
261
262 **And you also decide how to collate the output of all validation steps**
263
264 ```python
265 def validation_end(self, outputs):
266 """
267 Called at the end of validation to aggregate outputs
268 :param outputs: list of individual outputs of each validation step
269 :return:
270 """
271 val_loss_mean = 0
272 val_acc_mean = 0
273 for output in outputs:
274 val_loss_mean += output['val_loss']
275 val_acc_mean += output['val_acc']
276
277 val_loss_mean /= len(outputs)
278 val_acc_mean /= len(outputs)
279 logs = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
280 result = {'log': logs}
281 return result
282 ```
283
284 ## Tensorboard
285 Lightning is fully integrated with TensorBoard and MLflow, and supports any logging module.
286
287 
288
289 Lightning also adds a text column with all the hyperparameters for this experiment.
290
291 
292
293 ## Lightning automates all of the following ([each is also configurable](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)):
294
295
296 - [Running grid search on a cluster](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_data_parallel.html)
297 - [Fast dev run](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.utilities.debugging.html)
298 - [Logging](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.loggers.html)
299 - [Implement Your Own Distributed (DDP) training](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_ddp)
300 - [Multi-GPU & Multi-node](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_parts.html)
301 - [Training loop](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_loop.html)
302 - [Hooks](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.hooks.html)
303 - [Configure optimizers](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers)
304 - [Validations](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.evaluation_loop.html)
305 - [Model saving & Restoring training session](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_io.html)
306
307
308 ## Examples
309 - [GAN](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/domain_templates/gan.py)
310 - [MNIST](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/basic_examples)
311 - [Other projects using Lightning](https://github.com/PytorchLightning/pytorch-lightning/network/dependents?package_id=UGFja2FnZS0zNzE3NDU4OTM%3D)
312 - [Multi-node](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/multi_node_examples)
313
314 ## Tutorials
315 - [Basic Lightning use](https://towardsdatascience.com/supercharge-your-ai-research-with-pytorch-lightning-337948a99eec)
316 - [9 key speed features in Pytorch-Lightning](https://towardsdatascience.com/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565)
317 - [SLURM, multi-node training with Lightning](https://towardsdatascience.com/trivial-multi-node-training-with-pytorch-lightning-ff75dfb809bd)
318
319 ---
320
321 ## Asking for help
322 Welcome to the Lightning community!
323
324 If you have any questions, feel free to:
325 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
326 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
327 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
328
329 If no one replies to you quickly enough, feel free to post the stackoverflow link to our Gitter chat!
330
331 To chat with the rest of us visit our [gitter channel](https://gitter.im/PyTorch-Lightning/community)!
332
333 ---
334 ## FAQ
335 **How do I use Lightning for rapid research?**
336 [Here's a walk-through](https://pytorch-lightning.rtfd.io/en/latest/)
337
338 **Why was Lightning created?**
339 Lightning has 3 goals in mind:
340 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
341 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
342 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning.
343
344 **How does Lightning compare with Ignite and fast.ai?**
345 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
346
347 **Is this another library I have to learn?**
348 Nope! We use pure PyTorch everywhere and don't add unnecessary abstractions!
349
350 **Are there plans to support Python 2?**
351 Nope.
352
353 **Are there plans to support virtualenv?**
354 Nope. Please use anaconda or miniconda.
355
356 **Which PyTorch versions do you support?**
357 - **PyTorch 1.1.0**
358 ```bash
359 # install pytorch 1.1.0 using the official instructions
360
361 # install test-tube 0.6.7.6 which supports 1.1.0
362 pip install test-tube==0.6.7.6
363
364 # install latest Lightning version without upgrading deps
365 pip install -U --no-deps pytorch-lightning
366 ```
367 - **PyTorch 1.2.0, 1.3.0**
368 Install via pip as normal
369
370 ## Custom installation
371
372 ### Bleeding edge
373
374 If you can't wait for the next release, install the most up to date code with:
375 * using GIT (locally clone whole repo with full history)
376 ```bash
377 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
378 ```
379 * using instant zip (last state of the repo without git history)
380 ```bash
381 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade
382 ```
383
384 ### Any release installation
385
386 You can also install any past release `0.X.Y` from this repository:
387 ```bash
388 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade
389 ```
390
391 ### Lightning team
392
393 #### Leads
394 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder)
395 - Jirka Borovec [(Borda)](https://github.com/Borda)
396 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder)
397 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder)
398
399 #### Core Maintainers
400
401 - Nick Eggert [(neggert)](https://github.com/neggert)
402 - Jeff Ling [(jeffling)](https://github.com/jeffling)
403 - Tullie Murrell [(tullie)](https://github.com/tullie)
404
405 ## Bibtex
406 If you want to cite the framework feel free to use this (but only if you loved it 😊):
407 ```
408 @misc{Falcon2019,
409 author = {Falcon, W.A. et al.},
410 title = {PyTorch Lightning},
411 year = {2019},
412 publisher = {GitHub},
413 journal = {GitHub repository},
414 howpublished = {\url{https://github.com/PytorchLightning/pytorch-lightning}}
415 }
416 ```
417
[end of README.md]
[start of pytorch_lightning/trainer/distrib_parts.py]
1 """
2 Lightning makes multi-gpu training and 16 bit training trivial.
3
4 .. note:: None of the flags below require changing anything about your lightningModel definition.
5
6 Choosing a backend
7 ==================
8
9 Lightning supports two backends. DataParallel and DistributedDataParallel.
10 Both can be used for single-node multi-GPU training.
11 For multi-node training you must use DistributedDataParallel.
12
13 DataParallel (dp)
14 -----------------
15
16 Splits a batch across multiple GPUs on the same node. Cannot be used for multi-node training.
17
18 DistributedDataParallel (ddp)
19 -----------------------------
20
21 Trains a copy of the model on each GPU and only syncs gradients. If used with DistributedSampler, each GPU trains
22 on a subset of the full dataset.
23
24 DistributedDataParallel-2 (ddp2)
25 --------------------------------
26
27 Works like DDP, except each node trains a single copy of the model using ALL GPUs on that node.
28 Very useful when dealing with negative samples, etc...
29
30 You can toggle between each mode by setting this flag.
31
32 .. code-block:: python
33
34 # DEFAULT (when using single GPU or no GPUs)
35 trainer = Trainer(distributed_backend=None)
36
37 # Change to DataParallel (gpus > 1)
38 trainer = Trainer(distributed_backend='dp')
39
40 # change to distributed data parallel (gpus > 1)
41 trainer = Trainer(distributed_backend='ddp')
42
43 # change to distributed data parallel (gpus > 1)
44 trainer = Trainer(distributed_backend='ddp2')
45
46 If you request multiple nodes, the back-end will auto-switch to ddp.
47 We recommend you use DistributedDataParallel even for single-node multi-GPU training.
48 It is MUCH faster than DP but *may* have configuration issues depending on your cluster.
49
50 For a deeper understanding of what lightning is doing, feel free to read this
51 `guide <https://medium.com/@_willfalcon/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565>`_.
52
53 Distributed and 16-bit precision
54 --------------------------------
55
56 Due to an issue with apex and DistributedDataParallel (PyTorch and NVIDIA issue), Lightning does
57 not allow 16-bit and DP training. We tried to get this to work, but it's an issue on their end.
58
59 Below are the possible configurations we support.
60
61 +-------+---------+----+-----+---------+------------------------------------------------------------+
62 | 1 GPU | 1+ GPUs | DP | DDP | 16-bit | command |
63 +=======+=========+====+=====+=========+============================================================+
64 | Y | | | | | `Trainer(gpus=1)` |
65 +-------+---------+----+-----+---------+------------------------------------------------------------+
66 | Y | | | | Y | `Trainer(gpus=1, use_amp=True)` |
67 +-------+---------+----+-----+---------+------------------------------------------------------------+
68 | | Y | Y | | | `Trainer(gpus=k, distributed_backend='dp')` |
69 +-------+---------+----+-----+---------+------------------------------------------------------------+
70 | | Y | | Y | | `Trainer(gpus=k, distributed_backend='ddp')` |
71 +-------+---------+----+-----+---------+------------------------------------------------------------+
72 | | Y | | Y | Y | `Trainer(gpus=k, distributed_backend='ddp', use_amp=True)` |
73 +-------+---------+----+-----+---------+------------------------------------------------------------+
74
75 You also have the option of specifying which GPUs to use by passing a list:
76
77 .. code-block:: python
78
79 # DEFAULT (int) specifies how many GPUs to use.
80 Trainer(gpus=k)
81
82 # Above is equivalent to
83 Trainer(gpus=list(range(k)))
84
85 # You specify which GPUs (don't use if running on cluster)
86 Trainer(gpus=[0, 1])
87
88 # can also be a string
89 Trainer(gpus='0, 1')
90
91 # can also be -1 or '-1', this uses all available GPUs
92     # this is equivalent to list(range(torch.cuda.device_count()))
93 Trainer(gpus=-1)
94
95
96 CUDA flags
97 ----------
98
99 CUDA flags make certain GPUs visible to your script.
100 Lightning sets these for you automatically, there's NO NEED to do this yourself.
101
102 .. code-block:: python
103
104 # lightning will set according to what you give the trainer
105 os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
106 os.environ["CUDA_VISIBLE_DEVICES"] = "0"
107
108
109 However, when using a cluster, Lightning will NOT set these flags (and you should not either).
110 SLURM will set these for you.
111
112 16-bit mixed precision
113 ----------------------
114
115 16 bit precision can cut your memory footprint by half. If using volta architecture GPUs
116 it can give a dramatic training speed-up as well.
117 First, install apex (if the install fails, look `here <https://github.com/NVIDIA/apex>`_)::
118
119 $ git clone https://github.com/NVIDIA/apex
120 $ cd apex
121
122 # ------------------------
123 # OPTIONAL: on your cluster you might need to load cuda 10 or 9
124 # depending on how you installed PyTorch
125
126 # see available modules
127 module avail
128
129 # load correct cuda before install
130 module load cuda-10.0
131 # ------------------------
132
133     # make sure you've loaded a gcc version > 4.0 and < 7.0
134 module load gcc-6.1.0
135
136 $ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
137
138
139 then set the ``use_amp`` flag to True::
140
141 # DEFAULT
142 trainer = Trainer(amp_level='O2', use_amp=False)
143
144
145 Single-gpu
146 ----------
147
148 Make sure you're on a GPU machine::
149
150 # DEFAULT
151 trainer = Trainer(gpus=1)
152
153 Multi-gpu
154 ---------
155
156 Make sure you're on a GPU machine. You can set as many GPUs as you want.
157 In this setting, the model will run on all 8 GPUs at once using DataParallel under the hood.
158
159 .. code-block:: python
160
161 # to use DataParallel
162 trainer = Trainer(gpus=8, distributed_backend='dp')
163
164 # RECOMMENDED use DistributedDataParallel
165 trainer = Trainer(gpus=8, distributed_backend='ddp')
166
167 Custom device selection
168 -----------------------
169
170 The number of GPUs can also be selected with a list of indices or a string containing
171 a comma separated list of GPU ids.
172 The table below lists examples of possible input formats and how they are interpreted by Lightning.
173 Note in particular the difference between `gpus=0`, `gpus=[0]` and `gpus="0"`.
174
175 +---------------+-----------+---------------------+---------------------------------+
176 | `gpus` | Type | Parsed | Meaning |
177 +===============+===========+=====================+=================================+
178 | None | NoneType | None | CPU |
179 +---------------+-----------+---------------------+---------------------------------+
180 | 0 | int | None | CPU |
181 +---------------+-----------+---------------------+---------------------------------+
182 | 3 | int | [0, 1, 2] | first 3 GPUs |
183 +---------------+-----------+---------------------+---------------------------------+
184 | -1 | int | [0, 1, 2, ...] | all available GPUs |
185 +---------------+-----------+---------------------+---------------------------------+
186 | [0] | list | [0] | GPU 0 |
187 +---------------+-----------+---------------------+---------------------------------+
188 | [1, 3] | list | [1, 3] | GPUs 1 and 3 |
189 +---------------+-----------+---------------------+---------------------------------+
190 | "0" | str | [0] | GPU 0 |
191 +---------------+-----------+---------------------+---------------------------------+
192 | "3" | str | [3] | GPU 3 |
193 +---------------+-----------+---------------------+---------------------------------+
194 | "1, 3" | str | [1, 3] | GPUs 1 and 3 |
195 +---------------+-----------+---------------------+---------------------------------+
196 | "-1" | str | [0, 1, 2, ...] | all available GPUs |
197 +---------------+-----------+---------------------+---------------------------------+
198
199
200 Multi-node
201 ----------
202
203 Multi-node training is easily done by specifying these flags.
204
205 .. code-block:: python
206
207 # train on 12*8 GPUs
208 trainer = Trainer(gpus=8, num_nodes=12, distributed_backend='ddp')
209
210
211 You must configure your job submission script correctly for the trainer to work.
212 Here is an example script for the above trainer configuration.
213
214 .. code-block:: bash
215
216 #!/bin/bash -l
217
218 # SLURM SUBMIT SCRIPT
219 #SBATCH --nodes=12
220 #SBATCH --gres=gpu:8
221 #SBATCH --ntasks-per-node=8
222 #SBATCH --mem=0
223 #SBATCH --time=0-02:00:00
224
225 # activate conda env
226 conda activate my_env
227
228 # -------------------------
229 # OPTIONAL
230 # -------------------------
231 # debugging flags (optional)
232 # export NCCL_DEBUG=INFO
233 # export PYTHONFAULTHANDLER=1
234
235 # PyTorch comes with prebuilt NCCL support... but if you have issues with it
236 # you might need to load the latest version from your modules
237 # module load NCCL/2.4.7-1-cuda.10.0
238
239 # on your cluster you might need these:
240 # set the network interface
241 # export NCCL_SOCKET_IFNAME=^docker0,lo
242 # -------------------------
243
244 # pick a random master port between 12000 and 31999
245 export MASTER_PORT=$((12000 + RANDOM % 20000))
246
247 # run script from above
248 python my_main_file.py
249
250 .. note:: When running in DDP mode, any errors in your code will show up as an NCCL issue.
251 Set the `NCCL_DEBUG=INFO` flag to see the ACTUAL error.
252
253 Finally, make sure to add a distributed sampler to your dataloader. The distributed sampler assigns
254 each GPU a distinct portion of your dataset (world_size = gpus_per_node * num_nodes).
255
256 .. code-block:: python
257
258 # ie: this:
259 dataset = myDataset()
260 dataloader = DataLoader(dataset)
261
262 # becomes:
263 dataset = myDataset()
264 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
265 dataloader = DataLoader(dataset, sampler=dist_sampler)
266
267
268 Auto-slurm-job-submission
269 -------------------------
270
271 Instead of manually building SLURM scripts, you can use the
272 `SlurmCluster object <https://williamfalcon.github.io/test-tube/hpc/SlurmCluster>`_
273 to do this for you. The SlurmCluster can also run a grid search if you pass
274 in a `HyperOptArgumentParser
275 <https://williamfalcon.github.io/test-tube/hyperparameter_optimization/HyperOptArgumentParser>`_.
276
277 Here is an example where you run a grid search of 9 combinations of hyperparams.
278 The full examples are `here
279 <https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/new_project_templates/multi_node_examples>`_.
280
281 .. code-block:: python
282
283 # grid search 3 values of learning rate and 3 values of number of layers for your net
284 # this generates 9 experiments (lr=1e-3, layers=16), (lr=1e-3, layers=32),
285 # (lr=1e-3, layers=64), ... (lr=1e-1, layers=64)
286 parser = HyperOptArgumentParser(strategy='grid_search', add_help=False)
287 parser.opt_list('--learning_rate', default=0.001, type=float,
288 options=[1e-3, 1e-2, 1e-1], tunable=True)
289 parser.opt_list('--layers', default=1, type=int, options=[16, 32, 64], tunable=True)
290 hyperparams = parser.parse_args()
291
292 # Slurm cluster submits 9 jobs, each with a set of hyperparams
293 cluster = SlurmCluster(
294 hyperparam_optimizer=hyperparams,
295 log_path='/some/path/to/save',
296 )
297
298 # OPTIONAL FLAGS WHICH MAY BE CLUSTER DEPENDENT
299 # which interface your nodes use for communication
300 cluster.add_command('export NCCL_SOCKET_IFNAME=^docker0,lo')
301
302 # see output of the NCCL connection process
303 # NCCL is how the nodes talk to each other
304 cluster.add_command('export NCCL_DEBUG=INFO')
305
306 # setting a master port here is a good idea.
307 cluster.add_command('export MASTER_PORT=%r' % PORT)
308
309 # ************** DON'T FORGET THIS ***************
310 # MUST load the latest NCCL version
311 cluster.load_modules(['NCCL/2.4.7-1-cuda.10.0'])
312
313 # configure cluster
314 cluster.per_experiment_nb_nodes = 12
315 cluster.per_experiment_nb_gpus = 8
316
317 cluster.add_slurm_cmd(cmd='ntasks-per-node', value=8, comment='1 task per gpu')
318
319 # submit a script with 9 combinations of hyper params
320 # (lr=1e-3, layers=16), (lr=1e-3, layers=32), (lr=1e-3, layers=64), ... (lr=1e-1, layers=64)
321 cluster.optimize_parallel_cluster_gpu(
322 main,
323 nb_trials=9, # how many permutations of the grid search to run
324 job_name='name_for_squeue'
325 )
326
327
328 Alternatively, you can generate the submission scripts yourself with a bash script or another library.
329
330 Self-balancing architecture
331 ---------------------------
332
333 Here, Lightning distributes parts of your module across the available GPUs to optimize for speed and memory.
334
335 """
336
337 from abc import ABC, abstractmethod
338 import logging as log
339 import os
340
341 import torch
342
343 from pytorch_lightning.overrides.data_parallel import (
344 LightningDistributedDataParallel,
345 LightningDataParallel,
346 )
347 from pytorch_lightning.utilities.debugging import MisconfigurationException
348
349 try:
350 from apex import amp
351
352 APEX_AVAILABLE = True
353 except ImportError:
354 APEX_AVAILABLE = False
355
356 try:
357 import torch_xla.core.xla_model as xm
358 XLA_AVAILABLE = True
359
360 except ImportError:
361 XLA_AVAILABLE = False
362
363
364 class TrainerDPMixin(ABC):
365
366 def __init__(self):
367 # this is just a summary of the variables used in this abstract class;
368 # the proper values/initialisation should be done in the child class
369 self.on_gpu = None
370 self.use_dp = None
371 self.use_ddp2 = None
372 self.use_ddp = None
373 self.use_amp = None
374 self.testing = None
375 self.single_gpu = None
376 self.root_gpu = None
377 self.amp_level = None
378 self.precision = None
379 self.current_tpu_idx = None
380 self.proc_rank = None
381 self.tpu_local_core_rank = None
382 self.tpu_global_core_rank = None
383 self.use_tpu = None
384
385 @abstractmethod
386 def run_pretrain_routine(self, model):
387 # this is just an empty shell for code implemented in the child class
388 pass
389
390 @abstractmethod
391 def init_optimizers(self, optimizers):
392 # this is just an empty shell for code implemented in the child class
393 pass
394
395 def copy_trainer_model_properties(self, model):
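        # propagate runtime flags from the Trainer to both the (possibly wrapped)
        # model and the underlying LightningModule, so user code sees consistent state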
396 if isinstance(model, LightningDataParallel):
397 ref_model = model.module
398 elif isinstance(model, LightningDistributedDataParallel):
399 ref_model = model.module
400 else:
401 ref_model = model
402
403 for m in [model, ref_model]:
404 m.trainer = self
405 m.on_gpu = self.on_gpu
406 m.use_dp = self.use_dp
407 m.use_ddp2 = self.use_ddp2
408 m.use_ddp = self.use_ddp
409 m.use_amp = self.use_amp
410 m.testing = self.testing
411 m.single_gpu = self.single_gpu
412 m.use_tpu = self.use_tpu
413 m.tpu_local_core_rank = self.tpu_local_core_rank
414 m.tpu_global_core_rank = self.tpu_global_core_rank
415
416 def transfer_batch_to_tpu(self, batch):
417 return self.__transfer_data_to_device(batch, device='tpu')
418
419 def transfer_batch_to_gpu(self, batch, gpu_id):
420 return self.__transfer_data_to_device(batch, device='gpu', gpu_id=gpu_id)
421
422 def __transfer_data_to_device(self, batch, device, gpu_id=None):
423 if device == 'tpu' and XLA_AVAILABLE:
424 # base case: object can be directly moved using `to`
425 if callable(getattr(batch, 'to', None)):
426 return batch.to(xm.xla_device())
427
428 if device == 'gpu':
429 # base case: object can be directly moved using `cuda` or `to`
430 if callable(getattr(batch, 'cuda', None)):
431 return batch.cuda(gpu_id)
432
433 if callable(getattr(batch, 'to', None)):
434 return batch.to(torch.device('cuda', gpu_id))
435
436 # when list
437 if isinstance(batch, list):
438 for i, x in enumerate(batch):
439 batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
440 return batch
441
442 # when tuple
443 if isinstance(batch, tuple):
444 batch = list(batch)
445 for i, x in enumerate(batch):
446 batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
447 return tuple(batch)
448
449 # when dict
450 if isinstance(batch, dict):
451 for k, v in batch.items():
452 batch[k] = self.__transfer_data_to_device(v, device, gpu_id)
453
454 return batch
455
456 # nothing matches, return the value as is without transform
457 return batch
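    # Illustrative behaviour (a sketch): a batch such as
    #   {'img': tensor, 'meta': (tensor, [tensor, tensor])}
    # is traversed recursively; every element exposing `to`/`cuda` is moved to the
    # requested device, while lists, tuples and dicts keep their structure.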
458
459 def single_gpu_train(self, model):
460 model.cuda(self.root_gpu)
461
462 # CHOOSE OPTIMIZER
463 # allow for lr schedulers as well
464 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
465
466 if self.use_amp:
467 # An example
468 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
469 self.optimizers = optimizers
470
471 self.run_pretrain_routine(model)
472
473 def tpu_train(self, tpu_core_idx, model):
474 # put model on tpu
475 model.to(xm.xla_device())
476
477 # get the appropriate tpu ranks
478 self.tpu_local_core_rank = xm.get_local_ordinal()
479 self.tpu_global_core_rank = xm.get_ordinal()
480
481 # avoid duplicating progress bar
482 self.show_progress_bar = self.show_progress_bar and self.tpu_global_core_rank == 0
483
484 # track current tpu
485 self.current_tpu_idx = tpu_core_idx
486 self.proc_rank = self.tpu_local_core_rank
487
488 # CHOOSE OPTIMIZER
489 # allow for lr schedulers as well
490 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
491
492 # init 16 bit for TPU
493 if self.precision == 16:
494 os.environ['XLA_USE_BF16'] = '1' # os.environ values must be strings
495
496 m = f'INIT TPU local core: {self.tpu_local_core_rank}, ' \
497 f'global rank: {self.tpu_global_core_rank}'
498 log.info(m)
499 self.run_pretrain_routine(model)
500
501 def dp_train(self, model):
502
503 # CHOOSE OPTIMIZER
504 # allow for lr schedulers as well
505 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
506
507 model.cuda(self.root_gpu)
508
509 # check for this bug (amp + dp with an opt level other than O1 doesn't work)
510 # https://github.com/NVIDIA/apex/issues/227
511 if self.use_dp and self.use_amp:
512 if self.amp_level == 'O2':
513 m = f"""
514 Amp level {self.amp_level} with DataParallel is not supported.
515 See this note from NVIDIA for more info: https://github.com/NVIDIA/apex/issues/227.
516 We recommend you switch to ddp if you want to use amp
517 """
518 raise MisconfigurationException(m)
519 else:
520 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
521
522 # create list of device ids
523 device_ids = self.data_parallel_device_ids
524 if isinstance(device_ids, int):
525 device_ids = list(range(device_ids))
526
527 model = LightningDataParallel(model, device_ids=device_ids)
528
529 self.run_pretrain_routine(model)
530
531
532 def normalize_parse_gpu_string_input(s):
533 if isinstance(s, str):
534 if s == '-1':
535 return -1
536 else:
537 return [int(x.strip()) for x in s.split(',')]
538 else:
539 return s
540
541
542 def get_all_available_gpus():
543 """
544 :return: a list of all available gpus
545 """
546 return list(range(torch.cuda.device_count()))
547
548
549 def check_gpus_data_type(gpus):
550 """
551 :param gpus: gpus parameter as passed to the Trainer
552 Function checks that it is one of: None, Int, String or List
553 Throws otherwise
554 :return: None (this function only validates the type)
555 """
556
557 if gpus is not None and type(gpus) not in (int, str, list):
558 raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
559
560
561 def normalize_parse_gpu_input_to_list(gpus):
562 assert gpus is not None
563 if isinstance(gpus, list):
564 return gpus
565
566 # must be an int
567 if not gpus: # gpus==0
568 return None
569 if gpus == -1:
570 return get_all_available_gpus()
571
572 return list(range(gpus))
573
574
575 def sanitize_gpu_ids(gpus):
576 """
577 :param gpus: list of ints corresponding to GPU indices
578 Checks that each of the GPUs in the list is actually available.
579 Throws if any of the GPUs is not available.
580 :return: unmodified gpus variable
581 """
582 all_available_gpus = get_all_available_gpus()
583 for gpu in gpus:
584 if gpu not in all_available_gpus:
585 message = f"""
586 You requested GPUs: {gpus}
587 But your machine only has: {all_available_gpus}
588 """
589 raise MisconfigurationException(message)
590 return gpus
591
592
593 def parse_gpu_ids(gpus):
594 """
595 :param gpus: Int, string or list
596 An int -1 or a string '-1' indicates that all available GPUs should be used.
597 A list of ints or a string containing a list of comma-separated integers
598 indicates specific GPUs to use.
599 An int 0 means that no GPUs should be used.
600 Any int N > 0 indicates that GPUs [0..N) should be used.
601 :return: List of gpus to be used
602
603 If no GPUs are available but the value of the gpus variable requests GPUs,
604 then a MisconfigurationException is raised.
605 """
606
607 # Check that gpus param is None, Int, String or List
608 check_gpus_data_type(gpus)
609
610 # Handle the case when no gpus are requested
611 if gpus is None or isinstance(gpus, int) and gpus == 0:
612 return None
613
614 # We know user requested GPUs therefore if some of the
615 # requested GPUs are not available an exception is thrown.
616
617 gpus = normalize_parse_gpu_string_input(gpus)
618 gpus = normalize_parse_gpu_input_to_list(gpus)
619 gpus = sanitize_gpu_ids(gpus)
620
621 if not gpus:
622 raise MisconfigurationException("GPUs requested but none are available.")
623 return gpus
624
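# Illustrative behaviour of parse_gpu_ids (a sketch; the exact result depends on
# how many devices torch.cuda.device_count() reports on the machine):
#   parse_gpu_ids(None)    -> None              # run on CPU
#   parse_gpu_ids(0)       -> None              # run on CPU
#   parse_gpu_ids(3)       -> [0, 1, 2]         # first three GPUs
#   parse_gpu_ids(-1)      -> [0, 1, ..., n-1]  # all available GPUs
#   parse_gpu_ids("1, 3")  -> [1, 3]            # specific GPU ids
# A MisconfigurationException is raised if any requested GPU is not available.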
625
626 def determine_root_gpu_device(gpus):
627 """
628 :param gpus: non empty list of ints representing which gpus to use
629 :return: designated root GPU device
630 """
631 if gpus is None:
632 return None
633
634 assert isinstance(gpus, list), "gpus should be a list"
635 assert len(gpus) > 0, "gpus should be a non empty list"
636
637 # set root gpu
638 root_gpu = gpus[0]
639
640 return root_gpu
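    # e.g. determine_root_gpu_device([1, 3]) -> 1: the first id in the list is
    # treated as the root GPU.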
641
[end of pytorch_lightning/trainer/distrib_parts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Lightning-AI/lightning
|
d856989120b078581f3f694fd7a1c036703f67a9
|
logger is NoneType hence doesn't have any experiment or other functionality in a lightning module
## 🐛 Bug
When trying to use the logging abilities of Lightning, I hit a wall: the default and TensorBoard loggers both seem to stay uninitialized when calling ```trainer.fit(model)```, resulting in crashes every time I try to log something.
### To Reproduce
Create a LightningModule like this:
```
class SimpleRegressor(pl.LightningModule):
...
```
Use the logger anywhere to get this kind of stacktrace:
```
d:\Documents\projects\MetaWatch\MetaWatch\notebooks\audio-video-interest\simple_regressor.py in configure_optimizers(self)
105 #see https://pytorch-lightning.readthedocs.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers
106 # REQUIRED
--> 107 self.logger.experiment.add_hparams({'hidden_layer_size':self.hidden_layer_size,
108 'linear_layer_size':self.linear_layer_size,
109 'lstm_layers':self.lstm_layers})
AttributeError: 'NoneType' object has no attribute 'experiment'
```
#### Code sample
```
import pytorch_lightning as pl
class SimpleRegressor(pl.LightningModule):
def __init__(self, cuda=False):
super(SimpleRegressor, self).__init__()
self.logger.experiment.add_hparams({'hidden_layer_size':1})
```
### Expected behavior
To log as described in the documentation.
### Environment
```
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 970
Nvidia driver version: 441.12
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.0
[pip3] tinynumpy==1.2.1
[pip3] torch==1.4.0
[pip3] torchvision==0.4.1
[conda] Could not collect
```
### Additional context
<!-- Add any other context about the problem here. -->
|
Hey, thanks for your contribution! Great first issue!
Thanks for the issue! The intended way to achieve this is through a hook. When `__init__` is called on the `LightningModule`, the loggers won't have been created yet. I don't think there's any way to change that, so we should update the docs to use a hook instead of `__init__`.
@PyTorchLightning/core-contributors any other thoughts on this?
It also doesn't work in other functions; I tried it in the training step and in `configure_optimizers` too.
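For illustration, a minimal sketch of the hook-based approach described above (the hook and the logged value are placeholders; any method that runs after `fit()` has started will do):

```python
import pytorch_lightning as pl


class SimpleRegressor(pl.LightningModule):
    def on_epoch_start(self):
        # the Trainer binds self.logger before training begins,
        # so hooks like this one can use it safely
        self.logger.log_hyperparams({'hidden_layer_size': 1})
```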
|
2020-02-27T09:07:23Z
|
<patch>
diff --git a/pytorch_lightning/loggers/__init__.py b/pytorch_lightning/loggers/__init__.py
--- a/pytorch_lightning/loggers/__init__.py
+++ b/pytorch_lightning/loggers/__init__.py
@@ -1,6 +1,7 @@
"""
Lightning supports most popular logging frameworks (Tensorboard, comet, weights and biases, etc...).
-To use a logger, simply pass it into the trainer.
+To use a logger, simply pass it into the trainer. To use multiple loggers, simply pass in a ``list``
+or ``tuple`` of loggers.
.. code-block:: python
@@ -14,14 +15,19 @@
comet_logger = loggers.CometLogger()
trainer = Trainer(logger=comet_logger)
-.. note:: All loggers log by default to `os.getcwd()`. To change the path without creating a logger set
- Trainer(default_save_path='/your/path/to/save/checkpoints')
+ # or pass a list
+ tb_logger = loggers.TensorBoardLogger()
+ comet_logger = loggers.CometLogger()
+ trainer = Trainer(logger=[tb_logger, comet_logger])
+
+.. note:: All loggers log by default to ``os.getcwd()``. To change the path without creating a logger set
+ ``Trainer(default_save_path='/your/path/to/save/checkpoints')``
Custom logger
-------------
You can implement your own logger by writing a class that inherits from
-`LightningLoggerBase`. Use the `rank_zero_only` decorator to make sure that
+``LightningLoggerBase``. Use the ``rank_zero_only`` decorator to make sure that
only the first process in DDP training logs data.
.. code-block:: python
@@ -52,13 +58,13 @@ def finalize(self, status):
# finishes goes here
-If you write a logger than may be useful to others, please send
+If you write a logger that may be useful to others, please send
a pull request to add it to Lighting!
Using loggers
-------------
-Call the logger anywhere from your LightningModule by doing:
+Call the logger anywhere except ``__init__`` in your LightningModule by doing:
.. code-block:: python
@@ -69,6 +75,8 @@ def train_step(...):
def any_lightning_module_function_or_hook(...):
self.logger.experiment.add_histogram(...)
+Read more in the `Experiment Logging use case <./experiment_logging.html>`_.
+
Supported Loggers
-----------------
"""
@@ -77,7 +85,7 @@ def any_lightning_module_function_or_hook(...):
from .base import LightningLoggerBase, LoggerCollection, rank_zero_only
from .tensorboard import TensorBoardLogger
-__all__ = ['TensorBoardLogger', 'LoggerCollection']
+__all__ = ['TensorBoardLogger']
try:
# needed to prevent ImportError and duplicated logs.
diff --git a/pytorch_lightning/loggers/base.py b/pytorch_lightning/loggers/base.py
--- a/pytorch_lightning/loggers/base.py
+++ b/pytorch_lightning/loggers/base.py
@@ -100,6 +100,9 @@ def __init__(self, logger_iterable: Iterable[LightningLoggerBase]):
super().__init__()
self._logger_iterable = logger_iterable
+ def __getitem__(self, index: int) -> LightningLoggerBase:
+ return [logger for logger in self._logger_iterable][index]
+
@property
def experiment(self) -> List[Any]:
return [logger.experiment() for logger in self._logger_iterable]
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -937,6 +937,9 @@ def fit(
# feed to .fit()
"""
+ # bind logger
+ model.logger = self.logger
+
# Fit begin callbacks
self.on_fit_start()
@@ -1065,10 +1068,8 @@ def run_pretrain_routine(self, model: LightningModule):
# set local properties on the model
self.copy_trainer_model_properties(ref_model)
- # link up experiment object
+ # log hyper-parameters
if self.logger is not None:
- ref_model.logger = self.logger
-
# save exp to get started
if hasattr(ref_model, "hparams"):
self.logger.log_hyperparams(ref_model.hparams)
</patch>
|
[]
|
[]
| |||
ipython__ipython-356
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Usability improvements to Qt console
Currently the Qt console is tricky to use with multiline cells recalled from history because it's very easy to go 'off the edge'. If the cursor reaches the top or bottom line and you do up/down once more, you jump to the next cell. When recalling history that's the desired behavior, but when you've already edited a cell, it's jarring to get bumped out of your editing context due to a single arrow movement.
I don't know how easy/possible it would be to implement, but my idea is the following: the console should detect when a cell has been made 'dirty' by editing (any typing other than arrow movements, pasting, etc), then the behavior should change. At that point, the cell boundaries should become 'hard', preventing the cursor from exiting unless the person clears the cell. Basically, once the cell is being edited, it should feel like a little text editor that doesn't lose its content without some drastic action.
</issue>
<code>
[start of README.rst]
1 ==============
2 IPython README
3 ==============
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation can be found in the ``docs/dist``
9 subdirectory in ``.html`` and ``.pdf`` formats, also available online at our
10 `docs repo <http://ipython.github.com/ipython-doc>`_. The ``docs/source`` directory
11 contains the plaintext version of these manuals.
12
13
14 Dependencies and supported Python versions
15 ==========================================
16
17 For full details, see the installation section of the manual. The basic parts
18 of IPython only need the Python standard library, but much of its more advanced
19 functionality requires extra packages.
20
21 Officially, IPython requires Python version 2.6 or 2.7. An experimental port
22 of IPython to Python 3.x is being developed at
23 http://github.com/ipython/ipython-py3k.
24
25
26 Instant running
27 ===============
28
29 You can run IPython from this directory without even installing it system-wide
30 by typing at the terminal::
31
32 $ python ipython.py
33
34
[end of README.rst]
[start of IPython/core/usage.py]
1 # -*- coding: utf-8 -*-
2 """Usage information for the main IPython applications.
3 """
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2008-2010 The IPython Development Team
6 # Copyright (C) 2001-2007 Fernando Perez. <[email protected]>
7 #
8 # Distributed under the terms of the BSD License. The full license is in
9 # the file COPYING, distributed as part of this software.
10 #-----------------------------------------------------------------------------
11
12 import sys
13 from IPython.core import release
14
15 cl_usage = """\
16 ipython [options] [files]
17
18 IPython: an enhanced interactive Python shell.
19
20 A Python shell with automatic history (input and output), dynamic object
21 introspection, easier configuration, command completion, access to the
22 system shell and more. IPython can also be embedded in running programs.
23
24 If invoked with no options, it executes all the files listed in sequence
25 and exits; use -i to enter interactive mode after running the files. Files
26 ending in .py will be treated as normal Python, but files ending in .ipy
27 can contain special IPython syntax (magic commands, shell expansions, etc.)
28
29 Please note that some of the configuration options are not available at the
30 command line, simply because they are not practical here. Look into your
31 ipython_config.py configuration file for details on those.
32
33 This file is typically installed in the IPYTHON_DIR directory. For Linux
34 users, this will be $HOME/.config/ipython, and for other users it will be
35 $HOME/.ipython. For Windows users, $HOME resolves to C:\\Documents and
36 Settings\\YourUserName in most instances.
37
38 In IPython's documentation, we will refer to this directory as IPYTHON_DIR,
39 you can change its default location by setting any path you want in this
40 environment variable.
41
42 For more information, see the manual available in HTML and PDF in your
43 installation, or online at http://ipython.scipy.org.
44 """
45
46 interactive_usage = """
47 IPython -- An enhanced Interactive Python
48 =========================================
49
50 IPython offers a combination of convenient shell features, special commands
51 and a history mechanism for both input (command history) and output (results
52 caching, similar to Mathematica). It is intended to be a fully compatible
53 replacement for the standard Python interpreter, while offering vastly
54 improved functionality and flexibility.
55
56 At your system command line, type 'ipython -help' to see the command line
57 options available. This document only describes interactive features.
58
59 Warning: IPython relies on the existence of a global variable called __IP which
60 controls the shell itself. If you redefine __IP to anything, bizarre behavior
61 will quickly occur.
62
63 MAIN FEATURES
64
65 * Access to the standard Python help. As of Python 2.1, a help system is
66 available with access to object docstrings and the Python manuals. Simply
67 type 'help' (no quotes) to access it.
68
69 * Magic commands: type %magic for information on the magic subsystem.
70
71 * System command aliases, via the %alias command or the ipythonrc config file.
72
73 * Dynamic object information:
74
75 Typing ?word or word? prints detailed information about an object. If
76 certain strings in the object are too long (docstrings, code, etc.) they get
77 snipped in the center for brevity.
78
79 Typing ??word or word?? gives access to the full information without
80 snipping long strings. Long strings are sent to the screen through the less
81 pager if longer than the screen, printed otherwise.
82
83 The ?/?? system gives access to the full source code for any object (if
84 available), shows function prototypes and other useful information.
85
86 If you just want to see an object's docstring, type '%pdoc object' (without
87 quotes, and without % if you have automagic on).
88
89 Both %pdoc and ?/?? give you access to documentation even on things which are
90 not explicitly defined. Try for example typing {}.get? or after import os,
91 type os.path.abspath??. The magic functions %pdef, %source and %file operate
92 similarly.
93
94 * Completion in the local namespace, by typing TAB at the prompt.
95
96 At any time, hitting tab will complete any available python commands or
97 variable names, and show you a list of the possible completions if there's
98 no unambiguous one. It will also complete filenames in the current directory.
99
100 This feature requires the readline and rlcomplete modules, so it won't work
101 if your Python lacks readline support (such as under Windows).
102
103 * Search previous command history in two ways (also requires readline):
104
105 - Start typing, and then use Ctrl-p (previous,up) and Ctrl-n (next,down) to
106 search through only the history items that match what you've typed so
107 far. If you use Ctrl-p/Ctrl-n at a blank prompt, they just behave like
108 normal arrow keys.
109
110 - Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
111 your history for lines that match what you've typed so far, completing as
112 much as it can.
113
114 * Persistent command history across sessions (readline required).
115
116 * Logging of input with the ability to save and restore a working session.
117
118 * System escape with !. Typing !ls will run 'ls' in the current directory.
119
120 * The reload command does a 'deep' reload of a module: changes made to the
121 module since you imported will actually be available without having to exit.
122
123 * Verbose and colored exception traceback printouts. See the magic xmode and
124 xcolor functions for details (just type %magic).
125
126 * Input caching system:
127
128 IPython offers numbered prompts (In/Out) with input and output caching. All
129 input is saved and can be retrieved as variables (besides the usual arrow
130 key recall).
131
132 The following GLOBAL variables always exist (so don't overwrite them!):
133 _i: stores previous input.
134 _ii: next previous.
135 _iii: next-next previous.
136 _ih : a list of all input _ih[n] is the input from line n.
137
138 Additionally, global variables named _i<n> are dynamically created (<n>
139 being the prompt counter), such that _i<n> == _ih[<n>]
140
141 For example, what you typed at prompt 14 is available as _i14 and _ih[14].
142
143 You can create macros which contain multiple input lines from this history,
144 for later re-execution, with the %macro function.
145
146 The history function %hist allows you to see any part of your input history
147 by printing a range of the _i variables. Note that inputs which contain
148 magic functions (%) appear in the history with a prepended comment. This is
149 because they aren't really valid Python code, so you can't exec them.
150
151 * Output caching system:
152
153 For output that is returned from actions, a system similar to the input
154 cache exists but using _ instead of _i. Only actions that produce a result
155 (NOT assignments, for example) are cached. If you are familiar with
156 Mathematica, IPython's _ variables behave exactly like Mathematica's %
157 variables.
158
159 The following GLOBAL variables always exist (so don't overwrite them!):
160 _ (one underscore): previous output.
161 __ (two underscores): next previous.
162 ___ (three underscores): next-next previous.
163
164 Global variables named _<n> are dynamically created (<n> being the prompt
165 counter), such that the result of output <n> is always available as _<n>.
166
167 Finally, a global dictionary named _oh exists with entries for all lines
168 which generated output.
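For example (an illustrative session; your prompt numbers will differ):

In [6]: 10*2
Out[6]: 20

In [7]: _ + 5 # _ holds the previous output, 20
Out[7]: 25

In [8]: _oh[6] # _oh maps line numbers to their outputs
Out[8]: 20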
169
170 * Directory history:
171
172 Your history of visited directories is kept in the global list _dh, and the
173 magic %cd command can be used to go to any entry in that list.
174
175 * Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
176
177 1. Auto-parentheses
178 Callable objects (i.e. functions, methods, etc) can be invoked like
179 this (notice the commas between the arguments):
180 >>> callable_ob arg1, arg2, arg3
181 and the input will be translated to this:
182 --> callable_ob(arg1, arg2, arg3)
183 You can force auto-parentheses by using '/' as the first character
184 of a line. For example:
185 >>> /globals # becomes 'globals()'
186 Note that the '/' MUST be the first character on the line! This
187 won't work:
188 >>> print /globals # syntax error
189
190 In most cases the automatic algorithm should work, so you should
191 rarely need to explicitly invoke /. One notable exception is if you
192 are trying to call a function with a list of tuples as arguments (the
193 parentheses will confuse IPython):
194 In [1]: zip (1,2,3),(4,5,6) # won't work
195 but this will work:
196 In [2]: /zip (1,2,3),(4,5,6)
197 ------> zip ((1,2,3),(4,5,6))
198 Out[2]= [(1, 4), (2, 5), (3, 6)]
199
200 IPython tells you that it has altered your command line by
201 displaying the new command line preceded by -->. e.g.:
202 In [18]: callable list
203 -------> callable (list)
204
205 2. Auto-Quoting
206 You can force auto-quoting of a function's arguments by using ',' as
207 the first character of a line. For example:
208 >>> ,my_function /home/me # becomes my_function("/home/me")
209
210 If you use ';' instead, the whole argument is quoted as a single
211 string (while ',' splits on whitespace):
212 >>> ,my_function a b c # becomes my_function("a","b","c")
213 >>> ;my_function a b c # becomes my_function("a b c")
214
215 Note that the ',' MUST be the first character on the line! This
216 won't work:
217 >>> x = ,my_function /home/me # syntax error
218 """
219
220 interactive_usage_min = """\
221 An enhanced console for Python.
222 Some of its features are:
223 - Readline support if the readline library is present.
224 - Tab completion in the local namespace.
225 - Logging of input, see command-line options.
226 - System shell escape via ! , eg !ls.
227 - Magic commands, starting with a % (like %ls, %pwd, %cd, etc.)
228 - Keeps track of locally defined variables via %who, %whos.
229 - Show object information with a ? eg ?x or x? (use ?? for more info).
230 """
231
232 quick_reference = r"""
233 IPython -- An enhanced Interactive Python - Quick Reference Card
234 ================================================================
235
236 obj?, obj?? : Get help, or more help for object (also works as
237 ?obj, ??obj).
238 ?foo.*abc* : List names in 'foo' containing 'abc' in them.
239 %magic : Information about IPython's 'magic' % functions.
240
241 Magic functions are prefixed by %, and typically take their arguments without
242 parentheses, quotes or even commas for convenience.
243
244 Example magic function calls:
245
246 %alias d ls -F : 'd' is now an alias for 'ls -F'
247 alias d ls -F : Works if 'alias' not a python name
248 alist = %alias : Get list of aliases to 'alist'
249 cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
250 %cd?? : See help AND source for magic %cd
251
252 System commands:
253
254 !cp a.txt b/ : System command escape, calls os.system()
255 cp a.txt b/ : after %rehashx, most system commands work without !
256 cp ${f}.txt $bar : Variable expansion in magics and system commands
257 files = !ls /usr : Capture system command output
258 files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
259
260 History:
261
262 _i, _ii, _iii : Previous, next previous, next next previous input
263 _i4, _ih[2:5] : Input history line 4, lines 2-4
264 exec _i81 : Execute input history line #81 again
265 %rep 81 : Edit input history line #81
266 _, __, ___ : previous, next previous, next next previous output
267 _dh : Directory history
268 _oh : Output history
269 %hist : Command history. '%hist -g foo' search history for 'foo'
270
271 Autocall:
272
273 f 1,2 : f(1,2)
274 /f 1,2 : f(1,2) (forced autoparen)
275 ,f 1 2 : f("1","2")
276 ;f 1 2 : f("1 2")
277
278 Remember: TAB completion works in many contexts, not just file names
279 or python names.
280
281 The following magic functions are currently available:
282
283 """
284
285 gui_reference = """\
286 ===============================
287 The graphical IPython console
288 ===============================
289
290 This console is designed to emulate the look, feel and workflow of a terminal
291 environment, while adding a number of enhancements that are simply not possible
292 in a real terminal, such as inline syntax highlighting, true multiline editing,
293 inline graphics and much more.
294
295 This quick reference document contains the basic information you'll need to
296 know to make the most efficient use of it. For the various command line
297 options available at startup, type ``--help`` at the command line.
298
299
300 Multiline editing
301 =================
302
303 The graphical console is capable of true multiline editing, but it also tries
304 to behave intuitively like a terminal when possible. If you are used to
305 IPython's old terminal behavior, you should find the transition painless, and
306 once you learn a few basic keybindings it will be a much more efficient
307 environment.
308
309 For single expressions or indented blocks, the console behaves almost like the
310 terminal IPython: single expressions are immediately evaluated, and indented
311 blocks are evaluated once a single blank line is entered::
312
313 In [1]: print "Hello IPython!" # Enter was pressed at the end of the line
314 Hello IPython!
315
316 In [2]: for i in range(10):
317 ...: print i,
318 ...:
319 0 1 2 3 4 5 6 7 8 9
320
321 If you want to enter more than one expression in a single input block
322 (something not possible in the terminal), you can use ``Control-Enter`` at the
323 end of your first line instead of ``Enter``. At that point the console goes
324 into 'cell mode' and even if your inputs are not indented, it will continue
325 accepting arbitrarily many lines until either you enter an extra blank line or
326 you hit ``Shift-Enter`` (the key binding that forces execution). When a
327 multiline cell is entered, IPython analyzes it and executes its code producing
328 an ``Out[n]`` prompt only for the last expression in it, while the rest of the
329 cell is executed as if it was a script. An example should clarify this::
330
331 In [3]: x=1 # Hit C-Enter here
332 ...: y=2 # from now on, regular Enter is sufficient
333 ...: z=3
334 ...: x**2 # This does *not* produce an Out[] value
335 ...: x+y+z # Only the last expression does
336 ...:
337 Out[3]: 6
338
339 The behavior where an extra blank line forces execution is only active if you
340 are actually typing at the keyboard each line, and is meant to make it mimic
341 the IPython terminal behavior. If you paste a long chunk of input (for example
342 a long script copied from an editor or web browser), it can contain arbitrarily
343 many intermediate blank lines and they won't cause any problems. As always,
344 you can then make it execute by appending a blank line *at the end* or hitting
345 ``Shift-Enter`` anywhere within the cell.
346
347 With the up arrow key, you can retrieve previous blocks of input that contain
348 multiple lines. You can move inside of a multiline cell like you would in any
349 text editor. When you want it executed, the simplest thing to do is to hit the
350 force execution key, ``Shift-Enter`` (though you can also navigate to the end
351 and append a blank line by using ``Enter`` twice).
352
353 If you've edited a multiline cell and accidentally navigate out of it with the
354 up or down arrow keys, IPython will clear the cell and replace it with the
355 contents of the one above or below that you navigated to. If this was an
356 accident and you want to retrieve the cell you were editing, use the Undo
357 keybinding, ``Control-z``.
358
359
360 Key bindings
361 ============
362
363 The IPython console supports most of the basic Emacs line-oriented keybindings,
364 in addition to some of its own.
365
366 The keybinding prefixes mean:
367
368 - ``C``: Control
369 - ``S``: Shift
370 - ``M``: Meta (typically the Alt key)
371
372 The keybindings themselves are:
373
374 - ``Enter``: insert new line (may cause execution, see above).
375 - ``C-Enter``: force new line, *never* causes execution.
376 - ``S-Enter``: *force* execution regardless of where cursor is, no newline added.
377 - ``C-c``: copy highlighted text to clipboard (prompts are automatically stripped).
378 - ``C-S-c``: copy highlighted text to clipboard (prompts are not stripped).
379 - ``C-v``: paste text from clipboard.
380 - ``C-z``: undo (retrieves lost text if you move out of a cell with the arrows).
381 - ``C-S-z``: redo.
382 - ``C-o``: move to 'other' area, between pager and terminal.
383 - ``C-l``: clear terminal.
384 - ``C-a``: go to beginning of line.
385 - ``C-e``: go to end of line.
386 - ``C-k``: kill from cursor to the end of the line.
387 - ``C-y``: yank (paste)
388 - ``C-p``: previous line (like up arrow)
389 - ``C-n``: next line (like down arrow)
390 - ``C-f``: forward (like right arrow)
391 - ``C-b``: back (like left arrow)
392 - ``C-d``: delete next character.
393 - ``M-<``: move to the beginning of the input region.
394 - ``M->``: move to the end of the input region.
395 - ``M-d``: delete next word.
396 - ``M-Backspace``: delete previous word.
397 - ``C-.``: force a kernel restart (a confirmation dialog appears).
398 - ``C-+``: increase font size.
399 - ``C--``: decrease font size.
400
401 The IPython pager
402 =================
403
404 IPython will show long blocks of text from many sources using a builtin pager.
405 You can control where this pager appears with the ``--paging`` command-line
406 flag:
407
408 - default: it is overlaid on top of the main terminal. You must quit the pager
409 to get back to the terminal (similar to how a pager such as ``less`` or
410 ``more`` works).
411
412 - vertical: the console is made double-tall, and the pager appears on the
413 bottom area when needed. You can view its contents while using the terminal.
414
415 - horizontal: the console is made double-wide, and the pager appears on the
416 right area when needed. You can view its contents while using the terminal.
417
418 If you use the vertical or horizontal paging modes, you can navigate between
419 terminal and pager as follows:
420
421 - Tab key: goes from pager to terminal (but not the other way around).
422 - Control-o: goes from one to another always.
423 - Mouse: click on either.
424
425 In all cases, the ``q`` or ``Escape`` keys quit the pager (when used with the
426 focus on the pager area).
427
428
429 Running subprocesses
430 ====================
431
432 The graphical IPython console uses the ``pexpect`` module to run subprocesses
433 when you type ``!command``. This has a number of advantages (true asynchronous
434 output from subprocesses as well as very robust termination of rogue
435 subprocesses with ``Control-C``), as well as some limitations. The main
436 limitation is that you can *not* interact back with the subprocess, so anything
437 that invokes a pager or expects you to type input into it will block and hang
438 (you can kill it with ``Control-C``).
439
440 We have provided as magics ``%less`` to page files (aliased to ``%more``),
441 ``%clear`` to clear the terminal, and ``%man`` on Linux/OSX. These cover the
442 most common commands you'd want to call in your subshell and that would cause
443 problems if invoked via ``!cmd``, but you need to be aware of this limitation.
444
445 Display
446 =======
447
448 The IPython console can now display objects in a variety of formats, including
449 HTML, PNG and SVG. This is accomplished using the display functions in
450 ``IPython.core.display``::
451
452 In [4]: from IPython.core.display import display, display_html
453
454 In [5]: from IPython.core.display import display_png, display_svg
455
456 Python objects can simply be passed to these functions and the appropriate
457 representations will be displayed in the console as long as the objects know
458 how to compute those representations. The easiest way of teaching objects how
459 to format themselves in various representations is to define special methods
460 such as: ``__html__``, ``__svg__`` and ``__png__``. IPython's display formatters
461 can also be given custom formatter functions for various types::
462
463 In [6]: ip = get_ipython()
464
465 In [7]: html_formatter = ip.display_formatter.formatters['text/html']
466
467 In [8]: html_formatter.for_type(Foo, foo_to_html)
468
469 For further details, see ``IPython.core.formatters``.
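
For instance, ``foo_to_html`` above could be as simple as the following sketch
(``Foo`` and ``foo_to_html`` are placeholder names, not part of IPython)::

    class Foo(object):
        def __init__(self, name):
            self.name = name

    def foo_to_html(obj):
        return '<b>Foo: %s</b>' % obj.name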
470
471 Inline matplotlib graphics
472 ==========================
473
474 The IPython console is capable of displaying matplotlib figures inline, in SVG
475 format. If started with the ``--pylab inline`` flag, then all figures are
476 rendered inline automatically. If started with ``--pylab`` or ``--pylab <your
477 backend>``, then a GUI backend will be used, but IPython's ``display()`` and
478 ``getfigs()`` functions can be used to view plots inline::
479
480 In [9]: display(*getfigs()) # display all figures inline
481
482 In[10]: display(*getfigs(1,2)) # display figures 1 and 2 inline
483 """
484
485
486 quick_guide = """\
487 ? -> Introduction and overview of IPython's features.
488 %quickref -> Quick reference.
489 help -> Python's own help system.
490 object? -> Details about 'object', use 'object??' for extra details.
491 """
492
493 gui_note = """\
494 %guiref -> A brief reference about the graphical user interface.
495 """
496
497 default_banner_parts = [
498 'Python %s\n' % (sys.version.split('\n')[0],),
499 'Type "copyright", "credits" or "license" for more information.\n\n',
500 'IPython %s -- An enhanced Interactive Python.\n' % (release.version,),
501 quick_guide
502 ]
503
504 default_gui_banner_parts = default_banner_parts + [gui_note]
505
506 default_banner = ''.join(default_banner_parts)
507
508 default_gui_banner = ''.join(default_gui_banner_parts)
509
[end of IPython/core/usage.py]
[start of IPython/zmq/zmqshell.py]
1 """A ZMQ-based subclass of InteractiveShell.
2
3 This code is meant to ease the refactoring of the base InteractiveShell into
4 something with a cleaner architecture for 2-process use, without actually
5 breaking InteractiveShell itself. So we're doing something a bit ugly, where
6 we subclass and override what we want to fix. Once this is working well, we
7 can go back to the base class and refactor the code for a cleaner inheritance
8 implementation that doesn't rely on so much monkeypatching.
9
10 But this lets us maintain a fully working IPython as we develop the new
11 machinery. This should thus be thought of as scaffolding.
12 """
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 # Stdlib
19 import inspect
20 import os
21
22 # Our own
23 from IPython.core.interactiveshell import (
24 InteractiveShell, InteractiveShellABC
25 )
26 from IPython.core import page
27 from IPython.core.autocall import ZMQExitAutocall
28 from IPython.core.displayhook import DisplayHook
29 from IPython.core.displaypub import DisplayPublisher
30 from IPython.core.macro import Macro
31 from IPython.core.payloadpage import install_payload_page
32 from IPython.utils import io
33 from IPython.utils.path import get_py_filename
34 from IPython.utils.traitlets import Instance, Type, Dict
35 from IPython.utils.warn import warn
36 from IPython.zmq.session import extract_header
37 from session import Session
38
39 #-----------------------------------------------------------------------------
40 # Globals and side-effects
41 #-----------------------------------------------------------------------------
42
43 # Install the payload version of page.
44 install_payload_page()
45
46 #-----------------------------------------------------------------------------
47 # Functions and classes
48 #-----------------------------------------------------------------------------
49
50 class ZMQDisplayHook(DisplayHook):
51 """A displayhook subclass that publishes data using ZeroMQ."""
52
53 session = Instance(Session)
54 pub_socket = Instance('zmq.Socket')
55 parent_header = Dict({})
56
57 def set_parent(self, parent):
58 """Set the parent for outbound messages."""
59 self.parent_header = extract_header(parent)
60
61 def start_displayhook(self):
62 self.msg = self.session.msg(u'pyout', {}, parent=self.parent_header)
63
64 def write_output_prompt(self):
65 """Write the output prompt."""
66 if self.do_full_cache:
67 self.msg['content']['execution_count'] = self.prompt_count
68
69 def write_format_data(self, format_dict):
70 self.msg['content']['data'] = format_dict
71
72 def finish_displayhook(self):
73 """Finish up all displayhook activities."""
74 self.session.send(self.pub_socket, self.msg)
75 self.msg = None
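    # Typical lifecycle (a sketch of how the base DisplayHook drives this class):
    # start_displayhook() creates an empty 'pyout' message, write_output_prompt()
    # records the execution count, write_format_data() attaches the MIME-keyed
    # format dict, and finish_displayhook() sends the message over the PUB socket.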
76
77
78 class ZMQDisplayPublisher(DisplayPublisher):
79 """A display publisher that publishes data using a ZeroMQ PUB socket."""
80
81 session = Instance(Session)
82 pub_socket = Instance('zmq.Socket')
83 parent_header = Dict({})
84
85 def set_parent(self, parent):
86 """Set the parent for outbound messages."""
87 self.parent_header = extract_header(parent)
88
89 def publish(self, source, data, metadata=None):
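        # For illustration (a sketch): a call such as
        #   publish('IPython.core.display', {'text/plain': '4', 'text/html': '<b>4</b>'})
        # results in a 'display_data' message whose content carries the source
        # string, the MIME-keyed data dict and the (possibly empty) metadata dict.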
90 if metadata is None:
91 metadata = {}
92 self._validate_data(source, data, metadata)
93 content = {}
94 content['source'] = source
95 content['data'] = data
96 content['metadata'] = metadata
97 self.session.send(
98 self.pub_socket, u'display_data', content,
99 parent=self.parent_header
100 )
101
102
103 class ZMQInteractiveShell(InteractiveShell):
104 """A subclass of InteractiveShell for ZMQ."""
105
106 displayhook_class = Type(ZMQDisplayHook)
107 display_pub_class = Type(ZMQDisplayPublisher)
108
109 exiter = Instance(ZMQExitAutocall)
110 def _exiter_default(self):
111 return ZMQExitAutocall(self)
112
113 keepkernel_on_exit = None
114
115 def init_environment(self):
116 """Configure the user's environment.
117
118 """
119 env = os.environ
120 # These two ensure 'ls' produces nice coloring on BSD-derived systems
121 env['TERM'] = 'xterm-color'
122 env['CLICOLOR'] = '1'
123 # Since normal pagers don't work at all (over pexpect we don't have
124 # single-key control of the subprocess), try to disable paging in
125 # subprocesses as much as possible.
126 env['PAGER'] = 'cat'
127 env['GIT_PAGER'] = 'cat'
128
129 def auto_rewrite_input(self, cmd):
130 """Called to show the auto-rewritten input for autocall and friends.
131
132 FIXME: this payload is currently not correctly processed by the
133 frontend.
134 """
135 new = self.displayhook.prompt1.auto_rewrite() + cmd
136 payload = dict(
137 source='IPython.zmq.zmqshell.ZMQInteractiveShell.auto_rewrite_input',
138 transformed_input=new,
139 )
140 self.payload_manager.write_payload(payload)
141
142 def ask_exit(self):
143 """Engage the exit actions."""
144 payload = dict(
145 source='IPython.zmq.zmqshell.ZMQInteractiveShell.ask_exit',
146 exit=True,
147 keepkernel=self.keepkernel_on_exit,
148 )
149 self.payload_manager.write_payload(payload)
150
151 def _showtraceback(self, etype, evalue, stb):
152
153 exc_content = {
154 u'traceback' : stb,
155 u'ename' : unicode(etype.__name__),
156 u'evalue' : unicode(evalue)
157 }
158
159 dh = self.displayhook
160 # Send exception info over pub socket for other clients than the caller
161 # to pick up
162 exc_msg = dh.session.send(dh.pub_socket, u'pyerr', exc_content, dh.parent_header)
163
164 # FIXME - Hack: store exception info in shell object. Right now, the
165 # caller is reading this info after the fact, we need to fix this logic
166 # to remove this hack. Even uglier, we need to store the error status
167 # here, because in the main loop, the logic that sets it is being
168 # skipped because runlines swallows the exceptions.
169 exc_content[u'status'] = u'error'
170 self._reply_content = exc_content
171 # /FIXME
172
173 return exc_content
174
175 #------------------------------------------------------------------------
176 # Magic overrides
177 #------------------------------------------------------------------------
178 # Once the base class stops inheriting from magic, this code needs to be
179 # moved into a separate machinery as well. For now, at least isolate here
180 # the magics which this class needs to implement differently from the base
181 # class, or that are unique to it.
182
183 def magic_doctest_mode(self,parameter_s=''):
184 """Toggle doctest mode on and off.
185
186 This mode is intended to make IPython behave as much as possible like a
187 plain Python shell, from the perspective of how its prompts, exceptions
188 and output look. This makes it easy to copy and paste parts of a
189 session into doctests. It does so by:
190
191 - Changing the prompts to the classic ``>>>`` ones.
192 - Changing the exception reporting mode to 'Plain'.
193 - Disabling pretty-printing of output.
194
195 Note that IPython also supports the pasting of code snippets that have
196 leading '>>>' and '...' prompts in them. This means that you can paste
197 doctests from files or docstrings (even if they have leading
198 whitespace), and the code will execute correctly. You can then use
199 '%history -t' to see the translated history; this will give you the
200 input after removal of all the leading prompts and whitespace, which
201 can be pasted back into an editor.
202
203 With these features, you can switch into this mode easily whenever you
204 need to do testing and changes to doctests, without having to leave
205 your existing IPython session.
206 """
207
208 from IPython.utils.ipstruct import Struct
209
210 # Shorthands
211 shell = self.shell
212 disp_formatter = self.shell.display_formatter
213 ptformatter = disp_formatter.formatters['text/plain']
214 # dstore is a data store kept in the instance metadata bag to track any
215 # changes we make, so we can undo them later.
216 dstore = shell.meta.setdefault('doctest_mode', Struct())
217 save_dstore = dstore.setdefault
218
219 # save a few values we'll need to recover later
220 mode = save_dstore('mode', False)
221 save_dstore('rc_pprint', ptformatter.pprint)
222 save_dstore('rc_plain_text_only',disp_formatter.plain_text_only)
223 save_dstore('xmode', shell.InteractiveTB.mode)
224
225 if mode == False:
226 # turn on
227 ptformatter.pprint = False
228 disp_formatter.plain_text_only = True
229 shell.magic_xmode('Plain')
230 else:
231 # turn off
232 ptformatter.pprint = dstore.rc_pprint
233 disp_formatter.plain_text_only = dstore.rc_plain_text_only
234 shell.magic_xmode(dstore.xmode)
235
236 # Store new mode and inform on console
237 dstore.mode = bool(1-int(mode))
238 mode_label = ['OFF','ON'][dstore.mode]
239 print('Doctest mode is:', mode_label)
240
241 # Send the payload back so that clients can modify their prompt display
242 payload = dict(
243 source='IPython.zmq.zmqshell.ZMQInteractiveShell.magic_doctest_mode',
244 mode=dstore.mode)
245 self.payload_manager.write_payload(payload)
246
247 def magic_edit(self,parameter_s='',last_call=['','']):
248 """Bring up an editor and execute the resulting code.
249
250 Usage:
251 %edit [options] [args]
252
253 %edit runs IPython's editor hook. The default version of this hook is
254 set to call the __IPYTHON__.rc.editor command. This is read from your
255 environment variable $EDITOR. If this isn't found, it will default to
256 vi under Linux/Unix and to notepad under Windows. See the end of this
257 docstring for how to change the editor hook.
258
259 You can also set the value of this editor via the command line option
260 '-editor' or in your ipythonrc file. This is useful if you wish to use an
261 editor specifically for IPython that differs from your typical default
262 (and for Windows users who typically don't set environment variables).
263
264 This command allows you to conveniently edit multi-line code right in
265 your IPython session.
266
267 If called without arguments, %edit opens up an empty editor with a
268 temporary file and will execute the contents of this file when you
269 close it (don't forget to save it!).
270
271
272 Options:
273
274 -n <number>: open the editor at a specified line number. By default,
275 the IPython editor hook uses the unix syntax 'editor +N filename', but
276 you can configure this by providing your own modified hook if your
277 favorite editor supports line-number specifications with a different
278 syntax.
279
280 -p: this will call the editor with the same data as the previous time
281 it was used, regardless of how long ago (in your current session) it
282 was.
283
284 -r: use 'raw' input. This option only applies to input taken from the
285 user's history. By default, the 'processed' history is used, so that
286 magics are loaded in their transformed version to valid Python. If
287 this option is given, the raw input as typed as the command line is
288 used instead. When you exit the editor, it will be executed by
289 IPython's own processor.
290
291 -x: do not execute the edited code immediately upon exit. This is
292 mainly useful if you are editing programs which need to be called with
293 command line arguments, which you can then do using %run.
294
295
296 Arguments:
297
298 If arguments are given, the following possibilities exist:
299
300 - The arguments are numbers or pairs of colon-separated numbers (like
301 1 4:8 9). These are interpreted as lines of previous input to be
302 loaded into the editor. The syntax is the same of the %macro command.
303
304 - If the argument doesn't start with a number, it is evaluated as a
305 variable and its contents loaded into the editor. You can thus edit
306 any string which contains python code (including the result of
307 previous edits).
308
309 - If the argument is the name of an object (other than a string),
310 IPython will try to locate the file where it was defined and open the
311 editor at the point where it is defined. You can use `%edit function`
312 to load an editor exactly at the point where 'function' is defined,
313 edit it and have the file be executed automatically.
314
315 If the object is a macro (see %macro for details), this opens up your
316 specified editor with a temporary file containing the macro's data.
317 Upon exit, the macro is reloaded with the contents of the file.
318
319 Note: opening at an exact line is only supported under Unix, and some
320 editors (like kedit and gedit up to Gnome 2.8) do not understand the
321 '+NUMBER' parameter necessary for this feature. Good editors like
322 (X)Emacs, vi, jed, pico and joe all do.
323
324 - If the argument is not found as a variable, IPython will look for a
325 file with that name (adding .py if necessary) and load it into the
326 editor. It will execute its contents with execfile() when you exit,
327 loading any code in the file into your interactive namespace.
328
329 After executing your code, %edit will return as output the code you
330 typed in the editor (except when it was an existing file). This way
331 you can reload the code in further invocations of %edit as a variable,
332 via _<NUMBER> or Out[<NUMBER>], where <NUMBER> is the prompt number of
333 the output.
334
335 Note that %edit is also available through the alias %ed.
336
337 This is an example of creating a simple function inside the editor and
338 then modifying it. First, start up the editor:
339
340 In [1]: ed
341 Editing... done. Executing edited code...
342 Out[1]: 'def foo():\\n print "foo() was defined in an editing session"\\n'
343
344 We can then call the function foo():
345
346 In [2]: foo()
347 foo() was defined in an editing session
348
349 Now we edit foo. IPython automatically loads the editor with the
350 (temporary) file where foo() was previously defined:
351
352 In [3]: ed foo
353 Editing... done. Executing edited code...
354
355 And if we call foo() again we get the modified version:
356
357 In [4]: foo()
358 foo() has now been changed!
359
360 Here is an example of how to edit a code snippet successive
361 times. First we call the editor:
362
363 In [5]: ed
364 Editing... done. Executing edited code...
365 hello
366 Out[5]: "print 'hello'\\n"
367
368 Now we call it again with the previous output (stored in _):
369
370 In [6]: ed _
371 Editing... done. Executing edited code...
372 hello world
373 Out[6]: "print 'hello world'\\n"
374
375 Now we call it with the output #8 (stored in _8, also as Out[8]):
376
377 In [7]: ed _8
378 Editing... done. Executing edited code...
379 hello again
380 Out[7]: "print 'hello again'\\n"
381
382
383 Changing the default editor hook:
384
385 If you wish to write your own editor hook, you can put it in a
386 configuration file which you load at startup time. The default hook
387 is defined in the IPython.core.hooks module, and you can use that as a
388 starting example for further modifications. That file also has
389 general instructions on how to set a new hook for use once you've
390 defined it."""
391
392 # FIXME: This function has become a convoluted mess. It needs a
393 # ground-up rewrite with clean, simple logic.
394
395 def make_filename(arg):
396 "Make a filename from the given args"
397 try:
398 filename = get_py_filename(arg)
399 except IOError:
400 if arg.endswith('.py'):
401 filename = arg
402 else:
403 filename = None
404 return filename
405
406 # custom exceptions
407 class DataIsObject(Exception): pass
408
409 opts,args = self.parse_options(parameter_s,'prn:')
410 # Set a few locals from the options for convenience:
411 opts_p = opts.has_key('p')
412 opts_r = opts.has_key('r')
413
414 # Default line number value
415 lineno = opts.get('n',None)
416 if lineno is not None:
417 try:
418 lineno = int(lineno)
419 except:
420 warn("The -n argument must be an integer.")
421 return
422
423 if opts_p:
424 args = '_%s' % last_call[0]
425 if not self.shell.user_ns.has_key(args):
426 args = last_call[1]
427
428 # use last_call to remember the state of the previous call, but don't
429 # let it be clobbered by successive '-p' calls.
430 try:
431 last_call[0] = self.shell.displayhook.prompt_count
432 if not opts_p:
433 last_call[1] = parameter_s
434 except:
435 pass
436
437 # by default this is done with temp files, except when the given
438 # arg is a filename
439 use_temp = True
440
441 data = ''
442 if args[0].isdigit():
443 # Mode where user specifies ranges of lines, like in %macro.
444 # This means that you can't edit files whose names begin with
445 # numbers this way. Tough.
446 ranges = args.split()
447 data = ''.join(self.extract_input_slices(ranges,opts_r))
448 elif args.endswith('.py'):
449 filename = make_filename(args)
450 use_temp = False
451 elif args:
452 try:
453 # Load the parameter given as a variable. If not a string,
454 # process it as an object instead (below)
455
456 #print '*** args',args,'type',type(args) # dbg
457 data = eval(args, self.shell.user_ns)
458 if not isinstance(data, basestring):
459 raise DataIsObject
460
461 except (NameError,SyntaxError):
462 # given argument is not a variable, try as a filename
463 filename = make_filename(args)
464 if filename is None:
465 warn("Argument given (%s) can't be found as a variable "
466 "or as a filename." % args)
467 return
468 use_temp = False
469
470 except DataIsObject:
471 # macros have a special edit function
472 if isinstance(data, Macro):
473 self._edit_macro(args,data)
474 return
475
476 # For objects, try to edit the file where they are defined
477 try:
478 filename = inspect.getabsfile(data)
479 if 'fakemodule' in filename.lower() and inspect.isclass(data):
480 # class created by %edit? Try to find source
481 # by looking for method definitions instead, the
482 # __module__ in those classes is FakeModule.
483 attrs = [getattr(data, aname) for aname in dir(data)]
484 for attr in attrs:
485 if not inspect.ismethod(attr):
486 continue
487 filename = inspect.getabsfile(attr)
488 if filename and 'fakemodule' not in filename.lower():
489 # change the attribute to be the edit target instead
490 data = attr
491 break
492
493 datafile = 1
494 except TypeError:
495 filename = make_filename(args)
496 datafile = 1
497 warn('Could not find file where `%s` is defined.\n'
498 'Opening a file named `%s`' % (args,filename))
499 # Now, make sure we can actually read the source (if it was in
500 # a temp file it's gone by now).
501 if datafile:
502 try:
503 if lineno is None:
504 lineno = inspect.getsourcelines(data)[1]
505 except IOError:
506 filename = make_filename(args)
507 if filename is None:
508 warn('The file `%s` where `%s` was defined cannot '
509 'be read.' % (filename,data))
510 return
511 use_temp = False
512
513 if use_temp:
514 filename = self.shell.mktempfile(data)
515 print('IPython will make a temporary file named:', filename)
516
517 # Make sure we send to the client an absolute path, in case the working
518 # directory of client and kernel don't match
519 filename = os.path.abspath(filename)
520
521 payload = {
522 'source' : 'IPython.zmq.zmqshell.ZMQInteractiveShell.edit_magic',
523 'filename' : filename,
524 'line_number' : lineno
525 }
526 self.payload_manager.write_payload(payload)
527
528 def magic_gui(self, *args, **kwargs):
529 raise NotImplementedError(
530 'GUI support must be enabled in command line options.')
531
532 def magic_pylab(self, *args, **kwargs):
533 raise NotImplementedError(
534 'pylab support must be enabled in command line options.')
535
536 # A few magics that are adapted to the specifics of using pexpect and a
537 # remote terminal
538
539 def magic_clear(self, arg_s):
540 """Clear the terminal."""
541 if os.name == 'posix':
542 self.shell.system("clear")
543 else:
544 self.shell.system("cls")
545
546 if os.name == 'nt':
547 # This is the usual name in windows
548 magic_cls = magic_clear
549
550 # Terminal pagers won't work over pexpect, but we do have our own pager
551
552 def magic_less(self, arg_s):
553 """Show a file through the pager.
554
555 Files ending in .py are syntax-highlighted."""
556 cont = open(arg_s).read()
557 if arg_s.endswith('.py'):
558 cont = self.shell.pycolorize(cont)
559 page.page(cont)
560
561 magic_more = magic_less
562
563 # Man calls a pager, so we also need to redefine it
564 if os.name == 'posix':
565 def magic_man(self, arg_s):
566 """Find the man page for the given command and display in pager."""
567 page.page(self.shell.getoutput('man %s | col -b' % arg_s,
568 split=False))
569
570 # FIXME: this is specific to the GUI, so we should let the gui app load
571 # magics at startup that are only for the gui. Once the gui app has proper
572 # profile and configuration management, we can have it initialize a kernel
573 # with a special config file that provides these.
574 def magic_guiref(self, arg_s):
575 """Show a basic reference about the GUI console."""
576 from IPython.core.usage import gui_reference
577 page.page(gui_reference, auto_html=True)
578
579 def magic_loadpy(self, arg_s):
580 """Load a .py python script into the GUI console.
581
582 This magic command can either take a local filename or a url::
583
584 %loadpy myscript.py
585 %loadpy http://www.example.com/myscript.py
586 """
587 if not arg_s.endswith('.py'):
588 raise ValueError('%%load only works with .py files: %s' % arg_s)
589 if arg_s.startswith('http'):
590 import urllib2
591 response = urllib2.urlopen(arg_s)
592 content = response.read()
593 else:
594 content = open(arg_s).read()
595 payload = dict(
596 source='IPython.zmq.zmqshell.ZMQInteractiveShell.magic_loadpy',
597 text=content
598 )
599 self.payload_manager.write_payload(payload)
600
601 InteractiveShellABC.register(ZMQInteractiveShell)
602
[end of IPython/zmq/zmqshell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
a48dd85f0f2b72640c41f7ef30cd23d8d67fe212
|
Usability improvements to Qt console
Currently the Qt console is tricky to use with multiline cells recalled from history because it's very easy to go 'off the edge'. If the cursor reaches the top or bottom line and you do up/down once more, you jump to the next cell. When recalling history that's the desired behavior, but when you've already edited a cell, it's jarring to get bumped out of your editing context due to a single arrow movement.
I don't know how easy/possible it would be to implement, but my idea is the following: the console should detect when a cell has been made 'dirty' by editing (any typing other than arrow movements, pasting, etc), then the behavior should change. At that point, the cell boundaries should become 'hard', preventing the cursor from exiting unless the person clears the cell. Basically, once the cell is being edited, it should feel like a little text editor that doesn't lose its content without some drastic action.
|
It could be made like the readline behavior. If you edit a "cell" in the history, that edited text remains even as you move up and down in the history. When you execute or cancel, the original text takes its place back in the history, and the new text (if you executed) appends to the history.
I second Robert's idea.
On Fri, Apr 8, 2011 at 11:32 AM, rkern [email protected] wrote:
That sounds good, but do you mean to suggest not stopping the cursor at cell boundaries once the cell has been edited? Part of what's very jarring is the jumps that happen if you accidentally up-arrow through the first line, and your big multiline cell gets replaced by 'ls'. Even if we can recover it by going back down, I think ui-wise it would be better to 'lock' the cell from exit at that point. But I have no idea if that's easy to do in qt or not...
I'm always leery of preventing people from doing stuff, especially if the only way to "unlock" things is to execute something or lose what you've written. Someone might legitimately want to go up or down in the history while they're in the middle of typing something. This is particularly important when the history is from previous sessions so the user cannot simply scroll up to see it.
Rather, I think the principle to follow is one of "safe exploration". They can move around freely without losing their work. Yes, it might be visually jarring from time to time, but that's all.
OK, I can agree to that. I think I would prefer a "safety stop" once the cell has been edited, just requiring perhaps an extra Ctrl-arrow to get out, or somesuch... But if that's hard to implement, it's OK.
Do you guys have any available cycles to implement this over the next few weeks? I don't know enough qt to do it... But I think it's still an important usability limitation of the console to lose edits from an accidental up-arrow. It has happened several times to me and it's really annoying...
People have been very impressed when I've demoed the qt console, but things like this still make it a bit unappealing for actual heavy-duty work. I think we're very close, but not quite there.
But since I can't really do the work myself right now, I can only ask :)
I will implement this, Fernando.
For now, I will go with the readline approach since that satisfies the principle of least surprise. If time permits, I will add a configuration option for some kind of lock, with a keybinding (Shift+Up/Down?) to disable it. But I think that the locking should be off by default.
On Sun, Apr 10, 2011 at 10:03 AM, epatters [email protected] wrote:
This sounds great, Evan. Many thanks!
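To make the agreed-upon behaviour concrete, here is a minimal, standalone Python sketch (illustrative only, not the actual Qt console code) of readline-style history navigation that keeps temporary edits per history slot and discards them on execution; all class and method names are invented for the example.
```python
# Illustrative sketch: history navigation that preserves in-progress edits
# per history slot until something is executed (readline-like behaviour).
class EditableHistory:
    def __init__(self, items):
        self._history = list(items)       # committed entries
        self._edits = {}                  # index -> temporarily edited text
        self._index = len(self._history)  # one past the end == the "new" entry

    def _entry(self, index):
        # Prefer a temporary edit for this slot, else the committed text.
        if index in self._edits:
            return self._edits[index]
        if index == len(self._history):
            return ""
        return self._history[index]

    def _store_edit(self, current_text):
        # Remember in-progress edits before moving to another slot.
        if self._entry(self._index) != current_text:
            self._edits[self._index] = current_text

    def previous(self, current_text):
        self._store_edit(current_text)
        if self._index > 0:
            self._index -= 1
        return self._entry(self._index)

    def next(self, current_text):
        self._store_edit(current_text)
        if self._index < len(self._history):
            self._index += 1
        return self._entry(self._index)

    def execute(self, text):
        # Emulate readline: commit the new entry and drop all temporary edits.
        self._history.append(text)
        self._edits = {}
        self._index = len(self._history)
```
The patch below implements the same idea inside `HistoryConsoleWidget` via a `_history_edits` dict keyed by history index, plus an optional `history_lock` guard that can be overridden with Shift+Up/Down.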
|
2011-04-11T00:12:45Z
|
<patch>
diff --git a/IPython/frontend/qt/console/console_widget.py b/IPython/frontend/qt/console/console_widget.py
--- a/IPython/frontend/qt/console/console_widget.py
+++ b/IPython/frontend/qt/console/console_widget.py
@@ -652,13 +652,13 @@ def _prompt_finished_hook(self):
"""
pass
- def _up_pressed(self):
+ def _up_pressed(self, shift_modifier):
""" Called when the up key is pressed. Returns whether to continue
processing the event.
"""
return True
- def _down_pressed(self):
+ def _down_pressed(self, shift_modifier):
""" Called when the down key is pressed. Returns whether to continue
processing the event.
"""
@@ -1040,14 +1040,14 @@ def _event_filter_console_keypress(self, event):
intercepted = True
elif key == QtCore.Qt.Key_Up:
- if self._reading or not self._up_pressed():
+ if self._reading or not self._up_pressed(shift_down):
intercepted = True
else:
prompt_line = self._get_prompt_cursor().blockNumber()
intercepted = cursor.blockNumber() <= prompt_line
elif key == QtCore.Qt.Key_Down:
- if self._reading or not self._down_pressed():
+ if self._reading or not self._down_pressed(shift_down):
intercepted = True
else:
end_line = self._get_end_cursor().blockNumber()
diff --git a/IPython/frontend/qt/console/history_console_widget.py b/IPython/frontend/qt/console/history_console_widget.py
--- a/IPython/frontend/qt/console/history_console_widget.py
+++ b/IPython/frontend/qt/console/history_console_widget.py
@@ -2,6 +2,7 @@
from IPython.external.qt import QtGui
# Local imports
+from IPython.utils.traitlets import Bool
from console_widget import ConsoleWidget
@@ -9,6 +10,13 @@ class HistoryConsoleWidget(ConsoleWidget):
""" A ConsoleWidget that keeps a history of the commands that have been
executed and provides a readline-esque interface to this history.
"""
+
+ #------ Configuration ------------------------------------------------------
+
+ # If enabled, the input buffer will become "locked" to history movement when
+ # an edit is made to a multi-line input buffer. To override the lock, use
+ # Shift in conjunction with the standard history cycling keys.
+ history_lock = Bool(False, config=True)
#---------------------------------------------------------------------------
# 'object' interface
@@ -19,6 +27,7 @@ def __init__(self, *args, **kw):
# HistoryConsoleWidget protected variables.
self._history = []
+ self._history_edits = {}
self._history_index = 0
self._history_prefix = ''
@@ -42,6 +51,9 @@ def execute(self, source=None, hidden=False, interactive=False):
if history and (not self._history or self._history[-1] != history):
self._history.append(history)
+ # Emulate readline: reset all history edits.
+ self._history_edits = {}
+
# Move the history index to the most recent item.
self._history_index = len(self._history)
@@ -51,12 +63,15 @@ def execute(self, source=None, hidden=False, interactive=False):
# 'ConsoleWidget' abstract interface
#---------------------------------------------------------------------------
- def _up_pressed(self):
+ def _up_pressed(self, shift_modifier):
""" Called when the up key is pressed. Returns whether to continue
processing the event.
"""
prompt_cursor = self._get_prompt_cursor()
if self._get_cursor().blockNumber() == prompt_cursor.blockNumber():
+ # Bail out if we're locked.
+ if self._history_locked() and not shift_modifier:
+ return False
# Set a search prefix based on the cursor position.
col = self._get_input_buffer_cursor_column()
@@ -84,21 +99,24 @@ def _up_pressed(self):
return True
- def _down_pressed(self):
+ def _down_pressed(self, shift_modifier):
""" Called when the down key is pressed. Returns whether to continue
processing the event.
"""
end_cursor = self._get_end_cursor()
if self._get_cursor().blockNumber() == end_cursor.blockNumber():
+ # Bail out if we're locked.
+ if self._history_locked() and not shift_modifier:
+ return False
# Perform the search.
- self.history_next(self._history_prefix)
+ replaced = self.history_next(self._history_prefix)
# Emulate readline: keep the cursor position fixed for a prefix
# search. (We don't need to move the cursor to the end of the buffer
# in the other case because this happens automatically when the
# input buffer is set.)
- if self._history_prefix:
+ if self._history_prefix and replaced:
cursor = self._get_prompt_cursor()
cursor.movePosition(QtGui.QTextCursor.Right,
n=len(self._history_prefix))
@@ -113,50 +131,66 @@ def _down_pressed(self):
#---------------------------------------------------------------------------
def history_previous(self, prefix=''):
- """ If possible, set the input buffer to a previous item in the history.
+ """ If possible, set the input buffer to a previous history item.
Parameters:
-----------
prefix : str, optional
If specified, search for an item with this prefix.
+
+ Returns:
+ --------
+ Whether the input buffer was changed.
"""
index = self._history_index
+ replace = False
while index > 0:
index -= 1
- history = self._history[index]
+ history = self._get_edited_history(index)
if history.startswith(prefix):
+ replace = True
break
- else:
- history = None
- if history is not None:
+ if replace:
+ self._store_edits()
self._history_index = index
self.input_buffer = history
+ return replace
+
def history_next(self, prefix=''):
- """ Set the input buffer to a subsequent item in the history, or to the
- original search prefix if there is no such item.
+ """ If possible, set the input buffer to a subsequent history item.
Parameters:
-----------
prefix : str, optional
If specified, search for an item with this prefix.
+
+ Returns:
+ --------
+ Whether the input buffer was changed.
"""
- while self._history_index < len(self._history) - 1:
- self._history_index += 1
- history = self._history[self._history_index]
+ index = self._history_index
+ replace = False
+ while self._history_index < len(self._history):
+ index += 1
+ history = self._get_edited_history(index)
if history.startswith(prefix):
+ replace = True
break
- else:
- self._history_index = len(self._history)
- history = prefix
- self.input_buffer = history
+
+ if replace:
+ self._store_edits()
+ self._history_index = index
+ self.input_buffer = history
+
+ return replace
def history_tail(self, n=10):
""" Get the local history list.
- Parameters
- ----------
+ Parameters:
+ -----------
n : int
The (maximum) number of history items to get.
"""
@@ -166,8 +200,35 @@ def history_tail(self, n=10):
# 'HistoryConsoleWidget' protected interface
#---------------------------------------------------------------------------
+ def _history_locked(self):
+ """ Returns whether history movement is locked.
+ """
+ return (self.history_lock and
+ (self._get_edited_history(self._history_index) !=
+ self.input_buffer) and
+ (self._get_prompt_cursor().blockNumber() !=
+ self._get_end_cursor().blockNumber()))
+
+ def _get_edited_history(self, index):
+ """ Retrieves a history item, possibly with temporary edits.
+ """
+ if index in self._history_edits:
+ return self._history_edits[index]
+ elif index == len(self._history):
+ return unicode()
+ return self._history[index]
+
def _set_history(self, history):
""" Replace the current history with a sequence of history items.
"""
self._history = list(history)
+ self._history_edits = {}
self._history_index = len(self._history)
+
+ def _store_edits(self):
+ """ If there are edits to the current input buffer, store them.
+ """
+ current = self.input_buffer
+ if self._history_index == len(self._history) or \
+ self._history[self._history_index] != current:
+ self._history_edits[self._history_index] = current
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-14361
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't get pants to work with ECR
**Describe the bug**
I am unable to get Pants to access Docker repos on ECR. We use ECR repos both to store the base images for images `package`d with Pants and as the target registry for images we would like to `publish`.
For now, we are forced to manually run `docker pull ...` and `docker push ...` before and after `./pants package`, and we are unable to use the `publish` goal. This "works" only because we have bash logic that can predict which images we need and which images are produced.
Given a Dockerfile like:
```
FROM 000000000000.dkr.ecr.us-east-1.amazonaws.com/base:label
```
It will fail if we do this:
```
$ aws ecr get-login-password | docker login --username AWS --password-stdin "000000000000.dkr.ecr.us-east-1.amazonaws.com"
Login Succeeded
$ ./pants package target
...
#3 ERROR: unexpected status code [manifests label]: 401 Unauthorized
```
Whereas it will succeed if we do this instead:
```
$ aws ecr get-login-password | docker login --username AWS --password-stdin "000000000000.dkr.ecr.us-east-1.amazonaws.com"
Login Succeeded
$ docker pull 000000000000.dkr.ecr.us-east-1.amazonaws.com/base:label
123456789a: Pulling from base-python
Status: Image is up to date for 00000000.dkr.ecr.us-east-1.amazonaws.com/base:label
$ ./pants package target
22:29:31.52 [INFO] Built docker images:
* 000000000000.dkr.ecr.us-east-1.amazonaws.com/target:latest
```
**Pants version**
2.9.0
**OS**
Are you encountering the bug on MacOS, Linux, or both?
tested only on linux
</issue>
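For context, the manual workaround described in the report above can be scripted roughly as follows. This is a hedged sketch only, not a fix: the account id, base image and target address are placeholders mirroring the redacted values in the issue, and it assumes the AWS CLI and `docker` are on `PATH`.
```python
# Sketch of the manual workaround from the issue: log in to ECR, pre-pull the
# base image so the local daemon has it cached, then run the Pants goal.
import subprocess

REGISTRY = "000000000000.dkr.ecr.us-east-1.amazonaws.com"  # placeholder account/region
BASE_IMAGE = f"{REGISTRY}/base:label"                      # placeholder base image


def main() -> None:
    password = subprocess.run(
        ["aws", "ecr", "get-login-password"],
        check=True, capture_output=True, text=True,
    ).stdout
    subprocess.run(
        ["docker", "login", "--username", "AWS", "--password-stdin", REGISTRY],
        input=password, check=True, text=True,
    )
    subprocess.run(["docker", "pull", BASE_IMAGE], check=True)
    subprocess.run(["./pants", "package", "path/to:target"], check=True)  # placeholder target


if __name__ == "__main__":
    main()
```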
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 We release to [PyPI](https://pypi.org/pypi)
20 [](https://pypi.org/pypi/pantsbuild.pants)
21 [](https://pypi.org/pypi/pantsbuild.pants)
22
23 # Requirements
24
25 To run Pants, you need:
26
27 * Linux or macOS.
28 * Python 3.7+ discoverable on your `PATH`.
29 * A C compiler, system headers and Python headers (to compile native Python modules).
30 * Internet access (so that Pants can fully bootstrap itself).
31
[end of README.md]
[start of src/python/pants/backend/docker/target_types.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import os
7 import re
8 from abc import ABC, abstractmethod
9 from textwrap import dedent
10 from typing import Callable, ClassVar, Iterator, Optional, cast
11
12 from typing_extensions import final
13
14 from pants.backend.docker.registries import ALL_DEFAULT_REGISTRIES
15 from pants.base.build_environment import get_buildroot
16 from pants.core.goals.run import RestartableField
17 from pants.engine.addresses import Address
18 from pants.engine.fs import GlobMatchErrorBehavior
19 from pants.engine.target import (
20 COMMON_TARGET_FIELDS,
21 AsyncFieldMixin,
22 BoolField,
23 Dependencies,
24 DictStringToStringField,
25 InvalidFieldException,
26 OptionalSingleSourceField,
27 StringField,
28 StringSequenceField,
29 Target,
30 )
31 from pants.util.docutil import bin_name, doc_url
32
33 # Common help text to be applied to each field that supports value interpolation.
34 _interpolation_help = (
35 "{kind} may use placeholders in curly braces to be interpolated. The placeholders are derived "
36 "from various sources, such as the Dockerfile instructions and build args.\n\n"
37 )
38
39
40 class DockerImageBuildArgsField(StringSequenceField):
41 alias = "extra_build_args"
42 default = ()
43 help = (
44 "Build arguments (`--build-arg`) to use when building this image. "
45 "Entries are either strings in the form `ARG_NAME=value` to set an explicit value; "
46 "or just `ARG_NAME` to copy the value from Pants's own environment.\n\n"
47 "Use `[docker].build_args` to set default build args for all images."
48 )
49
50
51 class DockerImageContextRootField(StringField):
52 alias = "context_root"
53 help = (
54 "Specify which directory to use as the Docker build context root. This affects the file "
55 "paths to use for the `COPY` and `ADD` instructions. For example, whether "
56 "`COPY files/f.txt` should look for the file relative to the build root: "
57 "`<build root>/files/f.txt` vs relative to the BUILD file: "
58 "`<build root>/path_to_build_file/files/f.txt`.\n\n"
59 "Specify the `context_root` path as `files` for relative to build root, or as `./files` "
60 "for relative to the BUILD file.\n\n"
61 "If `context_root` is not specified, it defaults to `[docker].default_context_root`."
62 )
63
64 @classmethod
65 def compute_value(cls, raw_value: Optional[str], address: Address) -> Optional[str]:
66 value_or_default = super().compute_value(raw_value, address=address)
67 if isinstance(value_or_default, str) and value_or_default.startswith("/"):
68 val = value_or_default.strip("/")
69 raise InvalidFieldException(
70 f"The `{cls.alias}` field in target {address} must be a relative path, but was "
71 f"{value_or_default!r}. Use {val!r} for a path relative to the build root, or "
72 f"{'./' + val!r} for a path relative to the BUILD file (i.e. {os.path.join(address.spec_path, val)!r})."
73 )
74 return value_or_default
75
76
77 class DockerImageSourceField(OptionalSingleSourceField):
78 default = "Dockerfile"
79
80 # When the default glob value is in effect, we don't want the normal glob match error behavior
81 # to kick in for a missing Dockerfile, in case there are `instructions` provided, in which case
82 # we generate the Dockerfile instead. If there are no `instructions`, or there are both
83 # `instructions` and a Dockerfile hydrated from the `source` glob, we error out with a message
84 # to the user.
85 default_glob_match_error_behavior = GlobMatchErrorBehavior.ignore
86
87 help = (
88 "The Dockerfile to use when building the Docker image.\n\n"
89 "Use the `instructions` field instead if you prefer not having the Dockerfile in your "
90 "source tree."
91 )
92
93
94 class DockerImageInstructionsField(StringSequenceField):
95 alias = "instructions"
96 required = False
97 help = (
98 "The `Dockerfile` content, typically one instruction per list item.\n\n"
99 "Use the `source` field instead if you prefer having the Dockerfile in your source tree."
100 "\n\n"
101 + dedent(
102 """\
103 Example:
104
105 # example/BUILD
106 docker_image(
107 instructions=[
108 "FROM base/image:1.0",
109 "RUN echo example",
110 ],
111 )
112 """
113 )
114 )
115
116
117 class DockerImageTagsField(StringSequenceField):
118 alias = "image_tags"
119 default = ("latest",)
120 help = (
121 "Any tags to apply to the Docker image name (the version is usually applied as a tag).\n\n"
122 + _interpolation_help.format(kind="tag")
123 + f"See {doc_url('tagging-docker-images')}."
124 )
125
126
127 class DockerImageTargetStageField(StringField):
128 alias = "target_stage"
129 help = (
130 "Specify target build stage, rather than building the entire `Dockerfile`.\n\n"
131 "When using multi-stage build, you may name your stages, and can target them when building "
132 "to only selectively build a certain stage. See also the `--docker-build-target-stage` "
133 "option.\n\n"
134 "Read more about [multi-stage Docker builds]"
135 "(https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage)"
136 )
137
138
139 class DockerImageDependenciesField(Dependencies):
140 supports_transitive_excludes = True
141
142
143 class DockerImageRegistriesField(StringSequenceField):
144 alias = "registries"
145 default = (ALL_DEFAULT_REGISTRIES,)
146 help = (
147 "List of addresses or configured aliases to any Docker registries to use for the "
148 "built image.\n\n"
149 "The address is a domain name with optional port for your registry, and any registry "
150 "aliases are prefixed with `@` for addresses in the [docker].registries configuration "
151 "section.\n\n"
152 "By default, all configured registries with `default = true` are used.\n\n"
153 + dedent(
154 """\
155 Example:
156
157 # pants.toml
158 [docker.registries.my-registry-alias]
159 address = "myregistrydomain:port"
160 default = false # optional
161
162 # example/BUILD
163 docker_image(
164 registries = [
165 "@my-registry-alias",
166 "myregistrydomain:port",
167 ],
168 )
169
170 """
171 )
172 + (
173 "The above example shows two valid `registry` options: using an alias to a configured "
174 "registry and the address to a registry verbatim in the BUILD file."
175 )
176 )
177
178
179 class DockerImageRepositoryField(StringField):
180 alias = "repository"
181 help = (
182 'The repository name for the Docker image. e.g. "<repository>/<name>".\n\n'
183 "It uses the `[docker].default_repository` by default.\n\n"
184 + _interpolation_help.format(kind="repository")
185 + "Additional placeholders for the repository field are: `name`, `directory` and "
186 "`parent_directory`.\n\nSee the documentation for `[docker].default_repository` for more "
187 "information."
188 )
189
190
191 class DockerImageSkipPushField(BoolField):
192 alias = "skip_push"
193 default = False
194 help = (
195 f"If set to true, do not push this image to registries when running `{bin_name()} publish`."
196 )
197
198
199 OptionValueFormatter = Callable[[str], str]
200
201
202 class DockerBuildOptionFieldMixin(ABC):
203 """Inherit this mixin class to provide options to `docker build`."""
204
205 docker_build_option: ClassVar[str]
206
207 @abstractmethod
208 def option_values(self, *, value_formatter: OptionValueFormatter) -> Iterator[str]:
209 """Subclasses must implement this, to turn their `self.value` into none, one or more option
210 values."""
211
212 @final
213 def options(self, value_formatter: OptionValueFormatter) -> Iterator[str]:
214 for value in self.option_values(value_formatter=value_formatter):
215 yield from (self.docker_build_option, value)
216
217
218 class DockerImageBuildImageLabelsOptionField(DockerBuildOptionFieldMixin, DictStringToStringField):
219 alias = "image_labels"
220 help = (
221 "Provide image metadata.\n\n"
222 + _interpolation_help.format(kind="label value")
223 + "See [Docker labels](https://docs.docker.com/config/labels-custom-metadata/"
224 "#manage-labels-on-objects) for more information."
225 )
226 docker_build_option = "--label"
227
228 def option_values(self, value_formatter: OptionValueFormatter) -> Iterator[str]:
229 for label, value in (self.value or {}).items():
230 yield f"{label}={value_formatter(value)}"
231
232
233 class DockerImageBuildSecretsOptionField(
234 AsyncFieldMixin, DockerBuildOptionFieldMixin, DictStringToStringField
235 ):
236 alias = "secrets"
237 help = (
238 "Secret files to expose to the build (only if BuildKit enabled).\n\n"
239 "Secrets may use absolute paths, or paths relative to your build root, or the BUILD file "
240 "if prefixed with `./`. The id should be valid as used by the Docker build `--secret` "
241 "option. See [Docker secrets](https://docs.docker.com/engine/swarm/secrets/) for more "
242 "information.\n\n"
243 + dedent(
244 """\
245 Example:
246
247 docker_image(
248 secrets={
249 "mysecret": "/var/secrets/some-secret",
250 "repo-secret": "src/proj/secrets/some-secret",
251 "target-secret": "./secrets/some-secret",
252 }
253 )
254 """
255 )
256 )
257
258 docker_build_option = "--secret"
259
260 def option_values(self, **kwargs) -> Iterator[str]:
261 # os.path.join() discards preceding parts if encountering an abs path, e.g. if the secret
262 # `path` is an absolute path, the `buildroot` and `spec_path` will not be considered. Also,
263 # an empty path part is ignored.
264 for secret, path in (self.value or {}).items():
265 full_path = os.path.join(
266 get_buildroot(),
267 self.address.spec_path if re.match(r"\.{1,2}/", path) else "",
268 path,
269 )
270 yield f"id={secret},src={os.path.normpath(full_path)}"
271
272
273 class DockerImageBuildSSHOptionField(DockerBuildOptionFieldMixin, StringSequenceField):
274 alias = "ssh"
275 default = ()
276 help = (
277 "SSH agent socket or keys to expose to the build (only if BuildKit enabled) "
278 "(format: default|<id>[=<socket>|<key>[,<key>]])\n\n"
279 "The exposed agent and/or keys can then be used in your `Dockerfile` by mounting them in "
280 "your `RUN` instructions:\n\n"
281 " RUN --mount=type=ssh ...\n\n"
282 "See [Docker documentation](https://docs.docker.com/develop/develop-images"
283 "/build_enhancements/#using-ssh-to-access-private-data-in-builds) for more information."
284 )
285
286 docker_build_option = "--ssh"
287
288 def option_values(self, **kwargs) -> Iterator[str]:
289 yield from cast("tuple[str]", self.value)
290
291
292 class DockerImageTarget(Target):
293 alias = "docker_image"
294 core_fields = (
295 *COMMON_TARGET_FIELDS,
296 DockerImageBuildArgsField,
297 DockerImageDependenciesField,
298 DockerImageSourceField,
299 DockerImageInstructionsField,
300 DockerImageContextRootField,
301 DockerImageTagsField,
302 DockerImageRegistriesField,
303 DockerImageRepositoryField,
304 DockerImageBuildImageLabelsOptionField,
305 DockerImageBuildSecretsOptionField,
306 DockerImageBuildSSHOptionField,
307 DockerImageSkipPushField,
308 DockerImageTargetStageField,
309 RestartableField,
310 )
311 help = (
312 "The `docker_image` target describes how to build and tag a Docker image.\n\n"
313 "Any dependencies, as inferred or explicitly specified, will be included in the Docker "
314 "build context, after being packaged if applicable.\n\n"
315 "By default, will use a Dockerfile from the same directory as the BUILD file this target "
316 "is defined in. Point at another file with the `source` field, or use the `instructions` "
317 "field to have the Dockerfile contents verbatim directly in the BUILD file.\n\n"
318 "Dependencies on upstream/base images defined by another `docker_image` are inferred if "
319 "referenced by a build argument with a default value of the target address.\n\n"
320 + dedent(
321 """\
322 Example:
323
324 # src/docker/downstream/Dockerfile
325 ARG BASE=src/docker/upstream:image
326 FROM $BASE
327 ...
328 """
329 )
330 )
331
[end of src/python/pants/backend/docker/target_types.py]
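To make the path handling of the `secrets` field above concrete, here is a standalone sketch (not importable Pants code) mirroring the logic of `DockerImageBuildSecretsOptionField.option_values`; the `buildroot` and `spec_path` values are invented for the example.
```python
# Mirrors the secrets path resolution shown above: absolute paths pass through,
# "./"-prefixed paths resolve relative to the BUILD file, and other relative
# paths resolve relative to the build root.
import os
import re


def secret_option_values(secrets, buildroot, spec_path):
    for secret, path in secrets.items():
        full_path = os.path.join(
            buildroot,
            spec_path if re.match(r"\.{1,2}/", path) else "",
            path,
        )
        yield f"id={secret},src={os.path.normpath(full_path)}"


print(list(secret_option_values(
    {
        "mysecret": "/var/secrets/some-secret",
        "repo-secret": "src/proj/secrets/some-secret",
        "target-secret": "./secrets/some-secret",
    },
    buildroot="/repo",
    spec_path="example",
)))
# ['id=mysecret,src=/var/secrets/some-secret',
#  'id=repo-secret,src=/repo/src/proj/secrets/some-secret',
#  'id=target-secret,src=/repo/example/secrets/some-secret']
```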
[start of src/python/pants/backend/docker/util_rules/docker_build_context.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import logging
7 from abc import ABC
8 from dataclasses import dataclass
9
10 from pants.backend.docker.package_types import BuiltDockerImage
11 from pants.backend.docker.subsystems.docker_options import DockerOptions
12 from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest
13 from pants.backend.docker.target_types import DockerImageSourceField
14 from pants.backend.docker.util_rules.docker_build_args import (
15 DockerBuildArgs,
16 DockerBuildArgsRequest,
17 )
18 from pants.backend.docker.util_rules.docker_build_env import (
19 DockerBuildEnvironment,
20 DockerBuildEnvironmentError,
21 DockerBuildEnvironmentRequest,
22 )
23 from pants.backend.docker.utils import get_hash, suggest_renames
24 from pants.backend.docker.value_interpolation import (
25 DockerBuildArgsInterpolationValue,
26 DockerInterpolationContext,
27 DockerInterpolationValue,
28 )
29 from pants.backend.shell.target_types import ShellSourceField
30 from pants.core.goals.package import BuiltPackage, PackageFieldSet
31 from pants.core.target_types import FileSourceField
32 from pants.core.util_rules.source_files import SourceFiles, SourceFilesRequest
33 from pants.engine.addresses import Address, Addresses, UnparsedAddressInputs
34 from pants.engine.fs import Digest, MergeDigests, Snapshot
35 from pants.engine.rules import Get, MultiGet, collect_rules, rule
36 from pants.engine.target import (
37 Dependencies,
38 DependenciesRequest,
39 FieldSetsPerTarget,
40 FieldSetsPerTargetRequest,
41 GeneratedSources,
42 GenerateSourcesRequest,
43 SourcesField,
44 Targets,
45 TransitiveTargets,
46 TransitiveTargetsRequest,
47 )
48 from pants.engine.unions import UnionRule
49
50 logger = logging.getLogger(__name__)
51
52
53 class DockerBuildContextError(Exception):
54 pass
55
56
57 class DockerContextFilesAcceptableInputsField(ABC, SourcesField):
58 """This is a meta field for the context files generator, to tell the codegen machinery what
59 source fields are good to use as-is.
60
61 Use `DockerContextFilesAcceptableInputsField.register(<SourceField>)` to register input fields
62 that should be accepted.
63
64 This is implemented using the `ABC.register` from Python lib:
65 https://docs.python.org/3/library/abc.html#abc.ABCMeta.register
66 """
67
68
69 # These sources will be used to populate the build context as-is.
70 DockerContextFilesAcceptableInputsField.register(ShellSourceField)
71
72
73 class DockerContextFilesSourcesField(SourcesField):
74 """This is just a type marker for the codegen machinery."""
75
76
77 class GenerateDockerContextFiles(GenerateSourcesRequest):
78 """This translates all files from acceptable Source fields for the docker context using the
79 `codegen` machinery."""
80
81 input = DockerContextFilesAcceptableInputsField
82 output = DockerContextFilesSourcesField
83 exportable = False
84
85
86 @rule
87 async def hydrate_input_sources(request: GenerateDockerContextFiles) -> GeneratedSources:
88 # We simply pass the files on, as-is
89 return GeneratedSources(request.protocol_sources)
90
91
92 @dataclass(frozen=True)
93 class DockerBuildContextRequest:
94 address: Address
95 build_upstream_images: bool = False
96
97
98 @dataclass(frozen=True)
99 class DockerBuildContext:
100 build_args: DockerBuildArgs
101 digest: Digest
102 build_env: DockerBuildEnvironment
103 dockerfile: str
104 interpolation_context: DockerInterpolationContext
105 copy_source_vs_context_source: tuple[tuple[str, str], ...]
106 stages: tuple[str, ...]
107
108 @classmethod
109 def create(
110 cls,
111 build_args: DockerBuildArgs,
112 snapshot: Snapshot,
113 build_env: DockerBuildEnvironment,
114 dockerfile_info: DockerfileInfo,
115 ) -> DockerBuildContext:
116 interpolation_context: dict[str, dict[str, str] | DockerInterpolationValue] = {}
117
118 # Go over all FROM tags and names for all stages.
119 stage_names: set[str] = set()
120 stage_tags = (tag.split(maxsplit=1) for tag in dockerfile_info.version_tags)
121 tags_values: dict[str, str] = {}
122 for idx, (stage, tag) in enumerate(stage_tags):
123 if stage != f"stage{idx}":
124 stage_names.add(stage)
125 if idx == 0:
126 # Expose the first (stage0) FROM directive as the "baseimage".
127 tags_values["baseimage"] = tag
128 tags_values[stage] = tag
129
130 if build_args:
131 # Extract default arg values from the parsed Dockerfile.
132 build_arg_defaults = {
133 def_name: def_value
134 for def_name, has_default, def_value in [
135 def_arg.partition("=") for def_arg in dockerfile_info.build_args
136 ]
137 if has_default
138 }
139 try:
140 # Create build args context value, based on defined build_args and
141 # extra_build_args. We do _not_ auto "magically" pick up all ARG names from the
142 # Dockerfile as first class args to use as placeholders, to make it more explicit
143 # which args are actually being used by Pants. We do pick up any defined default ARG
144 # values from the Dockerfile however, in order to not have to duplicate them in
145 # the BUILD files.
146 interpolation_context["build_args"] = {
147 arg_name: arg_value
148 if has_value
149 else build_env.get(arg_name, build_arg_defaults.get(arg_name))
150 for arg_name, has_value, arg_value in [
151 build_arg.partition("=") for build_arg in build_args
152 ]
153 }
154 except DockerBuildEnvironmentError as e:
155 raise DockerBuildContextError(
156 f"Undefined value for build arg on the {dockerfile_info.address} target: {e}"
157 "\n\nIf you did not intend to inherit the value for this build arg from the "
158 "environment, provide a default value with the option `[docker].build_args` "
159 "or in the `extra_build_args` field on the target definition. Alternatively, "
160 "you may also provide a default value on the `ARG` instruction directly in "
161 "the `Dockerfile`."
162 ) from e
163
164 # Override default value type for the `build_args` context to get helpful error messages.
165 interpolation_context["build_args"] = DockerBuildArgsInterpolationValue(
166 interpolation_context.get("build_args", {})
167 )
168
169 # Data from Pants.
170 interpolation_context["pants"] = {
171 # Present hash for all inputs that can be used for image tagging.
172 "hash": get_hash((build_args, build_env, snapshot.digest)).hexdigest(),
173 }
174
175 # Base image tags values for all stages (as parsed from the Dockerfile instructions).
176 interpolation_context["tags"] = tags_values
177
178 return cls(
179 build_args=build_args,
180 digest=snapshot.digest,
181 dockerfile=dockerfile_info.source,
182 build_env=build_env,
183 interpolation_context=DockerInterpolationContext.from_dict(interpolation_context),
184 copy_source_vs_context_source=tuple(
185 suggest_renames(
186 tentative_paths=(
187 # We don't want to include the Dockerfile as a suggested rename
188 dockerfile_info.source,
189 *dockerfile_info.copy_sources,
190 ),
191 actual_files=snapshot.files,
192 actual_dirs=snapshot.dirs,
193 )
194 ),
195 stages=tuple(sorted(stage_names)),
196 )
197
198
199 @rule
200 async def create_docker_build_context(
201 request: DockerBuildContextRequest, docker_options: DockerOptions
202 ) -> DockerBuildContext:
203 # Get all targets to include in context.
204 transitive_targets = await Get(TransitiveTargets, TransitiveTargetsRequest([request.address]))
205 docker_image = transitive_targets.roots[0]
206
207 # Get all dependencies for the root target.
208 root_dependencies = await Get(Targets, DependenciesRequest(docker_image.get(Dependencies)))
209
210 # Get all file sources from the root dependencies. That includes any non-file sources that can
211 # be "codegen"ed into a file source.
212 sources_request = Get(
213 SourceFiles,
214 SourceFilesRequest(
215 sources_fields=[tgt.get(SourcesField) for tgt in root_dependencies],
216 for_sources_types=(
217 DockerContextFilesSourcesField,
218 FileSourceField,
219 ),
220 enable_codegen=True,
221 ),
222 )
223
224 embedded_pkgs_per_target_request = Get(
225 FieldSetsPerTarget,
226 FieldSetsPerTargetRequest(PackageFieldSet, transitive_targets.dependencies),
227 )
228
229 sources, embedded_pkgs_per_target, dockerfile_info = await MultiGet(
230 sources_request,
231 embedded_pkgs_per_target_request,
232 Get(DockerfileInfo, DockerfileInfoRequest(docker_image.address)),
233 )
234
235 # Package binary dependencies for build context.
236 embedded_pkgs = await MultiGet(
237 Get(BuiltPackage, PackageFieldSet, field_set)
238 for field_set in embedded_pkgs_per_target.field_sets
239 # Exclude docker images, unless build_upstream_images is true.
240 if request.build_upstream_images
241 or not isinstance(getattr(field_set, "source", None), DockerImageSourceField)
242 )
243
244 if request.build_upstream_images:
245 images_str = ", ".join(
246 a.tags[0] for p in embedded_pkgs for a in p.artifacts if isinstance(a, BuiltDockerImage)
247 )
248 if images_str:
249 logger.debug(f"Built upstream Docker images: {images_str}")
250 else:
251 logger.debug("Did not build any upstream Docker images")
252
253 packages_str = ", ".join(a.relpath for p in embedded_pkgs for a in p.artifacts if a.relpath)
254 if packages_str:
255 logger.debug(f"Built packages for Docker image: {packages_str}")
256 else:
257 logger.debug("Did not build any packages for Docker image")
258
259 embedded_pkgs_digest = [built_package.digest for built_package in embedded_pkgs]
260 all_digests = (dockerfile_info.digest, sources.snapshot.digest, *embedded_pkgs_digest)
261
262 # Merge all digests to get the final docker build context digest.
263 context_request = Get(Snapshot, MergeDigests(d for d in all_digests if d))
264
265 # Requests for build args and env
266 build_args_request = Get(DockerBuildArgs, DockerBuildArgsRequest(docker_image))
267 build_env_request = Get(DockerBuildEnvironment, DockerBuildEnvironmentRequest(docker_image))
268 context, build_args, build_env = await MultiGet(
269 context_request, build_args_request, build_env_request
270 )
271
272 if request.build_upstream_images:
273 # Update build arg values for FROM image build args.
274
275 # Get the FROM image build args with defined values in the Dockerfile.
276 dockerfile_build_args = {
277 arg_name: arg_value
278 for arg_name, arg_value in dockerfile_info.build_args.to_dict().items()
279 if arg_value and arg_name in dockerfile_info.from_image_build_arg_names
280 }
281 # Parse the build args values into Address instances.
282 from_image_addresses = await Get(
283 Addresses,
284 UnparsedAddressInputs(
285 dockerfile_build_args.values(),
286 owning_address=dockerfile_info.address,
287 ),
288 )
289 # Map those addresses to the corresponding built image ref (tag).
290 address_to_built_image_tag = {
291 field_set.address: image.tags[0]
292 for field_set, built in zip(embedded_pkgs_per_target.field_sets, embedded_pkgs)
293 for image in built.artifacts
294 if isinstance(image, BuiltDockerImage)
295 }
296 # Create the FROM image build args.
297 from_image_build_args = [
298 f"{arg_name}={address_to_built_image_tag[addr]}"
299 for arg_name, addr in zip(dockerfile_build_args.keys(), from_image_addresses)
300 ]
301 # Merge all build args.
302 build_args = DockerBuildArgs.from_strings(*build_args, *from_image_build_args)
303
304 return DockerBuildContext.create(
305 build_args=build_args,
306 snapshot=context,
307 dockerfile_info=dockerfile_info,
308 build_env=build_env,
309 )
310
311
312 def rules():
313 return (
314 *collect_rules(),
315 UnionRule(GenerateSourcesRequest, GenerateDockerContextFiles),
316 )
317
[end of src/python/pants/backend/docker/util_rules/docker_build_context.py]
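As a quick illustration of how `DockerBuildContext.create` above turns parsed `FROM` instructions into interpolation values, here is a standalone sketch of just the stage/tag bookkeeping; the `version_tags` input is invented and mimics what the Dockerfile parser would report for three stages.
```python
# Standalone sketch of the stage/tag bookkeeping in DockerBuildContext.create:
# each entry is "<stage name> <image tag>", where unnamed stages get "stageN".
version_tags = ["stage0 python:3.9-slim", "builder 3.9-slim", "stage2 latest"]

stage_names: set[str] = set()
tags_values: dict[str, str] = {}
for idx, (stage, tag) in enumerate(tag.split(maxsplit=1) for tag in version_tags):
    if stage != f"stage{idx}":
        stage_names.add(stage)          # only explicitly named stages are recorded
    if idx == 0:
        tags_values["baseimage"] = tag  # the first FROM is also exposed as "baseimage"
    tags_values[stage] = tag

print(sorted(stage_names))  # ['builder']
print(tags_values)
# {'baseimage': 'python:3.9-slim', 'stage0': 'python:3.9-slim',
#  'builder': '3.9-slim', 'stage2': 'latest'}
```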
[start of src/python/pants/pantsd/process_manager.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import logging
7 import os
8 import signal
9 import sys
10 import time
11 import traceback
12 from abc import ABCMeta
13 from hashlib import sha256
14 from typing import Callable, cast
15
16 import psutil
17
18 from pants.base.build_environment import get_buildroot
19 from pants.bin.pants_env_vars import DAEMON_ENTRYPOINT
20 from pants.option.options import Options
21 from pants.option.options_fingerprinter import OptionsFingerprinter
22 from pants.option.scope import GLOBAL_SCOPE
23 from pants.pantsd.lock import OwnerPrintingInterProcessFileLock
24 from pants.util.dirutil import read_file, rm_rf, safe_file_dump, safe_mkdir
25 from pants.util.memo import memoized_classproperty, memoized_property
26
27 logger = logging.getLogger(__name__)
28
29
30 class ProcessManager:
31 """Manages contextual, on-disk process metadata.
32
33 Metadata is stored under a per-host fingerprinted directory, and a nested per-named-process
34 directory. The per-host directory defends against attempting to use process metadata that has
35 been mounted into virtual machines or docker images.
36 """
37
38 class MetadataError(Exception):
39 pass
40
41 class Timeout(Exception):
42 pass
43
44 class NonResponsiveProcess(Exception):
45 pass
46
47 class NotStarted(Exception):
48 pass
49
50 KILL_WAIT_SEC = 5
51 KILL_CHAIN = (signal.SIGTERM, signal.SIGKILL)
52
53 FAIL_WAIT_SEC = 10
54 INFO_INTERVAL_SEC = 5
55 WAIT_INTERVAL_SEC = 0.1
56
57 SOCKET_KEY = "socket"
58 PROCESS_NAME_KEY = "process_name"
59 PID_KEY = "pid"
60 FINGERPRINT_KEY = "fingerprint"
61
62 def __init__(self, name: str, metadata_base_dir: str) -> None:
63 """
64 :param string name: The process identity/name (e.g. 'pantsd' or 'ng_Zinc').
65 :param str metadata_base_dir: The overridden base directory for process metadata.
66 """
67 super().__init__()
68 self._metadata_base_dir = metadata_base_dir
69 self._name = name.lower().strip()
70 # TODO: Extract process spawning code.
71 self._buildroot = get_buildroot()
72
73 @memoized_classproperty
74 def host_fingerprint(cls) -> str:
75 """A fingerprint that attempts to identify the potential scope of a live process.
76
77 See the class pydoc.
78
79 In the absence of kernel hotswapping, a new uname means a restart or virtual machine, both
80 of which mean that process metadata is invalid. Additionally, docker generates a random
81 hostname per instance, which improves the reliability of this hash.
82
83 TODO: It would be nice to be able to use `uptime` (e.g. https://crates.io/crates/uptime_lib)
84 to identify reboots, but it's more challenging than it should be because it would involve
85 subtracting from the current time, which might hit aliasing issues.
86 """
87 hasher = sha256()
88 for component in os.uname():
89 hasher.update(component.encode())
90 return hasher.hexdigest()[:12]
91
92 @staticmethod
93 def _maybe_cast(item, caster):
94 """Given a casting function, attempt to cast to that type while masking common cast
95 exceptions.
96
97 N.B. This is mostly suitable for casting string types to numeric types - e.g. a port number
98 read from disk into an int.
99
100 :param func caster: A casting callable (e.g. `int`).
101 :returns: The result of caster(item) or item if TypeError or ValueError are raised during cast.
102 """
103 try:
104 return caster(item)
105 except (TypeError, ValueError):
106 # N.B. the TypeError catch here (already) protects against the case that caster is None.
107 return item
108
109 @classmethod
110 def _deadline_until(
111 cls,
112 closure: Callable[[], bool],
113 ongoing_msg: str,
114 completed_msg: str,
115 timeout: float = FAIL_WAIT_SEC,
116 wait_interval: float = WAIT_INTERVAL_SEC,
117 info_interval: float = INFO_INTERVAL_SEC,
118 ):
119 """Execute a function/closure repeatedly until a True condition or timeout is met.
120
121 :param func closure: the function/closure to execute (should not block for long periods of time
122 and must return True on success).
123 :param str ongoing_msg: a description of the action that is being executed, to be rendered as
124 info while we wait, and as part of any rendered exception.
125 :param str completed_msg: a description of the action that is being executed, to be rendered
126 after the action has succeeded (but only if we have previously rendered
127 the ongoing_msg).
128 :param float timeout: the maximum amount of time to wait for a true result from the closure in
129 seconds. N.B. this is timing based, so won't be exact if the runtime of
130 the closure exceeds the timeout.
131 :param float wait_interval: the amount of time to sleep between closure invocations.
132 :param float info_interval: the amount of time to wait before and between reports via info
133 logging that we're still waiting for the closure to succeed.
134 :raises: :class:`ProcessManager.Timeout` on execution timeout.
135 """
136 now = time.time()
137 deadline = now + timeout
138 info_deadline = now + info_interval
139 rendered_ongoing = False
140 while 1:
141 if closure():
142 if rendered_ongoing:
143 logger.info(completed_msg)
144 return True
145
146 now = time.time()
147 if now > deadline:
148 raise cls.Timeout(
149 "exceeded timeout of {} seconds while waiting for {}".format(
150 timeout, ongoing_msg
151 )
152 )
153
154 if now > info_deadline:
155 logger.info(f"waiting for {ongoing_msg}...")
156 rendered_ongoing = True
157 info_deadline = info_deadline + info_interval
158 elif wait_interval:
159 time.sleep(wait_interval)
160
161 @classmethod
162 def _wait_for_file(
163 cls,
164 filename: str,
165 ongoing_msg: str,
166 completed_msg: str,
167 timeout: float = FAIL_WAIT_SEC,
168 want_content: bool = True,
169 ):
170 """Wait up to timeout seconds for filename to appear with a non-zero size or raise
171 Timeout()."""
172
173 def file_waiter():
174 return os.path.exists(filename) and (not want_content or os.path.getsize(filename))
175
176 return cls._deadline_until(file_waiter, ongoing_msg, completed_msg, timeout=timeout)
177
178 @classmethod
179 def _get_metadata_dir_by_name(cls, name: str, metadata_base_dir: str) -> str:
180 """Retrieve the metadata dir by name.
181
182 This should always live outside of the workdir to survive a clean-all.
183 """
184 return os.path.join(metadata_base_dir, cls.host_fingerprint, name)
185
186 def _metadata_file_path(self, metadata_key) -> str:
187 return self.metadata_file_path(self.name, metadata_key, self._metadata_base_dir)
188
189 @classmethod
190 def metadata_file_path(cls, name, metadata_key, metadata_base_dir) -> str:
191 return os.path.join(cls._get_metadata_dir_by_name(name, metadata_base_dir), metadata_key)
192
193 def read_metadata_by_name(self, metadata_key, caster=None):
194 """Read process metadata using a named identity.
195
196 :param string metadata_key: The metadata key (e.g. 'pid').
197 :param func caster: A casting callable to apply to the read value (e.g. `int`).
198 """
199 file_path = self._metadata_file_path(metadata_key)
200 try:
201 metadata = read_file(file_path).strip()
202 return self._maybe_cast(metadata, caster)
203 except OSError:
204 return None
205
206 def write_metadata_by_name(self, metadata_key, metadata_value) -> None:
207 """Write process metadata using a named identity.
208
209 :param string metadata_key: The metadata key (e.g. 'pid').
210 :param string metadata_value: The metadata value (e.g. '1729').
211 """
212 safe_mkdir(self._get_metadata_dir_by_name(self.name, self._metadata_base_dir))
213 file_path = self._metadata_file_path(metadata_key)
214 safe_file_dump(file_path, metadata_value)
215
216 def await_metadata_by_name(
217 self, metadata_key, ongoing_msg: str, completed_msg: str, timeout: float, caster=None
218 ):
219 """Block up to a timeout for process metadata to arrive on disk.
220
221 :param string metadata_key: The metadata key (e.g. 'pid').
222 :param str ongoing_msg: A message that describes what is being waited for while waiting.
223 :param str completed_msg: A message that describes what was being waited for after completion.
224 :param float timeout: The deadline to write metadata.
225 :param type caster: A type-casting callable to apply to the read value (e.g. int, str).
226 :returns: The value of the metadata key (read from disk post-write).
227 :raises: :class:`ProcessManager.Timeout` on timeout.
228 """
229 file_path = self._metadata_file_path(metadata_key)
230 self._wait_for_file(file_path, ongoing_msg, completed_msg, timeout=timeout)
231 return self.read_metadata_by_name(metadata_key, caster)
232
233 def purge_metadata_by_name(self, name) -> None:
234         """Purge a process's metadata directory.
235
236 :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal.
237 """
238 meta_dir = self._get_metadata_dir_by_name(name, self._metadata_base_dir)
239 logger.debug(f"purging metadata directory: {meta_dir}")
240 try:
241 rm_rf(meta_dir)
242 except OSError as e:
243 raise ProcessManager.MetadataError(
244 f"failed to purge metadata directory {meta_dir}: {e!r}"
245 )
246
247 @property
248 def name(self):
249 """The logical name/label of the process."""
250 return self._name
251
252 @memoized_property
253 def lifecycle_lock(self):
254 """An identity-keyed inter-process lock for safeguarding lifecycle and other operations."""
255 safe_mkdir(self._metadata_base_dir)
256 return OwnerPrintingInterProcessFileLock(
257 # N.B. This lock can't key into the actual named metadata dir (e.g. `.pids/pantsd/lock`
258 # via `ProcessManager._get_metadata_dir_by_name()`) because of a need to purge
259 # the named metadata dir on startup to avoid stale metadata reads.
260 os.path.join(self._metadata_base_dir, f".lock.{self._name}")
261 )
262
263 @property
264 def fingerprint(self):
265 """The fingerprint of the current process.
266
267 This reads the current fingerprint from the `ProcessManager` metadata.
268
269 :returns: The fingerprint of the running process as read from ProcessManager metadata or `None`.
270 :rtype: string
271 """
272 return self.read_metadata_by_name(self.FINGERPRINT_KEY)
273
274 @property
275 def pid(self):
276         """The running process's pid (or None)."""
277 return self.read_metadata_by_name(self.PID_KEY, int)
278
279 @property
280 def process_name(self):
281 """The process name, to be compared to the psutil exe_name for stale pid checking."""
282 return self.read_metadata_by_name(self.PROCESS_NAME_KEY, str)
283
284 @property
285 def socket(self):
286         """The running process's socket/port information (or None)."""
287 return self.read_metadata_by_name(self.SOCKET_KEY, int)
288
289 def has_current_fingerprint(self, fingerprint):
290 """Determines if a new fingerprint is the current fingerprint of the running process.
291
292 :param string fingerprint: The new fingerprint to compare to.
293 :rtype: bool
294 """
295 return fingerprint == self.fingerprint
296
297 def needs_restart(self, fingerprint):
298 """Determines if the current ProcessManager needs to be started or restarted.
299
300 :param string fingerprint: The new fingerprint to compare to.
301 :rtype: bool
302 """
303 return self.is_dead() or not self.has_current_fingerprint(fingerprint)
304
305 def await_pid(self, timeout: float) -> int:
306 """Wait up to a given timeout for a process to write pid metadata."""
307 return cast(
308 int,
309 self.await_metadata_by_name(
310 self.PID_KEY,
311 f"{self._name} to start",
312 f"{self._name} started",
313 timeout,
314 caster=int,
315 ),
316 )
317
318 def await_socket(self, timeout: float) -> int:
319 """Wait up to a given timeout for a process to write socket info."""
320 return cast(
321 int,
322 self.await_metadata_by_name(
323 self.SOCKET_KEY,
324 f"{self._name} socket to be opened",
325 f"{self._name} socket opened",
326 timeout,
327 caster=int,
328 ),
329 )
330
331 def write_pid(self, pid: int | None = None):
332 """Write the current process's PID."""
333 pid = os.getpid() if pid is None else pid
334 self.write_metadata_by_name(self.PID_KEY, str(pid))
335
336 def _get_process_name(self, process: psutil.Process | None = None) -> str:
337 proc = process or self._as_process()
338 cmdline = proc.cmdline()
339 return cast(str, cmdline[0] if cmdline else proc.name())
340
341 def write_process_name(self, process_name: str | None = None):
342 """Write the current process's name."""
343 process_name = process_name or self._get_process_name()
344 self.write_metadata_by_name(self.PROCESS_NAME_KEY, process_name)
345
346 def write_socket(self, socket_info: int):
347         """Write the local process's socket information (TCP port or UNIX socket)."""
348 self.write_metadata_by_name(self.SOCKET_KEY, str(socket_info))
349
350 def write_fingerprint(self, fingerprint: str) -> None:
351 self.write_metadata_by_name(self.FINGERPRINT_KEY, fingerprint)
352
353 def _as_process(self):
354 """Returns a psutil `Process` object wrapping our pid.
355
356 NB: Even with a process object in hand, subsequent method calls against it can always raise
357 `NoSuchProcess`. Care is needed to document the raises in the public API or else trap them and
358 do something sensible for the API.
359
360 :returns: a psutil Process object or else None if we have no pid.
361 :rtype: :class:`psutil.Process`
362 :raises: :class:`psutil.NoSuchProcess` if the process identified by our pid has died.
363 :raises: :class:`self.NotStarted` if no pid has been recorded for this process.
364 """
365 pid = self.pid
366 if not pid:
367 raise self.NotStarted()
368 return psutil.Process(pid)
369
370 def is_dead(self):
371 """Return a boolean indicating whether the process is dead or not."""
372 return not self.is_alive()
373
374 def is_alive(self, extended_check=None):
375 """Return a boolean indicating whether the process is running or not.
376
377 :param func extended_check: An additional callable that will be invoked to perform an extended
378 liveness check. This callable should take a single argument of a
379 `psutil.Process` instance representing the context-local process
380 and return a boolean True/False to indicate alive vs not alive.
381 """
382 try:
383 process = self._as_process()
384 return not (
385 # Can happen if we don't find our pid.
386 (not process)
387 or
388 # Check for walkers.
389 (process.status() == psutil.STATUS_ZOMBIE)
390 or
391 # Check for stale pids.
392 (self.process_name and self.process_name != self._get_process_name(process))
393 or
394 # Extended checking.
395 (extended_check and not extended_check(process))
396 )
397 except (self.NotStarted, psutil.NoSuchProcess, psutil.AccessDenied):
398 # On some platforms, accessing attributes of a zombie'd Process results in NoSuchProcess.
399 return False
400
401 def purge_metadata(self, force=False):
402 """Instance-based version of ProcessManager.purge_metadata_by_name() that checks for process
403 liveness before purging metadata.
404
405 :param bool force: If True, skip process liveness check before purging metadata.
406 :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal.
407 """
408 if not force and self.is_alive():
409 raise ProcessManager.MetadataError("cannot purge metadata for a running process!")
410
411 self.purge_metadata_by_name(self._name)
412
413 def _kill(self, kill_sig):
414 """Send a signal to the current process."""
415 if self.pid:
416 os.kill(self.pid, kill_sig)
417
418 def terminate(self, signal_chain=KILL_CHAIN, kill_wait=KILL_WAIT_SEC, purge=True):
419 """Ensure a process is terminated by sending a chain of kill signals (SIGTERM, SIGKILL)."""
420 alive = self.is_alive()
421 if alive:
422 logger.debug(f"terminating {self._name}")
423 for signal_type in signal_chain:
424 pid = self.pid
425 try:
426 logger.debug(f"sending signal {signal_type} to pid {pid}")
427 self._kill(signal_type)
428 except OSError as e:
429 logger.warning(
430 "caught OSError({e!s}) during attempt to kill -{signal} {pid}!".format(
431 e=e, signal=signal_type, pid=pid
432 )
433 )
434
435 # Wait up to kill_wait seconds to terminate or move onto the next signal.
436 try:
437 if self._deadline_until(
438 self.is_dead,
439 f"{self._name} to exit",
440 f"{self._name} exited",
441 timeout=kill_wait,
442 ):
443 alive = False
444 logger.debug(f"successfully terminated pid {pid}")
445 break
446 except self.Timeout:
447 # Loop to the next kill signal on timeout.
448 pass
449
450 if alive:
451 raise ProcessManager.NonResponsiveProcess(
452 "failed to kill pid {pid} with signals {chain}".format(
453 pid=self.pid, chain=signal_chain
454 )
455 )
456
457 if purge:
458 self.purge_metadata(force=True)
459
460 def daemon_spawn(
461 self, pre_fork_opts=None, post_fork_parent_opts=None, post_fork_child_opts=None
462 ):
463 """Perform a single-fork to run a subprocess and write the child pid file.
464
465 Use this if your post_fork_child block invokes a subprocess via subprocess.Popen(). In this
466 case, a second fork is extraneous given that Popen() also forks. Using this daemonization
467 method leaves the responsibility of writing the pid to the caller to allow for library-
468 agnostic flexibility in subprocess execution.
469 """
470 self.purge_metadata()
471 self.pre_fork(**pre_fork_opts or {})
472 pid = os.fork()
473 if pid == 0:
474 # fork's child execution
475 try:
476 os.setsid()
477 os.chdir(self._buildroot)
478 self.post_fork_child(**post_fork_child_opts or {})
479 except Exception:
480 logger.critical(traceback.format_exc())
481 finally:
482 os._exit(0)
483 else:
484 # fork's parent execution
485 try:
486 self.post_fork_parent(**post_fork_parent_opts or {})
487 except Exception:
488 logger.critical(traceback.format_exc())
489
490 def pre_fork(self):
491 """Pre-fork callback for subclasses."""
492
493 def post_fork_child(self):
494         """Post-fork child callback for subclasses."""
495
496 def post_fork_parent(self):
497 """Post-fork parent callback for subclasses."""
498
499
500 class PantsDaemonProcessManager(ProcessManager, metaclass=ABCMeta):
501 """An ABC for classes that interact with pantsd's metadata.
502
503 This is extended by both a pantsd client handle, and by the server: the client reads process
504 metadata, and the server writes it.
505 """
506
507 def __init__(self, bootstrap_options: Options, daemon_entrypoint: str):
508 super().__init__(
509 name="pantsd",
510 metadata_base_dir=bootstrap_options.for_global_scope().pants_subprocessdir,
511 )
512 self._bootstrap_options = bootstrap_options
513 self._daemon_entrypoint = daemon_entrypoint
514
515 @property
516 def options_fingerprint(self):
517 """Returns the options fingerprint for the pantsd process.
518
519 This should cover all options consumed by the pantsd process itself in order to start: also
520 known as the "micro-bootstrap" options. These options are marked `daemon=True` in the global
521 options.
522
523 The `daemon=True` options are a small subset of the bootstrap options. Independently, the
524 PantsDaemonCore fingerprints the entire set of bootstrap options to identify when the
525         Scheduler needs to be re-initialized.
526 """
527 return OptionsFingerprinter.combined_options_fingerprint_for_scope(
528 GLOBAL_SCOPE, self._bootstrap_options, daemon_only=True
529 )
530
531 def needs_restart(self, option_fingerprint):
532 """Overrides ProcessManager.needs_restart, to account for the case where pantsd is running
533 but we want to shutdown after this run.
534
535 :param option_fingerprint: A fingerprint of the global bootstrap options.
536 :return: True if the daemon needs to restart.
537 """
538 return super().needs_restart(option_fingerprint)
539
540 def post_fork_child(self):
541 """Post-fork() child callback for ProcessManager.daemon_spawn()."""
542 spawn_control_env = {
543 DAEMON_ENTRYPOINT: f"{self._daemon_entrypoint}:launch_new_pantsd_instance",
544 # The daemon should run under the same sys.path as us; so we ensure
545 # this. NB: It will scrub PYTHONPATH once started to avoid infecting
546 # its own unrelated subprocesses.
547 "PYTHONPATH": os.pathsep.join(sys.path),
548 }
549 exec_env = {**os.environ, **spawn_control_env}
550
551 # Pass all of sys.argv so that we can proxy arg flags e.g. `-ldebug`.
552 cmd = [sys.executable] + sys.argv
553
554 spawn_control_env_vars = " ".join(f"{k}={v}" for k, v in spawn_control_env.items())
555 cmd_line = " ".join(cmd)
556 logger.debug(f"pantsd command is: {spawn_control_env_vars} {cmd_line}")
557
558 # TODO: Improve error handling on launch failures.
559 os.spawnve(os.P_NOWAIT, sys.executable, cmd, env=exec_env)
560
[end of src/python/pants/pantsd/process_manager.py]
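To make the lifecycle API in `process_manager.py` a bit more concrete, here is a minimal usage sketch. It is not part of the pants codebase: the `EchoDaemon` subclass, the `echo-daemon` metadata name, and the sleeping child command are purely illustrative, and the sketch relies only on the `ProcessManager` methods shown above (`daemon_spawn`, `write_pid`, `write_process_name`, `await_pid`, `is_dead`).

```python
import subprocess
import sys

from pants.pantsd.process_manager import ProcessManager


class EchoDaemon(ProcessManager):
    """Hypothetical daemon used only to illustrate the ProcessManager API."""

    def __init__(self, metadata_base_dir: str) -> None:
        super().__init__(name="echo-daemon", metadata_base_dir=metadata_base_dir)

    def post_fork_child(self) -> None:
        # daemon_spawn() leaves pid-writing to the caller: launch the real work
        # via subprocess and record its pid/name so clients can await or kill it.
        proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
        self.write_pid(proc.pid)
        self.write_process_name()


def ensure_running(daemon: EchoDaemon) -> int:
    if daemon.is_dead():
        daemon.daemon_spawn()
    # Blocks until the pid metadata appears on disk, or raises ProcessManager.Timeout.
    return daemon.await_pid(timeout=10)
```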
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
0400d6d28a40d9a056754e5214bafbcf6d3b578c
|
Can't get pants to work with ECR
**Describe the bug**
I am unable to get pants to access Docker repos on ECR. We use ECR repos both to store the base images of images `package`d with pants and as the target registry for images we would like to `publish`.
For now, we are forced to run `docker pull ...` and `docker push ...` manually before and after `./pants package`, and we cannot use the `publish` goal at all. This "works" only because we have bash logic that can predict which images are needed and which will be produced.
Given a Dockerfile like:
```
FROM 000000000000.dkr.ecr.us-east-1.amazonaws.com/base:label
```
It will fail if we do this:
```
$ aws ecr get-login-password | docker login --username AWS --password-stdin "000000000000.dkr.ecr.us-east-1.amazonaws.com"
Login Succeeded
$ ./pants package target
...
#3 ERROR: unexpected status code [manifests label]: 401 Unauthorized
```
Whereas it will succeed if we do this instead:
```
$ aws ecr get-login-password | docker login --username AWS --password-stdin "000000000000.dkr.ecr.us-east-1.amazonaws.com"
Login Succeeded
$ docker pull 000000000000.dkr.ecr.us-east-1.amazonaws.com/base:label
123456789a: Pulling from base-python
Status: Image is up to date for 00000000.dkr.ecr.us-east-1.amazonaws.com/base:label
$ ./pants package target
22:29:31.52 [INFO] Built docker images:
* 000000000000.dkr.ecr.us-east-1.amazonaws.com/target:latest
```
**Pants version**
2.9.0
**OS**
Tested only on Linux.
|
2022-02-04T16:21:13Z
|
<patch>
diff --git a/src/python/pants/backend/docker/subsystems/docker_options.py b/src/python/pants/backend/docker/subsystems/docker_options.py
--- a/src/python/pants/backend/docker/subsystems/docker_options.py
+++ b/src/python/pants/backend/docker/subsystems/docker_options.py
@@ -164,6 +164,16 @@ class DockerOptions(Subsystem):
advanced=True,
metavar="<binary-paths>",
)
+ _tools = StrListOption(
+ "--tools",
+ default=[],
+ help=(
+ "List any additional executable tools required for Docker to work. The paths to "
+ "these tools will be included in the PATH used in the execution sandbox, so that "
+ "they may be used by the Docker client."
+ ),
+ advanced=True,
+ )
@property
def build_args(self) -> tuple[str, ...]:
@@ -173,6 +183,10 @@ def build_args(self) -> tuple[str, ...]:
def env_vars(self) -> tuple[str, ...]:
return tuple(sorted(set(self._env_vars)))
+ @property
+ def tools(self) -> tuple[str, ...]:
+ return tuple(sorted(set(self._tools)))
+
@memoized_method
def registries(self) -> DockerRegistries:
return DockerRegistries.from_dict(self._registries)
diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -3,18 +3,19 @@
from __future__ import annotations
+import os
from dataclasses import dataclass
from typing import Mapping
from pants.backend.docker.subsystems.docker_options import DockerOptions
from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
from pants.core.util_rules.system_binaries import (
- BinaryNotFoundError,
BinaryPath,
BinaryPathRequest,
BinaryPaths,
BinaryPathTest,
- SearchPath,
+ BinaryShims,
+ BinaryShimsRequest,
)
from pants.engine.environment import Environment, EnvironmentRequest
from pants.engine.fs import Digest
@@ -24,10 +25,36 @@
from pants.util.strutil import pluralize
+# The base class is decorated with `frozen_after_init`.
+@dataclass
class DockerBinary(BinaryPath):
"""The `docker` binary."""
- DEFAULT_SEARCH_PATH = SearchPath(("/usr/bin", "/bin", "/usr/local/bin"))
+ extra_env: Mapping[str, str]
+ extra_input_digests: Mapping[str, Digest] | None
+
+ def __init__(
+ self,
+ path: str,
+ fingerprint: str | None = None,
+ extra_env: Mapping[str, str] | None = None,
+ extra_input_digests: Mapping[str, Digest] | None = None,
+ ) -> None:
+ self.extra_env = {} if extra_env is None else extra_env
+ self.extra_input_digests = extra_input_digests
+ super().__init__(path, fingerprint)
+
+ def _get_process_environment(self, env: Mapping[str, str]) -> Mapping[str, str]:
+ if not self.extra_env:
+ return env
+
+ res = {**self.extra_env, **env}
+
+ # Merge the PATH entries, in case they are present in both `env` and `self.extra_env`.
+ res["PATH"] = os.pathsep.join(
+ p for p in (m.get("PATH") for m in (self.extra_env, env)) if p
+ )
+ return res
def build_image(
self,
@@ -58,8 +85,9 @@ def build_image(
f"Building docker image {tags[0]}"
+ (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else "")
),
- env=env,
+ env=self._get_process_environment(env),
input_digest=digest,
+ immutable_input_digests=self.extra_input_digests,
cache_scope=ProcessCacheScope.PER_SESSION,
)
@@ -71,7 +99,8 @@ def push_image(
argv=(self.path, "push", tag),
cache_scope=ProcessCacheScope.PER_SESSION,
description=f"Pushing docker image {tag}",
- env=env,
+ env=self._get_process_environment(env or {}),
+ immutable_input_digests=self.extra_input_digests,
)
for tag in tags
)
@@ -88,16 +117,17 @@ def run_image(
argv=(self.path, "run", *(docker_run_args or []), tag, *(image_args or [])),
cache_scope=ProcessCacheScope.PER_SESSION,
description=f"Running docker image {tag}",
- env=env,
+ env=self._get_process_environment(env or {}),
+ immutable_input_digests=self.extra_input_digests,
)
@dataclass(frozen=True)
class DockerBinaryRequest:
- search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH
+ pass
-@rule(desc="Finding the `docker` binary", level=LogLevel.DEBUG)
+@rule(desc="Finding the `docker` binary and related tooling", level=LogLevel.DEBUG)
async def find_docker(
docker_request: DockerBinaryRequest, docker_options: DockerOptions
) -> DockerBinary:
@@ -105,14 +135,35 @@ async def find_docker(
search_path = docker_options.executable_search_path(env)
request = BinaryPathRequest(
binary_name="docker",
- search_path=search_path or docker_request.search_path,
+ search_path=search_path,
test=BinaryPathTest(args=["-v"]),
)
paths = await Get(BinaryPaths, BinaryPathRequest, request)
- first_path = paths.first_path
- if not first_path:
- raise BinaryNotFoundError.from_request(request, rationale="interact with the docker daemon")
- return DockerBinary(first_path.path, first_path.fingerprint)
+ first_path = paths.first_path_or_raise(request, rationale="interact with the docker daemon")
+
+ if not docker_options.tools:
+ return DockerBinary(first_path.path, first_path.fingerprint)
+
+ tools = await Get(
+ BinaryShims,
+ BinaryShimsRequest,
+ BinaryShimsRequest.for_binaries(
+ *docker_options.tools,
+ rationale="use docker",
+ output_directory="bin",
+ search_path=search_path,
+ ),
+ )
+ tools_path = ".shims"
+ extra_env = {"PATH": os.path.join(tools_path, tools.bin_directory)}
+ extra_input_digests = {tools_path: tools.digest}
+
+ return DockerBinary(
+ first_path.path,
+ first_path.fingerprint,
+ extra_env=extra_env,
+ extra_input_digests=extra_input_digests,
+ )
@rule
</patch>
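The behavioural core of the patch above is that any extra tool locations (for example, a directory of shims wrapping an ECR credential helper) are merged into the `PATH` of the environment handed to the `docker` client. A standalone sketch of that merge, mirroring the patch's `_get_process_environment` rather than importing anything from pants, might look like this:

```python
import os
from typing import Dict, Mapping


def merge_process_env(extra_env: Mapping[str, str], env: Mapping[str, str]) -> Dict[str, str]:
    """Mirror of `_get_process_environment` from the patch: values from `env`
    win on conflicts, except that the PATH entries of both mappings are joined
    so the shimmed tools stay visible to the docker client."""
    if not extra_env:
        return dict(env)
    merged = {**extra_env, **env}
    merged["PATH"] = os.pathsep.join(
        p for p in (m.get("PATH") for m in (extra_env, env)) if p
    )
    return merged


# On a POSIX system this prints {'PATH': '.shims/bin:/usr/bin:/bin', 'HOME': '/home/user'}.
print(merge_process_env({"PATH": ".shims/bin"}, {"PATH": "/usr/bin:/bin", "HOME": "/home/user"}))
```

With the patch applied, users would opt into this by listing the executables that Docker needs (such as a credential helper) under the new `tools` option added to the Docker subsystem in the first hunk of the diff.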
|
[]
|
[]
| ||||
Qiskit__qiskit-3843
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot += the same circuit twice
### Information
- **Qiskit Terra version**: 0.12.0
- **Python version**: 3.7.5
- **Operating system**: macOS Catalina 10.15.2
### What is the current behavior?
The snippet
```
qc = QuantumCircuit(1)
qc2 = qc
qc2 += qc
```
never terminates.
This problem doesn't occur when different circuit instances are added together; it appears only once the same instance is added to itself, at which point the code gets stuck.
### Steps to reproduce the problem
Run the above code snippet.
### What is the expected behavior?
The circuit `qc` should be appended twice to `qc2`.
### Suggested solutions
When `_append` is called, the circuit data of the added object, `rhs.data`, is appended to `self`. Since the elements of `rhs.data` are taken by reference and `rhs` here is the very circuit being extended, every appended instruction also grows `rhs.data`, so the loop never terminates.
This can be fixed by changing
```
for instruction_context in rhs.data: # line 339
```
to
```
for instruction_context in rhs.data.copy():
```
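For illustration (plain Python, not Qiskit code), the aliasing problem and the effect of the proposed `.copy()` fix can be reproduced with a bare list:

```python
data = [1, 2, 3]

# Analogue of the bug: `rhs.data` and `self.data` are the same list, so every
# append grows the sequence being iterated over and the loop never terminates.
# for item in data:
#     data.append(item)  # runs forever

# Analogue of the fix: iterate over a snapshot while appending to the original.
for item in data.copy():
    data.append(item)

print(data)  # [1, 2, 3, 1, 2, 3]
```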
I would self-assign to this issue, but I cannot. I'll open a PR for this.
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = execute(qc, backend_sim).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtODQ2NTIyOTgwMTQ3LTI0NzM2NzkzZjJhNDgzZjY5MTQzNDY3MGNiZGQzNTNkZTE4Nzg1MjMwMmFjY2UwZTgyNDlmYWQwYmZjMjE1ZTM)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, detailed release notes are written for each release to
126 document what has changed. This includes any documentation on potential
127 breaking changes on upgrade and new features.
128 For example, you can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of examples/python/ibmq/using_qiskit_terra_level_1.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Example showing how to use Qiskit at level 1 (intermediate).
17
18 This example shows how an intermediate user interacts with Terra.
19 It builds some circuits and transpiles them with transpile options.
20 It then makes a qobj object which is just a container to be run on a backend.
21 The same qobj can be submitted to many backends (as shown).
22 It is the user's responsibility to make sure it can be run (i.e. it conforms
23 to the restrictions of the backend, if any).
24 This is useful when you want to compare the same
25 circuit on different backends without recompiling the whole circuit,
26 or just want to change some runtime parameters.
27
28 To control the passes that transform the circuit, we have a pass manager
29 for the level 2 user.
30 """
31
32 # Import the Qiskit modules
33 from qiskit import IBMQ, BasicAer
34 from qiskit.circuit import QuantumCircuit
35 from qiskit.compiler import transpile, assemble
36 from qiskit.providers.ibmq import least_busy
37 from qiskit.tools.monitor import job_monitor
38
39 provider = IBMQ.load_account()
40
41 # Making first circuit: bell state
42 qc1 = QuantumCircuit(2, 2, name="bell")
43 qc1.h(0)
44 qc1.cx(0, 1)
45 qc1.measure([0,1], [0,1])
46
47 # Making another circuit: superpositions
48 qc2 = QuantumCircuit(2, 2, name="superposition")
49 qc2.h([0,1])
50 qc2.measure([0,1], [0,1])
51
52 # Setting up the backend
53 print("(Aer Backends)")
54 for backend in BasicAer.backends():
55 print(backend.status())
56 qasm_simulator = BasicAer.get_backend('qasm_simulator')
57
58
59 # Compile and run the circuit on a real device backend
60 # See a list of available remote backends
61 print("\n(IBMQ Backends)")
62 for backend in provider.backends():
63 print(backend.status())
64
65 try:
66 # select least busy available device and execute.
67 least_busy_device = least_busy(provider.backends(simulator=False))
68 except:
69 print("All devices are currently unavailable.")
70
71 print("Running on current least busy device: ", least_busy_device)
72
73 # Transpile the circuits to make them compatible with the experimental backend
74 [qc1_new, qc2_new] = transpile(circuits=[qc1, qc2], backend=least_busy_device)
75
76 print("Bell circuit before transpile:")
77 print(qc1)
78 print("Bell circuit after transpile:")
79 print(qc1_new)
80 print("Superposition circuit before transpile:")
81 print(qc2)
82 print("Superposition circuit after transpile:")
83 print(qc2_new)
84
85 # Assemble the two circuits into a runnable qobj
86 qobj = assemble([qc1_new, qc2_new], shots=1000)
87
88 # Running qobj on the simulator
89 sim_job = qasm_simulator.run(qobj)
90
91 # Getting the result
92 sim_result=sim_job.result()
93
94 # Show the results
95 print(sim_result.get_counts(qc1))
96 print(sim_result.get_counts(qc2))
97
98 # Running the job.
99 exp_job = least_busy_device.run(qobj)
100
101 job_monitor(exp_job)
102 exp_result = exp_job.result()
103
104 # Show the results
105 print(exp_result.get_counts(qc1))
106 print(exp_result.get_counts(qc2))
107
[end of examples/python/ibmq/using_qiskit_terra_level_1.py]
[start of qiskit/transpiler/passes/routing/stochastic_swap.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Map a DAGCircuit onto a `coupling_map` adding swap gates."""
16
17 from logging import getLogger
18 from math import inf
19 from collections import OrderedDict
20 import numpy as np
21
22 from qiskit.circuit.quantumregister import QuantumRegister
23 from qiskit.transpiler.basepasses import TransformationPass
24 from qiskit.transpiler.exceptions import TranspilerError
25 from qiskit.dagcircuit import DAGCircuit
26 from qiskit.extensions.standard import SwapGate
27 from qiskit.transpiler.layout import Layout
28 # pylint: disable=no-name-in-module
29 from .cython.stochastic_swap.utils import nlayout_from_layout
30 # pylint: disable=no-name-in-module
31 from .cython.stochastic_swap.swap_trial import swap_trial
32
33
34 logger = getLogger(__name__)
35
36
37 class StochasticSwap(TransformationPass):
38 """Map a DAGCircuit onto a `coupling_map` adding swap gates.
39
40 Uses a randomized algorithm.
41
42 Notes:
43 1. Measurements may occur and be followed by swaps that result in repeated
44 measurement of the same qubit. Near-term experiments cannot implement
45 these circuits, so some care is required when using this mapper
46 with experimental backend targets.
47
48 2. We do not use the fact that the input state is zero to simplify
49 the circuit.
50 """
51
52 def __init__(self, coupling_map, trials=20, seed=None):
53 """StochasticSwap initializer.
54
55 The coupling map is a connected graph
56
57         If this is not satisfied, the behavior is undefined.
58
59 Args:
60 coupling_map (CouplingMap): Directed graph representing a coupling
61 map.
62 trials (int): maximum number of iterations to attempt
63 seed (int): seed for random number generator
64 """
65 super().__init__()
66 self.coupling_map = coupling_map
67 self.trials = trials
68 self.seed = seed
69 self.qregs = None
70 self.rng = None
71 self.trivial_layout = None
72
73 def run(self, dag):
74 """Run the StochasticSwap pass on `dag`.
75
76 Args:
77 dag (DAGCircuit): DAG to map.
78
79 Returns:
80 DAGCircuit: A mapped DAG.
81
82 Raises:
83 TranspilerError: if the coupling map or the layout are not
84 compatible with the DAG
85 """
86
87 if len(dag.qregs) != 1 or dag.qregs.get('q', None) is None:
88 raise TranspilerError('Basic swap runs on physical circuits only')
89
90 if len(dag.qubits()) > len(self.coupling_map.physical_qubits):
91 raise TranspilerError('The layout does not match the amount of qubits in the DAG')
92
93 canonical_register = dag.qregs['q']
94 self.trivial_layout = Layout.generate_trivial_layout(canonical_register)
95
96 self.qregs = dag.qregs
97 if self.seed is None:
98 self.seed = np.random.randint(0, np.iinfo(np.int32).max)
99 self.rng = np.random.RandomState(self.seed)
100 logger.debug("StochasticSwap RandomState seeded with seed=%s", self.seed)
101
102 new_dag = self._mapper(dag, self.coupling_map, trials=self.trials)
103 return new_dag
104
105 def _layer_permutation(self, layer_partition, layout, qubit_subset,
106 coupling, trials):
107 """Find a swap circuit that implements a permutation for this layer.
108
109 The goal is to swap qubits such that qubits in the same two-qubit gates
110 are adjacent.
111
112 Based on S. Bravyi's algorithm.
113
114 layer_partition (list): The layer_partition is a list of (qu)bit
115 lists and each qubit is a tuple (qreg, index).
116 layout (Layout): The layout is a Layout object mapping virtual
117 qubits in the input circuit to physical qubits in the coupling
118 graph. It reflects the current positions of the data.
119 qubit_subset (list): The qubit_subset is the set of qubits in
120 the coupling graph that we have chosen to map into, as tuples
121 (Register, index).
122 coupling (CouplingMap): Directed graph representing a coupling map.
123 This coupling map should be one that was provided to the
124 stochastic mapper.
125 trials (int): Number of attempts the randomized algorithm makes.
126
127 Returns:
128 Tuple: success_flag, best_circuit, best_depth, best_layout
129
130 If success_flag is True, then best_circuit contains a DAGCircuit with
131 the swap circuit, best_depth contains the depth of the swap circuit,
132 and best_layout contains the new positions of the data qubits after the
133 swap circuit has been applied.
134
135 Raises:
136 TranspilerError: if anything went wrong.
137 """
138 return _layer_permutation(layer_partition,
139 layout, qubit_subset,
140 coupling, trials, self.rng)
141
142 def _layer_update(self, i, best_layout, best_depth,
143 best_circuit, layer_list):
144 """Provide a DAGCircuit for a new mapped layer.
145
146 Args:
147 i (int): layer number
148 best_layout (Layout): layout returned from _layer_permutation
149 best_depth (int): depth returned from _layer_permutation
150 best_circuit (DAGCircuit): swap circuit returned
151 from _layer_permutation
152 layer_list (list): list of DAGCircuit objects for each layer,
153 output of DAGCircuit layers() method
154
155 Returns:
156 DAGCircuit: a DAGCircuit object to append to the output DAGCircuit
157 that the _mapper method is building.
158 """
159 layout = best_layout
160 logger.debug("layer_update: layout = %s", layout)
161 logger.debug("layer_update: self.trivial_layout = %s", self.trivial_layout)
162 dagcircuit_output = DAGCircuit()
163 for qubit in layout.get_virtual_bits().keys():
164 if qubit.register not in dagcircuit_output.qregs.values():
165 dagcircuit_output.add_qreg(qubit.register)
166
167 # Output any swaps
168 if best_depth > 0:
169 logger.debug("layer_update: there are swaps in this layer, "
170 "depth %d", best_depth)
171 dagcircuit_output.extend_back(best_circuit)
172 else:
173 logger.debug("layer_update: there are no swaps in this layer")
174 # Make qubit edge map and extend by classical bits
175 edge_map = layout.combine_into_edge_map(self.trivial_layout)
176 for bit in dagcircuit_output.clbits():
177 edge_map[bit] = bit
178 # Output this layer
179 dagcircuit_output.compose_back(layer_list[i]["graph"], edge_map)
180
181 return dagcircuit_output
182
183 def _mapper(self, circuit_graph, coupling_graph, trials=20):
184 """Map a DAGCircuit onto a CouplingMap using swap gates.
185
186 Use self.trivial_layout for the initial layout.
187
188 Args:
189 circuit_graph (DAGCircuit): input DAG circuit
190 coupling_graph (CouplingMap): coupling graph to map onto
191 trials (int): number of trials.
192
193 Returns:
194 DAGCircuit: object containing a circuit equivalent to
195 circuit_graph that respects couplings in coupling_graph
196
197 Raises:
198 TranspilerError: if there was any error during the mapping
199 or with the parameters.
200 """
201 # Schedule the input circuit by calling layers()
202 layerlist = list(circuit_graph.layers())
203 logger.debug("schedule:")
204 for i, v in enumerate(layerlist):
205 logger.debug(" %d: %s", i, v["partition"])
206
207 qubit_subset = self.trivial_layout.get_virtual_bits().keys()
208
209 # Find swap circuit to precede each layer of input circuit
210 layout = self.trivial_layout.copy()
211
212 # Construct an empty DAGCircuit with the same set of
213 # qregs and cregs as the input circuit
214 dagcircuit_output = DAGCircuit()
215 dagcircuit_output.name = circuit_graph.name
216 for qreg in circuit_graph.qregs.values():
217 dagcircuit_output.add_qreg(qreg)
218 for creg in circuit_graph.cregs.values():
219 dagcircuit_output.add_creg(creg)
220
221 # Make a trivial wire mapping between the subcircuits
222 # returned by _layer_update and the circuit we build
223 identity_wire_map = {}
224 for qubit in circuit_graph.qubits():
225 identity_wire_map[qubit] = qubit
226 for bit in circuit_graph.clbits():
227 identity_wire_map[bit] = bit
228
229 logger.debug("trivial_layout = %s", layout)
230
231 # Iterate over layers
232 for i, layer in enumerate(layerlist):
233
234 # Attempt to find a permutation for this layer
235 success_flag, best_circuit, best_depth, best_layout \
236 = self._layer_permutation(layer["partition"], layout,
237 qubit_subset, coupling_graph,
238 trials)
239 logger.debug("mapper: layer %d", i)
240 logger.debug("mapper: success_flag=%s,best_depth=%s",
241 success_flag, str(best_depth))
242
243 # If this fails, try one gate at a time in this layer
244 if not success_flag:
245 logger.debug("mapper: failed, layer %d, "
246 "retrying sequentially", i)
247 serial_layerlist = list(layer["graph"].serial_layers())
248
249 # Go through each gate in the layer
250 for j, serial_layer in enumerate(serial_layerlist):
251
252 success_flag, best_circuit, best_depth, best_layout = \
253 self._layer_permutation(
254 serial_layer["partition"],
255 layout, qubit_subset,
256 coupling_graph,
257 trials)
258 logger.debug("mapper: layer %d, sublayer %d", i, j)
259 logger.debug("mapper: success_flag=%s,best_depth=%s,",
260 success_flag, str(best_depth))
261
262 # Give up if we fail again
263 if not success_flag:
264 raise TranspilerError("swap mapper failed: " +
265 "layer %d, sublayer %d" % (i, j))
266
267 # Update the record of qubit positions
268 # for each inner iteration
269 layout = best_layout
270 # Update the DAG
271 dagcircuit_output.extend_back(
272 self._layer_update(j,
273 best_layout,
274 best_depth,
275 best_circuit,
276 serial_layerlist),
277 identity_wire_map)
278
279 else:
280 # Update the record of qubit positions for each iteration
281 layout = best_layout
282
283 # Update the DAG
284 dagcircuit_output.extend_back(
285 self._layer_update(i,
286 best_layout,
287 best_depth,
288 best_circuit,
289 layerlist),
290 identity_wire_map)
291
292 # This is the final edgemap. We might use it to correctly replace
293 # any measurements that needed to be removed earlier.
294 logger.debug("mapper: self.trivial_layout = %s", self.trivial_layout)
295 logger.debug("mapper: layout = %s", layout)
296 last_edgemap = layout.combine_into_edge_map(self.trivial_layout)
297 logger.debug("mapper: last_edgemap = %s", last_edgemap)
298
299 return dagcircuit_output
300
301
302 def _layer_permutation(layer_partition, layout, qubit_subset,
303 coupling, trials, rng):
304 """Find a swap circuit that implements a permutation for this layer.
305
306 Args:
307 layer_partition (list): The layer_partition is a list of (qu)bit
308 lists and each qubit is a tuple (qreg, index).
309 layout (Layout): The layout is a Layout object mapping virtual
310 qubits in the input circuit to physical qubits in the coupling
311 graph. It reflects the current positions of the data.
312 qubit_subset (list): The qubit_subset is the set of qubits in
313 the coupling graph that we have chosen to map into, as tuples
314 (Register, index).
315 coupling (CouplingMap): Directed graph representing a coupling map.
316 This coupling map should be one that was provided to the
317 stochastic mapper.
318 trials (int): Number of attempts the randomized algorithm makes.
319 rng (RandomState): Random number generator.
320
321 Returns:
322 Tuple: success_flag, best_circuit, best_depth, best_layout
323
324 Raises:
325 TranspilerError: if anything went wrong.
326 """
327 logger.debug("layer_permutation: layer_partition = %s",
328 layer_partition)
329 logger.debug("layer_permutation: layout = %s",
330 layout.get_virtual_bits())
331 logger.debug("layer_permutation: qubit_subset = %s",
332 qubit_subset)
333 logger.debug("layer_permutation: trials = %s", trials)
334
335 # The input dag is on a flat canonical register
336 # TODO: cleanup the code that is general for multiple qregs below
337 canonical_register = QuantumRegister(len(layout), 'q')
338 qregs = OrderedDict({canonical_register.name: canonical_register})
339
340 gates = [] # list of lists of tuples [[(register, index), ...], ...]
341 for gate_args in layer_partition:
342 if len(gate_args) > 2:
343 raise TranspilerError("Layer contains > 2-qubit gates")
344 if len(gate_args) == 2:
345 gates.append(tuple(gate_args))
346 logger.debug("layer_permutation: gates = %s", gates)
347
348 # Can we already apply the gates? If so, there is no work to do.
349 dist = sum([coupling.distance(layout[g[0]], layout[g[1]])
350 for g in gates])
351 logger.debug("layer_permutation: distance = %s", dist)
352 if dist == len(gates):
353 logger.debug("layer_permutation: nothing to do")
354 circ = DAGCircuit()
355 circ.add_qreg(canonical_register)
356 return True, circ, 0, layout
357
358 # Begin loop over trials of randomized algorithm
359 num_qubits = len(layout)
360 best_depth = inf # initialize best depth
361 best_edges = None # best edges found
362 best_circuit = None # initialize best swap circuit
363 best_layout = None # initialize best final layout
364
365 cdist2 = coupling._dist_matrix**2
366 # Scaling matrix
367 scale = np.zeros((num_qubits, num_qubits))
368
369 int_qubit_subset = regtuple_to_numeric(qubit_subset, qregs)
370 int_gates = gates_to_idx(gates, qregs)
371 int_layout = nlayout_from_layout(layout, qregs, coupling.size())
372
373 trial_circuit = DAGCircuit() # SWAP circuit for this trial
374 for qubit in layout.get_virtual_bits().keys():
375 if qubit.register not in trial_circuit.qregs.values():
376 trial_circuit.add_qreg(qubit.register)
377
378 slice_circuit = DAGCircuit() # circuit for this swap slice
379 for qubit in layout.get_virtual_bits().keys():
380 if qubit.register not in slice_circuit.qregs.values():
381 slice_circuit.add_qreg(qubit.register)
382 edges = np.asarray(coupling.get_edges(), dtype=np.int32).ravel()
383 cdist = coupling._dist_matrix
384 for trial in range(trials):
385 logger.debug("layer_permutation: trial %s", trial)
386 # This is one Trial --------------------------------------
387 dist, optim_edges, trial_layout, depth_step = swap_trial(num_qubits, int_layout,
388 int_qubit_subset,
389 int_gates, cdist2,
390 cdist, edges, scale,
391 rng)
392
393 logger.debug("layer_permutation: final distance for this trial = %s", dist)
394 if dist == len(gates) and depth_step < best_depth:
395 logger.debug("layer_permutation: got circuit with improved depth %s",
396 depth_step)
397 best_edges = optim_edges
398 best_layout = trial_layout
399 best_depth = min(best_depth, depth_step)
400
401 # Break out of trial loop if we found a depth 1 circuit
402 # since we can't improve it further
403 if best_depth == 1:
404 break
405
406 # If we have no best circuit for this layer, all of the
407 # trials have failed
408 if best_layout is None:
409 logger.debug("layer_permutation: failed!")
410 return False, None, None, None
411
412 edges = best_edges.edges()
413 trivial_layout = Layout.generate_trivial_layout(canonical_register)
414 for idx in range(best_edges.size//2):
415 slice_circuit.apply_operation_back(
416 SwapGate(), [trivial_layout[edges[2*idx]], trivial_layout[edges[2*idx+1]]], [])
417 trial_circuit.extend_back(slice_circuit)
418 best_circuit = trial_circuit
419
420 # Otherwise, we return our result for this layer
421 logger.debug("layer_permutation: success!")
422 best_lay = best_layout.to_layout(qregs)
423 return True, best_circuit, best_depth, best_lay
424
425
426 def regtuple_to_numeric(items, qregs):
427 """Takes Qubit instances and converts them into an integer array.
428
429 Args:
430 items (list): List of Qubit instances to convert.
431         qregs (dict): Mapping of register names to QuantumRegister instances.
432 Returns:
433 ndarray: Array of integers.
434 """
435 sizes = [qr.size for qr in qregs.values()]
436 reg_idx = np.cumsum([0]+sizes)
437 regint = {}
438 for ind, qreg in enumerate(qregs.values()):
439 regint[qreg] = ind
440 out = np.zeros(len(items), dtype=np.int32)
441 for idx, val in enumerate(items):
442 out[idx] = reg_idx[regint[val.register]]+val.index
443 return out
444
445
446 def gates_to_idx(gates, qregs):
447 """Converts gate tuples into a nested list of integers.
448
449 Args:
450         gates (list): List of two-qubit gate tuples (pairs of Qubit instances).
451         qregs (dict): Mapping of register names to QuantumRegister instances.
452
453 Returns:
454         ndarray: Array of qubit indices, two consecutive entries per gate.
455 """
456 sizes = [qr.size for qr in qregs.values()]
457 reg_idx = np.cumsum([0]+sizes)
458 regint = {}
459 for ind, qreg in enumerate(qregs.values()):
460 regint[qreg] = ind
461 out = np.zeros(2*len(gates), dtype=np.int32)
462 for idx, gate in enumerate(gates):
463 out[2*idx] = reg_idx[regint[gate[0].register]]+gate[0].index
464 out[2*idx+1] = reg_idx[regint[gate[1].register]]+gate[1].index
465 return out
466
[end of qiskit/transpiler/passes/routing/stochastic_swap.py]
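As a side note on the two helpers at the bottom of `stochastic_swap.py`: they flatten `(register, index)` pairs into plain integer indices for the Cython kernel. The sketch below mirrors that bookkeeping for a single hypothetical 3-qubit register rather than calling the helpers themselves:

```python
from collections import OrderedDict

import numpy as np
from qiskit.circuit import QuantumRegister

qr = QuantumRegister(3, "q")
qregs = OrderedDict({qr.name: qr})

# Offsets at which each register's qubits start in the flat integer numbering.
sizes = [reg.size for reg in qregs.values()]
reg_offsets = np.cumsum([0] + sizes)            # array([0, 3]); 'q' starts at 0
reg_index = {reg: i for i, reg in enumerate(qregs.values())}

# regtuple_to_numeric analogue: Qubit(q, 2) -> 2, Qubit(q, 0) -> 0
qubits = [qr[2], qr[0]]
print([int(reg_offsets[reg_index[bit.register]] + bit.index) for bit in qubits])  # [2, 0]

# gates_to_idx analogue: one two-qubit gate on (q[0], q[1]) -> [0, 1]
gates = [(qr[0], qr[1])]
print([int(reg_offsets[reg_index[g.register]] + g.index) for pair in gates for g in pair])  # [0, 1]
```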
[start of qiskit/validation/base.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Building blocks for Qiskit validated classes.
16
17 This module provides the ``BaseSchema`` and ``BaseModel`` classes as the main
18 building blocks for defining objects (Models) that conform to a specification
19 (Schema) and are validated at instantiation, along with providing facilities
20 for being serialized and deserialized.
21
22 Implementors are recommended to subclass the two classes, and "binding" them
23 together by using ``bind_schema``::
24
25 class PersonSchema(BaseSchema):
26 name = String(required=True)
27
28 @bind_schema(PersonSchema)
29 class Person(BaseModel):
30 pass
31 """
32
33 from functools import wraps
34 from types import SimpleNamespace, MethodType
35
36 from marshmallow import ValidationError
37 from marshmallow import Schema, post_dump, post_load
38 from marshmallow import fields as _fields
39 from marshmallow.utils import is_collection, INCLUDE
40
41 from .exceptions import ModelValidationError
42
43
44 class ModelTypeValidator(_fields.Field):
45 """A field able to validate the correct type of a value."""
46
47 valid_types = (object, )
48
49 def _expected_types(self):
50 return self.valid_types
51
52 def check_type(self, value, attr, data, **_):
53 """Validates a value against the correct type of the field.
54
55 It calls ``_expected_types`` to get a list of valid types.
56
57 Subclasses can do one of the following:
58
59 1. Override the ``valid_types`` property with a tuple with the expected
60 types for this field.
61
62 2. Override the ``_expected_types`` method to return a tuple of
63 expected types for the field.
64
65 3. Change ``check_type`` completely to customize validation.
66
67 Note:
68 This method or the overrides must return the ``value`` parameter
69 untouched.
70 """
71 expected_types = self._expected_types()
72 if not isinstance(value, expected_types):
73 raise self._not_expected_type(
74 value, expected_types, fields=[self], field_names=attr, data=data)
75 return value
76
77 @staticmethod
78 def _not_expected_type(value, type_, **kwargs):
79 if is_collection(type_) and len(type_) == 1:
80 type_ = type_[0]
81
82 if is_collection(type_):
83 body = 'is none of the expected types {}'.format(type_)
84 else:
85 body = 'is not the expected type {}'.format(type_)
86
87 message = 'Value \'{}\' {}: {}'.format(value, type(value), body)
88 return ValidationError(message, **kwargs)
89
90 def make_error_serialize(self, key, **kwargs):
91 """Helper method to return a ValidationError from _serialize.
92
93 This method wraps the result of ``make_error()``, adding contextual
94 information in order to provide more informative information to users.
95
96 Args:
97 key (str): error key index.
98 **kwargs: additional arguments to ``make_error()``.
99
100 Returns:
101 ValidationError: an exception with the field name.
102 """
103 bare_error = self.make_error(key, **kwargs)
104 return ValidationError({self.name: bare_error.messages},
105 field_name=self.name)
106
107
108 class BaseSchema(Schema):
109 """Base class for Schemas for validated Qiskit classes.
110
111 Provides convenience functionality for the Qiskit common use case:
112
113 * deserialization into class instances instead of dicts.
114 * handling of unknown attributes not defined in the schema.
115
116 Attributes:
117 model_cls (type): class used to instantiate the instance. The
118 constructor is passed all named parameters from deserialization.
119 """
120
121 class Meta:
122 """Add extra fields to the schema."""
123 unknown = INCLUDE
124
125 model_cls = SimpleNamespace
126
127 @post_dump(pass_original=True, pass_many=True)
128 def dump_additional_data(self, valid_data, original_data, **kwargs):
129 """Include unknown fields after dumping.
130
131 Unknown fields are added with no processing at all.
132
133 Args:
134 valid_data (dict or list): data collected and returned by ``dump()``.
135 original_data (object or list): object passed to ``dump()`` in the
136 first place.
137 **kwargs: extra arguments from the decorators.
138
139 Returns:
140 dict: the same ``valid_data`` extended with the unknown attributes.
141
142 Inspired by https://github.com/marshmallow-code/marshmallow/pull/595.
143 """
144 if kwargs.get('many'):
145 for i, _ in enumerate(valid_data):
146 additional_keys = set(original_data[i].__dict__) - set(valid_data[i])
147 for key in additional_keys:
148 if key.startswith('_'):
149 continue
150 valid_data[i][key] = getattr(original_data[i], key)
151 else:
152 additional_keys = set(original_data.__dict__) - set(valid_data)
153 for key in additional_keys:
154 if key.startswith('_'):
155 continue
156 valid_data[key] = getattr(original_data, key)
157
158 return valid_data
159
160 @post_load
161 def make_model(self, data, **_):
162 """Make ``load`` return a ``model_cls`` instance instead of a dict."""
163 return self.model_cls(**data)
164
165
166 class _SchemaBinder:
167 """Helper class for the parametrized decorator ``bind_schema``."""
168
169 def __init__(self, schema_cls, **kwargs):
170 """Get the schema for the decorated model."""
171 self._schema_cls = schema_cls
172 self._kwargs = kwargs
173
174 def __call__(self, model_cls):
175 """Augment the model class with the validation API.
176
177 See the docs for ``bind_schema`` for further information.
178 """
179 # Check for double binding of schemas.
180 if self._schema_cls.__dict__.get('model_cls', None) is not None:
181 raise ValueError(
182 'The schema {} can not be bound twice. It is already bound to '
183 '{}. If you want to reuse the schema, use '
184 'subclassing'.format(self._schema_cls, self._schema_cls.model_cls))
185
186 # Set a reference to the Model in the Schema, and vice versa.
187 self._schema_cls.model_cls = model_cls
188 model_cls.schema = self._schema_cls(**self._kwargs)
189
190 # Append the methods to the Model class.
191 model_cls.__init__ = self._validate_after_init(model_cls.__init__)
192
193 # Add a Schema that performs minimal validation to the Model.
194 model_cls.shallow_schema = self._create_validation_schema(self._schema_cls)
195
196 return model_cls
197
198 @staticmethod
199 def _create_validation_schema(schema_cls, **kwargs):
200 """Create a patched Schema for validating models.
201
202 Model validation is not part of Marshmallow. Schemas have a ``validate``
203 method but this delegates execution on ``load``. Similarly, ``load``
204 will call ``_deserialize`` on every field in the schema.
205
206 This function patches the ``_deserialize`` instance method of each
207 field to make it call a custom defined method ``check_type``
208 provided by Qiskit in the different fields at
209 ``qiskit.validation.fields``.
210
211 Returns:
212 BaseSchema: a copy of the original Schema, overriding the
213 ``_deserialize()`` call of its fields.
214 """
215 validation_schema = schema_cls(**kwargs)
216 for _, field in validation_schema.fields.items():
217 if isinstance(field, ModelTypeValidator):
218 validate_function = field.__class__.check_type
219 field._deserialize = MethodType(validate_function, field)
220
221 return validation_schema
222
223 @staticmethod
224 def _validate_after_init(init_method):
225 """Add validation during instantiation.
226
227 The validation is performed depending on the ``validate`` parameter
228 passed to the ``init_method``. If ``False``, the validation will not be
229 performed.
230 """
231 @wraps(init_method)
232 def _decorated(self, **kwargs):
233 # Extract the 'validate' parameter.
234 do_validation = kwargs.pop('validate', True)
235 if do_validation:
236 try:
237 _ = self.shallow_schema._do_load(kwargs, postprocess=False)
238 except ValidationError as ex:
239 raise ModelValidationError(
240 ex.messages, ex.field_name, ex.data, ex.valid_data, **ex.kwargs) from None
241
242 # Set the 'validate' parameter to False, assuming that if a
243             # subclass has been validated, its superclasses will also be valid.
244 return init_method(self, **kwargs, validate=False)
245
246 return _decorated
247
248
249 def bind_schema(schema, **kwargs):
250 """Class decorator for adding schema validation to its instances.
251
252 The decorator acts on the model class by adding:
253 * a class attribute ``schema`` with the schema used for validation
254 * a class attribute ``shallow_schema`` used for validation during
255 instantiation.
256
257 The same schema cannot be bound more than once. If you need to reuse a
258 schema for a different class, create a new schema subclassing the one you
259 want to reuse, and leave the new one empty::
260
261 class MySchema(BaseSchema):
262 title = String()
263
264 class AnotherSchema(MySchema):
265 pass
266
267 @bind_schema(MySchema)
268 class MyModel(BaseModel):
269 pass
270
271 @bind_schema(AnotherSchema)
272 class AnotherModel(BaseModel):
273 pass
274
275 Note:
276 By default, models decorated with this decorator are validated during
277 instantiation. If ``validate=False`` is passed to the constructor, this
278 validation will not be performed.
279
280 Args:
281 schema (class): the schema class used for validation.
282 **kwargs: Additional attributes for the ``marshmallow.Schema``
283 initializer.
284
285 Raises:
286 ValueError: when trying to bind the same schema more than once.
287
288 Return:
289 type: the same class with validation capabilities.
290 """
291 return _SchemaBinder(schema, **kwargs)
292
293
294 def _base_model_from_kwargs(cls, kwargs):
295 """Helper for BaseModel.__reduce__, expanding kwargs."""
296 return cls(**kwargs)
297
298
299 class BaseModel(SimpleNamespace):
300 """Base class for Models for validated Qiskit classes."""
301
302 def __init__(self, validate=True, **kwargs):
303 """BaseModel initializer.
304
305 Note:
306 The ``validate`` argument is used for controlling the behavior of
307 the schema binding, and will not be present on the created object.
308 """
309 # pylint: disable=unused-argument
310 super().__init__(**kwargs)
311
312 def __reduce__(self):
313 """Custom __reduce__ for allowing pickling and unpickling.
314
315 Customize the reduction in order to allow serialization, as the
316 BaseModels need to be pickled during the use of futures by the backends.
317 Instead of returning the class, a helper is used in order to pass the
318 arguments as **kwargs, as it is needed by SimpleNamespace and the
319 standard __reduce__ only allows passing args as a tuple.
320 """
321 return _base_model_from_kwargs, (self.__class__, self.__dict__)
322
323 def __contains__(self, item):
324 """Custom implementation of membership test.
325
326 Implement the ``__contains__`` method for catering to the common case
327 of finding out if a model contains a certain key (``key in model``).
328 """
329 return item in self.__dict__
330
331 def to_dict(self):
332 """Serialize the model into a Python dict of simple types.
333
334 Note that this method requires that the model is bound with
335 ``@bind_schema``.
336 """
337 try:
338 data = self.schema.dump(self)
339 except ValidationError as ex:
340 raise ModelValidationError(
341 ex.messages, ex.field_name, ex.data, ex.valid_data, **ex.kwargs) from None
342
343 return data
344
345 @classmethod
346 def from_dict(cls, dict_):
347 """Deserialize a dict of simple types into an instance of this class.
348
349 Note that this method requires that the model is bound with
350 ``@bind_schema``.
351 """
352 try:
353 data = cls.schema.load(dict_)
354 except ValidationError as ex:
355 raise ModelValidationError(
356 ex.messages, ex.field_name, ex.data, ex.valid_data, **ex.kwargs) from None
357
358 return data
359
360
361 class ObjSchema(BaseSchema):
362 """Generic object schema."""
363 pass
364
365
366 @bind_schema(ObjSchema)
367 class Obj(BaseModel):
368 """Generic object in a Model."""
369 pass
370
[end of qiskit/validation/base.py]
[start of tools/report_ci_failure.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14 """Utility module to open an issue on the repository when CIs fail."""
15
16 import os
17 from github import Github
18
19
20 class CIFailureReporter:
21 """Instances of this class can report to GitHub that the CI is failing.
22
23 """
24
25 def __init__(self, repository, token):
26 """
27 Args:
28 repository (str): a string in the form 'owner/repository-name'
29 indicating the GitHub repository to report against.
30 token (str): a GitHub token obtained following:
31 https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
32 """
33 self._repo = repository
34 self._api = Github(token)
35
36 def report(self, branch, commit, infourl=None, job_name=None):
37 """Report on GitHub that the specified branch is failing to build at
38 the specified commit. The method will open an issue indicating that
39 the branch is failing. If there is an issue already open, it will add a
40 comment instead, avoiding reporting the same failure twice.
41
42 Args:
43 branch (str): branch name to report about.
44 commit (str): commit hash at which the build fails.
45 infourl (str): URL with extra info about the failure such as the
46 build logs.
47 job_name (str): name of the failed ci job.
48 """
49 key_label = self._key_label(branch, job_name)
50 issue_number = self._get_report_issue_number(key_label)
51 if issue_number:
52 self._report_as_comment(issue_number, branch, commit, infourl)
53 else:
54 self._report_as_issue(branch, commit, infourl, job_name)
55
56 def _key_label(self, branch_name, job_name):
57 if job_name == 'Randomized tests':
58 return 'randomized test'
59 elif job_name == 'Benchmarks':
60 return 'benchmarks failing'
61 elif branch_name == 'master':
62 return 'master failing'
63 elif branch_name == 'stable':
64 return 'stable failing'
65 else:
66 return ''
67
68 def _get_report_issue_number(self, key_label):
69 query = 'state:open label:"{}" repo:{}'.format(
70 key_label, self._repo)
71 results = self._api.search_issues(query=query)
72 try:
73 return results[0].number
74 except IndexError:
75 return None
76
77 def _report_as_comment(self, issue_number, branch, commit, infourl):
78 stamp = _branch_is_failing_stamp(branch, commit)
79 report_exists = self._check_report_existence(issue_number, stamp)
80 if not report_exists:
81 _, body = _branch_is_failing_template(branch, commit, infourl)
82 message_body = '{}\n{}'.format(stamp, body)
83 self._post_new_comment(issue_number, message_body)
84
85 def _check_report_existence(self, issue_number, target):
86 repo = self._api.get_repo(self._repo)
87 issue = repo.get_issue(issue_number)
88 if target in issue.body:
89 return True
90
91 for comment in issue.get_comments():
92 if target in comment.body:
93 return True
94
95 return False
96
97 def _report_as_issue(self, branch, commit, infourl, key_label):
98 repo = self._api.get_repo(self._repo)
99 stamp = _branch_is_failing_stamp(branch, commit)
100 title, body = _branch_is_failing_template(branch, commit, infourl)
101 message_body = '{}\n{}'.format(stamp, body)
102 repo.create_issue(title=title, body=message_body,
103 labels=[key_label])
104
105 def _post_new_comment(self, issue_number, body):
106 repo = self._api.get_repo(self._repo)
107 issue = repo.get_issue(issue_number)
108 issue.create_comment(body)
109
110
111 def _branch_is_failing_template(branch, commit, infourl):
112 title = 'Branch `{}` is failing'.format(branch)
113 body = 'Trying to build `{}` at commit {} failed.'.format(branch, commit)
114 if infourl:
115 body += '\nMore info at: {}'.format(infourl)
116 return title, body
117
118
119 def _branch_is_failing_stamp(branch, commit):
120 return '<!-- commit {}@{} -->'.format(commit, branch)
121
122
123 _REPOSITORY = 'Qiskit/qiskit-terra'
124 _GH_TOKEN = os.getenv('GH_TOKEN')
125
126
127 def _get_repo_name():
128 return os.getenv('TRAVIS_REPO_SLUG') or os.getenv('APPVEYOR_REPO_NAME')
129
130
131 def _get_branch_name():
132 return os.getenv('TRAVIS_BRANCH') or os.getenv('APPVEYOR_REPO_BRANCH')
133
134
135 def _get_commit_hash():
136 return os.getenv('TRAVIS_COMMIT') or os.getenv('APPVEYOR_REPO_COMMIT')
137
138
139 def _get_job_name():
140 return os.getenv('TRAVIS_JOB_NAME') or os.getenv('APPVEYOR_JOB_NAME')
141
142
143 def _get_info_url():
144 if os.getenv('TRAVIS'):
145 job_id = os.getenv('TRAVIS_JOB_ID')
146 return 'https://travis-ci.com/{}/jobs/{}'.format(_REPOSITORY, job_id)
147
148 if os.getenv('APPVEYOR'):
149 build_id = os.getenv('APPVEYOR_BUILD_ID')
150 return 'https://ci.appveyor.com/project/{}/build/{}'.format(_REPOSITORY, build_id)
151
152 return None
153
154
155 if __name__ == '__main__':
156 _REPORTER = CIFailureReporter(_get_repo_name(), _GH_TOKEN)
157 _REPORTER.report(_get_branch_name(), _get_commit_hash(),
158 _get_info_url(), _get_job_name())
159
[end of tools/report_ci_failure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
9c3e6c6151b9d7c3e5635082b5e704876b8a6cc5
|
Cannot += the same circuit twice
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.12.0
- **Python version**: 3.7.5
- **Operating system**: macOS Catalina 10.15.2
### What is the current behavior?
The snippet
```
qc = QuantumCircuit(1)
qc2 = qc
qc2 += qc
```
never terminates.
This problem doesn't occur if different circuit instances are added together; only when the same instance is added to itself does the code get stuck.
### Steps to reproduce the problem
Run the above code snippet.
### What is the expected behavior?
The circuit `qc` should be appended twice to `qc2`.
### Suggested solutions
When calling `_append`, the circuit data of the added object, `rhs.data`, is appended to `self`. Since the elements of `rhs.data` are taken by reference and `rhs` is the very circuit being extended, every appended instruction grows not only `self` but also `rhs`, so the loop never runs out of elements.
This can be fixed by changing
```
for instruction_context in rhs.data: # line 339
```
to
```
for instruction_context in rhs.data.copy():
```
I would self-assign to this issue, but I cannot. I'll open a PR for this.
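For readers who want to see the mechanism in isolation, here is a minimal sketch (plain Python, not Qiskit code) of why iterating over a list while appending to that same list never terminates, and why iterating over a copy does:
```python
data = [1, 2, 3]

# Never terminates: `data` grows by one element on every iteration,
# so the iterator always has another item to yield.
# for item in data:
#     data.append(item)

# Terminates: iterate over a snapshot, mirroring `rhs.data.copy()`.
for item in data.copy():
    data.append(item)

print(data)  # [1, 2, 3, 1, 2, 3]
```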
|
2020-02-14T11:04:33Z
|
<patch>
diff --git a/qiskit/circuit/quantumcircuit.py b/qiskit/circuit/quantumcircuit.py
--- a/qiskit/circuit/quantumcircuit.py
+++ b/qiskit/circuit/quantumcircuit.py
@@ -334,8 +334,12 @@ def extend(self, rhs):
if element not in self.cregs:
self.cregs.append(element)
+ # Copy the circuit data if rhs and self are the same, otherwise the data of rhs is
+ # appended to both self and rhs resulting in an infinite loop
+ data = rhs.data.copy() if rhs is self else rhs.data
+
# Add new gates
- for instruction_context in rhs.data:
+ for instruction_context in data:
self._append(*instruction_context)
return self
</patch>
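As a quick usage check of the intended behavior once the patch above is applied (my sketch, assuming a qiskit-terra build that includes this fix):
```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(1)
qc.h(0)
qc += qc  # terminates now: qc is extended with a copy of its own data

print(len(qc.data))  # 2 -- the H gate appears twice
```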
|
[]
|
[]
| ||||
mesonbuild__meson-1156
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--backend vs2015 always crashes on non-english Windows
```
The Meson build system
Version: 0.36.0
Source dir: F:\avian\hello-c
Build dir: F:\avian\hello-c\build
Build type: native build
Project name: hello
Traceback (most recent call last):
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\mesonmain.py", line 283, in run
app.generate()
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\mesonmain.py", line 163, in generate
intr = interpreter.Interpreter(b, g)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1190, in __init__
self.parse_project()
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1260, in parse_project
self.evaluate_codeblock(self.ast, end=1)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1358, in evaluate_codeblock
raise e
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1352, in evaluate_codeblock
self.evaluate_statement(cur)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1473, in evaluate_statement
return self.function_call(cur)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 2494, in function_call
return self.funcs[func_name](node, self.flatten(posargs), kwargs)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 74, in wrapped
return f(self, node, args, kwargs)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1688, in func_project
self.add_languages(args[1:], True)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1804, in add_languages
(comp, cross_comp) = self.detect_compilers(lang, need_cross_compiler)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1773, in detect_compilers
comp.sanity_check(self.environment.get_scratch_dir(), self.environment)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\compilers.py", line 684, in sanity_check
return self.sanity_check_impl(work_dir, environment, 'sanitycheckc.c', code)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\compilers.py", line 659, in sanity_check_impl
stde = stde.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 0: invalid start byte
```
</issue>
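For context, `0x8e` from the traceback happens to be the first byte of Cyrillic text such as 'Ошибка' ("error") in the OEM codepage cp866, which localized MSVC output typically uses on a Russian console. A minimal, Meson-independent sketch of the decode failure (my illustration, not part of the report):
```python
# Hypothetical localized compiler message, encoded the way a Russian
# Windows console (codepage 866) would emit it.
oem_bytes = 'Ошибка'.encode('cp866')
print(oem_bytes[:1])          # b'\x8e' -- the byte the traceback complains about

try:
    oem_bytes.decode()        # default is strict UTF-8
except UnicodeDecodeError as err:
    print(err)                # 'utf-8' codec can't decode byte 0x8e in position 0 ...

print(oem_bytes.decode('cp866'))  # decodes fine with the right codepage
```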
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://ci.appveyor.com/project/jpakkane/meson)
12
13 #### Dependencies
14
15 - [Python](http://python.org) (version 3.4 or newer)
16 - [Ninja](https://ninja-build.org)
17
18 #### Installing from source
19
20 You can run Meson directly from a revision control checkout or an
21 extracted tarball. If you wish you can install it locally with the
22 standard Python distutils command `python3 setup.py install <your
23 options here>`.
24
25 Meson is also available from
26 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
27 with `pip3 install meson` (this does not require a source checkout,
28 pip will download the package automatically). The exact command to
29 type to install with pip can very between systems, be sure to use the
30 Python 3 version of pip.
31
32 #### Creating a standalone script
33
34 Meson can be run as a [Python zip
35 app](https://docs.python.org/3/library/zipapp.html). To generate the
36 executable run the following command:
37
38 python3 -m zipapp -p '/usr/bin/env python3' -m meson:main -o meson <source checkout>
39
40 Note that the source checkout may not be `meson` because it would
41 clash with the generated binary name.
42
43 This will zip all files inside the source checkout into the script
44 which includes hundreds of tests, so you might want to temporarily
45 remove those before running it.
46
47 #### Running
48
49 Meson requires that you have a source directory and a build directory
50 and that these two are different. In your source root must exist a file
51 called 'meson.build'. To generate the build system run this command:
52
53 `meson <source directory> <build directory>`
54
55 Depending on how you obtained Meson the command might also be called
56 `meson.py` instead of plain `meson`. In the rest of this document we
57 are going to use the latter form.
58
59 You can omit either of the two directories, and Meson will substitute
60 the current directory and autodetect what you mean. This allows you to
61 do things like this:
62
63 `cd source_root; mkdir build; cd build; meson ..`
64
65 or
66
67 `cd source_root; mkdir build; meson build`
68
69 To compile, cd into your build directory and type `ninja`. To run unit
70 tests, type `ninja test`.
71
72 Install is the same but it can take an extra argument:
73
74 `DESTDIR=/destdir/path ninja install`
75
76 `DESTDIR` can be omitted. If you are installing to system directories,
77 you may need to run this command with sudo.
78
79
80 #### Contributing
81
82 We love code contributions. See the contributing.txt file for
83 details.
84
85
86 #### IRC
87
88 The irc channel for Meson is `#mesonbuild` over at Freenode.
89
90
91 #### Further info
92
93 More information about the Meson build system can be found at the
94 [project's home page](http://mesonbuild.com).
95
96 Meson is a registered trademark of Jussi Pakkanen
97
[end of README.md]
[start of mesonbuild/environment.py]
1 # Copyright 2012-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, re, subprocess, platform
16 from . import coredata
17 from . import mesonlib
18 from . import mlog
19 from .compilers import *
20 import configparser
21 import shutil
22
23 build_filename = 'meson.build'
24
25 class EnvironmentException(mesonlib.MesonException):
26 def __init__(self, *args, **kwargs):
27 super().__init__(*args, **kwargs)
28
29 def find_coverage_tools():
30 gcovr_exe = 'gcovr'
31 lcov_exe = 'lcov'
32 genhtml_exe = 'genhtml'
33
34 if not mesonlib.exe_exists([gcovr_exe, '--version']):
35 gcovr_exe = None
36 if not mesonlib.exe_exists([lcov_exe, '--version']):
37 lcov_exe = None
38 if not mesonlib.exe_exists([genhtml_exe, '--version']):
39 genhtml_exe = None
40 return (gcovr_exe, lcov_exe, genhtml_exe)
41
42 def detect_ninja():
43 for n in ['ninja', 'ninja-build']:
44 try:
45 p = subprocess.Popen([n, '--version'], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
46 except (FileNotFoundError, PermissionError):
47 # Doesn't exist in PATH or isn't executable
48 continue
49 version = p.communicate()[0].decode(errors='ignore')
50 # Perhaps we should add a way for the caller to know the failure mode
51 # (not found or too old)
52 if p.returncode == 0 and mesonlib.version_compare(version, ">=1.6"):
53 return n
54
55 def detect_native_windows_arch():
56 """
57 The architecture of Windows itself: x86 or amd64
58 """
59 # These env variables are always available. See:
60 # https://msdn.microsoft.com/en-us/library/aa384274(VS.85).aspx
61 # https://blogs.msdn.microsoft.com/david.wang/2006/03/27/howto-detect-process-bitness/
62 arch = os.environ.get('PROCESSOR_ARCHITEW6432', '').lower()
63 if not arch:
64 try:
65 # If this doesn't exist, something is messing with the environment
66 arch = os.environ['PROCESSOR_ARCHITECTURE'].lower()
67 except KeyError:
68 raise InterpreterException('Unable to detect native OS architecture')
69 return arch
70
71 def detect_windows_arch(compilers):
72 """
73 Detecting the 'native' architecture of Windows is not a trivial task. We
74 cannot trust that the architecture that Python is built for is the 'native'
75 one because you can run 32-bit apps on 64-bit Windows using WOW64 and
76 people sometimes install 32-bit Python on 64-bit Windows.
77
78 We also can't rely on the architecture of the OS itself, since it's
79 perfectly normal to compile and run 32-bit applications on Windows as if
80 they were native applications. It's a terrible experience to require the
81 user to supply a cross-info file to compile 32-bit applications on 64-bit
82 Windows. Thankfully, the only way to compile things with Visual Studio on
83 Windows is by entering the 'msvc toolchain' environment, which can be
84 easily detected.
85
86 In the end, the sanest method is as follows:
87 1. Check if we're in an MSVC toolchain environment, and if so, return the
88 MSVC toolchain architecture as our 'native' architecture.
89 2. If not, check environment variables that are set by Windows and WOW64 to
90 find out the architecture that Windows is built for, and use that as our
91 'native' architecture.
92 """
93 os_arch = detect_native_windows_arch()
94 if os_arch != 'amd64':
95 return os_arch
96 # If we're on 64-bit Windows, 32-bit apps can be compiled without
97 # cross-compilation. So if we're doing that, just set the native arch as
98 # 32-bit and pretend like we're running under WOW64. Else, return the
99 # actual Windows architecture that we deduced above.
100 for compiler in compilers.values():
101 # Check if we're using and inside an MSVC toolchain environment
102 if compiler.id == 'msvc' and 'VCINSTALLDIR' in os.environ:
103 # 'Platform' is only set when the target arch is not 'x86'.
104 # It's 'x64' when targeting x86_64 and 'arm' when targeting ARM.
105 platform = os.environ.get('Platform', 'x86').lower()
106 if platform == 'x86':
107 return platform
108 if compiler.id == 'gcc' and compiler.has_define('__i386__'):
109 return 'x86'
110 return os_arch
111
112 def detect_cpu_family(compilers):
113 """
114 Python is inconsistent in its platform module.
115 It returns different values for the same cpu.
116 For x86 it might return 'x86', 'i686' or somesuch.
117 Do some canonicalization.
118 """
119 if mesonlib.is_windows():
120 trial = detect_windows_arch(compilers)
121 else:
122 trial = platform.machine().lower()
123 if trial.startswith('i') and trial.endswith('86'):
124 return 'x86'
125 if trial.startswith('arm'):
126 return 'arm'
127 if trial in ('amd64', 'x64'):
128 return 'x86_64'
129 # Add fixes here as bugs are reported.
130 return trial
131
132 def detect_cpu(compilers):
133 if mesonlib.is_windows():
134 trial = detect_windows_arch(compilers)
135 else:
136 trial = platform.machine().lower()
137 if trial in ('amd64', 'x64'):
138 return 'x86_64'
139 # Add fixes here as bugs are reported.
140 return trial
141
142 def detect_system():
143 return platform.system().lower()
144
145
146 def for_windows(is_cross, env):
147 """
148 Host machine is windows?
149
150 Note: 'host' is the machine on which compiled binaries will run
151 """
152 if not is_cross:
153 return mesonlib.is_windows()
154 elif env.cross_info.has_host():
155 return env.cross_info.config['host_machine']['system'] == 'windows'
156 return False
157
158 def for_darwin(is_cross, env):
159 """
160 Host machine is Darwin (iOS/OS X)?
161
162 Note: 'host' is the machine on which compiled binaries will run
163 """
164 if not is_cross:
165 return mesonlib.is_osx()
166 elif env.cross_info.has_host():
167 return env.cross_info.config['host_machine']['system'] == 'darwin'
168 return False
169
170
171 def search_version(text):
172 # Usually of the type 4.1.4 but compiler output may contain
173 # stuff like this:
174 # (Sourcery CodeBench Lite 2014.05-29) 4.8.3 20140320 (prerelease)
175 # Limiting major version number to two digits seems to work
176 # thus far. When we get to GCC 100, this will break, but
177 # if we are still relevant when that happens, it can be
178 # considered an achievement in itself.
179 #
180 # This regex is reaching magic levels. If it ever needs
181 # to be updated, do not complexify but convert to something
182 # saner instead.
183 version_regex = '(?<!(\d|\.))(\d{1,2}(\.\d+)+(-[a-zA-Z0-9]+)?)'
184 match = re.search(version_regex, text)
185 if match:
186 return match.group(0)
187 return 'unknown version'
188
189 class Environment():
190 private_dir = 'meson-private'
191 log_dir = 'meson-logs'
192 coredata_file = os.path.join(private_dir, 'coredata.dat')
193
194 def __init__(self, source_dir, build_dir, main_script_launcher, options, original_cmd_line_args):
195 self.source_dir = source_dir
196 self.build_dir = build_dir
197 self.meson_script_launcher = main_script_launcher
198 self.scratch_dir = os.path.join(build_dir, Environment.private_dir)
199 self.log_dir = os.path.join(build_dir, Environment.log_dir)
200 os.makedirs(self.scratch_dir, exist_ok=True)
201 os.makedirs(self.log_dir, exist_ok=True)
202 try:
203 cdf = os.path.join(self.get_build_dir(), Environment.coredata_file)
204 self.coredata = coredata.load(cdf)
205 self.first_invocation = False
206 except FileNotFoundError:
207 # WARNING: Don't use any values from coredata in __init__. It gets
208 # re-initialized with project options by the interpreter during
209 # build file parsing.
210 self.coredata = coredata.CoreData(options)
211 self.coredata.meson_script_launcher = self.meson_script_launcher
212 self.first_invocation = True
213 if self.coredata.cross_file:
214 self.cross_info = CrossBuildInfo(self.coredata.cross_file)
215 else:
216 self.cross_info = None
217 self.cmd_line_options = options
218 self.original_cmd_line_args = original_cmd_line_args
219
220 # List of potential compilers.
221 if mesonlib.is_windows():
222 self.default_c = ['cl', 'cc', 'gcc', 'clang']
223 self.default_cpp = ['cl', 'c++', 'g++', 'clang++']
224 else:
225 self.default_c = ['cc']
226 self.default_cpp = ['c++']
227 self.default_objc = ['cc']
228 self.default_objcpp = ['c++']
229 self.default_fortran = ['gfortran', 'g95', 'f95', 'f90', 'f77']
230 self.default_static_linker = 'ar'
231 self.vs_static_linker = 'lib'
232
233 # Various prefixes and suffixes for import libraries, shared libraries,
234 # static libraries, and executables.
235 # Versioning is added to these names in the backends as-needed.
236 cross = self.is_cross_build()
237 if (not cross and mesonlib.is_windows()) \
238 or (cross and self.cross_info.has_host() and self.cross_info.config['host_machine']['system'] == 'windows'):
239 self.exe_suffix = 'exe'
240 self.object_suffix = 'obj'
241 self.win_libdir_layout = True
242 else:
243 self.exe_suffix = ''
244 self.object_suffix = 'o'
245 self.win_libdir_layout = False
246
247 def is_cross_build(self):
248 return self.cross_info is not None
249
250 def dump_coredata(self, mtime):
251 cdf = os.path.join(self.get_build_dir(), Environment.coredata_file)
252 coredata.save(self.coredata, cdf)
253 os.utime(cdf, times=(mtime, mtime))
254
255 def get_script_dir(self):
256 import mesonbuild.scripts
257 return os.path.dirname(mesonbuild.scripts.__file__)
258
259 def get_log_dir(self):
260 return self.log_dir
261
262 def get_coredata(self):
263 return self.coredata
264
265 def get_build_command(self):
266 return self.meson_script_launcher
267
268 def is_header(self, fname):
269 return is_header(fname)
270
271 def is_source(self, fname):
272 return is_source(fname)
273
274 def is_object(self, fname):
275 return is_object(fname)
276
277 def is_library(self, fname):
278 return is_library(fname)
279
280 def had_argument_for(self, option):
281 trial1 = '--' + option
282 trial2 = '-D' + option
283 previous_is_plaind = False
284 for i in self.original_cmd_line_args:
285 if i.startswith(trial1) or i.startswith(trial2):
286 return True
287 if previous_is_plaind and i.startswith(option):
288 return True
289 previous_is_plaind = i == '-D'
290 return False
291
292 def merge_options(self, options):
293 for (name, value) in options.items():
294 if name not in self.coredata.user_options:
295 self.coredata.user_options[name] = value
296 else:
297 oldval = self.coredata.user_options[name]
298 if type(oldval) != type(value):
299 self.coredata.user_options[name] = value
300
301 @staticmethod
302 def get_gnu_compiler_defines(compiler):
303 """
304 Detect GNU compiler platform type (Apple, MinGW, Unix)
305 """
306 # Arguments to output compiler pre-processor defines to stdout
307 # gcc, g++, and gfortran all support these arguments
308 args = compiler + ['-E', '-dM', '-']
309 p = subprocess.Popen(args, universal_newlines=True,
310 stdin=subprocess.PIPE, stdout=subprocess.PIPE)
311 output = p.communicate('')[0]
312 if p.returncode != 0:
313 raise EnvironmentException('Unable to detect GNU compiler type:\n' + output)
314 # Parse several lines of the type:
315 # `#define ___SOME_DEF some_value`
316 # and extract `___SOME_DEF`
317 defines = {}
318 for line in output.split('\n'):
319 if not line:
320 continue
321 d, *rest = line.split(' ', 2)
322 if d != '#define':
323 continue
324 if len(rest) == 1:
325 defines[rest[0]] = True
326 if len(rest) == 2:
327 defines[rest[0]] = rest[1]
328 return defines
329 @staticmethod
330 def get_gnu_version_from_defines(defines):
331 dot = '.'
332 major = defines.get('__GNUC__', '0')
333 minor = defines.get('__GNUC_MINOR__', '0')
334 patch = defines.get('__GNUC_PATCHLEVEL__', '0')
335 return dot.join((major, minor, patch))
336
337 @staticmethod
338 def get_gnu_compiler_type(defines):
339 # Detect GCC type (Apple, MinGW, Cygwin, Unix)
340 if '__APPLE__' in defines:
341 return GCC_OSX
342 elif '__MINGW32__' in defines or '__MINGW64__' in defines:
343 return GCC_MINGW
344 # We ignore Cygwin for now, and treat it as a standard GCC
345 return GCC_STANDARD
346
347 def detect_c_compiler(self, want_cross):
348 evar = 'CC'
349 if self.is_cross_build() and want_cross:
350 compilers = [self.cross_info.config['binaries']['c']]
351 ccache = []
352 is_cross = True
353 if self.cross_info.need_exe_wrapper():
354 exe_wrap = self.cross_info.config['binaries'].get('exe_wrapper', None)
355 else:
356 exe_wrap = []
357 elif evar in os.environ:
358 compilers = os.environ[evar].split()
359 ccache = []
360 is_cross = False
361 exe_wrap = None
362 else:
363 compilers = self.default_c
364 ccache = self.detect_ccache()
365 is_cross = False
366 exe_wrap = None
367 popen_exceptions = {}
368 for compiler in compilers:
369 try:
370 basename = os.path.basename(compiler).lower()
371 if basename == 'cl' or basename == 'cl.exe':
372 arg = '/?'
373 else:
374 arg = '--version'
375 p = subprocess.Popen([compiler, arg], stdout=subprocess.PIPE,
376 stderr=subprocess.PIPE)
377 except OSError as e:
378 popen_exceptions[' '.join([compiler, arg])] = e
379 continue
380 (out, err) = p.communicate()
381 out = out.decode(errors='ignore')
382 err = err.decode(errors='ignore')
383 version = search_version(out)
384 if 'Free Software Foundation' in out:
385 defines = self.get_gnu_compiler_defines([compiler])
386 if not defines:
387 popen_exceptions[compiler] = 'no pre-processor defines'
388 continue
389 gtype = self.get_gnu_compiler_type(defines)
390 version = self.get_gnu_version_from_defines(defines)
391 return GnuCCompiler(ccache + [compiler], version, gtype, is_cross, exe_wrap, defines)
392 if 'clang' in out:
393 if 'Apple' in out:
394 cltype = CLANG_OSX
395 else:
396 cltype = CLANG_STANDARD
397 return ClangCCompiler(ccache + [compiler], version, cltype, is_cross, exe_wrap)
398 if 'Microsoft' in out or 'Microsoft' in err:
399 # Visual Studio prints version number to stderr but
400 # everything else to stdout. Why? Lord only knows.
401 version = search_version(err)
402 return VisualStudioCCompiler([compiler], version, is_cross, exe_wrap)
403 errmsg = 'Unknown compiler(s): "' + ', '.join(compilers) + '"'
404 if popen_exceptions:
405 errmsg += '\nThe following exceptions were encountered:'
406 for (c, e) in popen_exceptions.items():
407 errmsg += '\nRunning "{0}" gave "{1}"'.format(c, e)
408 raise EnvironmentException(errmsg)
409
410 def detect_fortran_compiler(self, want_cross):
411 evar = 'FC'
412 if self.is_cross_build() and want_cross:
413 compilers = [self.cross_info['fortran']]
414 is_cross = True
415 if self.cross_info.need_exe_wrapper():
416 exe_wrap = self.cross_info.get('exe_wrapper', None)
417 else:
418 exe_wrap = []
419 elif evar in os.environ:
420 compilers = os.environ[evar].split()
421 is_cross = False
422 exe_wrap = None
423 else:
424 compilers = self.default_fortran
425 is_cross = False
426 exe_wrap = None
427 popen_exceptions = {}
428 for compiler in compilers:
429 for arg in ['--version', '-V']:
430 try:
431 p = subprocess.Popen([compiler, arg],
432 stdout=subprocess.PIPE,
433 stderr=subprocess.PIPE)
434 except OSError as e:
435 popen_exceptions[' '.join([compiler, arg])] = e
436 continue
437 (out, err) = p.communicate()
438 out = out.decode(errors='ignore')
439 err = err.decode(errors='ignore')
440
441 version = search_version(out)
442
443 if 'GNU Fortran' in out:
444 defines = self.get_gnu_compiler_defines([compiler])
445 if not defines:
446 popen_exceptions[compiler] = 'no pre-processor defines'
447 continue
448 gtype = self.get_gnu_compiler_type(defines)
449 version = self.get_gnu_version_from_defines(defines)
450 return GnuFortranCompiler([compiler], version, gtype, is_cross, exe_wrap, defines)
451
452 if 'G95' in out:
453 return G95FortranCompiler([compiler], version, is_cross, exe_wrap)
454
455 if 'Sun Fortran' in err:
456 version = search_version(err)
457 return SunFortranCompiler([compiler], version, is_cross, exe_wrap)
458
459 if 'ifort (IFORT)' in out:
460 return IntelFortranCompiler([compiler], version, is_cross, exe_wrap)
461
462 if 'PathScale EKOPath(tm)' in err:
463 return PathScaleFortranCompiler([compiler], version, is_cross, exe_wrap)
464
465 if 'pgf90' in out:
466 return PGIFortranCompiler([compiler], version, is_cross, exe_wrap)
467
468 if 'Open64 Compiler Suite' in err:
469 return Open64FortranCompiler([compiler], version, is_cross, exe_wrap)
470
471 if 'NAG Fortran' in err:
472 return NAGFortranCompiler([compiler], version, is_cross, exe_wrap)
473 errmsg = 'Unknown compiler(s): "' + ', '.join(compilers) + '"'
474 if popen_exceptions:
475 errmsg += '\nThe following exceptions were encountered:'
476 for (c, e) in popen_exceptions.items():
477 errmsg += '\nRunning "{0}" gave "{1}"'.format(c, e)
478 raise EnvironmentException(errmsg)
479
480 def get_scratch_dir(self):
481 return self.scratch_dir
482
483 def get_depfixer(self):
484 path = os.path.split(__file__)[0]
485 return os.path.join(path, 'depfixer.py')
486
487 def detect_cpp_compiler(self, want_cross):
488 evar = 'CXX'
489 if self.is_cross_build() and want_cross:
490 compilers = [self.cross_info.config['binaries']['cpp']]
491 ccache = []
492 is_cross = True
493 if self.cross_info.need_exe_wrapper():
494 exe_wrap = self.cross_info.config['binaries'].get('exe_wrapper', None)
495 else:
496 exe_wrap = []
497 elif evar in os.environ:
498 compilers = os.environ[evar].split()
499 ccache = []
500 is_cross = False
501 exe_wrap = None
502 else:
503 compilers = self.default_cpp
504 ccache = self.detect_ccache()
505 is_cross = False
506 exe_wrap = None
507 popen_exceptions = {}
508 for compiler in compilers:
509 basename = os.path.basename(compiler).lower()
510 if basename == 'cl' or basename == 'cl.exe':
511 arg = '/?'
512 else:
513 arg = '--version'
514 try:
515 p = subprocess.Popen([compiler, arg],
516 stdout=subprocess.PIPE,
517 stderr=subprocess.PIPE)
518 except OSError as e:
519 popen_exceptions[' '.join([compiler, arg])] = e
520 continue
521 (out, err) = p.communicate()
522 out = out.decode(errors='ignore')
523 err = err.decode(errors='ignore')
524 version = search_version(out)
525 if 'Free Software Foundation' in out:
526 defines = self.get_gnu_compiler_defines([compiler])
527 if not defines:
528 popen_exceptions[compiler] = 'no pre-processor defines'
529 continue
530 gtype = self.get_gnu_compiler_type(defines)
531 version = self.get_gnu_version_from_defines(defines)
532 return GnuCPPCompiler(ccache + [compiler], version, gtype, is_cross, exe_wrap, defines)
533 if 'clang' in out:
534 if 'Apple' in out:
535 cltype = CLANG_OSX
536 else:
537 cltype = CLANG_STANDARD
538 return ClangCPPCompiler(ccache + [compiler], version, cltype, is_cross, exe_wrap)
539 if 'Microsoft' in out or 'Microsoft' in err:
540 version = search_version(err)
541 return VisualStudioCPPCompiler([compiler], version, is_cross, exe_wrap)
542 errmsg = 'Unknown compiler(s): "' + ', '.join(compilers) + '"'
543 if popen_exceptions:
544 errmsg += '\nThe following exceptions were encountered:'
545 for (c, e) in popen_exceptions.items():
546 errmsg += '\nRunning "{0}" gave "{1}"'.format(c, e)
547 raise EnvironmentException(errmsg)
548
549 def detect_objc_compiler(self, want_cross):
550 if self.is_cross_build() and want_cross:
551 exelist = [self.cross_info['objc']]
552 is_cross = True
553 if self.cross_info.need_exe_wrapper():
554 exe_wrap = self.cross_info.get('exe_wrapper', None)
555 else:
556 exe_wrap = []
557 else:
558 exelist = self.get_objc_compiler_exelist()
559 is_cross = False
560 exe_wrap = None
561 try:
562 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
563 except OSError:
564 raise EnvironmentException('Could not execute ObjC compiler "%s"' % ' '.join(exelist))
565 (out, err) = p.communicate()
566 out = out.decode(errors='ignore')
567 err = err.decode(errors='ignore')
568 version = search_version(out)
569 if 'Free Software Foundation' in out:
570 defines = self.get_gnu_compiler_defines(exelist)
571 version = self.get_gnu_version_from_defines(defines)
572 return GnuObjCCompiler(exelist, version, is_cross, exe_wrap, defines)
573 if out.startswith('Apple LLVM'):
574 return ClangObjCCompiler(exelist, version, CLANG_OSX, is_cross, exe_wrap)
575 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
576
577 def detect_objcpp_compiler(self, want_cross):
578 if self.is_cross_build() and want_cross:
579 exelist = [self.cross_info['objcpp']]
580 is_cross = True
581 if self.cross_info.need_exe_wrapper():
582 exe_wrap = self.cross_info.get('exe_wrapper', None)
583 else:
584 exe_wrap = []
585 else:
586 exelist = self.get_objcpp_compiler_exelist()
587 is_cross = False
588 exe_wrap = None
589 try:
590 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
591 except OSError:
592 raise EnvironmentException('Could not execute ObjC++ compiler "%s"' % ' '.join(exelist))
593 (out, err) = p.communicate()
594 out = out.decode(errors='ignore')
595 err = err.decode(errors='ignore')
596 version = search_version(out)
597 if 'Free Software Foundation' in out:
598 defines = self.get_gnu_compiler_defines(exelist)
599 return GnuObjCPPCompiler(exelist, version, is_cross, exe_wrap, defines)
600 if out.startswith('Apple LLVM'):
601 return ClangObjCPPCompiler(exelist, version, CLANG_OSX, is_cross, exe_wrap)
602 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
603
604 def detect_java_compiler(self):
605 exelist = ['javac']
606 try:
607 p = subprocess.Popen(exelist + ['-version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
608 except OSError:
609 raise EnvironmentException('Could not execute Java compiler "%s"' % ' '.join(exelist))
610 (out, err) = p.communicate()
611 out = out.decode(errors='ignore')
612 err = err.decode(errors='ignore')
613 version = search_version(err)
614 if 'javac' in err:
615 return JavaCompiler(exelist, version)
616 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
617
618 def detect_cs_compiler(self):
619 exelist = ['mcs']
620 try:
621 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
622 except OSError:
623 raise EnvironmentException('Could not execute C# compiler "%s"' % ' '.join(exelist))
624 (out, err) = p.communicate()
625 out = out.decode(errors='ignore')
626 err = err.decode(errors='ignore')
627 version = search_version(out)
628 if 'Mono' in out:
629 return MonoCompiler(exelist, version)
630 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
631
632 def detect_vala_compiler(self):
633 exelist = ['valac']
634 try:
635 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
636 except OSError:
637 raise EnvironmentException('Could not execute Vala compiler "%s"' % ' '.join(exelist))
638 (out, _) = p.communicate()
639 out = out.decode(errors='ignore')
640 version = search_version(out)
641 if 'Vala' in out:
642 return ValaCompiler(exelist, version)
643 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
644
645 def detect_rust_compiler(self):
646 exelist = ['rustc']
647 try:
648 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
649 except OSError:
650 raise EnvironmentException('Could not execute Rust compiler "%s"' % ' '.join(exelist))
651 (out, _) = p.communicate()
652 out = out.decode(errors='ignore')
653 version = search_version(out)
654 if 'rustc' in out:
655 return RustCompiler(exelist, version)
656 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
657
658 def detect_d_compiler(self):
659 exelist = None
660 is_cross = False
661 # Search for a D compiler.
662 # We prefer LDC over GDC unless overridden with the DC
663 # environment variable because LDC has a much more
664 # up to date language version at time (2016).
665 if 'DC' in os.environ:
666 exelist = os.environ['DC'].split()
667 elif self.is_cross_build() and want_cross:
668 exelist = [self.cross_info.config['binaries']['d']]
669 is_cross = True
670 elif shutil.which("ldc2"):
671 exelist = ['ldc2']
672 elif shutil.which("ldc"):
673 exelist = ['ldc']
674 elif shutil.which("gdc"):
675 exelist = ['gdc']
676 elif shutil.which("dmd"):
677 exelist = ['dmd']
678 else:
679 raise EnvironmentException('Could not find any supported D compiler.')
680
681 try:
682 p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
683 except OSError:
684 raise EnvironmentException('Could not execute D compiler "%s"' % ' '.join(exelist))
685 (out, _) = p.communicate()
686 out = out.decode(errors='ignore')
687 version = search_version(out)
688 if 'LLVM D compiler' in out:
689 return LLVMDCompiler(exelist, version, is_cross)
690 elif 'gdc' in out:
691 return GnuDCompiler(exelist, version, is_cross)
692 elif 'Digital Mars' in out:
693 return DmdDCompiler(exelist, version, is_cross)
694 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
695
696 def detect_swift_compiler(self):
697 exelist = ['swiftc']
698 try:
699 p = subprocess.Popen(exelist + ['-v'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
700 except OSError:
701 raise EnvironmentException('Could not execute Swift compiler "%s"' % ' '.join(exelist))
702 (_, err) = p.communicate()
703 err = err.decode(errors='ignore')
704 version = search_version(err)
705 if 'Swift' in err:
706 return SwiftCompiler(exelist, version)
707 raise EnvironmentException('Unknown compiler "' + ' '.join(exelist) + '"')
708
709 def detect_static_linker(self, compiler):
710 if compiler.is_cross:
711 linker = self.cross_info.config['binaries']['ar']
712 else:
713 evar = 'AR'
714 if evar in os.environ:
715 linker = os.environ[evar].strip()
716 elif isinstance(compiler, VisualStudioCCompiler):
717 linker= self.vs_static_linker
718 else:
719 linker = self.default_static_linker
720 basename = os.path.basename(linker).lower()
721 if basename == 'lib' or basename == 'lib.exe':
722 arg = '/?'
723 else:
724 arg = '--version'
725 try:
726 p = subprocess.Popen([linker, arg], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
727 except OSError:
728 raise EnvironmentException('Could not execute static linker "%s".' % linker)
729 (out, err) = p.communicate()
730 out = out.decode(errors='ignore')
731 err = err.decode(errors='ignore')
732 if '/OUT:' in out or '/OUT:' in err:
733 return VisualStudioLinker([linker])
734 if p.returncode == 0:
735 return ArLinker([linker])
736 if p.returncode == 1 and err.startswith('usage'): # OSX
737 return ArLinker([linker])
738 raise EnvironmentException('Unknown static linker "%s"' % linker)
739
740 def detect_ccache(self):
741 try:
742 has_ccache = subprocess.call(['ccache', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
743 except OSError:
744 has_ccache = 1
745 if has_ccache == 0:
746 cmdlist = ['ccache']
747 else:
748 cmdlist = []
749 return cmdlist
750
751 def get_objc_compiler_exelist(self):
752 ccachelist = self.detect_ccache()
753 evar = 'OBJCC'
754 if evar in os.environ:
755 return os.environ[evar].split()
756 return ccachelist + self.default_objc
757
758 def get_objcpp_compiler_exelist(self):
759 ccachelist = self.detect_ccache()
760 evar = 'OBJCXX'
761 if evar in os.environ:
762 return os.environ[evar].split()
763 return ccachelist + self.default_objcpp
764
765 def get_source_dir(self):
766 return self.source_dir
767
768 def get_build_dir(self):
769 return self.build_dir
770
771 def get_exe_suffix(self):
772 return self.exe_suffix
773
774 def get_import_lib_dir(self):
775 "Install dir for the import library (library used for linking)"
776 return self.get_libdir()
777
778 def get_shared_lib_dir(self):
779 "Install dir for the shared library"
780 if self.win_libdir_layout:
781 return self.get_bindir()
782 return self.get_libdir()
783
784 def get_static_lib_dir(self):
785 "Install dir for the static library"
786 return self.get_libdir()
787
788 def get_object_suffix(self):
789 return self.object_suffix
790
791 def get_prefix(self):
792 return self.coredata.get_builtin_option('prefix')
793
794 def get_libdir(self):
795 return self.coredata.get_builtin_option('libdir')
796
797 def get_libexecdir(self):
798 return self.coredata.get_builtin_option('libexecdir')
799
800 def get_bindir(self):
801 return self.coredata.get_builtin_option('bindir')
802
803 def get_includedir(self):
804 return self.coredata.get_builtin_option('includedir')
805
806 def get_mandir(self):
807 return self.coredata.get_builtin_option('mandir')
808
809 def get_datadir(self):
810 return self.coredata.get_builtin_option('datadir')
811
812
813 def get_args_from_envvars(compiler):
814 """
815 @compiler: Compiler to fetch environment flags for
816
817 Returns a tuple of (compile_flags, link_flags) for the specified language
818 from the inherited environment
819 """
820 def log_var(var, val):
821 if val:
822 mlog.log('Appending {} from environment: {!r}'.format(var, val))
823
824 lang = compiler.get_language()
825 compiler_is_linker = False
826 if hasattr(compiler, 'get_linker_exelist'):
827 compiler_is_linker = (compiler.get_exelist() == compiler.get_linker_exelist())
828
829 if lang not in ('c', 'cpp', 'objc', 'objcpp', 'fortran', 'd'):
830 return ([], [])
831
832 # Compile flags
833 cflags_mapping = {'c': 'CFLAGS', 'cpp': 'CXXFLAGS',
834 'objc': 'OBJCFLAGS', 'objcpp': 'OBJCXXFLAGS',
835 'fortran': 'FFLAGS', 'd': 'DFLAGS'}
836 compile_flags = os.environ.get(cflags_mapping[lang], '')
837 log_var(cflags_mapping[lang], compile_flags)
838 compile_flags = compile_flags.split()
839
840 # Link flags (same for all languages)
841 link_flags = os.environ.get('LDFLAGS', '')
842 log_var('LDFLAGS', link_flags)
843 link_flags = link_flags.split()
844 if compiler_is_linker:
845 # When the compiler is used as a wrapper around the linker (such as
846 # with GCC and Clang), the compile flags can be needed while linking
847 # too. This is also what Autotools does. However, we don't want to do
848 # this when the linker is stand-alone such as with MSVC C/C++, etc.
849 link_flags = compile_flags + link_flags
850
851 # Pre-processor flags (not for fortran)
852 preproc_flags = ''
853 if lang in ('c', 'cpp', 'objc', 'objcpp'):
854 preproc_flags = os.environ.get('CPPFLAGS', '')
855 log_var('CPPFLAGS', preproc_flags)
856 compile_flags += preproc_flags.split()
857
858 return (compile_flags, link_flags)
859
860 class CrossBuildInfo():
861 def __init__(self, filename):
862 self.config = {'properties': {}}
863 self.parse_datafile(filename)
864 if 'target_machine' in self.config:
865 return
866 if not 'host_machine' in self.config:
867 raise mesonlib.MesonException('Cross info file must have either host or a target machine.')
868 if not 'binaries' in self.config:
869 raise mesonlib.MesonException('Cross file is missing "binaries".')
870
871 def ok_type(self, i):
872 return isinstance(i, (str, int, bool))
873
874 def parse_datafile(self, filename):
875 config = configparser.ConfigParser()
876 config.read(filename)
877 # This is a bit hackish at the moment.
878 for s in config.sections():
879 self.config[s] = {}
880 for entry in config[s]:
881 value = config[s][entry]
882 if ' ' in entry or '\t' in entry or "'" in entry or '"' in entry:
883 raise EnvironmentException('Malformed variable name %s in cross file.' % entry)
884 try:
885 res = eval(value, {'true' : True, 'false' : False})
886 except Exception:
887 raise EnvironmentException('Malformed value in cross file variable %s.' % entry)
888 if self.ok_type(res):
889 self.config[s][entry] = res
890 elif isinstance(res, list):
891 for i in res:
892 if not self.ok_type(i):
893 raise EnvironmentException('Malformed value in cross file variable %s.' % entry)
894 self.config[s][entry] = res
895 else:
896 raise EnvironmentException('Malformed value in cross file variable %s.' % entry)
897
898 def has_host(self):
899 return 'host_machine' in self.config
900
901 def has_target(self):
902 return 'target_machine' in self.config
903
904 def has_stdlib(self, language):
905 return language + '_stdlib' in self.config['properties']
906
907 def get_stdlib(self, language):
908 return self.config['properties'][language + '_stdlib']
909
910 def get_properties(self):
911 return self.config['properties']
912
913 # When compiling a cross compiler we use the native compiler for everything.
914 # But not when cross compiling a cross compiler.
915 def need_cross_compiler(self):
916 return 'host_machine' in self.config
917
918 def need_exe_wrapper(self):
919 # Can almost always run 32-bit binaries on 64-bit natively if the host
920 # and build systems are the same. We don't pass any compilers to
921 # detect_cpu_family() here because we always want to know the OS
922 # architecture, not what the compiler environment tells us.
923 if self.has_host() and detect_cpu_family({}) == 'x86_64' and \
924 self.config['host_machine']['cpu_family'] == 'x86' and \
925 self.config['host_machine']['system'] == detect_system():
926 return False
927 return True
928
[end of mesonbuild/environment.py]
[start of setup.py]
1 #!/usr/bin/env python3
2
3 # Copyright 2016 The Meson development team
4
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8
9 # http://www.apache.org/licenses/LICENSE-2.0
10
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import os
18 import sys
19 from os import path
20
21 if sys.version_info[0] < 3:
22 print('Tried to install with Python 2, Meson only supports Python 3.')
23 sys.exit(1)
24
25 # We need to support Python installations that have nothing but the basic
26 # Python installation. Use setuptools when possible and fall back to
27 # plain distutils when setuptools is not available.
28 try:
29 from setuptools import setup
30 from setuptools.command.install_scripts import install_scripts as orig
31 except ImportError:
32 from distutils.core import setup
33 from distutils.command.install_scripts import install_scripts as orig
34
35 from distutils.file_util import copy_file
36 from distutils.dir_util import mkpath
37 from stat import ST_MODE
38
39 class install_scripts(orig):
40 def run(self):
41 if sys.platform == 'win32':
42 super().run()
43 return
44
45 self.outfiles = []
46 if not self.dry_run:
47 mkpath(self.install_dir)
48
49 # We want the files to be installed without a suffix on Unix
50 for infile in self.get_inputs():
51 in_stripped = infile[:-3] if infile.endswith('.py') else infile
52 outfile = path.join(self.install_dir, in_stripped)
53 # NOTE: Mode is preserved by default
54 copy_file(infile, outfile, dry_run=self.dry_run)
55 self.outfiles.append(outfile)
56
57 from mesonbuild.coredata import version
58
59 setup(name='meson',
60 version=version,
61 description='A high performance build system',
62 author='Jussi Pakkanen',
63 author_email='[email protected]',
64 url='http://mesonbuild.com',
65 license=' Apache License, Version 2.0',
66 packages=['mesonbuild',
67 'mesonbuild.modules',
68 'mesonbuild.scripts',
69 'mesonbuild.backend',
70 'mesonbuild.wrap'],
71 scripts=['meson.py',
72 'mesonconf.py',
73 'mesontest.py',
74 'mesonintrospect.py',
75 'wraptool.py'],
76 cmdclass={'install_scripts': install_scripts},
77 data_files=[('share/man/man1', ['man/meson.1',
78 'man/mesonconf.1',
79 'man/mesonintrospect.1',
80 'man/wraptool.1'])],
81 classifiers=['Development Status :: 5 - Production/Stable',
82 'Environment :: Console',
83 'Intended Audience :: Developers',
84 'License :: OSI Approved :: Apache Software License',
85 'Natural Language :: English',
86 'Operating System :: MacOS :: MacOS X',
87 'Operating System :: Microsoft :: Windows',
88 'Operating System :: POSIX :: BSD',
89 'Operating System :: POSIX :: Linux',
90 'Programming Language :: Python :: 3 :: Only',
91 'Topic :: Software Development :: Build Tools',
92 ],
93 long_description='''Meson is a cross-platform build system designed to be both as
94 fast and as user friendly as possible. It supports many languages and compilers, including
95 GCC, Clang and Visual Studio. Its build definitions are written in a simple non-turing
96 complete DSL.''')
97
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
344231d336339c0ea4c1eb072ef37ba5e15ff901
|
--backend vs2015 always crashes on non-english Windows
```
The Meson build system
Version: 0.36.0
Source dir: F:\avian\hello-c
Build dir: F:\avian\hello-c\build
Build type: native build
Project name: hello
Traceback (most recent call last):
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\mesonmain.py", line 283, in run
app.generate()
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\mesonmain.py", line 163, in generate
intr = interpreter.Interpreter(b, g)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1190, in __init__
self.parse_project()
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1260, in parse_project
self.evaluate_codeblock(self.ast, end=1)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1358, in evaluate_codeblock
raise e
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1352, in evaluate_codeblock
self.evaluate_statement(cur)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1473, in evaluate_statement
return self.function_call(cur)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 2494, in function_call
return self.funcs[func_name](node, self.flatten(posargs), kwargs)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 74, in wrapped
return f(self, node, args, kwargs)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1688, in func_project
self.add_languages(args[1:], True)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1804, in add_languages
(comp, cross_comp) = self.detect_compilers(lang, need_cross_compiler)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\interpreter.py", line 1773, in detect_compilers
comp.sanity_check(self.environment.get_scratch_dir(), self.environment)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\compilers.py", line 684, in sanity_check
return self.sanity_check_impl(work_dir, environment, 'sanitycheckc.c', code)
File "c:\program files (x86)\python35-32\lib\site-packages\mesonbuild\compilers.py", line 659, in sanity_check_impl
stde = stde.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 0: invalid start byte
```
|
Simplest test - hello.c builds with msvc (32/64) and mingw (32/64):
https://github.com/msink/hello-c
On AppVeyor all 4 tests passed; on my local Windows, mingw passed but msvc crashes.
`chcp 65001` fixes this, at least on my computer (Win7 x64, Russian)
```
rem script for local testing
chcp 65001
for %%t in (mingw,msvc) do (
for %%p in (x86,x64) do (
for %%c in (debug,release) do (
setlocal
set Compiler=%%t
set Platform=%%p
set Configuration=%%c
call appveyor.bat
endlocal
)
)
)
```
Apparently this is fixed in Python 3.6: https://www.python.org/dev/peps/pep-0528/
However, we should also include a workaround inside Meson itself instead of telling people to run `chcp` before running Meson. Apparently there is also a module that does this: https://pypi.org/project/win_unicode_console/
Could you please try to use that and tell us if it works? If so, it would be great to have a patch to Meson that does what the module does on Windows.
That PEP is about console I/O (as is the package) but Meson here is crashing due to a bad decode out of `subprocess`. Could using `universal_newlines=True` fix the problem?
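To make that concrete, here is a minimal, self-contained sketch (not from the Meson code base; the Russian message is made up) of why the bare `.decode()` raises on byte `0x8e` and what `universal_newlines=True` changes: `Popen` then decodes the child's output itself with `locale.getpreferredencoding(False)` (e.g. `cp1251` on a Russian Windows), which unlike UTF-8 accepts essentially any byte sequence, so the hard crash goes away even if a localized OEM-code-page message can come through slightly garbled.
```
# Sketch only, not Meson code. 0x8e is 'О' in the Russian OEM code page
# cp866, which is typically what cl.exe's localized messages arrive in.
import locale
import subprocess
import sys

msvc_output = 'Обнаружена ошибка'.encode('cp866')  # hypothetical cl.exe message

try:
    msvc_output.decode()  # what compilers.py did: bytes.decode() assumes UTF-8
except UnicodeDecodeError as e:
    print('plain .decode() fails:', e)

# Decoding with the actual console code page works.
print('decoded with cp866:', ascii(msvc_output.decode('cp866')))

# With universal_newlines=True the pipes are wrapped in TextIOWrapper and
# decoded with locale.getpreferredencoding(False); communicate() returns str,
# so the UTF-8 decode that crashed above never happens.
print('locale encoding:', locale.getpreferredencoding(False))
p = subprocess.Popen([sys.executable, '-c', "print('hello')"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     universal_newlines=True)
out, err = p.communicate()
print(type(out).__name__, out.strip())
```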
> a module that does this: https://pypi.org/project/win_unicode_console/
No, I tried this - no result.
`chcp 65001` fixes my build.
Don't know why.
My Windows doesn't have Russian or any European language available (probably because of ridiculous OEM things). I tested this with Hindi as my display language but I couldn't reproduce it (probably because it retained `cp1252` as the codepage). I'm going to try shooting in the dark now and send you some patches to test.
Okay, or I can pull a branch of your repository (https://github.com/centricular/meson.git) and test...
This patch fixes msvc on my computer:
```
diff --git a/mesonbuild/compilers.py b/mesonbuild/compilers.py
index ced2b6f..96af423 100644
--- a/mesonbuild/compilers.py
+++ b/mesonbuild/compilers.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import sys
import shutil
import contextlib
import subprocess, os.path
@@ -461,8 +462,8 @@ class Compiler():
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
(stde, stdo) = p.communicate()
- stde = stde.decode()
- stdo = stdo.decode()
+ stde = stde.decode(sys.stderr.encoding)
+ stdo = stdo.decode(sys.stdout.encoding)
mlog.debug('Compiler stdout:\n', stdo)
mlog.debug('Compiler stderr:\n', stde)
@@ -655,8 +656,8 @@ class CCompiler(Compiler):
cmdlist = self.exelist + extra_flags + [source_name] + self.get_output_args(binary_name)
pc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=work_dir)
(stdo, stde) = pc.communicate()
- stdo = stdo.decode()
- stde = stde.decode()
+ stdo = stdo.decode(sys.stdout.encoding)
+ stde = stde.decode(sys.stderr.encoding)
mlog.debug('Sanity check compiler command line:', ' '.join(cmdlist))
mlog.debug('Sanity check compile stdout:')
mlog.debug(stdo)
@@ -774,8 +775,8 @@ int main () {{
return RunResult(False)
(so, se) = pe.communicate()
- so = so.decode()
- se = se.decode()
+ so = so.decode(sys.stdout.encoding)
+ se = se.decode(sys.stderr.encoding)
mlog.debug('Program stdout:\n')
mlog.debug(so)
mlog.debug('Program stderr:\n')
```
Don't know if this is the correct way (I'm a total noob in Python), but it works for me, both with and without `chcp 65001`.
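One pitfall with decoding via `sys.stdout.encoding` / `sys.stderr.encoding`: those attributes describe Meson's own streams rather than the compiler's output, and Meson's interpreter.py already had to guard against them being unset, in which case `decode(None)` raises a `TypeError`. The fix that landed (the patch below) instead routes every call site through a small `Popen_safe` wrapper that passes `universal_newlines=True` and lets `Popen` do the decoding. A minimal sketch of the idea (the signature mirrors the helper in the patch; the usage line is only illustrative):
```
# Sketch of the wrapper approach. The signature matches the Popen_safe
# helper added to mesonlib.py in the patch below; the call at the bottom
# is just an illustration.
import subprocess
import sys

def Popen_safe(args, write=None, stderr=subprocess.PIPE, **kwargs):
    # universal_newlines=True makes Popen hand back decoded str, using the
    # locale's preferred encoding, so call sites never touch .decode().
    p = subprocess.Popen(args, universal_newlines=True,
                         stdout=subprocess.PIPE, stderr=stderr, **kwargs)
    o, e = p.communicate(write)
    return p, o, e

p, out, err = Popen_safe([sys.executable, '--version'])
print(p.returncode, (out or err).strip())
```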
|
2016-12-07T01:36:14Z
|
<patch>
diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -18,7 +18,7 @@
from .. import mlog
from .. import dependencies
from .. import compilers
-from ..mesonlib import File, MesonException, get_compiler_for_source
+from ..mesonlib import File, MesonException, get_compiler_for_source, Popen_safe
from .backends import InstallData
from ..build import InvalidArguments
import os, sys, pickle, re
@@ -158,18 +158,14 @@ def detect_vs_dep_prefix(self, tempfilename):
int dummy;
''')
- pc = subprocess.Popen(['cl', '/showIncludes', '/c', 'incdetect.c'],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- cwd=self.environment.get_scratch_dir())
+ pc, stdo = Popen_safe(['cl', '/showIncludes', '/c', 'incdetect.c'],
+ cwd=self.environment.get_scratch_dir())[0:2]
- (stdo, _) = pc.communicate()
-
- for line in stdo.split(b'\r\n'):
- if line.endswith(b'stdio.h'):
- matchstr = b':'.join(line.split(b':')[0:2]) + b':'
- with open(tempfilename, 'ab') as binfile:
- binfile.write(b'msvc_deps_prefix = ' + matchstr + b'\r\n')
+ for line in stdo.split('\n'):
+ if line.endswith('stdio.h'):
+ matchstr = ':'.join(line.split(':')[0:2]) + ':'
+ with open(tempfilename, 'a') as binfile:
+ binfile.write('msvc_deps_prefix = ' + matchstr + '\n')
return open(tempfilename, 'a')
raise MesonException('Could not determine vs dep dependency prefix string.')
diff --git a/mesonbuild/compilers.py b/mesonbuild/compilers.py
--- a/mesonbuild/compilers.py
+++ b/mesonbuild/compilers.py
@@ -18,7 +18,7 @@
import tempfile
from .import mesonlib
from . import mlog
-from .mesonlib import MesonException, version_compare
+from .mesonlib import MesonException, version_compare, Popen_safe
from . import coredata
"""This file contains the data files of all compilers Meson knows
@@ -457,12 +457,7 @@ def compile(self, code, extra_args=None):
mlog.debug('Working directory: ', tmpdirname)
mlog.debug('Command line: ', ' '.join(commands), '\n')
mlog.debug('Code:\n', code)
- p = subprocess.Popen(commands, cwd=tmpdirname,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- (stde, stdo) = p.communicate()
- stde = stde.decode()
- stdo = stdo.decode()
+ p, stdo, stde = Popen_safe(commands, cwd=tmpdirname)
mlog.debug('Compiler stdout:\n', stdo)
mlog.debug('Compiler stderr:\n', stde)
@@ -594,9 +589,7 @@ def get_std_shared_lib_link_args(self):
return ['-shared']
def get_library_dirs(self):
- output = subprocess.Popen(self.exelist + ['--print-search-dirs'], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
- (stdo, _) = output.communicate()
- stdo = stdo.decode('utf-8')
+ stdo = Popen_safe(self.exelist + ['--print-search-dirs'])[1]
for line in stdo.split('\n'):
if line.startswith('libraries:'):
libstr = line.split('=', 1)[1]
@@ -653,10 +646,7 @@ def sanity_check_impl(self, work_dir, environment, sname, code):
ofile.write(code)
# Compile sanity check
cmdlist = self.exelist + extra_flags + [source_name] + self.get_output_args(binary_name)
- pc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=work_dir)
- (stdo, stde) = pc.communicate()
- stdo = stdo.decode()
- stde = stde.decode()
+ pc, stdo, stde = Popen_safe(cmdlist, cwd=work_dir)
mlog.debug('Sanity check compiler command line:', ' '.join(cmdlist))
mlog.debug('Sanity check compile stdout:')
mlog.debug(stdo)
@@ -800,15 +790,11 @@ def run(self, code, env, extra_args=None, dependencies=None):
else:
cmdlist = p.output_name
try:
- pe = subprocess.Popen(cmdlist, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
+ pe, so, se = Popen_safe(cmdlist)
except Exception as e:
mlog.debug('Could not run: %s (error: %s)\n' % (cmdlist, e))
return RunResult(False)
- (so, se) = pe.communicate()
- so = so.decode()
- se = se.decode()
mlog.debug('Program stdout:\n')
mlog.debug(so)
mlog.debug('Program stderr:\n')
@@ -1919,7 +1905,7 @@ def get_include_args(self, path, is_system):
# understand and you can't tell it to error out on those.
# http://stackoverflow.com/questions/15259720/how-can-i-make-the-microsoft-c-compiler-treat-unknown-flags-as-errors-rather-t
def has_argument(self, arg, env):
- warning_text = b'9002'
+ warning_text = '9002'
code = 'int i;\n'
(fd, srcname) = tempfile.mkstemp(suffix='.'+self.default_suffix)
os.close(fd)
@@ -1932,8 +1918,7 @@ def has_argument(self, arg, env):
mlog.debug('Running VS compile:')
mlog.debug('Command line: ', ' '.join(commands))
mlog.debug('Code:\n', code)
- p = subprocess.Popen(commands, cwd=os.path.split(srcname)[0], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stde, stdo) = p.communicate()
+ p, stdo, stde = Popen_safe(commands, cwd=os.path.split(srcname)[0])
if p.returncode != 0:
raise MesonException('Compiling test app failed.')
return not(warning_text in stde or warning_text in stdo)
@@ -2587,10 +2572,9 @@ class ArLinker():
def __init__(self, exelist):
self.exelist = exelist
self.id = 'ar'
- pc = subprocess.Popen(self.exelist + ['-h'], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
- (stdo, _) = pc.communicate()
+ pc, stdo = Popen_safe(self.exelist + ['-h'])[0:2]
# Enable deterministic builds if they are available.
- if b'[D]' in stdo:
+ if '[D]' in stdo:
self.std_args = ['csrD']
else:
self.std_args = ['csr']
diff --git a/mesonbuild/dependencies.py b/mesonbuild/dependencies.py
--- a/mesonbuild/dependencies.py
+++ b/mesonbuild/dependencies.py
@@ -23,7 +23,7 @@
import os, stat, glob, subprocess, shutil
import sysconfig
from collections import OrderedDict
-from . mesonlib import MesonException, version_compare, version_compare_many
+from . mesonlib import MesonException, version_compare, version_compare_many, Popen_safe
from . import mlog
from . import mesonlib
from .environment import detect_cpu_family, for_windows
@@ -170,17 +170,14 @@ def __init__(self, name, environment, kwargs):
self._set_libs()
def _call_pkgbin(self, args):
- p = subprocess.Popen([self.pkgbin] + args,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE,
- env=os.environ, universal_newlines=True)
- out = p.communicate()[0]
+ p, out = Popen_safe([self.pkgbin] + args, env=os.environ)[0:2]
return (p.returncode, out.strip())
def _set_cargs(self):
ret, out = self._call_pkgbin(['--cflags', self.name])
if ret != 0:
raise DependencyException('Could not generate cargs for %s:\n\n%s' % \
- (self.name, out.decode(errors='ignore')))
+ (self.name, out))
self.cargs = out.split()
def _set_libs(self):
@@ -190,7 +187,7 @@ def _set_libs(self):
ret, out = self._call_pkgbin(libcmd)
if ret != 0:
raise DependencyException('Could not generate libs for %s:\n\n%s' % \
- (self.name, out.decode(errors='ignore')))
+ (self.name, out))
self.libs = []
for lib in out.split():
if lib.endswith(".la"):
@@ -238,13 +235,11 @@ def check_pkgconfig(self):
pkgbin = os.environ[evar].strip()
else:
pkgbin = 'pkg-config'
- p = subprocess.Popen([pkgbin, '--version'], stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- out = p.communicate()[0]
+ p, out = Popen_safe([pkgbin, '--version'])[0:2]
if p.returncode == 0:
if not self.silent:
mlog.log('Found pkg-config:', mlog.bold(shutil.which(pkgbin)),
- '(%s)' % out.decode().strip())
+ '(%s)' % out.strip())
PkgConfigDependency.pkgconfig_found = True
return
except (FileNotFoundError, PermissionError):
@@ -303,16 +298,13 @@ def __init__(self, environment, kwargs):
mlog.log("Neither wx-config-3.0 nor wx-config found; can't detect dependency")
return
- p = subprocess.Popen([self.wxc, '--version'],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- out = p.communicate()[0]
+ p, out = Popen_safe([self.wxc, '--version'])[0:2]
if p.returncode != 0:
mlog.log('Dependency wxwidgets found:', mlog.red('NO'))
self.cargs = []
self.libs = []
else:
- self.modversion = out.decode().strip()
+ self.modversion = out.strip()
version_req = kwargs.get('version', None)
if version_req is not None:
if not version_compare(self.modversion, version_req, strict=True):
@@ -324,20 +316,15 @@ def __init__(self, environment, kwargs):
self.requested_modules = self.get_requested(kwargs)
# wx-config seems to have a cflags as well but since it requires C++,
# this should be good, at least for now.
- p = subprocess.Popen([self.wxc, '--cxxflags'],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- out = p.communicate()[0]
+ p, out = Popen_safe([self.wxc, '--cxxflags'])[0:2]
if p.returncode != 0:
raise DependencyException('Could not generate cargs for wxwidgets.')
- self.cargs = out.decode().split()
+ self.cargs = out.split()
- p = subprocess.Popen([self.wxc, '--libs'] + self.requested_modules,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- out = p.communicate()[0]
+ p, out = Popen_safe([self.wxc, '--libs'] + self.requested_modules)[0:2]
if p.returncode != 0:
raise DependencyException('Could not generate libs for wxwidgets.')
- self.libs = out.decode().split()
+ self.libs = out.split()
def get_requested(self, kwargs):
modules = 'modules'
@@ -363,12 +350,10 @@ def get_link_args(self):
def check_wxconfig(self):
for wxc in ['wx-config-3.0', 'wx-config']:
try:
- p = subprocess.Popen([wxc, '--version'], stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- out = p.communicate()[0]
+ p, out = Popen_safe([wxc, '--version'])[0:2]
if p.returncode == 0:
mlog.log('Found wx-config:', mlog.bold(shutil.which(wxc)),
- '(%s)' % out.decode().strip())
+ '(%s)' % out.strip())
self.wxc = wxc
WxDependency.wx_found = True
return
@@ -943,10 +928,7 @@ def _qmake_detect(self, mods, env, kwargs):
if not self.qmake.found():
continue
# Check that the qmake is for qt5
- pc = subprocess.Popen(self.qmake.fullpath + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
- universal_newlines=True)
- stdo = pc.communicate()[0]
+ pc, stdo = Popen_safe(self.qmake.fullpath + ['-v'])[0:2]
if pc.returncode != 0:
continue
if not 'Qt version ' + self.qtver in stdo:
@@ -959,9 +941,7 @@ def _qmake_detect(self, mods, env, kwargs):
return
self.version = re.search(self.qtver + '(\.\d+)+', stdo).group(0)
# Query library path, header path, and binary path
- stdo = subprocess.Popen(self.qmake.fullpath + ['-query'],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
- universal_newlines=True).communicate()[0]
+ stdo = Popen_safe(self.qmake.fullpath + ['-query'])[1]
qvars = {}
for line in stdo.split('\n'):
line = line.strip()
@@ -1051,9 +1031,7 @@ def __init__(self, environment, kwargs):
def detect(self):
confprog = 'gnustep-config'
try:
- gp = subprocess.Popen([confprog, '--help'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- gp.communicate()
+ gp = Popen_safe([confprog, '--help'])[0]
except (FileNotFoundError, PermissionError):
self.args = None
mlog.log('Dependency GnuStep found:', mlog.red('NO'), '(no gnustep-config)')
@@ -1066,20 +1044,12 @@ def detect(self):
arg = '--gui-libs'
else:
arg = '--base-libs'
- fp = subprocess.Popen([confprog, '--objc-flags'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (flagtxt, flagerr) = fp.communicate()
- flagtxt = flagtxt.decode()
- flagerr = flagerr.decode()
+ fp, flagtxt, flagerr = Popen_safe([confprog, '--objc-flags'])
if fp.returncode != 0:
raise DependencyException('Error getting objc-args: %s %s' % (flagtxt, flagerr))
args = flagtxt.split()
self.args = self.filter_arsg(args)
- fp = subprocess.Popen([confprog, arg],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (libtxt, liberr) = fp.communicate()
- libtxt = libtxt.decode()
- liberr = liberr.decode()
+ fp, libtxt, liberr = Popen_safe([confprog, arg])
if fp.returncode != 0:
raise DependencyException('Error getting objc-lib args: %s %s' % (libtxt, liberr))
self.libs = self.weird_filter(libtxt.split())
@@ -1184,16 +1154,10 @@ def __init__(self, environment, kwargs):
pass
sdlconf = shutil.which('sdl2-config')
if sdlconf:
- pc = subprocess.Popen(['sdl2-config', '--cflags'],
- stdout=subprocess.PIPE,
- stderr=subprocess.DEVNULL)
- (stdo, _) = pc.communicate()
- self.cargs = stdo.decode().strip().split()
- pc = subprocess.Popen(['sdl2-config', '--libs'],
- stdout=subprocess.PIPE,
- stderr=subprocess.DEVNULL)
- (stdo, _) = pc.communicate()
- self.linkargs = stdo.decode().strip().split()
+ pc, stdo = Popen_safe(['sdl2-config', '--cflags'])[0:2]
+ self.cargs = stdo.strip().split()
+ pc, stdo = Popen_safe(['sdl2-config', '--libs'])[0:2]
+ self.linkargs = stdo.strip().split()
self.is_found = True
mlog.log('Dependency', mlog.bold('sdl2'), 'found:', mlog.green('YES'), '(%s)' % sdlconf)
self.version = '2' # FIXME
diff --git a/mesonbuild/environment.py b/mesonbuild/environment.py
--- a/mesonbuild/environment.py
+++ b/mesonbuild/environment.py
@@ -17,6 +17,7 @@
from . import mesonlib
from . import mlog
from .compilers import *
+from .mesonlib import Popen_safe
import configparser
import shutil
@@ -42,11 +43,10 @@ def find_coverage_tools():
def detect_ninja():
for n in ['ninja', 'ninja-build']:
try:
- p = subprocess.Popen([n, '--version'], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+ p, version = Popen_safe([n, '--version'])[0:2]
except (FileNotFoundError, PermissionError):
# Doesn't exist in PATH or isn't executable
continue
- version = p.communicate()[0].decode(errors='ignore')
# Perhaps we should add a way for the caller to know the failure mode
# (not found or too old)
if p.returncode == 0 and mesonlib.version_compare(version, ">=1.6"):
@@ -306,9 +306,7 @@ def get_gnu_compiler_defines(compiler):
# Arguments to output compiler pre-processor defines to stdout
# gcc, g++, and gfortran all support these arguments
args = compiler + ['-E', '-dM', '-']
- p = subprocess.Popen(args, universal_newlines=True,
- stdin=subprocess.PIPE, stdout=subprocess.PIPE)
- output = p.communicate('')[0]
+ p, output = Popen_safe(args, write='', stdin=subprocess.PIPE)[0:2]
if p.returncode != 0:
raise EnvironmentException('Unable to detect GNU compiler type:\n' + output)
# Parse several lines of the type:
@@ -372,14 +370,10 @@ def detect_c_compiler(self, want_cross):
arg = '/?'
else:
arg = '--version'
- p = subprocess.Popen([compiler, arg], stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
+ p, out, err = Popen_safe([compiler, arg])
except OSError as e:
popen_exceptions[' '.join([compiler, arg])] = e
continue
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
if 'Free Software Foundation' in out:
defines = self.get_gnu_compiler_defines([compiler])
@@ -428,15 +422,10 @@ def detect_fortran_compiler(self, want_cross):
for compiler in compilers:
for arg in ['--version', '-V']:
try:
- p = subprocess.Popen([compiler, arg],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
+ p, out, err = Popen_safe([compiler, arg])
except OSError as e:
popen_exceptions[' '.join([compiler, arg])] = e
continue
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
@@ -512,15 +501,10 @@ def detect_cpp_compiler(self, want_cross):
else:
arg = '--version'
try:
- p = subprocess.Popen([compiler, arg],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
+ p, out, err = Popen_safe([compiler, arg])
except OSError as e:
popen_exceptions[' '.join([compiler, arg])] = e
continue
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
if 'Free Software Foundation' in out:
defines = self.get_gnu_compiler_defines([compiler])
@@ -559,12 +543,9 @@ def detect_objc_compiler(self, want_cross):
is_cross = False
exe_wrap = None
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out, err = Popen_safe(exelist + ['--version'])
except OSError:
raise EnvironmentException('Could not execute ObjC compiler "%s"' % ' '.join(exelist))
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
if 'Free Software Foundation' in out:
defines = self.get_gnu_compiler_defines(exelist)
@@ -587,12 +568,9 @@ def detect_objcpp_compiler(self, want_cross):
is_cross = False
exe_wrap = None
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out, err = Popen_safe(exelist + ['--version'])
except OSError:
raise EnvironmentException('Could not execute ObjC++ compiler "%s"' % ' '.join(exelist))
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
if 'Free Software Foundation' in out:
defines = self.get_gnu_compiler_defines(exelist)
@@ -604,12 +582,9 @@ def detect_objcpp_compiler(self, want_cross):
def detect_java_compiler(self):
exelist = ['javac']
try:
- p = subprocess.Popen(exelist + ['-version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out, err = Popen_safe(exelist + ['-version'])
except OSError:
raise EnvironmentException('Could not execute Java compiler "%s"' % ' '.join(exelist))
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(err)
if 'javac' in err:
return JavaCompiler(exelist, version)
@@ -618,12 +593,9 @@ def detect_java_compiler(self):
def detect_cs_compiler(self):
exelist = ['mcs']
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out, err = Popen_safe(exelist + ['--version'])
except OSError:
raise EnvironmentException('Could not execute C# compiler "%s"' % ' '.join(exelist))
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
version = search_version(out)
if 'Mono' in out:
return MonoCompiler(exelist, version)
@@ -632,11 +604,9 @@ def detect_cs_compiler(self):
def detect_vala_compiler(self):
exelist = ['valac']
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out = Popen_safe(exelist + ['--version'])[0:2]
except OSError:
raise EnvironmentException('Could not execute Vala compiler "%s"' % ' '.join(exelist))
- (out, _) = p.communicate()
- out = out.decode(errors='ignore')
version = search_version(out)
if 'Vala' in out:
return ValaCompiler(exelist, version)
@@ -645,11 +615,9 @@ def detect_vala_compiler(self):
def detect_rust_compiler(self):
exelist = ['rustc']
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out = Popen_safe(exelist + ['--version'])[0:2]
except OSError:
raise EnvironmentException('Could not execute Rust compiler "%s"' % ' '.join(exelist))
- (out, _) = p.communicate()
- out = out.decode(errors='ignore')
version = search_version(out)
if 'rustc' in out:
return RustCompiler(exelist, version)
@@ -679,11 +647,9 @@ def detect_d_compiler(self):
raise EnvironmentException('Could not find any supported D compiler.')
try:
- p = subprocess.Popen(exelist + ['--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out = Popen_safe(exelist + ['--version'])[0:2]
except OSError:
raise EnvironmentException('Could not execute D compiler "%s"' % ' '.join(exelist))
- (out, _) = p.communicate()
- out = out.decode(errors='ignore')
version = search_version(out)
if 'LLVM D compiler' in out:
return LLVMDCompiler(exelist, version, is_cross)
@@ -696,11 +662,9 @@ def detect_d_compiler(self):
def detect_swift_compiler(self):
exelist = ['swiftc']
try:
- p = subprocess.Popen(exelist + ['-v'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, _, err = Popen_safe(exelist + ['-v'])
except OSError:
raise EnvironmentException('Could not execute Swift compiler "%s"' % ' '.join(exelist))
- (_, err) = p.communicate()
- err = err.decode(errors='ignore')
version = search_version(err)
if 'Swift' in err:
return SwiftCompiler(exelist, version)
@@ -723,12 +687,9 @@ def detect_static_linker(self, compiler):
else:
arg = '--version'
try:
- p = subprocess.Popen([linker, arg], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ p, out, err = Popen_safe([linker, arg])
except OSError:
raise EnvironmentException('Could not execute static linker "%s".' % linker)
- (out, err) = p.communicate()
- out = out.decode(errors='ignore')
- err = err.decode(errors='ignore')
if '/OUT:' in out or '/OUT:' in err:
return VisualStudioLinker([linker])
if p.returncode == 0:
diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py
--- a/mesonbuild/interpreter.py
+++ b/mesonbuild/interpreter.py
@@ -22,6 +22,7 @@
from . import compilers
from .wrap import wrap
from . import mesonlib
+from .mesonlib import Popen_safe
from .dependencies import InternalDependency, Dependency
from .interpreterbase import InterpreterBase
from .interpreterbase import check_stringlist, noPosargs, noKwargs, stringArgs
@@ -70,17 +71,8 @@ class RunProcess(InterpreterObject):
def __init__(self, command_array, source_dir, build_dir, subdir, in_builddir=False):
super().__init__()
- pc = self.run_command(command_array, source_dir, build_dir, subdir, in_builddir)
- (stdout, stderr) = pc.communicate()
+ pc, self.stdout, self.stderr = self.run_command(command_array, source_dir, build_dir, subdir, in_builddir)
self.returncode = pc.returncode
- if sys.stdout.encoding:
- self.stdout = stdout.decode(encoding=sys.stdout.encoding, errors='ignore').replace('\r\n', '\n')
- else:
- self.stdout = stdout.decode(errors='ignore').replace('\r\n', '\n')
- if sys.stderr.encoding:
- self.stderr = stderr.decode(encoding=sys.stderr.encoding, errors='ignore').replace('\r\n', '\n')
- else:
- self.stderr = stderr.decode(errors='ignore').replace('\r\n', '\n')
self.methods.update({'returncode' : self.returncode_method,
'stdout' : self.stdout_method,
'stderr' : self.stderr_method,
@@ -99,22 +91,19 @@ def run_command(self, command_array, source_dir, build_dir, subdir, in_builddir)
child_env.update(env)
mlog.debug('Running command:', ' '.join(command_array))
try:
- return subprocess.Popen(command_array, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
- env=child_env, cwd=cwd)
+ return Popen_safe(command_array, env=child_env, cwd=cwd)
except FileNotFoundError:
pass
# Was not a command, is a program in path?
exe = shutil.which(cmd_name)
if exe is not None:
command_array = [exe] + command_array[1:]
- return subprocess.Popen(command_array, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
- env=child_env, cwd=cwd)
+ return Popen_safe(command_array, env=child_env, cwd=cwd)
# No? Maybe it is a script in the source tree.
fullpath = os.path.join(source_dir, subdir, cmd_name)
command_array = [fullpath] + command_array[1:]
try:
- return subprocess.Popen(command_array, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
- env=child_env, cwd=cwd)
+ return Popen_safe(command_array, env=child_env, cwd=cwd)
except FileNotFoundError:
raise InterpreterException('Could not execute command "%s".' % cmd_name)
diff --git a/mesonbuild/mesonlib.py b/mesonbuild/mesonlib.py
--- a/mesonbuild/mesonlib.py
+++ b/mesonbuild/mesonlib.py
@@ -385,3 +385,10 @@ def expand_arguments(args):
print(e)
return None
return expended_args
+
+def Popen_safe(args, write=None, stderr=subprocess.PIPE, **kwargs):
+ p = subprocess.Popen(args, universal_newlines=True,
+ stdout=subprocess.PIPE,
+ stderr=stderr, **kwargs)
+ o, e = p.communicate(write)
+ return (p, o, e)
diff --git a/mesonbuild/modules/gnome.py b/mesonbuild/modules/gnome.py
--- a/mesonbuild/modules/gnome.py
+++ b/mesonbuild/modules/gnome.py
@@ -20,7 +20,7 @@
import sys
import copy
import subprocess
-from ..mesonlib import MesonException
+from ..mesonlib import MesonException, Popen_safe
from .. import dependencies
from .. import mlog
from .. import mesonlib
@@ -197,9 +197,7 @@ def _get_gresource_dependencies(self, state, input_file, source_dirs, dependenci
cmd += ['--sourcedir', os.path.join(state.subdir, source_dir)]
cmd += ['--sourcedir', state.subdir] # Current dir
- pc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True,
- cwd=state.environment.get_source_dir())
- (stdout, _) = pc.communicate()
+ pc, stdout = Popen_safe(cmd, cwd=state.environment.get_source_dir())[0:2]
if pc.returncode != 0:
mlog.warning('glib-compile-resources has failed to get the dependencies for {}'.format(cmd[1]))
raise subprocess.CalledProcessError(pc.returncode, cmd)
diff --git a/mesonbuild/modules/qt4.py b/mesonbuild/modules/qt4.py
--- a/mesonbuild/modules/qt4.py
+++ b/mesonbuild/modules/qt4.py
@@ -15,7 +15,7 @@
import os, subprocess
from .. import mlog
from .. import build
-from ..mesonlib import MesonException
+from ..mesonlib import MesonException, Popen_safe
from ..dependencies import Qt4Dependency
import xml.etree.ElementTree as ET
@@ -37,11 +37,9 @@ def _detect_tools(self, env):
# Moc and rcc return a non-zero result when doing so.
# What kind of an idiot thought that was a good idea?
if self.moc.found():
- mp = subprocess.Popen(self.moc.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = mp.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.moc.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'Qt Meta' in stderr:
moc_ver = stderr
else:
@@ -52,11 +50,9 @@ def _detect_tools(self, env):
else:
mlog.log(' moc:', mlog.red('NO'))
if self.uic.found():
- up = subprocess.Popen(self.uic.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = up.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.uic.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'version 4.' in stderr:
uic_ver = stderr
else:
@@ -67,11 +63,9 @@ def _detect_tools(self, env):
else:
mlog.log(' uic:', mlog.red('NO'))
if self.rcc.found():
- rp = subprocess.Popen(self.rcc.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = rp.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.rcc.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'version 4.' in stderr:
rcc_ver = stderr
else:
diff --git a/mesonbuild/modules/qt5.py b/mesonbuild/modules/qt5.py
--- a/mesonbuild/modules/qt5.py
+++ b/mesonbuild/modules/qt5.py
@@ -15,7 +15,7 @@
import os, subprocess
from .. import mlog
from .. import build
-from ..mesonlib import MesonException
+from ..mesonlib import MesonException, Popen_safe
from ..dependencies import Qt5Dependency
import xml.etree.ElementTree as ET
@@ -37,11 +37,9 @@ def _detect_tools(self, env):
# Moc and rcc return a non-zero result when doing so.
# What kind of an idiot thought that was a good idea?
if self.moc.found():
- mp = subprocess.Popen(self.moc.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = mp.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.moc.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'Qt 5' in stderr:
moc_ver = stderr
elif '5.' in stdout:
@@ -54,11 +52,9 @@ def _detect_tools(self, env):
else:
mlog.log(' moc:', mlog.red('NO'))
if self.uic.found():
- up = subprocess.Popen(self.uic.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = up.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.uic.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'version 5.' in stderr:
uic_ver = stderr
elif '5.' in stdout:
@@ -71,11 +67,9 @@ def _detect_tools(self, env):
else:
mlog.log(' uic:', mlog.red('NO'))
if self.rcc.found():
- rp = subprocess.Popen(self.rcc.get_command() + ['-v'],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdout, stderr) = rp.communicate()
- stdout = stdout.decode().strip()
- stderr = stderr.decode().strip()
+ stdout, stderr = Popen_safe(self.rcc.get_command() + ['-v'])[1:3]
+ stdout = stdout.strip()
+ stderr = stderr.strip()
if 'version 5.' in stderr:
rcc_ver = stderr
elif '5.' in stdout:
diff --git a/mesonbuild/scripts/gtkdochelper.py b/mesonbuild/scripts/gtkdochelper.py
--- a/mesonbuild/scripts/gtkdochelper.py
+++ b/mesonbuild/scripts/gtkdochelper.py
@@ -17,7 +17,7 @@
import subprocess
import shutil
import argparse
-from ..mesonlib import MesonException
+from ..mesonlib import MesonException, Popen_safe
from . import destdir_join
parser = argparse.ArgumentParser()
@@ -46,15 +46,13 @@
parser.add_argument('--installdir', dest='install_dir')
def gtkdoc_run_check(cmd, cwd):
- p = subprocess.Popen(cmd, cwd=cwd,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stde, stdo) = p.communicate()
+ # Put stderr into stdout since we want to print it out anyway.
+ # This preserves the order of messages.
+ p, out = Popen_safe(cmd, cwd=cwd, stderr=subprocess.STDOUT)[0:2]
if p.returncode != 0:
err_msg = ["{!r} failed with status {:d}".format(cmd[0], p.returncode)]
- if stde:
- err_msg.append(stde.decode(errors='ignore'))
- if stdo:
- err_msg.append(stdo.decode(errors='ignore'))
+ if out:
+ err_msg.append(out)
raise MesonException('\n'.join(err_msg))
def build_gtkdoc(source_root, build_root, doc_subdir, src_subdirs,
diff --git a/mesonbuild/scripts/meson_exe.py b/mesonbuild/scripts/meson_exe.py
--- a/mesonbuild/scripts/meson_exe.py
+++ b/mesonbuild/scripts/meson_exe.py
@@ -21,7 +21,7 @@
import platform
import subprocess
-import mesonbuild
+from ..mesonlib import MesonException, Popen_safe
options = None
@@ -45,7 +45,7 @@ def run_exe(exe):
else:
if exe.is_cross:
if exe.exe_runner is None:
- raise Exception('BUG: Trying to run cross-compiled exes with no wrapper')
+ raise AssertionError('BUG: Trying to run cross-compiled exes with no wrapper')
else:
cmd = [exe.exe_runner] + exe.fname
else:
@@ -55,17 +55,12 @@ def run_exe(exe):
if len(exe.extra_paths) > 0:
child_env['PATH'] = (os.pathsep.join(exe.extra_paths + ['']) +
child_env['PATH'])
- p = subprocess.Popen(cmd + exe.cmd_args,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- env=child_env,
- cwd=exe.workdir)
- stdout, stderr = p.communicate()
+ p, stdout, stderr = Popen_safe(cmd + exe.cmd_args, env=child_env, cwd=exe.workdir)
if exe.capture and p.returncode == 0:
- with open(exe.capture, 'wb') as output:
+ with open(exe.capture, 'w') as output:
output.write(stdout)
if stderr:
- sys.stderr.buffer.write(stderr)
+ sys.stderr.write(stderr)
return p.returncode
def run(args):
diff --git a/mesonbuild/scripts/meson_install.py b/mesonbuild/scripts/meson_install.py
--- a/mesonbuild/scripts/meson_install.py
+++ b/mesonbuild/scripts/meson_install.py
@@ -18,6 +18,7 @@
from glob import glob
from . import depfixer
from . import destdir_join
+from ..mesonlib import MesonException, Popen_safe
install_log_file = None
@@ -205,12 +206,11 @@ def install_targets(d):
do_copy(fname, outname)
if should_strip:
print('Stripping target')
- ps = subprocess.Popen(['strip', outname], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (stdo, stde) = ps.communicate()
+ ps, stdo, stde = Popen_safe(['strip', outname])
if ps.returncode != 0:
print('Could not strip file.\n')
- print('Stdout:\n%s\n' % stdo.decode())
- print('Stderr:\n%s\n' % stde.decode())
+ print('Stdout:\n%s\n' % stdo)
+ print('Stderr:\n%s\n' % stde)
sys.exit(1)
printed_symlink_error = False
for alias in aliases:
diff --git a/mesonbuild/scripts/symbolextractor.py b/mesonbuild/scripts/symbolextractor.py
--- a/mesonbuild/scripts/symbolextractor.py
+++ b/mesonbuild/scripts/symbolextractor.py
@@ -23,7 +23,8 @@
# http://cgit.freedesktop.org/libreoffice/core/commit/?id=3213cd54b76bc80a6f0516aac75a48ff3b2ad67c
import os, sys, subprocess
-from mesonbuild import mesonlib
+from .. import mesonlib
+from ..mesonlib import MesonException, Popen_safe
import argparse
parser = argparse.ArgumentParser()
@@ -59,23 +60,21 @@ def linux_syms(libfilename, outfilename):
nmbin = os.environ[evar].strip()
else:
nmbin = 'nm'
- pe = subprocess.Popen([readelfbin, '-d', libfilename], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- output = pe.communicate()[0].decode()
+ pe, output = Popen_safe([readelfbin, '-d', libfilename])[0:2]
if pe.returncode != 0:
raise RuntimeError('Readelf does not work')
result = [x for x in output.split('\n') if 'SONAME' in x]
assert(len(result) <= 1)
- pnm = subprocess.Popen([nmbin, '--dynamic', '--extern-only', '--defined-only', '--format=posix', libfilename],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- output = pnm.communicate()[0].decode()
+ pnm, output = Popen_safe([nmbin, '--dynamic', '--extern-only',
+ '--defined-only', '--format=posix',
+ libfilename])[0:2]
if pnm.returncode != 0:
raise RuntimeError('nm does not work.')
result += [' '.join(x.split()[0:2]) for x in output.split('\n') if len(x) > 0]
write_if_changed('\n'.join(result) + '\n', outfilename)
def osx_syms(libfilename, outfilename):
- pe = subprocess.Popen(['otool', '-l', libfilename], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- output = pe.communicate()[0].decode()
+ pe, output = Popen_safe(['otool', '-l', libfilename])[0:2]
if pe.returncode != 0:
raise RuntimeError('Otool does not work.')
arr = output.split('\n')
@@ -84,8 +83,7 @@ def osx_syms(libfilename, outfilename):
match = i
break
result = [arr[match+2], arr[match+5]] # Libreoffice stores all 5 lines but the others seem irrelevant.
- pnm = subprocess.Popen(['nm', '-g', '-P', libfilename], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- output = pnm.communicate()[0].decode()
+ pnm, output = Popen_safe(['nm', '-g', '-P', libfilename])[0:2]
if pnm.returncode != 0:
raise RuntimeError('nm does not work.')
result += [' '.join(x.split()[0:2]) for x in output.split('\n') if len(x) > 0 and not x.endswith('U')]
</patch>
|
[]
|
[]
|