---
pipeline_tag: text-generation
inference: true
widget:
  - text: "What's a lemur's favorite fruit?"
    example_title: Lemur favorite fruit
    group: Python
  - text: "Write a Python function to merge two sorted lists into one sorted list without using any built-in sort functions."
    example_title: Merge Sort
    group: Python
license: llama2
library_name: transformers
tags:
- text-generation
- code
- text-generation-inference
language:
- en
---
# lemur-70b-chat-v1
<p align="center">
<img src="https://huggingface.co/datasets/OpenLemur/assets/resolve/main/lemur_icon.png" width="300" height="300" alt="Lemur">
</p>
## Model Summary
- **Repository:** [OpenLemur/lemur-v1](https://github.com/OpenLemur/lemur-v1)
- **Project Website:** [xlang.ai](https://www.xlang.ai/)
- **Paper:** [Coming soon](https://www.xlang.ai/)
- **Point of Contact:** [[email protected]](mailto:[email protected])
## Use
### Setup
First, install the libraries listed in `requirements.txt` in the [GitHub repository](https://github.com/OpenLemur/lemur-v1):
```bash
pip install -r requirements.txt
```
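Note that the 8-bit loading used in the generation example below also depends on the `accelerate` and `bitsandbytes` packages. Whether `requirements.txt` already pins them is an assumption; if not, they can be installed separately:
```bash
# Needed for device_map="auto" and load_in_8bit=True in the example below.
# Assumption: these may already be covered by requirements.txt.
pip install accelerate bitsandbytes
```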
### Generation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLemur/lemur-70b-chat-v1")
# Load in 8-bit to reduce memory usage; requires accelerate and bitsandbytes.
model = AutoModelForCausalLM.from_pretrained("OpenLemur/lemur-70b-chat-v1", device_map="auto", load_in_8bit=True)

# Text generation example
prompt = "What's a lemur's favorite fruit?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=50, num_return_sequences=1)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

# Code generation example
prompt = "Write a Python function to merge two sorted lists into one sorted list without using any built-in sort functions."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=200, num_return_sequences=1)
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_code)
```
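Since this is a chat-tuned model, plain completion prompts like the ones above may not use it to full effect. Below is a minimal sketch of chat-style prompting via `tokenizer.apply_chat_template` (available in `transformers` >= 4.34); whether this checkpoint ships a chat template is an assumption, so fall back to the prompt format documented in the OpenLemur repository if it does not:
```python
# Sketch: chat-style prompting, assuming the tokenizer ships a chat template.
messages = [
    {"role": "user", "content": "What's a lemur's favorite fruit?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```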
## License
The model is licensed under the Llama 2 Community License Agreement.