---
language:
  - en
pipeline_tag: text-generation
tags:
  - facebook
  - meta
  - openvino
  - llama
  - llama-3
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---

# Meta-Llama-3-8B-Instruct INT4 Quantized

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction-tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

**Model Release Date** April 18, 2024.
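
## Quantization

The exact export settings used for this repository are not documented here, but an INT4 OpenVINO export of the base model can in general be produced with the `optimum-cli` tool from `optimum-intel`. The following is a sketch, assuming `optimum[openvino]` is installed and you have access to the gated base model; the output directory name is illustrative.

```shell
# Install the OpenVINO backend for Optimum (assumption: a recent optimum-intel release)
pip install "optimum[openvino]"

# Export the base model to OpenVINO IR with INT4 weight compression.
# "llama-3-8b-instruct-ov-int4" is a hypothetical output directory name.
optimum-cli export openvino \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --weight-format int4 \
  llama-3-8b-instruct-ov-int4
```

The resulting directory can then be loaded directly with `OVModelForCausalLM.from_pretrained`, as shown in the usage example.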

## Usage

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForCausalLM

model_name = "rajatkrishna/Meta-Llama-3-8B-Instruct-OpenVINO-INT4"

# Load the INT4-quantized OpenVINO model and its tokenizer
model = OVModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Run text generation through the standard transformers pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Hey how are you doing today?"))
```