---
datasets:
  - tiiuae/falcon-refinedweb
language:
  - en
---

# Falcon-RW-1B

Falcon-RW-1B is a 1B-parameter causal decoder-only model built by TII and trained on 350B tokens of RefinedWeb. It is made available under the TII Falcon LLM License.

Paper coming soon 😊.

RefinedWeb is a high-quality web dataset built by leveraging stringent filtering and large-scale deduplication. Falcon-RW-1B, trained on RefinedWeb only, matches or outperforms comparable models trained on curated data.

⚠️ This model is intended for use as a research artifact, to study the influence of training on web data alone. If you are interested in state-of-the-art models, we recommend using Falcon-7B/40B, both trained on >1,000 billion tokens.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-rw-1b"

tokenizer = AutoTokenizer.from_pretrained(model)
# Build a text-generation pipeline in bfloat16, letting accelerate place the weights.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
# Sample a continuation of the prompt with top-k sampling.
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

# Model Card for Falcon-RW-1B

## Model Details

### Model Description

- **Developed by:** TII;
- **Model type:** Causal decoder-only;
- **Language(s):** English;
- **License:** TII Falcon LLM License.

### Model Source

- **Paper:** coming soon.

## Uses

### Direct Use

Research on large language models, and the influence of adequately filtered and deduplicated web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Broadly speaking, we would recommend Falcon-7B/40B for any use not directly related to research on web data pipelines.

## Bias, Risks, and Limitations

Falcon-RW-1B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-RW-1B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
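As a rough illustration of this recommendation, the sketch below adapts the model to a custom text corpus with the 🤗 `Trainer` API. The corpus file name, sequence length, and training arguments are placeholders, not settings from this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # the GPT-2 tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Placeholder corpus: one document per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="falcon-rw-1b-finetuned",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```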

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-rw-1b"

tokenizer = AutoTokenizer.from_pretrained(model)
# Build a text-generation pipeline in bfloat16, letting accelerate place the weights.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
# Sample a continuation of the prompt with top-k sampling.
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-RW-1B was trained on 350B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset. The data was tokenized with the GPT-2 tokenizer.
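For reference, a minimal sketch of inspecting the training data with the same tokenizer family; it assumes streaming access to the public `tiiuae/falcon-refinedweb` dataset and that its text lives in a `content` column.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream RefinedWeb instead of downloading it; the "content" column name is an assumption.
refinedweb = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")  # same tokenizer family used for training

document = next(iter(refinedweb))
token_ids = gpt2_tokenizer(document["content"])["input_ids"]
print(len(token_ids), token_ids[:16])
```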

### Training Procedure

Falcon-RW-1B was trained on 32 A100 40GB GPUs, using only data parallelism with ZeRO.
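The training codebase is not public, so the exact setup is unknown; purely as an illustration of data parallelism with ZeRO, a DeepSpeed-style configuration might look like the dictionary below. Only the global batch size, precision, optimizer, and weight decay come from this card; the ZeRO stage and per-GPU micro-batch size are assumptions.

```python
# Hypothetical DeepSpeed-style config illustrating data parallelism with ZeRO.
ds_config = {
    "train_micro_batch_size_per_gpu": 16,  # assumption: 32 GPUs x 16 = 512 global batch
    "gradient_accumulation_steps": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 1},      # stage not stated in the card
    "optimizer": {
        "type": "Adam",
        "params": {"lr": 2e-4, "weight_decay": 0.1},
    },
}
```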

#### Training Hyperparameters

Hyperparameters were adapted from the GPT-3 paper (Brown et al., 2020); the learning-rate schedule is sketched after the list below.

- **Precision:** `bfloat16`;
- **Optimizer:** Adam;
- **Learning rate:** 2e-4 (500M tokens warm-up, followed by cosine decay to 2e-5);
- **Weight decay:** 0.1;
- **Batch size:** 512 (with a 4B tokens ramp-up).
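A small sketch of this schedule as a function of tokens seen. The linear warm-up shape and the token-based decay horizon (350B tokens) are assumptions; the card only states the warm-up length and the decay endpoints.

```python
import math

def learning_rate(tokens_seen: float,
                  peak_lr: float = 2e-4,
                  min_lr: float = 2e-5,
                  warmup_tokens: float = 500e6,
                  total_tokens: float = 350e9) -> float:
    """Warm-up over 500M tokens, then cosine decay from 2e-4 to 2e-5."""
    if tokens_seen < warmup_tokens:
        # Assumed linear warm-up to the peak learning rate.
        return peak_lr * tokens_seen / warmup_tokens
    progress = min((tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens), 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```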

#### Speeds, Sizes, Times

Training happened in early December 2022 and took about six days.

## Evaluation

Paper coming soon.

## Technical Specifications

### Model Architecture and Objective

Falcon-RW-1B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
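Concretely, this objective is the standard next-token cross-entropy loss. A minimal sketch with 🤗 Transformers: passing the input ids as labels, which the model shifts internally so each position predicts the following token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-rw-1b")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-rw-1b", trust_remote_code=True)

inputs = tokenizer("RefinedWeb is a filtered and deduplicated web dataset.", return_tensors="pt")
# For causal language modeling, labels are the input ids themselves;
# the model shifts them internally so position t predicts token t+1.
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # next-token cross-entropy averaged over the sequence
```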

### Compute Infrastructure

#### Hardware

Falcon-RW-1B was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-RW-1B was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

Paper coming soon 😊.

## Contact

[email protected]