This repository houses a fork of modeling_flash_llama.py from togethercomputer/LLaMA-2-7B-32K, with a fix for attention-weight padding merged in.
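
Below is a minimal usage sketch, not an official example from this repository. It assumes the fixed modeling_flash_llama.py is used as the checkpoint's custom modeling code (loaded through transformers' `trust_remote_code` mechanism, as the upstream togethercomputer/LLaMA-2-7B-32K card does); the model ID and loading options are illustrative.

```python
# Hypothetical usage sketch: load the 32K-context LLaMA-2 checkpoint with its
# custom flash-attention modeling code. To pick up this fork's padding fix,
# replace the remote modeling_flash_llama.py with the version from this repo
# (e.g. in your local copy of the checkpoint) before loading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/LLaMA-2-7B-32K"  # upstream checkpoint this fork targets

tokenizer = AutoTokenizer.from_pretrained(model_id)

# trust_remote_code=True tells transformers to import the modeling_flash_llama.py
# shipped alongside the weights instead of the built-in LLaMA implementation.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```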
