---
language:
- en
- zh
- ja
- de
license: gpl-3.0
tags:
- mlx
datasets:
- JosephusCheung/GuanacoDataset
- meta-math/MetaMathQA
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- CausalLM/Refined-Anime-Text
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---

# mlx-community/35b-beta-long-8bit

The model [mlx-community/35b-beta-long-8bit](https://huggingface.co/mlx-community/35b-beta-long-8bit) was converted to MLX format from [CausalLM/35b-beta-long](https://huggingface.co/CausalLM/35b-beta-long) using mlx-lm version **0.12.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/35b-beta-long-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```