---
license: apache-2.0
---

# How this model was exported

The model was exported to OpenVINO IR with symmetric INT4 weight compression, using the packages and script below.

```bash
# requirements.txt
huggingface_hub>=0.25.0
transformers>=4.45.0
safetensors>=0.4.3
datasets>=3.0.0
accelerate>=0.33.0
tiktoken>=0.12.0
sentencepiece>=0.2.0
optimum-intel[openvino]>=1.26.0
openvino>=2025.0.0
nncf>=2.12.0
hf-transfer>=0.1.6
python-dotenv>=1.0.1
rich>=13.7.0
```

```python
import subprocess

from huggingface_hub import snapshot_download, upload_folder
from transformers import AutoTokenizer

token = "hf_..."  # Hugging Face access token

DOWNLOAD_MODEL = "humain-ai/ALLaM-7B-Instruct-preview"
DOWNLOAD_FOLDER = "models/ALLaM-7B-Instruct-preview"
UPLOAD_FOLDER = "ov/ALLaM-7B-Instruct-preview-int4-ov"
UPLOAD_MODEL = "KFUPM-JRCAI/ALLaM-7B-Instruct-preview-int4-ov"

# Download the original model weights
snapshot_download(
    repo_id=DOWNLOAD_MODEL,
    local_dir=DOWNLOAD_FOLDER,
    token=token,
)

# Copy the tokenizer files into the export folder
tokenizer = AutoTokenizer.from_pretrained(DOWNLOAD_FOLDER)
tokenizer.save_pretrained(UPLOAD_FOLDER)

# Export to OpenVINO IR with symmetric INT4 weight compression
subprocess.run([
    "optimum-cli", "export", "openvino",
    "-m", DOWNLOAD_FOLDER,
    "--task", "text-generation-with-past",
    "--weight-format", "int4",
    "--sym",
    UPLOAD_FOLDER,
])

# Convert the tokenizer to OpenVINO tokenizer/detokenizer models
subprocess.run([
    "convert_tokenizer", DOWNLOAD_FOLDER,
    "-o", UPLOAD_FOLDER,
    "--with-detokenizer",
])

# Upload the exported model to the Hub
upload_folder(
    repo_id=UPLOAD_MODEL,
    folder_path=UPLOAD_FOLDER,
    commit_message="OpenVINO model upload",
    token=token,
    repo_type="model",
)
```
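
## Usage

The exported model can be loaded back through `optimum-intel` for a quick sanity check. A minimal inference sketch (the prompt and generation settings are illustrative, and the chat template is applied since this is an instruct model):

```python
# Minimal inference sketch for the exported INT4 OpenVINO model.
# Requires optimum-intel[openvino]; the model is downloaded from the Hub on first use.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "KFUPM-JRCAI/ALLaM-7B-Instruct-preview-int4-ov"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # loads the OpenVINO IR, no PyTorch weights needed

# Build the prompt via the tokenizer's chat template (example prompt only)
messages = [{"role": "user", "content": "What is the capital of Saudi Arabia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```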