Model Card for Vistral-7B-LegalBizAI

Model Details

  • Model Name: Vistral-7B-LegalBizAI
  • Version: 1.0
  • Model Type: Causal Language Model
  • Architecture: Transformer-based model with 7 billion parameters
  • Quantization: 8-bit quantized for efficiency
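
To make the 8-bit quantization bullet concrete, here is a minimal sketch of symmetric per-tensor 8-bit quantization in plain Python. This is illustrative only: the actual GGUF q8 format quantizes weights in blocks with per-block scales, and the function names below are hypothetical, not part of this model's tooling.

```python
def quantize_8bit(weights):
    """Map float weights to int8 values plus a single per-tensor scale.

    Symmetric scheme: the largest absolute weight maps to +/-127.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale


def dequantize_8bit(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]


# Example: quantize a tiny weight vector and reconstruct it.
weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_8bit(weights)
approx = dequantize_8bit(q, scale)
```

The point of the sketch: storing one byte per weight plus a shared scale cuts memory roughly 4x versus FP32, at the cost of a small reconstruction error bounded by half the scale.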

Usage

How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nhotin/vistral7B-legalbizai-q8-gguf"

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the prompt and generate a completion.
input_text = "Your text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Safetensors

  • Model size: 7.3B params
  • Tensor types: F32, FP16, I8
Inference Providers
This model is not currently available via any of the supported third-party Inference Providers, and the model is not deployed on the HF Inference API.