Model Card for CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1
Model Details
- Model Name: qwen_2_3B_reasoning_en_ft_v1
- Developed by: Dr. Ir. Mohammad Yani, S.T., M.T., M.Sc. & Rizky Sulaeman, Politeknik Negeri Indramayu
- Model Type: Transformer-based language model
- Base Model: Qwen/Qwen2-3B
- Parameter Count: 3.09 billion
- Language: English
- License: Apache 2.0
- Fine-tuned from: Qwen2-3B
Model Description
This model is a fine-tuned version of Qwen2-3B, optimized for enhanced reasoning capabilities in English. It has been trained on a curated dataset to improve performance on tasks requiring logical inference, comprehension, and instruction following.
Intended Uses & Limitations
Direct Use
- Applications (a brief usage sketch follows this list):
  - Logical reasoning tasks
  - Instruction-based question answering
  - Conversational agents requiring enhanced reasoning
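
As a rough illustration of these direct uses, the sketch below runs a simple logical-inference prompt through a `transformers` text-generation pipeline; the prompt wording and generation settings are illustrative choices, not recommended defaults.

```python
from transformers import pipeline

# Hedged sketch: a plain text-generation pipeline around the fine-tuned checkpoint.
generator = pipeline(
    "text-generation",
    model="CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1",
)

prompt = (
    "All mammals are warm-blooded. Whales are mammals. "
    "What can we conclude about whales? Explain your reasoning step by step."
)

# max_new_tokens and greedy decoding are illustrative, not tuned values.
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```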
Downstream Use
- Potential Applications:
  - Integration into AI systems requiring reasoning capabilities
  - Further fine-tuning for domain-specific tasks (see the fine-tuning sketch below)
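
For downstream fine-tuning, a minimal sketch using a LoRA adapter via the `peft` library is shown below; the rank, target modules, and other settings are assumptions for illustration, not the configuration used to produce this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative LoRA settings; the rank and target modules are assumptions,
# not the hyperparameters used to train this checkpoint.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, train on a domain-specific dataset, e.g. with transformers.Trainer.
```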
Out-of-Scope Use
- Not Recommended For:
  - Tasks requiring real-time decision-making in critical systems
  - Applications involving sensitive or personal data without proper safeguards
Bias, Risks, and Limitations
While efforts have been made to reduce biases during fine-tuning, the model may still exhibit biases present in the training data. Users should be cautious and evaluate the model's outputs, especially in sensitive applications.
How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "Explain the theory of relativity in simple terms."
inputs = tokenizer(input_text, return_tensors="pt")

# Without max_new_tokens, generate() stops after a short default length.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
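
Continuing from the example above, if the checkpoint keeps the Qwen2-style chat template of its base model (an assumption; inspect `tokenizer.chat_template` to confirm), a conversational prompt can be built as sketched below; the system prompt and `max_new_tokens` value are illustrative.

```python
# Hedged sketch: assumes the tokenizer ships a Qwen2-style chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant that reasons step by step."},
    {"role": "user", "content": "Explain the theory of relativity in simple terms."},
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(chat_inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```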
Citation
```bibtex
@misc{calista2025qwen2reasoning,
  title={CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1},
  author={CALISTA INDUSTRY},
  year={2025},
  url={https://huggingface.co/CALISTA-INDUSTRY/qwen_2_3B_reasoning_en_ft_v1}
}
```