RHEL AI Model Training Scenario: A Fictional Hotel Group
A fictional example for the Training Large Language Models with Red{nbsp}Hat Enterprise Linux AI (AI0005L) and Deploying Models with Red{nbsp}Hat Enterprise Linux AI (AI0006L) Red Hat Training lessons. These lessons present students with a scenario where a hotel group must train their own LLM, aligned with their business needs, by using RHEL AI.
The taxonomy with skills and knowledge is at https://github.com/RedHatTraining/AI296-taxonomy-hotels.
The generated synthetic dataset is available in the `results` directory at https://github.com/RedHatTraining/AI296-apps/tree/main/scenarios/hotels. This directory contains the intermediate outputs of the SDG phase, provided to save time for students. With the provided taxonomy, the SDG phase takes approximately two hours on a g6e.12xlarge AWS instance.
The trained model is stored in this Hugging Face repository. Additionally, a quantized version is also provided: `samples_89973_Q4_K_M.gguf`.
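To give a rough idea of what intermediate SDG outputs look like, the following sketch writes and reads back a small JSONL file (one JSON object per line), a common format for synthetically generated question-and-answer samples. The file name, field names, and sample contents are hypothetical, not taken from the actual `results` directory; a tiny file is created inline so the sketch is self-contained.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical SDG-style samples; the real field names and contents in
# the results directory may differ.
samples = [
    {"question": "What loyalty tiers does the hotel group offer?",
     "answer": "Silver, Gold, and Platinum."},
    {"question": "What is the standard check-out time?",
     "answer": "12:00 noon."},
]

# Write the samples as JSONL: one JSON object per line.
workdir = Path(tempfile.mkdtemp())
dataset_path = workdir / "generated_samples.jsonl"
with dataset_path.open("w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Read the file back and count the samples.
with dataset_path.open() as f:
    loaded = [json.loads(line) for line in f]

print(f"Loaded {len(loaded)} synthetic samples")
```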
NOTE: This model has been trained by using a reduced version of the default RHEL AI training process. In this reduced version, the model was trained for only four hours, instead of four to five days. Additionally, the number of training samples was reduced from ~330,000 to only 10,000.
As a result, the model, although useful for learning purposes, is far from optimally tuned.
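The sample reduction described in the note can be sketched as a simple random subsample. The numbers match the note (~330,000 reduced to 10,000), but uniform random selection is an assumption made for illustration, not the actual procedure RHEL AI uses.

```python
import random

FULL_DATASET_SIZE = 330_000   # approximate size of the full sample set
REDUCED_SIZE = 10_000         # size used for the reduced training run

# Stand-in for the full pool of training sample IDs.
all_sample_ids = range(FULL_DATASET_SIZE)

# Draw a reproducible random subset; uniform sampling is only an
# assumption here, since the real reduction procedure is not documented.
rng = random.Random(42)
reduced_ids = rng.sample(all_sample_ids, k=REDUCED_SIZE)

print(f"Kept {len(reduced_ids)} of {FULL_DATASET_SIZE} samples "
      f"({len(reduced_ids) / FULL_DATASET_SIZE:.1%})")
```

Seeding the generator makes the subset reproducible across runs, which matters if the reduced dataset needs to be regenerated for comparison.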