---
title: Training Space for Job 304268a6-2e7f-4d73-9cb6-e538064f8e80
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: docker
python_version: '3.10'
suggested_hardware: t4-small
app_file: train_script_wrapper.py
pinned: false
short_description: pythia-70m-deduped HF-mb28snxb
models:
  - EleutherAI/pythia-70m-deduped
datasets:
  - Testys/dataset_for_job_304268a6_2e7f_4d73_9cb6_e538064f8e80
tags:
  - automated-training
  - causal-lm
  - fine-tuning
  - lora
  - mlops
  - text-generation
preload_from_hub:
  - EleutherAI/pythia-70m-deduped
---

# Training Space for MLOps Job `304268a6-2e7f-4d73-9cb6-e538064f8e80`

This Space was generated automatically to run a LoRA (Low-Rank Adaptation) fine-tuning job.
It uses a Docker environment to execute the `train_text_lora.py` script with the parameters defined in the MLOps job configuration.

## Job Overview
- **Job ID:** `304268a6-2e7f-4d73-9cb6-e538064f8e80`
- **Job Name:** `pythia-70m-deduped HF-mb28snxb`
- **Model Type:** `CAUSAL_LM`
- **Base Model (for fine-tuning):** `EleutherAI/pythia-70m-deduped`
- **Dataset (on Hugging Face Hub):** `Testys/dataset_for_job_304268a6_2e7f_4d73_9cb6_e538064f8e80`

## Execution Details
The core training logic is encapsulated in `train_text_lora.py`, which is orchestrated by `train_script_wrapper.py` within this Space.
Hyperparameters and script configurations are passed dynamically to the training script.
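The wrapper pattern described above can be sketched as follows. This is a hypothetical reconstruction: the actual flag names, environment variables, and defaults used by `train_script_wrapper.py` are not documented here, so all of them are assumptions.

```python
import os
import sys


def build_command(env: dict) -> list:
    """Assemble the training invocation from job-supplied environment variables.

    Flag names and defaults are illustrative assumptions, not the real
    interface of train_text_lora.py.
    """
    return [
        sys.executable,
        "train_text_lora.py",
        "--base_model", env.get("BASE_MODEL", "EleutherAI/pythia-70m-deduped"),
        "--dataset", env.get(
            "DATASET_ID",
            "Testys/dataset_for_job_304268a6_2e7f_4d73_9cb6_e538064f8e80",
        ),
        "--learning_rate", env.get("LEARNING_RATE", "2e-4"),
    ]


def run_training() -> int:
    """Launch the training script as a child process.

    Its stdout/stderr stream straight through, which is what makes the
    progress visible in the Space's Logs tab.
    """
    import subprocess
    return subprocess.run(build_command(dict(os.environ)), check=False).returncode
```

Passing parameters through the environment keeps the Docker image generic: the same Space template can serve any job by changing only its configured variables.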

## Outputs
Outputs from the training process, such as the LoRA adapter and training metrics, will be pushed to the following Hugging Face Hub model repository upon successful completion:
[Target repository to be configured](https://huggingface.co/#)
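A minimal sketch of that final step, assuming the outputs are collected into one folder and uploaded with `huggingface_hub.upload_folder`; the function and file names are illustrative, not the ones used by this Space's scripts:

```python
import json
import os


def save_outputs(adapter_model, metrics: dict, out_dir: str) -> None:
    """Collect the LoRA adapter and a metrics summary into one folder."""
    os.makedirs(out_dir, exist_ok=True)
    if adapter_model is not None:
        # A PEFT model writes adapter_config.json plus the adapter weights.
        adapter_model.save_pretrained(out_dir)
    # "training_metrics.json" is an assumed file name for illustration.
    with open(os.path.join(out_dir, "training_metrics.json"), "w") as f:
        json.dump(metrics, f, indent=2)


def push_outputs(out_dir: str, repo_id: str) -> None:
    """Upload the folder to the target Hub model repository (needs a write token)."""
    from huggingface_hub import upload_folder
    upload_folder(folder_path=out_dir, repo_id=repo_id, repo_type="model")
```

Uploading only the adapter and metrics, rather than the full model, keeps the artifact small: the base weights stay at `EleutherAI/pythia-70m-deduped`.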

## Monitoring
Check the **Logs** tab of this Space for real-time training progress, standard output, and any error messages from the execution script.