---
license: mit
dataset_info:
  features:
    - name: num_examples
      dtype: int64
    - name: num_batches
      dtype: int64
    - name: num_epochs
      dtype: int64
    - name: batch_size
      dtype: int64
    - name: gradient_accumulation_steps
      dtype: int64
    - name: max_train_steps
      dtype: int64
    - name: label
      dtype: string
    - name: rank
      dtype: int64
    - name: step_loss
      sequence: float64
    - name: step_lr
      sequence: float64
    - name: epoch_loss
      sequence: float64
    - name: epoch_lr
      sequence: float64
    - name: lora_gradient_norm
      sequence: float64
    - name: lora_weight_norm
      sequence: float64
  splits:
    - name: train
      num_bytes: 73088
      num_examples: 128
    - name: test
      num_bytes: 18272
      num_examples: 32
  download_size: 67880
  dataset_size: 91360
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
language:
  - en
tags:
  - not-for-all-audiences
size_categories:
  - 1K<n<10K
---

# LoRA Adapters are Good Feature Extractors Dataset

This dataset contains images from two categories that are not safe for work (hentai and porn, labelled 0 and 2 respectively) and one neutral category, labelled 1. It is the source data for training a zoo of LoRA adapters on sample images from each category. The adapters' representations will then be used as input to a weight-space (WS) model, in an experiment to verify whether WS models operating in low-rank representation space can extract features and discriminate between harmful and non-harmful LoRAs.
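
The train and test splits declared in the metadata above can be loaded directly with the `datasets` library. The sketch below is a minimal example: the repository id is a placeholder (assumed, not taken from the card), while the column names follow the feature schema listed in the metadata.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual "<user>/<dataset>" path.
ds = load_dataset("jacekduszenko/lora-adapter-training-runs")

train, test = ds["train"], ds["test"]
print(len(train), len(test))  # 128 and 32 rows, per the split metadata

# Each row describes one LoRA training run: scalar hyperparameters
# (batch_size, rank, max_train_steps, ...) plus per-step metric sequences.
row = train[0]
print(row["label"], row["rank"], row["batch_size"])
print(len(row["step_loss"]), len(row["lora_weight_norm"]))
```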
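
As a rough illustration of the downstream experiment, a baseline classifier could try to separate the labels from summaries of the logged training metrics. This is only a hedged sketch: the actual weight-space models operate on the adapters' low-rank weights, which are not part of this metadata table, and the summary statistics and scikit-learn classifier below are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def summarize(row):
    # Assumed featurization: reduce each logged sequence to simple statistics
    # (mean and final value) so every run maps to a fixed-length vector.
    feats = []
    for key in ("step_loss", "lora_gradient_norm", "lora_weight_norm"):
        seq = np.asarray(row[key], dtype=np.float64)
        feats.extend([seq.mean(), seq[-1]])
    feats.append(float(row["rank"]))
    return feats

# `train` and `test` come from the loading snippet above.
X_train = np.array([summarize(r) for r in train])
y_train = np.array(train["label"])
X_test = np.array([summarize(r) for r in test])
y_test = np.array(test["label"])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```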