| experiment_id | name | description | created_at | status | metrics | parameters | artifacts | logs | last_updated |
|---|---|---|---|---|---|---|---|---|---|
exp_20250720_130853 | petite-elle-l-aime-3 | SmolLM3 fine-tuning experiment | 2025-07-20T11:20:01.780908 | running | [{"timestamp": "2025-07-20T11:20:01.780908", "step": 25, "metrics": {"loss": 1.1659, "grad_norm": 10.3125, "learning_rate": 7e-08, "num_tokens": 1642080.0, "mean_token_accuracy": 0.75923578992486, "epoch": 0.004851130919895701}}, {"timestamp": "2025-07-20T11:26:39.042155", "step": 50, "metrics": {"loss": 1.165, "grad_n... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 12288, "use_flash_attention": true, "use_gradient_checkpointing": false, "batch_size": 8, "gradient_accumulation_steps": 16, "learning_rate": 3.5e-06, "weight_decay": 0.01, "warmup_steps": 1200, "max_iters": 18000, "eval_interval": 1000, "log_interval": 25, "... | [] | [] | 2025-08-09T17:43:05.368860 |
exp_20250720_134319 | petite-elle-l-aime-3-1 | SmolLM3 fine-tuning experiment | 2025-07-20T11:54:31.993219 | running | [{"timestamp": "2025-07-20T11:54:33.589487", "step": 25, "metrics": {"loss": 1.166, "grad_norm": 10.375, "learning_rate": 7e-08, "num_tokens": 1642080.0, "mean_token_accuracy": 0.7590958896279335, "epoch": 0.004851130919895701, "gpu_0_memory_allocated": 17.202261447906494, "gpu_0_memory_reserved": 75.474609375, "gpu_0_... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 12288, "use_flash_attention": true, "use_gradient_checkpointing": false, "batch_size": 8, "gradient_accumulation_steps": 16, "learning_rate": 3.5e-06, "weight_decay": 0.01, "warmup_steps": 1200, "max_iters": 18000, "eval_interval": 1000, "log_interval": 25, "... | [] | [] | 2025-08-09T17:43:05.369029 |
exp_20250727_172507 | petite_llm_3_fr_1_20250727_152506 | SmolLM3 fine-tuning experiment: petite_llm_3_fr_1 | 2025-07-27T17:25:07.131302 | running | [] | {"trainer_type": "sft", "model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 8192, "use_flash_attention": true, "use_gradient_checkpointing": false, "batch_size": 8, "gradient_accumulation_steps": 16, "learning_rate": 5e-06, "weight_decay": 0.01, "warmup_steps": 1000, "max_iters": 8000, "eval_interval": 500, "lo... | [] | [] | 2025-08-09T17:43:05.369139 |
exp_20250727_172526 | petite_llm_3_fr_1_20250727_152525 | SmolLM3 fine-tuning experiment: petite_llm_3_fr_1 | 2025-07-27T17:25:26.109242 | completed | [{"timestamp": "2025-07-27T17:37:01.100450", "step": 25, "metrics": {"loss": 1.1733, "grad_norm": 11.25, "learning_rate": 1.2000000000000002e-07, "num_tokens": 1642080.0, "mean_token_accuracy": 0.7592912124097347, "epoch": 0.004851130919895701, "timestamp": "2025-07-27T15:37:00.527604", "step": 25, "gpu_0_memory_alloca... | {"trainer_type": "sft", "model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 8192, "use_flash_attention": true, "use_gradient_checkpointing": false, "batch_size": 8, "gradient_accumulation_steps": 16, "learning_rate": 5e-06, "weight_decay": 0.01, "warmup_steps": 1000, "max_iters": 8000, "eval_interval": 500, "lo... | [] | [] | 2025-08-09T17:43:05.381795 |
exp_20250727_172538 | smollm3_experiment_20250727_152538 | SmolLM3 fine-tuning experiment: smollm3_experiment | 2025-07-27T17:25:38.978779 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.382137 |
exp_20250727_182356 | test_diagnosis_20250727_182356 | Diagnosis test experiment | 2025-07-27T18:23:56.924122 | running | [{"timestamp": "2025-07-27T18:24:07.872673", "step": 100, "metrics": {"loss": 1.234, "accuracy": 0.85}}] | {} | [] | [] | 2025-08-09T17:43:05.382250 |
exp_20250727_182415 | test_monitoring_diagnosis_20250727_182414 | SmolLM3 fine-tuning experiment: test_monitoring_diagnosis | 2025-07-27T18:24:15.047914 | running | [{"timestamp": "2025-07-27T18:24:19.301319", "step": 200, "metrics": {"loss": 2.345, "accuracy": 0.75, "timestamp": "2025-07-27T18:24:18.657441", "step": 200}}] | {"learning_rate": 2e-05, "batch_size": 8} | [] | [] | 2025-08-09T17:43:05.382352 |
exp_20250727_182446 | flow_test_20250727_182445 | Flow test experiment | 2025-07-27T18:24:46.577375 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.382420 |
exp_20250727_182703 | validation_test_20250727_182703 | Validation test experiment | 2025-07-27T18:27:03.714216 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.382486 |
exp_20250727_182718 | test_recreation_20250727_182717 | SmolLM3 fine-tuning experiment: test_recreation | 2025-07-27T18:27:18.185932 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.382550 |
exp_20250727_182734 | test_recreation_recreated_20250727_182733 | Recreated SmolLM3 fine-tuning experiment: test_recreation | 2025-07-27T18:27:34.321157 | running | [] | {"experiment_recreated": true, "original_experiment_name": "test_recreation", "recreation_timestamp": "20250727_182733", "recreation_reason": "Original experiment not found or expired"} | [] | [] | 2025-08-09T17:43:05.382624 |
exp_20250727_182744 | test_recreation_recreated_20250727_182744 | Recreated SmolLM3 fine-tuning experiment: test_recreation | 2025-07-27T18:27:44.894014 | running | [] | {"experiment_recreated": true, "original_experiment_name": "test_recreation", "recreation_timestamp": "20250727_182744", "recreation_reason": "Original experiment not found or expired"} | [] | [] | 2025-08-09T17:43:05.382697 |
exp_20250727_182802 | test_robust_logging_20250727_182801 | SmolLM3 fine-tuning experiment: test_robust_logging | 2025-07-27T18:28:02.317093 | running | [{"timestamp": "2025-07-27T18:28:15.147475", "step": 100, "metrics": {"loss": 1.5, "accuracy": 0.8, "timestamp": "2025-07-27T18:28:11.283323", "step": 100}}] | {"learning_rate": 2e-05, "batch_size": 8} | [] | [] | 2025-08-09T17:43:05.382788 |
exp_20250727_182824 | test_robust_logging_recreated_20250727_182824 | Recreated SmolLM3 fine-tuning experiment: test_robust_logging | 2025-07-27T18:28:24.600453 | running | [] | {"experiment_recreated": true, "original_experiment_name": "test_robust_logging", "recreation_timestamp": "20250727_182824", "recreation_reason": "Original experiment not found or expired"} | [] | [] | 2025-08-09T17:43:05.382863 |
exp_20250727_182833 | test_robust_logging_recreated_20250727_182833 | Recreated SmolLM3 fine-tuning experiment: test_robust_logging | 2025-07-27T18:28:33.707029 | running | [] | {"experiment_recreated": true, "original_experiment_name": "test_robust_logging", "recreation_timestamp": "20250727_182833", "recreation_reason": "Original experiment not found or expired"} | [] | [] | 2025-08-09T17:43:05.382935 |
exp_20250727_183001 | simple_test_20250727_183000 | SmolLM3 fine-tuning experiment: simple_test | 2025-07-27T18:30:01.254197 | running | [{"timestamp": "2025-07-27T18:30:08.455622", "step": 1, "metrics": {"loss": 1.0, "accuracy": 0.8, "timestamp": "2025-07-27T18:30:05.410525", "step": 1}}] | {"learning_rate": 2e-05, "batch_size": 8} | [] | [] | 2025-08-09T17:43:05.383024 |
exp_20250727_193248 | continuity_test_20250727_193248 | SmolLM3 fine-tuning experiment: continuity_test | 2025-07-27T19:32:48.780338 | running | [{"timestamp": "2025-07-27T19:32:55.332253", "step": 0, "metrics": {"loss": 2.5, "learning_rate": 0.0001, "step": 0, "phase": "initial", "timestamp": "2025-07-27T19:32:52.440576"}}, {"timestamp": "2025-07-27T19:33:01.142884", "step": 10, "metrics": {"loss": 2.3, "learning_rate": 0.0001, "step": 10, "phase": "post_loss"... | {"model_name": "SmolLM3-3B", "dataset": "OpenHermes-FR", "batch_size": 8, "learning_rate": 0.0001, "max_steps": 1000, "continuity_test": true} | [] | [] | 2025-08-09T17:43:05.383195 |
exp_20250727_193347 | multiple_recreations_test_20250727_193347 | SmolLM3 fine-tuning experiment: multiple_recreations_test | 2025-07-27T19:33:47.807935 | running | [{"timestamp": "2025-07-27T19:33:55.433909", "step": 0, "metrics": {"loss": 2.0, "step": 0, "recreation_count": 1, "timestamp": "2025-07-27T19:33:52.457912"}}, {"timestamp": "2025-07-27T19:34:01.265497", "step": 10, "metrics": {"loss": 1.8, "step": 10, "recreation_count": 2, "timestamp": "2025-07-27T19:33:58.178631"}},... | {} | [] | [] | 2025-08-09T17:43:05.383375 |
exp_demo_20250808_154602 | smollm3-finetune-demo | SmolLM3 fine-tuning experiment demo with comprehensive metrics tracking | 2025-08-08T15:46:02.531457 | completed | [{"timestamp": "2025-08-08T15:46:02.531462", "step": 100, "metrics": {"loss": 1.15, "grad_norm": 10.5, "learning_rate": 5e-06, "num_tokens": 1000000.0, "mean_token_accuracy": 0.76, "epoch": 0.1, "total_tokens": 1000000.0, "throughput": 2000000.0, "step_time": 0.5, "batch_size": 2, "seq_len": 4096, "token_acc": 0.76, "g... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 4096, "batch_size": 2, "learning_rate": 5e-06, "epochs": 3, "dataset": "OpenHermes-FR", "trainer_type": "SFTTrainer", "hardware": "GPU (H100/A100)", "mixed_precision": true, "gradient_checkpointing": true, "flash_attention": true} | [] | [{"timestamp": "2025-08-08T15:46:02.531537", "level": "INFO", "message": "Training started successfully"}, {"timestamp": "2025-08-08T15:46:02.531542", "level": "INFO", "message": "Model loaded and configured"}, {"timestamp": "2025-08-08T15:46:02.531545", "level": "INFO", "message": "Dataset loaded and preprocessed"}] | 2025-08-09T17:43:05.383586 |
exp_20250809_004643 | gpt-oss-med_20250808_154643 | SmolLM3 fine-tuning experiment: gpt-oss-med | 2025-08-09T00:46:43.446910 | running | [{"timestamp": "2025-08-09T00:51:03.562055", "step": 10, "metrics": {"loss": 2.1823, "grad_norm": 2.468163251876831, "learning_rate": 3.6e-05, "num_tokens": 133463.0, "mean_token_accuracy": 0.527485016733408, "epoch": 0.008252527336496802, "timestamp": "2025-08-08T15:51:03.021647", "step": 10, "gpu_0_memory_allocated":... | {"add_eos_token": true, "answer_prefix": "Final Answer: ", "bad_entry_field": "bad_entry", "bad_prompt_field": "bad_prompt_detected", "bad_response_field": "bad_response_detected", "batch_size": 4, "beta1": 0.9, "beta2": 0.95, "bf16": true, "chat_template_kwargs": {"add_generation_prompt": true, "tokenize": false, "rea... | [] | [] | 2025-08-09T17:43:05.384354 |
exp_20250809_004745 | smollm3_experiment_20250808_154745 | SmolLM3 fine-tuning experiment: smollm3_experiment | 2025-08-09T00:47:45.912018 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.384454 |
exp_20250808_154759 | smollm3_experiment | SmolLM3 fine-tuning experiment | 2025-08-08T15:47:43.209121 | running | [{"timestamp": "2025-08-08T16:02:44.063107", "step": null, "metrics": {"train/loss": 1.2174, "train/grad_norm": 0.4662783741950989, "train/learning_rate": 0.000196, "train/num_tokens": 514920.0, "train/mean_token_accuracy": 0.6719511821866035, "train/epoch": 0.04126263668248401, "gpu/0/allocated_memory": 39.30902051925... | {} | [] | [] | 2025-08-09T17:43:05.384596 |
exp_demo_20250808_162528 | smollm3-finetune-demo | SmolLM3 fine-tuning experiment demo with comprehensive metrics tracking | 2025-08-08T16:25:28.753674 | completed | [{"timestamp": "2025-08-08T16:25:28.753679", "step": 100, "metrics": {"loss": 1.15, "grad_norm": 10.5, "learning_rate": 5e-06, "num_tokens": 1000000.0, "mean_token_accuracy": 0.76, "epoch": 0.1, "total_tokens": 1000000.0, "throughput": 2000000.0, "step_time": 0.5, "batch_size": 2, "seq_len": 4096, "token_acc": 0.76, "g... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 4096, "batch_size": 2, "learning_rate": 5e-06, "epochs": 3, "dataset": "OpenHermes-FR", "trainer_type": "SFTTrainer", "hardware": "GPU (H100/A100)", "mixed_precision": true, "gradient_checkpointing": true, "flash_attention": true} | [] | [{"timestamp": "2025-08-08T16:25:28.753765", "level": "INFO", "message": "Training started successfully"}, {"timestamp": "2025-08-08T16:25:28.753770", "level": "INFO", "message": "Model loaded and configured"}, {"timestamp": "2025-08-08T16:25:28.753773", "level": "INFO", "message": "Dataset loaded and preprocessed"}] | 2025-08-09T17:43:05.384803 |
exp_20250809_012614 | gpt-oss-med-tonic_20250808_162613 | SmolLM3 fine-tuning experiment: gpt-oss-med-tonic | 2025-08-09T01:26:14.015504 | running | [{"timestamp": "2025-08-09T01:30:32.596348", "step": 10, "metrics": {"loss": 2.1821, "grad_norm": 2.543379306793213, "learning_rate": 3.6e-05, "num_tokens": 133463.0, "mean_token_accuracy": 0.5276724584400654, "epoch": 0.008252527336496802, "timestamp": "2025-08-08T16:30:22.899132", "step": 10, "gpu_0_memory_allocated"... | {"add_eos_token": true, "answer_prefix": "Final Answer: ", "bad_entry_field": "bad_entry", "bad_prompt_field": "bad_prompt_detected", "bad_response_field": "bad_response_detected", "batch_size": 4, "beta1": 0.9, "beta2": 0.95, "bf16": true, "chat_template_kwargs": {"add_generation_prompt": true, "tokenize": false, "rea... | [] | [] | 2025-08-09T17:43:05.385795 |
exp_20250809_012718 | smollm3_experiment_20250808_162717 | SmolLM3 fine-tuning experiment: smollm3_experiment | 2025-08-09T01:27:18.276936 | running | [] | {} | [] | [] | 2025-08-09T17:43:05.385901 |
exp_20250808_162726 | smollm3_experiment | SmolLM3 fine-tuning experiment | 2025-08-08T16:27:15.567844 | running | [{"timestamp": "2025-08-08T17:11:09.539172", "step": null, "metrics": {"train/loss": 1.2194, "train/grad_norm": 0.30800241231918335, "train/learning_rate": 0.00019930723449395235, "train/num_tokens": 1458189.0, "train/mean_token_accuracy": 0.6675439611077308, "train/epoch": 0.11553538271095523, "gpu/0/allocated_memory"... | {} | [] | [] | 2025-08-09T17:43:05.386028 |
exp_demo_20250809_023939 | smollm3-finetune-demo | SmolLM3 fine-tuning experiment demo with comprehensive metrics tracking | 2025-08-09T02:39:39.778345 | completed | [{"timestamp": "2025-08-09T02:39:39.778350", "step": 100, "metrics": {"loss": 1.15, "grad_norm": 10.5, "learning_rate": 5e-06, "num_tokens": 1000000.0, "mean_token_accuracy": 0.76, "epoch": 0.1, "total_tokens": 1000000.0, "throughput": 2000000.0, "step_time": 0.5, "batch_size": 2, "seq_len": 4096, "token_acc": 0.76, "g... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 4096, "batch_size": 2, "learning_rate": 5e-06, "epochs": 3, "dataset": "OpenHermes-FR", "trainer_type": "SFTTrainer", "hardware": "GPU (H100/A100)", "mixed_precision": true, "gradient_checkpointing": true, "flash_attention": true} | [] | [{"timestamp": "2025-08-09T02:39:39.778395", "level": "INFO", "message": "Training started successfully"}, {"timestamp": "2025-08-09T02:39:39.778399", "level": "INFO", "message": "Model loaded and configured"}, {"timestamp": "2025-08-09T02:39:39.778401", "level": "INFO", "message": "Dataset loaded and preprocessed"}] | 2025-08-09T17:43:05.386184 |
exp_20250809_114114 | got-oss-med-gpt_20250809_024114 | SmolLM3 fine-tuning experiment: got-oss-med-gpt | 2025-08-09T11:41:14.778185 | running | [{"timestamp": "2025-08-09T11:45:48.667017", "step": 10, "metrics": {"loss": 2.1803, "grad_norm": 2.574507713317871, "learning_rate": 3.6e-05, "num_tokens": 133463.0, "mean_token_accuracy": 0.527261833101511, "epoch": 0.008252527336496802, "timestamp": "2025-08-09T02:45:40.099088", "step": 10, "gpu_0_memory_allocated":... | {"add_eos_token": true, "answer_prefix": "Final Answer: ", "bad_entry_field": "bad_entry", "bad_prompt_field": "bad_prompt_detected", "bad_response_field": "bad_response_detected", "batch_size": 4, "beta1": 0.9, "beta2": 0.95, "bf16": true, "chat_template_kwargs": {"add_generation_prompt": true, "tokenize": false, "rea... | [] | [] | 2025-08-09T17:43:05.386726 |
exp_20250809_114217 | smollm3_experiment_20250809_024217 | SmolLM3 fine-tuning experiment: smollm3_experiment | 2025-08-09T11:42:17.853526 | running | [{"timestamp": "2025-08-09T12:12:16.867651", "step": null, "metrics": {"train/loss": 1.2282, "train/grad_norm": 0.3385251760482788, "train/learning_rate": 0.0001998668500113271, "train/num_tokens": 945967.0, "train/mean_token_accuracy": 0.668889407813549, "train/epoch": 0.07427274602847123, "gpu/0/allocated_memory": 39... | {} | [] | [] | 2025-08-09T17:43:05.386865 |
exp_demo_20250809_032254 | smollm3-finetune-demo | SmolLM3 fine-tuning experiment demo with comprehensive metrics tracking | 2025-08-09T03:22:54.543044 | completed | [{"timestamp": "2025-08-09T03:22:54.543047", "step": 100, "metrics": {"loss": 1.15, "grad_norm": 10.5, "learning_rate": 5e-06, "num_tokens": 1000000.0, "mean_token_accuracy": 0.76, "epoch": 0.1, "total_tokens": 1000000.0, "throughput": 2000000.0, "step_time": 0.5, "batch_size": 2, "seq_len": 4096, "token_acc": 0.76, "g... | {"model_name": "HuggingFaceTB/SmolLM3-3B", "max_seq_length": 4096, "batch_size": 2, "learning_rate": 5e-06, "epochs": 3, "dataset": "OpenHermes-FR", "trainer_type": "SFTTrainer", "hardware": "GPU (H100/A100)", "mixed_precision": true, "gradient_checkpointing": true, "flash_attention": true} | [] | [{"timestamp": "2025-08-09T03:22:54.543088", "level": "INFO", "message": "Training started successfully"}, {"timestamp": "2025-08-09T03:22:54.543091", "level": "INFO", "message": "Model loaded and configured"}, {"timestamp": "2025-08-09T03:22:54.543093", "level": "INFO", "message": "Dataset loaded and preprocessed"}] | 2025-08-09T17:43:05.387020 |
exp_20250809_122335 | gpt-oss-med-track_20250809_032335 | SmolLM3 fine-tuning experiment: gpt-oss-med-track | 2025-08-09T12:23:35.549361 | completed | [{"timestamp": "2025-08-09T12:27:41.174443", "step": 10, "metrics": {"loss": 2.1784, "grad_norm": 2.468487501144409, "learning_rate": 3.6e-05, "num_tokens": 133463.0, "mean_token_accuracy": 0.5280866272747516, "epoch": 0.008252527336496802, "timestamp": "2025-08-09T03:27:32.871836", "step": 10, "gpu_0_memory_allocated"... | {"add_eos_token": true, "answer_prefix": "Final Answer: ", "bad_entry_field": "bad_entry", "bad_prompt_field": "bad_prompt_detected", "bad_response_field": "bad_response_detected", "batch_size": 4, "beta1": 0.9, "beta2": 0.95, "bf16": true, "chat_template_kwargs": {"add_generation_prompt": true, "tokenize": false, "rea... | ["./outputs/gpt-oss-med-track_20250809_032315/checkpoint-500", "./outputs/gpt-oss-med-track_20250809_032315/checkpoint-1000", "./outputs/gpt-oss-med-track_20250809_032315/checkpoint-1500", "./outputs/gpt-oss-med-track_20250809_032315/checkpoint-2000", "./outputs/gpt-oss-med-track_20250809_032315/checkpoint-2424"] | [] | 2025-08-09T17:43:05.395414 |
exp_20250809_122413 | smollm3_experiment_20250809_032412 | SmolLM3 fine-tuning experiment: smollm3_experiment | 2025-08-09T12:24:13.034177 | completed | [{"timestamp": "2025-08-10T02:39:59.963820", "step": null, "metrics": {"gpu_0_memory_allocated": 39.3089919090271, "gpu_0_memory_reserved": 51.466796875, "gpu_0_utilization": 0, "cpu_percent": 1.4, "memory_percent": 6.1, "timestamp": "2025-08-09T17:39:59.543174"}}, {"timestamp": "2025-08-09T12:27:15.605197", "step": 10... | {} | [] | [] | 2025-08-09T17:43:05.402419 |
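The `metrics`, `parameters`, `artifacts`, and `logs` columns above are stored as JSON-encoded strings inside each table cell. A minimal sketch of decoding one such cell with Python's standard `json` module (the cell value here is abridged from the first row; real cells hold many more entries):

```python
import json

# Abridged metrics cell from the first row of the table;
# actual cells contain long lists of step-indexed entries.
metrics_cell = (
    '[{"timestamp": "2025-07-20T11:20:01.780908", "step": 25, '
    '"metrics": {"loss": 1.1659, "grad_norm": 10.3125, '
    '"learning_rate": 7e-08}}]'
)

entries = json.loads(metrics_cell)   # list of logged training steps
first = entries[0]
print(first["step"], first["metrics"]["loss"])  # → 25 1.1659
```

The same pattern applies to the `parameters` cells (a single JSON object) and the `logs` cells (a list of `{"timestamp", "level", "message"}` objects).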