simplexsigil2 committed (verified)
Commit 1f8bfe1 · Parent: 7c28046

Update README.md

Files changed (1): README.md (+7 −76)

README.md CHANGED
@@ -307,8 +307,8 @@ The repository is organized as follows:
 Each label file in the `labels/` directory follows this format:
 
 ```
-path,label,start,end,subject,cam
-path/to/clip,class_id,start_time,end_time,subject_id,camera_id
+path,label,start,end,subject,cam,dataset
+path/to/clip,class_id,start_time,end_time,subject_id,camera_id,dataset_name
 ```
 
 Where:
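The seven-column label format added above can be read with Python's standard `csv` module. A minimal sketch — only the column order comes from the format shown; every row value below is invented for illustration:

```python
import csv
import io

# Hypothetical label-file contents; only the header
# (path,label,start,end,subject,cam,dataset) follows the documented
# format, the values are made up.
LABELS_CSV = """\
path,label,start,end,subject,cam,dataset
clips/s01_c03_0007,fall,12.4,15.9,1,3,cmdfall
clips/oops_0042_2,other,0.0,4.2,-1,-1,oops_fall
"""

def read_labels(text):
    """Parse label rows, converting the time columns to floats."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["start"] = float(row["start"])
        row["end"] = float(row["end"])
        rows.append(row)
    return rows

labels = read_labels(LABELS_CSV)
print(labels[0]["label"], round(labels[0]["end"] - labels[0]["start"], 2))  # → fall 3.5
```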
@@ -328,12 +328,14 @@ Where:
 - `end`: End time of the segment (in seconds)
 - `subject`: Subject ID
 - `cam`: Camera view ID
+- `dataset`: Name of the dataset
 
-For OOPS-Fall, only fall segments and non-fall segments are labeled; non-falls are labels as "other", independent of the underlying content, as long as it is not a fall.
+For OOPS-Fall, only fall segments and non-fall segments are labeled; non-falls are labeled as "other", independent of the underlying content, as long as it is not a fall.
+Cam and subject IDs in OOPS-Fall are -1.
 
 ### Split Format
 
-Split files in the `splits/` directory list the video segments included in each partition:
+Split files in the `splits/` directory list the video segments included in each partition; you can use the split paths to filter the label data:
 
 ```
 path
@@ -345,7 +347,7 @@ path/to/clip
 We provide multiple evaluation configurations via the `dataset.yaml` file:
 
 ### Basic Configurations
-- `default`: Access to all dataset labels
+- `default`: Access to all dataset labels (Hugging Face loads everything into the `train` split by default)
 - `cs`: Cross-subject splits for all datasets
 - `cv`: Cross-view splits for all datasets
 
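Conceptually, the `cs` and `cv` configurations partition the same clips along different axes — by `subject` for cross-subject, by `cam` for cross-view. A minimal sketch of that idea; the records and the held-out IDs are invented, and the real splits are defined by the files in `splits/`:

```python
# Each record mimics a label row; values are invented for illustration.
records = [
    {"path": "a", "subject": 1, "cam": 1},
    {"path": "b", "subject": 1, "cam": 2},
    {"path": "c", "subject": 2, "cam": 1},
    {"path": "d", "subject": 2, "cam": 2},
]

def holdout(records, key, test_ids):
    """Split records into train/test by a grouping key, so no value of
    `key` (subject or camera) appears in both partitions."""
    train = [r for r in records if r[key] not in test_ids]
    test = [r for r in records if r[key] in test_ids]
    return train, test

cs_train, cs_test = holdout(records, "subject", {2})  # cross-subject
cv_train, cv_test = holdout(records, "cam", {2})      # cross-view
```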
@@ -359,77 +361,6 @@ We provide multiple evaluation configurations via the `dataset.yaml` file:
 - `cs-staged-wild`: Train and validate on staged datasets with cross-subject splits, test on OOPS-Fall
 - `cv-staged-wild`: Train and validate on staged datasets with cross-view splits, test on OOPS-Fall
 
-## Usage
-
-To use this dataset with the Hugging Face datasets library:
-
-```python
-from datasets import load_dataset
-
-# Load the entire dataset with default configuration
-dataset = load_dataset("omnifall")
-
-# Use cross-subject (CS) evaluation protocol
-cs_dataset = load_dataset("omnifall", "cs")
-print(f"Train: {len(cs_dataset['train'])} samples")
-print(f"Validation: {len(cs_dataset['validation'])} samples")
-print(f"Test: {len(cs_dataset['test'])} samples")
-
-# Use cross-view (CV) evaluation protocol
-cv_dataset = load_dataset("omnifall", "cv")
-
-# Use staged-to-wild evaluation protocol (train on staged datasets, test on OOPS)
-staged_to_wild = load_dataset("omnifall", "cs-staged-wild")
-
-# Use individual dataset
-cmdfall = load_dataset("omnifall", "cmdfall")
-
-# Access specific fields from the dataset
-for item in dataset["train"][:5]:
-    print(f"Path: {item['path']}, Label: {item['label']}")
-```
-
-## Experiment Examples
-
-### Cross-Subject Fall Detection
-
-```python
-from datasets import load_dataset
-import torch
-from torch.utils.data import DataLoader
-
-# Load the cross-subject evaluation protocol
-dataset = load_dataset("omnifall", "cs-staged")
-
-# Preprocess and create dataloaders
-def preprocess(examples):
-    # Your preprocessing code here
-    return examples
-
-processed_dataset = dataset.map(preprocess, batched=True)
-train_dataloader = DataLoader(processed_dataset["train"], batch_size=32, shuffle=True)
-val_dataloader = DataLoader(processed_dataset["validation"], batch_size=32)
-test_dataloader = DataLoader(processed_dataset["test"], batch_size=32)
-
-# Train and evaluate your model
-```
-
-### Staged-to-Wild Generalization
-
-```python
-from datasets import load_dataset
-
-# Load the staged-to-wild evaluation protocol
-dataset = load_dataset("omnifall", "cs-staged-wild")
-
-# Train on staged data
-train_data = dataset["train"]
-val_data = dataset["validation"]
-
-# Evaluate on wild data
-wild_test_data = dataset["test"]
-```
-
 ## Citation
 
 If you use OmniFall in your research, please cite our paper (will be updated soon):
 
 Each label file in the `labels/` directory follows this format:
 
 ```
+path,label,start,end,subject,cam,dataset
+path/to/clip,class_id,start_time,end_time,subject_id,camera_id,dataset_name
 ```
 
 Where:
 
 - `end`: End time of the segment (in seconds)
 - `subject`: Subject ID
 - `cam`: Camera view ID
+- `dataset`: Name of the dataset
 
+For OOPS-Fall, only fall segments and non-fall segments are labeled; non-falls are labeled as "other", independent of the underlying content, as long as it is not a fall.
+Cam and subject IDs in OOPS-Fall are -1.
 
 ### Split Format
 
+Split files in the `splits/` directory list the video segments included in each partition; you can use the split paths to filter the label data:
 
 ```
 path
 
 We provide multiple evaluation configurations via the `dataset.yaml` file:
 
 ### Basic Configurations
+- `default`: Access to all dataset labels (Hugging Face loads everything into the `train` split by default)
 - `cs`: Cross-subject splits for all datasets
 - `cv`: Cross-view splits for all datasets
 
 - `cs-staged-wild`: Train and validate on staged datasets with cross-subject splits, test on OOPS-Fall
 - `cv-staged-wild`: Train and validate on staged datasets with cross-view splits, test on OOPS-Fall
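As described above, the split files can be used to filter the label rows. A minimal sketch — the file contents are invented; only the one-column `path` split format and the label header come from this README:

```python
import csv
import io

# Invented label rows in the documented format.
LABELS_CSV = """\
path,label,start,end,subject,cam,dataset
clip_a,fall,0.0,2.0,1,1,demo
clip_b,other,0.0,3.0,2,1,demo
clip_c,fall,1.0,4.0,3,2,demo
"""

# Invented split file: one `path` per line, as in the split format above.
SPLIT_TXT = """\
path
clip_a
clip_c
"""

# Collect the paths belonging to this partition, then keep only the
# label rows whose path is listed in the split.
split_paths = {row["path"] for row in csv.DictReader(io.StringIO(SPLIT_TXT))}
train_rows = [row for row in csv.DictReader(io.StringIO(LABELS_CSV))
              if row["path"] in split_paths]
```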
 ## Citation
 
 If you use OmniFall in your research, please cite our paper (will be updated soon):