Wonjun Park committed
Commit 60ff8bf · 1 Parent(s): b64a3a2

LOG: DVS Dataset Version 1.0

Files changed (5):
  1. .gitignore +1 -0
  2. README.md +20 -4
  3. post_download.py +67 -0
  4. prepare_upload.py +72 -0
  5. si_sdr.py +68 -0
.gitignore ADDED
@@ -0,0 +1 @@
+ test/dog
README.md CHANGED
@@ -6,11 +6,11 @@ tags:
   - biology
   - dog
   - audio
+ task_categories:
+ - audio-to-audio
  ---

- # Dataset
-
- You can download the DVS dataset from [here](https://huggingface.co/datasets/ArlingtonCL2/Dog-Vocal-Separation).
+ # [Dataset] Dog Vocal Separation

  ## Overview

@@ -33,18 +33,34 @@ You can download the DVS dataset from [here](https://huggingface.co/datasets/Arl
  │   └── mixture
  │       └── ...
  └── test
+     ├── test_pairs.csv
      └── mixture
          └── ...
  ```

+ The CSV files (`train_pairs.csv`, `val_pairs.csv`, and `test_pairs.csv`) list (dog, mixture) filename pairs, one per row. For instance, (`6357ca529eec8ca42a1fa588e0725904.wav`, `f046b186c4def7428cd627ae98d1762d.wav`) is one (dog, mixture) pair in `train_pairs.csv`.
+
+ The dataset uploaded to Hugging Face is split into `subdir_*` directories, since a single directory may contain at most 10,000 files. To restore the original dataset hierarchy, run the provided script on each split directory:
+
+ ```bash
+ $ ./post_download.py train/dog
+ ```
+
  ## Description

  Pairs of the 10-second mixed sound and its ground truth dog vocal are given in a train set. Only sound mixtures are provided in a test set. Participants are expected to produce 10-second dog barks as their output.

- The total length of the train, validation, and test sets are about 348, 46, and 32 hours, respectively. In other words, 125,476 pairs for mixtures and ground truths are in the train set. 16,830 pairs and 11,550 pairs for validation and test set as well.
+ The total lengths of the train, validation, and test sets are about 348, 46, and 8 hours, respectively: the train set contains 125,476 (mixture, ground-truth) pairs, the validation set 16,830, and the test set 3,000.

  Pure dog barks come from a previous work [1] which are originally about 1-2 seconds long on average. These are padded to 10 seconds long. Background noises are strategically selected from AudioSet [2] to mix with the dog barks. Dog barks and noises are combined in different permutations, while ensuring that no single dog's vocal data exists in more than one of the sets above, to avoid information leak.

+ ## Challenge Notice
+
+ 1. All audio is sampled at 32,000 Hz (32 kHz). Make sure your submission uses the same sample rate.
+ 2. Name your submission audio files according to `test_pairs.csv`. The CSV maps each mixture filename to its corresponding dog filename. For instance, if the mixture filename in the test set is `16a4168a678743ce0f23c70f89d9170b.wav`, your prediction should be named `97677e409040b54a21fdec623557bb2b.wav`.
+ 3. Only `test_pairs.csv` has a `split` column, which marks the public and private portions of the submission. Participants do not need to use this column.
+ 4. SI-SDR will be calculated using the [si_sdr.py](./si_sdr.py) script.
+
  ## References

  [1] Wang, T., Li, X., Zhang, C., Wu, M., & Zhu, K. (2024, November). Phonetic and Lexical Discovery of Canine Vocalization. In *Findings of the Association for Computational Linguistics: EMNLP 2024* (pp. 13972-13983).
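The (dog, mixture) pairing described in the README can be read with Python's standard `csv` module. A minimal sketch — the two-column layout and the header names `dog` and `mixture` are assumptions for illustration, not confirmed by the dataset card:

```python
import csv
import io

# Hypothetical contents of train_pairs.csv; the real header names may differ.
sample = (
    "dog,mixture\n"
    "6357ca529eec8ca42a1fa588e0725904.wav,f046b186c4def7428cd627ae98d1762d.wav\n"
)

# Each row maps a ground-truth dog vocal file to its mixture file.
pairs = [(row["dog"], row["mixture"]) for row in csv.DictReader(io.StringIO(sample))]
```

In practice you would open the real CSV with `open("train_pairs.csv")` instead of the inline string.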
post_download.py ADDED
@@ -0,0 +1,67 @@
+ #!/usr/bin/env python3
+ import os
+ import shutil
+ import argparse
+
+ def combine_split_files(base_dir, subdir_prefix="subdir_"):
+     """
+     Moves all files from subdirectories (whose names start with subdir_prefix)
+     back to the base directory.
+
+     Args:
+         base_dir (str): Path to the base directory containing split subdirectories.
+         subdir_prefix (str): Prefix used for subdirectories created during split.
+                              Default is "subdir_".
+     """
+     # List items in the base directory
+     items = os.listdir(base_dir)
+     combined_count = 0
+
+     for item in items:
+         sub_dir_path = os.path.join(base_dir, item)
+         # Process only directories with the given prefix
+         if os.path.isdir(sub_dir_path) and item.startswith(subdir_prefix):
+             print(f"Processing directory: {sub_dir_path}")
+             for file_name in os.listdir(sub_dir_path):
+                 src_file = os.path.join(sub_dir_path, file_name)
+                 dst_file = os.path.join(base_dir, file_name)
+                 # If a file with the same name already exists, handle it (here we skip)
+                 if os.path.exists(dst_file):
+                     print(f"Warning: {dst_file} already exists. Skipping {src_file}.")
+                     continue
+                 try:
+                     shutil.move(src_file, dst_file)
+                     combined_count += 1
+                 except Exception as e:
+                     print(f"Error moving {src_file} to {dst_file}: {e}")
+             # After moving, attempt to remove the now-empty subdirectory
+             try:
+                 os.rmdir(sub_dir_path)
+                 print(f"Removed directory: {sub_dir_path}")
+             except Exception as e:
+                 print(f"Could not remove directory {sub_dir_path}: {e}")
+
+     print(f"\nCombined {combined_count} files into {base_dir}.")
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description="Rollback split files by moving files from split subdirectories back into the base directory."
+     )
+     parser.add_argument("base_dir", help="Base directory containing the split subdirectories")
+     parser.add_argument(
+         "--prefix",
+         default="subdir_",
+         help="Prefix of the split subdirectories (default: 'subdir_')"
+     )
+     args = parser.parse_args()
+
+     # Validate the base directory exists
+     if not os.path.isdir(args.base_dir):
+         print(f"Error: {args.base_dir} is not a valid directory.")
+         return
+
+     combine_split_files(args.base_dir, args.prefix)
+
+ if __name__ == "__main__":
+     main()
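A quick way to sanity-check the flattening logic is a round-trip on a throwaway directory. The sketch below re-implements the core move loop inline (so it runs standalone, without importing `post_download.py`) and assumes the same `subdir_` naming:

```python
import os
import shutil
import tempfile

def flatten(base_dir, prefix="subdir_"):
    # Move files from prefix-matching subdirectories back into base_dir,
    # skipping name collisions, then remove each emptied subdirectory.
    for item in sorted(os.listdir(base_dir)):
        sub = os.path.join(base_dir, item)
        if os.path.isdir(sub) and item.startswith(prefix):
            for name in os.listdir(sub):
                dst = os.path.join(base_dir, name)
                if not os.path.exists(dst):
                    shutil.move(os.path.join(sub, name), dst)
            os.rmdir(sub)

# Build a toy split layout: 3 subdirectories x 4 files each.
base = tempfile.mkdtemp()
for i in range(3):
    sub = os.path.join(base, f"subdir_{i}")
    os.makedirs(sub)
    for j in range(4):
        open(os.path.join(sub, f"clip_{i}_{j}.wav"), "w").close()

flatten(base)
files = sorted(os.listdir(base))  # 12 flat files, no subdir_* left
```

After flattening, `base` contains all 12 files directly and the emptied `subdir_*` directories are gone, which is the state the README's directory tree expects.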
prepare_upload.py ADDED
@@ -0,0 +1,72 @@
+ #!/usr/bin/env python3
+ import os
+ import math
+ import argparse
+
+ def split_directory(base_dir, max_files=10000, prefix="subdir_"):
+     """
+     Splits the files in the given base directory into multiple subdirectories,
+     each containing at most 'max_files' files.
+
+     Args:
+         base_dir (str): The directory containing the files to be split.
+         max_files (int): Maximum number of files allowed per subdirectory.
+                          Defaults to 10000.
+         prefix (str): Prefix for the names of the created subdirectories.
+                       Defaults to "subdir_".
+     """
+     # Get a sorted list of only the files in the base directory
+     files = [f for f in sorted(os.listdir(base_dir)) if os.path.isfile(os.path.join(base_dir, f))]
+     total_files = len(files)
+     num_subdirs = math.ceil(total_files / max_files)
+
+     if total_files == 0:
+         print("No files found in the provided base directory.")
+         return
+
+     print(f"Found {total_files} files in {base_dir}. Creating {num_subdirs} subdirectories...")
+
+     for i in range(num_subdirs):
+         # Create subdirectory name (e.g., subdir_0, subdir_1, etc.)
+         subdir_name = f"{prefix}{i}"
+         subdir_path = os.path.join(base_dir, subdir_name)
+         os.makedirs(subdir_path, exist_ok=True)
+
+         # Determine the start and end indices for the files of this subdirectory
+         start_index = i * max_files
+         end_index = min(start_index + max_files, total_files)
+
+         # Move each file from the base directory to the subdirectory
+         for file in files[start_index:end_index]:
+             src_path = os.path.join(base_dir, file)
+             dst_path = os.path.join(subdir_path, file)
+             try:
+                 os.rename(src_path, dst_path)
+             except Exception as e:
+                 print(f"Error moving {src_path} to {dst_path}: {e}")
+
+         print(f"Moved files {start_index} to {end_index - 1} into {subdir_path}")
+
+     print(f"\nSuccessfully moved {total_files} files into {num_subdirs} subdirectories.")
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description="Split a directory with many files into subdirectories with a maximum file count per subdirectory."
+     )
+     parser.add_argument("base_dir", help="The base directory containing the files to split.")
+     parser.add_argument("--max-files", type=int, default=10000,
+                         help="Maximum number of files per subdirectory (default: 10000).")
+     parser.add_argument("--prefix", default="subdir_",
+                         help="Prefix for the created subdirectories (default: 'subdir_').")
+
+     args = parser.parse_args()
+
+     # Validate that the provided base directory exists
+     if not os.path.isdir(args.base_dir):
+         parser.error(f"{args.base_dir} is not a valid directory.")
+
+     split_directory(args.base_dir, args.max_files, args.prefix)
+
+ if __name__ == "__main__":
+     main()
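The splitting step can be exercised the same way. This standalone sketch mirrors the bucketing in `split_directory` (sorted file list, ceil division into fixed-size chunks) on a temporary directory with a small cap so the behavior is easy to see:

```python
import math
import os
import tempfile

def split_dir(base_dir, max_files=5, prefix="subdir_"):
    # Bucket the sorted file list into chunks of at most max_files,
    # moving each chunk into its own numbered subdirectory.
    files = [f for f in sorted(os.listdir(base_dir))
             if os.path.isfile(os.path.join(base_dir, f))]
    for i in range(math.ceil(len(files) / max_files)):
        sub = os.path.join(base_dir, f"{prefix}{i}")
        os.makedirs(sub, exist_ok=True)
        for name in files[i * max_files:(i + 1) * max_files]:
            os.rename(os.path.join(base_dir, name), os.path.join(sub, name))

# 12 files with a 5-file cap -> subdirectories of sizes 5, 5, and 2.
base = tempfile.mkdtemp()
for j in range(12):
    open(os.path.join(base, f"clip_{j:02d}.wav"), "w").close()

split_dir(base, max_files=5)
layout = {d: len(os.listdir(os.path.join(base, d))) for d in sorted(os.listdir(base))}
```

Because the file list is sorted before chunking, the split is deterministic, and running `post_download.py` afterward restores the original flat layout.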
si_sdr.py ADDED
@@ -0,0 +1,68 @@
+ import torch
+
+
+ def _normalize(tensor: torch.Tensor, eps=1e-10) -> torch.Tensor:
+     """
+     Helper function to normalize a tensor
+     Args:
+         tensor (torch.Tensor): input tensor
+         eps (float): small value to avoid division by zero
+     Returns:
+         normalized_tensor (torch.Tensor): normalized tensor
+     """
+     norm = torch.norm(tensor, dim=-1, keepdim=True)
+     normalized_tensor = tensor / (norm + eps)
+     return normalized_tensor
+
+
+ def _calculate_alpha(preds, targets, eps=1e-10) -> torch.Tensor:
+     """
+     Helper function to calculate alpha
+     Args:
+         preds (torch.Tensor): predicted sources
+         targets (torch.Tensor): target sources
+         eps (float): small value to avoid division by zero
+     Returns:
+         alpha (torch.Tensor): alpha value
+     """
+     dot = torch.sum(preds * targets, dim=-1, keepdim=True)
+     target_energy = torch.sum(targets**2, dim=-1, keepdim=True)
+     alpha = (dot + eps) / (target_energy + eps)
+     return alpha
+
+
+ def _calculate_metric(numerator, denominator, eps=1e-10) -> torch.Tensor:
+     """
+     Helper function to calculate sdr and its variants
+     Args:
+         numerator (torch.Tensor): numerator tensor
+         denominator (torch.Tensor): denominator tensor
+         eps (float): small value to avoid division by zero
+     Returns:
+         dB (torch.Tensor): dB value
+     """
+     numerator = torch.sum(numerator, dim=-1) + eps
+     denominator = torch.sum(denominator, dim=-1) + eps
+     dB = 10 * torch.log10(numerator / denominator)
+     return dB
+
+
+ def si_sdr(preds, targets, eps=1e-10) -> torch.Tensor:
+     """
+     Scale Invariant Signal Distortion Ratio (SI-SDR) metric
+     Args:
+         preds (torch.Tensor): predicted sources. (batch, time)
+         targets (torch.Tensor): target sources. (batch, time)
+         eps (float): small value to avoid division by zero
+     Returns:
+         si_sdr (torch.Tensor): SI-SDR value
+     """
+     preds = _normalize(preds, eps=eps)
+     targets = _normalize(targets, eps=eps)
+
+     alpha = _calculate_alpha(preds, targets, eps=eps)
+
+     # compute SI-SDR (in dB)
+     numerator = torch.square(alpha * targets)
+     denominator = torch.square(preds - alpha * targets)
+     return _calculate_metric(numerator, denominator, eps=eps)
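The metric can be sanity-checked without PyTorch by mirroring the same algebra in NumPy. This is an illustrative re-implementation for checking the formula, not the official scoring script (`si_sdr.py` above remains authoritative):

```python
import numpy as np

def si_sdr_np(preds, targets, eps=1e-10):
    # Unit-norm both signals (mirrors _normalize).
    preds = preds / (np.linalg.norm(preds, axis=-1, keepdims=True) + eps)
    targets = targets / (np.linalg.norm(targets, axis=-1, keepdims=True) + eps)
    # Optimal scaling of the target toward the prediction (mirrors _calculate_alpha).
    dot = np.sum(preds * targets, axis=-1, keepdims=True)
    energy = np.sum(targets ** 2, axis=-1, keepdims=True)
    alpha = (dot + eps) / (energy + eps)
    # Energy ratio of scaled target to residual, in dB (mirrors _calculate_metric).
    num = np.sum((alpha * targets) ** 2, axis=-1) + eps
    den = np.sum((preds - alpha * targets) ** 2, axis=-1) + eps
    return 10 * np.log10(num / den)

rng = np.random.default_rng(0)
t = rng.standard_normal((1, 32000))

perfect = si_sdr_np(t, t)       # near-perfect reconstruction: very high dB
scaled = si_sdr_np(3.0 * t, t)  # a globally rescaled prediction scores the same
```

Because both inputs are normalized to unit norm first, a prediction that differs from the target only by a global gain gets the same score — the "scale-invariant" property the metric's name refers to.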