Unable to use dataset
I am unable to attach the dataset to Google Colab. I mounted the data directly on Drive using the URL, but the given code for reading the data is not working. How can I use these 3000 feedback entries? Can anyone help, please?
Had the same problem, but I managed to solve it. The root of the problem is that the TSV files have different column structures, so that is what we need to handle.
This code works for what I need, but it could be a good starting point for other goals. Hope this is helpful:
```
from huggingface_hub import hf_hub_download
import pandas as pd


def load_tsv_file(repo_id, file_path):
    try:
        local_path = hf_hub_download(
            repo_id=repo_id,
            filename=file_path,
            repo_type="dataset"
        )
        # Try to load without a header first
        df = pd.read_csv(local_path, sep='\t', header=None)
        print(f"Successfully loaded {file_path}")
        return df
    except Exception as e:
        print(f"Error loading {file_path}: {e}")
        return None


# List of files to try loading
file_paths = [
    "Annotated Student Feedback Data/Annotator_1/towe-eacl_recreation_data_set/defomative comment removed/train.tsv",
    "Annotated Student Feedback Data/Annotator_1/towe-eacl_recreation_data_set/less than 100 lengthy comment/train.tsv"
]
repo_id = "NLPC-UOM/Student_feedback_analysis_dataset"

dataframes = []

# Try to load each file
for file_path in file_paths:
    df = load_tsv_file(repo_id, file_path)
    if df is not None:
        # Print the column count to detect inconsistencies
        print(f"File {file_path} has {len(df.columns)} columns")
        dataframes.append(df)

# Check whether the files have inconsistent column counts
column_counts = [len(df.columns) for df in dataframes]
if len(set(column_counts)) > 1:
    print("Warning: Files have inconsistent column counts")
    for i, df in enumerate(dataframes):
        print(f"DataFrame {i} has columns: {df.columns}")

normalized_dfs = []
for df in dataframes:
    # Keep only the first 3 columns if there are more
    if len(df.columns) >= 3:
        new_df = df.iloc[:, 0:3].copy()  # Select the first 3 columns (copy to avoid modifying a view)
        new_df.columns = ['id', 'feedback_text', 'annotation']  # Rename consistently
        normalized_dfs.append(new_df)
    else:
        print(f"Skipping dataframe with only {len(df.columns)} columns")

# Now you can concatenate the normalized dataframes
if normalized_dfs:
    combined_df = pd.concat(normalized_dfs, ignore_index=True)
    print(f"Combined dataframe has {len(combined_df)} rows")
    print(combined_df.head())
```
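One design note: `header=None` plus keeping only the first three columns is a blunt but effective way to normalize the mismatched files; if the extra columns in some files carry information you need, adjust the slice instead. Also, if you want to load every train.tsv in the repo rather than hard-coding two paths, you could list the repo contents first. A minimal sketch using `HfApi.list_repo_files` from `huggingface_hub` (the `train.tsv` filter is just an assumption about which files you want):

```
from huggingface_hub import HfApi

# Discover every train.tsv in the dataset repo instead of hard-coding paths
api = HfApi()
all_files = api.list_repo_files(
    repo_id="NLPC-UOM/Student_feedback_analysis_dataset",
    repo_type="dataset"
)
tsv_paths = [p for p in all_files if p.endswith("train.tsv")]
print(tsv_paths)  # pass these to load_tsv_file() defined above
```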