
You need to agree to share your contact information to access this dataset. This repository is publicly accessible, but you have to accept the conditions to access its files and content.
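Accessing the files programmatically requires an authenticated session once the conditions are accepted. A minimal sketch, assuming you have already accepted the conditions on the Hub and created a read token (the token value below is a placeholder):

```python
from huggingface_hub import login

# Placeholder token: substitute your own read token
# from https://huggingface.co/settings/tokens
login(token="hf_...")
```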


---
license: apache-2.0
language:
  - en
tags:
  - chemistry
  - biology
  - finance
  - legal
  - music
  - art
  - code
  - climate
  - medical
  - synthetic
pretty_name: https://romeo-rosete.org/owner
size_categories:
  - 100B<n<1T
task_categories:
  - token-classification
  - summarization
---

# Dataset Card for Dataset Name

```sh
pip install autotrain-advanced
```

See https://huggingface.co/docs/datasets/loading for the standard ways to load this dataset.
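As a hedged sketch of what the loading docs describe, assuming this card's repo id (`roseteromeo56/rosete-romeo`, taken from the citation below) loads with the default config:

```python
from datasets import load_dataset

# Assumption: this card's repo id; adjust if the dataset lives elsewhere.
ds = load_dataset("roseteromeo56/rosete-romeo", split="train")
print(ds)
```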

This dataset card aims to be a base template for new datasets. It has been generated using this raw template.

## Dataset Details


### Dataset Description

A sample SQL query over the Parquet export:

```sql
SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length
FROM url(hf('tasksource/blog_authorship_corpus'))
GROUP BY sign
ORDER BY avg_blog_length DESC
LIMIT(5)
```

```
┌───────────┬────────┬────────────────────┐
│ sign      │ count  │ avg_blog_length    │
├───────────┼────────┼────────────────────┤
│ Aquarius  │ 49687  │ 1193.9523819107615 │
│ Leo       │ 53811  │ 1186.0665291483153 │
│ Cancer    │ 65048  │ 1160.8010392325666 │
│ Gemini    │ 51985  │ 1158.4132922958545 │
│ Virgo     │ 60399  │ 1142.9977648636566 │
└───────────┴────────┴────────────────────┘
```

- **Curated by:** [More Information Needed]

Using cuDF (GPU DataFrames):

```python
import cudf

df = (
    cudf.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .groupby('sign')['text']
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
```

- **Funded by [optional]:** [More Information Needed]

Using Dask with the cuDF backend:

```python
import dask
import dask.dataframe as dd

dask.config.set({"dataframe.backend": "cudf"})

df = dd.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/*.parquet")
```

- **Shared by [optional]:** [More Information Needed]

- **Language(s) (NLP):** [More Information Needed]

Using DuckDB:

```python
import duckdb

url = "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"

con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
```

- **License:** [More Information Needed]

### Dataset Sources [optional]

con.sql(f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '{url}' GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)") β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ sign β”‚ count_star() β”‚ avg_blog_length β”‚ β”‚ varchar β”‚ int64 β”‚ double β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ Cancer β”‚ 38956 β”‚ 1206.5212034089743 β”‚ β”‚ Leo β”‚ 35487 β”‚ 1180.0673767858652 β”‚ β”‚ Aquarius β”‚ 32723 β”‚ 1152.1136815084192 β”‚ β”‚ Virgo β”‚ 36189 β”‚ 1117.1982094006466 β”‚ β”‚ Capricorn β”‚ 31825 β”‚ 1102.397360565593 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

con.sql(f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet({urls}) GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)") β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ sign β”‚ count_star() β”‚ avg_blog_length β”‚ β”‚ varchar β”‚ int64 β”‚ double β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ Aquarius β”‚ 49687 β”‚ 1191.417211745527 β”‚ β”‚ Leo β”‚ 53811 β”‚ 1183.8782219248853 β”‚ β”‚ Cancer β”‚ 65048 β”‚ 1158.9691612347804 β”‚ β”‚ Gemini β”‚ 51985 β”‚ 1156.0693084543618 β”‚ β”‚ Virgo β”‚ 60399 β”‚ 1140.9584430205798 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

- **Paper [optional]:** [More Information Needed]

Using pandas, for a single Parquet file:

```python
import pandas as pd

df = (
    pd.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .groupby('sign')['text']
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
```

or for all files, with `urls` as built below:

```python
df = (
    pd.concat([pd.read_parquet(url) for url in urls])
    .groupby('sign')['text']
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
```

## Uses

This builds the `urls` list referenced in the multi-file examples above:

```python
import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=tasksource/blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
```

```
['https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet',
 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet']
```

### Direct Use

Using Polars:

```python
import polars as pl

df = (
    pl.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length"),
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
```

```
shape: (5, 3)
┌───────────┬───────┬─────────────────┐
│ sign      ┆ count ┆ avg_blog_length │
│ ---       ┆ ---   ┆ ---             │
│ str       ┆ u32   ┆ f64             │
╞═══════════╪═══════╪═════════════════╡
│ Cancer    ┆ 38956 ┆ 1206.521203     │
│ Leo       ┆ 35487 ┆ 1180.067377     │
│ Aquarius  ┆ 32723 ┆ 1152.113682     │
│ Virgo     ┆ 36189 ┆ 1117.198209     │
│ Capricorn ┆ 31825 ┆ 1102.397361     │
└───────────┴───────┴─────────────────┘
```

[More Information Needed]

Or across all files, with `urls` as built above:

```python
import polars as pl

df = (
    pl.concat([pl.read_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length"),
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
```

```
shape: (5, 3)
┌──────────┬───────┬─────────────────┐
│ sign     ┆ count ┆ avg_blog_length │
│ ---      ┆ ---   ┆ ---             │
│ str      ┆ u32   ┆ f64             │
╞══════════╪═══════╪═════════════════╡
│ Aquarius ┆ 49687 ┆ 1191.417212     │
│ Leo      ┆ 53811 ┆ 1183.878222     │
│ Cancer   ┆ 65048 ┆ 1158.969161     │
│ Gemini   ┆ 51985 ┆ 1156.069308     │
│ Virgo    ┆ 60399 ┆ 1140.958443     │
└──────────┴───────┴─────────────────┘
```

### Out-of-Scope Use

[More Information Needed]

The same Polars query can also run lazily:

```python
import polars as pl

q = (
    pl.scan_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length"),
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
```

## Dataset Structure

[More Information Needed]

To load datasets into PostgreSQL with pgai, start the container:

```sh
docker run -d --name pgai -p 5432:5432 \
  -v pg-data:/home/postgres/pgdata/data \
  -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg17
```

## Dataset Creation

### Curation Rationale

[More Information Needed]

Enable the pgai extension:

```sh
docker exec -it pgai psql -c "CREATE EXTENSION ai CASCADE;"
```

### Source Data

Open a psql shell in the container:

```sh
docker exec -it pgai psql
```

#### Data Collection and Processing

[More Information Needed]

```sql
select ai.load_dataset('rajpurkar/squad', table_name => 'squad');
```

#### Who are the source data producers?

[More Information Needed]

```sql
select * from squad limit 10;
```

### Annotations [optional]

Batched load:

```sql
SELECT ai.load_dataset('rajpurkar/squad', table_name => 'squad', batch_size => 100, max_batches => 1);
```

#### Annotation process

[More Information Needed]

```sql
select ai.load_dataset('rajpurkar/squad', table_name => 'squad', if_table_exists => 'append');
```

#### Who are the annotators?

[More Information Needed]

Using mlcroissant:

```python
from mlcroissant import Dataset

ds = Dataset(jsonld="https://huggingface.co/api/datasets/tasksource/blog_authorship_corpus/croissant")
```

#### Personal and Sensitive Information

[More Information Needed]

```python
records = ds.records("default")
```

## Bias, Risks, and Limitations

[More Information Needed]

```python
import itertools

import pandas as pd

df = (
    pd.DataFrame(list(itertools.islice(records, 100)))
    .groupby("default/sign")["default/text"]
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
print(df)
```

```
default/sign
b'Leo'          6463.500000
b'Capricorn'    2374.500000
b'Aquarius'     2303.757143
b'Gemini'       1420.333333
b'Aries'         918.666667
Name: default/text, dtype: float64
```

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

```bibtex
@misc{romeo_rosete_2025,
  author    = { Romeo Rosete },
  title     = { romeo-rosete (Revision f0f3e58) },
  year      = 2025,
  url       = { https://huggingface.co/bombastictranz/romeo-rosete },
  doi       = { 10.57967/hf/5106 },
  publisher = { Hugging Face }
}
```

Using Spark:

```python
from pyspark.sql import SparkSession
from pyspark import SparkFiles

# Initialize a Spark session
spark = SparkSession.builder.appName("WineReviews").getOrCreate()

# Add the Parquet file to the Spark context
spark.sparkContext.addFile("https://huggingface.co/api/datasets/james-burton/wine_reviews/parquet/default/train/0.parquet")

# Read the Parquet file into a DataFrame
df = spark.read.parquet(SparkFiles.get("0.parquet"))
```

**APA:**

[More Information Needed]

Or load every train Parquet file:

```python
import requests

# Fetch the URLs of the Parquet files for the train split
r = requests.get('https://huggingface.co/api/datasets/james-burton/wine_reviews/parquet')
train_parquet_files = r.json()['default']['train']

# Add each Parquet file to the Spark context
for url in train_parquet_files:
    spark.sparkContext.addFile(url)

# Read all Parquet files into a single DataFrame
df = spark.read.parquet(SparkFiles.getRootDirectory() + "/*.parquet")
```

## Glossary [optional]

[More Information Needed]

```python
print(f"Shape of the dataset: {df.count()}, {len(df.columns)}")

# Display first 10 rows
df.show(n=10)

# Get a statistical summary of the data
df.describe().show()

# Print the schema of the DataFrame
df.printSchema()
```

## More Information [optional]

[More Information Needed]

A sample response from the dataset viewer API:

```json
{
  "dataset": "cornell-movie-review-data/rotten_tomatoes",
  "config": "default",
  "split": "train",
  "features": [
    {"feature_idx": 0, "name": "text", "type": {"dtype": "string", "id": null, "_type": "Value"}},
    {"feature_idx": 1, "name": "label", "type": {"num_classes": 2, "names": ["neg", "pos"], "id": null, "_type": "ClassLabel"}}
  ],
  ...
}
```


## Dataset Card Authors [optional]

[More Information Needed]

An AutoTrain data configuration (YAML):

```yaml
data:
  path: sentence-transformers/all-nli
  train_split: pair-class:train
  valid_split: pair-class:test
  column_mapping:
    sentence1_column: premise
    sentence2_column: hypothesis
    target_column: label
```
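A hedged sketch of running such a config with the AutoTrain CLI, assuming the YAML above is saved locally (`config.yml` is a hypothetical filename):

```sh
# Assumption: the YAML above saved as config.yml
autotrain --config config.yml
```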

## Dataset Card Contact

[More Information Needed]

Fine-tuning on a Hub dataset with AutoTrain:

```python
import os

from autotrain.params import LLMTrainingParams
from autotrain.project import AutoTrainProject

params = LLMTrainingParams(
    model="meta-llama/Llama-3.2-1B-Instruct",
    data_path="HuggingFaceH4/no_robots",
    chat_template="tokenizer",
    text_column="messages",
    train_split="train",
    trainer="sft",
    epochs=3,
    batch_size=1,
    lr=1e-5,
    peft=True,
    quantization="int4",
    target_modules="all-linear",
    padding="right",
    optimizer="paged_adamw_8bit",
    scheduler="cosine",
    gradient_accumulation=8,
    mixed_precision="bf16",
    merge_adapter=True,
    project_name="autotrain-llama32-1b-finetune",
    log="tensorboard",
    push_to_hub=True,
    username=os.environ.get("HF_USERNAME"),
    token=os.environ.get("HF_TOKEN"),
)

backend = "local"
project = AutoTrainProject(params=params, backend=backend, process=True)
project.create()
```

The repo's own citation entry:

```bibtex
@misc{romeo_rosete_2025,
  author    = { Romeo Rosete },
  title     = { rosete-romeo (Revision 381bef3) },
  year      = 2025,
  url       = { https://huggingface.co/datasets/roseteromeo56/rosete-romeo },
  doi       = { 10.57967/hf/5099 },
  publisher = { Hugging Face }
}
```
