This repository is publicly accessible, but you must agree to share your contact information and accept the conditions before you can access its files and content.

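Once the conditions have been accepted on the Hub, the dataset can be loaded with an authenticated client. This is a minimal sketch, assuming a user access token with read permission; the repository id is the one this page belongs to.

```python
from huggingface_hub import login
from datasets import load_dataset

login()  # prompts for a Hugging Face access token with read permission

# Gated repositories only resolve after the access conditions are accepted.
dataset = load_dataset("bombastictranz/romeo-rosete")
```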

Dataset Card for Dataset Name

https://huggingface.co/docs/datasets/index

This dataset card aims to be a base template for new datasets. It has been generated using this raw template.

Dataset Details

```python
from datasets import Audio

# `dataset` is assumed to be an already-loaded audio dataset.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
dataset[0]["audio"]
```

Dataset Description


```js
import { pipeline } from '@huggingface/transformers';

// Allocate a pipeline for sentiment-analysis
const pipe = await pipeline('sentiment-analysis');

const out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
```


  • Curated by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]

```js
// Use a different model for sentiment-analysis
const pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
```

```js
// Run the model on WebGPU
const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
  device: 'webgpu',
});
```

```js
// Run the model at 4-bit quantization
const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
  dtype: 'q4',
});
```

```bash
model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --device=/dev/kfd --device=/dev/dri --group-add video \
    --ipc=host --shm-size 256g --net host -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.2.3-rocm \
    --model-id $model
```
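Once the container is running, the endpoint can be smoke-tested with a short request. This is a sketch assuming TGI serves its `/generate` route on the host network at the default port; adjust the port if the launcher is configured differently.

```python
import requests

# Assumes the container is on the host network at TGI's default port (80).
response = requests.post(
    "http://127.0.0.1:80/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
)
print(response.json())
```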


Dataset Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

```python
import requests

API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
payload = {
    "messages": [
        {"role": "user", "content": "How many 'G's in 'huggingface'?"}
    ],
    "model": "deepseek/deepseek-v3-0324",
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json()["choices"][0]["message"])
```

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="novita",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",
    messages=[
        {"role": "user", "content": "How many 'G's in 'huggingface'?"}
    ],
)

print(completion.choices[0].message)
```

```js
import fetch from "node-fetch";

const response = await fetch(
  "https://router.huggingface.co/novita/v3/openai/chat/completions",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      provider: "novita",
      model: "deepseek-ai/DeepSeek-V3-0324",
      messages: [
        {
          role: "user",
          content: "How many 'G's in 'huggingface'?",
        },
      ],
    }),
  }
);
console.log(await response.json());
```

Uses

```js
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient("hf_xxxxxxxxxxxxxxxxxxxxxxxx");

const chatCompletion = await client.chatCompletion({
  provider: "novita",
  model: "deepseek-ai/DeepSeek-V3-0324",
  messages: [
    {
      role: "user",
      content: "How many 'G's in 'huggingface'?",
    },
  ],
});

console.log(chatCompletion.choices[0].message);
```

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="hf-inference",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

output = client.image_classification("cats.jpg", model="Falconsai/nsfw_image_detection")
```

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="together",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

# output is a PIL.Image object
image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-dev",
)
```
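Since `text_to_image` returns a `PIL.Image` object, the result can be saved directly. A brief follow-up to the snippet above; the filename is illustrative.

```python
# Persist the generated image to disk; the path is an arbitrary choice.
image.save("astronaut_riding_horse.png")
```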

Direct Use

```bash
# List all models served by Fireworks AI
~ curl -s https://huggingface.co/api/models?inference_provider=fireworks-ai | jq ".[].id"
"deepseek-ai/DeepSeek-V3-0324"
"deepseek-ai/DeepSeek-R1"
"Qwen/QwQ-32B"
"deepseek-ai/DeepSeek-V3"
...
```
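The same listing is available from Python. A sketch assuming a recent `huggingface_hub` release in which `list_models` accepts an `inference_provider` filter; the `limit` value is arbitrary.

```python
from huggingface_hub import list_models

# Filter Hub models by the inference provider that serves them.
for model in list_models(inference_provider="fireworks-ai", limit=10):
    print(model.id)
```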

[More Information Needed]

Out-of-Scope Use

```bash
# List text-to-image models served by Fal AI
~ curl -s "https://huggingface.co/api/models?inference_provider=fal-ai&pipeline_tag=text-to-image" | jq ".[].id"
"black-forest-labs/FLUX.1-dev"
"stabilityai/stable-diffusion-3.5-large"
"black-forest-labs/FLUX.1-schnell"
"stabilityai/stable-diffusion-3.5-large-turbo"
...
```

[More Information Needed]

Dataset Structure

[More Information Needed]

```bash
# List image-text-to-text models served by Novita or Sambanova
~ curl -s "https://huggingface.co/api/models?inference_provider=sambanova,novita&pipeline_tag=image-text-to-text" | jq ".[].id"
"meta-llama/Llama-3.2-11B-Vision-Instruct"
"meta-llama/Llama-3.2-90B-Vision-Instruct"
"Qwen/Qwen2-VL-72B-Instruct"
```

Dataset Creation


Curation Rationale

[More Information Needed]

```bash
# List text-to-video models served by any provider
~ curl -s "https://huggingface.co/api/models?inference_provider=all&pipeline_tag=text-to-video" | jq ".[].id"
"Wan-AI/Wan2.1-T2V-14B"
"Lightricks/LTX-Video"
"tencent/HunyuanVideo"
"Wan-AI/Wan2.1-T2V-1.3B"
"THUDM/CogVideoX-5b"
"genmo/mochi-1-preview"
"BagOu22/Lora_HKLPAZ"
```

Source Data

```python
from huggingface_hub import model_info

info = model_info("google/gemma-3-27b-it", expand="inference")
info.inference
```

Data Collection and Processing

[More Information Needed]

```python
from huggingface_hub import model_info

info = model_info("manycore-research/SpatialLM-Llama-1B", expand="inference")
info.inference
```

Who are the source data producers?

[More Information Needed]

```python
from huggingface_hub import model_info

info = model_info("google/gemma-3-27b-it", expand="inferenceProviderMapping")
info.inference_provider_mapping
```

Annotations [optional]

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="sambanova",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
    max_tokens=500,
)

print(completion.choices[0].message)
```

Annotation process

[More Information Needed]

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="nebius",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
    max_tokens=500,
)

print(completion.choices[0].message)
```


Who are the annotators?

[More Information Needed]

```bash
pip install sagemaker --upgrade
```

```python
from sagemaker.huggingface import HuggingFaceModel

# create Hugging Face Model Class and deploy it as SageMaker endpoint
huggingface_model = HuggingFaceModel(...).deploy()
```

Personal and Sensitive Information

[More Information Needed]

```python
import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()
```

Bias, Risks, and Limitations

[More Information Needed]

```python
import sagemaker

sess = sagemaker.Session()
role = sagemaker.get_execution_role()
```

Recommendations

```python
from sagemaker.huggingface import HuggingFace

############ pseudo code start ############

# create Hugging Face Estimator for training
huggingface_estimator = HuggingFace(....)

# start the train job with our uploaded datasets as input
huggingface_estimator.fit(...)

############ pseudo code end ############

# deploy model to SageMaker Inference
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# example request: you always need to define "inputs"
data = {
    "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}

# request
predictor.predict(data)
```
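To make the pseudocode concrete, here is a hedged sketch of what the estimator construction might look like. The entry point, source directory, instance type, and hyperparameters are illustrative assumptions, not values from this card.

```python
from sagemaker.huggingface import HuggingFace

# Illustrative configuration only; align versions with the deployed model.
huggingface_estimator = HuggingFace(
    entry_point="train.py",         # assumed training script
    source_dir="./scripts",         # assumed directory containing train.py
    instance_type="ml.p3.2xlarge",  # assumed GPU instance
    instance_count=1,
    role=role,                      # IAM role, as obtained earlier
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 1},  # assumed hyperparameters
)
```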

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]


APA:

[More Information Needed]

```python
from sagemaker.huggingface.model import HuggingFaceModel

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data="s3://models/my-bert-model/model.tar.gz",  # path to your trained SageMaker model
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version='py39',            # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)

# example request: you always need to define "inputs"
data = {
    "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}

# request
predictor.predict(data)
```

Glossary [optional]

[More Information Needed]

```python
# delete endpoint
predictor.delete_endpoint()
```

More Information [optional]

[More Information Needed]

```
model.tar.gz/
|- pytorch_model.bin
|- vocab.txt
|- tokenizer_config.json
|- config.json
|- special_tokens_map.json
```

Dataset Card Authors [optional]

[More Information Needed]

```bash
git lfs install
git clone [email protected]:{repository}
cd {repository}
tar zcvf model.tar.gz *
aws s3 cp model.tar.gz s3://{my-s3-path}
```

Dataset Card Contact

[More Information Needed]

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration: https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'distilbert-base-uncased-distilled-squad',  # model_id from hf.co/models
    'HF_TASK': 'question-answering'                            # NLP task you want to use for predictions
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version='py39',            # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)

# example request: you always need to define "inputs"
data = {
    "inputs": {
        "question": "What is used for inference?",
        "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
    }
}

# request
predictor.predict(data)

# delete endpoint
predictor.delete_endpoint()
```

```python
# create transformer to run a batch job from a trained estimator
batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord'
)

batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',
    split_type='Line'
)
```

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration: https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'distilbert/distilbert-base-uncased-finetuned-sst-2-english',
    'HF_TASK': 'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version='py39',            # Python version used
)

# create transformer to run a batch job
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord'
)

# starts batch transform job and uses S3 data as input
batch_job.transform(
    data='s3://sagemaker-s3-demo-test/samples/input.jsonl',
    content_type='application/json',
    split_type='Line'
)
```

Sample `input.jsonl` contents:

```
{"inputs":"this movie is terrible"}
{"inputs":"this movie is amazing"}
{"inputs":"SageMaker is pretty cool"}
{"inputs":"SageMaker is pretty cool"}
{"inputs":"this movie is terrible"}
{"inputs":"this movie is amazing"}
```

```bash
pip install "sagemaker>=2.231.0"
```

```python
import time

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

image_uri = get_huggingface_llm_image_uri(
    backend="huggingface",
    region=region
)
model_name = "llama-3-1-8b-instruct" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID': 'meta-llama/Llama-3.1-8B-Instruct',
    'SM_NUM_GPUS': '1',
    'HUGGING_FACE_HUB_TOKEN': '',
}

assert hub['HUGGING_FACE_HUB_TOKEN'] != '', "You have to provide a token."

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name=model_name
)

input_data = {
    "inputs": "The diamondback terrapin was the first reptile to",
    "parameters": {
        "do_sample": True,
        "max_new_tokens": 100,
        "temperature": 0.7,
        "watermark": True
    }
}

predictor.predict(input_data)
# [{'generated_text': 'The diamondback terrapin was the first reptile to make the list, followed by the American alligator, the American crocodile, and the American box turtle. The polecat, a ferret-like animal, and the skunk rounded out the list, both having gained their slots because they have proven to be particularly dangerous to humans.\n\nCalifornians also seemed to appreciate the new list, judging by the comments left after the election.\n\n“This is fantastic,” one commenter declared.\n\n“California is a very'}]
```
