AI & ML interests

Computer Vision Technology and Data Collection for Anime Waifu

Recent Activity

deepghs's activity

not-lain 
posted an update about 19 hours ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
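For anyone curious how that works, here is a minimal sketch of the mixin pattern (the class name TinyNet and the repo id are hypothetical; PyTorchModelHubMixin is the PyTorch flavor of ModelHubMixin):

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Inheriting from the mixin adds save/push/load methods to a plain nn.Module
class TinyNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.layer = nn.Linear(hidden, 1)

model = TinyNet()
model.push_to_hub("your-username/tiny-net")  # hypothetical repo id
reloaded = TinyNet.from_pretrained("your-username/tiny-net")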
Tonic 
posted an update 2 days ago
🙋🏻‍♂️ Hey there folks,

Facebook AI just released JASCO models that generate music stems.

You can try it out here: Tonic/audiocraft

Hope you like it!
Tonic 
posted an update 4 days ago
🙋🏻‍♂️ Hey there folks, Open LLM Europe just released the Lucie 7B-Instruct model, a bilingual instruct model trained on open data! You can check out my unofficial demo here while we wait for the official inference API from the group: Tonic/Lucie-7B. Hope you like it 🚀
not-lain 
posted an update 6 days ago
Tonic 
posted an update 10 days ago
Microsoft just released Phi-4. Check it out here: Tonic/Phi-4

Hope you like it :-)
DamarJati 
posted an update 18 days ago
Happy New Year 2025 to the Hugging Face community 🤗
eienmojiki 
posted an update about 1 month ago
👀 Introducing 2048 Game API: A RESTful API for the Classic Puzzle Game 🧩

I'm excited to share my latest project, 2048 Game API, a RESTful API that lets you create, manage, and play games of 2048, the popular puzzle game where players slide numbered tiles to combine them, aiming to reach a tile with the value 2048.

⭐ Features
Create new games with customizable board sizes (3-8)
Make moves (up, down, left, right) and get the updated game state
Get the current game state, including the board, score, and game over status
Delete games
Generate images of the game board with customizable themes (light and dark)

🔗 API Endpoints
POST /api/games - Create a new game
GET /api/games/:gameId - Get the current game state
POST /api/games/:gameId/move - Make a move (up, down, left, right)
DELETE /api/games/:gameId - Delete a game
GET /api/games/:gameId/image - Generate an image of the game board

🧩 Example Use Cases
- Create a new game with a 4x4 board:
curl -X POST -H "Content-Type: application/json" -d '{"size": 4}' http://localhost:3000/api/games

- Make a move up:
curl -X POST -H "Content-Type: application/json" -d '{"direction": "up"}' http://localhost:3000/api/games/:gameId/move

- Get the current game state:
curl -X GET http://localhost:3000/api/games/:gameId
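
The same flow in Python, for anyone scripting against the API (a minimal sketch assuming a local server on port 3000, per the examples above; the gameId response field is an assumption, so check the actual response shape):

import requests

BASE = "http://localhost:3000/api"

# Create a new game with a 4x4 board
game = requests.post(f"{BASE}/games", json={"size": 4}).json()
game_id = game["gameId"]  # assumed response field name

# Make a move up, then fetch the updated game state
requests.post(f"{BASE}/games/{game_id}/move", json={"direction": "up"})
print(requests.get(f"{BASE}/games/{game_id}").json())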

💕 Try it out!
- Demo: eienmojiki/2048
- Source: https://github.com/kogakisaki/koga-2048
- You can try the API by running the server locally or by sending requests with a tool like Postman. I hope you enjoy playing 2048 with it!

Let me know if you have any questions or feedback!

🐧 Mouse1 is our friend🐧
ImranzamanML 
posted an update about 1 month ago
Deep understanding of the Concordance Index (C-index) evaluation measure for better models
Let's start with three patient groups:

Group A
Group B
Group C
For each patient, we predict a risk score (a higher score means a higher risk of an early event).

Step 1: Understanding the Concordance Index
The Concordance Index (C-index) evaluates how well the model ranks survival times.

Let's understand with sample data. Group A has 3 patients with actual survival times and predicted risk scores:

Patient | Actual Survival Time | Predicted Risk Score
P1      | 5 months             | 0.8
P2      | 3 months             | 0.9
P3      | 10 months            | 0.2
Comparable pairs:

(P1, P2): P2 has a shorter survival time and a higher risk score → Concordant ✅
(P1, P3): P3 has a longer survival time and a lower risk score → Concordant ✅
(P2, P3): P3 has a longer survival time and a lower risk score → Concordant ✅
Total pairs = 3
Total concordant pairs = 3

C-index for Group A = Concordant pairs / Total pairs = 3/3 = 1.0
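
In code, the pairwise counting looks like this (a minimal sketch that assumes every event is observed, i.e. no censoring, and no tied survival times):

from itertools import combinations

def c_index(surv_times, risk_scores):
    # A pair is concordant when the patient with the shorter survival
    # time also has the higher predicted risk score.
    concordant, total = 0, 0
    for i, j in combinations(range(len(surv_times)), 2):
        total += 1
        if (surv_times[i] - surv_times[j]) * (risk_scores[j] - risk_scores[i]) > 0:
            concordant += 1
    return concordant / total

# Group A from the table above
print(c_index([5, 3, 10], [0.8, 0.9, 0.2]))  # -> 1.0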

Step 2: Calculate C-index for All Groups
Repeat the process for all groups. For now we can assume:

Group A: C-index = 1.0
Group B: C-index = 0.8
Group C: C-index = 0.6
Step 3: Stratified Concordance Index
The Stratified Concordance Index combines the C-index scores of all groups, focusing on the following:

Average performance across groups (mean of C-indices).
Consistency across groups (low standard deviation of C-indices).
Formula:
Stratified C-index = Mean(C-index scores) - Standard Deviation(C-index scores)

Calculate the mean:
Mean = (1.0 + 0.8 + 0.6) / 3 = 0.8

Calculate the standard deviation:
Standard Deviation = sqrt(((1.0 - 0.8)^2 + (0.8 - 0.8)^2 + (0.6 - 0.8)^2) / 3) ≈ 0.16

Stratified C-index:
Stratified C-index = 0.8 - 0.16 = 0.64
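
The same arithmetic in Python (using the population standard deviation to match the hand calculation above):

import statistics

scores = [1.0, 0.8, 0.6]  # C-indices for Groups A, B, C from Step 2
mean = statistics.fmean(scores)
std = statistics.pstdev(scores)
print(round(mean - std, 2))  # -> 0.64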

Step 4: Interpret the Results
A high Stratified C-index means:

The model predicts well overall (high mean C-index).
The model performs consistently across groups (low standard deviation of C-indices).
lunarflu 
posted an update about 1 month ago
not-lain 
posted an update 2 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local image (path, URL, PIL image, or numpy array) as base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
Tonic 
posted an update 2 months ago
🙋🏻‍♂️ Hey there folks,

Periodic reminder: if you are experiencing ⚠️ 500 errors ⚠️ or ⚠️ abnormal Spaces behavior on load or launch ⚠️,

we have a thread 👉🏻 https://discord.com/channels/879548962464493619/1295847667515129877

If you can record the problem, please share it there or on the forums in your own post. Don't be shy; I'm not sure, but I do think it helps 🤗🤗🤗
Tonic 
posted an update 3 months ago
Boomers still pick zenodo.org instead of Hugging Face??? Absolutely clownish nonsense; my random datasets have 30x more downloads and views than front-page Zenodo entries... gonna write a comparison blog, but yeah... cringe.
ImranzamanML 
posted an update 3 months ago
Easy steps for an effective RAG pipeline with LLM models!
1. Document Embedding & Indexing
We can start by using embedding models to vectorize documents and storing them in vector databases (Elasticsearch, Pinecone, Weaviate) for efficient retrieval.

2. Smart Querying
Then we can generate query embeddings, retrieve the top-K relevant chunks, and apply hybrid search if needed for better precision.

3. Context Management
We can concatenate the retrieved chunks, optimize chunk order, and stay within token limits to preserve response coherence.

4. Prompt Engineering
Then we can instruct the LLM to leverage retrieved context, using clear instructions to prioritize the provided information.

5. Post-Processing
Finally, we can implement response verification and fact-checking, and integrate feedback loops to refine the responses.
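
To make steps 1 and 2 concrete, here is a minimal sketch using sentence-transformers with a small in-memory index in place of a hosted vector database (the model name and documents are illustrative):

import numpy as np
from sentence_transformers import SentenceTransformer

# Step 1: embed and index the documents (in-memory stand-in for a vector DB)
model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Chunk about embeddings.", "Chunk about vector databases.", "Chunk about prompting."]
doc_vecs = model.encode(docs, normalize_embeddings=True)

# Step 2: embed the query and retrieve the top-K chunks by cosine similarity
query = "How are documents retrieved in RAG?"
query_vec = model.encode([query], normalize_embeddings=True)[0]
top_k = np.argsort(doc_vecs @ query_vec)[::-1][:2]

# Steps 3-4: concatenate the retrieved chunks and prompt the LLM with them
context = "\n".join(docs[i] for i in top_k)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"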

Happy to connect :)
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks,

Really enjoying sharing cool genomics and protein datasets on the Hub these days. Check out our cool new org: https://huggingface.co/seq-to-pheno

Scroll down for the datasets. Still figuring out how to optimize for discoverability; I do think on that front it will be better than zenodo.org. It would be nice to write a tutorial about that and compare: we already have more downloads than most Zenodo datasets from famous researchers!
ImranzamanML 
posted an update 3 months ago
Are you a professional Python developer? Here is why logging is important for debugging, tracking, and monitoring your code.

Logging
Logging is a very important part of any project you start. It helps you track the execution of a program, debug issues, monitor system performance, and keep an audit trail of events.

Basic Logging Setup
The basic way to add logging to Python code is the logging.basicConfig() function. It sets up a basic configuration for logging messages to the console or to a file.

Here is how we can use basic console logging:
# Call the built-in library
import logging

# Start logging
logging.basicConfig(level=logging.DEBUG)  # you can add more format specifiers

# Messages show on the console since we did not add a filename to save logs
logging.debug('Here we go for debug message')
logging.info('Here we go for info message')
logging.warning('Here we go for warning message')
logging.error('Here we go for error message')
logging.critical('Here we go for critical message')

# Note:
# If you want to interpolate values into the log message, do it like this
records = 100
logging.debug('There are total %s number of records.', records)

# same as string formatting
lost = 20
logging.debug('There are total %s number of records from which %s are lost', records, lost)



Logging to a File
We can also save the logs to a file instead of the console. For this, we add the filename parameter to logging.basicConfig().

import logging
# Saving the log to a file. The logs will be written to app.log
logging.basicConfig(filename='app.log', level=logging.DEBUG)

logging.debug('Here we go for debug message')
logging.info('Here we go for info message')
logging.warning('Here we go for warning message')
logging.error('Here we go for error message')
logging.critical('Here we go for critical message')
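
The format specifiers mentioned above let you enrich each record; here is a quick sketch (fields like %(asctime)s and %(levelname)s are standard logging format specifiers):

import logging

# Add a timestamp, level, and logger name to every record written to app.log
logging.basicConfig(
    filename='app.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
)
logging.info('Request processed')  # e.g. 2025-01-15 12:00:00,123 INFO root: Request processed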

You can read more on my Medium blog: https://medium.com/@imranzaman-5202/are-you-a-professional-python-developer-8596e2b2edaa
ImranzamanML 
posted an update 3 months ago
LoRA with code 🚀 using PEFT (parameter-efficient fine-tuning)

LoRA (Low-Rank Adaptation)
LoRA adds low-rank matrices to specific layers, reducing the number of trainable parameters for efficient fine-tuning.

Code:
Please install these libraries first:
pip install peft
pip install datasets
pip install transformers

from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

# Loading the pre-trained BERT model and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

# Configuring the LoRA parameters
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="SEQ_CLS"  # marks the classification head as trainable alongside the LoRA weights
)

# Applying LoRA to the model
model = get_peft_model(model, lora_config)

# Loading and tokenizing the dataset for classification
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

train_dataset = dataset["train"].map(tokenize, batched=True)

# Setting the training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    logging_dir="./logs",
)

# Creating a Trainer instance for fine-tuning
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

# Finally we can fine-tune the model
trainer.train()


LoRA adds low-rank matrices to fine-tune only a small portion of the model, reducing training overhead by training far fewer parameters.
We get efficient fine-tuning with minimal impact on accuracy, which makes LoRA suitable for large models where full fine-tuning is impractical.
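
To see the parameter savings concretely, peft models expose a helper that prints the trainable share (a quick check, assuming the code above has run; the exact counts depend on the model and config):

# Prints something like: trainable params: ... || all params: ... || trainable%: ...
model.print_trainable_parameters()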
Tonic 
posted an update 3 months ago