AI & ML interests

None defined yet.

Nymbo posted an update about 2 months ago
Haven't seen this posted anywhere - Llama-3.3-8B-Instruct is available on the new Llama API. Is this a new model or did someone mislabel Llama-3.1-8B?
Nymbo posted an update about 2 months ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some of the long-standing inconsistencies that have existed since the transition to Gradio v5. Textboxes are no longer bright green, and in-line code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
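
For reference, a Space usually opts into one of these themes just by passing the theme name when building the UI; here is a minimal sketch (the components below are illustrative, only the theme name comes from this post):

import gradio as gr

# load the Hub-hosted theme by name when building the interface
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Textbox(label="Prompt")
    gr.Code(value="print('inline code')", language="python")

demo.launch()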
not-lain posted an update 5 months ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
not-lain posted an update 8 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending the encoded string to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local image (also accepts a URL, a PIL image, or a numpy array)
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
Blane187 posted an update 11 months ago
Hello everyone! Today I have been working on a project, Blane187/rvc-demo, a demo of RVC using pip. This project is still just a demo, though (I don't have a beta tester lol).
not-lain posted an update 12 months ago
I am now a Hugging Face fellow 🥳
not-lain posted an update 12 months ago
I have finished writing a blog post about building an image-based retrieval system. This is one of the first-ever approaches to building such a pipeline using only open-source models/libraries 🤗

You can check out the blog post at https://huggingface.co/blog/not-lain/image-retriever and the associated space at not-lain/image-retriever.

✨ If you want to request another blog post, consider letting me know down below, or you can reach out to me through any of my social media.

📖 Happy reading!
not-lain posted an update about 1 year ago
It is with great pleasure that I inform you that Hugging Face's ModelHubMixin has reached 200+ models on the Hub 🥳

ModelHubMixin is a class developed by HF to integrate AI models with the Hub with ease, and it comes with 3 methods:
* save_pretrained
* from_pretrained
* push_to_hub
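
A minimal sketch of what those three methods look like in practice (the toy model, dimensions, and local path below are my own illustrative assumptions, not taken from the post):

from huggingface_hub import PyTorchModelHubMixin
from torch import nn

# a toy model picks up all three methods just by inheriting from the mixin
class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, dim=4):
        super().__init__()
        self.layer = nn.Linear(dim, 1)

    def forward(self, x):
        return self.layer(x)

model = TinyModel(dim=4)
model.save_pretrained("tiny-model")                 # save weights + config to a local folder
reloaded = TinyModel.from_pretrained("tiny-model")  # reload from that folder (or a Hub repo id)
# model.push_to_hub("username/tiny-model")          # push to the Hub (requires a write token)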

Shoutout to @nielsr , @Wauplin and everyone else on HF for their awesome work 🤗

If you are not familiar with ModelHubMixin and you are looking for extra resources, you might consider:
* docs: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins
🔗 blog about training models with the trainer API and using ModelHubMixin: https://huggingface.co/blog/not-lain/trainer-api-and-mixin-classes
🔗 GitHub repo with pip integration: https://github.com/not-lain/PyTorchModelHubMixin-template
🔗 basic guide: https://huggingface.co/posts/not-lain/884273241241808
not-lain posted an update about 1 year ago
If you're a researcher or developing your own model 👀 you might want to take a look at Hugging Face's ModelHubMixin classes.
They are used to seamlessly integrate your AI model with the Hugging Face Hub and to save/load your model easily 🚀

1️⃣ make sure you're using the appropriate library version
pip install -qU "huggingface_hub>=0.22"

2️⃣ inherit from the appropriate class
from huggingface_hub import PyTorchModelHubMixin
from torch import nn

class MyModel(nn.Module, PyTorchModelHubMixin):
  def __init__(self, a, b):
    super().__init__()
    self.layer = nn.Linear(a, b)
  def forward(self, inputs):
    return self.layer(inputs)

3️⃣ instantiate the model
first_model = MyModel(3, 1)

4️⃣ push the model to the Hub (or use the save_pretrained method to save locally)
first_model.push_to_hub("not-lain/test")

5️⃣ load and initialize the model from the Hub using the original class
pretrained_model = MyModel.from_pretrained("not-lain/test")

not-lain posted an update about 1 year ago
I'm looking for open-source image embedding models for RAG applications and/or multimodal embedding models, if they exist in the first place.

If you have any extra resources about using, creating, or fine-tuning them, feel free to share them below 🤗
not-lain posted an update about 1 year ago
🚀 Just reached 3K+ readers on this blog post about RAG using only HF 🤗 related tools, in just a little over 1 week from publishing.

📃 The most interesting thing about it is that you can use the FAISS index support in the datasets library to retrieve your most similar documents.

🔗 https://huggingface.co/blog/not-lain/rag-chatbot-using-llama3

Happy reading everyone ✨
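
For anyone curious how that looks in code, here is a minimal sketch of the datasets + FAISS retrieval idea (the embedding model and the toy documents are my own illustrative assumptions, not taken from the blog post):

# pip install datasets faiss-cpu sentence-transformers
from datasets import Dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# toy corpus: embed each document and store the vectors in a column
docs = ["How to fine-tune a model", "Deploying a Gradio space", "Intro to RAG"]
ds = Dataset.from_dict({"text": docs})
ds = ds.map(lambda row: {"embeddings": model.encode(row["text"])})

# build a FAISS index over the embeddings column
ds.add_faiss_index(column="embeddings")

# retrieve the documents closest to a query embedding
query = model.encode("retrieval-augmented generation")
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=2)
print(retrieved["text"])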