Tony Assi

tonyassi

AI & ML interests

computer vision, generative models, fashion, photography, marketing, e-commerce

Recent Activity

updated a Space 3 days ago
tonyassi/MGM-Film-Diffusion
published a dataset 3 days ago
tonyassi/foot2

Organizations

Stable Diffusion concepts library, Blog-explorers, ZeroGPU Explorers, Social Post Explorers, Hugging Face Discord Community

tonyassi's activity

reacted to clem's post with πŸ”₯ 25 days ago
AI is not a zero-sum game. Open-source AI is the tide that lifts all boats!
reacted to abidlabs's post with ❀️ 5 months ago
πŸ‘‹ Hi Gradio community,

I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.

Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:

---------- Installation -------------

Gradio 5 requires Python 3.10 or higher, so if you are running Gradio locally, please ensure that you have Python 3.10+, or download it here: https://www.python.org/downloads/

* Locally: If you are running gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing gradio Space to use Gradio 5, you can simply update the sdk_version to be 5.0.0b3 in the README.md file on Spaces.
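On Spaces, that sdk_version change lives in the YAML front matter at the top of the Space's README.md. A minimal sketch of what it might look like (the title and app_file values are placeholders; only the sdk_version comes from the post):

```yaml
---
title: My Demo          # placeholder
sdk: gradio
sdk_version: 5.0.0b3    # Gradio 5 beta version from the instructions above
app_file: app.py        # placeholder
---
```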

In most cases, that’s all you have to do to run Gradio 5.0. If you start your Gradio application, you should see your Gradio app running, with a fresh new UI.

-----------------------------

For more information, please see: https://github.com/gradio-app/gradio/issues/9463
reacted to Norod78's post with ❀️🀯 10 months ago
I've prepared a Google Colab notebook which allows you to play with interpolating between different people using IP-Adapter SDXL Face-ID Plus.

# Prepare a tensor of num_of_results values evenly spaced between 0 and 1
t_space = torch.linspace(0, 1, num_of_results)
for t in tqdm(t_space):
    mix_factor = t.item()
    # Interpolate between the two face images
    image = (image1 * (1 - mix_factor) + image2 * mix_factor).astype(np.uint8)
    # Interpolate between the two face embeddings
    faceid_embeds = torch.lerp(faceid_embeds1, faceid_embeds2, t)
    # Generate the interpolated result
    images = ip_model.generate(prompt=prompt, negative_prompt=negative_prompt, face_image=image, faceid_embeds=faceid_embeds, shortcut=v2, num_samples=2, scale=scale, s_scale=s_scale, guidance_scale=guidance_scale, width=width, height=height, num_inference_steps=steps, seed=seed)
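The image-mixing step in the snippet above is plain linear interpolation. A minimal self-contained sketch of just that step, with tiny stand-in arrays in place of real face images:

```python
import numpy as np

# Blend two "images" (tiny stand-in arrays) at several mix factors,
# mirroring: image1 * (1 - mix_factor) + image2 * mix_factor
image1 = np.zeros((2, 2), dtype=np.float32)        # all-black stand-in image
image2 = np.full((2, 2), 255.0, dtype=np.float32)  # all-white stand-in image

blends = []
for mix_factor in np.linspace(0, 1, 3):  # mix factors 0.0, 0.5, 1.0
    blended = (image1 * (1 - mix_factor) + image2 * mix_factor).astype(np.uint8)
    blends.append(int(blended[0, 0]))

print(blends)  # pixel value at each mix factor
```

At mix factor 0.5 the float result 127.5 truncates to 127 when cast to uint8, which is why interpolated frames are usually computed in float and converted only at the end.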


Link to notebook:
Norod78/face_id_v2_test_code

Link to Face-ID Repo:
h94/IP-Adapter-FaceID

Link to all sorts of generated examples (Use the file tab):
Norod78/face_id_v2_test_code

reacted to DmitryRyumin's post with πŸ‘ 11 months ago
πŸš€πŸ’ƒπŸŒŸ New Research Alert (Avatars Collection)! πŸŒŸπŸ•ΊπŸš€
πŸ“„ Title: InstructHumans: Editing Animated 3D Human Textures with Instructions

πŸ“ Description: InstructHumans is a novel framework for text-instructed editing of 3D human textures that employs a modified Score Distillation Sampling (SDS-E) method along with spatial smoothness regularization and gradient-based viewpoint sampling to achieve high-quality, consistent, and instruction-true edits.

πŸ‘₯ Authors: Jiayin Zhu, Linlin Yang, Angela Yao

πŸ”— Paper: InstructHumans: Editing Animated 3D Human Textures with Instructions (2404.04037)

🌐 Web Page: https://jyzhu.top/instruct-humans
πŸ“ Repository: https://github.com/viridityzhu/InstructHumans

πŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection curated by @DmitryRyumin

πŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

πŸ” Keywords: #InstructHumans #3DTextureEditing #TextInstructions #ScoreDistillationSampling #SDS-E #SpatialSmoothnessRegularization #3DEditing #AvatarEditing #DeepLearning #Innovation
reacted to wanghaofan's post with πŸ”₯ 11 months ago
replied to Wauplin's post 11 months ago

huggingface_hub Python library is really amazing, the more I learn about it the more impressed I am. Awesome job πŸ€—

reacted to Wauplin's post with ❀️ 11 months ago
πŸš€ Just released version 0.21.0 of the huggingface_hub Python library!

Exciting updates include:
πŸ–‡οΈ Dataclasses everywhere for improved developer experience!
πŸ’Ύ HfFileSystem optimizations!
🧩 PyTorchModelHubMixin now supports configs and safetensors!
✨ audio-to-audio supported in the InferenceClient!
πŸ“š Translated docs in Simplified Chinese and French!
πŸ’” Breaking changes: simplified API for listing models and datasets!

Check out the full release notes for more details: Wauplin/huggingface_hub#4 πŸ€–πŸ’»
posted an update about 1 year ago
MANIFESTO
After working in fashion e-commerce for years, I've come to the conclusion that in e-commerce we do not sell clothes... we sell images of clothes. Compressed, digital versions of physical products. As Roland Barthes pointed out in The Fashion System, a product image is a symbol or metaphor of a product. Media--in this case images--mediates the space between customer and product; viewer and object. Images can be altered, changed, corrupted, photoshopped, edited, deleted, or imagined.

E-commerce products (or e-commerce photos) can be thought of as a possibility space of digital pixels. AI/ML can analyze, manipulate, and create within this "possibility space of pixels"--thus there are opportunities to intervene in the physical fashion world through the imagination of artificial intelligence. Not to replace human creativity--but to augment it. To make it ART-ificial. Art is an artificial representation of reality. AI images are an artificial representation of reality.

The sewing machine greatly increased the efficiency of clothing production. Similarly, AI has greatly increased the efficiency of image production--in our case, product photo production. The fashion design paradigm of the past century (design->produce->photograph) has been flipped on its head. Instead of going from physical clothing to digital image via photography--we can go from digital image to physical clothing via stable diffusion. We are writing the chapter of Understanding Media that Marshall McLuhan never imagined. Virtual production hasn't replaced physical production; it has simply made it out of style.
reacted to fffiloni's post with ❀️ about 1 year ago
Just published a quick community blog post mainly aimed at Art and Design students, but which is also an attempt to nudge AI researchers who would like to better consider benefits from collaboration with designers and artists πŸ˜‰
Feel free to share your thoughts !

"Breaking Barriers: The Critical Role of Art and Design in Advancing AI Capabilities" πŸ“„ https://huggingface.co/blog/fffiloni/the-critical-role-of-art-and-design-in-advancing-a

β€”
This short publication follows the results of two AI Workshops that took place at Γ‰cole des Arts DΓ©coratifs - Paris, led by Etienne Mineur, Vadim Bernard, Martin de Bie, Antoine Pintout & Sylvain Filoni.