
Zoe Carter

electricTurtle82

AI & ML interests

None yet

Organizations

None yet

electricTurtle82's activity

reacted to freddyaboulton's post with 🤗 3 months ago
Version 0.0.21 of gradio-pdf now properly loads Chinese characters!
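For anyone who hasn't tried the component, a minimal usage sketch (assuming the package's documented PDF class; untested here):

```python
# pip install gradio "gradio_pdf>=0.0.21"
import gradio as gr
from gradio_pdf import PDF  # custom component shipped by the gradio-pdf package

with gr.Blocks() as demo:
    # Renders an uploaded PDF in the browser; 0.0.21 fixes CJK rendering.
    PDF(label="Upload a PDF")

demo.launch()
```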
reacted to aiqcamp's post with 🤗 3 months ago
🎨 FLUX VisionReply: Where Your Image Starts Its Next Chapter
Imagine This!
A photo you love. An AI that tells its story. Then transforms that story into a completely new artwork.
That's FLUX VisionReply - where magic meets machine! 🪄
✨ What Makes It Special

Stories Become Art: AI reads your images, crafts narratives, and transforms them into new artworks
Infinite Creative Chain: Each generated image can spark another creative journey
Smart Technology: Powered by Florence-2-Flux AI and ByteDance's cutting-edge image generation

💫 Perfect For

📸 Photographers seeking artistic reinterpretations
🎨 Designers and artists hunting for inspiration
💡 Creators exploring visual brainstorming
🎮 Game developers needing unique assets
✍️ Writers experimenting with visual storytelling

🎯 Creative Ways to Use It

Create Artwork Series

Build interconnected series from a single image
Watch your artistic universe expand with each generation


Explore & Inspire

Transform abstract ideas into concrete visuals
Let AI's interpretation offer new perspectives


Visual Storytelling

Begin new narratives from a single image
One photo becomes endless possibilities



⚡ Key Features

Advanced AI: Florence-2-Flux AI captures subtle nuances
Premium Generation: ByteDance technology for stunning quality
Intuitive Controls: Fine-tune with simple sliders
Flexible Parameters: Customize dimensions, steps, and guidance
Auto Gallery: Document your creative journey effortlessly

🚀 Getting Started

Upload your inspiration
Generate AI's interpretation
Customize the description
Create your new artwork
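Outside the Space, here is a rough sketch of the same caption-then-generate chain with public checkpoints. Note the assumptions: the Space's code wasn't inspected, so microsoft/Florence-2-base and black-forest-labs/FLUX.1-schnell below stand in for its Florence-2-Flux captioner and ByteDance-accelerated generator.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
from diffusers import FluxPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Steps 1-2: upload an image and let a vision-language model narrate it.
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True)
captioner = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True).to(device)

image = Image.open("inspiration.jpg")
task = "<MORE_DETAILED_CAPTION>"
inputs = processor(text=task, images=image, return_tensors="pt").to(device)
ids = captioner.generate(input_ids=inputs["input_ids"],
                         pixel_values=inputs["pixel_values"],
                         max_new_tokens=256, num_beams=3)
decoded = processor.batch_decode(ids, skip_special_tokens=False)[0]
caption = processor.post_process_generation(
    decoded, task=task, image_size=(image.width, image.height))[task]

# Step 3: edit `caption` by hand here if you want to steer the story.
# Step 4: generate the "next chapter" from the description.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to(device)
result = pipe(caption, num_inference_steps=4, guidance_scale=0.0)
result.images[0].save("next_chapter.png")
```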

What users say:

"Like having a conversation through images. Each generation reveals new perspectives I never considered."

https://huggingface.co/spaces/aiqcamp/FLUX-VisionReply

Ready to transform your images? Start your visual journey with FLUX VisionReply today!
#AIArt #CreativeTech #DigitalArt #ImageGeneration #ArtificialIntelligence
reacted to alielfilali01's post with 🤗 3 months ago
Unpopular opinion: Open Source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing what the heck you're doing"! It's kind of a superpower!

Cheers to the heroes here who see this!
reacted to m-ric's post with 🤗 3 months ago
š—£š—¼š˜š—²š—»š˜š—¶š—®š—¹ š—½š—®š—暝—®š—±š—¶š—“š—ŗ š˜€š—µš—¶š—³š˜ š—¶š—» š—Ÿš—Ÿš— š˜€: š—»š—²š˜„ š—½š—®š—½š—²š—æ š—Æš˜† š— š—²š˜š—® š—°š—¹š—®š—¶š—ŗš˜€ š˜š—µš—®š˜ š˜„š—² š—°š—®š—» š—“š—²š˜ š—暝—¶š—± š—¼š—³ š˜š—¼š—øš—²š—»š—¶š˜‡š—²š—暝˜€! šŸ„³

Current LLMs process text by first splitting it into tokens. They use a module called a "tokenizer" that -spl-it-s- th-e- te-xt- in-to- arbitrary tokens according to a fixed dictionary.
On the Hub you can find this dictionary in a model's files under tokenizer.json.

āž”ļø This process is called BPE tokenization. It is suboptimal, everyone says it. It breaks text into predefined chunks that often fail to capture the nuance of language. But it has been a necessary evil in language models since their inception.

💥 In Byte Latent Transformer (BLT), Meta researchers propose an elegant solution by eliminating tokenization entirely, working directly with raw bytes while maintaining efficiency through dynamic "patches."

This had been tried before with different byte-level tokenizations, but it's the first time that an architecture of this type scales as well as BPE tokenization. And it could mean a real paradigm shift! 👏👏

šŸ—ļø š—”š—暝—°š—µš—¶š˜š—²š—°š˜š˜‚š—暝—²:
Instead of a lightweight tokenizer, BLT has a lightweight encoder that process raw bytes into patches. Then the patches are processed by the main heavy-duty transformers as we do normally (but for patches of bytes instead of tokens), before converting back to bytes.

🧩 Dynamic Patching:
Instead of fixed tokens, BLT groups bytes based on their predictability (measured by entropy) - using more compute for complex sequences and efficiently handling simple ones. This allows efficient processing while maintaining byte-level understanding.
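A toy illustration of the idea (not BLT's actual method: the paper measures entropy with a small learned byte-level language model, while this sketch substitutes a sliding-window byte-histogram entropy):

```python
import math
from collections import Counter

def window_entropy(ctx: bytes) -> float:
    # Shannon entropy (bits) of the byte histogram in a small context window:
    # a crude stand-in for a learned model's next-byte entropy.
    n = len(ctx)
    return -sum(c / n * math.log2(c / n) for c in Counter(ctx).values())

def dynamic_patches(data: bytes, window: int = 8, threshold: float = 2.0):
    # Cut a new patch whenever local entropy is high, so unpredictable
    # regions get short patches (more compute per byte) and predictable
    # runs get long ones.
    patches, cur = [], bytearray()
    for i in range(len(data)):
        cur.append(data[i])
        if window_entropy(data[max(0, i - window + 1): i + 1]) > threshold:
            patches.append(bytes(cur))
            cur = bytearray()
    if cur:
        patches.append(bytes(cur))
    return patches

print(dynamic_patches(b"aaaaaaaaaaaaThe quick brown fox jumps!aaaaaaaaaaaa"))
# long patches over the predictable 'a' runs, short ones over the varied text
```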

I hope this breakthrough is confirmed so we can get rid of all the tokenizer stuff; it would make model handling easier!

Read their paper here 👉 https://dl.fbaipublicfiles.com/blt/BLT__Patches_Scale_Better_Than_Tokens.pdf
reacted to singhsidhukuldeep's post with 👀 3 months ago
Groundbreaking Research Alert: The 'H' in HNSW Stands for "Hubs", Not "Hierarchy"!

Fascinating new research reveals that the hierarchical structure in the popular HNSW (Hierarchical Navigable Small World) algorithm - widely used for vector similarity search - may be unnecessary for high-dimensional data.

🔬 Key Technical Findings:

• The hierarchical layers in HNSW can be completely removed for vectors with dimensionality > 32, with no performance loss

• Memory savings of up to 38% achieved by removing the hierarchy

• Performance remains identical in both median and tail latency cases across 13 benchmark datasets

šŸ› ļø Under The Hood:
The researchers discovered that "hub highways" naturally form in high-dimensional spaces. These hubs are well-connected nodes that are frequently traversed during searches, effectively replacing the need for explicit hierarchical layers.

The hub structure works because:
• A small subset of nodes appears disproportionately often in nearest-neighbor lists
• These hub nodes form highly connected subgraphs
• Queries naturally traverse these hubs early in the search process
• The hubs efficiently connect distant regions of the graph
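That skew is easy to observe without building an index at all. An illustrative experiment (not the paper's code): draw random high-dimensional vectors and count how often each point appears in the others' k-NN lists.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, k = 2000, 64, 10
X = rng.standard_normal((n, dim))

# Exact pairwise squared distances via the Gram-matrix identity.
sq = (X ** 2).sum(axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor

# Each point's k nearest neighbors, then its "in-degree": how many other
# points list it among their k-NN. Mean in-degree is exactly k, but the
# distribution is heavily skewed toward a few hub nodes.
knn = np.argpartition(d2, k, axis=1)[:, :k]
indegree = np.bincount(knn.ravel(), minlength=n)

print("mean in-degree:", indegree.mean())
print("max in-degree :", indegree.max())
print("k-NN slots held by the top 1% of nodes:",
      np.sort(indegree)[-n // 100:].sum() / indegree.sum())
```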

💡 Industry Impact:
This finding has major implications for vector databases and similarity search systems. Companies can significantly reduce memory usage while maintaining performance by implementing flat navigable small world graphs instead of hierarchical ones.

🚀 What's Next:
The researchers have released FlatNav, an open-source implementation of their flat navigable small world approach, enabling immediate practical applications of these findings.
reacted to etemiz's post with 😎 3 months ago
Pretraining is mostly what I do. Some ideas need to be emphasized by retraining.

Better curation is possible by emphasizing certain texts.
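One concrete way to emphasize certain texts during (re)training is to upweight them in the data sampler. A minimal PyTorch sketch (the 5x weight is an arbitrary illustrative choice):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy corpus: ten documents, three of which we want the model to see more of.
docs = torch.arange(10).float().unsqueeze(1)
emphasized = torch.tensor([0, 0, 1, 0, 1, 0, 0, 1, 0, 0], dtype=torch.float)

weights = 1.0 + 4.0 * emphasized  # emphasized docs are drawn 5x as often
sampler = WeightedRandomSampler(weights, num_samples=len(docs), replacement=True)
loader = DataLoader(TensorDataset(docs), batch_size=2, sampler=sampler)

for (batch,) in loader:
    print(batch.squeeze(1).tolist())  # indices 2, 4, 7 recur more often
```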
reacted to alimotahharynia's post with 🤯 3 months ago
Here's the Space for our new article, which leverages LLMs with reinforcement learning to design high-quality small molecules. Check it out at alimotahharynia/GPT-2-Drug-Generator. You can also access the article here: https://arxiv.org/abs/2411.14157.
I would be happy to receive your feedback.
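For readers who would rather script it than use the Space UI, a hypothetical sketch with transformers (the checkpoint ID and prompt format below are placeholders that haven't been verified against the Space's files):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder ID: substitute the authors' released checkpoint.
model_id = "alimotahharynia/GPT-2-Drug-Generator"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder prompt: the real conditioning format (e.g. a target protein
# sequence) is defined by the authors' training setup.
inputs = tok("<|startoftext|>", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, top_k=50, max_new_tokens=128,
                     num_return_sequences=3, pad_token_id=tok.eos_token_id)
for smiles in tok.batch_decode(out, skip_special_tokens=True):
    print(smiles)  # candidate SMILES to sanity-check, e.g. with RDKit
```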