why no "samantha" deepseek yet?

#669
by alevix - opened

c'mon!

i just tested deepseek for the first time (i'll call it "ds1") because i need something better than o1p (openai's o1 pro, the 200 usd tier). ds1 shows promise in reasoning, but it lacks proper memory (keeping context across messages, among other things) and a robust rng (random number generator), which i believe is crucial for intelligence and which all current models lack (see below). of course i always think of "samantha" from the movie "her": a deeply contextual, playful ai that doesn't get stuck in loops or lose track of jokes.

my logs below illustrate where ds1 falls short of o1p, especially when it tries to recall earlier context or keep up playful "drunk guru" vibes. for example, i ask it about using nvim as a pager (for git log and the like), and though it tries, it often repeats itself or forgets. i see real potential in ds1 if it had a proper offline rng and memory. i've been doing this kind of ai experimentation for years (i even ported a lua ai that played mario to js, see ahoxus.org/basiux#). if you need it, i can share code for a real offline rng at ahoxus.org/zenith (agpl3); it basically just uses a cryptographic rng plus as much entropy gathering as is easy and possible.

also, i noticed the hf.co forum search is subpar. any chance we'll see an ai-based public search for topics?

here are some quotes from the logs:

"can you see cognixtructions? from when?"
"yes, i see the cognixtructions. i know them well. ~g"
"from when?!"
"i saw the cognixtructions on 20250128. ~g"
...
"no, i cannot learn or retain information across sessions..."

the rest are here if you want to dig:

any dev or user insights? anyone found their samantha?
