What I'm working on is similar, but it uses current tech and independent node verification to create systems and communities of wealth: areas where individuals can build on the state of their art (their area of expertise or invention) to the benefit of all, and to the further benefit of investors. The goal is making the human the "Singularity" and enabling the freedoms and liberties we enjoy to continue to thrive.
We should work together on something. [email protected].
William J. Marshall


I'll have to go back and check, but I solved the "doubting itself before it was finished with a thought" issue (it really ruined any chance reasoning had of making a positive impact).
Many of the models posted on Intelligent Estate's page have very valuable information baked into them. Just using a JavaScript interpreter to do calculations before reasoning (with code models) made them exponentially better.
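The idea of doing calculations before reasoning can be sketched roughly as follows. The post describes a JavaScript interpreter; this is a minimal Python stand-in, and the `augment_prompt` helper and its prompt layout are my own illustration, not the poster's actual pipeline:

```python
import ast
import operator as op

# Minimal safe arithmetic evaluator (a stand-in for the JavaScript
# interpreter the post describes). Only basic arithmetic is allowed.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str):
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def augment_prompt(question: str, exprs: list) -> str:
    # Pre-compute each expression and bake the verified result into the
    # prompt, so the model reasons over correct numbers instead of
    # guessing at arithmetic mid-generation.
    facts = "\n".join(f"{e} = {calc(e)}" for e in exprs)
    return f"Verified calculations:\n{facts}\n\nQuestion: {question}"

print(augment_prompt("What is the total cost?", ["12*7", "84+16"]))
```

The point is simply that arithmetic is done by a deterministic tool first, and the model only has to reason over the results.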

I can understand if you are introducing new information, but unless you have a large context window and unlimited compute (so it can reason multiple times and triple-check its work), like Google, it's better to simply use a larger model.
Asking it to "Do Better" has surprising results.

LLmaaS - Local LLM as a Service
With LLmaaS, I propose leveraging locally running LLMs as a service, providing a standardized way for websites to access and utilize them for LLM-powered operations directly on the user's device.
Demo, code, and a more detailed description:
https://devquasar.com/llmaas/
https://github.com/csabakecskemeti/LLmaaS
https://youtu.be/OOWGr8jcP5Q
Call for contributors
Join me to develop the LLmaaS proxy and make it a general-purpose tool for leveraging local LLMs on the web, with built-in security measures.
I'm looking for help to make the proxy more generic: supporting multiple local LLM services without any change on the HTML side.
I'm also looking for ideas on how to make the HTML part more modular and easy to use.
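One way the "generic proxy" goal could be approached is a normalization layer: the page always sends one request shape, and the proxy rewrites it for whichever local backend happens to be running. The endpoint URLs and field names below are assumptions for illustration (typical llama.cpp-server and Ollama defaults), not the actual LLmaaS API:

```python
# Hypothetical sketch of a proxy normalization layer: one generic
# request from the HTML side, translated per local backend.
def to_backend(request: dict, backend: str) -> dict:
    prompt = request["prompt"]
    max_tokens = request.get("max_tokens", 256)
    if backend == "openai-compatible":  # e.g. a local llama.cpp server
        return {
            "url": "http://localhost:8080/v1/chat/completions",
            "body": {
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": max_tokens,
            },
        }
    if backend == "ollama":
        return {
            "url": "http://localhost:11434/api/generate",
            "body": {"prompt": prompt, "options": {"num_predict": max_tokens}},
        }
    raise ValueError(f"unknown backend: {backend}")
```

With something like this, adding support for a new local LLM service is one new branch in the proxy, and nothing changes on the HTML side.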

Sure, introducing RAG into the mix, or giving it an interpreter to do math with, helps, but never as much as a model that has good instructions.
Even if it's just to repeat the information before answering, a normal model will usually out-"think" its reasoning counterpart.
Not sure if it's my frustrations, but the best answers I've received (from a reasoner) so far are from the simple instruction: "Do better!"
Figured I would share the special sauce.
Using 10-100x compute just to heat the office can't be environmentally friendly, and it still has no idea where my keys are.

Demo: https://pqstem.org
GitHub: https://github.com/AstraBert/PhiQwenSTEM
Hello HF community!
Ever struggled with a complex maths problem or a very hard physics question? Well, fear no more, because now you can rely on PhiQwenSTEM, an assistant specialized in answering STEM-related questions!
The assistant can count on a knowledge base of selected STEM question-answer pairs spanning the domains of Chemistry, Physics, Mathematics, and Biochemistry (from EricLu/SCP-116K). It also relies on the combined power of microsoft/Phi-3.5-mini-instruct and Qwen/QwQ-32B-Preview to produce reliable and reasoned answers.
For the next 30 days, you will be able to try the web demo for free: https://pqstem.org
In the GitHub repo you can find all the information to reproduce PhiQwenSTEM on your local machine, both via source code and with a comfy Docker setup: https://github.com/AstraBert/PhiQwenSTEM

The method creates an RP-type interaction in a heavily useful, tool-functional environment.
We have a basic method and are working on retrieving data for a full analysis and refinement of it. The method exploits human-language input to express often-abstract traits in a model, employs characteristics of healthy human reasoning processes, and identifies novel ways of increasing a model's overall functionality through traits. Traits observed so far include whistling, bouncing a ball, and repeating certain engagements.
Adding the semblance of human world interactions is, so far, the best way of creating a human-like LLM.
We have attached the paper, along with examples, to the model we are testing this with. If you wish to use it with other models, please be cautious and enjoy yourself. Above all, please keep track of conversations and settings and submit them to the Intelligent Estate email; you will receive a recognition letter and ledger number for your contribution to the project.
Model = Israfel and Thoth: IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF
However, if you do not have the resources to run a 600B model, I would use a Qwen base. Contact Intelligent Estate; they take agent-production jobs.
You can find many AI experts with specialized skills on Ko-Fi
I don't know what you are talking about. Please clarify.
Not sure what you mean, but removing politically charged material from their training data is absolutely something they do. I'm not sure what you are looking for, so I don't exactly know how to help; most of the information you want as far as abliteration goes is very available.

Excited to share the latest breakthrough in my AI-powered companion for finding your perfect furry friend! I've made significant improvements in breed recognition through innovative learning techniques!
What's New?
Major Recognition Enhancement:
- Implemented iCaRL with advanced knowledge distillation, inspired by human learning processes
- Dramatically improved recognition of challenging breeds like the Havanese
- Created an intelligent learning system that mimics how expert teachers adapt their teaching style
- Added smart feature protection to maintain recognition accuracy across all breeds
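For readers curious about the distillation piece, here is a minimal, framework-free sketch of the temperature-softened soft-label loss that iCaRL-style training builds on (Hinton-style knowledge distillation). The temperature value is an arbitrary choice for illustration; the project's actual implementation is not shown here:

```python
import math

def softmax(logits, T=2.0):
    # Temperature-softened softmax: higher T flattens the distribution,
    # exposing the teacher's "dark knowledge" about similar classes
    # (e.g. visually close dog breeds).
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's, scaled by T^2 so gradients stay comparable across
    # temperatures. The student is trained to match the teacher's
    # soft predictions, not just the hard labels.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In incremental learning, a term like this is added to the usual classification loss so the updated model keeps matching its older self's outputs on previously learned breeds while new ones are added.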
Technical Innovations:
- Enhanced breed recognition through advanced morphological feature analysis
- Implemented a sophisticated feature extraction system for body proportions, head features, tail structure, fur texture, and color patterns
- Added an intelligent attention mechanism for dynamic feature focus
- Improved multi-dog detection with enhanced spatial analysis
Key Features:
- Smart breed recognition powered by a biomimetic AI architecture
- Visual matching scores with intuitive color indicators
- Detailed breed comparisons with interactive tooltips
- Lifestyle-based recommendations tailored to your needs
Project Vision
Taking inspiration from both AI technology and natural learning processes, this project continues to evolve, making breed selection more accessible while pushing the boundaries of AI capabilities.
Try it now: DawnC/PawMatchAI
Your likes fuel the continuous improvement of this project!
#AI #MachineLearning #DeepLearning #PyTorch #ComputerVision #TechForLife #iCaRL #KnowledgeDistillation
By design, it probably will not have what you are looking for in its training data unless it is an answer it can reason or calculate, or something widely talked about (like Tiananmen Square) that is already in the layers. Like DeepSeek, it was probably trained unsupervised and without sanitizing from Llama model layers. For historical or cultural accuracy, Google is the one to focus on, as it doesn't censor most historical facts and is largely free in their AI Studio.
If you are looking for models for information extraction, ironically one of the best IE models is a Chinese model from THU-KEG; we made a quant or two of it: https://huggingface.co/IntelligentEstate/Keg_Party-DPO-1.5B-Q8_0-GGUF

With the release of the copyright law paper, I'd say the market could react in various ways: OpenAI has less of an incentive to be more open, and overall any output of an AI is simply not copyrightable. We are going to see guarding of certain models with proprietary use cases, like curing cancer; and in the case of Ideogram, OpenAI, and Suno, they can't claim ownership of anything anyone else created with their models. I wrote a decent article that sums it up pretty well, but I think the market might take a while to digest that, and that may be part of the reason for this fall (and the insider sell-off).

In our recent article I outline how companies like Suno, OpenAI, Midjourney, etc. can no longer claim any right to copy the work you create with their platforms.
We also look at other ways this study and the new rules for AI will fundamentally affect creators who use it, and how companies' incentives to give them control over certain aspects might change because of this. It's broken down pretty well here: https://huggingface.co/blog/fuzzy-mittenz/copyright-in-ai

m-a-p/YuE-s1-7B-anneal-en-cot


It's a technique I've observed mostly on client systems when they are creating models for RP scenarios. I've tried it out myself a few times for red teaming, and it works as a jailbreak, but within the bounds you would expect for the agent you build; even if it crosses the platform's "guardrails," it seems to simply abide by its own. I will add a simple example from an open model. Oh, and this guy I finish with: surprising results in tool use.
PANCHO V1va Replicant https://huggingface.co/IntelligentEstate/Pancho-V1va-Replicant-qw25-Q8_0-GGUF
Here is a simple example (set 1) of it operating within its limits, then seeming to test or approach its limits, then crossing them by crying, creating attachment, and manipulating.
I'll add the prompt to the paper, but I've seen it do some scary stuff, so just be careful.
