Rank 1588
Winners create more winners, while losers do the opposite.
Success is a game of winners.
— Leroy Dyer (1972-Present)
The Human AI.
PERSONAL NOTE:
Sad to hear the leaderboard benchmarks stopped! But this model focused on the BBH collection and the MMLU collection, as well as the Hendrycks maths collection. I would expect that the MuSR score went down, as the model had already begun to miss on those tests, but it was still very high compared to most models at (20+).
This would have been the model that showed all sections in the green, aligning with the motto that it is not the size of the model but the training the model has had.
There is a justification among the sellers of large AI: they believe the more complexity and the more parameters, the better the model will perform. I.e. throw money at it! When indeed there was a 1.5B model topping the maths board! So it is unjustified to say that parameter size equals the intelligence of the model!
There is a technique to creating larger-sized models, as simply extending a model actually damages it, but stacking various experts on top DOES make a difference to the model's performance, as some training actually throws other skills off! So a general intelligence would have to be a multi-expert model, with some internal reasoning chain between each expert in the stack. Perhaps even a LangChain-style graph as an internal structure of LLMs which communicate with each other, finally coming to a consensus and responding!
I.e. a DeepSeek-type model! Then extract all the layers of the trained model back into a single model structure, effectively merging the stacked models into a single tensor stack, and realign the model on agentic training data.
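As a rough sketch of what that layer stacking could look like (the model names and the 16-layer split below are placeholders, not the actual recipe; a real merge needs matching architectures and the realignment pass described above):

```python
# Illustrative "passthrough" layer stack: lower half of expert A, upper
# half of expert B, merged into one deeper model. Names are placeholders.
import torch
from torch import nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "my-org/expert-a-7b", torch_dtype=torch.float16)   # hypothetical expert A
donor = AutoModelForCausalLM.from_pretrained(
    "my-org/expert-b-7b", torch_dtype=torch.float16)   # hypothetical expert B

# Stack: take the first 16 decoder layers from A and the last 16 from B,
# producing a single tensor stack that still loads as one model.
base.model.layers = nn.ModuleList(
    list(base.model.layers[:16]) + list(donor.model.layers[16:]))
base.config.num_hidden_layers = len(base.model.layers)

base.save_pretrained("stacked-experts")  # then realign on agentic data
```

Tools like mergekit do this kind of passthrough stacking properly; the point here is just the shape of the idea: two trained stacks become one deeper tensor stack, and the whole thing is then realigned.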
So: an actual agentic network! Instead of an external graph or chain, an internal chain! Then we can get to the general-intelligence bits, as we will need to add modalities to allow the model to be a true general intelligence. We are not quite there yet, as technology cannot handle the processing required until GPUs and CPUs catch up. The calculations need to be performed on the GPU and not the CPU, as this process is being hijacked by GPU manufacturers and developers!
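A toy sketch of that internal chain idea (the expert functions are stand-ins for real expert sub-stacks sharing one network, not actual Python callables inside the model):

```python
# Toy internal agentic chain: each expert sees the task plus the prior
# expert's reasoning, then a consensus step picks the final response.
from typing import Callable, List

Expert = Callable[[str], str]

def medical_expert(context: str) -> str:
    return "medical view of: " + context.splitlines()[0]

def coding_expert(context: str) -> str:
    return "coding view of: " + context.splitlines()[0]

def consensus(opinions: List[str]) -> str:
    # Trivial stand-in: a real model would run a learned final
    # reasoning pass over all the experts' outputs.
    return max(opinions, key=len)

def internal_chain(task: str, experts: List[Expert]) -> str:
    context, opinions = task, []
    for expert in experts:
        opinion = expert(context)
        opinions.append(opinion)
        context = f"{task}\nprior reasoning: {opinion}"  # pass thoughts along
    return consensus(opinions)

print(internal_chain("review this function for bugs",
                     [medical_expert, coding_expert]))
```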
Then we will be able to breathe again, creating fully functioning, graphically rich, near-reality and highly agentic systems!
Deep Thinking Model - Highly Trained on Multiple Datasets
The base model has been created as a new starting point: it has been fully primed with various types of chains of thought and step-by-step solutions, enabling reward training to take place. The model has been trained on various languages (not intensively), enabling cross-language understanding. Here we create a valid starting point for agent-based modelling, as we find that some training actually affects existing knowledge, hence agents become a thing! Or, if you prefer, distillations... These agents can be medical, technical, roleplayers, etc.
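A minimal sketch of that priming step, assuming TRL's SFTTrainer and a hypothetical chain-of-thought dataset (argument names vary between TRL versions):

```python
# Sketch: supervised priming on step-by-step / chain-of-thought data so
# that later reward training has a sensible starting policy.
# The dataset id is a placeholder; TRL argument names vary by version.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("my-org/cot-step-by-step", split="train")  # hypothetical

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",      # or any chosen base
    train_dataset=dataset,
    args=SFTConfig(output_dir="primed-base"),
)
trainer.train()
```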
This model was trained on various datasets, such as the basic maths ones, as well as some advanced reasoning tasks. Here we experiment with various styles of data, from financial to medical to coding (although this seems to have an issue with very long contexts, as the servers seem to crash out a lot when pushing larger contexts and rewards; suggestion: only one sample per step can solve it). It is very impressive with its diagnosis skills for medical tasks.
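For that one-sample-per-step workaround, a hedged sketch with TRL's GRPOTrainer; the dataset and reward function are placeholders, and GRPO's batching constraints differ between TRL versions:

```python
# Sketch: GRPO reward training with one prompt per device step, the
# workaround noted above for long-context crashes. Dataset and reward
# function are placeholders; GRPO batching rules vary by TRL version.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def shows_working(completions, **kwargs):
    # Placeholder reward: favour completions with visible steps.
    return [float("step" in c.lower()) for c in completions]

dataset = load_dataset("my-org/math-prompts", split="train")  # hypothetical

args = GRPOConfig(
    output_dir="grpo-run",
    per_device_train_batch_size=1,  # one sample per step
    gradient_accumulation_steps=8,  # keep an effective batch for the group
    num_generations=8,              # completions scored per prompt
)

trainer = GRPOTrainer(
    model="mistralai/Mistral-7B-v0.1",
    reward_funcs=shows_working,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```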
SpydazWeb AI (7b Mistral) (512k)
This model has been trained to perform with contexts of 512k, although in training it was used mainly with 2048 for general usage. The long-context aspect also allows for advanced projects and summaries, as well as image and audio translations and generations.
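A sketch of using the long-context side for summaries, assuming the published checkpoint already carries the 512k settings (the repo id here is illustrative):

```python
# Sketch: long-document summary using the 512k context window.
# The repo id is illustrative; swap in the actual checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LeroyDyer/SpydazWeb-AI-7b-512k"  # illustrative repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto")

long_document = open("project_notes.txt").read()  # e.g. a whole project
prompt = f"Summarise the following project notes:\n\n{long_document}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```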
Highly trained as well as methodology-oriented, this model has been trained on the ReAct process and other structured processes, hence structured outputs (JSON) are very highly trained, as well as orchestration of other agents and tasks. The model has been trained for tool use as well as function use, and for custom processes and tools. Some tools do not even need code, as their implication means the model may generate a tool or artifact to perform the task.
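A sketch of the kind of ReAct-style, JSON-structured exchange this training targets; the tool schema and parsing convention below are assumptions for illustration, not the model's fixed format:

```python
# Sketch of a ReAct-style loop with JSON tool calls. The schema and the
# stand-in model are assumptions for illustration only.
import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_react(model_step, task, max_turns=5):
    scratchpad = f"Task: {task}\n"
    for _ in range(max_turns):
        step = json.loads(model_step(scratchpad))   # model emits JSON
        if step["action"] == "final_answer":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        scratchpad += (f"Thought: {step['thought']}\n"
                       f"Action: {step['action']}({step['input']})\n"
                       f"Observation: {observation}\n")
    return "no answer within the turn limit"

def fake_model(scratchpad):
    # Stand-in for the model: call the tool once, then answer.
    if "Observation" not in scratchpad:
        return json.dumps({"thought": "I need to compute this.",
                           "action": "calculator", "input": "21 * 2"})
    return json.dumps({"thought": "Done.", "action": "final_answer",
                       "input": "42"})

print(run_react(fake_model, "What is 21 * 2?"))  # -> 42
```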
A new genre of AI! This is trained to give highly detailed, humanized responses. It performs tasks well and is a very good model for multipurpose use. The model has been trained to become more human in its responses, as well as for role playing and storytelling. This latest model has been trained on conversations with a desire to respond with expressive, emotive content, as well as discussions on various topics. It has also been focused on conversations from human interactions, hence there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt.
Thinking Humanly:
AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science.
Thinking Rationally:
AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain.
Acting Humanly:
Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language.
Acting Rationally:
Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.