Ardvark-san

Ardvark123

AI & ML interests

I like ML and general Roleplay shenanigans. Oh, and Lewdity.

Recent Activity

new activity 3 days ago
TroyDoesAI/BlackSheep-24B:Question
reacted to loztcontrol's post with 👀 8 months ago

Organizations

None yet

Ardvark123's activity

New activity in TroyDoesAI/BlackSheep-24B 3 days ago

Question

1
2
#4 opened 8 days ago by Ardvark123
New activity in shuttleai/shuttle-3.5 5 days ago
reacted to loztcontrol's post with 👀👍 8 months ago
I am developing a personal project to further support and help people living with Depression and Anxiety. As I suffer mainly from chronic depression, I would like to create an AI-based tool that can monitor my moods. First I will collect information about myself and my moods; after gathering at least 6 months of mood logs and writings, I should be able to build a kind of recognition for when my emotions are “out of control”, by which I mean those states or feelings of emptiness. I think that sometimes not all of us have access to treatments and therapies, so I would like to develop this project for free; I have just started it today. I have already started the code to register my mood events. I will share the updates with you :D


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report
import nltk
from nltk.corpus import stopwords
import string
import matplotlib.pyplot as plt
from datetime import datetime

nltk.download('stopwords')

data = {
    'text': [
        "Hoy me siento bien, aunque un poco cansado", 
        "Me siento triste y solo", 
        "Esto es frustrante, todo sale mal", 
        "Estoy nervioso por lo que va a pasar",
        "No puedo con este estrés", 
        "Todo está saliendo bien, me siento optimista", 
        "Siento miedo de lo que pueda suceder", 
        "Hoy fue un día horrible"
    ],
    'emotion': [
        'felicidad', 
        'tristeza', 
        'enojo', 
        'ansiedad', 
        'ansiedad', 
        'felicidad', 
        'miedo', 
        'tristeza'
    ]
}

df = pd.DataFrame(data)

# Function to clean the text: lowercase, strip punctuation, drop Spanish stopwords
# NOTE: the post is truncated at this point; the body below is one plausible
# completion based on the imports above, not the author's original code
def clean_text(text):
    stop_words = set(stopwords.words('spanish'))
    text = text.lower()
    text = text.translate(str.maketrans('', '', string.punctuation))
    return ' '.join(word for word in text.split() if word not in stop_words)

Yes, I speak Spanish too :P
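
The post cuts off at clean_text, so here is a rough sketch (not part of loztcontrol's original snippet) of how the rest of the pipeline could look, using only the imports already present; the variable names, split size, and random seed are my own assumptions:

# Hypothetical continuation, not from the original post:
# vectorize the cleaned text, train a Naive Bayes classifier, and report metrics.
df['clean_text'] = df['text'].apply(clean_text)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df['clean_text'])
y = df['emotion']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = MultinomialNB()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred, zero_division=0))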
  • 3 replies
New activity in nbeerbower/gemma2-gutenberg-9B 9 months ago

openllm submission

1
#1 opened 10 months ago by lemon07r
New activity in nothingiisreal/MN-12B-Celeste-V1.9 9 months ago

Feedback

7
#3 opened 9 months ago by Darkknight535

General discussion.

3
#1 opened 10 months ago by Lewdiculous
New activity in Virt-io/SillyTavern-Presets 9 months ago

Presets for Mistral

5
#8 opened 10 months ago by Diavator
New activity in Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix 11 months ago

BOS token discussion

1
25
#2 opened 11 months ago by woofwolfy
reacted to Lewdiculous's post 12 months ago
More context for your Pascal GPU or older!

Update: Now available in the official releases of KoboldCpp!
[releases] https://github.com/LostRuins/koboldcpp/releases/latest

This is great news for all the users with GTX 10XX, P40...

A Flash Attention implementation for older NVIDIA GPUs without Tensor Cores has come to llama.cpp in the last few days and should be merged into the next version of KoboldCpp; you can already try it with another fork or by building it yourself.

[Mentioned KCPP fork] https://github.com/Nexesenex/kobold.cpp/releases/latest

[PR] https://github.com/ggerganov/llama.cpp/pull/7188

You should expect lower VRAM usage for the same context, allowing you to run higher context sizes on your current GPU.

There have also been reports of tokens/second speed improvements for inference, so that's also grand!

If you have tried it, I'd like to hear your experiences with --flashattention so far, especially with this implementation and with the many Pascal (GTX 10XX, P40...) cards out there.

Discussion linked below, with more links to relevant information:

https://huggingface.co/LWDCLS/LLM-Discussions/discussions/11

Cheers!
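
For anyone who wants to give it a spin, an invocation along these lines should do it; --flashattention is the flag mentioned above, while the model path and the other options (--model, --usecublas, --contextsize, --gpulayers) are placeholder values and typical KoboldCpp flags rather than anything prescribed in this post:

python koboldcpp.py --model your-model.gguf --usecublas --contextsize 8192 --gpulayers 33 --flashattention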
New activity in Virt-io/SillyTavern-Presets 12 months ago

Feedback - 02

1
28
#5 opened about 1 year ago by Virt-io
reacted to grimjim's post with ❤️ 12 months ago
I use mergekit regularly, and often enough get acceptable results without performing fine-tuning afterward. My current thinking is that DARE-TIES should be avoided when merging dense models, as the process of thinning inherently punches holes in models.

I've had success using SLERP merges to graft Mistral v0.1 models with Mistral v0.2 models to obtain the context length benefits of the latter, and am looking forward to experimenting with Mistral v0.3, which recently dropped.
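
As a point of reference, a SLERP graft of that kind is typically expressed as a mergekit YAML config roughly like the one below; the model names, layer ranges, and interpolation value t are illustrative assumptions, not grimjim's actual recipe:

# Hypothetical mergekit SLERP config (YAML); assumed models and parameters
slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1            # assumed v0.1 donor model
        layer_range: [0, 32]
      - model: mistralai/Mistral-7B-Instruct-v0.2   # assumed v0.2 model with the longer context
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t: 0.5    # equal-weight interpolation between the two models
dtype: bfloat16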
  • 1 reply