Commit · c37b091
Parent(s): 70bf9cc
Added env examples
- .env.huggingface.example +30 -0
- .env.local.example +30 -0
- .env.openai.example +22 -0
.env.huggingface.example ADDED
@@ -0,0 +1,30 @@
# You can use any model that is available to you and deployed on Hugging Face with a compatible API
# The X_NAME variables are optional for the HuggingFace API; you can use them for your convenience

# Make sure your key has permission to use all models
# Set up your key here: https://huggingface.co/docs/api-inference/en/quicktour#get-your-api-token
HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY

# For example, you can try the public Inference API endpoint for the Meta-Llama-3-70B-Instruct model
# This model's quality is comparable to GPT-4
# But the public API has a strict limit on output tokens, so it is very hard to use for this use case
# You can use your private API endpoint for this model
# Or use any other Hugging Face model that supports the Messages API
# Don't forget to add '/v1' to the end of the URL
LLM_URL=https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct/v1
LLM_TYPE=HF_API
LLM_NAME=Meta-Llama-3-70B-Instruct

# The OpenAI Whisper family with more models is available on HuggingFace:
# https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013
# You can also use any other compatible STT model from HuggingFace
STT_URL=https://api-inference.huggingface.co/models/openai/whisper-tiny.en
STT_TYPE=HF_API
STT_NAME=whisper-tiny.en

# You can use any compatible TTS model from HuggingFace
# For example, you can try the public Inference API endpoint for the Facebook MMS-TTS model
# In my experience, open-source TTS models from HF sound much more robotic than the OpenAI TTS models
TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
TTS_TYPE=HF_API
TTS_NAME=Facebook-mms-tts-eng
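Not part of this commit, but a quick way to sanity-check the values above: the '/v1' LLM endpoint speaks the OpenAI-compatible Messages API, so the stock openai Python client works against it, while the STT and TTS Inference API endpoints take plain POSTs. A minimal sketch, assuming the file is exported into the environment and `openai` and `requests` are installed (the file names and the prompt are placeholders of mine):

import os
import requests
from openai import OpenAI

headers = {"Authorization": f"Bearer {os.environ['HF_API_KEY']}"}

# LLM: one chat completion against the OpenAI-compatible '/v1' endpoint
llm = OpenAI(base_url=os.environ["LLM_URL"], api_key=os.environ["HF_API_KEY"])
reply = llm.chat.completions.create(
    model=os.environ["LLM_NAME"],  # optional for HF endpoints, kept for convenience
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,
)
print(reply.choices[0].message.content)

# STT: the Inference API takes the raw audio bytes as the request body
with open("sample.wav", "rb") as f:  # placeholder audio file
    stt = requests.post(os.environ["STT_URL"], headers=headers, data=f.read())
print(stt.json()["text"])

# TTS: send text, get audio bytes back (the exact format depends on the model)
tts = requests.post(os.environ["TTS_URL"], headers=headers, json={"inputs": "Hello there!"})
with open("tts_output_audio", "wb") as f:
    f.write(tts.content)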
.env.local.example ADDED
@@ -0,0 +1,30 @@
# You can also run models locally or on your own server and use them instead, as long as they are compatible with the HuggingFace API
# For local models, select HF_API as the type because they use the HuggingFace API

# Most probably you don't need a key for your local model
# But if you have some kind of authentication compatible with the HuggingFace API, you can use it here
HF_API_KEY=None

# The main use case for local models is locally running LLMs
# You can serve any model using Text Generation Inference from HuggingFace
# https://github.com/huggingface/text-generation-inference
# This project uses the Messages API, which is compatible with the OpenAI API and lets you just plug and play OS models
# Don't forget to add '/v1' to the end of the URL
# Assuming you have the Meta-Llama-3-8B-Instruct model running on your local server, your configuration will look like this
LLM_URL=http://192.168.1.1:8080/v1
LLM_TYPE=HF_API
LLM_NAME=Meta-Llama-3-8B-Instruct

# Running an STT model locally is less straightforward
# But, for example, you can run one of the Whisper models on your laptop
# It requires a simple wrapper over the model to make it compatible with the HuggingFace API. Maybe I will share one in the future
# But assuming you manage to run a local whisper server, your configuration will look like this
STT_URL=http://127.0.0.1:5000/transcribe
STT_TYPE=HF_API
STT_NAME=whisper-base.en

# I don't see much value in running TTS models locally given the quality of the online models
# But if you have some kind of TTS model running on your local server, you can use it here
TTS_URL=http://127.0.0.1:5001/read
TTS_TYPE=HF_API
TTS_NAME=my-tts-model
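Two illustrations of my own to go with this file (not shipped in the commit). On the LLM side, TGI's official container exposes the Messages API, so something like `docker run --gpus all -p 8080:80 ghcr.io/huggingface/text-generation-inference:latest --model-id meta-llama/Meta-Llama-3-8B-Instruct` serves the model behind LLM_URL. And the "simple wrapper" mentioned above for a local Whisper server could look roughly like the sketch below; the Flask app, route name, and response shape are my assumptions, chosen to mimic the HF Inference API contract (raw audio bytes in, {"text": ...} out; decoding the bytes needs ffmpeg on the host):

# Illustrative local whisper server matching STT_URL above (an assumption, not the author's wrapper)
# Requires: pip install flask transformers torch
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
# Load the model once at startup; matches STT_NAME above
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

@app.post("/transcribe")  # matches the path in STT_URL above
def transcribe():
    audio_bytes = request.get_data()  # raw audio bytes, as the HF API expects
    result = asr(audio_bytes)         # returns {"text": "..."}
    return jsonify({"text": result["text"]})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)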
.env.openai.example ADDED
@@ -0,0 +1,22 @@
# The easy way to set up all your models using only the OpenAI API

# Make sure your key has permission to use all models
# Set up your key here: https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY

# "gpt-3.5-turbo" - ~3 seconds delay with good quality, the recommended model
# "gpt-4-turbo", "gpt-4", etc. - 10+ seconds delay but higher-quality responses
LLM_URL=https://api.openai.com/v1
LLM_TYPE=OPENAI_API
LLM_NAME=gpt-3.5-turbo

# "whisper-1" is the only OpenAI STT model available through the OpenAI API
STT_URL=https://api.openai.com/v1
STT_TYPE=OPENAI_API
STT_NAME=whisper-1

# "tts-1" - very good quality and close to real-time response
# "tts-1-hd" - slightly better quality with a slightly longer response time
TTS_URL=https://api.openai.com/v1
TTS_TYPE=OPENAI_API
TTS_NAME=tts-1
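For completeness, a tiny smoke test of this configuration (illustrative only; assumes `pip install openai`, the file exported into the environment, and the "alloy" voice, which is my arbitrary pick):

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], base_url=os.environ["LLM_URL"])

# LLM: one chat completion with the configured model
chat = client.chat.completions.create(
    model=os.environ["LLM_NAME"],
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(chat.choices[0].message.content)

# TTS: read the reply out loud into an mp3
speech = client.audio.speech.create(
    model=os.environ["TTS_NAME"], voice="alloy", input=chat.choices[0].message.content
)
with open("reply.mp3", "wb") as f:
    f.write(speech.content)

# STT: transcribe the same audio back with whisper-1
with open("reply.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(model=os.environ["STT_NAME"], file=f)
print(transcript.text)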