magarpr committed
Commit 5912059 · verified · 1 Parent(s): 881cdb1

Update app.py

Files changed (1)
  1. app.py +3 -8
app.py CHANGED
@@ -14,23 +14,18 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
 
 DESCRIPTION = """\
-# Llama-2 13B Chat
+# magarpr/Fugaku-LLM-Fugaku-LLM-13B - ISDX CET
 
-This Space demonstrates model [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) by Meta, a Llama 2 model with 13B parameters fine-tuned for chat instructions. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints).
+This Space demonstrates the model Fugaku-LLM-Fugaku-LLM-13B (13B parameters).
 
-🔎 For more details about the Llama 2 family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/llama2).
 
-🔨 Looking for an even more powerful model? Check out the large [**70B** model demo](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI).
-🐇 For a smaller model that you can run on many GPUs, check our [7B model demo](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat).
 
 """
 
 LICENSE = """
 <p/>
+Feel free to play with it, or duplicate to run generations without a queue!
 
----
-As a derivate work of [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) by Meta,
-this demo is governed by the original [license](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/USE_POLICY.md).
 """
 
 if not torch.cuda.is_available():
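For orientation, here is a minimal sketch of how the strings touched by this commit typically sit in the surrounding app.py. It follows the stock Hugging Face chat-Space template this file appears to derive from; the CPU-notice text appended at the bottom is an assumption based on that template, not part of this diff.

```python
import os

import torch

# Context line from the hunk above, as it appears in the file.
MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))

# The two strings as they read after this commit.
DESCRIPTION = """\
# magarpr/Fugaku-LLM-Fugaku-LLM-13B - ISDX CET

This Space demonstrates the model Fugaku-LLM-Fugaku-LLM-13B (13B parameters).
"""

LICENSE = """
<p/>
Feel free to play with it, or duplicate to run generations without a queue!
"""

# The trailing context line `if not torch.cuda.is_available():` usually opens
# a guard like this in the template (the notice text here is an assumption):
if not torch.cuda.is_available():
    DESCRIPTION += "\n<p>Running on CPU. This demo may be too slow to use without a GPU.</p>"
```

Appending the notice to DESCRIPTION, rather than raising an error, keeps the Space page rendering even when no GPU is attached.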