Text Classification · Transformers · Safetensors · English · HHEMv2Config · custom_code
forrest-vectara committed (verified)
Commit 0e7edb3 · 1 Parent(s): b3973af

Update HHEM-2.1-Open demo app URL

Files changed (1)
  1. README.md +3 -7
README.md CHANGED
@@ -9,18 +9,14 @@ pipline_tag: text-classficiation

  <img src="https://huggingface.co/vectara/hallucination_evaluation_model/resolve/main/candle.png" width="50" height="50" style="display: inline;"> In Loving memory of Simon Mark Hughes...

- **Highlights**:
- * HHEM-2.1-Open shows a significant improvement over HHEM-1.0.
- * HHEM-2.1-Open outperforms GPT-3.5-Turbo and even GPT-4.
- * HHEM-2.1-Open can be run on consumer-grade hardware, occupying less than 600MB RAM space at 32-bit precision and elapsing around 1.5 seconds for a 2k-token input on a modern x86 CPU.
+ [**Click here to try out HHEM-2.1-Open from your browser**](https://huggingface.co/spaces/vectara/hhem-2.1-open-demo?logs=build)

- > HHEM-2.1-Open introduces breaking changes to the usage. Please update your code according to the [new usage](#using-hhem-21-open) below. We are working making it compatible with HuggingFace's Inference Endpoint. We apologize for the inconvenience.

- HHEM-2.1-Open is a major upgrade to [HHEM-1.0-Open](https://huggingface.co/vectara/hallucination_evaluation_model/tree/hhem-1.0-open) created by [Vectara](https://vectara.com) in November 2023. The HHEM model series are designed for detecting hallucinations in LLMs. They are particularly useful in the context of building retrieval-augmented-generation (RAG) applications where a set of facts is summarized by an LLM, and HHEM can be used to measure the extent to which this summary is factually consistent with the facts.
+ With performance superior to GPT-3.5-Turbo and GPT-4 but a footprint of less than 600MB of RAM,
+ HHEM-2.1-Open is the latest open-source version of Vectara's HHEM series of models for detecting hallucinations in LLMs. They are particularly useful in the context of building retrieval-augmented-generation (RAG) applications where a set of facts is summarized by an LLM, and HHEM can be used to measure the extent to which this summary is factually consistent with the facts.

  If you are interested to learn more about RAG or experiment with Vectara, you can [sign up](https://console.vectara.com/signup/?utm_source=huggingface&utm_medium=space&utm_term=hhem-model&utm_content=console&utm_campaign=) for a Vectara account.

- [**Try out HHEM-2.1-Open from your browser without coding** ](http://13.57.203.109:3000/)

  ## Hallucination Detection 101
  By "hallucinated" or "factually inconsistent", we mean that a text (hypothesis, to be judged) is not supported by another text (evidence/premise, given). You **always need two** pieces of text to determine whether a text is hallucinated or not. When applied to RAG (retrieval augmented generation), the LLM is provided with several pieces of text (often called facts or context) retrieved from some dataset, and a hallucination would indicate that the summary (hypothesis) is not supported by those facts (evidence).