tirumaraiselvan committed · Commit a11dfa7 · verified · 1 Parent(s): 1ecaf7c

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED

@@ -13,14 +13,14 @@ configs:
   path: "questions.tsv"
 ---
 
-# Agentic Data Access benchmark (ADA benchmark)
+# Agentic Data Access Benchmark (ADAB)
 
-Agentic Data Access benchmark is a set of real-world questions over few "closed domains" to illustrate the evaluation of closed domain AI assistants/agents.
+Agentic Data Access Benchmark is a set of real-world questions over few "closed domains" to illustrate the evaluation of closed domain AI assistants/agents.
 Closed domains are domains where data is not available implicitly in the LLM as they reside in secure or private systems e.g. enterprise databases, SaaS applications, etc
 and AI solutions require mechanisms to connect an LLM to such data. If you are evaluating an AI product or building your own AI architecture over closed domains, then you can use
 these questions/nature of questions to understand the capabilities of your system and qualitatively measure the performance of your assistants/agents.
 
-ADA benchmark was created because of severe short-comings found in closed domain assistants in the wild. We found that apart from few basic canned questions or workflows,
+ADAB was created because of severe short-comings found in closed domain assistants in the wild. We found that apart from few basic canned questions or workflows,
 the assistants were struggling to do anything new. This was found to be because the assistant is not connected
 to sufficient data and is unable to perform complex or sequential operations over that data.
 We call the ability of an AI system, given the description of data, to agentically use and operate on that data as agentic data access.
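The dataset config above points at a `questions.tsv` file. A minimal sketch of reading such a tab-separated file with Python's standard library, assuming a header row; the column names (`question`, `domain`) and the sample row are hypothetical, not taken from the actual file:

```python
import csv
import io

# Hypothetical stand-in for the contents of questions.tsv;
# the real header and rows may differ.
sample_tsv = "question\tdomain\nList my open support tickets\tsupport\n"

# csv.DictReader with delimiter="\t" parses TSV rows into dicts
# keyed by the header row.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
for row in rows:
    print(row["domain"], "->", row["question"])
```

For the real file, replace the in-memory string with `open("questions.tsv", newline="")`.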