LG-AI-EXAONE committed
Commit 6029613 · Parent(s): 71c9fdd

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -292,7 +292,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <td align="center">64.7</td>
  </tr>
  <tr>
- <td >Tau-bench (Airline)</td>
+ <td >Tau-Bench (Airline)</td>
  <td align="center">51.5</td>
  <td align="center">N/A</td>
  <td align="center">38.5</td>
@@ -301,7 +301,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <td align="center">53.5</td>
  </tr>
  <tr>
- <td >Tau-bench (Retail)</td>
+ <td >Tau-Bench (Retail)</td>
  <td align="center">62.8</td>
  <td align="center">N/A</td>
  <td align="center">10.2</td>
@@ -367,7 +367,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>EXAONE 4.0 32B </th>
  <th>Phi 4</th>
  <th>Mistral-Small-2506</th>
- <th>Gemma 3 27B</th>
+ <th>Gemma3 27B</th>
  <th>Qwen3 32B </th>
  <th>Qwen3 235B </th>
  <th>Llama-4-Maverick</th>
@@ -666,7 +666,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>EXAONE Deep 2.4B</th>
  <th>Qwen 3 0.6B </th>
  <th>Qwen 3 1.7B </th>
- <th>SmolLM3 3B </th>
+ <th>SmolLM 3 3B </th>
  </tr>
  <tr>
  <td align="center">Model Size</td>
@@ -846,7 +846,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>Qwen 3 0.6B </th>
  <th>Gemma 3 1B</th>
  <th>Qwen 3 1.7B </th>
- <th>SmolLM3 3B </th>
+ <th>SmolLM 3 3B </th>
  </tr>
  <tr>
  <td align="center">Model Size</td>