LG-AI-EXAONE committed
Commit 125af80 · 1 parent: ad69e8d

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -276,7 +276,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <td align="center">64.7</td>
  </tr>
  <tr>
- <td >Tau-bench (Airline)</td>
+ <td >Tau-Bench (Airline)</td>
  <td align="center">51.5</td>
  <td align="center">N/A</td>
  <td align="center">38.5</td>
@@ -285,7 +285,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <td align="center">53.5</td>
  </tr>
  <tr>
- <td >Tau-bench (Retail)</td>
+ <td >Tau-Bench (Retail)</td>
  <td align="center">62.8</td>
  <td align="center">N/A</td>
  <td align="center">10.2</td>
@@ -351,7 +351,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>EXAONE 4.0 32B </th>
  <th>Phi 4</th>
  <th>Mistral-Small-2506</th>
- <th>Gemma 3 27B</th>
+ <th>Gemma3 27B</th>
  <th>Qwen3 32B </th>
  <th>Qwen3 235B </th>
  <th>Llama-4-Maverick</th>
@@ -650,7 +650,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>EXAONE Deep 2.4B</th>
  <th>Qwen 3 0.6B </th>
  <th>Qwen 3 1.7B </th>
- <th>SmolLM3 3B </th>
+ <th>SmolLM 3 3B </th>
  </tr>
  <tr>
  <td align="center">Model Size</td>
@@ -830,7 +830,7 @@ The following tables show the evaluation results of each model, with reasoning a
  <th>Qwen 3 0.6B </th>
  <th>Gemma 3 1B</th>
  <th>Qwen 3 1.7B </th>
- <th>SmolLM3 3B </th>
+ <th>SmolLM 3 3B </th>
  </tr>
  <tr>
  <td align="center">Model Size</td>