Update README.md
README.md (CHANGED)
@@ -109,7 +109,7 @@ model-index:
 
 <sup>**</sup> First authors <sup>†</sup> Senior Authors <sup>‡</sup> Corresponding Author
 
-\[[arXiv Paper](
+\[[arXiv Paper](arxiv.org/abs/2503.18712)\] \[[Project Page](https://mmathislab.github.io/llavaction/)\] \[[Github Repo](https://github.com/AdaptiveMotorControlLab/LLaVAction)\]
 
 </div>
 
@@ -208,7 +208,7 @@ print(text_outputs)
 
 ## Training
 
-See details in Ye et al. 2025.
+See details in Ye et al. 2025: arxiv.org/abs/2503.18712
 
 ### Model
 - **Architecture**: SO400M + Qwen2
@@ -224,6 +224,8 @@ Neural networks: PyTorch
 
 ## Citation
 
+arXiv: arxiv.org/abs/2503.18712
+
 ```bibtex
 @article{YeQi2025llavaction,
   title={LLaVAction: evaluating and training multi-modal large language models for action recognition},