# Open Questions (I still have on ColBERT)

- People who have worked on ColBERT would agree MarginMSE sucks and KLDiv works great for ColBERT in practice. Is there a formal / mathematical study of why MarginMSE does so badly? (JaColBERT has done some ablations, but I would love to read the why.)
- What does BERT bring as an encoder architecture that makes it the best choice for ColBERT compared to other encoders?
- What were the temperature choices in ColBERT for the query and doc scores?
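To make the first and last questions concrete, here is a minimal pure-Python sketch of the two distillation objectives being compared. This is an illustrative toy (function names and the single-query setup are my own, not from any ColBERT codebase): MarginMSE only constrains one pairwise pos/neg score gap, while KLDiv matches the whole teacher score distribution over candidates, with the softmax `temperature` being exactly the knob the last question asks about.

```python
import math

def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    # MarginMSE: push the student's (pos - neg) score margin
    # toward the teacher's margin. Only one pairwise gap is supervised.
    s_margin = student_pos - student_neg
    t_margin = teacher_pos - teacher_neg
    return (s_margin - t_margin) ** 2

def softmax(scores, temperature=1.0):
    # Convert raw relevance scores into a distribution over candidates.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(student_scores, teacher_scores, temperature=1.0):
    # KLDiv: match the student's full softmax distribution over all
    # candidate docs to the teacher's, not just one pairwise margin.
    p = softmax(teacher_scores, temperature)  # teacher = target
    q = softmax(student_scores, temperature)  # student
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

For example, a student with the wrong absolute scores but the right margin gets zero MarginMSE loss, while KLDiv still sees every candidate's relative score, which is one commonly cited intuition for why it is a richer training signal.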
# Wishlist

- When I can spend more GPU time, I would love to try to reproduce LightOn AI's GTE-ModernColBERT BEIR eval numbers.
- When I can spend more GPU time, I would run the eval for prithivida/modern_colbert_base_en_v1 on a long-docs benchmark.