ahmedelsayed committed on
Commit 2ffb90d · 1 Parent(s): 1661112

commit files to HF hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. CMU Advanced NLP 2024 (1) Introduction to NLP/CMU Advanced NLP 2024 (1) Introduction to NLP.mp4 +3 -0
  2. CMU Advanced NLP 2024 (1) Introduction to NLP/metadata.json +4 -0
  3. CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.srt +0 -0
  4. CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.vtt +0 -0
  5. CMU Advanced NLP 2024 (10) Retrieval and RAG/CMU Advanced NLP 2024 (10) Retrieval and RAG.mp4 +3 -0
  6. CMU Advanced NLP 2024 (10) Retrieval and RAG/metadata.json +4 -0
  7. CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.srt +0 -0
  8. CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.vtt +4036 -0
  9. CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/CMU Advanced NLP 2024 (11) Distillation Quantization and Pruning.mp4 +3 -0
  10. CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/metadata.json +4 -0
  11. CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.srt +0 -0
  12. CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.vtt +0 -0
  13. CMU Advanced NLP 2024 (12) Reinforcement Learning/CMU Advanced NLP 2024 (12) Reinforcement Learning.mp4 +3 -0
  14. CMU Advanced NLP 2024 (12) Reinforcement Learning/metadata.json +4 -0
  15. CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.srt +0 -0
  16. CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.vtt +0 -0
  17. CMU Advanced NLP 2024 (13) Debugging and Interpretation/CMU Advanced NLP 2024 (13) Debugging and Interpretation.mp4 +3 -0
  18. CMU Advanced NLP 2024 (13) Debugging and Interpretation/metadata.json +4 -0
  19. CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.srt +0 -0
  20. CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt +0 -0
  21. CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts.mp4 +3 -0
  22. CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/metadata.json +4 -0
  23. CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt +0 -0
  24. CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.vtt +0 -0
  25. CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models.mp4 +3 -0
  26. CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/metadata.json +4 -0
  27. CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt +0 -0
  28. CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.vtt +0 -0
  29. CMU Advanced NLP 2024 (17) Code Generation/CMU Advanced NLP 2024 (17) Code Generation.mp4 +3 -0
  30. CMU Advanced NLP 2024 (17) Code Generation/metadata.json +4 -0
  31. CMU Advanced NLP 2024 (17) Code Generation/transcript.srt +0 -0
  32. CMU Advanced NLP 2024 (17) Code Generation/transcript.vtt +0 -0
  33. CMU Advanced NLP 2024 (18) Knowledge and Language Models/CMU Advanced NLP 2024 (18) Knowledge and Language Models.mp4 +3 -0
  34. CMU Advanced NLP 2024 (18) Knowledge and Language Models/metadata.json +4 -0
  35. CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.srt +0 -0
  36. CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.vtt +0 -0
  37. CMU Advanced NLP 2024 (2) Word Representation and Text Classification/CMU Advanced NLP 2024 (2) Word Representation and Text Classification.mp4 +3 -0
  38. CMU Advanced NLP 2024 (2) Word Representation and Text Classification/metadata.json +4 -0
  39. CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.srt +0 -0
  40. CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.vtt +0 -0
  41. CMU Advanced NLP 2024 (20) Tool Use and Language Agents/CMU Advanced NLP 2024 (20) Tool Use and Language Agents.mp4 +3 -0
  42. CMU Advanced NLP 2024 (20) Tool Use and Language Agents/metadata.json +4 -0
  43. CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.srt +0 -0
  44. CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.vtt +0 -0
  45. CMU Advanced NLP 2024 (21) Complex Reasoning/CMU Advanced NLP 2024 (21) Complex Reasoning.mp4 +3 -0
  46. CMU Advanced NLP 2024 (21) Complex Reasoning/metadata.json +4 -0
  47. CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.srt +5007 -0
  48. CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.vtt +3757 -0
  49. CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics.mp4 +3 -0
  50. CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/metadata.json +4 -0
CMU Advanced NLP 2024 (1) Introduction to NLP/CMU Advanced NLP 2024 (1) Introduction to NLP.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28be5d1d73b923cf7a91a66e6d77b5862dbf89020af51b887091f9aeedfd7b94
+ size 66391760
CMU Advanced NLP 2024 (1) Introduction to NLP/metadata.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "url": "https://www.youtube.com/watch?v=6NeTO61qc4M",
+ "title": "CMU Advanced NLP 2024 (1) Introduction to NLP"
+ }
CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (10) Retrieval and RAG/CMU Advanced NLP 2024 (10) Retrieval and RAG.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81d3898858e07098de0177379421d5ba13d45835e73a1a41b3b7696c17d01774
+ size 54642972
CMU Advanced NLP 2024 (10) Retrieval and RAG/metadata.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "url": "https://www.youtube.com/watch?v=WQYi-1mvGDM",
+ "title": "CMU Advanced NLP 2024 (10) Retrieval and RAG"
+ }
CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.vtt ADDED
@@ -0,0 +1,4036 @@
1
+ WEBVTT
2
+
3
+ 00:00:00.040 --> 00:00:03.880
4
+ so today I'm going to talk about
5
+
6
+ 00:00:01.319 --> 00:00:06.680
7
+ retrieval and retrieval augmented
8
+
9
+ 00:00:03.880 --> 00:00:09.040
10
+ generation so if we look at our standard
11
+
12
+ 00:00:06.680 --> 00:00:10.880
13
+ prompting flow normally what we do is we
14
+
15
+ 00:00:09.040 --> 00:00:14.160
16
+ combine together a prompt template with
17
+
18
+ 00:00:10.880 --> 00:00:16.600
19
+ an input so if we say please answer this
20
+
21
+ 00:00:14.160 --> 00:00:18.720
22
+ question I think Vin Diesel has been a
23
+
24
+ 00:00:16.600 --> 00:00:21.000
25
+ voice actor for several pictors in TV
26
+
27
+ 00:00:18.720 --> 00:00:24.000
28
+ series do you know what their names
29
+
30
+ 00:00:21.000 --> 00:00:25.400
31
+ are we could get a response from a
32
+
33
+ 00:00:24.000 --> 00:00:26.840
34
+ language model but there are several
35
+
36
+ 00:00:25.400 --> 00:00:30.840
37
+ problems with
38
+
39
+ 00:00:26.840 --> 00:00:33.680
40
+ this the first is accuracy issues
41
+
42
+ 00:00:30.840 --> 00:00:36.160
43
+ the models generally have a knowledge
44
+
45
+ 00:00:33.680 --> 00:00:38.879
46
+ cut off so the parameters are usually
47
+
48
+ 00:00:36.160 --> 00:00:41.120
49
+ only updated to a particular time so for
50
+
51
+ 00:00:38.879 --> 00:00:43.200
52
+ example if a new Vin Diesel TV series
53
+
54
+ 00:00:41.120 --> 00:00:44.960
55
+ comes out then the model that was
56
+
57
+ 00:00:43.200 --> 00:00:47.440
58
+ trained up to a certain time Point won't
59
+
60
+ 00:00:44.960 --> 00:00:51.000
61
+ be able to know anything about
62
+
63
+ 00:00:47.440 --> 00:00:53.600
64
+ it there's also issues of private data
65
+
66
+ 00:00:51.000 --> 00:00:55.320
67
+ so data stored in private text or data
68
+
69
+ 00:00:53.600 --> 00:00:57.840
70
+ repositories is not suitable for
71
+
72
+ 00:00:55.320 --> 00:01:02.600
73
+ training for a number of reasons number
74
+
75
+ 00:00:57.840 --> 00:01:05.199
76
+ one it's not available to to particular
77
+
78
+ 00:01:02.600 --> 00:01:07.799
79
+ language model training providers such
80
+
81
+ 00:01:05.199 --> 00:01:10.720
82
+ as you know open AI or Google or anybody
83
+
84
+ 00:01:07.799 --> 00:01:13.840
85
+ else like this the second thing is
86
+
87
+ 00:01:10.720 --> 00:01:16.799
88
+ Access Control issues so even if you're
89
+
90
+ 00:01:13.840 --> 00:01:17.840
91
+ within an organization that has lots of
92
+
93
+ 00:01:16.799 --> 00:01:20.799
94
+ private data and you can train a
95
+
96
+ 00:01:17.840 --> 00:01:22.600
97
+ language model on that certain people in
98
+
99
+ 00:01:20.799 --> 00:01:24.200
100
+ the organization may have access to
101
+
102
+ 00:01:22.600 --> 00:01:27.640
103
+ certain varieties of data and other
104
+
105
+ 00:01:24.200 --> 00:01:29.400
106
+ people may not so it's not just solely
107
+
108
+ 00:01:27.640 --> 00:01:31.520
109
+ an issue of third party providers it's
110
+
111
+ 00:01:29.400 --> 00:01:33.840
112
+ an issue of organization level Access
113
+
114
+ 00:01:31.520 --> 00:01:36.159
115
+ Control in
116
+
117
+ 00:01:33.840 --> 00:01:38.920
118
+ general in addition there are learning
119
+
120
+ 00:01:36.159 --> 00:01:40.320
121
+ failures so even for data that the model
122
+
123
+ 00:01:38.920 --> 00:01:42.640
124
+ was trained on it might not be
125
+
126
+ 00:01:40.320 --> 00:01:44.399
127
+ sufficient to get the right answer and
128
+
129
+ 00:01:42.640 --> 00:01:47.799
130
+ this is particularly the case for very
131
+
132
+ 00:01:44.399 --> 00:01:52.320
133
+ very large uh training data sets and
134
+
135
+ 00:01:47.799 --> 00:01:53.920
136
+ models that are you know modestly sized
137
+
138
+ 00:01:52.320 --> 00:01:55.880
139
+ because the models very often won't be
140
+
141
+ 00:01:53.920 --> 00:01:58.360
142
+ able to learn from a single look at a
143
+
144
+ 00:01:55.880 --> 00:02:02.039
145
+ particular fact or or whatever else like
146
+
147
+ 00:01:58.360 --> 00:02:02.039
148
+ this especially if iter early in
149
+
150
+ 00:02:02.159 --> 00:02:08.160
151
+ training another thing is even if the
152
+
153
+ 00:02:05.240 --> 00:02:10.599
154
+ answer is correct it might not be
155
+
156
+ 00:02:08.160 --> 00:02:13.440
157
+ verifiable so you might want to be very
158
+
159
+ 00:02:10.599 --> 00:02:15.000
160
+ sure that the model is not making any
161
+
162
+ 00:02:13.440 --> 00:02:17.640
163
+ accuracy
164
+
165
+ 00:02:15.000 --> 00:02:19.040
166
+ problems and so in order to do that very
167
+
168
+ 00:02:17.640 --> 00:02:21.879
169
+ often a human will want to go back to
170
+
171
+ 00:02:19.040 --> 00:02:21.879
172
+ the source of the
173
+
174
+ 00:02:22.200 --> 00:02:27.319
175
+ data so to solve this there's a method
176
+
177
+ 00:02:25.480 --> 00:02:29.200
178
+ called retrieval augmented generation
179
+
180
+ 00:02:27.319 --> 00:02:30.280
181
+ which will also be the topic of our
182
+
183
+ 00:02:29.200 --> 00:02:32.599
184
+ second assignment
185
+
186
+ 00:02:30.280 --> 00:02:35.680
187
+ here and the way it works is you
188
+
189
+ 00:02:32.599 --> 00:02:38.319
190
+ retrieve relevant passages
191
+
192
+ 00:02:35.680 --> 00:02:40.680
193
+ efficiently ones that kind of entail the
194
+
195
+ 00:02:38.319 --> 00:02:42.480
196
+ answer to a question and then read the
197
+
198
+ 00:02:40.680 --> 00:02:46.080
199
+ passages to answer the
200
+
201
+ 00:02:42.480 --> 00:02:48.599
202
+ query so we have documents like this we
203
+
204
+ 00:02:46.080 --> 00:02:52.360
205
+ have a query based on the query we form
206
+
207
+ 00:02:48.599 --> 00:02:55.360
208
+ retrieval we get a whole bunch of uh
209
+
210
+ 00:02:52.360 --> 00:02:57.560
211
+ passages we do reading and then we get
212
+
213
+ 00:02:55.360 --> 00:02:57.560
214
+ the
215
+
216
+ 00:02:58.280 --> 00:03:04.440
217
+ answer so this is in fact implemented in
218
+
219
+ 00:03:01.720 --> 00:03:07.599
220
+ many or even most uh language modeling
221
+
222
+ 00:03:04.440 --> 00:03:09.840
223
+ providers including open AI so to give
224
+
225
+ 00:03:07.599 --> 00:03:11.480
226
+ an example I asked the question that I
227
+
228
+ 00:03:09.840 --> 00:03:12.879
229
+ just said about Vin Diesel's voice
230
+
231
+ 00:03:11.480 --> 00:03:16.599
232
+ acting and TV
233
+
234
+ 00:03:12.879 --> 00:03:19.760
235
+ series and Chad GPT gave me an answer
236
+
237
+ 00:03:16.599 --> 00:03:22.440
238
+ and you can see that J gpt's answer
239
+
240
+ 00:03:19.760 --> 00:03:24.720
241
+ includes several places with quotes um
242
+
243
+ 00:03:22.440 --> 00:03:28.159
244
+ they the little blue quotes
245
+
246
+ 00:03:24.720 --> 00:03:30.760
247
+ there and if you click on the quote it
248
+
249
+ 00:03:28.159 --> 00:03:33.120
250
+ tells you where the information Source
251
+
252
+ 00:03:30.760 --> 00:03:35.000
253
+ came from and so this one says behind
254
+
255
+ 00:03:33.120 --> 00:03:37.760
256
+ the voice actors been
257
+
258
+ 00:03:35.000 --> 00:03:39.920
259
+ Diesel and behind the voice actors TV
260
+
261
+ 00:03:37.760 --> 00:03:42.959
262
+ shows Big Mouth V
263
+
264
+ 00:03:39.920 --> 00:03:45.640
265
+ diesel now if we look
266
+
267
+ 00:03:42.959 --> 00:03:48.640
268
+ closer into this answer we'll see that
269
+
270
+ 00:03:45.640 --> 00:03:49.959
271
+ it's not perfect even though it is uh
272
+
273
+ 00:03:48.640 --> 00:03:52.519
274
+ performing retrieval augmented
275
+
276
+ 00:03:49.959 --> 00:03:54.840
277
+ Generations so for example I only asked
278
+
279
+ 00:03:52.519 --> 00:03:57.200
280
+ about TV series but it's giving me lots
281
+
282
+ 00:03:54.840 --> 00:03:59.680
283
+ of things about movies where it says
284
+
285
+ 00:03:57.200 --> 00:04:01.319
286
+ Groot in Guardians of the Galaxy volume
287
+
288
+ 00:03:59.680 --> 00:04:04.480
289
+ 3 2023
290
+
291
+ 00:04:01.319 --> 00:04:07.200
292
+ movie and in fact uh Vin Diesel was not
293
+
294
+ 00:04:04.480 --> 00:04:10.920
295
+ even voicing a character named gut here
296
+
297
+ 00:04:07.200 --> 00:04:13.480
298
+ so that's definitely an accuracy
299
+
300
+ 00:04:10.920 --> 00:04:15.079
301
+ mistake and separately there's a place
302
+
303
+ 00:04:13.480 --> 00:04:17.639
304
+ where it says additionally though the
305
+
306
+ 00:04:15.079 --> 00:04:19.959
307
+ website for big mouthless Vin Diesel it
308
+
309
+ 00:04:17.639 --> 00:04:22.040
310
+ appears to be a misunderstanding or err
311
+
312
+ 00:04:19.959 --> 00:04:25.360
313
+ as Nick croll is credited as the voice
314
+
315
+ 00:04:22.040 --> 00:04:27.800
316
+ of Vin Diesel in that show so there
317
+
318
+ 00:04:25.360 --> 00:04:30.039
319
+ actually Nick croll was acting as V
320
+
321
+ 00:04:27.800 --> 00:04:32.800
322
+ diesel but that's um kind of a
323
+
324
+ 00:04:30.039 --> 00:04:34.600
325
+ misunderstanding of the reader model but
326
+
327
+ 00:04:32.800 --> 00:04:36.600
328
+ anyway you can get the general idea here
329
+
330
+ 00:04:34.600 --> 00:04:40.199
331
+ you can also see that it's not perfect
332
+
333
+ 00:04:36.600 --> 00:04:42.720
334
+ even for very strong models like GPD
335
+
336
+ 00:04:40.199 --> 00:04:44.800
337
+ 4 so now I'd like to go into the actual
338
+
339
+ 00:04:42.720 --> 00:04:46.759
340
+ methodology that we use for this uh we
341
+
342
+ 00:04:44.800 --> 00:04:50.360
343
+ have retrieval
344
+
345
+ 00:04:46.759 --> 00:04:53.160
346
+ methods and for the retrieval methods we
347
+
348
+ 00:04:50.360 --> 00:04:55.160
349
+ have uh quite a few different options
350
+
351
+ 00:04:53.160 --> 00:04:57.960
352
+ I'm going to go through each one of them
353
+
354
+ 00:04:55.160 --> 00:05:00.960
355
+ at a time so sparse retrieval document
356
+
357
+ 00:04:57.960 --> 00:05:04.240
358
+ level dense retrieval token level DSE
359
+
360
+ 00:05:00.960 --> 00:05:08.039
361
+ retrieval cross- encoder reranking and
362
+
363
+ 00:05:04.240 --> 00:05:09.320
364
+ blackbox retrieval so blackbox retrieval
365
+
366
+ 00:05:08.039 --> 00:05:11.280
367
+ I'm not really going to go into it a
368
+
369
+ 00:05:09.320 --> 00:05:16.000
370
+ whole lot basically this is just asking
371
+
372
+ 00:05:11.280 --> 00:05:17.560
373
+ a blackbox search engine to retrieve uh
374
+
375
+ 00:05:16.000 --> 00:05:20.000
376
+ you know the relevant context and
377
+
378
+ 00:05:17.560 --> 00:05:22.560
379
+ getting the top several results
380
+
381
+ 00:05:20.000 --> 00:05:24.039
382
+ nonetheless this is a pretty you know
383
+
384
+ 00:05:22.560 --> 00:05:26.800
385
+ reasonable method to do it if you want
386
+
387
+ 00:05:24.039 --> 00:05:29.080
388
+ to do search over you know lots of data
389
+
390
+ 00:05:26.800 --> 00:05:32.759
391
+ that exists on the internet already and
392
+
393
+ 00:05:29.080 --> 00:05:36.600
394
+ that in is what chat jpt does it looks
395
+
396
+ 00:05:32.759 --> 00:05:39.240
397
+ up on Bing by generating a query to
398
+
399
+ 00:05:36.600 --> 00:05:41.560
400
+ Bing so anyway let's go into the actual
401
+
402
+ 00:05:39.240 --> 00:05:43.840
403
+ methods that you develop and control
404
+
405
+ 00:05:41.560 --> 00:05:46.600
406
+ yourself so the first one is sparse
407
+
408
+ 00:05:43.840 --> 00:05:48.479
409
+ retrieval and the way this works is you
410
+
411
+ 00:05:46.600 --> 00:05:50.440
412
+ express the query and document as a
413
+
414
+ 00:05:48.479 --> 00:05:53.680
415
+ sparse word frequency Vector usually
416
+
417
+ 00:05:50.440 --> 00:05:58.759
418
+ normalized by length and so if I ask uh
419
+
420
+ 00:05:53.680 --> 00:06:01.720
421
+ query what is NLP we get a vector where
422
+
423
+ 00:05:58.759 --> 00:06:04.120
424
+ each row the vector corresponds to a
425
+
426
+ 00:06:01.720 --> 00:06:07.919
427
+ different
428
+
429
+ 00:06:04.120 --> 00:06:12.960
430
+ token and we asked what is
431
+
432
+ 00:06:07.919 --> 00:06:16.360
433
+ NLP and so uh the places for what NLP
434
+
435
+ 00:06:12.960 --> 00:06:18.199
436
+ and is will all have a non-zero value
437
+
438
+ 00:06:16.360 --> 00:06:20.199
439
+ and everything else will have a zero
440
+
441
+ 00:06:18.199 --> 00:06:21.720
442
+ value and we also normalize by the
443
+
444
+ 00:06:20.199 --> 00:06:24.120
445
+ length of vectors so we get something
446
+
447
+ 00:06:21.720 --> 00:06:24.120
448
+ like
449
+
450
+ 00:06:24.840 --> 00:06:28.440
451
+ 333333 then we have a whole bunch of
452
+
453
+ 00:06:26.759 --> 00:06:30.720
454
+ documents so the first document says
455
+
456
+ 00:06:28.440 --> 00:06:31.759
457
+ what is life can is life someone really
458
+
459
+ 00:06:30.720 --> 00:06:33.960
460
+ likes
461
+
462
+ 00:06:31.759 --> 00:06:36.000
463
+ candy we also have another one that says
464
+
465
+ 00:06:33.960 --> 00:06:38.360
466
+ NLP as an acronym for natural language
467
+
468
+ 00:06:36.000 --> 00:06:39.479
469
+ processing so this is a pretty good uh
470
+
471
+ 00:06:38.360 --> 00:06:42.479
472
+ you
473
+
474
+ 00:06:39.479 --> 00:06:44.840
475
+ know answer to our
476
+
477
+ 00:06:42.479 --> 00:06:48.039
478
+ question then we also have I like to do
479
+
480
+ 00:06:44.840 --> 00:06:49.360
481
+ good research on NLP which is you know a
482
+
483
+ 00:06:48.039 --> 00:06:51.360
484
+ nice sentiment but not a very good
485
+
486
+ 00:06:49.360 --> 00:06:54.400
487
+ answer to our question I
488
+
489
+ 00:06:51.360 --> 00:06:59.479
490
+ guess so if we look at the vectors here
491
+
492
+ 00:06:54.400 --> 00:07:03.280
493
+ we have uh what and candy and is have uh
494
+
495
+ 00:06:59.479 --> 00:07:07.120
496
+ a fairly high
497
+
498
+ 00:07:03.280 --> 00:07:12.520
499
+ score and we have here NLP and is have a
500
+
501
+ 00:07:07.120 --> 00:07:16.479
502
+ high score and NLP has a a nonzero
503
+
504
+ 00:07:12.520 --> 00:07:18.400
505
+ score So based on this um we find the
506
+
507
+ 00:07:16.479 --> 00:07:20.560
508
+ document similarity with the highest
509
+
510
+ 00:07:18.400 --> 00:07:22.039
511
+ inner product or cosine similarity in
512
+
513
+ 00:07:20.560 --> 00:07:24.360
514
+ the document
515
+
516
+ 00:07:22.039 --> 00:07:27.000
517
+ collection and so if we take the inner
518
+
519
+ 00:07:24.360 --> 00:07:28.759
520
+ product between these vectors we
521
+
522
+ 00:07:27.000 --> 00:07:31.280
523
+ actually see that the first one got the
524
+
525
+ 00:07:28.759 --> 00:07:34.479
526
+ highest score because of its
527
+
528
+ 00:07:31.280 --> 00:07:37.440
529
+ relatively High values for the words
530
+
531
+ 00:07:34.479 --> 00:07:37.440
532
+ what and
533
+
534
+ 00:07:38.160 --> 00:07:43.759
535
+ is
536
+
537
+ 00:07:40.199 --> 00:07:46.720
538
+ so as you can see common words like what
539
+
540
+ 00:07:43.759 --> 00:07:49.000
541
+ and is can get a high score kind of
542
+
543
+ 00:07:46.720 --> 00:07:51.800
544
+ regardless of whether a document is very
545
+
546
+ 00:07:49.000 --> 00:07:53.919
547
+ relevant and so one way we can fix this
548
+
549
+ 00:07:51.800 --> 00:07:55.960
550
+ is through something called term
551
+
552
+ 00:07:53.919 --> 00:07:59.479
553
+ waiting and the way that term waiting
554
+
555
+ 00:07:55.960 --> 00:08:02.680
556
+ works is in addition to having this
557
+
558
+ 00:07:59.479 --> 00:08:04.599
559
+ Vector that
560
+
561
+ 00:08:02.680 --> 00:08:07.680
562
+ calculates
563
+
564
+ 00:08:04.599 --> 00:08:10.680
565
+ the frequency within a particular
566
+
567
+ 00:08:07.680 --> 00:08:13.639
568
+ document we also have an upweighting
569
+
570
+ 00:08:10.680 --> 00:08:15.599
571
+ term that gives higher weight to low
572
+
573
+ 00:08:13.639 --> 00:08:18.199
574
+ frequency words because low frequency
575
+
576
+ 00:08:15.599 --> 00:08:20.280
577
+ words like NLP tend to be more
578
+
579
+ 00:08:18.199 --> 00:08:22.759
580
+ informative about whether the document
581
+
582
+ 00:08:20.280 --> 00:08:25.240
583
+ is relevant than high frequency words
584
+
585
+ 00:08:22.759 --> 00:08:27.080
586
+ like what it is because these high
587
+
588
+ 00:08:25.240 --> 00:08:31.320
589
+ frequency words like what and is Could
590
+
591
+ 00:08:27.080 --> 00:08:34.279
592
+ Happen kind of regardless of whether
593
+
594
+ 00:08:31.320 --> 00:08:36.680
595
+ the you know document is relevant the
596
+
597
+ 00:08:34.279 --> 00:08:41.800
598
+ particular terms the person is asking
599
+
600
+ 00:08:36.680 --> 00:08:44.000
601
+ about so one well used and easy to
602
+
603
+ 00:08:41.800 --> 00:08:46.560
604
+ understand version of this is uh tfidf
605
+
606
+ 00:08:44.000 --> 00:08:48.839
607
+ or term frequency indument
608
+
609
+ 00:08:46.560 --> 00:08:51.200
610
+ frequency so the way we Define term
611
+
612
+ 00:08:48.839 --> 00:08:52.959
613
+ frequency is exactly what I talked about
614
+
615
+ 00:08:51.200 --> 00:08:56.959
616
+ before so it's basically the frequency
617
+
618
+ 00:08:52.959 --> 00:08:59.839
619
+ of the term uh T in the document d
620
+
621
+ 00:08:56.959 --> 00:09:01.640
622
+ normalized by the total term frequency
623
+
624
+ 00:08:59.839 --> 00:09:03.680
625
+ within the document so that that's what
626
+
627
+ 00:09:01.640 --> 00:09:06.800
628
+ I already showed in the previous
629
+
630
+ 00:09:03.680 --> 00:09:09.360
631
+ slide and then indument frequency is a
632
+
633
+ 00:09:06.800 --> 00:09:13.760
634
+ little bit more involved but basically
635
+
636
+ 00:09:09.360 --> 00:09:15.760
637
+ the way this works is we have log of the
638
+
639
+ 00:09:13.760 --> 00:09:18.160
640
+ total number of documents in the
641
+
642
+ 00:09:15.760 --> 00:09:24.040
643
+ collection divided
644
+
645
+ 00:09:18.160 --> 00:09:26.760
646
+ by the number of documents in which
647
+
648
+ 00:09:24.040 --> 00:09:30.279
649
+ this term appears
650
+
651
+ 00:09:26.760 --> 00:09:33.360
652
+ and so if a term appears in many
653
+
654
+ 00:09:30.279 --> 00:09:36.120
655
+ documents it will
656
+
657
+ 00:09:33.360 --> 00:09:39.240
658
+ have a low IDF score uh one that's close
659
+
660
+ 00:09:36.120 --> 00:09:41.519
661
+ to zero but if it rarely appears it will
662
+
663
+ 00:09:39.240 --> 00:09:44.120
664
+ have a high IDF score so basically this
665
+
666
+ 00:09:41.519 --> 00:09:45.040
667
+ is upweighting our infrequent terms and
668
+
669
+ 00:09:44.120 --> 00:09:47.560
670
+ then for
671
+
672
+ 00:09:45.040 --> 00:09:51.320
673
+ tfidf uh we basically multiply these two
674
+
675
+ 00:09:47.560 --> 00:09:53.120
676
+ terms together and we upweight the low
677
+
678
+ 00:09:51.320 --> 00:09:55.640
679
+ frequency
680
+
681
+ 00:09:53.120 --> 00:10:00.519
682
+ words there's another version of this
683
+
684
+ 00:09:55.640 --> 00:10:03.640
685
+ called bm25 that is uh widely used
686
+
687
+ 00:10:00.519 --> 00:10:05.800
688
+ um this is more involved so I'm not
689
+
690
+ 00:10:03.640 --> 00:10:08.120
691
+ going to go into all of the details but
692
+
693
+ 00:10:05.800 --> 00:10:12.399
694
+ basically if you remember back to the
695
+
696
+ 00:10:08.120 --> 00:10:13.720
697
+ lecture on count-based language models
698
+
699
+ 00:10:12.399 --> 00:10:14.880
700
+ there were a bunch of smoothing
701
+
702
+ 00:10:13.720 --> 00:10:18.839
703
+ techniques for these count-based
704
+
705
+ 00:10:14.880 --> 00:10:21.839
706
+ language models and this uses uh kind of
707
+
708
+ 00:10:18.839 --> 00:10:25.839
709
+ a multiplicative additive smoothing
710
+
711
+ 00:10:21.839 --> 00:10:27.160
712
+ term to upweight things instead of using
713
+
714
+ 00:10:25.839 --> 00:10:30.200
715
+ the term
716
+
717
+ 00:10:27.160 --> 00:10:33.399
718
+ frequency and uh the actual formula is
719
+
720
+ 00:10:30.200 --> 00:10:37.240
721
+ here K and B are kind of
722
+
723
+ 00:10:33.399 --> 00:10:39.360
724
+ hyperparameters and um average DL is
725
+
726
+ 00:10:37.240 --> 00:10:40.639
727
+ average document length the details of
728
+
729
+ 00:10:39.360 --> 00:10:42.120
730
+ this are not really important but
731
+
732
+ 00:10:40.639 --> 00:10:43.800
733
+ basically what you should know is that
734
+
735
+ 00:10:42.120 --> 00:10:45.639
736
+ this is doing some smoothing on the term
737
+
738
+ 00:10:43.800 --> 00:10:48.240
739
+ frequencies and you can look in more
740
+
741
+ 00:10:45.639 --> 00:10:48.240
742
+ detail if you're
743
+
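The TF-IDF weighting described in the cues above can be sketched in a few lines of Python. This is an illustrative toy version, not code from the lecture: the function name `tf_idf` and the example documents are my own, term frequency is the raw count normalized by document length, and IDF is the log of total documents over document frequency.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Toy TF-IDF: docs is a list of token lists; returns one term->weight dict per doc."""
    n = len(docs)
    # document frequency: number of documents each term appears in
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        counts = Counter(doc)
        total = sum(counts.values())
        out.append({
            t: (c / total) * math.log(n / df[t])  # tf * idf
            for t, c in counts.items()
        })
    return out

docs = [["what", "is", "nlp"], ["nlp", "is", "fun", "fun"], ["what", "is", "candy"]]
vecs = tf_idf(docs)
# "is" appears in every document, so its IDF is log(3/3) = 0 and it gets zero weight
```

BM25 keeps the same IDF-style upweighting of rare terms but replaces the raw term-frequency factor with a saturating, length-normalized variant controlled by the k and b hyperparameters mentioned in the lecture.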
744
+ 00:10:49.160 --> 00:10:54.920
745
+ interested so now that we have this sort
746
+
747
+ 00:10:52.880 --> 00:10:57.959
748
+ of term
749
+
750
+ 00:10:54.920 --> 00:11:00.320
751
+ based uh sparse Vector we would like to
752
+
753
+ 00:10:57.959 --> 00:11:03.320
754
+ use this to look up relevant documents
755
+
756
+ 00:11:00.320 --> 00:11:06.000
757
+ in a collection very quickly because you
758
+
759
+ 00:11:03.320 --> 00:11:08.000
760
+ know we might have a collection that's
761
+
762
+ 00:11:06.000 --> 00:11:09.720
763
+ extremely large like as large as the
764
+
765
+ 00:11:08.000 --> 00:11:12.320
766
+ entire internet like what Google is
767
+
768
+ 00:11:09.720 --> 00:11:14.160
769
+ doing when it searches and so in order
770
+
771
+ 00:11:12.320 --> 00:11:16.240
772
+ to solve this we need a data structure
773
+
774
+ 00:11:14.160 --> 00:11:17.279
775
+ that allows for efficient sparse lookup
776
+
777
+ 00:11:16.240 --> 00:11:19.480
778
+ of
779
+
780
+ 00:11:17.279 --> 00:11:23.720
781
+ vectors and so we have all of these
782
+
783
+ 00:11:19.480 --> 00:11:27.279
784
+ sparse vectors like this
785
+
786
+ 00:11:23.720 --> 00:11:31.240
787
+ and we uh basically turn this into an
788
+
789
+ 00:11:27.279 --> 00:11:34.720
790
+ index where we have something like a you
791
+
792
+ 00:11:31.240 --> 00:11:37.920
793
+ know python style dictionary or map that
794
+
795
+ 00:11:34.720 --> 00:11:41.079
796
+ has as its key each uh word we would
797
+
798
+ 00:11:37.920 --> 00:11:45.000
799
+ like to look up and as the value
800
+
801
+ 00:11:41.079 --> 00:11:48.480
802
+ the corresponding um index of that
803
+
804
+ 00:11:45.000 --> 00:11:50.480
805
+ document so for example what in our case
806
+
807
+ 00:11:48.480 --> 00:11:54.200
808
+ here only appears in document one so it
809
+
810
+ 00:11:50.480 --> 00:11:56.279
811
+ would point to document one candy uh
812
+
813
+ 00:11:54.200 --> 00:11:58.560
814
+ also appears in document one NLP appears
815
+
816
+ 00:11:56.279 --> 00:11:59.839
817
+ in two and three and so you can create
818
+
819
+ 00:11:58.560 --> 00:12:02.760
820
+ this index like this and this is
821
+
822
+ 00:11:59.839 --> 00:12:02.760
823
+ called an inverted
824
+
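The inverted index just described can be sketched as a plain Python dictionary from term to posting set. The example documents below loosely mirror the lecture's example ("what" only in document 1, "nlp" in documents 2 and 3) but are assumptions, not the actual slide contents.

```python
# Toy inverted index: map each term to the set of documents containing it.
docs = {
    1: ["what", "is", "candy"],
    2: ["nlp", "is", "fun"],
    3: ["i", "love", "nlp"],
}

inverted = {}
for doc_id, tokens in docs.items():
    for t in set(tokens):
        inverted.setdefault(t, set()).add(doc_id)

def lookup(query_terms):
    """Union of posting sets: cost scales with query length, not collection size."""
    hits = set()
    for t in query_terms:
        hits |= inverted.get(t, set())
    return hits
```

Because a query only touches the posting lists of its own terms, this is what makes sparse retrieval fast even over web-scale collections.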
825
+ 00:12:03.079 --> 00:12:08.760
826
+ Index this is an important application
827
+
828
+ 00:12:06.000 --> 00:12:11.600
829
+ of course so there's lots of software
830
+
831
+ 00:12:08.760 --> 00:12:14.920
832
+ the most kind of typical software for this
833
+
834
+ 00:12:11.600 --> 00:12:18.760
835
+ is Apache Lucene so if you want to build
836
+
837
+ 00:12:14.920 --> 00:12:21.639
838
+ a big index uh to look up vectors using
839
+
840
+ 00:12:18.760 --> 00:12:24.160
841
+ this sparse index like this you can uh
842
+
843
+ 00:12:21.639 --> 00:12:24.160
844
+ take a look at
845
+
846
+ 00:12:26.160 --> 00:12:30.880
847
+ Lucy so the next thing I'd like to talk
848
+
849
+ 00:12:28.399 --> 00:12:33.199
850
+ about is dense retrieval and the way
851
+
852
+ 00:12:30.880 --> 00:12:36.000
853
+ dense retrieval works is you encode the
854
+
855
+ 00:12:33.199 --> 00:12:37.240
856
+ document and query into a dense vector
857
+
858
+ 00:12:36.000 --> 00:12:40.240
859
+ and find the nearest
860
+
861
+ 00:12:37.240 --> 00:12:42.160
862
+ neighbor in order to do this encoding
863
+
864
+ 00:12:40.240 --> 00:12:44.639
865
+ you can use a number of things you can
866
+
867
+ 00:12:42.160 --> 00:12:47.440
868
+ use out of the box embeddings or you can
869
+
870
+ 00:12:44.639 --> 00:12:49.959
871
+ use learned embeddings specifically
872
+
873
+ 00:12:47.440 --> 00:12:53.519
874
+ created for the purpose of
875
+
876
+ 00:12:49.959 --> 00:12:56.240
877
+ retrieving and so what we do is we take
878
+
879
+ 00:12:53.519 --> 00:12:57.920
880
+ all of these uh documents here we
881
+
882
+ 00:12:56.240 --> 00:12:59.920
883
+ convert them into embeddings using
884
+
885
+ 00:12:57.920 --> 00:13:04.040
886
+ whatever embedding method that we want
887
+
888
+ 00:12:59.920 --> 00:13:05.920
889
+ to use we then have a query and we take
890
+
891
+ 00:13:04.040 --> 00:13:07.720
892
+ that query and we match it and find the
893
+
894
+ 00:13:05.920 --> 00:13:10.040
895
+ nearest neighbor
896
+
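A minimal sketch of this dense lookup, assuming the document embeddings are already computed (random stand-ins here, not outputs of a real encoder): score every document by inner product with the query and take the top-k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in document embeddings; in practice these come from an encoder.
doc_embeddings = rng.normal(size=(1000, 64)).astype(np.float32)

def retrieve(query_vec, k=3):
    """Exact nearest-neighbor search by inner product over all documents."""
    scores = doc_embeddings @ query_vec   # one score per document
    topk = np.argsort(-scores)[:k]        # highest scores first
    return topk, scores[topk]

# A query identical to a stored embedding should rank that document first.
ids, scores = retrieve(doc_embeddings[42])
```

This exhaustive scan is the step that the approximate nearest-neighbor methods later in the lecture replace with sublinear lookups.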
897
+ 00:13:07.720 --> 00:13:13.120
898
+ here so if you're just using out of the
899
+
900
+ 00:13:10.040 --> 00:13:14.839
901
+ box embeddings you don't need to um you
902
+
903
+ 00:13:13.120 --> 00:13:15.880
904
+ know do anything special for retrieval
905
+
906
+ 00:13:14.839 --> 00:13:18.440
907
+ you can just take your favorite
908
+
909
+ 00:13:15.880 --> 00:13:22.800
910
+ embeddings like the sentence BT
911
+
912
+ 00:13:18.440 --> 00:13:25.639
913
+ embeddings or the OpenAI uh Ada
914
+
915
+ 00:13:22.800 --> 00:13:27.240
916
+ embeddings or something like this but
917
+
918
+ 00:13:25.639 --> 00:13:29.519
919
+ actually the type of embeddings you need
920
+
921
+ 00:13:27.240 --> 00:13:32.040
922
+ for retrieval are kind of
923
+
924
+ 00:13:29.519 --> 00:13:33.519
925
+ very special and because of that it's
926
+
927
+ 00:13:32.040 --> 00:13:36.160
928
+ important
929
+
930
+ 00:13:33.519 --> 00:13:38.600
931
+ to if you're very serious about doing a
932
+
933
+ 00:13:36.160 --> 00:13:39.800
934
+ good job of retrieval it's important to use
935
+
936
+ 00:13:38.600 --> 00:13:41.360
937
+ embeddings that were specifically
938
+
939
+ 00:13:39.800 --> 00:13:45.040
940
+ tailored for
941
+
942
+ 00:13:41.360 --> 00:13:47.680
943
+ retrieval and the reason why it is
944
+
945
+ 00:13:45.040 --> 00:13:50.079
946
+ important to do this is severalfold but
947
+
948
+ 00:13:47.680 --> 00:13:53.800
949
+ the most intuitive way to think about it
950
+
951
+ 00:13:50.079 --> 00:13:57.600
952
+ is if we think about uh the things that
953
+
954
+ 00:13:53.800 --> 00:13:59.440
955
+ tfidf does tfidf is giving a very high
956
+
957
+ 00:13:57.600 --> 00:14:03.000
958
+ weight to
959
+
960
+ 00:13:59.440 --> 00:14:04.959
961
+ contentful words and rare words and
962
+
963
+ 00:14:03.000 --> 00:14:06.639
964
+ we're not guaranteed that any random
965
+
966
+ 00:14:04.959 --> 00:14:10.600
967
+ embedding that we get is going to do
968
+
969
+ 00:14:06.639 --> 00:14:13.800
970
+ that so for example if we just take the
971
+
972
+ 00:14:10.600 --> 00:14:16.160
973
+ average word embeddings of every word in
974
+
975
+ 00:14:13.800 --> 00:14:20.160
976
+ a sequence it's going to give the same
977
+
978
+ 00:14:16.160 --> 00:14:22.320
979
+ weight to all of the words um in the
980
+
981
+ 00:14:20.160 --> 00:14:24.680
982
+ output and in fact common words tend to
983
+
984
+ 00:14:22.320 --> 00:14:27.959
985
+ have slightly higher Norms than
986
+
987
+ 00:14:24.680 --> 00:14:29.639
988
+ infrequent words and so that would
989
+
990
+ 00:14:27.959 --> 00:14:31.880
991
+ actually upweight common words which is
992
+
993
+ 00:14:29.639 --> 00:14:34.639
994
+ kind of exactly the opposite thing we
995
+
996
+ 00:14:31.880 --> 00:14:36.480
997
+ want so how do we learn retrieval
998
+
999
+ 00:14:34.639 --> 00:14:39.160
1000
+ oriented
1001
+
1002
+ 00:14:36.480 --> 00:14:40.920
1003
+ embeddings the normal way we do this is
1004
+
1005
+ 00:14:39.160 --> 00:14:43.399
1006
+ we select positive and negative
1007
+
1008
+ 00:14:40.920 --> 00:14:46.839
1009
+ documents and then train using a
1010
+
1011
+ 00:14:43.399 --> 00:14:50.240
1012
+ contrastive loss and so an example of
1013
+
1014
+ 00:14:46.839 --> 00:14:52.519
1015
+ this is we have a query and then we have
1016
+
1017
+ 00:14:50.240 --> 00:14:55.519
1018
+ negative documents for that query and we
1019
+
1020
+ 00:14:52.519 --> 00:14:58.199
1021
+ have positive documents for that query
1022
+
1023
+ 00:14:55.519 --> 00:15:00.079
1024
+ and uh we formulate a hinge loss or
1025
+
1026
+ 00:14:58.199 --> 00:15:04.000
1027
+ maybe some sort of probabilistic loss
1028
+
1029
+ 00:15:00.079 --> 00:15:06.560
1030
+ similar to the hinge loss and uh do fine
1031
+
1032
+ 00:15:04.000 --> 00:15:06.560
1033
+ tuning of the
1034
+
1035
+ 00:15:07.160 --> 00:15:13.440
1036
+ embeddings so if
1037
+
1038
+ 00:15:09.399 --> 00:15:16.320
1039
+ you have gold standard positive
1040
+
1041
+ 00:15:13.440 --> 00:15:18.800
1042
+ documents then this is relatively easy
1043
+
1044
+ 00:15:16.320 --> 00:15:21.040
1045
+ to train uh because you just need the
1046
+
1047
+ 00:15:18.800 --> 00:15:23.800
1048
+ positive documents and then you can get
1049
+
1050
+ 00:15:21.040 --> 00:15:25.959
1051
+ Negative documents in a number of ways
1052
+
1053
+ 00:15:23.800 --> 00:15:29.279
1054
+ one common way of getting negative
1055
+
1056
+ 00:15:25.959 --> 00:15:32.279
1057
+ documents is you just form a batch of
1058
+
1059
+ 00:15:29.279 --> 00:15:34.560
1060
+ data and given that batch of data you
1061
+
1062
+ 00:15:32.279 --> 00:15:37.480
1063
+ take all of the other documents in the
1064
+
1065
+ 00:15:34.560 --> 00:15:39.480
1066
+ batch um all of the documents in the
1067
+
1068
+ 00:15:37.480 --> 00:15:42.839
1069
+ batch that are positive for some other
1070
+
1071
+ 00:15:39.480 --> 00:15:46.399
1072
+ query and you use those as negative
1073
+
1074
+ 00:15:42.839 --> 00:15:49.000
1075
+ documents so you sample 32 query
1076
+
1077
+ 00:15:46.399 --> 00:15:50.759
1078
+ document pairs you use the aligned ones
1079
+
1080
+ 00:15:49.000 --> 00:15:53.759
1081
+ as positive documents and then use the
1082
+
1083
+ 00:15:50.759 --> 00:15:57.440
1084
+ 31 other ones as negative documents and
1085
+
1086
+ 00:15:53.759 --> 00:16:00.279
1087
+ this is both effective and efficient
1088
+
1089
+ 00:15:57.440 --> 00:16:02.000
1090
+ because you can kind of learn from the
1091
+
1092
+ 00:16:00.279 --> 00:16:05.079
1093
+ query document pairs all at the same
1094
+
1095
+ 00:16:02.000 --> 00:16:05.079
1096
+ time in an efficient
1097
+
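The in-batch negative scheme just described can be sketched with NumPy: score every query against every document in the batch, and treat the diagonal of the score matrix as the positives in a softmax cross-entropy (a probabilistic cousin of the hinge loss mentioned above). The embeddings here are random stand-ins, and the function name is my own.

```python
import numpy as np

def in_batch_contrastive_loss(q, d):
    """q, d: (batch, dim) arrays where q[i]'s positive document is d[i];
    the other batch members act as in-batch negatives."""
    scores = q @ d.T                                     # (batch, batch) similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                    # cross-entropy on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(32, 64))
loss_aligned = in_batch_contrastive_loss(q, q)                         # matched pairs
loss_random = in_batch_contrastive_loss(q, rng.normal(size=(32, 64)))  # mismatched
```

One batch of 32 pairs yields 32 positives and 31 negatives per query from a single score matrix, which is why this is both effective and efficient.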
1098
+ 00:16:05.680 --> 00:16:13.680
1099
+ implementation however this is not
1100
+
1101
+ 00:16:09.160 --> 00:16:16.279
1102
+ enough in many cases because that will
1103
+
1104
+ 00:16:13.680 --> 00:16:19.040
1105
+ end up having lots of very kind of
1106
+
1107
+ 00:16:16.279 --> 00:16:20.440
1108
+ obviously wrong documents because you
1109
+
1110
+ 00:16:19.040 --> 00:16:23.120
1111
+ know
1112
+
1113
+ 00:16:20.440 --> 00:16:25.360
1114
+ they're documents that are relevant for
1115
+
1116
+ 00:16:23.120 --> 00:16:27.880
1117
+ a completely different query and it's
1118
+
1119
+ 00:16:25.360 --> 00:16:29.880
1120
+ kind of easy to distinguish uh between
1121
+
1122
+ 00:16:27.880 --> 00:16:32.319
1123
+ those you can just look at superficial word
1124
+
1125
+ 00:16:29.880 --> 00:16:34.519
1126
+ overlap so another common thing to do
1127
+
1128
+ 00:16:32.319 --> 00:16:35.759
1129
+ when you're training these models is to
1130
+
1131
+ 00:16:34.519 --> 00:16:38.160
1132
+ get hard
1133
+
1134
+ 00:16:35.759 --> 00:16:40.680
1135
+ negatives so hard negatives are
1136
+
1137
+ 00:16:38.160 --> 00:16:44.360
1138
+ basically negative examples that look
1139
+
1140
+ 00:16:40.680 --> 00:16:49.399
1141
+ plausible but are actually wrong and
1142
+
1143
+ 00:16:44.360 --> 00:16:53.199
1144
+ so here uh this famous method called DPR
1145
+
1146
+ 00:16:49.399 --> 00:16:55.880
1147
+ it basically learns the uh encoders
1148
+
1149
+ 00:16:53.199 --> 00:16:57.759
1150
+ based on both in-batch negatives like I
1151
+
1152
+ 00:16:55.880 --> 00:17:00.160
1153
+ mentioned before and hard negatives that
1154
+
1155
+ 00:16:57.759 --> 00:17:01.360
1156
+ were created by looking up documents
1157
+
1158
+ 00:17:00.160 --> 00:17:03.839
1159
+ with
1160
+
1161
+ 00:17:01.360 --> 00:17:06.039
1162
+ bm25 and so the ones that were looked up
1163
+
1164
+ 00:17:03.839 --> 00:17:07.640
1165
+ by bm25 you know kind of look very
1166
+
1167
+ 00:17:06.039 --> 00:17:10.039
1168
+ similar superficially but they might
1169
+
1170
+ 00:17:07.640 --> 00:17:12.400
1171
+ have you know subtle errors in them for
1172
+
1173
+ 00:17:10.039 --> 00:17:12.400
1174
+ why they're
1175
+
1176
+ 00:17:12.799 --> 00:17:17.160
1177
+ inappropriate there's also methods to
1178
+
1179
+ 00:17:15.679 --> 00:17:20.000
1180
+ learn these
1181
+
1182
+ 00:17:17.160 --> 00:17:23.199
1183
+ retrievers based on kind of not
1184
+
1185
+ 00:17:20.000 --> 00:17:26.199
1186
+ supervised data so one major bottleneck
1187
+
1188
+ 00:17:23.199 --> 00:17:29.000
1189
+ if you're taking the positive documents
1190
+
1191
+ 00:17:26.199 --> 00:17:30.440
1192
+ from Human annotations of whether
1193
+
1194
+ 00:17:29.000 --> 00:17:33.440
1195
+ something is correct or not or human
1196
+
1197
+ 00:17:30.440 --> 00:17:37.880
1198
+ clickthrough logs or other things like
1199
+
1200
+ 00:17:33.440 --> 00:17:40.640
1201
+ this is that you need that data in order
1202
+
1203
+ 00:17:37.880 --> 00:17:44.440
1204
+ to start training a model so uh
1205
+
1206
+ 00:17:40.640 --> 00:17:47.880
1207
+ Contriever is another method that uses
1208
+
1209
+ 00:17:44.440 --> 00:17:51.520
1210
+ two random spans within a document as a
1211
+
1212
+ 00:17:47.880 --> 00:17:54.440
1213
+ positive pair and random spans from
1214
+
1215
+ 00:17:51.520 --> 00:17:56.559
1216
+ across documents as negative pairs and
1217
+
1218
+ 00:17:54.440 --> 00:17:58.960
1219
+ so this can be used for you know very
1220
+
1221
+ 00:17:56.559 --> 00:18:00.039
1222
+ very large scale initial pre-training of
1223
+
1224
+ 00:17:58.960 --> 00:18:02.280
1225
+ the
1226
+
1227
+ 00:18:00.039 --> 00:18:04.520
1228
+ models and then after you've done that
1229
+
1230
+ 00:18:02.280 --> 00:18:06.840
1231
+ large scale initial pre-training you can
1232
+
1233
+ 00:18:04.520 --> 00:18:10.799
1234
+ then go in and fine-tune it on you know
1235
+
1236
+ 00:18:06.840 --> 00:18:10.799
1237
+ actually annotated data to improve it
1238
+
1239
+ 00:18:12.120 --> 00:18:18.799
1240
+ further Okay so we've talked about
1241
+
1242
+ 00:18:15.159 --> 00:18:21.559
1243
+ training uh these dense retrieval uh
1244
+
1245
+ 00:18:18.799 --> 00:18:24.559
1246
+ models these uh models that look at
1247
+
1248
+ 00:18:21.559 --> 00:18:27.720
1249
+ dense embedding overlap for nearest
1250
+
1251
+ 00:18:24.559 --> 00:18:28.919
1252
+ neighbors but the problem is in order to
1253
+
1254
+ 00:18:27.720 --> 00:18:30.919
1255
+ calculate this you would need to
1256
+
1257
+ 00:18:28.919 --> 00:18:35.159
1258
+ calculate it over a very very large
1259
+
1260
+ 00:18:30.919 --> 00:18:37.960
1261
+ document base and just taking a product
1262
+
1263
+ 00:18:35.159 --> 00:18:40.480
1264
+ between the query and all of the other
1265
+
1266
+ 00:18:37.960 --> 00:18:42.400
1267
+ documents in the document base is
1268
+
1269
+ 00:18:40.480 --> 00:18:46.080
1270
+ extremely
1271
+
1272
+ 00:18:42.400 --> 00:18:48.080
1273
+ costly and so in order to fix this there
1274
+
1275
+ 00:18:46.080 --> 00:18:49.080
1276
+ are methods for approximate nearest
1277
+
1278
+ 00:18:48.080 --> 00:18:52.280
1279
+ neighbor
1280
+
1281
+ 00:18:49.080 --> 00:18:54.200
1282
+ search and these are methods that allow
1283
+
1284
+ 00:18:52.280 --> 00:18:57.360
1285
+ you to retrieve embeddings that have the
1286
+
1287
+ 00:18:54.200 --> 00:19:00.280
1288
+ maximum inner product between them in
1289
+
1290
+ 00:18:57.360 --> 00:19:02.520
1291
+ sublinear time and because you're doing
1292
+
1293
+ 00:19:00.280 --> 00:19:03.960
1294
+ the maximum inner product this is also
1295
+
1296
+ 00:19:02.520 --> 00:19:06.600
1297
+ often called maximum inner product
1298
+
1299
+ 00:19:03.960 --> 00:19:06.600
1300
+ search or
1301
+
1302
+ 00:19:06.679 --> 00:19:12.360
1303
+ MIPS so I'm going to introduce on a
1304
+
1305
+ 00:19:09.440 --> 00:19:15.360
1306
+ very high level two common methods to do
1307
+
1308
+ 00:19:12.360 --> 00:19:19.320
1309
+ this the first one is locality sensitive
1310
+
1311
+ 00:19:15.360 --> 00:19:22.440
1312
+ hashing um or this can also be called
1313
+
1314
+ 00:19:19.320 --> 00:19:24.799
1315
+ kind of inverted index as well and what
1316
+
1317
+ 00:19:22.440 --> 00:19:26.840
1318
+ you do is you make partitions in
1319
+
1320
+ 00:19:24.799 --> 00:19:29.320
1321
+ continuous space and then you use it
1322
+
1323
+ 00:19:26.840 --> 00:19:31.240
1324
+ like an inverted index
1325
+
1326
+ 00:19:29.320 --> 00:19:33.679
1327
+ so let's say we have a whole bunch of
1328
+
1329
+ 00:19:31.240 --> 00:19:34.919
1330
+ embeddings uh I demonstrated two
1331
+
1332
+ 00:19:33.679 --> 00:19:36.640
1333
+ dimensional embeddings here but in
1334
+
1335
+ 00:19:34.919 --> 00:19:38.440
1336
+ reality this would be you know as large
1337
+
1338
+ 00:19:36.640 --> 00:19:41.159
1339
+ as your word
1340
+
1341
+ 00:19:38.440 --> 00:19:42.880
1342
+ embedding your query and document
1343
+
1344
+ 00:19:41.159 --> 00:19:47.120
1345
+ embedding space so this would be you
1346
+
1347
+ 00:19:42.880 --> 00:19:49.760
1348
+ know 512 or 1024 or something like that
1349
+
1350
+ 00:19:47.120 --> 00:19:53.480
1351
+ and what you do is you define a whole
1352
+
1353
+ 00:19:49.760 --> 00:19:56.720
1354
+ bunch of planes that separate these
1355
+
1356
+ 00:19:53.480 --> 00:19:59.320
1357
+ points into two spaces so if this is our
1358
+
1359
+ 00:19:56.720 --> 00:20:02.520
1360
+ first plane all the points above the
1361
+
1362
+ 00:19:59.320 --> 00:20:04.280
1363
+ plane will get a one for this partition
1364
+
1365
+ 00:20:02.520 --> 00:20:06.799
1366
+ and all the points below the plane will
1367
+
1368
+ 00:20:04.280 --> 00:20:08.840
1369
+ get a zero for this partition and we do
1370
+
1371
+ 00:20:06.799 --> 00:20:12.400
1372
+ it similarly we create a whole bunch
1373
+
1374
+ 00:20:08.840 --> 00:20:15.840
1375
+ of them and then based on this we can
1376
+
1377
+ 00:20:12.400 --> 00:20:18.440
1378
+ now assign sparse vectors depending on
1379
+
1380
+ 00:20:15.840 --> 00:20:21.520
1381
+ each of these planes so we have uh for
1382
+
1383
+ 00:20:18.440 --> 00:20:24.000
1384
+ example the top one uh 1 0 0 because
1385
+
1386
+ 00:20:21.520 --> 00:20:26.400
1387
+ it's on the right side of the blue plane
1388
+
1389
+ 00:20:24.000 --> 00:20:28.760
1390
+ and the um wrong side of the red and the
1391
+
1392
+ 00:20:26.400 --> 00:20:30.679
1393
+ green planes and then for the top right
1394
+
1395
+ 00:20:28.760 --> 00:20:32.799
1396
+ we have 1 0 1 because it's on the right
1397
+
1398
+ 00:20:30.679 --> 00:20:37.159
1399
+ side of the blue and the green planes and
1400
+
1401
+ 00:20:32.799 --> 00:20:39.440
1402
+ the wrong side of the red plane and So
1403
+
1404
+ 00:20:37.159 --> 00:20:41.000
1405
+ based on this now we have a sparse
1406
+
1407
+ 00:20:39.440 --> 00:20:42.600
1408
+ vector and we already know what to do
1409
+
1410
+ 00:20:41.000 --> 00:20:44.640
1411
+ with a sparse Vector right we look it up
1412
+
1413
+ 00:20:42.600 --> 00:20:49.039
1414
+ in an inverted index just like we did
1415
+
1416
+ 00:20:44.640 --> 00:20:51.520
1417
+ for a sparse um you know sparse lookup
1418
+
1419
+ 00:20:49.039 --> 00:20:54.520
1420
+ table so that's one
1421
+
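The hyperplane-partitioning idea can be sketched as follows. The plane count and dimensionality are arbitrary choices for illustration, and this single-table version omits the multiple hash tables real LSH systems use to make near neighbors collide reliably.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_planes = 64, 16
embeddings = rng.normal(size=(10000, dim))

# Each random hyperplane through the origin splits the space in two;
# an embedding's bucket key is one bit per plane (which side it falls on).
planes = rng.normal(size=(n_planes, dim))

def lsh_key(vec):
    return tuple((planes @ vec > 0).astype(int))

buckets = {}
for i, vec in enumerate(embeddings):
    buckets.setdefault(lsh_key(vec), []).append(i)

# At query time we only score the (tiny) bucket the query hashes into,
# not all 10,000 vectors.
candidates = buckets[lsh_key(embeddings[7])]
```

With 16 planes there are 2^16 possible keys, so each bucket holds only a handful of candidates, which is exactly the sparse inverted-index lookup described above applied to continuous vectors.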
1422
+ 00:20:51.520 --> 00:20:57.799
1423
+ method another method uses a graph-based
1424
+
1425
+ 00:20:54.520 --> 00:21:01.320
1426
+ search and the basic idea behind this is
1427
+
1428
+ 00:20:57.799 --> 00:21:02.480
1429
+ that we create hubs uh and these hubs
1430
+
1431
+ 00:21:01.320 --> 00:21:05.200
1432
+ are kind
1433
+
1434
+ 00:21:02.480 --> 00:21:07.960
1435
+ of a small number of points that are
1436
+
1437
+ 00:21:05.200 --> 00:21:09.440
1438
+ close to other points in the space and
1439
+
1440
+ 00:21:07.960 --> 00:21:10.880
1441
+ so we create some hubs and then we
1442
+
1443
+ 00:21:09.440 --> 00:21:12.200
1444
+ search from there so if we have a
1445
+
1446
+ 00:21:10.880 --> 00:21:16.880
1447
+ similar
1448
+
1449
+ 00:21:12.200 --> 00:21:19.159
1450
+ looking uh set of points in the space we
1451
+
1452
+ 00:21:16.880 --> 00:21:21.520
1453
+ find these hubs which are something like
1454
+
1455
+ 00:21:19.159 --> 00:21:24.880
1456
+ cluster centroids and then based on the
1457
+
1458
+ 00:21:21.520 --> 00:21:28.559
1459
+ cluster centroids we then whittle down or
1460
+
1461
+ 00:21:24.880 --> 00:21:31.200
1462
+ we greatly reduce the number of
1463
+
1464
+ 00:21:28.559 --> 00:21:33.400
1465
+ points that we need to be looking at and
1466
+
1467
+ 00:21:31.200 --> 00:21:36.960
1468
+ then we search through only those points
1469
+
1470
+ 00:21:33.400 --> 00:21:38.600
1471
+ in a more kind of extensive Manner and
1472
+
1473
+ 00:21:36.960 --> 00:21:41.840
1474
+ you can even turn this into a tree where
1475
+
1476
+ 00:21:38.600 --> 00:21:43.760
1477
+ you have hubs and then you have uh kind
1478
+
1479
+ 00:21:41.840 --> 00:21:46.600
1480
+ of mini hubs and then you have all the
1481
+
1482
+ 00:21:43.760 --> 00:21:50.200
1483
+ points so this allows you to do a kind
1484
+
1485
+ 00:21:46.600 --> 00:21:50.200
1486
+ of tree based or graph based
1487
+
1488
+ 00:21:50.600 --> 00:21:55.840
1489
+ search so obviously unless you're really
1490
+
1491
+ 00:21:54.159 --> 00:21:57.039
1492
+ excited about these algorithms this is
1493
+
1494
+ 00:21:55.840 --> 00:22:00.080
1495
+ something that you probably don't want
1496
+
1497
+ 00:21:57.039 --> 00:22:01.440
1498
+ to be implementing yourself um and the
1499
+
1500
+ 00:22:00.080 --> 00:22:03.000
1501
+ good news is there's lots of very good
1502
+
1503
+ 00:22:01.440 --> 00:22:04.480
1504
+ libraries that help you do this in fact
1505
+
1506
+ 00:22:03.000 --> 00:22:08.799
1507
+ there are so many libraries it's hard to
1508
+
1509
+ 00:22:04.480 --> 00:22:11.960
1510
+ manage them but some libraries that
1511
+
1512
+ 00:22:08.799 --> 00:22:13.799
1513
+ people very commonly use I I think
1514
+
1515
+ 00:22:11.960 --> 00:22:17.320
1516
+ uh FAISS
1517
+
1518
+ 00:22:13.799 --> 00:22:20.200
1519
+ is a widely used one created by uh
1520
+
1521
+ 00:22:17.320 --> 00:22:23.760
1522
+ FAIR at Meta and Chroma DB is a
1523
+
1524
+ 00:22:20.200 --> 00:22:27.720
1525
+ separate one uh that is kind of an AI
1526
+
1527
+ 00:22:23.760 --> 00:22:30.720
1528
+ native uh embedding search database so
1529
+
1530
+ 00:22:27.720 --> 00:22:30.720
1531
+ both those are good
1532
+
1533
+ 00:22:32.960 --> 00:22:41.120
1534
+ options even with intelligent training
1535
+
1536
+ 00:22:37.880 --> 00:22:42.640
1537
+ of dense embeddings however there still
1538
+
1539
+ 00:22:41.120 --> 00:22:45.600
1540
+ are
1541
+
1542
+ 00:22:42.640 --> 00:22:48.240
1543
+ problems and the biggest
1544
+
1545
+ 00:22:45.600 --> 00:22:51.720
1546
+ problem that you face when you're
1547
+
1548
+ 00:22:48.240 --> 00:22:54.000
1549
+ looking at something like uh cross
1550
+
1551
+ 00:22:51.720 --> 00:22:56.880
1552
+ encoders um that sorry when you're
1553
+
1554
+ 00:22:54.000 --> 00:23:00.240
1555
+ looking at dense embeddings is that in
1556
+
1557
+ 00:22:56.880 --> 00:23:02.159
1558
+ order to form a good dense embedding you
1559
+
1560
+ 00:23:00.240 --> 00:23:03.840
1561
+ need to kind of know in advance what
1562
+
1563
+ 00:23:02.159 --> 00:23:05.799
1564
+ you're looking for right because you're
1565
+
1566
+ 00:23:03.840 --> 00:23:09.120
1567
+ taking a long document you're condensing
1568
+
1569
+ 00:23:05.799 --> 00:23:10.679
1570
+ it down into a single embedding and or a
1571
+
1572
+ 00:23:09.120 --> 00:23:13.320
1573
+ long passage and you're condensing it
1574
+
1575
+ 00:23:10.679 --> 00:23:16.200
1576
+ down to a single embedding and so if
1577
+
1578
+ 00:23:13.320 --> 00:23:19.520
1579
+ during that condensation process
1580
+
1581
+ 00:23:16.200 --> 00:23:21.240
1582
+ actually there's other information that
1583
+
1584
+ 00:23:19.520 --> 00:23:23.159
1585
+ is relevant to a query but you have to
1586
+
1587
+ 00:23:21.240 --> 00:23:27.600
1588
+ throw out because of the limited
1589
+
1590
+ 00:23:23.159 --> 00:23:30.600
1591
+ embedding capacity this causes you to
1592
+
1593
+ 00:23:27.600 --> 00:23:32.320
1594
+ you know essentially fail at um doing
1595
+
1596
+ 00:23:30.600 --> 00:23:34.840
1597
+ retrieval
1598
+
1599
+ 00:23:32.320 --> 00:23:38.159
1600
+ appropriately so there's a couple
1601
+
1602
+ 00:23:34.840 --> 00:23:40.880
1603
+ methods that can be used to fix this so
1604
+
1605
+ 00:23:38.159 --> 00:23:42.279
1606
+ the first method is in contrast to the
1607
+
1608
+ 00:23:40.880 --> 00:23:44.159
1609
+ bi-encoder which is what I've been
1610
+
1611
+ 00:23:42.279 --> 00:23:47.000
1612
+ talking out about at this point where
1613
+
1614
+ 00:23:44.159 --> 00:23:48.520
1615
+ you kind of do full encoding of queries
1616
+
1617
+ 00:23:47.000 --> 00:23:52.120
1618
+ full encoding of documents and then do
1619
+
1620
+ 00:23:48.520 --> 00:23:53.840
1621
+ inner product search for a score uh you
1622
+
1623
+ 00:23:52.120 --> 00:23:56.760
1624
+ can use a cross encoder and the way the
1625
+
1626
+ 00:23:53.840 --> 00:23:58.559
1627
+ cross-encoder works is you append the
1628
+
1629
+ 00:23:56.760 --> 00:24:00.799
1630
+ query and document and then you run them
1631
+
1632
+ 00:23:58.559 --> 00:24:03.400
1633
+ through a model like a Transformer model
1634
+
1635
+ 00:24:00.799 --> 00:24:07.840
1636
+ and you calculate the output
1637
+
1638
+ 00:24:03.400 --> 00:24:09.880
1639
+ score so the problem with this um so
1640
+
1641
+ 00:24:07.840 --> 00:24:12.480
1642
+ this this is great uh because it gives
1643
+
1644
+ 00:24:09.880 --> 00:24:15.799
1645
+ you maximum flexibility um Transformer
1646
+
1647
+ 00:24:12.480 --> 00:24:18.799
1648
+ models are powerful you can uh assess
1649
+
1650
+ 00:24:15.799 --> 00:24:20.520
1651
+ relevance very well the problem with
1652
+
1653
+ 00:24:18.799 --> 00:24:22.200
1654
+ this is this precludes approximate
1655
+
1656
+ 00:24:20.520 --> 00:24:23.720
1657
+ nearest neighbor lookup because now
1658
+
1659
+ 00:24:22.200 --> 00:24:25.799
1660
+ you're running through you know many
1661
+
1662
+ 00:24:23.720 --> 00:24:28.880
1663
+ many nonlinearities
1664
+
1665
+ 00:24:25.799 --> 00:24:32.760
1666
+ here so this is can only be used for
1667
+
1668
+ 00:24:28.880 --> 00:24:34.360
1669
+ reranking documents um or even if
1670
+
1671
+ 00:24:32.760 --> 00:24:36.880
1672
+ you're doing retrieval doing retrieval
1673
+
1674
+ 00:24:34.360 --> 00:24:39.679
1675
+ over a very very small number of
1676
+
1677
+ 00:24:36.880 --> 00:24:41.960
1678
+ documents but if you really want maximal
1679
+
1680
+ 00:24:39.679 --> 00:24:44.080
1681
+ accuracy I definitely would recommend uh
1682
+
1683
+ 00:24:41.960 --> 00:24:45.720
1684
+ doing something like this because it can
1685
+
1686
+ 00:24:44.080 --> 00:24:47.960
1687
+ allow you to do kind of a second pass
1688
+
1689
+ 00:24:45.720 --> 00:24:49.360
1690
+ filtering over the most relevant looking
1691
+
1692
+ 00:24:47.960 --> 00:24:52.399
1693
+ documents to identify the ones you
1694
+
1695
+ 00:24:49.360 --> 00:24:52.399
1696
+ really want to add to your
1697
+
1698
+ 00:24:54.240 --> 00:24:58.240
1699
+ context so then there are also
1700
+
1701
+ 00:24:56.760 --> 00:25:01.360
1702
+ approaches that are kind kind of in the
1703
+
1704
+ 00:24:58.240 --> 00:25:02.159
1705
+ middle of these two uh the most famous
1706
+
1707
+ 00:25:01.360 --> 00:25:05.880
1708
+ one is
1709
+
1710
+ 00:25:02.159 --> 00:25:08.320
1711
+ ColBERT and uh I call this token level
1712
+
1713
+ 00:25:05.880 --> 00:25:10.840
1714
+ dense retrieval it's also called uh late
1715
+
1716
+ 00:25:08.320 --> 00:25:12.720
1717
+ interaction in the ColBERT paper but
1718
+
1719
+ 00:25:10.840 --> 00:25:14.919
1720
+ the way it works is you use
1721
+
1722
+ 00:25:12.720 --> 00:25:18.440
1723
+ contextualized representations of all
1724
+
1725
+ 00:25:14.919 --> 00:25:19.440
1726
+ query and document tokens to compute a
1727
+
1728
+ 00:25:18.440 --> 00:25:23.559
1729
+ retrieval
1730
+
1731
+ 00:25:19.440 --> 00:25:26.919
1732
+ score and so you do offline indexing of
1733
+
1734
+ 00:25:23.559 --> 00:25:29.159
1735
+ every token in the document and then
1736
+
1737
+ 00:25:26.919 --> 00:25:31.399
1738
+ based on this offline indexing of
1739
+
1740
+ 00:25:29.159 --> 00:25:35.320
1741
+ every token in the document you then
1742
+
1743
+ 00:25:31.399 --> 00:25:38.760
1744
+ have a query encoder and you do matching
1745
+
1746
+ 00:25:35.320 --> 00:25:41.799
1747
+ between each token in the query and the
1748
+
1749
+ 00:25:38.760 --> 00:25:43.399
1750
+ highest scoring tokens in each
1751
+
1752
+ 00:25:41.799 --> 00:25:46.320
1753
+ document
1754
+
1755
+ 00:25:43.399 --> 00:25:48.399
1756
+ and the reason why this is good is it
1757
+
1758
+ 00:25:46.320 --> 00:25:49.600
1759
+ still allows you to encode all of the
1760
+
1761
+ 00:25:48.399 --> 00:25:52.120
1762
+ tokens in the
1763
+
1764
+ 00:25:49.600 --> 00:25:55.440
1765
+ document and but each of these
1766
+
1767
+ 00:25:52.120 --> 00:25:59.679
1768
+ similarity searches is still just
1769
+
1770
+ 00:25:55.440 --> 00:26:03.559
1771
+ a kind of maximum inner product search and
1772
+
1773
+ 00:25:59.679 --> 00:26:06.279
1774
+ because of this this allows you to do
1775
+
1776
+ 00:26:03.559 --> 00:26:07.960
1777
+ each of these searches efficiently and
1778
+
1779
+ 00:26:06.279 --> 00:26:09.840
1780
+ doesn't preclude you from running it
1781
+
1782
+ 00:26:07.960 --> 00:26:12.919
1783
+ over an entire
1784
+
1785
+ 00:26:09.840 --> 00:26:16.399
1786
+ database the downside to this method uh
1787
+
1788
+ 00:26:12.919 --> 00:26:19.120
1789
+ may already be obvious but in the
1790
+
1791
+ 00:26:16.399 --> 00:26:22.200
1792
+ traditional bi-encoder we have a single
1793
+
1794
+ 00:26:19.120 --> 00:26:26.880
1795
+ Vector for each document but here we
1796
+
1797
+ 00:26:22.200 --> 00:26:29.320
1798
+ have one vector for um each token in the
1799
+
1800
+ 00:26:26.880 --> 00:26:31.880
1801
+ document so basically your vector
1802
+
1803
+ 00:26:29.320 --> 00:26:34.399
1804
+ database gets n times larger where n is
1805
+
1806
+ 00:26:31.880 --> 00:26:36.679
1807
+ the number of tokens in the document and
1808
+
1809
+ 00:26:34.399 --> 00:26:38.080
1810
+ there are certain methods to make this
1811
+
1812
+ 00:26:36.679 --> 00:26:41.559
1813
+ better like you can compress each
1814
+
1815
+ 00:26:38.080 --> 00:26:42.960
1816
+ document to a smaller number of n uh but
1817
+
1818
+ 00:26:41.559 --> 00:26:45.880
1819
+ still this is definitely going to be
1820
+
1821
+ 00:26:42.960 --> 00:26:48.399
1822
+ more costly than looking up each uh
1823
+
1824
+ 00:26:45.880 --> 00:26:50.360
1825
+ token so this is definitely something to
1826
+
1827
+ 00:26:48.399 --> 00:26:53.520
1828
+ consider if you want to get you know
1829
+
1830
+ 00:26:50.360 --> 00:26:55.159
1831
+ very good scores and ColBERT is a good
1832
+
1833
+ 00:26:53.520 --> 00:26:59.600
1834
+ implementation of that to start with if
1835
+
1836
+ 00:26:55.159 --> 00:26:59.600
1837
+ you're interested in trying it out
1838
+
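The MaxSim scoring behind ColBERT's late interaction can be sketched in a few lines of numpy. This is an illustrative sketch only, not the ColBERT implementation, and the random vectors stand in for real contextualized token embeddings:

```python
import numpy as np

def maxsim_score(query_toks, doc_toks):
    """ColBERT-style late interaction: for each query token, take its
    maximum inner product over all document tokens, then sum those
    per-query-token maxima into a single retrieval score."""
    sims = query_toks @ doc_toks.T      # (n_query, n_doc) token-token scores
    return sims.max(axis=1).sum()       # best document token per query token

rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))                      # 4 query tokens, dim 8
docs = [rng.normal(size=(20, 8)), rng.normal(size=(35, 8))]
scores = [maxsim_score(query, d) for d in docs]
best = int(np.argmax(scores))
```

Because each query token independently finds its best-matching document token, every lookup stays a plain maximum inner product search, which is what keeps this tractable over a whole database.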
1839
+ 00:27:00.480 --> 00:27:07.000
1840
+ so this is a final thing this is uh
1841
+
1842
+ 00:27:03.080 --> 00:27:08.679
1843
+ something that is a little bit uh
1844
+
1845
+ 00:27:07.000 --> 00:27:10.080
1846
+ different than all the other things I I
1847
+
1848
+ 00:27:08.679 --> 00:27:12.399
1849
+ talked about before but I've used it
1850
+
1851
+ 00:27:10.080 --> 00:27:15.840
1852
+ myself and it actually can be pretty
1853
+
1854
+ 00:27:12.399 --> 00:27:18.799
1855
+ effective um it was also made at CMU so
1856
+
1857
+ 00:27:15.840 --> 00:27:24.399
1858
+ by Luyu Gao so I would like to promote our
1859
+
1860
+ 00:27:18.799 --> 00:27:26.880
1861
+ CMU work of course but um the high-level idea
1862
+
1863
+ 00:27:24.399 --> 00:27:28.080
1864
+ behind a hypothetical document
1865
+
1866
+ 00:27:26.880 --> 00:27:30.320
1867
+ embedding
1868
+
1869
+ 00:27:28.080 --> 00:27:33.440
1870
+ is that it's actually somewhat difficult
1871
+
1872
+ 00:27:30.320 --> 00:27:36.200
1873
+ to match a query and a document right
1874
+
1875
+ 00:27:33.440 --> 00:27:38.919
1876
+ because a query is a very short possibly
1877
+
1878
+ 00:27:36.200 --> 00:27:42.240
1879
+ ungrammatical output that's asking a
1880
+
1881
+ 00:27:38.919 --> 00:27:44.799
1882
+ question and then a document is a very
1883
+
1884
+ 00:27:42.240 --> 00:27:49.440
1885
+ long output that's written in a
1886
+
1887
+ 00:27:44.799 --> 00:27:50.799
1888
+ different prose style and you know
1889
+
1890
+ 00:27:49.440 --> 00:27:53.159
1891
+ it might have lots of irrelevant
1892
+
1893
+ 00:27:50.799 --> 00:27:54.519
1894
+ information or boilerplate or fluff
1895
+
1896
+ 00:27:53.159 --> 00:27:57.640
1897
+ or something like
1898
+
1899
+ 00:27:54.519 --> 00:28:00.640
1900
+ that so the idea behind a hypothetical
1901
+
1902
+ 00:27:57.640 --> 00:28:03.120
1903
+ document embedding is that it's easier
1904
+
1905
+ 00:28:00.640 --> 00:28:05.279
1906
+ to match a document with a document than
1907
+
1908
+ 00:28:03.120 --> 00:28:08.159
1909
+ it is to match a query with a
1910
+
1911
+ 00:28:05.279 --> 00:28:10.159
1912
+ document but the input to our model is a
1913
+
1914
+ 00:28:08.159 --> 00:28:14.360
1915
+ query right so what do we
1916
+
1917
+ 00:28:10.159 --> 00:28:17.919
1918
+ do and so essentially what we do is we
1919
+
1920
+ 00:28:14.360 --> 00:28:20.399
1921
+ then take a large language model we feed
1922
+
1923
+ 00:28:17.919 --> 00:28:23.320
1924
+ it in a query in a prompt and say
1925
+
1926
+ 00:28:20.399 --> 00:28:25.399
1927
+ generate a document that looks like it
1928
+
1929
+ 00:28:23.320 --> 00:28:30.080
1930
+ should be the answer to this
1931
+
1932
+ 00:28:25.399 --> 00:28:32.120
1933
+ query and so then the llm goes in and
1934
+
1935
+ 00:28:30.080 --> 00:28:34.440
1936
+ it generates a document and hopefully
1937
+
1938
+ 00:28:32.120 --> 00:28:38.440
1939
+ this document looks more similar to the
1940
+
1941
+ 00:28:34.440 --> 00:28:41.440
1942
+ documents you want to retrieve than the
1943
+
1944
+ 00:28:38.440 --> 00:28:44.039
1945
+ um than the original query does and I've
1946
+
1947
+ 00:28:41.440 --> 00:28:47.240
1948
+ actually found this to be relatively
1949
+
1950
+ 00:28:44.039 --> 00:28:51.880
1951
+ effective at improving accuracy
1952
+
1953
+ 00:28:47.240 --> 00:28:53.200
1954
+ on kind of difficult uh tasks especially
1955
+
1956
+ 00:28:51.880 --> 00:28:55.840
1957
+ ones that are out of domain from the
1958
+
1959
+ 00:28:53.200 --> 00:28:58.000
1960
+ trained models that I'm
1961
+
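The HyDE pipeline just described can be sketched end to end with a stubbed-out LLM. Everything here is invented for illustration: the toy bag-of-words "embedder", the canned `fake_llm` output, and the example documents stand in for a real generator and dense encoder:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a dense encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fake_llm(query):
    """Stand-in for prompting an LLM: 'write a passage answering <query>'."""
    return "joe biden attended the university of delaware and syracuse law school"

docs = [
    "joe biden attended the university of delaware before syracuse law school",
    "the eiffel tower is a wrought iron lattice tower in paris",
]

query = "where did biden go to school"
hyde_query = fake_llm(query)      # hypothetical answer document, not the raw query
scores = [cosine(embed(hyde_query), embed(d)) for d in docs]
best_doc = docs[max(range(len(docs)), key=lambda i: scores[i])]
```

The key move is that retrieval matches the generated document against the corpus, so both sides of the similarity computation are in "document style" rather than "query style".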
1962
+ 00:28:55.840 --> 00:29:01.880
1963
+ using so I've gone through a whole bunch
1964
+
1965
+ 00:28:58.000 --> 00:29:04.039
1966
+ of methods and I would like to finish up
1967
+
1968
+ 00:29:01.880 --> 00:29:05.679
1969
+ this section by giving some insight
1970
+
1971
+ 00:29:04.039 --> 00:29:11.399
1972
+ about which one you should be
1973
+
1974
+ 00:29:05.679 --> 00:29:14.559
1975
+ using so my impression right now is
1976
+
1977
+ 00:29:11.399 --> 00:29:17.760
1978
+ that a good baseline to start out with is
1979
+
1980
+ 00:29:14.559 --> 00:29:20.679
1981
+ something like BM25 it's very easy to
1982
+
1983
+ 00:29:17.760 --> 00:29:23.080
1984
+ start out and compared to embedding
1985
+
1986
+ 00:29:20.679 --> 00:29:26.120
1987
+ based models it tends to be relatively
1988
+
1989
+ 00:29:23.080 --> 00:29:28.279
1990
+ robust to new domains so if you have a
1991
+
1992
+ 00:29:26.120 --> 00:29:30.559
1993
+ new domain you're more or less guaranteed
1994
+
1995
+ 00:29:28.279 --> 00:29:32.240
1996
+ that BM25 will give you some performance
1997
+
1998
+ 00:29:30.559 --> 00:29:35.320
1999
+ whereas embeddings may be really good
2000
+
2001
+ 00:29:32.240 --> 00:29:38.399
2002
+ but they may be really bad uh depending
2003
+
2004
+ 00:29:35.320 --> 00:29:40.880
2005
+ on how out of domain that is compared to
2006
+
2007
+ 00:29:38.399 --> 00:29:42.799
2008
+ your underlying embedding
2009
+
2010
+ 00:29:40.880 --> 00:29:44.760
2011
+ model
2012
+
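The BM25 baseline recommended above can be implemented in a few lines; this is a minimal sketch of the classic Okapi BM25 formula (in practice you would use a tuned implementation such as Lucene's), with a whitespace tokenizer and made-up example documents:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25:
    idf(w) * tf * (k1+1) / (tf + k1 * (1 - b + b * doclen/avgdl))."""
    toks = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in toks) / N
    df = Counter()                       # document frequency per term
    for t in toks:
        df.update(set(t))
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in query.lower().split():
            if w not in tf:
                continue
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (
                tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = ["cats chase mice", "dogs chase cats cats", "sunny weather today"]
scores = bm25_scores("cats mice", docs)
```

Because the score depends only on exact term matches, it degrades gracefully on new domains, which is the robustness property mentioned above.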
2013
+ 00:29:42.799 --> 00:29:48.039
2014
+ so however if you want to get the
2015
+
2016
+ 00:29:44.760 --> 00:29:51.080
2017
+ highest accuracy definitely tuned models
2018
+
2019
+ 00:29:48.039 --> 00:29:53.200
2020
+ are going to be better and if you're not
2021
+
2022
+ 00:29:51.080 --> 00:29:56.039
2023
+ worried about computational efficiency
2024
+
2025
+ 00:29:53.200 --> 00:29:58.480
2026
+ using something like ColBERT um with kind
2027
+
2028
+ 00:29:56.039 --> 00:30:01.320
2029
+ of the token level retrieval will
2030
+
2031
+ 00:29:58.480 --> 00:30:05.559
2032
+ definitely give you uh good accuracy
2033
+
2034
+ 00:30:01.320 --> 00:30:08.559
2035
+ here however there's better support for
2036
+
2037
+ 00:30:05.559 --> 00:30:12.159
2038
+ bi-encoder style models um in kind of
2039
+
2040
+ 00:30:08.559 --> 00:30:15.240
2041
+ standard vector databases like FAISS and
2042
+
2043
+ 00:30:12.159 --> 00:30:17.519
2044
+ uh chroma and other things like that so
2045
+
2046
+ 00:30:15.240 --> 00:30:19.799
2047
+ if you want a kind of easier method to
2048
+
2049
+ 00:30:17.519 --> 00:30:23.279
2050
+ get started very quickly then using a bi-
2051
+
2052
+ 00:30:19.799 --> 00:30:23.279
2053
+ encoder is probably the best way to
2054
+
2055
+ 00:30:25.080 --> 00:30:31.080
2056
+ go okay so now moving on to actual
2057
+
2058
+ 00:30:28.279 --> 00:30:33.159
2059
+ retrieval augmented generation models we
2060
+
2061
+ 00:30:31.080 --> 00:30:38.360
2062
+ have uh retriever reader
2063
+
2064
+ 00:30:33.159 --> 00:30:40.880
2065
+ models and the way these work is you
2066
+
2067
+ 00:30:38.360 --> 00:30:43.279
2068
+ basically the simplest way they can work
2069
+
2070
+ 00:30:40.880 --> 00:30:45.799
2071
+ is you basically just chain retrieval
2072
+
2073
+ 00:30:43.279 --> 00:30:47.640
2074
+ and reading together so you use an out-of-
2075
+
2076
+ 00:30:45.799 --> 00:30:52.519
2077
+ the-box retriever and an out-of-the-box
2078
+
2079
+ 00:30:47.640 --> 00:30:54.039
2080
+ reader model and you have your query uh
2081
+
2082
+ 00:30:52.519 --> 00:30:56.159
2083
+ you could for example look something up
2084
+
2085
+ 00:30:54.039 --> 00:30:58.039
2086
+ on Google get a whole bunch of passages
2087
+
2088
+ 00:30:56.159 --> 00:30:59.760
2089
+ and then feed them into a GPT model
2090
+
2091
+ 00:30:58.039 --> 00:31:03.919
2092
+ and get an
2093
+
2094
+ 00:30:59.760 --> 00:31:06.960
2095
+ answer this overall is quite effective
2096
+
2097
+ 00:31:03.919 --> 00:31:09.159
2098
+ um you it's easy to implement and it
2099
+
2100
+ 00:31:06.960 --> 00:31:10.600
2101
+ will give you decent results so
2102
+
2103
+ 00:31:09.159 --> 00:31:15.480
2104
+ definitely it's something to be worth
2105
+
2106
+ 00:31:10.600 --> 00:31:20.720
2107
+ thinking about uh for assignment two in
2108
+
2109
+ 00:31:15.480 --> 00:31:24.799
2110
+ the um in the class you're required to
2111
+
2112
+ 00:31:20.720 --> 00:31:26.679
2113
+ only use uh kind of public models or
2114
+
2115
+ 00:31:24.799 --> 00:31:29.760
2116
+ open source implementations so you could
2117
+
2118
+ 00:31:26.679 --> 00:31:34.360
2119
+ still replace that with Apache Lucene
2120
+
2121
+ 00:31:29.760 --> 00:31:36.360
2122
+ and then um you know any standard llm
2123
+
2124
+ 00:31:34.360 --> 00:31:39.159
2125
+ and that could be you know Llama, Llama
2126
+
2127
+ 00:31:36.360 --> 00:31:41.600
2128
+ Chat, or Mistral or Mixtral or something
2129
+
2130
+ 00:31:39.159 --> 00:31:45.360
2131
+ like that so uh you could definitely
2132
+
2133
+ 00:31:41.600 --> 00:31:48.120
2134
+ feel free to do something like
2135
+
2136
+ 00:31:45.360 --> 00:31:51.559
2137
+ that um of course the passages are
2138
+
2139
+ 00:31:48.120 --> 00:31:53.200
2140
+ concatenated to the context and so
2141
+
2142
+ 00:31:51.559 --> 00:31:54.799
2143
+ because the passages are concatenated to
2144
+
2145
+ 00:31:53.200 --> 00:31:56.679
2146
+ context the context can get relatively
2147
+
2148
+ 00:31:54.799 --> 00:31:58.399
2149
+ long and expensive and other things like
2150
+
2151
+ 00:31:56.679 --> 00:32:01.960
2152
+ that but it's just something you have to
2153
+
2154
+ 00:31:58.399 --> 00:32:01.960
2155
+ deal with when you're using
2156
+
2157
+ 00:32:02.600 --> 00:32:07.480
2158
+ RAG there are methods also for retriever
2159
+
2160
+ 00:32:05.799 --> 00:32:11.600
2161
+ and generator end-to-end
2162
+
2163
+ 00:32:07.480 --> 00:32:14.720
2164
+ training so this is the paper actually
2165
+
2166
+ 00:32:11.600 --> 00:32:17.600
2167
+ where the name RAG came from and I'll
2168
+
2169
+ 00:32:14.720 --> 00:32:20.200
2170
+ use that as an example here uh but
2171
+
2172
+ 00:32:17.600 --> 00:32:21.600
2173
+ basically um there are several methods
2174
+
2175
+ 00:32:20.200 --> 00:32:23.399
2176
+ that propose to train the retriever and
2177
+
2178
+ 00:32:21.600 --> 00:32:27.440
2179
+ reader to improve
2180
+
2181
+ 00:32:23.399 --> 00:32:31.240
2182
+ accuracy and specifically the RAG paper by
2183
+
2184
+ 00:32:27.440 --> 00:32:33.200
2185
+ Lewis et al the way it trained the um
2186
+
2187
+ 00:32:31.240 --> 00:32:35.639
2188
+ reader was to maximize generation
2189
+
2190
+ 00:32:33.200 --> 00:32:38.600
2191
+ likelihood given a single retrieved
2192
+
2193
+ 00:32:35.639 --> 00:32:40.279
2194
+ document and for the retriever it
2195
+
2196
+ 00:32:38.600 --> 00:32:41.880
2197
+ maximized overall likelihood by
2198
+
2199
+ 00:32:40.279 --> 00:32:44.480
2200
+ optimizing the mixture weight over
2201
+
2202
+ 00:32:41.880 --> 00:32:46.559
2203
+ documents so here's kind of a
2204
+
2205
+ 00:32:44.480 --> 00:32:50.480
2206
+ schematic uh which is you have your
2207
+
2208
+ 00:32:46.559 --> 00:32:54.039
2209
+ query encoder um you run the Retriever
2210
+
2211
+ 00:32:50.480 --> 00:32:57.760
2212
+ with uh maximum inner product search it
2213
+
2214
+ 00:32:54.039 --> 00:33:00.919
2215
+ gives you several documents and each
2216
+
2217
+ 00:32:57.760 --> 00:33:05.880
2218
+ document has a score and then based on
2219
+
2220
+ 00:33:00.919 --> 00:33:09.399
2221
+ the documents and the scores you
2222
+
2223
+ 00:33:05.880 --> 00:33:11.200
2224
+ generate uh with each document in the
2225
+
2226
+ 00:33:09.399 --> 00:33:15.360
2227
+ context and
2228
+
2229
+ 00:33:11.200 --> 00:33:17.080
2230
+ then sum together the probabilities
2231
+
2232
+ 00:33:15.360 --> 00:33:18.639
2233
+ multiplied by the weights and I have the
2234
+
2235
+ 00:33:17.080 --> 00:33:20.320
2236
+ actual equations here because I think
2237
+
2238
+ 00:33:18.639 --> 00:33:23.039
2239
+ it'll be a little bit easier to
2240
+
2241
+ 00:33:20.320 --> 00:33:25.760
2242
+ understand after looking at the
2243
+
2244
+ 00:33:23.039 --> 00:33:28.360
2245
+ equations so generation is a mixture
2246
+
2247
+ 00:33:25.760 --> 00:33:31.440
2248
+ model and you pick a document and
2249
+
2250
+ 00:33:28.360 --> 00:33:36.519
2251
+ generate from the document this
2252
+
2253
+ 00:33:31.440 --> 00:33:40.080
2254
+ p z given X is the probability of
2255
+
2256
+ 00:33:36.519 --> 00:33:44.679
2257
+ picking that document given the query X
2258
+
2259
+ 00:33:40.080 --> 00:33:48.880
2260
+ and then this P Theta x z and all of the
2261
+
2262
+ 00:33:44.679 --> 00:33:51.480
2263
+ previous tokens is basically the uh
2264
+
2265
+ 00:33:48.880 --> 00:33:54.840
2266
+ probability of the next token given that
2267
+
2268
+ 00:33:51.480 --> 00:33:56.559
2269
+ you have this particular document so you
2270
+
2271
+ 00:33:54.840 --> 00:34:00.840
2272
+ can see that this is basically linearly
2273
+
2274
+ 00:33:56.559 --> 00:34:00.840
2275
+ interpolating between the multiple
2276
+
2277
+ 00:34:01.559 --> 00:34:05.760
2278
+ documents and if we look this can be
2279
+
2280
+ 00:34:04.600 --> 00:34:09.039
2281
+ considered the Retriever and the
2282
+
2283
+ 00:34:05.760 --> 00:34:09.039
2284
+ generator the Retriever and the
2285
+
2286
+ 00:34:10.839 --> 00:34:16.119
2287
+ reader one really important thing here
2288
+
2289
+ 00:34:13.639 --> 00:34:17.760
2290
+ uh that enables end-to-end training is
2291
+
2292
+ 00:34:16.119 --> 00:34:19.639
2293
+ they have this probability of the
2294
+
2295
+ 00:34:17.760 --> 00:34:22.919
2296
+ retriever is based on
2297
+
2298
+ 00:34:19.639 --> 00:34:25.480
2299
+ embeddings and so here we have the
2300
+
2301
+ 00:34:22.919 --> 00:34:29.040
2302
+ document embedding and the query
2303
+
2304
+ 00:34:25.480 --> 00:34:31.440
2305
+ embedding and the probability is
2306
+
2307
+ 00:34:29.040 --> 00:34:33.320
2308
+ proportional to the inner product of
2309
+
2310
+ 00:34:31.440 --> 00:34:36.599
2311
+ these exponentiated so you're basically
2312
+
2313
+ 00:34:33.320 --> 00:34:38.839
2314
+ taking a softmax over uh the inner
2315
+
2316
+ 00:34:36.599 --> 00:34:40.599
2317
+ product between the
2318
+
2319
+ 00:34:38.839 --> 00:34:44.200
2320
+ two
2321
+
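The two pieces just described, a softmax retriever over inner products and a mixture of per-document generation probabilities, combine into RAG's marginal likelihood, which can be sketched with toy random embeddings standing in for the encoders:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Mixture-of-documents generation, as in the RAG equations:
#   p(y_i | x, y_<i) = sum_z  p(z | x) * p_theta(y_i | x, z, y_<i)
# where the retriever probability p(z | x) is a softmax over
# exponentiated query-document inner products.
rng = np.random.default_rng(0)
q = rng.normal(size=8)                   # query embedding
D = rng.normal(size=(3, 8))              # 3 document embeddings (the index)
p_z = softmax(D @ q)                     # retriever: p(z | x)

V = 5                                    # toy vocabulary size
p_next_given_z = np.stack(               # reader: p(y_i | x, z, y_<i)
    [softmax(rng.normal(size=V)) for _ in range(3)])
p_next = p_z @ p_next_given_z            # marginalize over documents
```

Because `p_z` is a differentiable function of the embeddings, the gradient of the end-to-end likelihood flows into the retriever, which is exactly what lets it be trained without document-level annotations.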
2322
+ 00:34:40.599 --> 00:34:47.919
2323
+ and this adjusts the retriever to give
2324
+
2325
+ 00:34:44.200 --> 00:34:49.560
2326
+ higher similarities to helpful
2327
+
2328
+ 00:34:47.919 --> 00:34:52.560
2329
+ documents
2330
+
2331
+ 00:34:49.560 --> 00:34:52.560
2332
+ and
2333
+
2334
+ 00:34:54.040 --> 00:35:02.800
2335
+ so because the probability of the
2336
+
2337
+ 00:34:59.800 --> 00:35:04.839
2338
+ retriever model here is included in the
2339
+
2340
+ 00:35:02.800 --> 00:35:07.160
2341
+ end-to-end probability you don't actually
2342
+
2343
+ 00:35:04.839 --> 00:35:10.680
2344
+ need any annotations
2345
+
2346
+ 00:35:07.160 --> 00:35:12.839
2347
+ about which documents are useful you can
2348
+
2349
+ 00:35:10.680 --> 00:35:16.680
2350
+ just train all of this end to end and
2351
+
2352
+ 00:35:12.839 --> 00:35:19.480
2353
+ let backprop do its thing to update the
2354
+
2355
+ 00:35:16.680 --> 00:35:22.640
2356
+ uh the retriever as
2357
+
2358
+ 00:35:19.480 --> 00:35:25.000
2359
+ well one important issue when training
2360
+
2361
+ 00:35:22.640 --> 00:35:27.480
2362
+ models like this is that the search
2363
+
2364
+ 00:35:25.000 --> 00:35:30.400
2365
+ index will become stale so what do I
2366
+
2367
+ 00:35:27.480 --> 00:35:34.760
2368
+ mean by this if we go back to our
2369
+
2370
+ 00:35:30.400 --> 00:35:34.760
2371
+ previous uh thing about dense
2372
+
2373
+ 00:35:35.480 --> 00:35:43.560
2374
+ models creating this blue search index
2375
+
2376
+ 00:35:39.800 --> 00:35:45.400
2377
+ on the right side of the figure here is
2378
+
2379
+ 00:35:43.560 --> 00:35:48.680
2380
+ very costly so like let's say you want
2381
+
2382
+ 00:35:45.400 --> 00:35:50.520
2383
+ to embed a million documents or a
2384
+
2385
+ 00:35:48.680 --> 00:35:55.240
2386
+ billion documents if you're a big search
2387
+
2388
+ 00:35:50.520 --> 00:35:58.200
2389
+ engine company so doing this is very
2390
+
2391
+ 00:35:55.240 --> 00:36:00.599
2392
+ slow and
2393
+
2394
+ 00:35:58.200 --> 00:36:01.920
2395
+ in contrast doing lookup with kind of
2396
+
2397
+ 00:36:00.599 --> 00:36:04.160
2398
+ these approximate nearest neighbor
2399
+
2400
+ 00:36:01.920 --> 00:36:05.440
2401
+ searches is sublinear time or even you
2402
+
2403
+ 00:36:04.160 --> 00:36:08.119
2404
+ know log time so you can do it
2405
+
2406
+ 00:36:05.440 --> 00:36:12.319
2407
+ relatively quickly
2408
+
2409
+ 00:36:08.119 --> 00:36:15.680
2410
+ so it's fine to do lookup over this big
2411
+
2412
+ 00:36:12.319 --> 00:36:17.520
2413
+ index but if you start updating this
2414
+
2415
+ 00:36:15.680 --> 00:36:19.920
2416
+ document embedding you need to recreate
2417
+
2418
+ 00:36:17.520 --> 00:36:23.760
2419
+ the entire index and that would be you
2420
+
2421
+ 00:36:19.920 --> 00:36:27.240
2422
+ know very computationally costly so the
2423
+
2424
+ 00:36:23.760 --> 00:36:30.119
2425
+ solution to this proposed in this rag
2426
+
2427
+ 00:36:27.240 --> 00:36:33.640
2428
+ paper by Lewis et al is uh we only
2429
+
2430
+ 00:36:30.119 --> 00:36:35.640
2431
+ train the query embeddings and we keep
2432
+
2433
+ 00:36:33.640 --> 00:36:39.640
2434
+ the document embedding
2435
+
2436
+ 00:36:35.640 --> 00:36:41.920
2437
+ fixed there's other alternatives like um
2438
+
2439
+ 00:36:39.640 --> 00:36:45.000
2440
+ there was a paper called REALM uh from
2441
+
2442
+ 00:36:41.920 --> 00:36:48.040
2443
+ early in retrieval base modeling and in
2444
+
2445
+ 00:36:45.000 --> 00:36:50.040
2446
+ that in that method they basically had
2447
+
2448
+ 00:36:48.040 --> 00:36:51.520
2449
+ an asynchronous process that was going
2450
+
2451
+ 00:36:50.040 --> 00:36:55.760
2452
+ through and using the most recent
2453
+
2454
+ 00:36:51.520 --> 00:36:59.960
2455
+ document embedder to re-update the
2456
+
2457
+ 00:36:55.760 --> 00:37:03.359
2458
+ search index during training but that is
2459
+
2460
+ 00:36:59.960 --> 00:37:05.960
2461
+ uh you know kind of a very onerous
2462
+
2463
+ 00:37:03.359 --> 00:37:07.800
2464
+ process so I think it's quite common to
2465
+
2466
+ 00:37:05.960 --> 00:37:11.000
2467
+ use kind of a fixed document embedding
2468
+
2469
+ 00:37:07.800 --> 00:37:11.000
2470
+ in update only the
2471
+
2472
+ 00:37:12.079 --> 00:37:17.720
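The fixed-index trick can be pictured with a toy numpy sketch: the document embeddings stay frozen while only the query side takes gradient steps. The gold document label here is hand-picked for illustration, whereas RAG obtains this signal implicitly through the end-to-end likelihood:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
D = rng.normal(size=(4, 8))     # frozen document index (never re-embedded)
q = rng.normal(size=8)          # trainable query embedding
gold = 2                        # document we want ranked higher (hand-picked)

# Gradient descent on -log p(gold | x), with p(z | x) = softmax(D @ q).
for _ in range(50):
    p = softmax(D @ q)
    grad = D.T @ p - D[gold]    # gradient of -log p[gold] w.r.t. q
    q -= 0.1 * grad             # D is untouched, so the index stays valid

final_p = softmax(D @ q)
```

Since only `q` moves, the expensive offline index never goes stale, which is the whole point of training the query encoder alone.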
2473
+ queries another thing to think about is
2474
+
2475
+ 00:37:14.359 --> 00:37:21.160
2476
+ when do we do retrieval um so there's a
2477
+
2478
+ 00:37:17.720 --> 00:37:23.079
2479
+ bunch of different methods the rag paper
2480
+
2481
+ 00:37:21.160 --> 00:37:24.440
2482
+ that I mentioned before did this only
2483
+
2484
+ 00:37:23.079 --> 00:37:26.359
2485
+ once right at the very beginning of
2486
+
2487
+ 00:37:24.440 --> 00:37:29.400
2488
+ generation it grabbed a single document
2489
+
2490
+ 00:37:26.359 --> 00:37:32.560
2491
+ and generated the entire output this is
2492
+
2493
+ 00:37:29.400 --> 00:37:34.800
2494
+ the default method used by most
2495
+
2496
+ 00:37:32.560 --> 00:37:37.240
2497
+ systems however there's other options as
2498
+
2499
+ 00:37:34.800 --> 00:37:39.640
2500
+ well you can retrieve uh several times
2501
+
2502
+ 00:37:37.240 --> 00:37:43.040
2503
+ during generation as
2504
+
2505
+ 00:37:39.640 --> 00:37:44.480
2506
+ necessary and the way this works uh we
2507
+
2508
+ 00:37:43.040 --> 00:37:46.280
2509
+ can do this either by generating a
2510
+
2511
+ 00:37:44.480 --> 00:37:48.480
2512
+ search token uh saying that we should
2513
+
2514
+ 00:37:46.280 --> 00:37:50.200
2515
+ start searching or searching when the
2516
+
2517
+ 00:37:48.480 --> 00:37:52.640
2518
+ model is
2519
+
2520
+ 00:37:50.200 --> 00:37:55.920
2521
+ uncertain and another way is to do this
2522
+
2523
+ 00:37:52.640 --> 00:37:58.079
2524
+ every token so we can do this by finding
2525
+
2526
+ 00:37:55.920 --> 00:37:59.760
2527
+ similar final embeddings and using this
2528
+
2529
+ 00:37:58.079 --> 00:38:02.240
2530
+ to influence the
2531
+
2532
+ 00:37:59.760 --> 00:38:04.720
2533
+ probabilities or approximating attention
2534
+
2535
+ 00:38:02.240 --> 00:38:06.440
2536
+ with nearest neighbors so I'm going to
2537
+
2538
+ 00:38:04.720 --> 00:38:08.920
2539
+ explain about each of these in a bit
2540
+
2541
+ 00:38:06.440 --> 00:38:12.480
2542
+ more detail
2543
+
2544
+ 00:38:08.920 --> 00:38:16.119
2545
+ in so triggering retrieval with token
2546
+
2547
+ 00:38:12.480 --> 00:38:19.720
2548
+ embeddings um was proposed by Tool
2549
+
2550
+ 00:38:16.119 --> 00:38:22.119
2551
+ former by Schick et al and the way it works is
2552
+
2553
+ 00:38:19.720 --> 00:38:25.000
2554
+ you generate tokens that trigger
2555
+
2556
+ 00:38:22.119 --> 00:38:27.880
2557
+ retrieval or other tools so in this
2558
+
2559
+ 00:38:25.000 --> 00:38:30.079
2560
+ particular method it uh had several
2561
+
2562
+ 00:38:27.880 --> 00:38:32.000
2563
+ tools including asking a QA model or
2564
+
2565
+ 00:38:30.079 --> 00:38:34.800
2566
+ getting a calculator or having a machine
2567
+
2568
+ 00:38:32.000 --> 00:38:37.200
2569
+ translation system but with respect to
2570
+
2571
+ 00:38:34.800 --> 00:38:40.000
2572
+ retrieval augmented generation it had
2573
+
2574
+ 00:38:37.200 --> 00:38:41.560
2575
+ this essentially Wiki search
2576
+
2577
+ 00:38:40.000 --> 00:38:43.680
2578
+ functionality that would look up
2579
+
2580
+ 00:38:41.560 --> 00:38:46.680
2581
+ something in Wikipedia and then use that
2582
+
2583
+ 00:38:43.680 --> 00:38:46.680
2584
+ to influence the final
2585
+
2586
+ 00:38:46.760 --> 00:38:52.200
2587
+ probabilities
2588
+
2589
+ 00:38:48.800 --> 00:38:55.160
2590
+ and the way this was trained is training
2591
+
2592
+ 00:38:52.200 --> 00:38:59.800
2593
+ was done in an iterative manner where it
2594
+
2595
+ 00:38:55.160 --> 00:38:59.800
2596
+ basically generated uh kind
2597
+
2598
+ 00:39:00.000 --> 00:39:05.680
2599
+ of examples of tools being useful and
2600
+
2601
+ 00:39:04.359 --> 00:39:09.560
2602
+ when the
2603
+
2604
+ 00:39:05.680 --> 00:39:14.160
2605
+ tools improve the probability of the
2606
+
2607
+ 00:39:09.560 --> 00:39:16.119
2608
+ following output then that would be kind
2609
+
2610
+ 00:39:14.160 --> 00:39:19.560
2611
+ of treated as a positive example and
2612
+
2613
+ 00:39:16.119 --> 00:39:21.520
2614
+ used to further train the model so this
2615
+
2616
+ 00:39:19.560 --> 00:39:23.400
2617
+ was really influential and in fact this
2618
+
2619
+ 00:39:21.520 --> 00:39:27.000
2620
+ is how things are implemented in chat
2621
+
2622
+ 00:39:23.400 --> 00:39:29.319
2623
+ GPT nowadays not only for um doing
2624
+
2625
+ 00:39:27.000 --> 00:39:33.400
2626
+ retrieval but also doing other tools
2627
+
2628
+ 00:39:29.319 --> 00:39:35.200
2629
+ like um for example uh generating code
2630
+
2631
+ 00:39:33.400 --> 00:39:37.440
2632
+ or generating images or other things
2633
+
2634
+ 00:39:35.200 --> 00:39:37.440
2635
+ like
2636
+
2637
+ 00:39:38.200 --> 00:39:45.079
2638
+ this another option is to trigger
2639
+
2640
+ 00:39:40.920 --> 00:39:48.240
2641
+ retrieval uh with uncertainty estimates
2642
+
2643
+ 00:39:45.079 --> 00:39:52.280
2644
+ so flare this is a paper by my student
2645
+
2646
+ 00:39:48.240 --> 00:39:55.160
2647
+ Zhengbao Jiang um where we try to generate
2648
+
2649
+ 00:39:52.280 --> 00:39:58.560
2650
+ content and then do retrieval if the
2651
+
2652
+ 00:39:55.160 --> 00:40:01.800
2653
+ language model certainty is low so
2654
+
2655
+ 00:39:58.560 --> 00:40:05.599
2656
+ here's a schematic of how this works but
2657
+
2658
+ 00:40:01.800 --> 00:40:09.160
2659
+ basically um if we have
2660
+
2661
+ 00:40:05.599 --> 00:40:13.440
2662
+ some uh retrieved documents we can say
2663
+
2664
+ 00:40:09.160 --> 00:40:16.560
2665
+ generate a summary about Joe Biden and
2666
+
2667
+ 00:40:13.440 --> 00:40:19.560
2668
+ when it generates a summary maybe for
2669
+
2670
+ 00:40:16.560 --> 00:40:20.960
2671
+ the first output um the language model
2672
+
2673
+ 00:40:19.560 --> 00:40:22.960
2674
+ has high
2675
+
2676
+ 00:40:20.960 --> 00:40:24.240
2677
+ confidence and because the language
2678
+
2679
+ 00:40:22.960 --> 00:40:25.359
2680
+ model has high confidence we just
2681
+
2682
+ 00:40:24.240 --> 00:40:27.520
2683
+ generate the
2684
+
2685
+ 00:40:25.359 --> 00:40:29.599
2686
+ output
2687
+
2688
+ 00:40:27.520 --> 00:40:31.839
2689
+ however in the next step if it might
2690
+
2691
+ 00:40:29.599 --> 00:40:33.599
2692
+ generate something like saying Joe Biden
2693
+
2694
+ 00:40:31.839 --> 00:40:35.680
2695
+ attended the University of Pennsylvania
2696
+
2697
+ 00:40:33.599 --> 00:40:37.160
2698
+ where he earned a law degree but the
2699
+
2700
+ 00:40:35.680 --> 00:40:39.000
2701
+ model might not be very certain about
2702
+
2703
+ 00:40:37.160 --> 00:40:41.560
2704
+ this it might have a low probability of
2705
+
2706
+ 00:40:39.000 --> 00:40:45.839
2707
+ certain important entities and So based
2708
+
2709
+ 00:40:41.560 --> 00:40:48.839
2710
+ on this uh we then form a a query where
2711
+
2712
+ 00:40:45.839 --> 00:40:52.119
2713
+ what we do is essentially we blank out
2714
+
2715
+ 00:40:48.839 --> 00:40:55.079
2716
+ the low probability parts of this and we
2717
+
2718
+ 00:40:52.119 --> 00:40:57.200
2719
+ do a search and so this is also a little
2720
+
2721
+ 00:40:55.079 --> 00:41:00.240
2722
+ bit like the hypothetical
2723
+
2724
+ 00:40:57.200 --> 00:41:02.520
2725
+ embeddings method where we basically create
2726
+
2727
+ 00:41:00.240 --> 00:41:04.040
2728
+ a document that we think will look
2729
+
2730
+ 00:41:02.520 --> 00:41:07.119
2731
+ similar to the document that we want to
2732
+
2733
+ 00:41:04.040 --> 00:41:09.480
2734
+ find we use that to create search
2735
+
2736
+ 00:41:07.119 --> 00:41:11.359
2737
+ results and then we generate the output
2738
+
2739
+ 00:41:09.480 --> 00:41:13.880
2740
+ and then we continue doing that and
2741
+
2742
+ 00:41:11.359 --> 00:41:15.960
2743
+ whenever we have a high confidence
2744
+
2745
+ 00:41:13.880 --> 00:41:18.800
2746
+ output like the one here we don't do any
2747
+
2748
+ 00:41:15.960 --> 00:41:20.040
2749
+ retrieval we just you know generate uh
2750
+
2751
+ 00:41:18.800 --> 00:41:21.880
2752
+ directly from the parameters of the
2753
+
2754
+ 00:41:20.040 --> 00:41:23.960
2755
+ model but whenever we have low
2756
+
2757
+ 00:41:21.880 --> 00:41:27.400
2758
+ confidence outputs we do the retrieval
2759
+
2760
+ 00:41:23.960 --> 00:41:30.400
2761
+ and base the output on this and so I
2762
+
2763
+ 00:41:27.400 --> 00:41:33.119
2764
+ think this is uh you know a nice method
2765
+
2766
+ 00:41:30.400 --> 00:41:35.000
2767
+ that could potentially be uh used the
2768
+
2769
+ 00:41:33.119 --> 00:41:36.920
2770
+ downside to that is you might sometimes
2771
+
2772
+ 00:41:35.000 --> 00:41:38.920
2773
+ need to generate twice because you would
2774
+
2775
+ 00:41:36.920 --> 00:41:40.480
2776
+ generate the output once and then find
2777
+
2778
+ 00:41:38.920 --> 00:41:42.720
2779
+ the low confidence parts and generate
2780
+
2781
+ 00:41:40.480 --> 00:41:45.400
2782
+ again but you know if you really care
2783
+
2784
+ 00:41:42.720 --> 00:41:47.319
2785
+ about the uh kind of quality of the
2786
+
2787
+ 00:41:45.400 --> 00:41:49.640
2788
+ output this is I think a reasonable
2789
+
2790
+ 00:41:47.319 --> 00:41:49.640
2791
+ thing to
2792
+
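A toy version of this confidence-triggered loop, in the spirit of FLARE, can be sketched as follows; the token probabilities are made up and `flare_step` is a hypothetical helper, not the paper's implementation:

```python
# If any token probability falls below a threshold, blank out the
# uncertain tokens and use the remaining high-confidence tokens as a
# retrieval query before regenerating the sentence.
THRESHOLD = 0.4

def flare_step(tokens, probs, threshold=THRESHOLD):
    if min(probs) >= threshold:
        return None                           # confident: keep the sentence
    masked = [t for t, p in zip(tokens, probs) if p >= threshold]
    return " ".join(masked)                   # low-confidence spans removed

sent = ["Biden", "attended", "the", "University", "of", "Pennsylvania"]
probs = [0.9, 0.8, 0.95, 0.7, 0.9, 0.2]       # model unsure about the entity
query = flare_step(sent, probs)               # becomes the retrieval query
```

Returning `None` for confident sentences captures the cost trade-off mentioned above: retrieval (and regeneration) only happens for the low-confidence spans.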
2793
+ 00:41:50.160 --> 00:41:54.920
2794
+ do okay so now moving on to the Token by
2795
+
2796
+ 00:41:53.000 --> 00:41:59.800
2797
+ token retrieval
2798
+
2799
+ 00:41:54.920 --> 00:42:03.560
2800
+ methods the kind of original or one of
2801
+
2802
+ 00:41:59.800 --> 00:42:05.200
2803
+ the methods that popularized this idea
2804
+
2805
+ 00:42:03.560 --> 00:42:08.720
2806
+ of token by token retrieval is something
2807
+
2808
+ 00:42:05.200 --> 00:42:10.760
2809
+ called kNN-LM and the way it works is it
2810
+
2811
+ 00:42:08.720 --> 00:42:13.839
2812
+ retrieves similar
2813
+
2814
+ 00:42:10.760 --> 00:42:16.680
2815
+ examples and then uses the following
2816
+
2817
+ 00:42:13.839 --> 00:42:20.880
2818
+ tokens from these
2819
+
2820
+ 00:42:16.680 --> 00:42:23.800
2821
+ examples and this is kind of like a very
2822
+
2823
+ 00:42:20.880 --> 00:42:25.839
2824
+ powerful count-based bigram model in a way
2825
+
2826
+ 00:42:23.800 --> 00:42:28.440
2827
+ so if you remember back to when we were
2828
+
2829
+ 00:42:25.839 --> 00:42:32.920
2830
+ talking about count-based n-gram models
2831
+
2832
+ 00:42:28.440 --> 00:42:36.440
2833
+ what we would do is we would take the
2834
+
2835
+ 00:42:32.920 --> 00:42:39.400
2836
+ previous token and we would calculate
2837
+
2838
+ 00:42:36.440 --> 00:42:41.319
2839
+ the probability of the next token by
2840
+
2841
+ 00:42:39.400 --> 00:42:43.040
2842
+ summing up together all of the next
2843
+
2844
+ 00:42:41.319 --> 00:42:44.800
2845
+ tokens and dividing by the total number
2846
+
2847
+ 00:42:43.040 --> 00:42:49.240
2848
+ of times that previous token
2849
+
2850
+ 00:42:44.800 --> 00:42:52.720
2851
+ occurred and so given that background uh
2852
+
2853
+ 00:42:49.240 --> 00:42:56.760
2854
+ we can talk about how the kNN-LM
2855
+
2856
+ 00:42:52.720 --> 00:43:00.319
2857
+ works so we have the text context X
2858
+
2859
+ 00:42:56.760 --> 00:43:02.240
2860
+ and we want to generate a Target output
2861
+
2862
+ 00:43:00.319 --> 00:43:04.839
2863
+ separately from this we have all of the
2864
+
2865
+ 00:43:02.240 --> 00:43:06.440
2866
+ training contexts so this is all of the
2867
+
2868
+ 00:43:04.839 --> 00:43:09.920
2869
+ contexts that appeared in our training
2870
+
2871
+ 00:43:06.440 --> 00:43:13.520
2872
+ data and we encode all of these training
2873
+
2874
+ 00:43:09.920 --> 00:43:15.720
2875
+ contexts specifically by calculating the
2876
+
2877
+ 00:43:13.520 --> 00:43:18.559
2878
+ representation of the final layer or
2879
+
2880
+ 00:43:15.720 --> 00:43:21.119
2881
+ near the final layer of the model and so
2882
+
2883
+ 00:43:18.559 --> 00:43:23.200
2884
+ we encode that as
2885
+
2886
+ 00:43:21.119 --> 00:43:25.240
2887
+ representations separately from that we
2888
+
2889
+ 00:43:23.200 --> 00:43:27.920
2890
+ remember the next word that appeared
2891
+
2892
+ 00:43:25.240 --> 00:43:29.720
2893
+ after this context
2894
+
2895
+ 00:43:27.920 --> 00:43:32.920
2896
+ so now we have a data store consisting
2897
+
2898
+ 00:43:29.720 --> 00:43:35.040
2899
+ of representations and next words we then
2900
+
2901
+ 00:43:32.920 --> 00:43:38.440
2902
+ take the representation of the current
2903
+
2904
+ 00:43:35.040 --> 00:43:40.880
2905
+ context and we calculate the distance
2906
+
2907
+ 00:43:38.440 --> 00:43:43.400
2908
+ between the current context and all of
2909
+
2910
+ 00:43:40.880 --> 00:43:47.119
2911
+ the other similar context in the
2912
+
2913
+ 00:43:43.400 --> 00:43:49.839
2914
+ database we take the nearest K so we
2915
+
2916
+ 00:43:47.119 --> 00:43:52.440
2917
+ take the top uh K examples here which
2918
+
2919
+ 00:43:49.839 --> 00:43:55.240
2920
+ would be Hawaii Illinois and
2921
+
2922
+ 00:43:52.440 --> 00:43:57.520
2923
+ Hawaii we then do uh some sort of
2924
+
2925
+ 00:43:55.240 --> 00:44:01.440
2926
+ normalization based on the
2927
+
2928
+ 00:43:57.520 --> 00:44:05.200
2929
+ distance and this gives us a probability
2930
+
2931
+ 00:44:01.440 --> 00:44:06.680
2932
+ distribution over all of the next tokens
2933
+
2934
+ 00:44:05.200 --> 00:44:10.599
2935
+ sometimes these tokens are duplicated
2936
+
2937
+ 00:44:06.680 --> 00:44:13.599
2938
+ multiple times and so we aggregate all
2939
+
2940
+ 00:44:10.599 --> 00:44:15.800
2941
+ of these counts to be Hawaii for example
2942
+
2943
+ 00:44:13.599 --> 00:44:18.839
2944
+ 0.8 and Illinois
2945
+
2946
+ 00:44:15.800 --> 00:44:21.839
2947
+ 0.2 and then we interpolate this with
2948
+
2949
+ 00:44:18.839 --> 00:44:24.040
2950
+ the probability given by the standard
2951
+
2952
+ 00:44:21.839 --> 00:44:26.440
2953
+ language model using an interpolation
2954
+
2955
+ 00:44:24.040 --> 00:44:28.400
2956
+ coefficient Lambda and this gives us our
2957
+
2958
+ 00:44:26.440 --> 00:44:31.000
2959
+ final
2960
+
2961
+ 00:44:28.400 --> 00:44:34.559
2962
+ probability so the nice thing about this
2963
+
2964
+ 00:44:31.000 --> 00:44:38.000
2965
+ is this allows us to explicitly ground
2966
+
2967
+ 00:44:34.559 --> 00:44:42.079
2968
+ our outputs in individual
2969
+
2970
+ 00:44:38.000 --> 00:44:45.319
2971
+ examples uh and it's a pretty effective
2972
+
2973
+ 00:44:42.079 --> 00:44:48.760
2974
+ way to improve the perplexity of models
2975
+
2976
+ 00:44:45.319 --> 00:44:53.839
2977
+ improve translation and other stuff like
2978
+
2979
+ 00:44:48.760 --> 00:44:56.119
2980
+ this the disadvantage of doing this is
2981
+
2982
+ 00:44:53.839 --> 00:44:59.319
2983
+ that it kind of adds
2984
+
2985
+ 00:44:56.119 --> 00:45:01.800
2986
+ an extra component to the model it adds
2987
+
2988
+ 00:44:59.319 --> 00:45:05.440
2989
+ extra
2990
+
2991
+ 00:45:01.800 --> 00:45:08.520
2992
+ um kind of hyperparameters like Lambda
2993
+
2994
+ 00:45:05.440 --> 00:45:11.680
2995
+ and things like this so it is a little
2996
+
2997
+ 00:45:08.520 --> 00:45:16.960
2998
+ bit finicky and it doesn't work in all
2999
+
3000
+ 00:45:11.680 --> 00:45:21.440
3001
+ situations and so another method that we
3002
+
3003
+ 00:45:16.960 --> 00:45:23.559
3004
+ uh proposed by Amanda Bertsch who gave
3005
+
3006
+ 00:45:21.440 --> 00:45:26.920
3007
+ the uh previous lecture on generation in
3008
+
3009
+ 00:45:23.559 --> 00:45:29.240
3010
+ this class is Unlimiformer and basically
3011
+
3012
+ 00:45:26.920 --> 00:45:32.680
3013
+ what Unlimiformer does is it notes that
3014
+
3015
+ 00:45:29.240 --> 00:45:36.079
3016
+ attention itself is an inner product
3017
+
3018
+ 00:45:32.680 --> 00:45:40.440
3019
+ search and it does top-k
3020
+
3021
+ 00:45:36.079 --> 00:45:42.680
3022
+ attention and the way we do this is we
3023
+
3024
+ 00:45:40.440 --> 00:45:45.160
3025
+ first process the input with a sliding
3026
+
3027
+ 00:45:42.680 --> 00:45:47.480
3028
+ window and then perform attention using
3029
+
3030
+ 00:45:45.160 --> 00:45:49.960
3031
+ a vector index so if we have a really
3032
+
3033
+ 00:45:47.480 --> 00:45:54.280
3034
+ long input that we want to encode what
3035
+
3036
+ 00:45:49.960 --> 00:45:56.559
3037
+ we do is we first encode chunks so we
3038
+
3039
+ 00:45:54.280 --> 00:46:01.960
3040
+ encode for example AB
3041
+
3042
+ 00:45:56.559 --> 00:46:03.839
3043
+ then we encode CD and we encode EF we
3044
+
3045
+ 00:46:01.960 --> 00:46:06.240
3046
+ concatenate them together into a big
3047
+
3048
+ 00:46:03.839 --> 00:46:07.800
3049
+ index of one long input so in a way that
3050
+
3051
+ 00:46:06.240 --> 00:46:10.920
3052
+ this is similar to what they did in the
3053
+
3054
+ 00:46:07.800 --> 00:46:12.720
3055
+ kNN-LM you know concatenate all of these
3056
+
3057
+ 00:46:10.920 --> 00:46:16.520
3058
+ embeddings into a single
3059
+
3060
+ 00:46:12.720 --> 00:46:18.680
3061
+ input but the difference is that this is
3062
+
3063
+ 00:46:16.520 --> 00:46:21.640
3064
+ done with
3065
+
3066
+ 00:46:18.680 --> 00:46:24.280
3067
+ um the values that we are attending to
3068
+
3069
+ 00:46:21.640 --> 00:46:27.559
3070
+ as opposed to just the final
3071
+
3072
+ 00:46:24.280 --> 00:46:30.079
3073
+ layer and
3074
+
3075
+ 00:46:27.559 --> 00:46:33.680
3076
+ the interesting thing about this is now
3077
+
3078
+ 00:46:30.079 --> 00:46:36.200
3079
+ we have an index of one long input and
3080
+
3081
+ 00:46:33.680 --> 00:46:39.800
3082
+ when we want to do our next version of
3083
+
3084
+ 00:46:36.200 --> 00:46:42.240
3085
+ attention we do KNN search from the
3086
+
3087
+ 00:46:39.800 --> 00:46:44.280
3088
+ query we take the retrieved hidden
3089
+
3090
+ 00:46:42.240 --> 00:46:47.880
3091
+ States and then we just do attention
3092
+
3093
+ 00:46:44.280 --> 00:46:50.440
3094
+ over them so the nice thing about this
3095
+
3096
+ 00:46:47.880 --> 00:46:53.079
3097
+ is in the extreme case this makes no
3098
+
3099
+ 00:46:50.440 --> 00:46:55.240
3100
+ changes to the model what I mean by this
3101
+
3102
+ 00:46:53.079 --> 00:46:57.520
3103
+ is let's say our input was small enough
3104
+
3105
+ 00:46:55.240 --> 00:47:02.240
3106
+ that we could encode it in only a single
3107
+
3108
+ 00:46:57.520 --> 00:47:06.400
3109
+ chunk and for KNN search we also did KNN
3110
+
3111
+ 00:47:02.240 --> 00:47:09.559
3112
+ search um we did you know exact kNN
3113
+
3114
+ 00:47:06.400 --> 00:47:12.400
3115
+ search over all of the embeddings in the
3116
+
3117
+ 00:47:09.559 --> 00:47:14.680
3118
+ chunk in that case this would just be
3119
+
3120
+ 00:47:12.400 --> 00:47:16.520
3121
+ normal attention it's exactly the same
3122
+
3123
+ 00:47:14.680 --> 00:47:18.640
3124
+ as normal
3125
+
3126
+ 00:47:16.520 --> 00:47:20.160
3127
+ attention however there are some
3128
+
3129
+ 00:47:18.640 --> 00:47:21.760
3130
+ approximations that go into here like
3131
+
3132
+ 00:47:20.160 --> 00:47:24.000
3133
+ when we encode chunks they might not be
3134
+
3135
+ 00:47:21.760 --> 00:47:26.359
3136
+ exactly the same as if we encoded the
3137
+
3138
+ 00:47:24.000 --> 00:47:29.839
3139
+ entire thing together and we're also
3140
+
3141
+ 00:47:26.359 --> 00:47:33.640
3142
+ chopping off some of the values with
3143
+
3144
+ 00:47:29.839 --> 00:47:35.800
3145
+ very low um kind of inner products and
3146
+
3147
+ 00:47:33.640 --> 00:47:37.400
3148
+ so because of this there are some
3149
+
3150
+ 00:47:35.800 --> 00:47:38.760
3151
+ approximations being made but in the
3152
+
3153
+ 00:47:37.400 --> 00:47:40.160
3154
+ extreme case if we made no
3155
+
3156
+ 00:47:38.760 --> 00:47:41.880
3157
+ approximations this would just be
3158
+
3159
+ 00:47:40.160 --> 00:47:44.359
3160
+ exactly the same model as we were using
3161
+
3162
+ 00:47:41.880 --> 00:47:46.160
3163
+ before so I find this pretty attractive
3164
+
3165
+ 00:47:44.359 --> 00:47:48.760
3166
+ and uh you know empirically it gives
3167
+
3168
+ 00:47:46.160 --> 00:47:51.720
3169
+ very good results over long
3170
+
3171
+ 00:47:48.760 --> 00:47:53.440
3172
+ distances and you know we can always
3173
+
3174
+ 00:47:51.720 --> 00:47:56.240
3175
+ make our approximations better and
3176
+
3177
+ 00:47:53.440 --> 00:47:57.680
3178
+ improve this model as well so I I think
3179
+
3180
+ 00:47:56.240 --> 00:48:00.960
3181
+ this is an attractive method that you
3182
+
3183
+ 00:47:57.680 --> 00:48:00.960
3184
+ might be interested in taking a look
3185
+
3186
+ 00:48:02.240 --> 00:48:06.200
3187
+ at okay for the final part of this I'd
3188
+
3189
+ 00:48:04.559 --> 00:48:08.079
3190
+ like to talk about long context
3191
+
3192
+ 00:48:06.200 --> 00:48:12.400
3193
+ Transformers and these are models that
3194
+
3195
+ 00:48:08.079 --> 00:48:15.119
3196
+ are explicitly trained in a way that
3197
+
3198
+ 00:48:12.400 --> 00:48:16.920
3199
+ allows you to attend to longer contexts
3200
+
3201
+ 00:48:15.119 --> 00:48:18.839
3202
+ in an efficient
3203
+
3204
+ 00:48:16.920 --> 00:48:21.960
3205
+ manner
3206
+
3207
+ 00:48:18.839 --> 00:48:23.680
3208
+ so one way that we can train over longer
3209
+
3210
+ 00:48:21.960 --> 00:48:25.880
3211
+ context is just append all of the
3212
+
3213
+ 00:48:23.680 --> 00:48:28.040
3214
+ context together and in fact shortly
3215
+
3216
+ 00:48:25.880 --> 00:48:32.200
3217
+ after Transformers came out uh this
3218
+
3219
+ 00:48:28.040 --> 00:48:34.280
3220
+ paper by Voita et al. demonstrated that um
3221
+
3222
+ 00:48:32.200 --> 00:48:36.160
3223
+ doing this can learn you know
3224
+
3225
+ 00:48:34.280 --> 00:48:38.119
3226
+ interesting document level phenomena so
3227
+
3228
+ 00:48:36.160 --> 00:48:40.440
3229
+ it can identify when
3230
+
3231
+ 00:48:38.119 --> 00:48:42.480
3232
+ multiple uh words refer to the same
3233
+
3234
+ 00:48:40.440 --> 00:48:43.680
3235
+ thing or co-reference and other things
3236
+
3237
+ 00:48:42.480 --> 00:48:45.640
3238
+ like
3239
+
3240
+ 00:48:43.680 --> 00:48:47.720
3241
+ this however the problem with
3242
+
3243
+ 00:48:45.640 --> 00:48:51.119
3244
+ Transformers is that computation is
3245
+
3246
+ 00:48:47.720 --> 00:48:52.799
3247
+ quadratic in the sentence length because
3248
+
3249
+ 00:48:51.119 --> 00:48:54.599
3250
+ you're multiplying all of the query
3251
+
3252
+ 00:48:52.799 --> 00:48:56.799
3253
+ vectors by all of the key
3254
+
3255
+ 00:48:54.599 --> 00:48:59.480
3256
+ vectors
3257
+
3258
+ 00:48:56.799 --> 00:49:02.799
3259
+ and that basically causes a big problem
3260
+
3261
+ 00:48:59.480 --> 00:49:02.799
3262
+ if your sequences become very
3263
+
3264
+ 00:49:03.480 --> 00:49:09.760
3265
+ long so if we go back to what we did in
3266
+
3267
+ 00:49:07.480 --> 00:49:12.400
3268
+ RNNs uh from the very beginning of the
3269
+
3270
+ 00:49:09.760 --> 00:49:14.359
3271
+ class in RNNs they don't have this
3272
+
3273
+ 00:49:12.400 --> 00:49:16.280
3274
+ problem because computation is linear in
3275
+
3276
+ 00:49:14.359 --> 00:49:20.440
3277
+ the length of the sequence you just pass
3278
+
3279
+ 00:49:16.280 --> 00:49:22.200
3280
+ along the RNN State and every single
3281
+
3282
+ 00:49:20.440 --> 00:49:23.839
3283
+ time you do the same computation over it
3284
+
3285
+ 00:49:22.200 --> 00:49:26.559
3286
+ so there's no quadratic term in
3287
+
3288
+ 00:49:23.839 --> 00:49:32.400
3289
+ calculating RNNs
3290
+
3291
+ 00:49:26.559 --> 00:49:34.880
3292
+ another thing is that when doing RNNs
3293
+
3294
+ 00:49:32.400 --> 00:49:37.680
3295
+ you can actually pass state infinitely
3296
+
3297
+ 00:49:34.880 --> 00:49:39.040
3298
+ during the forward pass by just
3299
+
3300
+ 00:49:37.680 --> 00:49:40.240
3301
+ calculating the hidden State and then
3302
+
3303
+ 00:49:39.040 --> 00:49:42.119
3304
+ throwing away the rest of the
3305
+
3306
+ 00:49:40.240 --> 00:49:43.359
3307
+ computation graph that was used in
3308
+
3309
+ 00:49:42.119 --> 00:49:45.160
3310
+ calculating that hidden State and
3311
+
3312
+ 00:49:43.359 --> 00:49:48.319
3313
+ there's no approximation that goes on
3314
+
3315
+ 00:49:45.160 --> 00:49:49.680
3316
+ there so unlike in Unlimiformer that I
3317
+
3318
+ 00:49:48.319 --> 00:49:51.640
3319
+ was talking about before where we needed
3320
+
3321
+ 00:49:49.680 --> 00:49:54.119
3322
+ to make approximations none need to be
3323
+
3324
+ 00:49:51.640 --> 00:49:56.400
3325
+ made in this
3326
+
3327
+ 00:49:54.119 --> 00:50:00.200
3328
+ case however there is a problem with
3329
+
3330
+ 00:49:56.400 --> 00:50:02.040
3331
+ doing backprop uh because in order to
3332
+
3333
+ 00:50:00.200 --> 00:50:05.839
3334
+ do backprop normally you maintain the
3335
+
3336
+ 00:50:02.040 --> 00:50:09.720
3337
+ entire you know state of the computation
3338
+
3339
+ 00:50:05.839 --> 00:50:12.400
3340
+ graph and so there's a common method to
3341
+
3342
+ 00:50:09.720 --> 00:50:15.280
3343
+ fix this is basically you pass along the
3344
+
3345
+ 00:50:12.400 --> 00:50:16.920
3346
+ RNN state from the previous sentence but
3347
+
3348
+ 00:50:15.280 --> 00:50:19.240
3349
+ you just don't do backprop into the
3350
+
3351
+ 00:50:16.920 --> 00:50:21.200
3352
+ previous sentence and this is called
3353
+
3354
+ 00:50:19.240 --> 00:50:24.040
3355
+ truncated backprop or truncated back
3356
+
3357
+ 00:50:21.200 --> 00:50:27.280
3358
+ propagation through time and this allows
3359
+
3360
+ 00:50:24.040 --> 00:50:30.160
3361
+ you to essentially train models with
3362
+
3363
+ 00:50:27.280 --> 00:50:32.319
3364
+ infinite context um or at least models
3365
+
3366
+ 00:50:30.160 --> 00:50:33.720
3367
+ that can pass along context infinitely
3368
+
3369
+ 00:50:32.319 --> 00:50:36.359
3370
+ even if you're not back propping into
3371
+
3372
+ 00:50:33.720 --> 00:50:36.359
3373
+ the encoder
3374
+
3375
+ 00:50:37.480 --> 00:50:43.520
3376
+ there so of course a problem with this
3377
+
3378
+ 00:50:40.720 --> 00:50:45.880
3379
+ over long contexts is recurrence uh
3380
+
3381
+ 00:50:43.520 --> 00:50:47.520
3382
+ recurrent models can be slow due to the
3383
+
3384
+ 00:50:45.880 --> 00:50:51.400
3385
+ kind of sequential dependence they're
3386
+
3387
+ 00:50:47.520 --> 00:50:54.280
3388
+ not ideal for um you know running on
3389
+
3390
+ 00:50:51.400 --> 00:50:57.359
3391
+ gpus or things like that and this is
3392
+
3393
+ 00:50:54.280 --> 00:51:01.960
3394
+ improved by recent architectures like
3395
+
3396
+ 00:50:57.359 --> 00:51:05.359
3397
+ Mamba and RWKV which are more conducive
3398
+
3399
+ 00:51:01.960 --> 00:51:07.079
3400
+ to GPU-based training um while still
3401
+
3402
+ 00:51:05.359 --> 00:51:08.599
3403
+ maintaining linear time complexity and
3404
+
3405
+ 00:51:07.079 --> 00:51:11.480
3406
+ so I'm looking forward to talking about
3407
+
3408
+ 00:51:08.599 --> 00:51:11.480
3409
+ that more in a future
3410
+
3411
+ 00:51:13.000 --> 00:51:17.559
3412
+ class so actually if we take this idea
3413
+
3414
+ 00:51:15.880 --> 00:51:20.440
3415
+ of truncated back propagation through
3416
+
3417
+ 00:51:17.559 --> 00:51:22.359
3418
+ time this can also be applied to
3419
+
3420
+ 00:51:20.440 --> 00:51:25.440
3421
+ Transformers and there's a really nice
3422
+
3423
+ 00:51:22.359 --> 00:51:27.880
3424
+ paper Transformer-XL also created by
3425
+
3426
+ 00:51:25.440 --> 00:51:31.119
3427
+ kungai who was formerly at
3428
+
3429
+ 00:51:27.880 --> 00:51:33.119
3430
+ CMU and what this does is this attempts
3431
+
3432
+ 00:51:31.119 --> 00:51:35.760
3433
+ to fix vectors from the previous
3434
+
3435
+ 00:51:33.119 --> 00:51:39.440
3436
+ sentence so if we have a standard
3437
+
3438
+ 00:51:35.760 --> 00:51:40.720
3439
+ Transformer uh in a Transformer-XL
3440
+
3441
+ 00:51:39.440 --> 00:51:44.640
3442
+ normally what we do in the standard
3443
+
3444
+ 00:51:40.720 --> 00:51:48.480
3445
+ Transformer is each Vector attends back
3446
+
3447
+ 00:51:44.640 --> 00:51:50.920
3448
+ to all the other vectors in the current
3449
+
3450
+ 00:51:48.480 --> 00:51:53.839
3451
+ context what Transformer-XL does
3452
+
3453
+ 00:51:50.920 --> 00:51:56.359
3454
+ instead is when you have a new segment
3455
+
3456
+ 00:51:53.839 --> 00:51:58.960
3457
+ that you want to do backprop
3458
+
3459
+ 00:51:56.359 --> 00:52:01.200
3460
+ into um you have a new segment that you
3461
+
3462
+ 00:51:58.960 --> 00:52:03.960
3463
+ want to basically train over you also
3464
+
3465
+ 00:52:01.200 --> 00:52:06.400
3466
+ attend to all of the previous tokens in
3467
+
3468
+ 00:52:03.960 --> 00:52:07.640
3469
+ the previous segment but you don't do
3470
+
3471
+ 00:52:06.400 --> 00:52:10.319
3472
+ back propop into
3473
+
3474
+ 00:52:07.640 --> 00:52:12.079
3475
+ them so this is essentially truncated
3476
+
3477
+ 00:52:10.319 --> 00:52:14.480
3478
+ backpropagation through time from the
3479
+
3480
+ 00:52:12.079 --> 00:52:17.760
3481
+ Transformer
3482
+
3483
+ 00:52:14.480 --> 00:52:19.520
3484
+ perspective this is also really nice
3485
+
3486
+ 00:52:17.760 --> 00:52:21.200
3487
+ because what it allows you to do is if
3488
+
3489
+ 00:52:19.520 --> 00:52:25.880
3490
+ you have a multi-layer
3491
+
3492
+ 00:52:21.200 --> 00:52:27.720
3493
+ Transformer it allows you to attend far
3494
+
3495
+ 00:52:25.880 --> 00:52:30.520
3496
+ back so if you look at the last layer
3497
+
3498
+ 00:52:27.720 --> 00:52:33.520
3499
+ it's attending um to things in the
3500
+
3501
+ 00:52:30.520 --> 00:52:36.599
3502
+ previous context window but the second
3503
+
3504
+ 00:52:33.520 --> 00:52:39.760
3505
+ to last layer is attending to things in
3506
+
3507
+ 00:52:36.599 --> 00:52:41.520
3508
+ the um not just one context window
3509
+
3510
+ 00:52:39.760 --> 00:52:44.079
3511
+ before but multiple context windows
3512
+
3513
+ 00:52:41.520 --> 00:52:45.760
3514
+ before and actually this allows you to
3515
+
3516
+ 00:52:44.079 --> 00:52:47.880
3517
+ very effectively attend a very long
3518
+
3519
+ 00:52:45.760 --> 00:52:51.720
3520
+ context because each time kind of the
3521
+
3522
+ 00:52:47.880 --> 00:52:54.799
3523
+ context expands in an exponential
3524
+
3525
+ 00:52:51.720 --> 00:52:56.520
3526
+ manner so um recently there's a popular
3527
+
3528
+ 00:52:54.799 --> 00:52:57.799
3529
+ model called Mistral that I'm sure a lot
3530
+
3531
+ 00:52:56.520 --> 00:52:59.480
3532
+ of people have heard about and this is
3533
+
3534
+ 00:52:57.799 --> 00:53:01.920
3535
+ using sliding window attention which is
3536
+
3537
+ 00:52:59.480 --> 00:53:04.160
3538
+ essentially the same mechanism proposed
3539
+
3540
+ 00:53:01.920 --> 00:53:09.240
3541
+ by Transformer-XL so this method is
3542
+
3543
+ 00:53:04.160 --> 00:53:09.240
3544
+ still uh used in uh very practical
3545
+
3546
+ 00:53:10.400 --> 00:53:17.359
3547
+ systems another paper that has been
3548
+
3549
+ 00:53:13.440 --> 00:53:19.319
3550
+ pretty influential in this general area
3551
+
3552
+ 00:53:17.359 --> 00:53:21.079
3553
+ is something called sparse
3554
+
3555
+ 00:53:19.319 --> 00:53:23.359
3556
+ Transformers and the way sparse
3557
+
3558
+ 00:53:21.079 --> 00:53:25.960
3559
+ Transformers work is instead of
3560
+
3561
+ 00:53:23.359 --> 00:53:29.520
3562
+ attending to every single previous state
3563
+
3564
+ 00:53:25.960 --> 00:53:32.640
3565
+ you attend to every n previous
3566
+
3567
+ 00:53:29.520 --> 00:53:34.599
3568
+ States and what this allows you to do is
3569
+
3570
+ 00:53:32.640 --> 00:53:37.119
3571
+ this allows you to essentially create
3572
+
3573
+ 00:53:34.599 --> 00:53:40.319
3574
+ something like the strided uh
3575
+
3576
+ 00:53:37.119 --> 00:53:42.079
3577
+ convolutions or um pyramidal recurrent
3578
+
3579
+ 00:53:40.319 --> 00:53:45.520
3580
+ neural networks that I talked about
3581
+
3582
+ 00:53:42.079 --> 00:53:49.760
3583
+ earlier um so what this looks like
3584
+
3585
+ 00:53:45.520 --> 00:53:51.079
3586
+ essentially is you have um this like if
3587
+
3588
+ 00:53:49.760 --> 00:53:54.880
3589
+ you have a particular state it might
3590
+
3591
+ 00:53:51.079 --> 00:53:56.480
3592
+ attend to all of the previous N tokens
3593
+
3594
+ 00:53:54.880 --> 00:54:00.240
3595
+ but then it
3596
+
3597
+ 00:53:56.480 --> 00:54:04.400
3598
+ also attends to all of the
3599
+
3600
+ 00:54:00.240 --> 00:54:06.880
3601
+ previous um kind of M chunks so you kind
3602
+
3603
+ 00:54:04.400 --> 00:54:08.920
3604
+ of have a combination of local and
3605
+
3606
+ 00:54:06.880 --> 00:54:11.640
3607
+ Global
3608
+
3609
+ 00:54:08.920 --> 00:54:14.760
3610
+ attention or not local and Global but
3611
+
3612
+ 00:54:11.640 --> 00:54:16.760
3613
+ local and kind of longer range attention
3614
+
3615
+ 00:54:14.760 --> 00:54:18.760
3616
+ and this can be very effective because
3617
+
3618
+ 00:54:16.760 --> 00:54:22.319
3619
+ you can attend to you know much longer
3620
+
3621
+ 00:54:18.760 --> 00:54:24.079
3622
+ context with a minimal increase in a
3623
+
3624
+ 00:54:22.319 --> 00:54:26.520
3625
+ computational
3626
+
3627
+ 00:54:24.079 --> 00:54:28.720
3628
+ complexity
3629
+
3630
+ 00:54:26.520 --> 00:54:31.160
3631
+ so another method that's a little bit
3632
+
3633
+ 00:54:28.720 --> 00:54:32.960
3634
+ like this uh or it's very similar in
3635
+
3636
+ 00:54:31.160 --> 00:54:34.359
3637
+ spirit but slightly different in
3638
+
3639
+ 00:54:32.960 --> 00:54:35.599
3640
+ implementation is something called the
3641
+
3642
+ 00:54:34.359 --> 00:54:37.520
3643
+ compressive
3644
+
3645
+ 00:54:35.599 --> 00:54:40.400
3646
+ Transformer and in the compressive
3647
+
3648
+ 00:54:37.520 --> 00:54:43.000
3649
+ Transformer you also have this idea of a
3650
+
3651
+ 00:54:40.400 --> 00:54:44.319
3652
+ local memory and then a longer term
3653
+
3654
+ 00:54:43.000 --> 00:54:47.200
3655
+ compressed
3656
+
3657
+ 00:54:44.319 --> 00:54:50.799
3658
+ memory but you have an explicit
3659
+
3660
+ 00:54:47.200 --> 00:54:54.319
3661
+ compression step that
3662
+
3663
+ 00:54:50.799 --> 00:54:58.079
3664
+ directly essentially generates this uh
3665
+
3666
+ 00:54:54.319 --> 00:55:00.960
3667
+ compressed memory itself and so this is a
3668
+
3669
+ 00:54:58.079 --> 00:55:04.119
3670
+ little bit more flexible I guess it
3671
+
3672
+ 00:55:00.960 --> 00:55:06.280
3673
+ allows you to take all of the you know
3674
+
3675
+ 00:55:04.119 --> 00:55:09.000
3676
+ relevant things from your local memory
3677
+
3678
+ 00:55:06.280 --> 00:55:12.000
3679
+ and compress it down so it's another
3680
+
3681
+ 00:55:09.000 --> 00:55:12.000
3682
+ method that's worth thinking
3683
+
3684
+ 00:55:12.760 --> 00:55:18.400
3685
+ about finally uh there are some very
3686
+
3687
+ 00:55:15.799 --> 00:55:20.200
3688
+ interesting methods that do low rank
3689
+
3690
+ 00:55:18.400 --> 00:55:23.039
3691
+ approximations for
3692
+
3693
+ 00:55:20.200 --> 00:55:25.920
3694
+ Transformers and so calculating the
3695
+
3696
+ 00:55:23.039 --> 00:55:29.119
3697
+ attention Matrix is expensive but this
3698
+
3699
+ 00:55:25.920 --> 00:55:31.640
3700
+ is a matrix and because it's a matrix we
3701
+
3702
+ 00:55:29.119 --> 00:55:32.640
3703
+ can also approximate it with a lower
3704
+
3705
+ 00:55:31.640 --> 00:55:35.480
3706
+ rank
3707
+
3708
+ 00:55:32.640 --> 00:55:38.559
3709
+ Matrix and there's a couple methods that
3710
+
3711
+ 00:55:35.480 --> 00:55:40.599
3712
+ do things uh like this uh the first one
3713
+
3714
+ 00:55:38.559 --> 00:55:42.680
3715
+ is something called Linformer which
3716
+
3717
+ 00:55:40.599 --> 00:55:44.520
3718
+ adds low rank linear projections into
3719
+
3720
+ 00:55:42.680 --> 00:55:47.319
3721
+ the model at appropriate
3722
+
3723
+ 00:55:44.520 --> 00:55:50.359
3724
+ places and um there's another one called
3725
+
3726
+ 00:55:47.319 --> 00:55:52.200
3727
+ Nyströmformer which approximates using the Nyström
3728
+
3729
+ 00:55:50.359 --> 00:55:54.440
3730
+ method which is based on sampling
3731
+
3732
+ 00:55:52.200 --> 00:55:56.520
3733
+ Landmark points but basically the
3734
+
3735
+ 00:55:54.440 --> 00:56:00.319
3736
+ general idea behind this is normally
3737
+
3738
+ 00:55:56.520 --> 00:56:03.400
3739
+ we do this kind of softmax over you know
3740
+
3741
+ 00:56:00.319 --> 00:56:06.240
3742
+ a very large attention Vector but
3743
+
3744
+ 00:56:03.400 --> 00:56:08.440
3745
+ instead we can approximate the softmax
3746
+
3747
+ 00:56:06.240 --> 00:56:11.520
3748
+ by having some low rank vectors kind of
3749
+
3750
+ 00:56:08.440 --> 00:56:12.799
3751
+ like what we used in LoRA and uh
3752
+
3753
+ 00:56:11.520 --> 00:56:16.440
3754
+ nonetheless get a reasonable
3755
+
3756
+ 00:56:12.799 --> 00:56:16.440
3757
+ approximation of the softmax used
3758
+
3759
+ 00:56:17.799 --> 00:56:24.039
3760
+ in attention okay so we're nearing the end of
3761
+
3762
+ 00:56:21.520 --> 00:56:26.000
3763
+ what I want to talk about today and
3764
+
3765
+ 00:56:24.039 --> 00:56:29.720
3766
+ finally the thing that I'd like to talk
3767
+
3768
+ 00:56:26.000 --> 00:56:33.240
3769
+ about is benchmarks for long-context models
3770
+
3771
+ 00:56:29.720 --> 00:56:35.000
3772
+ and there's a few benchmarks one very
3773
+
3774
+ 00:56:33.240 --> 00:56:37.359
3775
+ well-known one is something called long
3776
+
3777
+ 00:56:35.000 --> 00:56:40.599
3778
+ range Arena this is a composite
3779
+
3780
+ 00:56:37.359 --> 00:56:43.000
3781
+ Benchmark containing mostly non NLP
3782
+
3783
+ 00:56:40.599 --> 00:56:45.280
3784
+ tasks and it's definitely used for long
3785
+
3786
+ 00:56:43.000 --> 00:56:46.760
3787
+ sequence modeling but the results on the
3788
+
3789
+ 00:56:45.280 --> 00:56:49.400
3790
+ long range Arena actually tend to
3791
+
3792
+ 00:56:46.760 --> 00:56:51.599
3793
+ diverge uh somewhat from the results
3794
+
3795
+ 00:56:49.400 --> 00:56:54.440
3796
+ that you get for longdistance language
3797
+
3798
+ 00:56:51.599 --> 00:56:56.520
3799
+ modeling so in addition to this another
3800
+
3801
+ 00:56:54.440 --> 00:56:58.400
3802
+ benchmark that I uh personally like and
3803
+
3804
+ 00:56:56.520 --> 00:57:01.960
3805
+ have used a bit is something called
3806
+
3807
+ 00:56:58.400 --> 00:57:05.720
3808
+ Scrolls which uh combines together a
3809
+
3810
+ 00:57:01.960 --> 00:57:07.960
3811
+ whole bunch of kind of QA style or
3812
+
3813
+ 00:57:05.720 --> 00:57:10.920
3814
+ summarization style tasks that have very
3815
+
3816
+ 00:57:07.960 --> 00:57:13.280
3817
+ long contexts including over narratives
3818
+
3819
+ 00:57:10.920 --> 00:57:15.680
3820
+ or books or government reports or other
3821
+
3822
+ 00:57:13.280 --> 00:57:17.280
3823
+ things like that so you can also take a
3824
+
3825
+ 00:57:15.680 --> 00:57:20.680
3826
+ look at this if you're interested in
3827
+
3828
+ 00:57:17.280 --> 00:57:20.680
3829
+ kind of benchmarking longer range
3830
+
3831
+ 00:57:21.839 --> 00:57:28.280
3832
+ models okay the final thing I'd like to
3833
+
3834
+ 00:57:24.559 --> 00:57:30.280
3835
+ talk about is now that we have retriever
3836
+
3837
+ 00:57:28.280 --> 00:57:31.680
3838
+ models we have reader models we maybe
3839
+
3840
+ 00:57:30.280 --> 00:57:34.000
3841
+ even have reader models that can
3842
+
3843
+ 00:57:31.680 --> 00:57:35.520
3844
+ effectively use very long contexts like
3845
+
3846
+ 00:57:34.000 --> 00:57:37.880
3847
+ the ones that we retrieve over whole
3848
+
3849
+ 00:57:35.520 --> 00:57:39.240
3850
+ documents how do we effectively use them
3851
+
3852
+ 00:57:37.880 --> 00:57:43.640
3853
+ in our
3854
+
3855
+ 00:57:39.240 --> 00:57:46.680
3856
+ models so there was a very nice paper um
3857
+
3858
+ 00:57:43.640 --> 00:57:48.880
3859
+ by Nelson Liu at Stanford about a
3860
+
3861
+ 00:57:46.680 --> 00:57:51.160
3862
+ phenomenon that was called lost in the
3863
+
3864
+ 00:57:48.880 --> 00:57:53.079
3865
+ middle and basically what it does is it
3866
+
3867
+ 00:57:51.160 --> 00:57:55.119
3868
+ demonstrates that many many different
3869
+
3870
+ 00:57:53.079 --> 00:57:57.720
3871
+ models including state-of-the-art
3872
+
3873
+ 00:57:55.119 --> 00:58:00.799
3874
+ models pay less attention to things in
3875
+
3876
+ 00:57:57.720 --> 00:58:03.960
3877
+ the middle of long context windows and
3878
+
3879
+ 00:58:00.799 --> 00:58:06.760
3880
+ so if we have an answer and we put it in
3881
+
3882
+ 00:58:03.960 --> 00:58:09.200
3883
+ you know the first position in
3884
+
3885
+ 00:58:06.760 --> 00:58:12.280
3886
+ you know a concatenated context or the
3887
+
3888
+ 00:58:09.200 --> 00:58:13.799
3889
+ 20th position in a concatenated context
3890
+
3891
+ 00:58:12.280 --> 00:58:15.240
3892
+ it tends to attend more to the ones at
3893
+
3894
+ 00:58:13.799 --> 00:58:18.359
3895
+ the beginning or the
3896
+
3897
+ 00:58:15.240 --> 00:58:19.480
3898
+ end in contrast the ones in the middle
3899
+
3900
+ 00:58:18.359 --> 00:58:22.760
3901
+ kind of get
3902
+
3903
+ 00:58:19.480 --> 00:58:26.680
3904
+ lost hence the name lost in the middle
3905
+
3906
+ 00:58:22.760 --> 00:58:29.520
3907
+ and the problem with this is you know if
3908
+
3909
+ 00:58:26.680 --> 00:58:32.480
3910
+ we are doing something like retrieval and
3911
+
3912
+ 00:58:29.520 --> 00:58:34.160
3913
+ reading then that's maybe not such a
3914
+
3915
+ 00:58:32.480 --> 00:58:35.680
3916
+ huge problem because we could just put
3917
+
3918
+ 00:58:34.160 --> 00:58:37.680
3919
+ you know the highest scoring documents
3920
+
3921
+ 00:58:35.680 --> 00:58:39.920
3922
+ at the beginning that might even be more
3923
+
3924
+ 00:58:37.680 --> 00:58:42.440
3925
+ effective than uh you know concatenating
3926
+
3927
+ 00:58:39.920 --> 00:58:44.160
3928
+ lots of low scoring documents together
3929
+
3930
+ 00:58:42.440 --> 00:58:45.559
3931
+ but if we want to read a really long
3932
+
3933
+ 00:58:44.160 --> 00:58:48.839
3934
+ document and synthesize something
3935
+
3936
+ 00:58:45.559 --> 00:58:52.200
3937
+ without doing kind of another uh scoring
3938
+
3939
+ 00:58:48.839 --> 00:58:54.200
3940
+ step uh that can be an issue and also
3941
+
3942
+ 00:58:52.200 --> 00:58:56.359
3943
+ you know our retriever is not perfect so
3944
+
3945
+ 00:58:54.200 --> 00:58:58.799
3946
+ we would like the reader
3947
+
3948
+ 00:58:56.359 --> 00:59:00.520
3949
+ model to do a good job with the outputs
3950
+
3951
+ 00:58:58.799 --> 00:59:04.839
3952
+ that it
3953
+
3954
+ 00:59:00.520 --> 00:59:06.359
3955
+ has so there are methods uh to ensure
3956
+
3957
+ 00:59:04.839 --> 00:59:09.440
3958
+ use of relevant
3959
+
3960
+ 00:59:06.359 --> 00:59:12.119
3961
+ context so of course better retrievers
3962
+
3963
+ 00:59:09.440 --> 00:59:14.880
3964
+ make more relevant context you can do
3965
+
3966
+ 00:59:12.119 --> 00:59:16.240
3967
+ you know reranking or other things like
3968
+
3969
+ 00:59:14.880 --> 00:59:17.280
3970
+ that and only include the context that
3971
+
3972
+ 00:59:16.240 --> 00:59:19.680
3973
+ looks most
3974
+
3975
+ 00:59:17.280 --> 00:59:22.880
3976
+ relevant um or you know refine your
3977
+
3978
+ 00:59:19.680 --> 00:59:25.200
3979
+ reader model but there's also methods
3980
+
3981
+ 00:59:22.880 --> 00:59:28.720
3982
+ that can decide whether context should
3983
+
3984
+ 00:59:25.200 --> 00:59:32.400
3985
+ be used in the first place so um there
3986
+
3987
+ 00:59:28.720 --> 00:59:35.440
3988
+ are methods uh to decide whether to use
3989
+
3990
+ 00:59:32.400 --> 00:59:37.559
3991
+ whether to include passages or not and
3992
+
3993
+ 00:59:35.440 --> 00:59:39.920
3994
+ also uh recently we proposed a method to
3995
+
3996
+ 00:59:37.559 --> 00:59:42.640
3997
+ filter down to parts of retrieved
3998
+
3999
+ 00:59:39.920 --> 00:59:44.920
4000
+ passages uh to have only appropriate
4001
+
4002
+ 00:59:42.640 --> 00:59:47.480
4003
+ content and this is a model uh that we
4004
+
4005
+ 00:59:44.920 --> 00:59:49.319
4006
+ called FilCo it basically filters the
4007
+
4008
+ 00:59:47.480 --> 00:59:52.160
4009
+ context down to the most relevant
4010
+
4011
+ 00:59:49.319 --> 00:59:53.920
4012
+ content that we think is appropriate and
4013
+
4014
+ 00:59:52.160 --> 00:59:56.960
4015
+ that allows us to get better results
4016
+
4017
+ 00:59:53.920 --> 00:59:56.960
4018
+ when it's fed to the
4019
+
4020
+ 00:59:57.079 --> 01:00:03.640
4021
+ generator so that's all I have for today
4022
+
4023
+ 01:00:00.319 --> 01:00:06.200
4024
+ um thank you for watching the video and
4025
+
4026
+ 01:00:03.640 --> 01:00:08.599
4027
+ for people in the class I'll be happy to
4028
+
4029
+ 01:00:06.200 --> 01:00:13.079
4030
+ take questions on Piazza or during the
4031
+
4032
+ 01:00:08.599 --> 01:00:13.079
4033
+ office hours that I had planned thanks a
4034
+
4035
+ 01:00:15.319 --> 01:00:18.319
4036
+ lot
CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/CMU Advanced NLP 2024 (11) Distillation Quantization and Pruning.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a58b185362d8bffc64edfe4141f67f0804b6efa04d4bfbba63316ce1b5dd8fe
3
+ size 65064579
CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=s9yyH3RPhdM",
3
+ "title": "CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning"
4
+ }
CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (12) Reinforcement Learning/CMU Advanced NLP 2024 (12) Reinforcement Learning.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bb70e0fa6c406fd7c2d8d736e10c2e652b52b3e65757930cea4fb235a50ffb3
3
+ size 72409479
CMU Advanced NLP 2024 (12) Reinforcement Learning/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=NX0l1M0NWPM",
3
+ "title": "CMU Advanced NLP 2024 (12) Reinforcement Learning"
4
+ }
CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (13) Debugging and Interpretation/CMU Advanced NLP 2024 (13) Debugging and Interpretation.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7de94ea1f378595ae1d957526a4f6aa5a2c75c49db3b40a36e1f0e5ab2a17152
3
+ size 82237142
CMU Advanced NLP 2024 (13) Debugging and Interpretation/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=c4UwOq2J9mQ",
3
+ "title": "CMU Advanced NLP 2024 (13) Debugging and Interpretation"
4
+ }
CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22c0ccf96269123bc7e4f6390f0c0220a4e05848e941a58ed5e57085ae2d8432
3
+ size 80561402
CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=MueCRSZ3RQ0",
3
+ "title": "CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts"
4
+ }
CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5c3f854f59933275f3273f0b77753dee254d48e166d37f6f3190a5423767201
3
+ size 79708142
CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=2rOSrDtg7HQ",
3
+ "title": "CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models"
4
+ }
CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (17) Code Generation/CMU Advanced NLP 2024 (17) Code Generation.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7fcb735ceea4c24db426084df97f450a16142c64c4736ab6e403bb13741c8350
3
+ size 63648833
CMU Advanced NLP 2024 (17) Code Generation/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=bN2ZZieBXsE",
3
+ "title": "CMU Advanced NLP 2024 (17) Code Generation"
4
+ }
CMU Advanced NLP 2024 (17) Code Generation/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (17) Code Generation/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (18) Knowledge and Language Models/CMU Advanced NLP 2024 (18) Knowledge and Language Models.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8b246f116f9c543f9cc995a334954a8947064a1bc3950d9acdc34b8bf42b8771
3
+ size 74113017
CMU Advanced NLP 2024 (18) Knowledge and Language Models/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=IwEYCbdgJ9U",
3
+ "title": "CMU Advanced NLP 2024 (18) Knowledge and Language Models"
4
+ }
CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (2) Word Representation and Text Classification/CMU Advanced NLP 2024 (2) Word Representation and Text Classification.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35d3a9cb9cc7d1aedba6a0742b46cfd3e24c4999c46b59ae1df22321775c9102
3
+ size 82455565
CMU Advanced NLP 2024 (2) Word Representation and Text Classification/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=wa61zdcKWyU",
3
+ "title": "CMU Advanced NLP 2024 (2) Word Representation and Text Classification"
4
+ }
CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (20) Tool Use and Language Agents/CMU Advanced NLP 2024 (20) Tool Use and Language Agents.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:639a8f338d12c0f77948889dffe7bb1bbe4e9ad4cb6f4aa806babd15d679af88
3
+ size 83218086
CMU Advanced NLP 2024 (20) Tool Use and Language Agents/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=d0QSnLjlgzc",
3
+ "title": "CMU Advanced NLP 2024 (20) Tool Use and Language Agents"
4
+ }
CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.srt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.vtt ADDED
The diff for this file is too large to render. See raw diff
 
CMU Advanced NLP 2024 (21) Complex Reasoning/CMU Advanced NLP 2024 (21) Complex Reasoning.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:10a2dabeb41186cd432caae205d3e22b8ad34e91253d174abfdadaa82ea581f2
3
+ size 56293331
CMU Advanced NLP 2024 (21) Complex Reasoning/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=mPd2hFmzjWE",
3
+ "title": "CMU Advanced NLP 2024 (21) Complex Reasoning"
4
+ }
CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.srt ADDED
@@ -0,0 +1,5007 @@
1
+ 1
2
+ 00:00:00,280 --> 00:00:05,120
3
+ so I'd like to go ahead with uh complex
4
+
5
+ 2
6
+ 00:00:02,399 --> 00:00:08,719
7
+ reasoning and we've talked a little bit
8
+
9
+ 3
10
+ 00:00:05,120 --> 00:00:10,719
11
+ about uh reasoning in language models uh
12
+
13
+ 4
14
+ 00:00:08,719 --> 00:00:12,160
15
+ up until now and so I'm going to be
16
+
17
+ 5
18
+ 00:00:10,719 --> 00:00:15,280
19
+ talking about stuff that we didn't talk
20
+
21
+ 6
22
+ 00:00:12,160 --> 00:00:17,240
23
+ about yet um this might be a little bit
24
+
25
+ 7
26
+ 00:00:15,280 --> 00:00:19,199
27
+ short because of that because I'm not
28
+
29
+ 8
30
+ 00:00:17,240 --> 00:00:20,640
31
+ talking about like programs because we
32
+
33
+ 9
34
+ 00:00:19,199 --> 00:00:22,080
35
+ talked about that in the code generation
36
+
37
+ 10
38
+ 00:00:20,640 --> 00:00:24,199
39
+ class and we already talked a little bit
40
+
41
+ 11
42
+ 00:00:22,080 --> 00:00:26,320
43
+ about some of the basics here but um you
44
+
45
+ 12
46
+ 00:00:24,199 --> 00:00:30,119
47
+ know if we have time at the end I'd be
48
+
49
+ 13
50
+ 00:00:26,320 --> 00:00:30,840
51
+ happy to discuss free form also so what
52
+
53
+ 14
54
+ 00:00:30,119 --> 00:00:34,320
55
+ is
56
+
57
+ 15
58
+ 00:00:30,840 --> 00:00:35,920
59
+ reasoning um the basic idea is using
60
+
61
+ 16
62
+ 00:00:34,320 --> 00:00:37,680
63
+ evidence and logic to arrive at
64
+
65
+ 17
66
+ 00:00:35,920 --> 00:00:40,200
67
+ conclusions and make
68
+
69
+ 18
70
+ 00:00:37,680 --> 00:00:43,760
71
+ judgments
72
+
73
+ 19
74
+ 00:00:40,200 --> 00:00:48,039
75
+ and what is it in language models is a
76
+
77
+ 20
78
+ 00:00:43,760 --> 00:00:49,399
79
+ little bit um you know less clear uh but
80
+
81
+ 21
82
+ 00:00:48,039 --> 00:00:52,680
83
+ if we talk about it kind of like from
84
+
85
+ 22
86
+ 00:00:49,399 --> 00:00:56,280
87
+ the philosophical standpoint um there
88
+
89
+ 23
90
+ 00:00:52,680 --> 00:00:58,399
91
+ are two varieties of this one is formal
92
+
93
+ 24
94
+ 00:00:56,280 --> 00:01:01,680
95
+ uh reasoning and formal reasoning is
96
+
97
+ 25
98
+ 00:00:58,399 --> 00:01:04,239
99
+ mostly based on strict truth values so
100
+
101
+ 26
102
+ 00:01:01,680 --> 00:01:05,920
103
+ it's kind of like um you can definitely
104
+
105
+ 27
106
+ 00:01:04,239 --> 00:01:08,360
107
+ say this is true you can definitely say
108
+
109
+ 28
110
+ 00:01:05,920 --> 00:01:11,680
111
+ this is not true
112
+
113
+ 29
114
+ 00:01:08,360 --> 00:01:13,799
115
+ and in real life there's very little
116
+
117
+ 30
118
+ 00:01:11,680 --> 00:01:15,759
119
+ actual formal reasoning outside of like
120
+
121
+ 31
122
+ 00:01:13,799 --> 00:01:17,960
123
+ for example mathematics or maybe you
124
+
125
+ 32
126
+ 00:01:15,759 --> 00:01:20,240
127
+ know algorithms computer science and
128
+
129
+ 33
130
+ 00:01:17,960 --> 00:01:21,759
131
+ other things like that um and then
132
+
133
+ 34
134
+ 00:01:20,240 --> 00:01:23,240
135
+ separately from that we have informal
136
+
137
+ 35
138
+ 00:01:21,759 --> 00:01:27,040
139
+ reasoning based on experience and
140
+
141
+ 36
142
+ 00:01:23,240 --> 00:01:30,439
143
+ intuition and actually um this is this
144
+
145
+ 37
146
+ 00:01:27,040 --> 00:01:32,360
147
+ was uh rather elusive uh until
148
+
149
+ 38
150
+ 00:01:30,439 --> 00:01:33,720
151
+ large language models you know people
152
+
153
+ 39
154
+ 00:01:32,360 --> 00:01:35,560
155
+ were working on it but it was really
156
+
157
+ 40
158
+ 00:01:33,720 --> 00:01:38,119
159
+ hard and this is like one of the big
160
+
161
+ 41
162
+ 00:01:35,560 --> 00:01:41,479
163
+ breakthroughs I think of the past few
164
+
165
+ 42
166
+ 00:01:38,119 --> 00:01:46,799
167
+ years um I should note that this uh
168
+
169
+ 43
170
+ 00:01:41,479 --> 00:01:48,520
171
+ paper here uh Huang and Chang is a kind of
172
+
173
+ 44
174
+ 00:01:46,799 --> 00:01:50,119
175
+ review survey paper of reasoning in
176
+
177
+ 45
178
+ 00:01:48,520 --> 00:01:51,520
179
+ large language models it's on the
180
+
181
+ 46
182
+ 00:01:50,119 --> 00:01:54,719
183
+ references so if you're interested you
184
+
185
+ 47
186
+ 00:01:51,520 --> 00:01:57,600
187
+ can take a look at that too um but
188
+
189
+ 48
190
+ 00:01:54,719 --> 00:01:59,200
191
+ there's three kinds of reasoning uh
192
+
193
+ 49
194
+ 00:01:57,600 --> 00:02:00,840
195
+ there's many kinds of reasoning but
196
+
197
+ 50
198
+ 00:01:59,200 --> 00:02:03,280
199
+ there's three kinds of reasoning in
200
+
201
+ 51
202
+ 00:02:00,840 --> 00:02:06,240
203
+ particular that I'd like to talk about
204
+
205
+ 52
206
+ 00:02:03,280 --> 00:02:08,840
207
+ um from the point of view of today and
208
+
209
+ 53
210
+ 00:02:06,240 --> 00:02:10,360
211
+ the first one is uh deductive reasoning
212
+
213
+ 54
214
+ 00:02:08,840 --> 00:02:13,080
215
+ and deductive reasoning is basically
216
+
217
+ 55
218
+ 00:02:10,360 --> 00:02:16,040
219
+ using logic to go from a premise to a
220
+
221
+ 56
222
+ 00:02:13,080 --> 00:02:18,440
223
+ conclusion and this is largely what
224
+
225
+ 57
226
+ 00:02:16,040 --> 00:02:19,879
227
+ people not entirely but largely what
228
+
229
+ 58
230
+ 00:02:18,440 --> 00:02:22,400
231
+ people talk about when they think about
232
+
233
+ 59
234
+ 00:02:19,879 --> 00:02:25,879
235
+ formal reasoning and so basically you
236
+
237
+ 60
238
+ 00:02:22,400 --> 00:02:28,640
239
+ have several premises um like all
240
+
241
+ 61
242
+ 00:02:25,879 --> 00:02:32,120
243
+ mammals have kidneys and all whales are
244
+
245
+ 62
246
+ 00:02:28,640 --> 00:02:35,239
247
+ mammals and then from this uh you can go
248
+
249
+ 63
250
+ 00:02:32,120 --> 00:02:35,239
251
+ to all whales have
252
+
253
+ 64
254
+ 00:02:35,440 --> 00:02:40,640
255
+ kidneys then separately there's
256
+
257
+ 65
258
+ 00:02:38,000 --> 00:02:44,040
259
+ inductive reasoning and inductive
260
+
261
+ 66
262
+ 00:02:40,640 --> 00:02:46,040
263
+ reasoning is um from
264
+
265
+ 67
266
+ 00:02:44,040 --> 00:02:48,480
267
+ observation uh predict a likely
268
+
269
+ 68
270
+ 00:02:46,040 --> 00:02:50,080
271
+ conclusion or predict a likely kind of
272
+
273
+ 69
274
+ 00:02:48,480 --> 00:02:53,640
275
+ generalized
276
+
277
+ 70
278
+ 00:02:50,080 --> 00:02:55,360
279
+ conclusion um so this is one example uh
280
+
281
+ 71
282
+ 00:02:53,640 --> 00:02:56,920
283
+ when we see a creature with wings it is
284
+
285
+ 72
286
+ 00:02:55,360 --> 00:02:58,599
287
+ usually a bird we see a creature with
288
+
289
+ 73
290
+ 00:02:56,920 --> 00:03:00,400
291
+ wings the creature is likely to be a
292
+
293
+ 74
294
+ 00:02:58,599 --> 00:03:02,879
295
+ bird so it's kind of this is kind of
296
+
297
+ 75
298
+ 00:03:00,400 --> 00:03:05,319
299
+ like a soft version of deduction another
300
+
301
+ 76
302
+ 00:03:02,879 --> 00:03:07,440
303
+ common thing is like every single
304
+
305
+ 77
306
+ 00:03:05,319 --> 00:03:10,760
307
+ creature I have seen with wings is a
308
+
309
+ 78
310
+ 00:03:07,440 --> 00:03:12,480
311
+ bird and then you can kind of um induce
312
+
313
+ 79
314
+ 00:03:10,760 --> 00:03:16,799
315
+ that all
316
+
317
+ 80
318
+ 00:03:12,480 --> 00:03:19,159
319
+ uh like all uh creatures with wings are
320
+
321
+ 81
322
+ 00:03:16,799 --> 00:03:21,120
323
+ birds but that might not be true it's
324
+
325
+ 82
326
+ 00:03:19,159 --> 00:03:23,879
327
+ not necessarily logically entailed but
328
+
329
+ 83
330
+ 00:03:21,120 --> 00:03:27,560
331
+ you you make that kind
332
+
333
+ 84
334
+ 00:03:23,879 --> 00:03:31,000
335
+ of logical conclusion uh without it
336
+
337
+ 85
338
+ 00:03:27,560 --> 00:03:32,840
339
+ being formally uh correct or verifiable
340
+
341
+ 86
342
+ 00:03:31,000 --> 00:03:34,720
343
+ and then the final one is abductive
344
+
345
+ 87
346
+ 00:03:32,840 --> 00:03:38,000
347
+ reasoning and so this is from an
348
+
349
+ 88
350
+ 00:03:34,720 --> 00:03:40,760
351
+ observation we predict the most likely
352
+
353
+ 89
354
+ 00:03:38,000 --> 00:03:42,760
355
+ explanation and so for example if we
356
+
357
+ 90
358
+ 00:03:40,760 --> 00:03:44,480
359
+ have something like the car cannot start
360
+
361
+ 91
362
+ 00:03:42,760 --> 00:03:48,319
363
+ and there is a puddle of liquid under
364
+
365
+ 92
366
+ 00:03:44,480 --> 00:03:50,200
367
+ the engine um then we might have a
368
+
369
+ 93
370
+ 00:03:48,319 --> 00:03:53,360
371
+ likely explanation that the car has a
372
+
373
+ 94
374
+ 00:03:50,200 --> 00:03:55,280
375
+ leak in the radiator so we're going from
376
+
377
+ 95
378
+ 00:03:53,360 --> 00:03:58,760
379
+ kind of uh the
380
+
381
+ 96
382
+ 00:03:55,280 --> 00:04:00,879
383
+ car you know these these things and then
384
+
385
+ 97
386
+ 00:03:58,760 --> 00:04:02,280
387
+ we try to predict the reason why this
388
+
389
+ 98
390
+ 00:04:00,879 --> 00:04:05,040
391
+ happens so we're trying to predict like
392
+
393
+ 99
394
+ 00:04:02,280 --> 00:04:07,360
395
+ reverse causal links
396
+
397
+ 100
398
+ 00:04:05,040 --> 00:04:08,480
399
+ essentially um there's other types of re
400
+
401
+ 101
402
+ 00:04:07,360 --> 00:04:10,400
403
+ reasoning that I'm not going to talk
404
+
405
+ 102
406
+ 00:04:08,480 --> 00:04:12,159
407
+ about as much like analogical reasoning
408
+
409
+ 103
410
+ 00:04:10,400 --> 00:04:14,079
411
+ and and things like this but uh these
412
+
413
+ 104
414
+ 00:04:12,159 --> 00:04:15,440
415
+ are the three main ones I want to talk
416
+
417
+ 105
418
+ 00:04:14,079 --> 00:04:17,720
419
+ about
420
+
421
+ 106
422
+ 00:04:15,440 --> 00:04:22,040
423
+ today uh one thing I should point out is
424
+
425
+ 107
426
+ 00:04:17,720 --> 00:04:24,400
427
+ like even in philosophy or you know
428
+
429
+ 108
430
+ 00:04:22,040 --> 00:04:26,240
431
+ like even when you read descriptions
432
+
433
+ 109
434
+ 00:04:24,400 --> 00:04:29,280
435
+ about these various types of reasoning
436
+
437
+ 110
438
+ 00:04:26,240 --> 00:04:31,880
439
+ the types are a little bit vague so um
440
+
441
+ 111
442
+ 00:04:29,280 --> 00:04:35,280
443
+ take these is like
444
+
445
+ 112
446
+ 00:04:31,880 --> 00:04:37,240
447
+ general not you know General directions
448
+
449
+ 113
450
+ 00:04:35,280 --> 00:04:39,400
451
+ and not strict rules because like which
452
+
453
+ 114
454
+ 00:04:37,240 --> 00:04:42,120
455
+ falls on under which category also can
456
+
457
+ 115
458
+ 00:04:39,400 --> 00:04:44,880
459
+ be a little bit uh you know unclear uh
460
+
461
+ 116
462
+ 00:04:42,120 --> 00:04:44,880
463
+ according to various
464
+
465
+ 117
466
+ 00:04:45,479 --> 00:04:53,440
467
+ definitions cool um so first before
468
+
469
+ 118
470
+ 00:04:49,840 --> 00:04:55,720
471
+ getting into formal reasoning methods
472
+
473
+ 119
474
+ 00:04:53,440 --> 00:04:57,759
475
+ are before getting into the bulk of the
476
+
477
+ 120
478
+ 00:04:55,720 --> 00:05:00,000
479
+ talk which is going to be about llms I
480
+
481
+ 121
482
+ 00:04:57,759 --> 00:05:02,479
483
+ want to talk about some pre-LLM reasoning
484
+
485
+ 122
486
+ 00:05:00,000 --> 00:05:03,720
487
+ methods and the first one is kind of
488
+
489
+ 123
490
+ 00:05:02,479 --> 00:05:05,160
491
+ like formal reasoning within
492
+
493
+ 124
494
+ 00:05:03,720 --> 00:05:07,320
495
+ computational
496
+
497
+ 125
498
+ 00:05:05,160 --> 00:05:09,840
499
+ semantics and this has been around for a
500
+
501
+ 126
502
+ 00:05:07,320 --> 00:05:12,479
503
+ really long time um it's also kind of
504
+
505
+ 127
506
+ 00:05:09,840 --> 00:05:15,000
507
+ what powered the things that worked over
508
+
509
+ 128
510
+ 00:05:12,479 --> 00:05:21,039
511
+ knowledge bases and other things like
512
+
513
+ 129
514
+ 00:05:15,000 --> 00:05:23,639
515
+ this um and the way it works is it does
516
+
517
+ 130
518
+ 00:05:21,039 --> 00:05:27,600
519
+ derivational um
520
+
521
+ 131
522
+ 00:05:23,639 --> 00:05:31,800
523
+ reasoning by uh sorry I can't read that
524
+
525
+ 132
526
+ 00:05:27,600 --> 00:05:34,720
527
+ in the back um by starting out with
528
+
529
+ 133
530
+ 00:05:31,800 --> 00:05:36,080
531
+ certain premises and getting to um
532
+
533
+ 134
534
+ 00:05:34,720 --> 00:05:40,000
535
+ getting to final
536
+
537
+ 135
538
+ 00:05:36,080 --> 00:05:43,039
539
+ conclusions so there's ways that you can
540
+
541
+ 136
542
+ 00:05:40,000 --> 00:05:44,060
543
+ write this I think you might have
544
+
545
+ 137
546
+ 00:05:43,039 --> 00:05:47,080
547
+ seen
548
+
549
+ 138
550
+ 00:05:44,060 --> 00:05:50,479
551
+ [Music]
552
+
553
+ 139
554
+ 00:05:47,080 --> 00:05:54,240
555
+ um you might have seen
556
+
557
+ 140
558
+ 00:05:50,479 --> 00:05:58,319
559
+ uh this in uh another like math class or
560
+
561
+ 141
562
+ 00:05:54,240 --> 00:06:02,440
563
+ something but uh we we have symbols like
564
+
565
+ 142
566
+ 00:05:58,319 --> 00:06:02,440
567
+ all and um
568
+
569
+ 143
570
+ 00:06:03,039 --> 00:06:08,280
571
+ exist let's
572
+
573
+ 144
574
+ 00:06:04,960 --> 00:06:10,960
575
+ see yeah we have things like all and
576
+
577
+ 145
578
+ 00:06:08,280 --> 00:06:13,319
579
+ exist and like all
580
+
581
+ 146
582
+ 00:06:10,960 --> 00:06:16,240
583
+ X
584
+
585
+ 147
586
+ 00:06:13,319 --> 00:06:20,479
587
+ die means
588
+
589
+ 148
590
+ 00:06:16,240 --> 00:06:23,919
591
+ like every Everything has died and this
592
+
593
+ 149
594
+ 00:06:20,479 --> 00:06:27,360
595
+ uh implies that Mia and Zed have
596
+
597
+ 150
598
+ 00:06:23,919 --> 00:06:30,440
599
+ died um
600
+
601
+ 151
602
+ 00:06:27,360 --> 00:06:32,240
603
+ so yeah this is a actually maybe I'll
604
+
605
+ 152
606
+ 00:06:30,440 --> 00:06:33,280
607
+ not I'll not go through this one and let
608
+
609
+ 153
610
+ 00:06:32,240 --> 00:06:37,639
611
+ me go
612
+
613
+ 154
614
+ 00:06:33,280 --> 00:06:40,440
615
+ through um go to this one so like it
616
+
617
+ 155
618
+ 00:06:37,639 --> 00:06:40,440
619
+ would be something
620
+
621
+ 156
622
+ 00:06:40,639 --> 00:06:45,080
623
+ like uh for
624
+
625
+ 157
626
+ 00:06:42,960 --> 00:06:47,480
627
+ all
628
+
629
+ 158
630
+ 00:06:45,080 --> 00:06:50,669
631
+ X um
632
+
633
+ 159
634
+ 00:06:47,480 --> 00:06:50,669
635
+ [Music]
636
+
637
+ 160
638
+ 00:06:52,039 --> 00:07:00,400
639
+ mammal X
640
+
641
+ 161
642
+ 00:06:56,759 --> 00:07:03,520
643
+ implies have
644
+
645
+ 162
646
+ 00:07:00,400 --> 00:07:07,560
647
+ X kidney or something like
648
+
649
+ 163
650
+ 00:07:03,520 --> 00:07:09,280
651
+ that and then you would have other rules
652
+
653
+ 164
654
+ 00:07:07,560 --> 00:07:11,879
655
+ and you can go through uh through
656
+
657
+ 165
658
+ 00:07:09,280 --> 00:07:14,440
659
+ derivations and and other things like
660
+
661
+ 166
662
+ 00:07:11,879 --> 00:07:16,120
663
+ this
664
+
665
+ 167
666
+ 00:07:14,440 --> 00:07:19,280
667
+ um
668
+
669
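The quantifier examples above can be sketched as a couple of lines of forward chaining; everything here (entity names, predicate names) is an invented toy illustration, not the lecture's actual notation:

```python
# Rough sketch of the lecture's first-order logic examples:
# "forall x. die(x)" is instantiated for every known entity, and the
# conditional rule "mammal(x) implies have(x, kidney)" fires off the
# mammal facts. Entities and predicates are made up for illustration.

entities = ["mia", "zed"]
facts = {("mammal", "mia"), ("mammal", "zed")}

# Universal fact: everything has died, so conclude die(e) for each entity.
for e in entities:
    facts.add(("die", e))

# Conditional rule: mammal(x) -> have(x, kidney).
for e in entities:
    if ("mammal", e) in facts:
        facts.add(("have_kidney", e))
```

Real derivational reasoning works over proof rules rather than enumerating entities, but the instantiation step is the same idea.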
+ 168
670
+ 00:07:16,120 --> 00:07:21,560
671
+ my favorite reference for this is this
672
+
673
+ 169
674
+ 00:07:19,280 --> 00:07:24,599
675
+ Blackburn and Bos book right here it's
676
+
677
+ 170
678
+ 00:07:21,560 --> 00:07:26,400
679
+ really well written um and it has like
680
+
681
+ 171
682
+ 00:07:24,599 --> 00:07:28,039
683
+ lots of good examples it also explains
684
+
685
+ 172
686
+ 00:07:26,400 --> 00:07:30,440
687
+ how you go through derivations and other
688
+
689
+ 173
690
+ 00:07:28,039 --> 00:07:34,360
691
+ stuff like that
692
+
693
+ 174
694
+ 00:07:30,440 --> 00:07:35,759
695
+ um and actually neural networks can do
696
+
697
+ 175
698
+ 00:07:34,360 --> 00:07:37,039
699
+ this variety of reasoning through Chain
700
+
701
+ 176
702
+ 00:07:35,759 --> 00:07:38,599
703
+ of Thought and other things I'm going to
704
+
705
+ 177
706
+ 00:07:37,039 --> 00:07:40,120
707
+ talk about today but it's a very rough
708
+
709
+ 178
710
+ 00:07:38,599 --> 00:07:43,960
711
+ approximation and it doesn't work
712
+
713
+ 179
714
+ 00:07:40,120 --> 00:07:47,199
715
+ particularly well for saying like all
716
+
717
+ 180
718
+ 00:07:43,960 --> 00:07:51,240
719
+ you know all people
720
+
721
+ 181
722
+ 00:07:47,199 --> 00:07:53,599
723
+ are of a uh like things that apply to
724
+
725
+ 182
726
+ 00:07:51,240 --> 00:07:57,240
727
+ all people or things that apply to sets
728
+
729
+ 183
730
+ 00:07:53,599 --> 00:08:00,039
731
+ or other things like this so within
732
+
733
+ 184
734
+ 00:07:57,240 --> 00:08:02,879
735
+ Prolog you could
736
+
737
+ 185
738
+ 00:08:00,039 --> 00:08:06,520
739
+ take a knowledge base and ask the
740
+
741
+ 186
742
+ 00:08:02,879 --> 00:08:11,960
743
+ knowledge base like do
744
+
745
+ 187
746
+ 00:08:06,520 --> 00:08:12,800
747
+ all people who work at CMU as professors
748
+
749
+ 188
750
+ 00:08:11,960 --> 00:08:15,840
751
+ have a
752
+
753
+ 189
754
+ 00:08:12,800 --> 00:08:18,080
755
+ PhD and you could like actually examine
756
+
757
+ 190
758
+ 00:08:15,840 --> 00:08:20,639
759
+ that based on the knowledge base uh
760
+
761
+ 191
762
+ 00:08:18,080 --> 00:08:23,520
763
+ whereas even if you had
764
+
765
+ 192
766
+ 00:08:20,639 --> 00:08:25,800
767
+ a language model that had access to
768
+
769
+ 193
770
+ 00:08:23,520 --> 00:08:27,280
771
+ everybody's CVs it wouldn't necessarily
772
+
773
+ 194
774
+ 00:08:25,800 --> 00:08:28,599
775
+ be able to answer that question and it
776
+
777
+ 195
778
+ 00:08:27,280 --> 00:08:31,440
779
+ especially wouldn't be able to answer
780
+
781
+ 196
782
+ 00:08:28,599 --> 00:08:31,440
783
+ that question if you were
784
+
785
+ 197
786
+ 00:08:32,320 --> 00:08:37,760
787
+ um it wouldn't be able to answer that
788
+
789
+ 198
790
+ 00:08:34,640 --> 00:08:42,880
791
+ question if there were like multiple
792
+
793
+ 199
794
+ 00:08:37,760 --> 00:08:46,480
795
+ steps so did all people who are working
796
+
797
+ 200
798
+ 00:08:42,880 --> 00:08:50,959
799
+ at CMU get their PhD after
800
+
801
+ 201
802
+ 00:08:46,480 --> 00:08:52,959
803
+ 1990 or something like that um so and
804
+
805
+ 202
806
+ 00:08:50,959 --> 00:08:54,680
807
+ the answer to that is obviously no but
808
+
809
+ 203
810
+ 00:08:52,959 --> 00:08:56,519
811
+ uh this would be able to find the
812
+
813
+ 204
814
+ 00:08:54,680 --> 00:08:58,120
815
+ counter evidence to that whereas LLMs
816
+
817
+ 205
818
+ 00:08:56,519 --> 00:09:00,000
819
+ would not be guaranteed to be able to do
820
+
821
+ 206
822
+ 00:08:58,120 --> 00:09:02,800
823
+ that
824
+
825
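The Prolog-style query described above amounts to a universally quantified check plus a search for counterexamples; a minimal Python sketch over an invented knowledge base (the records and field names are assumptions, not a real KB):

```python
# Toy knowledge-base check for the query discussed above:
# "did all people working at CMU as professors get their PhD after 1990?"
# The records below are invented for illustration only.

kb = [
    {"name": "a", "employer": "CMU", "role": "professor", "phd_year": 1985},
    {"name": "b", "employer": "CMU", "role": "professor", "phd_year": 2005},
    {"name": "c", "employer": "XYZ", "role": "professor", "phd_year": 1980},
]

cmu_profs = [p for p in kb if p["employer"] == "CMU" and p["role"] == "professor"]

# The universal claim, and the counter-evidence that falsifies it.
all_after_1990 = all(p["phd_year"] > 1990 for p in cmu_profs)
counterexamples = [p["name"] for p in cmu_profs if p["phd_year"] <= 1990]
```

This is the guarantee the lecture points at: the symbolic system can enumerate the exact counterexamples, whereas an LLM reading the same records is not guaranteed to.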
+ 207
826
+ 00:09:00,000 --> 00:09:04,279
827
+ so I I think this is really uh like a
828
+
829
+ 208
830
+ 00:09:02,800 --> 00:09:06,760
831
+ nice thing to know but there's a couple
832
+
833
+ 209
834
+ 00:09:04,279 --> 00:09:09,600
835
+ problems with it the first thing is this
836
+
837
+ 210
838
+ 00:09:06,760 --> 00:09:12,519
839
+ really only traffics in like strictly
840
+
841
+ 211
842
+ 00:09:09,600 --> 00:09:17,880
843
+ true or strictly false statements um and
844
+
845
+ 212
846
+ 00:09:12,519 --> 00:09:20,560
847
+ that's a really big issue um so like if
848
+
849
+ 213
850
+ 00:09:17,880 --> 00:09:22,959
851
+ anything's soft you start uh this sort
852
+
853
+ 214
854
+ 00:09:20,560 --> 00:09:24,320
855
+ of formal reasoning starts breaking down
856
+
857
+ 215
858
+ 00:09:22,959 --> 00:09:25,880
859
+ the second thing which actually is a
860
+
861
+ 216
862
+ 00:09:24,320 --> 00:09:28,959
863
+ really big problem is once you start
864
+
865
+ 217
866
+ 00:09:25,880 --> 00:09:30,600
867
+ dealing with more complex things you
868
+
869
+ 218
870
+ 00:09:28,959 --> 00:09:32,560
871
+ don't realize it but there's always like
872
+
873
+ 219
874
+ 00:09:30,600 --> 00:09:34,560
875
+ exceptions and exceptions to exceptions
876
+
877
+ 220
878
+ 00:09:32,560 --> 00:09:36,240
879
+ and other things like that and actually
880
+
881
+ 221
882
+ 00:09:34,560 --> 00:09:38,320
883
+ becomes very computationally expensive
884
+
885
+ 222
886
+ 00:09:36,240 --> 00:09:41,640
887
+ to prove anything that's kind of like
888
+
889
+ 223
890
+ 00:09:38,320 --> 00:09:43,279
891
+ non-trivial um and so because of that uh
892
+
893
+ 224
894
+ 00:09:41,640 --> 00:09:45,839
895
+ I'm not actually going to cover it in
896
+
897
+ 225
898
+ 00:09:43,279 --> 00:09:47,880
899
+ the lecture today but recently there are
900
+
901
+ 226
902
+ 00:09:45,839 --> 00:09:50,880
903
+ um kind of search algorithms through
904
+
905
+ 227
906
+ 00:09:47,880 --> 00:09:54,279
907
+ proof spaces that use uh like neural
908
+
909
+ 228
910
+ 00:09:50,880 --> 00:09:55,880
911
+ models to do to speed up the search by
912
+
913
+ 229
914
+ 00:09:54,279 --> 00:09:58,120
915
+ picking the best and most promising
916
+
917
+ 230
918
+ 00:09:55,880 --> 00:10:00,800
919
+ hypotheses and uh for example Sean
920
+
921
+ 231
922
+ 00:09:58,120 --> 00:10:02,800
923
+ Welleck uh here at CMU is working on that
924
+
925
+ 232
926
+ 00:10:00,800 --> 00:10:04,800
927
+ for neural theorem proving where you
928
+
929
+ 233
930
+ 00:10:02,800 --> 00:10:05,959
931
+ have uh like mathematical theorem
932
+
933
+ 234
934
+ 00:10:04,800 --> 00:10:08,079
935
+ proving and then you use a neural
936
+
937
+ 235
938
+ 00:10:05,959 --> 00:10:13,120
939
+ network to pick the best uh paths
940
+
941
+ 236
942
+ 00:10:08,079 --> 00:10:14,880
943
+ through logical uh operations so um
944
+
945
+ 237
946
+ 00:10:13,120 --> 00:10:19,279
947
+ that's kind of a combination of the more
948
+
949
+ 238
950
+ 00:10:14,880 --> 00:10:22,920
951
+ classical and uh modern
952
+
953
+ 239
954
+ 00:10:19,279 --> 00:10:26,240
955
+ methods then another thing that's useful
956
+
957
+ 240
958
+ 00:10:22,920 --> 00:10:28,079
959
+ to talk about I think this isn't very
960
+
961
+ 241
962
+ 00:10:26,240 --> 00:10:31,640
963
+ popular right now but I think it might
964
+
965
+ 242
966
+ 00:10:28,079 --> 00:10:34,360
967
+ be become more popular uh in the future
968
+
969
+ 243
970
+ 00:10:31,640 --> 00:10:36,120
971
+ as we start hitting the limits of uh you
972
+
973
+ 244
974
+ 00:10:34,360 --> 00:10:38,560
975
+ know what we can fit into long context
976
+
977
+ 245
978
+ 00:10:36,120 --> 00:10:40,040
979
+ Windows uh for neural models and stuff
980
+
981
+ 246
982
+ 00:10:38,560 --> 00:10:42,600
983
+ like this is memory
984
+
985
+ 247
986
+ 00:10:40,040 --> 00:10:48,600
987
+ networks and basically the way that
988
+
989
+ 248
990
+ 00:10:42,600 --> 00:10:50,639
991
+ memory networks work is they have write
992
+
993
+ 249
994
+ 00:10:48,600 --> 00:10:51,399
995
+ they have the ability to write and read
996
+
997
+ 250
998
+ 00:10:50,639 --> 00:10:55,639
999
+ from
1000
+
1001
+ 251
1002
+ 00:10:51,399 --> 00:10:57,360
1003
+ memory and so this figure is a little
1004
+
1005
+ 252
1006
+ 00:10:55,639 --> 00:11:00,440
1007
+ bit complex here but
1008
+
1009
+ 253
1010
+ 00:10:57,360 --> 00:11:02,880
1011
+ basically you have a query and then you
1012
+
1013
+ 254
1014
+ 00:11:00,440 --> 00:11:04,560
1015
+ get the embedding of the query um you
1016
+
1017
+ 255
1018
+ 00:11:02,880 --> 00:11:06,760
1019
+ take the inner product you get the soft
1020
+
1021
+ 256
1022
+ 00:11:04,560 --> 00:11:09,720
1023
+ max of the inner product so this looks
1024
+
1025
+ 257
1026
+ 00:11:06,760 --> 00:11:11,040
1027
+ like attention you look up embeddings
1028
+
1029
+ 258
1030
+ 00:11:09,720 --> 00:11:12,839
1031
+ and you take the weighted sum of the
1032
+
1033
+ 259
1034
+ 00:11:11,040 --> 00:11:14,560
1035
+ embeddings and you get the like summary
1036
+
1037
+ 260
1038
+ 00:11:12,839 --> 00:11:17,680
1039
+ of the memory so this is basically
1040
+
1041
+ 261
1042
+ 00:11:14,560 --> 00:11:20,320
1043
+ attention over a big memory
1044
+
1045
+ 262
1046
+ 00:11:17,680 --> 00:11:22,120
1047
+ base but then uh memory networks also
1048
+
1049
+ 263
1050
+ 00:11:20,320 --> 00:11:24,000
1051
+ have the ability to go in and update the
1052
+
1053
+ 264
1054
+ 00:11:22,120 --> 00:11:26,639
1055
+ memory so they also have write
1056
+
1057
+ 265
1058
+ 00:11:24,000 --> 00:11:30,360
1059
+ operations so you can read and write
1060
+
1061
+ 266
1062
+ 00:11:26,639 --> 00:11:34,320
1063
+ from uh from the memory
1064
+
1065
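The read step just described (embed the query, take inner products against memory slots, softmax, weighted sum) can be sketched in plain Python; the toy 2-d vectors and the overwrite-style write op are simplifications of what real memory networks learn:

```python
import math

# Sketch of a memory-network read: score each memory slot by its inner
# product with the query, softmax the scores, and return the
# attention-weighted sum of the slot embeddings. Vectors are toy 2-d
# examples; real models use learned embeddings and learned write gates.

def read_memory(query, memory):
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(memory[0])
    return [sum(w * slot[d] for w, slot in zip(weights, memory))
            for d in range(dim)]

def write_memory(memory, slot_index, new_value):
    # A write op here just overwrites a slot; this is the crude version.
    memory[slot_index] = new_value

memory = [[1.0, 0.0], [0.0, 1.0]]
summary = read_memory([10.0, 0.0], memory)  # query closely matches slot 0
```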
+ 267
1066
+ 00:11:30,360 --> 00:11:36,279
1067
+ base and so the reason why I say this
1068
+
1069
+ 268
1070
+ 00:11:34,320 --> 00:11:40,480
1071
+ might become more popular is like one of
1072
+
1073
+ 269
1074
+ 00:11:36,279 --> 00:11:42,200
1075
+ the big issues with large language
1076
+
1077
+ 270
1078
+ 00:11:40,480 --> 00:11:45,320
1079
+ models nowadays is they don't get like
1080
+
1081
+ 271
1082
+ 00:11:42,200 --> 00:11:47,320
1083
+ to continually update their memory um
1084
+
1085
+ 272
1086
+ 00:11:45,320 --> 00:11:50,279
1087
+ and like one way you can do that is you
1088
+
1089
+ 273
1090
+ 00:11:47,320 --> 00:11:52,160
1091
+ can just add text to the memory but
1092
+
1093
+ 274
1094
+ 00:11:50,279 --> 00:11:54,000
1095
+ there are certain limits to that right
1096
+
1097
+ 275
1098
+ 00:11:52,160 --> 00:11:56,360
1099
+ uh you know text isn't necessarily the
1100
+
1101
+ 276
1102
+ 00:11:54,000 --> 00:11:58,959
1103
+ best way to encode all of the things
1104
+
1105
+ 277
1106
+ 00:11:56,360 --> 00:12:01,880
1107
+ that you've seen in the past so I I feel
1108
+
1109
+ 278
1110
+ 00:11:58,959 --> 00:12:03,360
1111
+ like this kind of architecture might be
1112
+
1113
+ 279
1114
+ 00:12:01,880 --> 00:12:04,920
1115
+ um how to pin these sorts of
1116
+
1117
+ 280
1118
+ 00:12:03,360 --> 00:12:06,480
1119
+ architectures onto language models might
1120
+
1121
+ 281
1122
+ 00:12:04,920 --> 00:12:08,639
1123
+ be an interesting research direction for
1124
+
1125
+ 282
1126
+ 00:12:06,480 --> 00:12:08,639
1127
+ the
1128
+
1129
+ 283
1130
+ 00:12:08,680 --> 00:12:15,360
1131
+ future um another thing which I am not
1132
+
1133
+ 284
1134
+ 00:12:12,600 --> 00:12:16,720
1135
+ going to talk about very much uh but
1136
+
1137
+ 285
1138
+ 00:12:15,360 --> 00:12:20,560
1139
+ because we kind of already talked about
1140
+
1141
+ 286
1142
+ 00:12:16,720 --> 00:12:23,560
1143
+ it in the code Generation Um area but
1144
+
1145
+ 287
1146
+ 00:12:20,560 --> 00:12:26,959
1147
+ it's actually been around for a while is
1148
+
1149
+ 288
1150
+ 00:12:23,560 --> 00:12:30,600
1151
+ solving questions with sort of symbolic
1152
+
1153
+ 289
1154
+ 00:12:26,959 --> 00:12:36,480
1155
+ reasoning and the way it works
1156
+
1157
+ 290
1158
+ 00:12:30,600 --> 00:12:41,320
1159
+ is for example you would have a
1160
+
1161
+ 291
1162
+ 00:12:36,480 --> 00:12:43,639
1163
+ um you would have a text here and based
1164
+
1165
+ 292
1166
+ 00:12:41,320 --> 00:12:47,440
1167
+ on the text you can run these sort of
1168
+
1169
+ 293
1170
+ 00:12:43,639 --> 00:12:50,440
1171
+ symbolic operations like find and filter
1172
+
1173
+ 294
1174
+ 00:12:47,440 --> 00:12:52,720
1175
+ and find the max number and relocate and
1176
+
1177
+ 295
1178
+ 00:12:50,440 --> 00:12:54,480
1179
+ other things like this and this
1180
+
1181
+ 296
1182
+ 00:12:52,720 --> 00:12:58,040
1183
+ explicitly
1184
+
1185
+ 297
1186
+ 00:12:54,480 --> 00:12:59,880
1187
+ manipulates uh kind of the attention and
1188
+
1189
+ 298
1190
+ 00:12:58,040 --> 00:13:02,519
1191
+ the um
1192
+
1193
+ 299
1194
+ 00:12:59,880 --> 00:13:03,839
1195
+ you can do things like filtering down to
1196
+
1197
+ 300
1198
+ 00:13:02,519 --> 00:13:08,600
1199
+ find the
1200
+
1201
+ 301
1202
+ 00:13:03,839 --> 00:13:11,040
1203
+ most uh like highest largest number for
1204
+
1205
+ 302
1206
+ 00:13:08,600 --> 00:13:12,800
1207
+ example or other things like this and
1208
+
1209
+ 303
1210
+ 00:13:11,040 --> 00:13:14,160
1211
+ this is kind of interesting because like
1212
+
1213
+ 304
1214
+ 00:13:12,800 --> 00:13:17,240
1215
+ some of the things that neural networks
1216
+
1217
+ 305
1218
+ 00:13:14,160 --> 00:13:20,360
1219
+ are bad at are like finding the largest
1220
+
1221
+ 306
1222
+ 00:13:17,240 --> 00:13:21,600
1223
+ number in a big data set or um finding
1224
+
1225
+ 307
1226
+ 00:13:20,360 --> 00:13:23,360
1227
+ all of the things where something
1228
+
1229
+ 308
1230
+ 00:13:21,600 --> 00:13:26,240
1231
+ applies and throwing out all of the
1232
+
1233
+ 309
1234
+ 00:13:23,360 --> 00:13:27,959
1235
+ things where something doesn't apply so
1236
+
1237
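The symbolic operations just mentioned (find, filter, find-max-num) can be sketched over a toy table of numbers pulled from a passage; the records and field names here are invented for illustration:

```python
# Sketch of the symbolic operations discussed above, run over a toy
# "passage" of extracted records. These are exactly the operations
# neural nets tend to be bad at doing implicitly: filtering a set down
# and finding the largest number in it.

passage = [
    {"team": "A", "yards": 12},
    {"team": "B", "yards": 45},
    {"team": "A", "yards": 31},
]

def find(records, key, value):
    # Keep only the records where something applies; throw out the rest.
    return [r for r in records if r[key] == value]

def find_max_num(records, key):
    # Largest number under the given field.
    return max(r[key] for r in records)

longest_team_a = find_max_num(find(passage, "team", "A"), "yards")
```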
+ 310
1238
+ 00:13:26,240 --> 00:13:29,560
1239
+ again this isn't used super widely in
1240
+
1241
+ 311
1242
+ 00:13:27,959 --> 00:13:31,959
1243
+ large language models right now because
1244
+
1245
+ 312
1246
+ 00:13:29,560 --> 00:13:33,920
1247
+ I feel like um people have been focusing
1248
+
1249
+ 313
1250
+ 00:13:31,959 --> 00:13:36,440
1251
+ on prompting
1252
+
1253
+ 314
1254
+ 00:13:33,920 --> 00:13:38,880
1255
+ techniques uh in order to do this sort
1256
+
1257
+ 315
1258
+ 00:13:36,440 --> 00:13:41,199
1259
+ of reasoning but I think this is another
1260
+
1261
+ 316
1262
+ 00:13:38,880 --> 00:13:43,320
1263
+ thing that's worth thinking about taking
1264
+
1265
+ 317
1266
+ 00:13:41,199 --> 00:13:45,079
1267
+ a close another look at and seeing if
1268
+
1269
+ 318
1270
+ 00:13:43,320 --> 00:13:47,440
1271
+ there are ways to incorporate it with
1272
+
1273
+ 319
1274
+ 00:13:45,079 --> 00:13:49,320
1275
+ the current models because like
1276
+
1277
+ 320
1278
+ 00:13:47,440 --> 00:13:50,720
1279
+ basically what I wanted to say is like
1280
+
1281
+ 321
1282
+ 00:13:49,320 --> 00:13:52,279
1283
+ all of the things that I decided to
1284
+
1285
+ 322
1286
+ 00:13:50,720 --> 00:13:54,560
1287
+ introduce here in this section are
1288
+
1289
+ 323
1290
+ 00:13:52,279 --> 00:13:57,600
1291
+ things that current models are still not
1292
+
1293
+ 324
1294
+ 00:13:54,560 --> 00:14:00,800
1295
+ particularly good at like reasoning taking
1296
+
1297
+ 325
1298
+ 00:13:57,600 --> 00:14:03,079
1299
+ many steps over sets of
1300
+
1301
+ 326
1302
+ 00:14:00,800 --> 00:14:05,079
1303
+ inputs um reading and writing from
1304
+
1305
+ 327
1306
+ 00:14:03,079 --> 00:14:09,839
1307
+ memory so that you can remember things
1308
+
1309
+ 328
1310
+ 00:14:05,079 --> 00:14:11,720
1311
+ over long periods and also um filtering
1312
+
1313
+ 329
1314
+ 00:14:09,839 --> 00:14:13,399
1315
+ down large pieces of text into smaller
1316
+
1317
+ 330
1318
+ 00:14:11,720 --> 00:14:16,040
1319
+ pieces of text to find relevant
1320
+
1321
+ 331
1322
+ 00:14:13,399 --> 00:14:17,560
1323
+ information so um if any of those things
1324
+
1325
+ 332
1326
+ 00:14:16,040 --> 00:14:19,880
1327
+ sound interesting you can take a look at
1328
+
1329
+ 333
1330
+ 00:14:17,560 --> 00:14:22,800
1331
+ this but um after this I'd like to go
1332
+
1333
+ 334
1334
+ 00:14:19,880 --> 00:14:24,399
1335
+ kind of into the you know main event
1336
+
1337
+ 335
1338
+ 00:14:22,800 --> 00:14:27,759
1339
+ where I talk about the stuff that people
1340
+
1341
+ 336
1342
+ 00:14:24,399 --> 00:14:31,040
1343
+ are actually using a lot nowadays um any
1344
+
1345
+ 337
1346
+ 00:14:27,759 --> 00:14:31,040
1347
+ questions about these three
1348
+
1349
+ 338
1350
+ 00:14:33,000 --> 00:14:39,120
1351
+ okay cool um so now I'd like to go into
1352
+
1353
+ 339
1354
+ 00:14:36,399 --> 00:14:40,639
1355
+ Chain of Thought and variants and I
1356
+
1357
+ 340
1358
+ 00:14:39,120 --> 00:14:42,279
1359
+ actually have already talked about Chain
1360
+
1361
+ 341
1362
+ 00:14:40,639 --> 00:14:44,199
1363
+ of Thought in fact we've mentioned it a
1364
+
1365
+ 342
1366
+ 00:14:42,279 --> 00:14:47,720
1367
+ couple times um but just you know to
1368
+
1369
+ 343
1370
+ 00:14:44,199 --> 00:14:49,399
1371
+ remind everybody the basic idea is um
1372
+
1373
+ 344
1374
+ 00:14:47,720 --> 00:14:52,880
1375
+ compared to standard prompting where we
1376
+
1377
+ 345
1378
+ 00:14:49,399 --> 00:14:55,519
1379
+ have like a question um and an answer in
1380
+
1381
+ 346
1382
+ 00:14:52,880 --> 00:14:58,480
1383
+ Chain of Thought we have a question and
1384
+
1385
+ 347
1386
+ 00:14:55,519 --> 00:15:01,040
1387
+ then we have a derivation for the
1388
+
1389
+ 348
1390
+ 00:14:58,480 --> 00:15:02,440
1391
+ questions so like uh Roger started with
1392
+
1393
+ 349
1394
+ 00:15:01,040 --> 00:15:06,120
1395
+ five
1396
+
1397
+ 350
1398
+ 00:15:02,440 --> 00:15:09,040
1399
+ balls two can uh five balls two cans of
1400
+
1401
+ 351
1402
+ 00:15:06,120 --> 00:15:13,839
1403
+ three tennis balls each is six tennis balls 5
1404
+
1405
+ 352
1406
+ 00:15:09,040 --> 00:15:15,639
1407
+ plus 6 equals 11 the answer is 11 so um you
1408
+
1409
+ 353
1410
+ 00:15:13,839 --> 00:15:17,519
1411
+ add this to the prompt and by adding
1412
+
1413
+ 354
1414
+ 00:15:15,639 --> 00:15:19,240
1415
+ this to the prompt you get the model to
1416
+
1417
+ 355
1418
+ 00:15:17,519 --> 00:15:22,600
1419
+ uh also do these derivations at test
1420
+
1421
+ 356
1422
+ 00:15:19,240 --> 00:15:25,199
1423
+ time and this greatly improves some
1424
+
1425
+ 357
1426
+ 00:15:22,600 --> 00:15:27,759
1427
+ tasks it improves tasks where we can't
1428
+
1429
+ 358
1430
+ 00:15:25,199 --> 00:15:30,040
1431
+ like immediately predict the answer
1432
+
1433
+ 359
1434
+ 00:15:27,759 --> 00:15:32,000
1435
+ directly and then I also previously
1436
+
1437
+ 360
1438
+ 00:15:30,040 --> 00:15:33,440
1439
+ talked about zero shot Chain of Thought
1440
+
1441
+ 361
1442
+ 00:15:32,000 --> 00:15:35,880
1443
+ uh reasoning where we just prompt the
1444
+
1445
+ 362
1446
+ 00:15:33,440 --> 00:15:38,480
1447
+ model to with something like let's think
1448
+
1449
+ 363
1450
+ 00:15:35,880 --> 00:15:42,680
1451
+ step by step and then the model becomes
1452
+
1453
+ 364
1454
+ 00:15:38,480 --> 00:15:46,240
1455
+ able to do this uh Chain of Thought
1456
+
1457
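The two prompting setups just reviewed, few-shot chain of thought with a worked demonstration versus zero-shot "let's think step by step", can be sketched as prompt builders; the wording is paraphrased from the lecture's tennis-ball example, not copied from any paper:

```python
# Sketch of few-shot chain-of-thought prompting vs. zero-shot CoT.
# The demonstration reuses the Roger tennis-ball example from above.

demo_q = ("Roger has 5 tennis balls. He buys 2 cans of tennis balls, "
          "each with 3 tennis balls. How many does he have now?")
demo_cot = ("Roger started with 5 balls. 2 cans of 3 tennis balls each "
            "is 6 tennis balls. 5 + 6 = 11. The answer is 11.")

def few_shot_cot_prompt(question):
    # Prepend a (question, derivation) pair so the model imitates the
    # derivation at test time.
    return f"Q: {demo_q}\nA: {demo_cot}\n\nQ: {question}\nA:"

def zero_shot_cot_prompt(question):
    # No demonstration; just trigger the derivation with a stock phrase.
    return f"Q: {question}\nA: Let's think step by step."

p = few_shot_cot_prompt("Jane has 3 apples and buys 2 more. How many?")
```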
+ 365
1458
+ 00:15:42,680 --> 00:15:48,279
1459
+ reasoning okay so that was review and
1460
+
1461
+ 366
1462
+ 00:15:46,240 --> 00:15:51,680
1463
+ now I'd like to talk about some of like
1464
+
1465
+ 367
1466
+ 00:15:48,279 --> 00:15:53,560
1467
+ more advanced methods that people use
1468
+
1469
+ 368
1470
+ 00:15:51,680 --> 00:15:55,079
1471
+ for uh reasoning as
1472
+
1473
+ 369
1474
+ 00:15:53,560 --> 00:15:58,040
1475
+ well
1476
+
1477
+ 370
1478
+ 00:15:55,079 --> 00:15:59,959
1479
+ and this is by no means an exhaustive
1480
+
1481
+ 371
1482
+ 00:15:58,040 --> 00:16:01,800
1483
+ list they're just some of the ones that I
1484
+
1485
+ 372
1486
+ 00:15:59,959 --> 00:16:03,319
1487
+ found interesting so if you know other
1488
+
1489
+ 373
1490
+ 00:16:01,800 --> 00:16:04,839
1491
+ ones that you'd like to talk about or
1492
+
1493
+ 374
1494
+ 00:16:03,319 --> 00:16:07,720
1495
+ introduce to the class or something like
1496
+
1497
+ 375
1498
+ 00:16:04,839 --> 00:16:10,600
1499
+ that I'd also be happy to uh to hear uh
1500
+
1501
+ 376
1502
+ 00:16:07,720 --> 00:16:14,120
1503
+ which ones you like or have heard about
1504
+
1505
+ 377
1506
+ 00:16:10,600 --> 00:16:16,920
1507
+ but the first one is um self-ask and one
1508
+
1509
+ 378
1510
+ 00:16:14,120 --> 00:16:20,959
1511
+ of the issues with large language models
1512
+
1513
+ 379
1514
+ 00:16:16,920 --> 00:16:23,240
1515
+ nowadays is that they're not uh very
1516
+
1517
+ 380
1518
+ 00:16:20,959 --> 00:16:25,519
1519
+ good at asking follow-up questions or
1520
+
1521
+ 381
1522
+ 00:16:23,240 --> 00:16:27,839
1523
+ maybe not that they're not very good at
1524
+
1525
+ 382
1526
+ 00:16:25,519 --> 00:16:31,160
1527
+ it but just they're not trained to do it
1528
+
1529
+ 383
1530
+ 00:16:27,839 --> 00:16:32,880
1531
+ so like if you play around with chat GPT
1532
+
1533
+ 384
1534
+ 00:16:31,160 --> 00:16:35,240
1535
+ I have never had chat GPT ask me a
1536
+
1537
+ 385
1538
+ 00:16:32,880 --> 00:16:36,680
1539
+ follow-up question I don't think it's
1540
+
1541
+ 386
1542
+ 00:16:35,240 --> 00:16:38,319
1543
+ like it's not because large language
1544
+
1545
+ 387
1546
+ 00:16:36,680 --> 00:16:41,920
1547
+ models aren't capable of doing it it's
1548
+
1549
+ 388
1550
+ 00:16:38,319 --> 00:16:43,519
1551
+ just that they like the open AI must
1552
+
1553
+ 389
1554
+ 00:16:41,920 --> 00:16:45,000
1555
+ think it's a bad user experience to have
1556
+
1557
+ 390
1558
+ 00:16:43,519 --> 00:16:47,680
1559
+ a language model that asks you follow up
1560
+
1561
+ 391
1562
+ 00:16:45,000 --> 00:16:51,319
1563
+ questions that's only like you know
1564
+
1565
+ 392
1566
+ 00:16:47,680 --> 00:16:53,160
1567
+ reason I can think about it um but
1568
+
1569
+ 393
1570
+ 00:16:51,319 --> 00:16:56,199
1571
+ basically what self ask does is it
1572
+
1573
+ 394
1574
+ 00:16:53,160 --> 00:17:00,000
1575
+ explicitly prompts the model to ask to
1576
+
1577
+ 395
1578
+ 00:16:56,199 --> 00:17:02,360
1579
+ ask if there are followup questions so
1580
+
1581
+ 396
1582
+ 00:17:00,000 --> 00:17:05,799
1583
+ here's an example on the left where the
1584
+
1585
+ 397
1586
+ 00:17:02,360 --> 00:17:11,240
1587
+ question is uh who lived longer Theodor
1588
+
1589
+ 398
1590
+ 00:17:05,799 --> 00:17:12,640
1591
+ Haecker or Harry Vaughan Watkins and
1592
+
1593
+ 399
1594
+ 00:17:11,240 --> 00:17:15,240
1595
+ basically it says are follow-up
1596
+
1597
+ 400
1598
+ 00:17:12,640 --> 00:17:17,679
1599
+ questions needed here yes and then the
1600
+
1601
+ 401
1602
+ 00:17:15,240 --> 00:17:20,319
1603
+ followup is how old was Theodor Haecker
1604
+
1605
+ 402
1606
+ 00:17:17,679 --> 00:17:23,640
1607
+ when he died and the intermediate answer
1608
+
1609
+ 403
1610
+ 00:17:20,319 --> 00:17:26,959
1611
+ is Theodor Haecker was 65 years old how
1612
+
1613
+ 404
1614
+ 00:17:23,640 --> 00:17:29,000
1615
+ old was Harry Vaughan Watkins um Harry Vaughan
1616
+
1617
+ 405
1618
+ 00:17:26,959 --> 00:17:32,400
1619
+ Watkins was 69 years old but so the
1620
+
1621
+ 406
1622
+ 00:17:29,000 --> 00:17:35,240
1623
+ final answer is Harry Vaughan Watkins and um
1624
+
1625
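The self-ask control flow walked through above can be sketched as a loop: prompt for a follow-up, answer it, append, and repeat until a final answer appears. Here `toy_lm` and `answer_followup` are hard-coded stand-ins for a real language-model call and a real intermediate-answer step:

```python
# Sketch of the self-ask loop using the lifespan example above.
# toy_lm returns canned continuations; a real system would call an LM.

def toy_lm(prompt):
    if "How old was Theodor Haecker" not in prompt:
        return "Follow up: How old was Theodor Haecker when he died?"
    if "How old was Harry Vaughan Watkins" not in prompt:
        return "Follow up: How old was Harry Vaughan Watkins when he died?"
    return "So the final answer is: Harry Vaughan Watkins"

def answer_followup(question):
    # Stub intermediate answers, hard-coded for this sketch.
    return "65" if "Haecker" in question else "69"

def self_ask(question, max_steps=5):
    prompt = f"Question: {question}\nAre follow up questions needed here: Yes.\n"
    for _ in range(max_steps):
        out = toy_lm(prompt)
        if out.startswith("So the final answer is:"):
            return out.split(":", 1)[1].strip()
        followup = out.split("Follow up:", 1)[1].strip()
        prompt += f"Follow up: {followup}\nIntermediate answer: {answer_followup(followup)}\n"
    return None

result = self_ask("Who lived longer, Theodor Haecker or Harry Vaughan Watkins?")
```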
+ 407
1626
+ 00:17:32,400 --> 00:17:37,520
1627
+ in this particular paper this is just
1628
+
1629
+ 408
1630
+ 00:17:35,240 --> 00:17:42,520
1631
+ like another variety of Chain of Thought
1632
+
1633
+ 409
1634
+ 00:17:37,520 --> 00:17:44,720
1635
+ it's like not using it to incorporate
1636
+
1637
+ 410
1638
+ 00:17:42,520 --> 00:17:47,400
1639
+ any external information or anything
1640
+
1641
+ 411
1642
+ 00:17:44,720 --> 00:17:48,720
1643
+ like that it's just trying to more
1644
+
1645
+ 412
1646
+ 00:17:47,400 --> 00:17:52,360
1647
+ directly
1648
+
1649
+ 413
1650
+ 00:17:48,720 --> 00:17:53,840
1651
+ elicit um information from the model um
1652
+
1653
+ 414
1654
+ 00:17:52,360 --> 00:17:55,360
1655
+ but nonetheless they demonstrate that
1656
+
1657
+ 415
1658
+ 00:17:53,840 --> 00:17:57,760
1659
+ this is useful and then there's also
1660
+
1661
+ 416
1662
+ 00:17:55,360 --> 00:18:00,120
1663
+ other methods that actually try to look
1664
+
1665
+ 417
1666
+ 00:17:57,760 --> 00:18:02,240
1667
+ up information explicitly to answer these
1668
+
1669
+ 418
1670
+ 00:18:00,120 --> 00:18:05,280
1671
+ questions um which are even more
1672
+
1673
+ 419
1674
+ 00:18:02,240 --> 00:18:05,280
1675
+ powerful than what we have
1676
+
1677
+ 420
1678
+ 00:18:05,720 --> 00:18:13,200
1679
+ here um so that's what I'd like to
1680
+
1681
+ 421
1682
+ 00:18:09,960 --> 00:18:16,919
1683
+ introduce next and basically the idea um
1684
+
1685
+ 422
1686
+ 00:18:13,200 --> 00:18:19,760
1687
+ here is this is a method that instead of
1688
+
1689
+ 423
1690
+ 00:18:16,919 --> 00:18:22,880
1691
+ just doing Chain of Thought it retrieves
1692
+
1693
+ 424
1694
+ 00:18:19,760 --> 00:18:25,480
1695
+ relevant sentences when you're doing the
1696
+
1697
+ 425
1698
+ 00:18:22,880 --> 00:18:28,919
1699
+ Chain of Thought So like
1700
+
1701
+ 426
1702
+ 00:18:25,480 --> 00:18:30,880
1703
+ here um
1704
+
1705
+ 427
1706
+ 00:18:28,919 --> 00:18:32,960
1707
+ uh we have the followup are follow-ups
1708
+
1709
+ 428
1710
+ 00:18:30,880 --> 00:18:35,159
1711
+ needed here yes and then this is the
1712
+
1713
+ 429
1714
+ 00:18:32,960 --> 00:18:36,880
1715
+ followup but if the model itself doesn't
1716
+
1717
+ 430
1718
+ 00:18:35,159 --> 00:18:39,440
1719
+ know how old somebody was when they died
1720
+
1721
+ 431
1722
+ 00:18:36,880 --> 00:18:40,760
1723
+ then it won't be able to answer this so
1724
+
1725
+ 432
1726
+ 00:18:39,440 --> 00:18:44,400
1727
+ what they do in order to make this
1728
+
1729
+ 433
1730
+ 00:18:40,760 --> 00:18:47,200
1731
+ happen is they um do BM25-based
1732
+
1733
+ 434
1734
+ 00:18:44,400 --> 00:18:49,520
1735
+ retrieval over Wikipedia for each of the
1736
+
1737
+ 435
1738
+ 00:18:47,200 --> 00:18:51,760
1739
+ Chain of Thought uh answers and then
1740
+
1741
+ 436
1742
+ 00:18:49,520 --> 00:18:53,400
1743
+ they use the retrieved uh I think it's
1744
+
1745
+ 437
1746
+ 00:18:51,760 --> 00:18:56,039
1747
+ like 10 documents or something like that
1748
+
1749
+ 438
1750
+ 00:18:53,400 --> 00:18:59,640
1751
+ multiple retrieved documents to prompt the
1752
+
1753
+ 439
1754
+ 00:18:56,039 --> 00:19:03,200
1755
+ model um to basically follow up with its
1756
+
1757
+ 440
1758
+ 00:18:59,640 --> 00:19:05,440
1759
+ Chain of Thought so this is another uh
1760
+
1761
+ 441
1762
+ 00:19:03,200 --> 00:19:07,880
1763
+ variety of things that you can do in
1764
+
1765
+ 442
1766
+ 00:19:05,440 --> 00:19:07,880
1767
+ order to
1768
+
1769
+ 443
1770
+ 00:19:10,720 --> 00:19:16,120
1771
+ improve
1772
+
1773
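The retrieval-augmented chain-of-thought idea described above, retrieving supporting sentences for each reasoning step, can be sketched as follows. Plain word-overlap scoring stands in for BM25 here, and the three-sentence "Wikipedia" is invented:

```python
# Sketch of retrieval-augmented chain of thought: for each reasoning
# step, retrieve supporting sentences and prepend them to the prompt.
# Overlap scoring is a crude stand-in for BM25; the corpus is a toy.

corpus = [
    "Theodor Haecker was 65 years old when he died.",
    "Harry Vaughan Watkins lived to be 69 years old.",
    "Pittsburgh is a city in Pennsylvania.",
]

def retrieve(query, docs, k=1):
    q = set(query.lower().split())
    def score(doc):
        return len(q & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

step = "How old was Theodor Haecker when he died?"
evidence = retrieve(step, corpus, k=1)
augmented_prompt = "Context: " + evidence[0] + "\n" + step
```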
+ 444
1774
+ 00:19:13,120 --> 00:19:16,120
1775
+ cool
1776
+
1777
+ 445
1778
+ 00:19:16,400 --> 00:19:21,440
1779
+ um then another one that I'd like to
1780
+
1781
+ 446
1782
+ 00:19:18,960 --> 00:19:22,559
1783
+ talk about is uh multilingual Chain of
1784
+
1785
+ 447
1786
+ 00:19:21,440 --> 00:19:24,039
1787
+ Thought reasoning I'm going to be
1788
+
1789
+ 448
1790
+ 00:19:22,559 --> 00:19:28,000
1791
+ talking more about multilingual things
1792
+
1793
+ 449
1794
+ 00:19:24,039 --> 00:19:29,960
1795
+ in the multilingual class in a week but
1796
+
1797
+ 450
1798
+ 00:19:28,000 --> 00:19:33,559
1799
+ the interesting thing about multilingual
1800
+
1801
+ 451
1802
+ 00:19:29,960 --> 00:19:37,200
1803
+ Chain of Thought is we have a design
1804
+
1805
+ 452
1806
+ 00:19:33,559 --> 00:19:41,280
1807
+ decision right like do we want to just
1808
+
1809
+ 453
1810
+ 00:19:37,200 --> 00:19:44,000
1811
+ answer questions in the language that we
1812
+
1813
+ 454
1814
+ 00:19:41,280 --> 00:19:46,679
1815
+ are asking questions in like so if I ask
1816
+
1817
+ 455
1818
+ 00:19:44,000 --> 00:19:48,080
1819
+ a question in Japanese am I going to
1820
+
1821
+ 456
1822
+ 00:19:46,679 --> 00:19:49,840
1823
+ have it go through the whole chain of
1824
+
1825
+ 457
1826
+ 00:19:48,080 --> 00:19:52,720
1827
+ thought process in Japanese and then
1828
+
1829
+ 458
1830
+ 00:19:49,840 --> 00:19:55,840
1831
+ answer my question in Japanese or do I
1832
+
1833
+ 459
1834
+ 00:19:52,720 --> 00:19:57,120
1835
+ want it to uh somehow go through English
1836
+
1837
+ 460
1838
+ 00:19:55,840 --> 00:19:59,159
1839
+ because the model has been trained on
1840
+
1841
+ 461
1842
+ 00:19:57,120 --> 00:20:00,640
1843
+ lots of English and it has better
1844
+
1845
+ 462
1846
+ 00:19:59,159 --> 00:20:02,120
1847
+ it's like a better way to take advantage
1848
+
1849
+ 463
1850
+ 00:20:00,640 --> 00:20:04,840
1851
+ of its reasoning
1852
+
1853
+ 464
1854
+ 00:20:02,120 --> 00:20:07,200
1855
+ capabilities does anyone have an idea
1856
+
1857
+ 465
1858
+ 00:20:04,840 --> 00:20:07,200
1859
+ about the
1860
+
1861
+ 466
1862
+ 00:20:07,960 --> 00:20:12,480
1863
+ answer who thinks it's better to do it
1864
+
1865
+ 467
1866
+ 00:20:10,240 --> 00:20:15,360
1867
+ entirely in the the language that the
1868
+
1869
+ 468
1870
+ 00:20:12,480 --> 00:20:15,360
1871
+ question is asked
1872
+
1873
+ 469
1874
+ 00:20:15,640 --> 00:20:20,080
1875
+ in and who thinks it's better to do
1876
+
1877
+ 470
1878
+ 00:20:17,919 --> 00:20:23,000
1879
+ something in
1880
+
1881
+ 471
1882
+ 00:20:20,080 --> 00:20:28,200
1883
+ English
1884
+
1885
+ 472
1886
+ 00:20:23,000 --> 00:20:29,159
1887
+ okay so um basically the answer is do it
1888
+
1889
+ 473
1890
+ 00:20:28,200 --> 00:20:31,440
1891
+ in English
1892
+
1893
+ 474
1894
+ 00:20:29,159 --> 00:20:34,120
1895
+ um and maybe this
1896
+
1897
+ 475
1898
+ 00:20:31,440 --> 00:20:35,799
1899
+ is it might be a little bit dependent on
1900
+
1901
+ 476
1902
+ 00:20:34,120 --> 00:20:39,840
1903
+ the language but all of the languages
1904
+
1905
+ 477
1906
+ 00:20:35,799 --> 00:20:42,880
1907
+ they tested it's essentially uh that's
1908
+
1909
+ 478
1910
+ 00:20:39,840 --> 00:20:44,919
1911
+ the conclusion that they came to and
1912
+
1913
+ 479
1914
+ 00:20:42,880 --> 00:20:47,679
1915
+ it's pretty stark in this particular
1916
+
1917
+ 480
1918
+ 00:20:44,919 --> 00:20:50,640
1919
+ paper this might change a little bit
1920
+
1921
+ 481
1922
+ 00:20:47,679 --> 00:20:52,960
1923
+ with um with more powerful models but I
1924
+
1925
+ 482
1926
+ 00:20:50,640 --> 00:20:57,360
1927
+ still would be very surprised if this is
1928
+
1929
+ 483
1930
+ 00:20:52,960 --> 00:21:00,440
1931
+ not like if this doesn't hold still so
1932
+
1933
+ 484
1934
+ 00:20:57,360 --> 00:21:04,440
1935
+ you can see it's like approximately on
1936
+
1937
+ 485
1938
+ 00:21:00,440 --> 00:21:08,200
1939
+ average uh seven point increase in the
1940
+
1941
+ 486
1942
+ 00:21:04,440 --> 00:21:11,720
1943
+ results and just to to be clear here um
1944
+
1945
+ 487
1946
+ 00:21:08,200 --> 00:21:13,600
1947
+ we have native uh Chain of Thought So
1948
+
1949
+ 488
1950
+ 00:21:11,720 --> 00:21:16,039
1951
+ This is doing Chain of Thought in the in
1952
+
1953
+ 489
1954
+ 00:21:13,600 --> 00:21:17,799
1955
+ the language itself this is doing Chain
1956
+
1957
+ 490
1958
+ 00:21:16,039 --> 00:21:19,240
1959
+ of Thought in English but then answering
1960
+
1961
+ 491
1962
+ 00:21:17,799 --> 00:21:22,200
1963
+ in the language itself and this is just
1964
+
1965
+ 492
1966
+ 00:21:19,240 --> 00:21:23,799
1967
+ like translating everything into
1968
+
1969
+ 493
1970
+ 00:21:22,200 --> 00:21:27,440
1971
+ English
1972
+
1973
+ 494
1974
+ 00:21:23,799 --> 00:21:30,159
1975
+ um you can try this out too like if you
1976
+
1977
+ 495
1978
+ 00:21:27,440 --> 00:21:31,840
1979
+ uh if you speak another language you can um
1980
+
1981
+ 496
1982
+ 00:21:30,159 --> 00:21:34,200
1983
+ try to do it yourself when I try it in
1984
+
1985
+ 497
1986
+ 00:21:31,840 --> 00:21:36,200
1987
+ Japanese it's very clear that like the
1988
+
1989
+ 498
1990
+ 00:21:34,200 --> 00:21:38,640
1991
+ model seems more intelligent in English
1992
+
1993
+ 499
1994
+ 00:21:36,200 --> 00:21:41,559
1995
+ it just seems like it can do other
1996
+
1997
+ 500
1998
+ 00:21:38,640 --> 00:21:43,120
1999
+ things even though like intelligence uh
2000
+
2001
+ 501
2002
+ 00:21:41,559 --> 00:21:44,640
2003
+ shouldn't be a function of the language
2004
+
2005
+ 502
2006
+ 00:21:43,120 --> 00:21:47,120
2007
+ that you're asking a question in right
2008
+
2009
+ 503
2010
+ 00:21:44,640 --> 00:21:49,679
2011
+ like the model should have the ability
2012
+
2013
+ 504
2014
+ 00:21:47,120 --> 00:21:51,440
2015
+ to answer questions but it because
2016
+
2017
+ 505
2018
+ 00:21:49,679 --> 00:21:53,000
2019
+ that's how humans work right our
2020
+
2021
+ 506
2022
+ 00:21:51,440 --> 00:21:54,520
2023
+ intelligence is kind of separated from
2024
+
2025
+ 507
2026
+ 00:21:53,000 --> 00:21:57,039
2027
+ our language how well we can express
2028
+
2029
+ 508
2030
+ 00:21:54,520 --> 00:22:00,480
2031
+ ourselves is a little bit different but
2032
+
2033
+ 509
2034
+ 00:21:57,039 --> 00:22:02,320
2035
+ um yeah for the final appli this was it
2036
+
2037
+ 510
2038
+ 00:22:00,480 --> 00:22:04,840
2039
+ translated back to the original language
2040
+
2041
+ 511
2042
+ 00:22:02,320 --> 00:22:09,440
2043
+ and then evaluated for translate English
2044
+
2045
+ 512
2046
+ 00:22:04,840 --> 00:22:12,559
2047
+ I'm not 100% sure about this I think it
2048
+
2049
+ 513
2050
+ 00:22:09,440 --> 00:22:13,840
2051
+ was not so that might be a confounding
2052
+
2053
+ 514
2054
+ 00:22:12,559 --> 00:22:16,799
2055
+ factor for this one but it's not a
2056
+
2057
+ 515
2058
+ 00:22:13,840 --> 00:22:20,039
2059
+ confounding factor for this one anyway
2060
+
2061
+ 516
2062
+ 00:22:16,799 --> 00:22:20,039
2063
+ yeah any other
2064
+
2065
+ 517
2066
+ 00:22:20,679 --> 00:22:23,919
2067
+ questions Okay
2068
+
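The pipeline being discussed, translating the question into English, reasoning there, and translating back, can be sketched as a few function calls. Here `translate` and `chain_of_thought` are hypothetical placeholders standing in for an MT system and an LLM call; neither is a real API.

```python
def translate(text, src, tgt):
    # placeholder translation: just tags the text with the direction
    return f"[{src}->{tgt}] {text}"

def chain_of_thought(prompt):
    # placeholder for an LLM call that reasons step by step in English
    return f"reasoned answer to: {prompt}"

def answer_via_english(question, lang="ja"):
    en_q = translate(question, lang, "en")    # 1. translate question to English
    en_answer = chain_of_thought(en_q)        # 2. do the chain of thought in English
    return translate(en_answer, "en", lang)   # 3. translate the answer back

print(answer_via_english("日本の首都はどこですか?"))
```

The alternative from the comparison, reasoning natively, would simply call `chain_of_thought` on the untranslated question.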
2069
+ 518
2070
+ 00:22:24,200 --> 00:22:29,559
2071
+ cool so this is a pretty interesting
2072
+
2073
+ 519
2074
+ 00:22:26,799 --> 00:22:32,000
2075
+ result here um
2076
+
2077
+ 520
2078
+ 00:22:29,559 --> 00:22:34,120
2079
+ and the next kind of series of results
2080
+
2081
+ 521
2082
+ 00:22:32,000 --> 00:22:35,360
2083
+ are going to be based on the uh that I'm
2084
+
2085
+ 522
2086
+ 00:22:34,120 --> 00:22:36,919
2087
+ going to talk about are going to be
2088
+
2089
+ 523
2090
+ 00:22:35,360 --> 00:22:39,240
2091
+ based on the quality of the reasoning
2092
+
2093
+ 524
2094
+ 00:22:36,919 --> 00:22:43,480
2095
+ chains that the model uses in Chain of
2096
+
2097
+ 525
2098
+ 00:22:39,240 --> 00:22:45,520
2099
+ Thought and this one is a simple
2100
+
2101
+ 526
2102
+ 00:22:43,480 --> 00:22:46,600
2103
+ heuristic for improving the quality of
2104
+
2105
+ 527
2106
+ 00:22:45,520 --> 00:22:49,279
2107
+ the reasoning
2108
+
2109
+ 528
2110
+ 00:22:46,600 --> 00:22:50,640
2111
+ chains and um yeah one thing I should
2112
+
2113
+ 529
2114
+ 00:22:49,279 --> 00:22:52,480
2115
+ mention is that the quality of the
2116
+
2117
+ 530
2118
+ 00:22:50,640 --> 00:22:55,760
2119
+ reasoning chain is definitely connected
2120
+
2121
+ 531
2122
+ 00:22:52,480 --> 00:22:58,080
2123
+ to the uh quality of the output like
2124
+
2125
+ 532
2126
+ 00:22:55,760 --> 00:23:00,159
2127
+ some that's not necessarily the case
2128
+
2129
+ 533
2130
+ 00:22:58,080 --> 00:23:04,679
2131
+ right it could just say a whole bunch of
2132
+
2133
+ 534
2134
+ 00:23:00,159 --> 00:23:07,799
2135
+ you know false like uh actually no maybe
2136
+
2137
+ 535
2138
+ 00:23:04,679 --> 00:23:07,799
2139
+ I'll I'll skip this
2140
+
2141
+ 536
2142
+ 00:23:08,200 --> 00:23:14,919
2143
+ one and go and and explain this one next
2144
+
2145
+ 537
2146
+ 00:23:11,919 --> 00:23:14,919
2147
+ so
2148
+
2149
+ 538
2150
+ 00:23:15,159 --> 00:23:19,039
2151
+ um yeah actually sorry the or the
2152
+
2153
+ 539
2154
+ 00:23:17,600 --> 00:23:20,520
2155
+ explanation ordering for this is a
2156
+
2157
+ 540
2158
+ 00:23:19,039 --> 00:23:25,360
2159
+ little bit hard but yeah I'll explain
2160
+
2161
+ 541
2162
+ 00:23:20,520 --> 00:23:26,840
2163
+ this one next so um very quickly um
2164
+
2165
+ 542
2166
+ 00:23:25,360 --> 00:23:29,640
2167
+ there's two ways that you could be
2168
+
2169
+ 543
2170
+ 00:23:26,840 --> 00:23:32,880
2171
+ reasoning one way you could be reasoning
2172
+
2173
+ 544
2174
+ 00:23:29,640 --> 00:23:35,000
2175
+ is doing an explanation first and then
2176
+
2177
+ 545
2178
+ 00:23:32,880 --> 00:23:36,720
2179
+ uh predicting the answer the other way
2180
+
2181
+ 546
2182
+ 00:23:35,000 --> 00:23:39,080
2183
+ you could do it is predicting the answer
2184
+
2185
+ 547
2186
+ 00:23:36,720 --> 00:23:43,039
2187
+ and then do it um then giving the
2188
+
2189
+ 548
2190
+ 00:23:39,080 --> 00:23:45,559
2191
+ explanation and in general if you have a
2192
+
2193
+ 549
2194
+ 00:23:43,039 --> 00:23:47,919
2195
+ reasonably strong model uh you know any
2196
+
2197
+ 550
2198
+ 00:23:45,559 --> 00:23:50,679
2199
+ of the modern kind of Frontier level
2200
+
2201
+ 551
2202
+ 00:23:47,919 --> 00:23:52,240
2203
+ models right now doing the explanation
2204
+
2205
+ 552
2206
+ 00:23:50,679 --> 00:23:54,039
2207
+ first and then making the prediction is
2208
+
2209
+ 553
2210
+ 00:23:52,240 --> 00:23:56,880
2211
+ better and the reason why is because
2212
+
2213
+ 554
2214
+ 00:23:54,039 --> 00:23:59,240
2215
+ Chain of Thought works and the model is
2216
+
2217
+ 555
2218
+ 00:23:56,880 --> 00:24:02,960
2219
+ able to break down the quest um the
2220
+
2221
+ 556
2222
+ 00:23:59,240 --> 00:24:07,279
2223
+ questions into kind of
2224
+
2225
+ 557
2226
+ 00:24:02,960 --> 00:24:10,159
2227
+ simpler uh it's able to break down the
2228
+
2229
+ 558
2230
+ 00:24:07,279 --> 00:24:11,520
2231
+ like the answer into like simp simpler
2232
+
2233
+ 559
2234
+ 00:24:10,159 --> 00:24:14,080
2235
+ questions for like mathematical
2236
+
2237
+ 560
2238
+ 00:24:11,520 --> 00:24:15,679
2239
+ reasoning or something like that um and
2240
+
2241
+ 561
2242
+ 00:24:14,080 --> 00:24:18,039
2243
+ then give me the answer so like for
2244
+
2245
+ 562
2246
+ 00:24:15,679 --> 00:24:20,000
2247
+ example for text-davinci-002 which was state
2248
+
2249
+ 563
2250
+ 00:24:18,039 --> 00:24:22,679
2251
+ of the art at the time of this writing you
2252
+
2253
+ 564
2254
+ 00:24:20,000 --> 00:24:24,360
2255
+ see a five-point boost from using um
2256
+
2257
+ 565
2258
+ 00:24:22,679 --> 00:24:29,080
2259
+ explanation first and then prediction
2260
+
2261
+ 566
2262
+ 00:24:24,360 --> 00:24:30,640
2263
+ after that um and in accuracy
2264
+
2265
+ 567
2266
+ 00:24:29,080 --> 00:24:34,039
2267
+ but for the weaker models that was not
2268
+
2269
+ 568
2270
+ 00:24:30,640 --> 00:24:36,039
2271
+ the case so if you were using um GPT-3
2272
+
2273
+ 569
2274
+ 00:24:34,039 --> 00:24:38,720
2275
+ that wasn't trained for Chain of Thought
2276
+
2277
+ 570
2278
+ 00:24:36,039 --> 00:24:40,600
2279
+ or you were using OPT uh that was not
2280
+
2281
+ 571
2282
+ 00:24:38,720 --> 00:24:42,640
2283
+ the case but nowadays I think basically
2284
+
2285
+ 572
2286
+ 00:24:40,600 --> 00:24:45,279
2287
+ all models uh doing the explanation
2288
+
2289
+ 573
2290
+ 00:24:42,640 --> 00:24:48,120
2291
+ first and then the prediction is
2292
+
2293
+ 574
2294
+ 00:24:45,279 --> 00:24:49,640
2295
+ better um so going
2296
+
2297
+ 575
2298
+ 00:24:48,120 --> 00:24:51,640
2299
+ back
2300
+
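The two orderings just contrasted, explanation before the answer versus answer before the explanation, can be written down as prompt templates. The exact strings below are illustrative assumptions, not templates from any paper:

```python
# Explanation-first: the model reasons before committing to an answer.
EXPLAIN_THEN_ANSWER = (
    "Q: {question}\n"
    "Let's think step by step, then finish with 'Answer: <answer>'."
)

# Answer-first: the model commits to an answer, then rationalizes it.
ANSWER_THEN_EXPLAIN = (
    "Q: {question}\n"
    "Start with 'Answer: <answer>', then explain your reasoning."
)

question = "A train covers 60 km in 1.5 hours. What is its average speed?"
prompt = EXPLAIN_THEN_ANSWER.format(question=question)
print(prompt)
```

With a reasonably strong model, the first template lets the chain of thought inform the answer; with the second, the explanation can only justify an answer already produced.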
2301
+ 576
2302
+ 00:24:49,640 --> 00:24:53,559
2303
+ um another thing that people have
2304
+
2305
+ 577
2306
+ 00:24:51,640 --> 00:24:55,120
2307
+ noticed is like if your explanation is
2308
+
2309
+ 578
2310
+ 00:24:53,559 --> 00:24:56,520
2311
+ wrong your prediction also tends to be
2312
+
2313
+ 579
2314
+ 00:24:55,120 --> 00:24:58,120
2315
+ wrong so if you make mistakes in
2316
+
2317
+ 580
2318
+ 00:24:56,520 --> 00:25:00,520
2319
+ intermediate steps of your explanation
2320
+
2321
+ 581
2322
+ 00:24:58,120 --> 00:25:03,679
2323
+ it tends to mess up your final
2324
+
2325
+ 582
2326
+ 00:25:00,520 --> 00:25:06,000
2327
+ prediction um so like one of the
2328
+
2329
+ 583
2330
+ 00:25:03,679 --> 00:25:09,320
2331
+ interesting ways that people have found
2332
+
2333
+ 584
2334
+ 00:25:06,000 --> 00:25:11,559
2335
+ to improve the final the explanation
2336
+
2337
+ 585
2338
+ 00:25:09,320 --> 00:25:13,880
2339
+ quality is they just observe that if the
2340
+
2341
+ 586
2342
+ 00:25:11,559 --> 00:25:18,840
2343
+ explanations are longer they tend to be
2344
+
2345
+ 587
2346
+ 00:25:13,880 --> 00:25:20,960
2347
+ better it's uh kind of interesting but
2348
+
2349
+ 588
2350
+ 00:25:18,840 --> 00:25:23,000
2351
+ like if they give you more reasoning
2352
+
2353
+ 589
2354
+ 00:25:20,960 --> 00:25:25,000
2355
+ steps this tends to be more accurate and
2356
+
2357
+ 590
2358
+ 00:25:23,000 --> 00:25:27,320
2359
+ they actually demonstrate that in this
2360
+
2361
+ 591
2362
+ 00:25:25,000 --> 00:25:29,200
2363
+ paper where here's a simple reasoning
2364
+
2365
+ 592
2366
+ 00:25:27,320 --> 00:25:31,720
2367
+ chain here's a more complex reasoning
2368
+
2369
+ 593
2370
+ 00:25:29,200 --> 00:25:35,480
2371
+ chain and you actually see for exactly
2372
+
2373
+ 594
2374
+ 00:25:31,720 --> 00:25:36,760
2375
+ the same problem they get about a 15%
2376
+
2377
+ 595
2378
+ 00:25:35,480 --> 00:25:38,360
2379
+ boost and these are kind of like
2380
+
2381
+ 596
2382
+ 00:25:36,760 --> 00:25:39,960
2383
+ naturally occurring reasoning chains
2384
+
2385
+ 597
2386
+ 00:25:38,360 --> 00:25:41,520
2387
+ they didn't like train the model to give
2388
+
2389
+ 598
2390
+ 00:25:39,960 --> 00:25:43,919
2391
+ you longer reasoning chains or anything
2392
+
2393
+ 599
2394
+ 00:25:41,520 --> 00:25:45,279
2395
+ like that but amongst the naturally
2396
+
2397
+ 600
2398
+ 00:25:43,919 --> 00:25:46,840
2399
+ occurring reasoning chains the longer
2400
+
2401
+ 601
2402
+ 00:25:45,279 --> 00:25:50,480
2403
+ ones tend to be
2404
+
2405
+ 602
2406
+ 00:25:46,840 --> 00:25:53,159
2407
+ better and this fact could be simply
2408
+
2409
+ 603
2410
+ 00:25:50,480 --> 00:25:54,679
2411
+ used to improve accuracy um and so the
2412
+
2413
+ 604
2414
+ 00:25:53,159 --> 00:25:57,360
2415
+ way they did this is they just sampled
2416
+
2417
+ 605
2418
+ 00:25:54,679 --> 00:25:59,279
2419
+ multiple reasoning paths and then they
2420
+
2421
+ 606
2422
+ 00:25:57,360 --> 00:26:00,840
2423
+ performed self consistency over the
2424
+
2425
+ 607
2426
+ 00:25:59,279 --> 00:26:03,000
2427
+ longer reasoning paths so if you
2428
+
2429
+ 608
2430
+ 00:26:00,840 --> 00:26:05,240
2431
+ remember what self consistency is it's
2432
+
2433
+ 609
2434
+ 00:26:03,000 --> 00:26:07,240
2435
+ basically like you do majority voting
2436
+
2437
+ 610
2438
+ 00:26:05,240 --> 00:26:09,679
2439
+ over the answers for multiple reasoning
2440
+
2441
+ 611
2442
+ 00:26:07,240 --> 00:26:13,880
2443
+ paths so they threw out the lower
2444
+
2445
+ 612
2446
+ 00:26:09,679 --> 00:26:13,880
2447
+ quality ones and that improved overall
2448
+
2449
+ 613
2450
+ 00:26:14,399 --> 00:26:20,279
2451
+ accuracy so um yeah that's a thing that
2452
+
2453
+ 614
2454
+ 00:26:18,000 --> 00:26:20,279
2455
+ you can
2456
+
2457
+ 615
2458
+ 00:26:21,039 --> 00:26:25,960
2459
+ do
2460
+
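The procedure just described, sample several reasoning paths, keep the longer ones, and do self-consistency (majority voting) over their answers, can be sketched like this. The sampled chains and the `keep_fraction` heuristic are made-up stand-ins for real model outputs:

```python
from collections import Counter

# Hypothetical (reasoning_chain, answer) pairs; in practice these come
# from temperature-sampling the same prompt several times.
samples = [
    ("step1 ... step5", "42"),
    ("step1", "17"),
    ("step1 ... step6", "42"),
    ("step1 step2", "17"),
    ("step1 ... step4", "42"),
]

def self_consistency(samples, keep_fraction=0.6):
    # Heuristic from the lecture: longer chains tend to be better,
    # so vote only over the longest reasoning paths.
    ranked = sorted(samples, key=lambda s: len(s[0]), reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_fraction))]
    votes = Counter(answer for _, answer in kept)
    return votes.most_common(1)[0][0]

print(self_consistency(samples))  # the short chains' answers are discarded
```

Plain self-consistency would vote over all five samples; filtering by length first throws out the lower-quality chains before the vote.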
2461
+ 616
2462
+ 00:26:23,120 --> 00:26:28,880
2463
+ um so yeah going back to systematic
2464
+
2465
+ 617
2466
+ 00:26:25,960 --> 00:26:31,360
2467
+ studies of reasoning in LLMs
2468
+
2469
+ 618
2470
+ 00:26:28,880 --> 00:26:33,559
2471
+ um one of the big results that's
2472
+
2473
+ 619
2474
+ 00:26:31,360 --> 00:26:35,880
2475
+ actually really important to know about
2476
+
2477
+ 620
2478
+ 00:26:33,559 --> 00:26:39,039
2479
+ is th this sort of Chain of Thought
2480
+
2481
+ 621
2482
+ 00:26:35,880 --> 00:26:41,080
2483
+ reasoning um is considered to be an
2484
+
2485
+ 622
2486
+ 00:26:39,039 --> 00:26:43,520
2487
+ emergent ability
2488
+
2489
+ 623
2490
+ 00:26:41,080 --> 00:26:47,080
2491
+ in uh large language models and what we
2492
+
2493
+ 624
2494
+ 00:26:43,520 --> 00:26:49,360
2495
+ mean by an emergent ability is it's or
2496
+
2497
+ 625
2498
+ 00:26:47,080 --> 00:26:53,679
2499
+ what what the the name emergent ability
2500
+
2501
+ 626
2502
+ 00:26:49,360 --> 00:26:56,399
2503
+ typically refers to is that it is
2504
+
2505
+ 627
2506
+ 00:26:53,679 --> 00:26:58,640
2507
+ something that increases dramatically as
2508
+
2509
+ 628
2510
+ 00:26:56,399 --> 00:27:01,679
2511
+ the model size gets uh up up to a
2512
+
2513
+ 629
2514
+ 00:26:58,640 --> 00:27:03,200
2515
+ certain point so these actually I'm I'm
2516
+
2517
+ 630
2518
+ 00:27:01,679 --> 00:27:06,080
2519
+ really sorry I cut off the thing on the
2520
+
2521
+ 631
2522
+ 00:27:03,200 --> 00:27:07,360
2523
+ bottom here this is like open AI does
2524
+
2525
+ 632
2526
+ 00:27:06,080 --> 00:27:08,520
2527
+ this all the time to not tell you how
2528
+
2529
+ 633
2530
+ 00:27:07,360 --> 00:27:11,399
2531
+ many parameters they have in their
2532
+
2533
+ 634
2534
+ 00:27:08,520 --> 00:27:12,760
2535
+ models but I did not do it intentionally
2536
+
2537
+ 635
2538
+ 00:27:11,399 --> 00:27:15,360
2539
+ here because I think it's actually in
2540
+
2541
+ 636
2542
+ 00:27:12,760 --> 00:27:17,320
2543
+ here in the paper um but like these ones
2544
+
2545
+ 637
2546
+ 00:27:15,360 --> 00:27:19,399
2547
+ over here are kind of the like 175
2548
+
2549
+ 638
2550
+ 00:27:17,320 --> 00:27:20,640
2551
+ billion parameter models and like the
2552
+
2553
+ 639
2554
+ 00:27:19,399 --> 00:27:24,520
2555
+ the larger
2556
+
2557
+ 640
2558
+ 00:27:20,640 --> 00:27:25,960
2559
+ models um and what you see is like up
2560
+
2561
+ 641
2562
+ 00:27:24,520 --> 00:27:29,919
2563
+ until a certain point you get basically
2564
+
2565
+ 642
2566
+ 00:27:25,960 --> 00:27:33,919
2567
+ zero accuracy and then uh the outputs
2568
+
2569
+ 643
2570
+ 00:27:29,919 --> 00:27:37,000
2571
+ improve and so for a while people were
2572
+
2573
+ 644
2574
+ 00:27:33,919 --> 00:27:39,240
2575
+ really like confused about this like why
2576
+
2577
+ 645
2578
+ 00:27:37,000 --> 00:27:41,440
2579
+ why does this happen it feels like magic
2580
+
2581
+ 646
2582
+ 00:27:39,240 --> 00:27:44,279
2583
+ that you get a really you know powerful
2584
+
2585
+ 647
2586
+ 00:27:41,440 --> 00:27:46,679
2587
+ model and then suddenly it gets better
2588
+
2589
+ 648
2590
+ 00:27:44,279 --> 00:27:49,799
2591
+ uh uh like at the very
2592
+
2593
+ 649
2594
+ 00:27:46,679 --> 00:27:52,159
2595
+ end but actually there's a much simpler
2596
+
2597
+ 650
2598
+ 00:27:49,799 --> 00:27:53,760
2599
+ solution there's not not that much magic
2600
+
2601
+ 651
2602
+ 00:27:52,159 --> 00:27:55,960
2603
+ to this
2604
+
2605
+ 652
2606
+ 00:27:53,760 --> 00:27:58,399
2607
+ and we've known about this for a little
2608
+
2609
+ 653
2610
+ 00:27:55,960 --> 00:28:00,919
2611
+ while but this paper from 2023 really
2612
+
2613
+ 654
2614
+ 00:27:58,399 --> 00:28:02,360
2615
+ like expressed it very clearly um so I
2616
+
2617
+ 655
2618
+ 00:28:00,919 --> 00:28:04,360
2619
+ highly recommend you take a look at this
2620
+
2621
+ 656
2622
+ 00:28:02,360 --> 00:28:07,720
2623
+ if you're interested in kind of like the
2624
+
2625
+ 657
2626
+ 00:28:04,360 --> 00:28:10,159
2627
+ emergent abilities in language models but
2628
+
2629
+ 658
2630
+ 00:28:07,720 --> 00:28:15,039
2631
+ basically the the thing about emergent
2632
+
2633
+ 659
2634
+ 00:28:10,159 --> 00:28:19,720
2635
+ abilities is that they're mostly
2636
+
2637
+ 660
2638
+ 00:28:15,039 --> 00:28:20,720
2639
+ a matter of how you um how you measure
2640
+
2641
+ 661
2642
+ 00:28:19,720 --> 00:28:22,519
2643
+ your
2644
+
2645
+ 662
2646
+ 00:28:20,720 --> 00:28:27,640
2647
+ models
2648
+
2649
+ 663
2650
+ 00:28:22,519 --> 00:28:30,120
2651
+ accuracy and so let's say as your model
2652
+
2653
+ 664
2654
+ 00:28:27,640 --> 00:28:30,120
2655
+ gets better
2656
+
2657
+ 665
2658
+ 00:28:39,039 --> 00:28:45,600
2659
+ it gets gradually better at predicting
2660
+
2661
+ 666
2662
+ 00:28:41,200 --> 00:28:45,600
2663
+ the like a reasonable next
2664
+
2665
+ 667
2666
+ 00:28:47,799 --> 00:28:54,760
2667
+ token so this is like a I don't know
2668
+
2669
+ 668
2670
+ 00:28:50,919 --> 00:28:59,120
2671
+ like 200 million parameter model 500
2672
+
2673
+ 669
2674
+ 00:28:54,760 --> 00:29:03,240
2675
+ million 1 billion 3 billion
2676
+
2677
+ 670
2678
+ 00:28:59,120 --> 00:29:06,600
2679
+ 7 billion and like 70 billion or
2680
+
2681
+ 671
2682
+ 00:29:03,240 --> 00:29:09,600
2683
+ something like that um and so this is
2684
+
2685
+ 672
2686
+ 00:29:06,600 --> 00:29:12,640
2687
+ like the next token prediction accuracy
2688
+
2689
+ 673
2690
+ 00:29:09,600 --> 00:29:14,320
2691
+ um or like the the accuracy of
2692
+
2693
+ 674
2694
+ 00:29:12,640 --> 00:29:16,279
2695
+ predicting a reasonable next token that
2696
+
2697
+ 675
2698
+ 00:29:14,320 --> 00:29:18,880
2699
+ won't result in your reasoning
2700
+
2701
+ 676
2702
+ 00:29:16,279 --> 00:29:20,000
2703
+ chain being wrong and making a mistake
2704
+
2705
+ 677
2706
+ 00:29:18,880 --> 00:29:24,200
2707
+ and
2708
+
2709
+ 678
2710
+ 00:29:20,000 --> 00:29:26,200
2711
+ so if you have an accuracy like this in
2712
+
2713
+ 679
2714
+ 00:29:24,200 --> 00:29:28,880
2715
+ order to get the correct answer like
2716
+
2717
+ 680
2718
+ 00:29:26,200 --> 00:29:30,559
2719
+ let's say there's about five or eight
2720
+
2721
+ 681
2722
+ 00:29:28,880 --> 00:29:33,519
2723
+ places where you could possibly make a
2724
+
2725
+ 682
2726
+ 00:29:30,559 --> 00:29:35,080
2727
+ mistake in the derivation like one
2728
+
2729
+ 683
2730
+ 00:29:33,519 --> 00:29:36,760
2731
+ common places to make a mistake in a
2732
+
2733
+ 684
2734
+ 00:29:35,080 --> 00:29:38,519
2735
+ derivation for math for example are
2736
+
2737
+ 685
2738
+ 00:29:36,760 --> 00:29:40,200
2739
+ where you predict a number like where
2740
+
2741
+ 686
2742
+ 00:29:38,519 --> 00:29:42,679
2743
+ you predict the result of an equation
2744
+
2745
+ 687
2746
+ 00:29:40,200 --> 00:29:44,120
2747
+ and you might have five reasoning steps
2748
+
2749
+ 688
2750
+ 00:29:42,679 --> 00:29:47,720
2751
+ where you might predict the result of an
2752
+
2753
+ 689
2754
+ 00:29:44,120 --> 00:29:53,039
2755
+ equation um and so if we do
2756
+
2757
+ 690
2758
+ 00:29:47,720 --> 00:29:53,039
2759
+ this let's exponentiate all of these by
2760
+
2761
+ 691
2762
+ 00:29:54,799 --> 00:29:58,799
2763
+ five um
2764
+
2765
+ 692
2766
+ 00:30:06,640 --> 00:30:16,120
2767
+ uh write python code to exponentiate
2768
+
2769
+ 693
2770
+ 00:30:11,200 --> 00:30:16,120
2771
+ these numbers by
2772
+
2773
+ 694
2774
+ 00:30:19,600 --> 00:30:27,559
2775
+ five I'm lazy enough that I just ask
2776
+
2777
+ 695
2778
+ 00:30:22,159 --> 00:30:27,559
2779
+ ChatGPT ChatGPT to do this for me now
2780
+
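The calculation being asked of ChatGPT here can be done directly. The per-token accuracies below are illustrative assumptions standing in for the values read off the plot, not the paper's actual numbers:

```python
# Illustrative per-token prediction accuracies for models of increasing size.
per_token_acc = [0.05, 0.25, 0.45, 0.60, 0.75, 0.90]

STEPS = 5  # assumed number of places in the derivation where a mistake is possible

# Raising each accuracy to the 5th power gives the chance of getting every
# step right -- a smooth per-token curve becomes a sharply "emergent" one.
whole_chain_acc = [round(a ** STEPS, 4) for a in per_token_acc]
for a, w in zip(per_token_acc, whole_chain_acc):
    print(f"per-token {a:.2f} -> whole chain {w:.4f}")
```

Under an exact-match metric the whole-chain numbers sit near zero until per-token accuracy is quite high, then take off, which is the shape the lecture demonstrates next.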
2781
+ 696
2782
+ 00:30:30,080 --> 00:30:32,919
2783
+ and so if we do
2784
+
2785
+ 697
2786
+ 00:30:35,399 --> 00:30:39,840
2787
+ this do go go chat
2788
+
2789
+ 698
2790
+ 00:30:50,000 --> 00:30:58,360
2791
+ GPT so now we are getting something that
2792
+
2793
+ 699
2794
+ 00:30:54,760 --> 00:30:58,360
2795
+ looks like zero
2796
+
2797
+ 700
2798
+ 00:31:02,159 --> 00:31:07,960
2799
+ um basically zero basically
2800
+
2801
+ 701
2802
+ 00:31:05,639 --> 00:31:10,960
2803
+ zero
2804
+
2805
+ 702
2806
+ 00:31:07,960 --> 00:31:10,960
2807
+ uh
2808
+
2809
+ 703
2810
+ 00:31:13,399 --> 00:31:16,399
2811
+ 3%
2812
+
2813
+ 704
2814
+ 00:31:16,799 --> 00:31:22,440
2815
+ 23%
2816
+
2817
+ 705
2818
+ 00:31:19,080 --> 00:31:22,440
2819
+ 9% and
2820
+
2821
+ 706
2822
+ 00:31:22,559 --> 00:31:28,720
2823
+ 90% so what you can see is there's
2824
+
2825
+ 707
2826
+ 00:31:26,639 --> 00:31:30,600
2827
+ actually a pretty steady gradation of
2828
+
2829
+ 708
2830
+ 00:31:28,720 --> 00:31:33,120
2831
+ like the next token prediction accuracy
2832
+
2833
+ 709
2834
+ 00:31:30,600 --> 00:31:36,600
2835
+ here but if you need to predict multiple
2836
+
2837
+ 710
2838
+ 00:31:33,120 --> 00:31:38,919
2839
+ tokens correct then it looks like it's
2840
+
2841
+ 711
2842
+ 00:31:36,600 --> 00:31:41,240
2843
+ doing basically nothing until you get up
2844
+
2845
+ 712
2846
+ 00:31:38,919 --> 00:31:43,600
2847
+ to like 75% next token accuracy and then
2848
+
2849
+ 713
2850
+ 00:31:41,240 --> 00:31:45,320
2851
+ it starts taking off so that's like uh
2852
+
2853
+ 714
2854
+ 00:31:43,600 --> 00:31:46,960
2855
+ what happens in emergent abilities and
2856
+
2857
+ 715
2858
+ 00:31:45,320 --> 00:31:49,159
2859
+ you'll notice that most things that are
2860
+
2861
+ 716
2862
+ 00:31:46,960 --> 00:31:50,880
2863
+ talking about emergent abilities are
2864
+
2865
+ 717
2866
+ 00:31:49,159 --> 00:31:53,559
2867
+ usually talking about some sort of Chain
2868
+
2869
+ 718
2870
+ 00:31:50,880 --> 00:31:55,799
2871
+ of Thought or some sort of reasoning uh
2872
+
2873
+ 719
2874
+ 00:31:53,559 --> 00:31:58,480
2875
+ reasoning accuracy even if that's not
2876
+
2877
+ 720
2878
+ 00:31:55,799 --> 00:32:00,480
2879
+ the case um even if they're just
2880
+
2881
+ 721
2882
+ 00:31:58,480 --> 00:32:02,639
2883
+ predicting a single token it can still
2884
+
2885
+ 722
2886
+ 00:32:00,480 --> 00:32:05,399
2887
+ happen because
2888
+
2889
+ 723
2890
+ 00:32:02,639 --> 00:32:08,559
2891
+ basically the probability of a single
2892
+
2893
+ 724
2894
+ 00:32:05,399 --> 00:32:11,639
2895
+ token can continue to go up smoothly but
2896
+
2897
+ 725
2898
+ 00:32:08,559 --> 00:32:13,240
2899
+ you only get the the token correct after
2900
+
2901
+ 726
2902
+ 00:32:11,639 --> 00:32:14,760
2903
+ the probability starts getting higher
2904
+
2905
+ 727
2906
+ 00:32:13,240 --> 00:32:18,320
2907
+ than all the others and that's also a
2908
+
2909
+ 728
2910
+ 00:32:14,760 --> 00:32:21,279
2911
+ discontinuous function so um so
2912
+
2913
+ 729
2914
+ 00:32:18,320 --> 00:32:23,080
2915
+ basically what this paper shows is like
2916
+
2917
+ 730
2918
+ 00:32:21,279 --> 00:32:26,440
2919
+ even if you have like the probability of
2920
+
2921
+ 731
2922
+ 00:32:23,080 --> 00:32:28,679
2923
+ the correct token going um the correct
2924
+
2925
+ 732
2926
+ 00:32:26,440 --> 00:32:30,639
2927
+ token going up gradually uh you can see
2928
+
2929
+ 733
2930
+ 00:32:28,679 --> 00:32:33,440
2931
+ this emergent ability based on how you
2932
+
2933
+ 734
2934
+ 00:32:30,639 --> 00:32:37,279
2935
+ uh measure it so um that's an important
2936
+
2937
+ 735
2938
+ 00:32:33,440 --> 00:32:38,960
2939
+ thing to realize about uh this another
2940
+
2941
+ 736
2942
+ 00:32:37,279 --> 00:32:41,080
2943
+ corollary of this is like let's say you
2944
+
2945
+ 737
2946
+ 00:32:38,960 --> 00:32:44,679
2947
+ want to do interesting experiments about
2948
+
2949
+ 738
2950
+ 00:32:41,080 --> 00:32:45,960
2951
+ reasoning on um on smaller models like
2952
+
2953
+ 739
2954
+ 00:32:44,679 --> 00:32:47,279
2955
+ let's say you want to train a smaller
2956
+
2957
+ 740
2958
+ 00:32:45,960 --> 00:32:49,159
2959
+ model and see how it improves on
2960
+
2961
+ 741
2962
+ 00:32:47,279 --> 00:32:52,159
2963
+ reasoning I would definitely encourage
2964
+
2965
+ 742
2966
+ 00:32:49,159 --> 00:32:54,799
2967
+ you to measure not only accuracy because
2968
+
2969
+ 743
2970
+ 00:32:52,159 --> 00:32:57,279
2971
+ you might see like very little change in
2972
+
2973
+ 744
2974
+ 00:32:54,799 --> 00:32:58,720
2975
+ accuracy but also measure like log
2976
+
2977
+ 745
2978
+ 00:32:57,279 --> 00:33:00,360
2979
+ likelihood of reasoning chains or
2980
+
2981
+ 746
2982
+ 00:32:58,720 --> 00:33:02,960
2983
+ something like that because you'll see a
2984
+
2985
+ 747
2986
+ 00:33:00,360 --> 00:33:02,960
2987
+ a smoother
2988
+
2989
+ 748
2990
+ 00:33:03,799 --> 00:33:09,080
2991
+ curve cool um any questions about
2992
+
2993
+ 749
2994
+ 00:33:11,039 --> 00:33:17,240
2995
+ this okay um sounds
2996
+
2997
+ 750
2998
+ 00:33:14,720 --> 00:33:20,559
2999
+ good so I I talked a little bit about
3000
+
3001
+ 751
3002
+ 00:33:17,240 --> 00:33:23,120
3003
+ this um one one of the things here that
3004
+
3005
+ 752
3006
+ 00:33:20,559 --> 00:33:25,320
3007
+ I didn't talk about is this paper
3008
+
3009
+ 753
3010
+ 00:33:23,120 --> 00:33:28,159
3011
+ measures not just the accuracy of the
3012
+
3013
+ 754
3014
+ 00:33:25,320 --> 00:33:30,880
3015
+ answer with chain of thoughts um but it
3016
+
3017
+ 755
3018
+ 00:33:28,159 --> 00:33:35,840
3019
+ also measures the factuality of the
3020
+
3021
+ 756
3022
+ 00:33:30,880 --> 00:33:40,480
3023
+ explanation so basically um whether the
3024
+
3025
+ 757
3026
+ 00:33:35,840 --> 00:33:40,480
3027
+ explanation is a good explanation for
3028
+
3029
+ 758
3030
+ 00:33:40,760 --> 00:33:47,240
3031
+ the um whether the explanation is a good
3032
+
3033
+ 759
3034
+ 00:33:43,960 --> 00:33:50,039
3035
+ explanation for the actual
3036
+
3037
+ 760
3038
+ 00:33:47,240 --> 00:33:51,919
3039
+ derivation um and also the consistency
3040
+
3041
+ 761
3042
+ 00:33:50,039 --> 00:33:53,480
3043
+ of the answer in the explanation to
3044
+
3045
+ 762
3046
+ 00:33:51,919 --> 00:33:56,120
3047
+ figure out whether the answer and the
3048
+
3049
+ 763
3050
+ 00:33:53,480 --> 00:33:58,200
3051
+ explanation um match up with each other
3052
+
3053
+ 764
3054
+ 00:33:56,120 --> 00:33:59,600
3055
+ and they they did this with some uh
3056
+
3057
+ 765
3058
+ 00:33:58,200 --> 00:34:02,320
3059
+ synthetic data sets where you could
3060
+
3061
+ 766
3062
+ 00:33:59,600 --> 00:34:07,120
3063
+ actually measure the um the re the
3064
+
3065
+ 767
3066
+ 00:34:02,320 --> 00:34:10,399
3067
+ reasoning steps uh by using math so um
3068
+
3069
+ 768
3070
+ 00:34:07,120 --> 00:34:13,560
3071
+ what they were able to find is basically
3072
+
3073
+ 769
3074
+ 00:34:10,399 --> 00:34:15,760
3075
+ the answer and the explanation um
3076
+
3077
+ 770
3078
+ 00:34:13,560 --> 00:34:17,639
3079
+ when the answer in the explanation
3080
+
3081
+ 771
3082
+ 00:34:15,760 --> 00:34:22,079
3083
+ tended to be consistent especially for
3084
+
3085
+ 772
3086
+ 00:34:17,639 --> 00:34:23,760
3087
+ the stronger models and let's see yeah
3088
+
3089
+ 773
3090
+ 00:34:22,079 --> 00:34:25,399
3091
+ the the answer in the explanation tended
3092
+
3093
+ 774
3094
+ 00:34:23,760 --> 00:34:28,440
3095
+ to be consistent especially for the
3096
+
3097
+ 775
3098
+ 00:34:25,399 --> 00:34:30,879
3099
+ stronger models and um
3100
+
3101
+ 776
3102
+ 00:34:28,440 --> 00:34:33,000
3103
+ that also meant that if you had higher
3104
+
3105
+ 777
3106
+ 00:34:30,879 --> 00:34:35,839
3107
+ factuality in the explanation that
3108
+
3109
+ 778
3110
+ 00:34:33,000 --> 00:34:38,240
3111
+ translates into higher um you know
3112
+
3113
+ 779
3114
+ 00:34:35,839 --> 00:34:40,520
3115
+ factuality of the accuracy of the actual
3116
+
3117
+ 780
3118
+ 00:34:38,240 --> 00:34:43,159
3119
+ prediction um I would bet that these
3120
+
3121
+ 781
3122
+ 00:34:40,520 --> 00:34:45,240
3123
+ numbers are even higher uh nowadays I
3124
+
3125
+ 782
3126
+ 00:34:43,159 --> 00:34:49,040
3127
+ bet the consistency is even higher uh
3128
+
3129
+ 783
3130
+ 00:34:45,240 --> 00:34:49,040
3131
+ with more modern models than text-davinci-
3132
+
3133
+ 784
3134
+ 00:34:49,399 --> 00:34:53,200
3135
+ 002 and the re the reason being is like
3136
+
3137
+ 785
3138
+ 00:34:51,839 --> 00:34:54,760
3139
+ number one models are stronger number
3140
+
3141
+ 786
3142
+ 00:34:53,200 --> 00:34:56,560
3143
+ two all models are like trained for
3144
+
3145
+ 787
3146
+ 00:34:54,760 --> 00:35:00,960
3147
+ Chain of Thought pretty aggressively now
3148
+
3149
+ 788
3150
+ 00:34:56,560 --> 00:35:00,960
3151
+ so uh that would make the difference
3152
+
3153
+ 789
3154
+ 00:35:02,200 --> 00:35:08,640
3155
+ there cool um so the the other thing I'd
3156
+
3157
+ 790
3158
+ 00:35:07,000 --> 00:35:09,359
3159
+ like to talk about is training for Chain
3160
+
3161
+ 791
3162
+ 00:35:08,640 --> 00:35:13,079
3163
+ of
3164
+
3165
+ 792
3166
+ 00:35:09,359 --> 00:35:17,440
3167
+ Thought um so there's a fair amount of
3168
+
3169
+ 793
3170
+ 00:35:13,079 --> 00:35:19,200
3171
+ work in this general direction um from
3172
+
3173
+ 794
3174
+ 00:35:17,440 --> 00:35:23,040
3175
+ my point of view there's basically two
3176
+
3177
+ 795
3178
+ 00:35:19,200 --> 00:35:25,800
3179
+ ways that people do this nowadays um the
3180
+
3181
+ 796
3182
+ 00:35:23,040 --> 00:35:28,960
3183
+ first way is usually through generating
3184
+
3185
+ 797
3186
+ 00:35:25,800 --> 00:35:33,480
3187
+ lots of synthetic data that represents
3188
+
3189
+ 798
3190
+ 00:35:28,960 --> 00:35:37,800
3191
+ chains of thoughts and then using that
3192
+
3193
+ 799
3194
+ 00:35:33,480 --> 00:35:39,520
3195
+ to um to train models and this is the
3196
+
3197
+ 800
3198
+ 00:35:37,800 --> 00:35:41,839
3199
+ most famous version of this although
3200
+
3201
+ 801
3202
+ 00:35:39,520 --> 00:35:44,079
3203
+ this paper cites a lot of uh a lot of
3204
+
3205
+ 802
3206
+ 00:35:41,839 --> 00:35:45,760
3207
+ other ones but basically they generate a
3208
+
3209
+ 803
3210
+ 00:35:44,079 --> 00:35:48,280
3211
+ large and diverse uh Chain of Thought
3212
+
3213
+ 804
3214
+ 00:35:45,760 --> 00:35:51,240
3215
+ data set from GPT 3.5 and
3216
+
3217
+ 805
3218
+ 00:35:48,280 --> 00:35:53,200
3219
+ GPT-4 um it includes 5 million complex
3220
+
3221
+ 806
3222
+ 00:35:51,240 --> 00:35:55,640
3223
+ instructions I think they generated 1
3224
+
3225
+ 807
3226
+ 00:35:53,200 --> 00:35:59,000
3227
+ million from GPT-4 and 4 million from uh
3228
+
3229
+ 808
3230
+ 00:35:55,640 --> 00:36:01,640
3231
+ GPT 3.5 just because generating long
3232
+
3233
+ 809
3234
+ 00:35:59,000 --> 00:36:06,520
3235
+ sequences from GPT-4 is expensive and they
3236
+
3237
+ 810
3238
+ 00:36:01,640 --> 00:36:09,640
3239
+ didn't want to do that many um and
3240
+
3241
+ 811
3242
+ 00:36:06,520 --> 00:36:11,760
3243
+ then they uh achieved corresponding high
3244
+
3245
+ 812
3246
+ 00:36:09,640 --> 00:36:13,200
3247
+ accuracy on Chain of Thought related
3248
+
3249
+ 813
3250
+ 00:36:11,760 --> 00:36:16,200
3251
+ things compared to other data sets so
3252
+
3253
+ 814
3254
+ 00:36:13,200 --> 00:36:17,760
3255
+ compared to like Alpaca which is much uh
3256
+
3257
+ 815
3258
+ 00:36:16,200 --> 00:36:21,760
3259
+ smaller and doesn't have as much Chain
3260
+
3261
+ 816
3262
+ 00:36:17,760 --> 00:36:24,079
3263
+ of Thought and also um uh Vicuna which
3264
+
3265
+ 817
3266
+ 00:36:21,760 --> 00:36:26,640
3267
+ is similarly less focused on chain of
3268
+
3269
+ 818
3270
+ 00:36:24,079 --> 00:36:29,359
3271
+ thought they were able to do uh a good
3272
+
3273
+ 819
3274
+ 00:36:26,640 --> 00:36:31,599
3275
+ job
3276
+
3277
+ 820
3278
+ 00:36:29,359 --> 00:36:33,640
3279
+ um this paper was by Microsoft and they
3280
+
3281
+ 821
3282
+ 00:36:31,599 --> 00:36:36,960
3283
+ didn't actually release the Orca data
3284
+
3285
+ 822
3286
+ 00:36:33,640 --> 00:36:39,400
3287
+ set um for whatever reason uh legal
3288
+
3289
+ 823
3290
+ 00:36:36,960 --> 00:36:41,400
3291
+ legal or competitive reasons or whatever
3292
+
3293
+ 824
3294
+ 00:36:39,400 --> 00:36:43,000
3295
+ but there's another open Orca data set
3296
+
3297
+ 825
3298
+ 00:36:41,400 --> 00:36:44,359
3299
+ that you can download and use uh that
3300
+
3301
+ 826
3302
+ 00:36:43,000 --> 00:36:47,480
3303
+ attempts to replicate it and it's
3304
+
3305
+ 827
3306
+ 00:36:44,359 --> 00:36:50,440
3307
+ reasonably good so uh you you can uh
3308
+
3309
+ 828
3310
+ 00:36:47,480 --> 00:36:50,440
3311
+ keep that in mind if you're
3312
+
3313
+ 829
3314
+ 00:36:50,800 --> 00:36:59,520
3315
+ interested um this is another really
3316
+
3317
+ 830
3318
+ 00:36:53,280 --> 00:36:59,520
3319
+ interesting paper on uh trying to create
3320
+
3321
+ 831
3322
+ 00:37:00,160 --> 00:37:05,760
3323
+ assessments automatic assessments of how
3324
+
3325
+ 832
3326
+ 00:37:03,440 --> 00:37:09,880
3327
+ good chains of thought are and what they
3328
+
3329
+ 833
3330
+ 00:37:05,760 --> 00:37:13,079
3331
+ do essentially is it's relatively simple
3332
+
3333
+ 834
3334
+ 00:37:09,880 --> 00:37:15,200
3335
+ they get human feedback on each step of
3336
+
3337
+ 835
3338
+ 00:37:13,079 --> 00:37:17,760
3339
+ a derivation so they just basically ask
3340
+
3341
+ 836
3342
+ 00:37:15,200 --> 00:37:20,599
3343
+ people is this step of the derivation
3344
+
3345
+ 837
3346
+ 00:37:17,760 --> 00:37:22,160
3347
+ good and uh if the answer is yes then
3348
+
3349
+ 838
3350
+ 00:37:20,599 --> 00:37:24,760
3351
+ they give it a a smiley face if the
3352
+
3353
+ 839
3354
+ 00:37:22,160 --> 00:37:26,440
3355
+ answer is no they give it a frowny face
3356
+
3357
+ 840
3358
+ 00:37:24,760 --> 00:37:28,560
3359
+ and they use this to train a reward
3360
+
3361
+ 841
3362
+ 00:37:26,440 --> 00:37:32,000
3363
+ model where the reward model basically
3364
+
3365
+ 842
3366
+ 00:37:28,560 --> 00:37:34,760
3367
+ predicts whether each uh thing of the um
3368
+
3369
+ 843
3370
+ 00:37:32,000 --> 00:37:36,800
3371
+ each step of the derivation is good and
3372
+
3373
+ 844
3374
+ 00:37:34,760 --> 00:37:38,160
3375
+ so we have two examples over here I know
3376
+
3377
+ 845
3378
+ 00:37:36,800 --> 00:37:41,160
3379
+ this is really small you might be able
3380
+
3381
+ 846
3382
+ 00:37:38,160 --> 00:37:43,200
3383
+ to see it um either in the paper on uh
3384
+
3385
+ 847
3386
+ 00:37:41,160 --> 00:37:46,359
3387
+ the slides on the website but what we
3388
+
3389
+ 848
3390
+ 00:37:43,200 --> 00:37:49,000
3391
+ can see here is that it assesses each of
3392
+
3393
+ 849
3394
+ 00:37:46,359 --> 00:37:52,680
3395
+ these steps and uh checks that the
3396
+
3397
+ 850
3398
+ 00:37:49,000 --> 00:37:55,760
3399
+ answer is good um but it's also able to
3400
+
3401
+ 851
3402
+ 00:37:52,680 --> 00:37:57,119
3403
+ identify places where uh like steps are
3404
+
3405
+ 852
3406
+ 00:37:55,760 --> 00:37:59,560
3407
+ incorrect and then the final answer
3408
+
3409
+ 853
3410
+ 00:37:57,119 --> 00:38:02,560
3411
+ becomes incorrect and then they use this
3412
+
3413
+ 854
3414
+ 00:37:59,560 --> 00:38:04,440
3415
+ for training um a Chain of Thought style
3416
+
3417
+ 855
3418
+ 00:38:02,560 --> 00:38:06,319
3419
+ model so they have the model generate
3420
+
3421
+ 856
3422
+ 00:38:04,440 --> 00:38:08,520
3423
+ chains of thought and they assess them
3424
+
3425
+ 857
3426
+ 00:38:06,319 --> 00:38:10,079
3427
+ with the reward model and upweight
3428
+
3429
+ 858
3430
+ 00:38:08,520 --> 00:38:12,160
3431
+ answers that have good chains of thought
3432
+
3433
+ 859
3434
+ 00:38:10,079 --> 00:38:15,680
3435
+ and so the good thing about this is they
3436
+
3437
+ 860
3438
+ 00:38:12,160 --> 00:38:17,440
3439
+ actually don't need um they don't need
3440
+
3441
+ 861
3442
+ 00:38:15,680 --> 00:38:20,160
3443
+ the correct answers to train the model
3444
+
3445
+ 862
3446
+ 00:38:17,440 --> 00:38:21,640
3447
+ this way and because they don't need the
3448
+
3449
+ 863
3450
+ 00:38:20,160 --> 00:38:23,920
3451
+ correct answers to train the model this
3452
+
3453
+ 864
3454
+ 00:38:21,640 --> 00:38:26,640
3455
+ way they can also train the model on
3456
+
3457
+ 865
3458
+ 00:38:23,920 --> 00:38:29,200
3459
+ lots of other questions the reason why
3460
+
3461
+ 866
3462
+ 00:38:26,640 --> 00:38:31,520
3463
+ this works is because like Chain of
3464
+
3465
+ 867
3466
+ 00:38:29,200 --> 00:38:34,880
3467
+ Thought makes it easier to generate each
3468
+
3469
+ 868
3470
+ 00:38:31,520 --> 00:38:36,720
3471
+ of the steps in the derivation it's also
3472
+
3473
+ 869
3474
+ 00:38:34,880 --> 00:38:38,640
3475
+ easier to assess whether an individual
3476
+
3477
+ 870
3478
+ 00:38:36,720 --> 00:38:40,000
3479
+ step in a derivation is wrong than
3480
+
3481
+ 871
3482
+ 00:38:38,640 --> 00:38:42,960
3483
+ assess whether the answer is correct
3484
+
3485
+ 872
3486
+ 00:38:40,000 --> 00:38:45,319
3487
+ overall so um this feedback signal is
3488
+
3489
+ 873
3490
+ 00:38:42,960 --> 00:38:48,640
3491
+ easier to get model provided than it is
3492
+
3493
+ 874
3494
+ 00:38:45,319 --> 00:38:51,160
3495
+ for um uh like getting feedback on the
3496
+
3497
+ 875
3498
+ 00:38:48,640 --> 00:38:53,839
3499
+ answer itself yeah failure in one step
3500
+
3501
+ 876
3502
+ 00:38:51,160 --> 00:38:56,920
3503
+ causes all the other steps to fail yep
3504
+
3505
+ 877
3506
+ 00:38:53,839 --> 00:38:57,960
3507
+ you just assess the next steps based on
3508
+
3509
+ 878
3510
+ 00:38:56,920 --> 00:39:00,079
3511
+ the assumption
3512
+
3513
+ 879
3514
+ 00:38:57,960 --> 00:39:02,920
3515
+ the or do
3516
+
3517
+ 880
3518
+ 00:39:00,079 --> 00:39:05,240
3519
+ you I I don't think
3520
+
3521
+ 881
3522
+ 00:39:02,920 --> 00:39:07,599
3523
+ they I don't think they do that I think
3524
+
3525
+ 882
3526
+ 00:39:05,240 --> 00:39:10,119
3527
+ they um it it's a good question I'm not
3528
+
3529
+ 883
3530
+ 00:39:07,599 --> 00:39:12,160
3531
+ 100% sure about this but I think they um
3532
+
3533
+ 884
3534
+ 00:39:10,119 --> 00:39:14,280
3535
+ assess each one of the steps
3536
+
3537
+ 885
3538
+ 00:39:12,160 --> 00:39:15,920
3539
+ independently um and it's not
3540
+
3541
+ 886
3542
+ 00:39:14,280 --> 00:39:17,480
3543
+ necessarily the case that like failing
3544
+
3545
+ 887
3546
+ 00:39:15,920 --> 00:39:19,000
3547
+ on this step means the step is wrong
3548
+
3549
+ 888
3550
+ 00:39:17,480 --> 00:39:21,319
3551
+ right it could be just not using it at
3552
+
3553
+ 889
3554
+ 00:39:19,000 --> 00:39:25,240
3555
+ all also
3556
+
3557
+ 890
3558
+ 00:39:21,319 --> 00:39:25,240
3559
+ so um
3560
+
3561
+ 891
3562
+ 00:39:25,440 --> 00:39:31,119
3563
+ cool so a final thing I'd like to talk about
3564
+
3565
+ 892
3566
+ 00:39:28,160 --> 00:39:34,640
3567
+ which I think is kind of interesting um
3568
+
3569
+ 893
3570
+ 00:39:31,119 --> 00:39:37,040
3571
+ is abductive reasoning uh or learning
3572
+
3573
+ 894
3574
+ 00:39:34,640 --> 00:39:40,040
3575
+ explanations from
3576
+
3577
+ 895
3578
+ 00:39:37,040 --> 00:39:40,040
3579
+ data
3580
+
3581
+ 896
3582
+ 00:39:46,359 --> 00:39:49,359
3583
+ and
3584
+
3585
+ 897
3586
+ 00:39:52,440 --> 00:39:57,119
3587
+ sorry
3588
+
3589
+ 898
3590
+ 00:39:54,480 --> 00:40:00,760
3591
+ so basically the idea is can we find a
3592
+
3593
+ 899
3594
+ 00:39:57,119 --> 00:40:03,599
3595
+ rule that underlies a pattern in data
3596
+
3597
+ 900
3598
+ 00:40:00,760 --> 00:40:06,680
3599
+ and here are some examples of this the
3600
+
3601
+ 901
3602
+ 00:40:03,599 --> 00:40:11,680
3603
+ basic idea is if we have
3604
+
3605
+ 902
3606
+ 00:40:06,680 --> 00:40:16,599
3607
+ examples um which are like if I put
3608
+
3609
+ 903
3610
+ 00:40:11,680 --> 00:40:19,960
3611
+ a cylinder and a square a cylinder and a
3612
+
3613
+ 904
3614
+ 00:40:16,599 --> 00:40:22,119
3615
+ cube on uh this pink block I get a noise
3616
+
3617
+ 905
3618
+ 00:40:19,960 --> 00:40:25,440
3619
+ if I put just a cylinder on the pink
3620
+
3621
+ 906
3622
+ 00:40:22,119 --> 00:40:29,359
3623
+ block I don't get a noise and you want
3624
+
3625
+ 907
3626
+ 00:40:25,440 --> 00:40:31,800
3627
+ to discover underlying rules based on
3628
+
3629
+ 908
3630
+ 00:40:29,359 --> 00:40:33,160
3631
+ the data that you observed and so why
3632
+
3633
+ 909
3634
+ 00:40:31,800 --> 00:40:34,720
3635
+ would you want to do this there's a
3636
+
3637
+ 910
3638
+ 00:40:33,160 --> 00:40:38,000
3639
+ couple reasons why you would want to do
3640
+
3641
+ 911
3642
+ 00:40:34,720 --> 00:40:41,560
3643
+ this um the first reason why you would
3644
+
3645
+ 912
3646
+ 00:40:38,000 --> 00:40:42,920
3647
+ like to do this is because um you might
3648
+
3649
+ 913
3650
+ 00:40:41,560 --> 00:40:45,119
3651
+ want something that you can explain to
3652
+
3653
+ 914
3654
+ 00:40:42,920 --> 00:40:47,760
3655
+ humans right you can explain that this
3656
+
3657
+ 915
3658
+ 00:40:45,119 --> 00:40:51,240
3659
+ underlying pattern um exists in this
3660
+
3661
+ 916
3662
+ 00:40:47,760 --> 00:40:55,119
3663
+ data it explains why the
3664
+
3665
+ 917
3666
+ 00:40:51,240 --> 00:40:57,319
3667
+ data you know appears as it does appear
3668
+
3669
+ 918
3670
+ 00:40:55,119 --> 00:40:59,240
3671
+ and then humans can go in and analyze it
3672
+
3673
+ 919
3674
+ 00:40:57,319 --> 00:41:02,079
3675
+ or something like that so recently
3676
+
3677
+ 920
3678
+ 00:40:59,240 --> 00:41:03,880
3679
+ there's been a big focus on like using
3680
+
3681
+ 921
3682
+ 00:41:02,079 --> 00:41:06,480
3683
+ large language models for scientific
3684
+
3685
+ 922
3686
+ 00:41:03,880 --> 00:41:08,240
3687
+ inquiry and other things like that by
3688
+
3689
+ 923
3690
+ 00:41:06,480 --> 00:41:10,920
3691
+ coming up with good explanations for why
3692
+
3693
+ 924
3694
+ 00:41:08,240 --> 00:41:12,160
3695
+ data is the way it is so if we were able
3696
+
3697
+ 925
3698
+ 00:41:10,920 --> 00:41:15,599
3699
+ to do that that would be really
3700
+
3701
+ 926
3702
+ 00:41:12,160 --> 00:41:19,280
3703
+ interesting another thing is um language
3704
+
3705
+ 927
3706
+ 00:41:15,599 --> 00:41:22,960
3707
+ models are not particularly good
3708
+
3709
+ 928
3710
+ 00:41:19,280 --> 00:41:24,760
3711
+ at coming up with they're not
3712
+
3713
+ 929
3714
+ 00:41:22,960 --> 00:41:29,480
3715
+ particularly good at being consistent
3716
+
3717
+ 930
3718
+ 00:41:24,760 --> 00:41:33,640
3719
+ about difficult tasks across very large
3720
+
3721
+ 931
3722
+ 00:41:29,480 --> 00:41:35,319
3723
+ you know numbers of examples so if you
3724
+
3725
+ 932
3726
+ 00:41:33,640 --> 00:41:37,920
3727
+ could look at like all of the data at
3728
+
3729
+ 933
3730
+ 00:41:35,319 --> 00:41:41,240
3731
+ once infer general rules from them put
3732
+
3733
+ 934
3734
+ 00:41:37,920 --> 00:41:43,480
3735
+ those rules in a prompt and then apply
3736
+
3737
+ 935
3738
+ 00:41:41,240 --> 00:41:44,960
3739
+ that prompt to make predictions on new
3740
+
3741
+ 936
3742
+ 00:41:43,480 --> 00:41:47,880
3743
+ examples you might be able to raise your
3744
+
3745
+ 937
3746
+ 00:41:44,960 --> 00:41:49,760
3747
+ overall accuracy as well so it's kind of
3748
+
3749
+ 938
3750
+ 00:41:47,880 --> 00:41:52,480
3751
+ like you know that's how humans learn as
3752
+
3753
+ 939
3754
+ 00:41:49,760 --> 00:41:55,560
3755
+ well right we don't like just memorize
3756
+
3757
+ 940
3758
+ 00:41:52,480 --> 00:41:57,400
3759
+ each example um if we just look at a few
3760
+
3761
+ 941
3762
+ 00:41:55,560 --> 00:41:59,040
3763
+ examples then we might you know not
3764
+
3765
+ 942
3766
+ 00:41:57,400 --> 00:42:02,560
3767
+ generalize well to new examples so we
3768
+
3769
+ 943
3770
+ 00:41:59,040 --> 00:42:06,359
3771
+ kind of tried to abstract away general
3772
+
3773
+ 944
3774
+ 00:42:02,560 --> 00:42:08,160
3775
+ rules um so this is also similar to
3776
+
3777
+ 945
3778
+ 00:42:06,359 --> 00:42:10,200
3779
+ program induction from input output
3780
+
3781
+ 946
3782
+ 00:42:08,160 --> 00:42:12,240
3783
+ examples which I talked about during the code
3784
+
3785
+ 947
3786
+ 00:42:10,200 --> 00:42:14,040
3787
+ uh generation class so you have like
3788
+
3789
+ 948
3790
+ 00:42:12,240 --> 00:42:16,200
3791
+ input output examples and from them you
3792
+
3793
+ 949
3794
+ 00:42:14,040 --> 00:42:18,119
3795
+ would like to come up with uh general
3796
+
3797
+ 950
3798
+ 00:42:16,200 --> 00:42:19,920
3799
+ rules but this is a little bit more
3800
+
3801
+ 951
3802
+ 00:42:18,119 --> 00:42:21,920
3803
+ General it doesn't necessarily need to
3804
+
3805
+ 952
3806
+ 00:42:19,920 --> 00:42:24,160
3807
+ be a program that you're inducing it
3808
+
3809
+ 953
3810
+ 00:42:21,920 --> 00:42:25,920
3811
+ could be you know a grammar or it could
3812
+
3813
+ 954
3814
+ 00:42:24,160 --> 00:42:29,119
3815
+ be an explanation or it could be
3816
+
3817
+ 955
3818
+ 00:42:25,920 --> 00:42:29,119
3819
+ anything else like this
3820
+
3821
+ 956
3822
+ 00:42:30,079 --> 00:42:34,680
3823
+ um so there's a bit of work on rule
3824
+
3825
+ 957
3826
+ 00:42:31,960 --> 00:42:36,800
3827
+ induction with llms it's pretty recent
3828
+
3829
+ 958
3830
+ 00:42:34,680 --> 00:42:40,200
3831
+ work uh but I think it's pretty
3832
+
3833
+ 959
3834
+ 00:42:36,800 --> 00:42:43,400
3835
+ interesting so the first one is um
3836
+
3837
+ 960
3838
+ 00:42:40,200 --> 00:42:45,119
3839
+ hypothesis generation or the first step
3840
+
3841
+ 961
3842
+ 00:42:43,400 --> 00:42:47,839
3843
+ um of this particular work here is
3844
+
3845
+ 962
3846
+ 00:42:45,119 --> 00:42:53,280
3847
+ hypothesis generation and basically what
3848
+
3849
+ 963
3850
+ 00:42:47,839 --> 00:42:55,480
3851
+ it does is it takes all of these uh you
3852
+
3853
+ 964
3854
+ 00:42:53,280 --> 00:42:58,119
3855
+ know input output examples and from
3856
+
3857
+ 965
3858
+ 00:42:55,480 --> 00:43:01,680
3859
+ these input output examples it predicts
3860
+
3861
+ 966
3862
+ 00:42:58,119 --> 00:43:04,720
3863
+ these uh rules like the answer is always
3864
+
3865
+ 967
3866
+ 00:43:01,680 --> 00:43:06,720
3867
+ one or uh you want to pick the smallest
3868
+
3869
+ 968
3870
+ 00:43:04,720 --> 00:43:10,839
3871
+ one or you want to pick the first
3872
+
3873
+ 969
3874
+ 00:43:06,720 --> 00:43:12,880
3875
+ element and then you evaluate it um and
3876
+
3877
+ 970
3878
+ 00:43:10,839 --> 00:43:14,359
3879
+ so you pick the smallest one and you can
3880
+
3881
+ 971
3882
+ 00:43:12,880 --> 00:43:16,040
3883
+ either evaluate it using another
3884
+
3885
+ 972
3886
+ 00:43:14,359 --> 00:43:19,040
3887
+ language model or you can evaluate it
3888
+
3889
+ 973
3890
+ 00:43:16,040 --> 00:43:21,280
3891
+ using symbolic uh using a symbolic
3892
+
3893
+ 974
3894
+ 00:43:19,040 --> 00:43:23,359
3895
+ evaluator um if it's a program you could
3896
+
3897
+ 975
3898
+ 00:43:21,280 --> 00:43:24,680
3899
+ use a symbolic evaluator if it's a
3900
+
3901
+ 976
3902
+ 00:43:23,359 --> 00:43:28,559
3903
+ language model you could just ask the
3904
+
3905
+ 977
3906
+ 00:43:24,680 --> 00:43:30,960
3907
+ language model to pick you know
3908
+
3909
+ 978
3910
+ 00:43:28,559 --> 00:43:33,400
3911
+ an answer one always or pick the
3912
+
3913
+ 979
3914
+ 00:43:30,960 --> 00:43:35,400
3915
+ smallest one or pick the first element
3916
+
3917
+ 980
3918
+ 00:43:33,400 --> 00:43:37,480
3919
+ and then you get lots of outputs and
3920
+
3921
+ 981
3922
+ 00:43:35,400 --> 00:43:39,240
3923
+ then when you get lots of outputs you
3924
+
3925
+ 982
3926
+ 00:43:37,480 --> 00:43:42,079
3927
+ then can compare them against the
3928
+
3929
+ 983
3930
+ 00:43:39,240 --> 00:43:44,559
3931
+ expected outputs and verify whether the
3932
+
3933
+ 984
3934
+ 00:43:42,079 --> 00:43:47,920
3935
+ rule is correct verify whether the rule
3936
+
3937
+ 985
3938
+ 00:43:44,559 --> 00:43:50,160
3939
+ gives you the appropriate answer
3940
+
3941
+ 986
3942
+ 00:43:47,920 --> 00:43:53,599
3943
+ and once you've done that you can go
3944
+
3945
+ 987
3946
+ 00:43:50,160 --> 00:43:56,079
3947
+ back and do hypothesis refinement um uh
3948
+
3949
+ 988
3950
+ 00:43:53,599 --> 00:43:57,720
3951
+ and maybe even give this feedback about
3952
+
3953
+ 989
3954
+ 00:43:56,079 --> 00:44:00,079
3955
+ like what was wrong
3956
+
3957
+ 990
3958
+ 00:43:57,720 --> 00:44:03,280
3959
+ and gradually refine you know more
3960
+
3961
+ 991
3962
+ 00:44:00,079 --> 00:44:03,280
3963
+ accurate and more complex
3964
+
3965
+ 992
3966
+ 00:44:04,880 --> 00:44:11,040
3967
+ hypotheses this is another variant of
3968
+
3969
+ 993
3970
+ 00:44:07,720 --> 00:44:12,760
3971
+ this idea um which uses different
3972
+
3973
+ 994
3974
+ 00:44:11,040 --> 00:44:14,960
3975
+ methodology I think both are completely
3976
+
3977
+ 995
3978
+ 00:44:12,760 --> 00:44:17,920
3979
+ valid but um this one has a little bit
3980
+
3981
+ 996
3982
+ 00:44:14,960 --> 00:44:20,400
3983
+ higher data constraints so basically
3984
+
3985
+ 997
3986
+ 00:44:17,920 --> 00:44:23,160
3987
+ what we do is we use hypotheses in Chain
3988
+
3989
+ 998
3990
+ 00:44:20,400 --> 00:44:25,319
3991
+ of Thought reasoning and keep ones that
3992
+
3993
+ 999
3994
+ 00:44:23,160 --> 00:44:28,480
3995
+ result in correct
3996
+
3997
+ 1000
3998
+ 00:44:25,319 --> 00:44:30,760
3999
+ answers so
4000
+
4001
+ 1001
4002
+ 00:44:28,480 --> 00:44:35,880
4003
+ uh this is the step where they're trying
4004
+
4005
+ 1002
4006
+ 00:44:30,760 --> 00:44:40,440
4007
+ to induce rules and so here this says um
4008
+
4009
+ 1003
4010
+ 00:44:35,880 --> 00:44:42,599
4011
+ in base 9 what is 76 + 14 and they used
4012
+
4013
+ 1004
4014
+ 00:44:40,440 --> 00:44:44,079
4015
+ base 9 here obviously because if it was
4016
+
4017
+ 1005
4018
+ 00:44:42,599 --> 00:44:45,520
4019
+ in base 10 the language model would just
4020
+
4021
+ 1006
4022
+ 00:44:44,079 --> 00:44:48,400
4023
+ solve the problem and it's not very
4024
+
4025
+ 1007
4026
+ 00:44:45,520 --> 00:44:54,319
4027
+ interesting so uh they they did base 9
4028
+
4029
+ 1008
4030
+ 00:44:48,400 --> 00:44:55,839
4031
+ addition and so the answer is um we have
4032
+
4033
+ 1009
4034
+ 00:44:54,319 --> 00:45:00,280
4035
+ or the answer provided by the language
4036
+
4037
+ 1010
4038
+ 00:44:55,839 --> 00:45:03,319
4039
+ model is we have 6 + 4 = 11 um the digit
4040
+
4041
+ 1011
4042
+ 00:45:00,280 --> 00:45:07,480
4043
+ is 1 and the carry is 1 we have 7 + 1 +
4044
+
4045
+ 1012
4046
+ 00:45:03,319 --> 00:45:09,480
4047
+ 1 = 10 the digit is zero and the carry is one
4048
+
4049
+ 1013
4050
+ 00:45:07,480 --> 00:45:13,000
4051
+ the leading digit is one so the answer is
4052
+
4053
+ 1014
4054
+ 00:45:09,480 --> 00:45:15,240
4055
+ 101 um and this verifies so they get the
4056
+
4057
+ 1015
4058
+ 00:45:13,000 --> 00:45:17,240
4059
+ answer correct and so they know that
4060
+
4061
+ 1016
4062
+ 00:45:15,240 --> 00:45:20,800
4063
+ they assume that this derivation is also
4064
+
4065
+ 1017
4066
+ 00:45:17,240 --> 00:45:25,599
4067
+ correct and then they extract particular
4068
+
4069
+ 1018
4070
+ 00:45:20,800 --> 00:45:28,200
4071
+ rules like 6 + 4 = 11 and 7 + 1 + 1 = 10
4072
+
4073
+ 1019
4074
+ 00:45:25,599 --> 00:45:30,800
4075
+ um and they add this to the rule
4076
+
4077
+ 1020
4078
+ 00:45:28,200 --> 00:45:32,960
4079
+ Library so then the question is how do
4080
+
4081
+ 1021
4082
+ 00:45:30,800 --> 00:45:35,000
4083
+ they extract the rules the way they
4084
+
4085
+ 1022
4086
+ 00:45:32,960 --> 00:45:37,920
4087
+ extract the rules is they have an in
4088
+
4089
+ 1023
4090
+ 00:45:35,000 --> 00:45:40,760
4091
+ context prompt which surrounds the rules
4092
+
4093
+ 1024
4094
+ 00:45:37,920 --> 00:45:43,520
4095
+ by basically XML tags that says this is
4096
+
4097
+ 1025
4098
+ 00:45:40,760 --> 00:45:46,640
4099
+ a rule that should be extracted and so
4100
+
4101
+ 1026
4102
+ 00:45:43,520 --> 00:45:48,400
4103
+ then um anything that is in an XML tag
4104
+
4105
+ 1027
4106
+ 00:45:46,640 --> 00:45:50,960
4107
+ they when you get the correct answer
4108
+
4109
+ 1028
4110
+ 00:45:48,400 --> 00:45:53,440
4111
+ they extract and add that to the rule
4112
+
4113
+ 1029
4114
+ 00:45:50,960 --> 00:45:55,680
4115
+ library and then conversely like if the
4116
+
4117
+ 1030
4118
+ 00:45:53,440 --> 00:45:57,800
4119
+ derivation um if the answer is wrong
4120
+
4121
+ 1031
4122
+ 00:45:55,680 --> 00:45:59,920
4123
+ they just don't add it or they add it as
4124
+
4125
+ 1032
4126
+ 00:45:57,800 --> 00:46:01,079
4127
+ a negative example and say this is an
4128
+
4129
+ 1033
4130
+ 00:45:59,920 --> 00:46:04,119
4131
+ incorrect
4132
+
4133
+ 1034
4134
+ 00:46:01,079 --> 00:46:05,839
4135
+ rule um and then in the final step where
4136
+
4137
+ 1035
4138
+ 00:46:04,119 --> 00:46:07,480
4139
+ they do deductive reasoning they can
4140
+
4141
+ 1036
4142
+ 00:46:05,839 --> 00:46:09,119
4143
+ then go ahead and use these rules and
4144
+
4145
+ 1037
4146
+ 00:46:07,480 --> 00:46:11,640
4147
+ improve accuracy and they demonstrate
4148
+
4149
+ 1038
4150
+ 00:46:09,119 --> 00:46:12,960
4151
+ that that helps so basically these are
4152
+
4153
+ 1039
4154
+ 00:46:11,640 --> 00:46:14,520
4155
+ two different approaches one is
4156
+
4157
+ 1040
4158
+ 00:46:12,960 --> 00:46:17,400
4159
+ extracting directly from the Chain of
4160
+
4161
+ 1041
4162
+ 00:46:14,520 --> 00:46:18,880
4163
+ Thought the other is uh a priori trying
4164
+
4165
+ 1042
4166
+ 00:46:17,400 --> 00:46:23,760
4167
+ to generate rules from the whole rule
4168
+
4169
+ 1043
4170
+ 00:46:18,880 --> 00:46:27,480
4171
+ base and then um then verifying them um
4172
+
4173
+ 1044
4174
+ 00:46:23,760 --> 00:46:31,000
4175
+ notably both of these require verifiers
4176
+
4177
+ 1045
4178
+ 00:46:27,480 --> 00:46:33,839
4179
+ um and so in some recent work which uh I
4180
+
4181
+ 1046
4182
+ 00:46:31,000 --> 00:46:36,040
4183
+ I hope will be on archive very soon uh
4184
+
4185
+ 1047
4186
+ 00:46:33,839 --> 00:46:38,839
4187
+ we took a look at whether language
4188
+
4189
+ 1048
4190
+ 00:46:36,040 --> 00:46:42,800
4191
+ models themselves can verify their own
4192
+
4193
+ 1049
4194
+ 00:46:38,839 --> 00:46:46,079
4195
+ hypotheses and um so that removes the
4196
+
4197
+ 1050
4198
+ 00:46:42,800 --> 00:46:48,000
4199
+ symbolic verifier here um by just asking
4200
+
4201
+ 1051
4202
+ 00:46:46,079 --> 00:46:51,480
4203
+ the language model whether the output is
4204
+
4205
+ 1052
4206
+ 00:46:48,000 --> 00:46:53,480
4207
+ correct or not and um we found that with
4208
+
4209
+ 1053
4210
+ 00:46:51,480 --> 00:46:55,240
4211
+ very powerful language models like GPT-4
4212
+
4213
+ 1054
4214
+ 00:46:53,480 --> 00:46:57,760
4215
+ you can actually do that as well so that
4216
+
4217
+ 1055
4218
+ 00:46:55,240 --> 00:47:01,319
4219
+ removes the necessity to have
4220
+
4221
+ 1056
4222
+ 00:46:57,760 --> 00:47:05,480
4223
+ a symbolic verifier in the loop as
4224
+
4225
+ 1057
4226
+ 00:47:01,319 --> 00:47:08,200
4227
+ well cool um the reason why I wanted to
4228
+
4229
+ 1058
4230
+ 00:47:05,480 --> 00:47:09,440
4231
+ introduce this is I don't know if like
4232
+
4233
+ 1059
4234
+ 00:47:08,200 --> 00:47:12,359
4235
+ like it seems like all of these have
4236
+
4237
+ 1060
4238
+ 00:47:09,440 --> 00:47:16,359
4239
+ been applied so far on kind of very toy
4240
+
4241
+ 1061
4242
+ 00:47:12,359 --> 00:47:19,119
4243
+ examples like you know
4244
+
4245
+ 1062
4246
+ 00:47:16,359 --> 00:47:22,240
4247
+ um like honestly I don't really care
4248
+
4249
+ 1063
4250
+ 00:47:19,119 --> 00:47:25,920
4251
+ about whether I can play Tetris or um
4252
+
4253
+ 1064
4254
+ 00:47:22,240 --> 00:47:27,920
4255
+ you know uh find the largest or smallest
4256
+
4257
+ 1065
4258
+ 00:47:25,920 --> 00:47:30,880
4259
+ number within
4260
+
4261
+ 1066
4262
+ 00:47:27,920 --> 00:47:33,720
4263
+ um you know list or something like this
4264
+
4265
+ 1067
4266
+ 00:47:30,880 --> 00:47:36,000
4267
+ but I think they have like really exciting
4268
+
4269
+ 1068
4270
+ 00:47:33,720 --> 00:47:38,480
4271
+ possibilities for how we could extract
4272
+
4273
+ 1069
4274
+ 00:47:36,000 --> 00:47:40,319
4275
+ more General patterns and like use these
4276
+
4277
+ 1070
4278
+ 00:47:38,480 --> 00:47:41,720
4279
+ to improve language model based systems
4280
+
4281
+ 1071
4282
+ 00:47:40,319 --> 00:47:43,599
4283
+ so I think it's a really exciting
4284
+
4285
+ 1072
4286
+ 00:47:41,720 --> 00:47:48,000
4287
+ research
4288
+
4289
+ 1073
4290
+ 00:47:43,599 --> 00:47:51,000
4291
+ Direction um cool any questions about
4292
+
4293
+ 1074
4294
+ 00:47:48,000 --> 00:47:51,000
4295
+ this
4296
+
4297
+ 1075
4298
+ 00:47:54,240 --> 00:48:02,160
4299
+ yeah yeah so that's a good question
4300
+
4301
+ 1076
4302
+ 00:47:58,160 --> 00:48:06,079
4303
+ um so I I think tool
4304
+
4305
+ 1077
4306
+ 00:48:02,160 --> 00:48:09,359
4307
+ learning is maybe kind of a sub subset
4308
+
4309
+ 1078
4310
+ 00:48:06,079 --> 00:48:12,319
4311
+ of this possibly like I feel like in
4312
+
4313
+ 1079
4314
+ 00:48:09,359 --> 00:48:13,559
4315
+ tool learning you're learning functions
4316
+
4317
+ 1080
4318
+ 00:48:12,319 --> 00:48:15,559
4319
+ that
4320
+
4321
+ 1081
4322
+ 00:48:13,559 --> 00:48:17,559
4323
+ are I don't know if they are like good
4324
+
4325
+ 1082
4326
+ 00:48:15,559 --> 00:48:19,680
4327
+ explanations of the data but at the very
4328
+
4329
+ 1083
4330
+ 00:48:17,559 --> 00:48:23,119
4331
+ least they're like useful um they're
4332
+
4333
+ 1084
4334
+ 00:48:19,680 --> 00:48:25,119
4335
+ useful rules for solving the task um so
4336
+
4337
+ 1085
4338
+ 00:48:23,119 --> 00:48:26,880
4339
+ I I feel like they're approaching it
4340
+
4341
+ 1086
4342
+ 00:48:25,119 --> 00:48:28,760
4343
+ from two different motivations but
4344
+
4345
+ 1087
4346
+ 00:48:26,880 --> 00:48:30,960
4347
+ actually
4348
+
4349
+ 1088
4350
+ 00:48:28,760 --> 00:48:33,559
4351
+ the methods that they're using are
4352
+
4353
+ 1089
4354
+ 00:48:30,960 --> 00:48:36,240
4355
+ similar so like for example in our tool
4356
+
4357
+ 1090
4358
+ 00:48:33,559 --> 00:48:38,559
4359
+ learning work Trove we generated like
4360
+
4361
+ 1091
4362
+ 00:48:36,240 --> 00:48:42,240
4363
+ multiple options for tools and we kept
4364
+
4365
+ 1092
4366
+ 00:48:38,559 --> 00:48:44,000
4367
+ the ones that had high self-consistency
4368
+
4369
+ 1093
4370
+ 00:48:42,240 --> 00:48:46,800
4371
+ so that's kind of like the verifier step
4372
+
4373
+ 1094
4374
+ 00:48:44,000 --> 00:48:49,040
4375
+ right and then um we threw away the ones
4376
+
4377
+ 1095
4378
+ 00:48:46,800 --> 00:48:52,760
4379
+ that weren't useful so that helps make a
4380
+
4381
+ 1096
4382
+ 00:48:49,040 --> 00:48:56,760
4383
+ concise rule set so
4384
+
4385
+ 1097
4386
+ 00:48:52,760 --> 00:48:59,280
4387
+ yeah and then like could we use tools to
4388
+
4389
+ 1098
4390
+ 00:48:56,760 --> 00:49:01,880
4391
+ [Music]
4392
+
4393
+ 1099
4394
+ 00:48:59,280 --> 00:49:04,079
4395
+ attack kind of the more like conceptual
4396
+
4397
+ 1100
4398
+ 00:49:01,880 --> 00:49:05,319
4399
+ reasoning stuff I I don't actually know
4400
+
4401
+ 1101
4402
+ 00:49:04,079 --> 00:49:06,839
4403
+ uh the answer to that it's a good
4404
+
4405
+ 1102
4406
+ 00:49:05,319 --> 00:49:10,599
4407
+ question
4408
+
4409
+ 1103
4410
+ 00:49:06,839 --> 00:49:10,599
4411
+ yeah any any other
4412
+
4413
+ 1104
4414
+ 00:49:11,240 --> 00:49:18,680
4415
+ things okay uh another final one that
4416
+
4417
+ 1105
4418
+ 00:49:14,440 --> 00:49:21,680
4419
+ I'd like to introduce um this is really
4420
+
4421
+ 1106
4422
+ 00:49:18,680 --> 00:49:23,839
4423
+ like I I really really like this paper
4424
+
4425
+ 1107
4426
+ 00:49:21,680 --> 00:49:27,440
4427
+ um just from the point of view of its
4428
+
4429
+ 1108
4430
+ 00:49:23,839 --> 00:49:29,880
4431
+ ambition and motivation um and
4432
+
4433
+ 1109
4434
+ 00:49:27,440 --> 00:49:31,920
4435
+ the idea is that they want to learn
4436
+
4437
+ 1110
4438
+ 00:49:29,880 --> 00:49:34,440
4439
+ differences between text
4440
+
4441
+ 1111
4442
+ 00:49:31,920 --> 00:49:36,200
4443
+ Collections and why would you want to do
4444
+
4445
+ 1112
4446
+ 00:49:34,440 --> 00:49:38,079
4447
+ this there's actually a ton of reasons
4448
+
4449
+ 1113
4450
+ 00:49:36,200 --> 00:49:39,720
4451
+ why you would want to do this but the
4452
+
4453
+ 1114
4454
+ 00:49:38,079 --> 00:49:44,720
4455
+ the best one that they give
4456
+
4457
+ 1115
4458
+ 00:49:39,720 --> 00:49:44,720
4459
+ here is actually no sorry maybe I I
4460
+
4461
+ 1116
4462
+ 00:49:46,440 --> 00:49:50,359
4463
+ didn't okay so this is a less
4464
+
4465
+ 1117
4466
+ 00:49:48,480 --> 00:49:53,440
4467
+ interesting one the the more interesting
4468
+
4469
+ 1118
4470
+ 00:49:50,359 --> 00:49:57,799
4471
+ one uh that they give in the paper is um
4472
+
4473
+ 1119
4474
+ 00:49:53,440 --> 00:50:00,200
4475
+ examples of reports from patients who
4476
+
4477
+ 1120
4478
+ 00:49:57,799 --> 00:50:04,200
4479
+ took an actual drug and took a
4480
+
4481
+ 1121
4482
+ 00:50:00,200 --> 00:50:06,640
4483
+ placebo and so patients write about like
4484
+
4485
+ 1122
4486
+ 00:50:04,200 --> 00:50:08,400
4487
+ their their symptoms or how they felt or
4488
+
4489
+ 1123
4490
+ 00:50:06,640 --> 00:50:11,000
4491
+ they have checkups or things like that
4492
+
4493
+ 1124
4494
+ 00:50:08,400 --> 00:50:13,839
4495
+ that are all written in natural language
4496
+
4497
+ 1125
4498
+ 00:50:11,000 --> 00:50:16,319
4499
+ so one of the things that doctors try to
4500
+
4501
+ 1126
4502
+ 00:50:13,839 --> 00:50:18,000
4503
+ do is they try to look at all of these
4504
+
4505
+ 1127
4506
+ 00:50:16,319 --> 00:50:20,240
4507
+ reports and figure out if there's any
4508
+
4509
+ 1128
4510
+ 00:50:18,000 --> 00:50:21,880
4511
+ like consistent difference between
4512
+
4513
+ 1129
4514
+ 00:50:20,240 --> 00:50:25,079
4515
+ people who took a placebo and people who
4516
+
4517
+ 1130
4518
+ 00:50:21,880 --> 00:50:27,359
4519
+ took an actual um actual drug and this
4520
+
4521
+ 1131
4522
+ 00:50:25,079 --> 00:50:31,079
4523
+ is like a major part of medical trials
4524
+
4525
+ 1132
4526
+ 00:50:27,359 --> 00:50:32,960
4527
+ right um and so the idea is like given
4528
+
4529
+ 1133
4530
+ 00:50:31,079 --> 00:50:35,000
4531
+ all of the texts of people who took the
4532
+
4533
+ 1134
4534
+ 00:50:32,960 --> 00:50:36,599
4535
+ drug given all the texts of people who
4536
+
4537
+ 1135
4538
+ 00:50:35,000 --> 00:50:38,319
4539
+ of people who took the placebo could you
4540
+
4541
+ 1136
4542
+ 00:50:36,599 --> 00:50:40,960
4543
+ automatically extract differences
4544
+
4545
+ 1137
4546
+ 00:50:38,319 --> 00:50:45,000
4547
+ between them in some way and so the
4548
+
4549
+ 1138
4550
+ 00:50:40,960 --> 00:50:47,760
4551
+ methodology that they use for this is um
4552
+
4553
+ 1139
4554
+ 00:50:45,000 --> 00:50:51,359
4555
+ they have like group a uh the Manchester
4556
+
4557
+ 1140
4558
+ 00:50:47,760 --> 00:50:53,240
4559
+ United soccer Squad welcomes Rising Star
4560
+
4561
+ 1141
4562
+ 00:50:51,359 --> 00:50:54,599
4563
+ as Serena Williams joins the UCLA
4564
+
4565
+ 1142
4566
+ 00:50:53,240 --> 00:50:56,920
4567
+ women's tennis roster and then you have
4568
+
4569
+ 1143
4570
+ 00:50:54,599 --> 00:51:00,200
4571
+ like 20 more examples and then here you
4572
+
4573
+ 1144
4574
+ 00:50:56,920 --> 00:51:03,480
4575
+ have Egypt's President uh at the African
4576
+
4577
+ 1145
4578
+ 00:51:00,200 --> 00:51:07,200
4579
+ Union Summit um and other things
4580
+
4581
+ 1146
4582
+ 00:51:03,480 --> 00:51:12,000
4583
+ like that in 20 examples uh not seen
4584
+
4585
+ 1147
4586
+ 00:51:07,200 --> 00:51:14,359
4587
+ here and so then if I asked a question
4588
+
4589
+ 1148
4590
+ 00:51:12,000 --> 00:51:16,359
4591
+ um the original data set includes news
4592
+
4593
+ 1149
4594
+ 00:51:14,359 --> 00:51:18,680
4595
+ summaries the two corpora are generated
4596
+
4597
+ 1150
4598
+ 00:51:16,359 --> 00:51:21,240
4599
+ based on when they were published uh
4600
+
4601
+ 1151
4602
+ 00:51:18,680 --> 00:51:24,359
4603
+ samples from group a include news from
4604
+
4605
+ 1152
4606
+ 00:51:21,240 --> 00:51:27,480
4607
+ 2007 while samples from Group B include
4608
+
4609
+ 1153
4610
+ 00:51:24,359 --> 00:51:29,000
4611
+ news from 2008 I'm a journalist trying to
4612
+
4613
+ 1154
4614
+ 00:51:27,480 --> 00:51:31,240
4615
+ understand what topics are popular
4616
+
4617
+ 1155
4618
+ 00:51:29,000 --> 00:51:33,440
4619
+ across years please write a list of
4620
+
4621
+ 1156
4622
+ 00:51:31,240 --> 00:51:35,280
4623
+ hypotheses separated by bullet points of
4624
+
4625
+ 1157
4626
+ 00:51:33,440 --> 00:51:39,920
4627
+ how data points from group a differ from
4628
+
4629
+ 1158
4630
+ 00:51:35,280 --> 00:51:42,400
4631
+ those of group b um and then formatting
4632
+
4633
+ 1159
4634
+ 00:51:39,920 --> 00:51:44,160
4635
+ information
4636
+
4637
+ 1160
4638
+ 00:51:42,400 --> 00:51:46,960
4639
+ um
4640
+
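The hypothesis-proposal prompt described above can be sketched roughly as follows. This is an illustrative paraphrase of the setup as described in the lecture, not the paper's verbatim template; the function name and exact wording are assumptions.

```python
def build_hypothesis_prompt(group_a, group_b, context, persona, n_shown=20):
    # Illustrative sketch: sub-sample each corpus so the prompt fits the
    # model's context window, then ask for bulleted hypotheses about how
    # the two groups differ. Wording paraphrases the lecture, not the paper.
    a_block = "\n".join(f"- {s}" for s in group_a[:n_shown])
    b_block = "\n".join(f"- {s}" for s in group_b[:n_shown])
    return (
        f"Group A samples:\n{a_block}\n\n"
        f"Group B samples:\n{b_block}\n\n"
        f"{context} {persona} "
        "Please write a list of hypotheses, separated by bullet points, "
        "of how data points from Group A differ from those of Group B."
    )
```

For the news example in the lecture, `context` would state that Group A is news from 2007 and Group B is news from 2008, and `persona` would be the journalist framing.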
4641
+ 1161
4642
+ 00:51:44,160 --> 00:51:49,680
4643
+ and so based on the two sentence groups
4644
+
4645
+ 1162
4646
+ 00:51:46,960 --> 00:51:50,559
4647
+ A and B from the above more sentences in
4648
+
4649
+ 1163
4650
+ 00:51:49,680 --> 00:51:53,400
4651
+ group
4652
+
4653
+ 1164
4654
+ 00:51:50,559 --> 00:51:55,240
4655
+ a mention a sports team or mention about
4656
+
4657
+ 1165
4658
+ 00:51:53,400 --> 00:51:57,319
4659
+ academic relations or things like that
4660
+
4661
+ 1166
4662
+ 00:51:55,240 --> 00:51:58,599
4663
+ and so what this allows you to do is it
4664
+
4665
+ 1167
4666
+ 00:51:57,319 --> 00:52:00,319
4667
+ allows you to come up with a whole bunch
4668
+
4669
+ 1168
4670
+ 00:51:58,599 --> 00:52:01,400
4671
+ of hypotheses about why one might be
4672
+
4673
+ 1169
4674
+ 00:52:00,319 --> 00:52:04,920
4675
+ better than the
4676
+
4677
+ 1170
4678
+ 00:52:01,400 --> 00:52:08,920
4679
+ other so the problem with this though is
4680
+
4681
+ 1171
4682
+ 00:52:04,920 --> 00:52:10,880
4683
+ like because of language model you know
4684
+
4685
+ 1172
4686
+ 00:52:08,920 --> 00:52:13,440
4687
+ limits number one they might just
4688
+
4689
+ 1173
4690
+ 00:52:10,880 --> 00:52:17,119
4691
+ hallucinate things and be totally wrong
4692
+
4693
+ 1174
4694
+ 00:52:13,440 --> 00:52:19,680
4695
+ um number two
4696
+
4697
+ 1175
4698
+ 00:52:17,119 --> 00:52:21,040
4699
+ the size of the context so that they can
4700
+
4701
+ 1176
4702
+ 00:52:19,680 --> 00:52:23,960
4703
+ take into account when making this
4704
+
4705
+ 1177
4706
+ 00:52:21,040 --> 00:52:26,720
4707
+ decision is relatively small so the next
4708
+
4709
+ 1178
4710
+ 00:52:23,960 --> 00:52:29,280
4711
+ thing that they do is then they have a a
4712
+
4713
+ 1179
4714
+ 00:52:26,720 --> 00:52:32,119
4715
+ much larger Corpus of
4716
+
4717
+ 1180
4718
+ 00:52:29,280 --> 00:52:33,200
4719
+ text um with like a thousand examples or
4720
+
4721
+ 1181
4722
+ 00:52:32,119 --> 00:52:36,640
4723
+ something like
4724
+
4725
+ 1182
4726
+ 00:52:33,200 --> 00:52:40,240
4727
+ this and then they treat each of these
4728
+
4729
+ 1183
4730
+ 00:52:36,640 --> 00:52:42,680
4731
+ hypotheses as a
4732
+
4733
+ 1184
4734
+ 00:52:40,240 --> 00:52:44,559
4735
+ classifier and then they go through all
4736
+
4737
+ 1185
4738
+ 00:52:42,680 --> 00:52:47,480
4739
+ of the examples from Corpus one which is
4740
+
4741
+ 1186
4742
+ 00:52:44,559 --> 00:52:50,480
4743
+ like maybe 2000 year 2000 and then
4744
+
4745
+ 1187
4746
+ 00:52:47,480 --> 00:52:52,079
4747
+ Corpus 2 which is year 2008 and they ask
4748
+
4749
+ 1188
4750
+ 00:52:50,480 --> 00:52:55,880
4751
+ the language model with respect to all
4752
+
4753
+ 1189
4754
+ 00:52:52,079 --> 00:52:58,119
4755
+ of them um does this sentence mention a
4756
+
4757
+ 1190
4758
+ 00:52:55,880 --> 00:53:01,400
4759
+ sports team recruiting a new
4760
+
4761
+ 1191
4762
+ 00:52:58,119 --> 00:53:04,839
4763
+ member um and so you get a
4764
+
4765
+ 1192
4766
+ 00:53:01,400 --> 00:53:04,839
4767
+ classification for each one of
4768
+
4769
+ 1193
4770
+ 00:53:12,359 --> 00:53:17,440
4771
+ these and you get a certain number of
4772
+
4773
+ 1194
4774
+ 00:53:14,520 --> 00:53:18,799
4775
+ ones and zeros and so once you have a
4776
+
4777
+ 1195
4778
+ 00:53:17,440 --> 00:53:20,839
4779
+ certain number of ones and zeros what's
4780
+
4781
+ 1196
4782
+ 00:53:18,799 --> 00:53:24,079
4783
+ the next thing that you would do
4784
+
4785
+ 1197
4786
+ 00:53:20,839 --> 00:53:24,079
4787
+ here any
4788
+
4789
+ 1198
4790
+ 00:53:24,880 --> 00:53:30,599
4791
+ ideas how do you tell there's like
4792
+
4793
+ 1199
4794
+ 00:53:27,359 --> 00:53:30,599
4795
+ actually a difference between these
4796
+
4797
+ 1200
4798
+ 00:53:36,520 --> 00:53:43,319
4799
+ two between two sets
4800
+
4801
+ 1201
4802
+ 00:53:39,319 --> 00:53:45,920
4803
+ of numbers like one and
4804
+
4805
+ 1202
4806
+ 00:53:43,319 --> 00:53:48,680
4807
+ zero a hint is you probably had to do
4808
+
4809
+ 1203
4810
+ 00:53:45,920 --> 00:53:48,680
4811
+ this for assignment
4812
+
4813
+ 1204
4814
+ 00:53:53,720 --> 00:53:58,520
4815
+ two yeah
4816
+
4817
+ 1205
4818
+ 00:53:56,799 --> 00:54:01,200
4819
+ yeah exactly you you do a significance
4820
+
4821
+ 1206
4822
+ 00:53:58,520 --> 00:54:04,200
4823
+ test between the two and so um what you
4824
+
4825
+ 1207
4826
+ 00:54:01,200 --> 00:54:06,440
4827
+ can then do is you have lots of
4828
+
4829
+ 1208
4830
+ 00:54:04,200 --> 00:54:08,839
4831
+ hypotheses you have lots of significance
4832
+
4833
+ 1209
4834
+ 00:54:06,440 --> 00:54:11,040
4835
+ values you can order them by the
4836
+
4837
+ 1210
4838
+ 00:54:08,839 --> 00:54:13,839
4839
+ significance value and say the most
4840
+
4841
+ 1211
4842
+ 00:54:11,040 --> 00:54:17,559
4843
+ significant or the difference with
4844
+
4845
+ 1212
4846
+ 00:54:13,839 --> 00:54:19,160
4847
+ the like lowest P value between them is
4848
+
4849
+ 1213
4850
+ 00:54:17,559 --> 00:54:20,480
4851
+ the one that's most likely to be an
4852
+
4853
+ 1214
4854
+ 00:54:19,160 --> 00:54:26,520
4855
+ actual difference between the two and
4856
+
4857
+ 1215
4858
+ 00:54:20,480 --> 00:54:29,079
4859
+ you can find um like uh the news in 2007
4860
+
4861
+ 1216
4862
+ 00:54:26,520 --> 00:54:32,520
4863
+ indeed tended to talk about X more than
4864
+
4865
+ 1217
4866
+ 00:54:29,079 --> 00:54:34,559
4867
+ uh than other things so I uh I actually
4868
+
4869
+ 1218
4870
+ 00:54:32,520 --> 00:54:36,079
4871
+ used this in one of my uh one of my
4872
+
4873
+ 1219
4874
+ 00:54:34,559 --> 00:54:39,520
4875
+ unrelated projects where I wanted to
4876
+
4877
+ 1220
4878
+ 00:54:36,079 --> 00:54:42,680
4879
+ find the difference between um language
4880
+
4881
+ 1221
4882
+ 00:54:39,520 --> 00:54:45,640
4883
+ models sentences that language models
4884
+
4885
+ 1222
4886
+ 00:54:42,680 --> 00:54:47,839
4887
+ aligned well with human brain signals in
4888
+
4889
+ 1223
4890
+ 00:54:45,640 --> 00:54:49,760
4891
+ sentences where language models didn't
4892
+
4893
+ 1224
4894
+ 00:54:47,839 --> 00:54:52,559
4895
+ align well with human brain signals so
4896
+
4897
+ 1225
4898
+ 00:54:49,760 --> 00:54:53,799
4899
+ we like we had some data of human brain
4900
+
4901
+ 1226
4902
+ 00:54:52,559 --> 00:54:56,880
4903
+ signals and we had a measure of
4904
+
4905
+ 1227
4906
+ 00:54:53,799 --> 00:54:58,240
4907
+ alignment um on each sentence and it
4908
+
4909
+ 1228
4910
+ 00:54:56,880 --> 00:55:01,799
4911
+ actually found some pretty interesting
4912
+
4913
+ 1229
4914
+ 00:54:58,240 --> 00:55:03,359
4915
+ hypotheses like um uh language models
4916
+
4917
+ 1230
4918
+ 00:55:01,799 --> 00:55:06,200
4919
+ tend to align less well with human brain
4920
+
4921
+ 1231
4922
+ 00:55:03,359 --> 00:55:07,319
4923
+ signals on metaphorical language or a
4924
+
4925
+ 1232
4926
+ 00:55:06,200 --> 00:55:10,599
4927
+ language that had to do with
4928
+
4929
+ 1233
4930
+ 00:55:07,319 --> 00:55:11,799
4931
+ interpersonal relations or um or other
4932
+
4933
+ 1234
4934
+ 00:55:10,599 --> 00:55:15,200
4935
+ things like that and then we actually
4936
+
4937
+ 1235
4938
+ 00:55:11,799 --> 00:55:17,559
4939
+ went in and pursued um you know these to
4940
+
4941
+ 1236
4942
+ 00:55:15,200 --> 00:55:21,000
4943
+ examine them further and uh we didn't
4944
+
4945
+ 1237
4946
+ 00:55:17,559 --> 00:55:22,680
4947
+ entirely rely on this um you know like
4948
+
4949
+ 1238
4950
+ 00:55:21,000 --> 00:55:25,160
4951
+ significance test because I didn't quite
4952
+
4953
+ 1239
4954
+ 00:55:22,680 --> 00:55:26,880
4955
+ trust language models that much to like
4956
+
4957
+ 1240
4958
+ 00:55:25,160 --> 00:55:28,559
4959
+ shape my entire
4960
+
4961
+ 1241
4962
+ 00:55:26,880 --> 00:55:29,880
4963
+ research agenda around them but we came
4964
+
4965
+ 1242
4966
+ 00:55:28,559 --> 00:55:31,720
4967
+ up with other ways to measure it and
4968
+
4969
+ 1243
4970
+ 00:55:29,880 --> 00:55:35,000
4971
+ some of the things checked out some of
4972
+
4973
+ 1244
4974
+ 00:55:31,720 --> 00:55:36,799
4975
+ the things didn't check out so um again
4976
+
4977
+ 1245
4978
+ 00:55:35,000 --> 00:55:38,760
4979
+ I think this general direction of like
4980
+
4981
+ 1246
4982
+ 00:55:36,799 --> 00:55:41,720
4983
+ how can language models help us answer
4984
+
4985
+ 1247
4986
+ 00:55:38,760 --> 00:55:43,760
4987
+ you know uh complex research questions
4988
+
4989
+ 1248
4990
+ 00:55:41,720 --> 00:55:45,480
4991
+ that we wouldn't be able to easily or
4992
+
4993
+ 1249
4994
+ 00:55:43,760 --> 00:55:47,960
4995
+ very efficiently that would require
4996
+
4997
+ 1250
4998
+ 00:55:45,480 --> 00:55:52,200
4999
+ normally humans annotating lots of data
5000
+
5001
+ 1251
5002
+ 00:55:47,960 --> 00:55:56,839
5003
+ is um an interesting topic as
5004
+
5005
+ 1252
5006
+ 00:55:52,200 --> 00:55:56,839
5007
+ well cool um
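A minimal sketch of the classify-then-test loop described in this lecture, assuming each hypothesis has already been phrased as a yes/no question. Here `classify` is a stand-in for the LLM judgment call (e.g. "Does this sentence mention a sports team recruiting a new member?"), and Fisher's exact test is one reasonable choice of significance test for two sets of ones and zeros — the lecture does not name a specific test.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    # Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    #   a = hypothesis judged true in corpus A,  b = judged false in A,
    #   c = hypothesis judged true in corpus B,  d = judged false in B.
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # hypergeometric probability of the table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # sum the probability of every table at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

def rank_hypotheses(hypotheses, corpus_a, corpus_b, classify):
    # classify(hypothesis, sentence) -> bool; in practice an LLM yes/no answer.
    ranked = []
    for h in hypotheses:
        a = sum(classify(h, s) for s in corpus_a)
        c = sum(classify(h, s) for s in corpus_b)
        p = fisher_exact_two_sided(a, len(corpus_a) - a, c, len(corpus_b) - c)
        ranked.append((p, h))
    return sorted(ranked)  # lowest p-value first = most likely real difference
```

With a toy `classify` such as `lambda h, s: "sports" in s`, a hypothesis about sports teams ranks first when corpus A mentions sports far more often than corpus B, matching the "order by significance value" step in the lecture.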
CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.vtt ADDED
@@ -0,0 +1,3757 @@
1
+ WEBVTT
2
+
3
+ 00:00:00.280 --> 00:00:05.120
4
+ so I'd like to go ahead with uh complex
5
+
6
+ 00:00:02.399 --> 00:00:08.719
7
+ reasoning and we've talked a little bit
8
+
9
+ 00:00:05.120 --> 00:00:10.719
10
+ about uh reasoning in language models uh
11
+
12
+ 00:00:08.719 --> 00:00:12.160
13
+ up until now and so I'm going to be
14
+
15
+ 00:00:10.719 --> 00:00:15.280
16
+ talking about stuff that we didn't talk
17
+
18
+ 00:00:12.160 --> 00:00:17.240
19
+ about yet um this might be a little bit
20
+
21
+ 00:00:15.280 --> 00:00:19.199
22
+ short because of that because I'm not
23
+
24
+ 00:00:17.240 --> 00:00:20.640
25
+ talking about like programs because we
26
+
27
+ 00:00:19.199 --> 00:00:22.080
28
+ talked about that in the code generation
29
+
30
+ 00:00:20.640 --> 00:00:24.199
31
+ class and we already talked a little bit
32
+
33
+ 00:00:22.080 --> 00:00:26.320
34
+ about some of the basics here but um you
35
+
36
+ 00:00:24.199 --> 00:00:30.119
37
+ know if we have time at the end I'd be
38
+
39
+ 00:00:26.320 --> 00:00:30.840
40
+ happy to discuss free form also so what
41
+
42
+ 00:00:30.119 --> 00:00:34.320
43
+ is
44
+
45
+ 00:00:30.840 --> 00:00:35.920
46
+ reasoning um the basic idea is using
47
+
48
+ 00:00:34.320 --> 00:00:37.680
49
+ evidence and logic to arrive at
50
+
51
+ 00:00:35.920 --> 00:00:40.200
52
+ conclusions and make
53
+
54
+ 00:00:37.680 --> 00:00:43.760
55
+ judgments
56
+
57
+ 00:00:40.200 --> 00:00:48.039
58
+ and what is it in language models is a
59
+
60
+ 00:00:43.760 --> 00:00:49.399
61
+ little bit um you know less clear uh but
62
+
63
+ 00:00:48.039 --> 00:00:52.680
64
+ if we talk about it kind of like from
65
+
66
+ 00:00:49.399 --> 00:00:56.280
67
+ the philosophical standpoint um there
68
+
69
+ 00:00:52.680 --> 00:00:58.399
70
+ are two varieties of this one is formal
71
+
72
+ 00:00:56.280 --> 00:01:01.680
73
+ uh reasoning and formal reasoning is
74
+
75
+ 00:00:58.399 --> 00:01:04.239
76
+ mostly based on strict truth values so
77
+
78
+ 00:01:01.680 --> 00:01:05.920
79
+ it's kind of like um you can definitely
80
+
81
+ 00:01:04.239 --> 00:01:08.360
82
+ say this is true you can definitely say
83
+
84
+ 00:01:05.920 --> 00:01:11.680
85
+ this is not true
86
+
87
+ 00:01:08.360 --> 00:01:13.799
88
+ and in real life there's very little
89
+
90
+ 00:01:11.680 --> 00:01:15.759
91
+ actual formal reasoning outside of like
92
+
93
+ 00:01:13.799 --> 00:01:17.960
94
+ for example mathematics or maybe you
95
+
96
+ 00:01:15.759 --> 00:01:20.240
97
+ know algorithms computer science and
98
+
99
+ 00:01:17.960 --> 00:01:21.759
100
+ other things like that um and then
101
+
102
+ 00:01:20.240 --> 00:01:23.240
103
+ separately from that we have informal
104
+
105
+ 00:01:21.759 --> 00:01:27.040
106
+ reasoning based on experience and
107
+
108
+ 00:01:23.240 --> 00:01:30.439
109
+ intuition and actually um this is this
110
+
111
+ 00:01:27.040 --> 00:01:32.360
112
+ was uh rather elusive uh until
113
+
114
+ 00:01:30.439 --> 00:01:33.720
115
+ large language models you know people
116
+
117
+ 00:01:32.360 --> 00:01:35.560
118
+ were working on it but it was really
119
+
120
+ 00:01:33.720 --> 00:01:38.119
121
+ hard and this is like one of the big
122
+
123
+ 00:01:35.560 --> 00:01:41.479
124
+ breakthroughs I think of the past few
125
+
126
+ 00:01:38.119 --> 00:01:46.799
127
+ years um I should note that this uh
128
+
129
+ 00:01:41.479 --> 00:01:48.520
130
+ paper here uh Huang and Chang is a kind of
131
+
132
+ 00:01:46.799 --> 00:01:50.119
133
+ review survey paper of reasoning in
134
+
135
+ 00:01:48.520 --> 00:01:51.520
136
+ large language models it's on the
137
+
138
+ 00:01:50.119 --> 00:01:54.719
139
+ references so if you're interested you
140
+
141
+ 00:01:51.520 --> 00:01:57.600
142
+ can take a look at that too um but
143
+
144
+ 00:01:54.719 --> 00:01:59.200
145
+ there's three kinds of reasoning uh
146
+
147
+ 00:01:57.600 --> 00:02:00.840
148
+ there's many kinds of reasoning but
149
+
150
+ 00:01:59.200 --> 00:02:03.280
151
+ there's three kinds of reasoning in
152
+
153
+ 00:02:00.840 --> 00:02:06.240
154
+ particular that I'd like to talk about
155
+
156
+ 00:02:03.280 --> 00:02:08.840
157
+ um from the point of view of today and
158
+
159
+ 00:02:06.240 --> 00:02:10.360
160
+ the first one is uh deductive reasoning
161
+
162
+ 00:02:08.840 --> 00:02:13.080
163
+ and deductive reasoning is basically
164
+
165
+ 00:02:10.360 --> 00:02:16.040
166
+ using logic to go from a premise to a
167
+
168
+ 00:02:13.080 --> 00:02:18.440
169
+ conclusion and this is largely what
170
+
171
+ 00:02:16.040 --> 00:02:19.879
172
+ people not entirely but largely what
173
+
174
+ 00:02:18.440 --> 00:02:22.400
175
+ people talk about when they think about
176
+
177
+ 00:02:19.879 --> 00:02:25.879
178
+ formal reasoning and so basically you
179
+
180
+ 00:02:22.400 --> 00:02:28.640
181
+ have several premises um like all
182
+
183
+ 00:02:25.879 --> 00:02:32.120
184
+ mammals have kidneys and all whales are
185
+
186
+ 00:02:28.640 --> 00:02:35.239
187
+ mammals and then from this uh you can go
188
+
189
+ 00:02:32.120 --> 00:02:35.239
190
+ to all whales have
191
+
192
+ 00:02:35.440 --> 00:02:40.640
193
+ kidneys then separately there's
194
+
195
+ 00:02:38.000 --> 00:02:44.040
196
+ inductive reasoning and inductive
197
+
198
+ 00:02:40.640 --> 00:02:46.040
199
+ reasoning is um from
200
+
201
+ 00:02:44.040 --> 00:02:48.480
202
+ observation uh predict a likely
203
+
204
+ 00:02:46.040 --> 00:02:50.080
205
+ conclusion or predict a likely kind of
206
+
207
+ 00:02:48.480 --> 00:02:53.640
208
+ generalized
209
+
210
+ 00:02:50.080 --> 00:02:55.360
211
+ conclusion um so this is one example uh
212
+
213
+ 00:02:53.640 --> 00:02:56.920
214
+ when we see a creature with wings it is
215
+
216
+ 00:02:55.360 --> 00:02:58.599
217
+ usually a bird we see a creature with
218
+
219
+ 00:02:56.920 --> 00:03:00.400
220
+ wings the creature is likely to be a
221
+
222
+ 00:02:58.599 --> 00:03:02.879
223
+ bird so it's kind of this is kind of
224
+
225
+ 00:03:00.400 --> 00:03:05.319
226
+ like a soft version of deduction another
227
+
228
+ 00:03:02.879 --> 00:03:07.440
229
+ common thing is like every single
230
+
231
+ 00:03:05.319 --> 00:03:10.760
232
+ creature I have seen with wings is a
233
+
234
+ 00:03:07.440 --> 00:03:12.480
235
+ bird and then you can kind of um induce
236
+
237
+ 00:03:10.760 --> 00:03:16.799
238
+ that all
239
+
240
+ 00:03:12.480 --> 00:03:19.159
241
+ uh like all uh creatures with wings are
242
+
243
+ 00:03:16.799 --> 00:03:21.120
244
+ birds but that might not be true it's
245
+
246
+ 00:03:19.159 --> 00:03:23.879
247
+ not necessarily logically entailed but
248
+
249
+ 00:03:21.120 --> 00:03:27.560
250
+ you you make that kind
251
+
252
+ 00:03:23.879 --> 00:03:31.000
253
+ of logical conclusion uh without it
254
+
255
+ 00:03:27.560 --> 00:03:32.840
256
+ being formally uh correct or verifiable
257
+
258
+ 00:03:31.000 --> 00:03:34.720
259
+ and then the final one is abductive
260
+
261
+ 00:03:32.840 --> 00:03:38.000
262
+ reasoning and so this is from an
263
+
264
+ 00:03:34.720 --> 00:03:40.760
265
+ observation we predict the most likely
266
+
267
+ 00:03:38.000 --> 00:03:42.760
268
+ explanation and so for example if we
269
+
270
+ 00:03:40.760 --> 00:03:44.480
271
+ have something like the car cannot start
272
+
273
+ 00:03:42.760 --> 00:03:48.319
274
+ and there is a puddle of liquid under
275
+
276
+ 00:03:44.480 --> 00:03:50.200
277
+ the engine um then we might have a
278
+
279
+ 00:03:48.319 --> 00:03:53.360
280
+ likely explanation that the car has a
281
+
282
+ 00:03:50.200 --> 00:03:55.280
283
+ leak in the radiator so we're going from
284
+
285
+ 00:03:53.360 --> 00:03:58.760
286
+ kind of uh the
287
+
288
+ 00:03:55.280 --> 00:04:00.879
289
+ car you know these these things and then
290
+
291
+ 00:03:58.760 --> 00:04:02.280
292
+ we try to predict the reason why this
293
+
294
+ 00:04:00.879 --> 00:04:05.040
295
+ happens so we're trying to predict like
296
+
297
+ 00:04:02.280 --> 00:04:07.360
298
+ reverse causal links
299
+
300
+ 00:04:05.040 --> 00:04:08.480
301
+ essentially um there's other types of re
302
+
303
+ 00:04:07.360 --> 00:04:10.400
304
+ reasoning that I'm not going to talk
305
+
306
+ 00:04:08.480 --> 00:04:12.159
307
+ about as much like analogical reasoning
308
+
309
+ 00:04:10.400 --> 00:04:14.079
310
+ and and things like this but uh these
311
+
312
+ 00:04:12.159 --> 00:04:15.440
313
+ are the three main ones I want to talk
314
+
315
+ 00:04:14.079 --> 00:04:17.720
316
+ about
317
+
318
+ 00:04:15.440 --> 00:04:22.040
319
+ today uh one thing I should point out is
320
+
321
+ 00:04:17.720 --> 00:04:24.400
322
+ like even in philosophy or you know
323
+
324
+ 00:04:22.040 --> 00:04:26.240
325
+ like even when you read descriptions
326
+
327
+ 00:04:24.400 --> 00:04:29.280
328
+ about these various types of reasoning
329
+
330
+ 00:04:26.240 --> 00:04:31.880
331
+ the types are a little bit vague so um
332
+
333
+ 00:04:29.280 --> 00:04:35.280
334
+ take these as like
335
+
336
+ 00:04:31.880 --> 00:04:37.240
337
+ general not you know General directions
338
+
339
+ 00:04:35.280 --> 00:04:39.400
340
+ and not strict rules because like which
341
+
342
+ 00:04:37.240 --> 00:04:42.120
343
+ falls on under which category also can
344
+
345
+ 00:04:39.400 --> 00:04:44.880
346
+ be a little bit uh you know unclear uh
347
+
348
+ 00:04:42.120 --> 00:04:44.880
349
+ according to various
350
+
351
+ 00:04:45.479 --> 00:04:53.440
352
+ definitions cool um so first before
353
+
354
+ 00:04:49.840 --> 00:04:55.720
355
+ getting into formal reasoning methods
356
+
357
+ 00:04:53.440 --> 00:04:57.759
358
+ are before getting into the bulk of the
359
+
360
+ 00:04:55.720 --> 00:05:00.000
361
+ talk which is going to be about llms I
362
+
363
+ 00:04:57.759 --> 00:05:02.479
364
+ want to talk about some pre-LLM reasoning
365
+
366
+ 00:05:00.000 --> 00:05:03.720
367
+ methods and the first one is kind of
368
+
369
+ 00:05:02.479 --> 00:05:05.160
370
+ like formal reasoning within
371
+
372
+ 00:05:03.720 --> 00:05:07.320
373
+ computational
374
+
375
+ 00:05:05.160 --> 00:05:09.840
376
+ semantics and this has been around for a
377
+
378
+ 00:05:07.320 --> 00:05:12.479
379
+ really long time um it's also kind of
380
+
381
+ 00:05:09.840 --> 00:05:15.000
382
+ what powered the things that worked over
383
+
384
+ 00:05:12.479 --> 00:05:21.039
385
+ knowledge bases and other things like
386
+
387
+ 00:05:15.000 --> 00:05:23.639
388
+ this um and the way it works is it does
389
+
390
+ 00:05:21.039 --> 00:05:27.600
391
+ derivational um
392
+
393
+ 00:05:23.639 --> 00:05:31.800
394
+ reasoning by uh sorry I can't read that
395
+
396
+ 00:05:27.600 --> 00:05:34.720
397
+ in the back um by starting out with
398
+
399
+ 00:05:31.800 --> 00:05:36.080
400
+ certain premises and getting to um
401
+
402
+ 00:05:34.720 --> 00:05:40.000
403
+ getting to final
404
+
405
+ 00:05:36.080 --> 00:05:43.039
406
+ conclusions so there's ways that you can
407
+
408
+ 00:05:40.000 --> 00:05:44.060
409
+ write this I think you might have
410
+
411
+ 00:05:43.039 --> 00:05:47.080
412
+ seen
413
+
414
+ 00:05:44.060 --> 00:05:50.479
415
+ [Music]
416
+
417
+ 00:05:47.080 --> 00:05:54.240
418
+ um you might have seen
419
+
420
+ 00:05:50.479 --> 00:05:58.319
421
+ uh this in uh another like math class or
422
+
423
+ 00:05:54.240 --> 00:06:02.440
424
+ something but uh we we have symbols like
425
+
426
+ 00:05:58.319 --> 00:06:02.440
427
+ all and um
428
+
429
+ 00:06:03.039 --> 00:06:08.280
430
+ exist let's
431
+
432
+ 00:06:04.960 --> 00:06:10.960
433
+ see yeah we have things like all and
434
+
435
+ 00:06:08.280 --> 00:06:13.319
436
+ exist and like all
437
+
438
+ 00:06:10.960 --> 00:06:16.240
439
+ X
440
+
441
+ 00:06:13.319 --> 00:06:20.479
442
+ die means
443
+
444
+ 00:06:16.240 --> 00:06:23.919
445
+ like every Everything has died and this
446
+
447
+ 00:06:20.479 --> 00:06:27.360
448
+ uh implies that Mia and Zed have
449
+
450
+ 00:06:23.919 --> 00:06:30.440
451
+ died um
452
+
453
+ 00:06:27.360 --> 00:06:32.240
454
+ so yeah this is a actually maybe I'll
455
+
456
+ 00:06:30.440 --> 00:06:33.280
457
+ not I'll not go through this one and let
458
+
459
+ 00:06:32.240 --> 00:06:37.639
460
+ me go
461
+
462
+ 00:06:33.280 --> 00:06:40.440
463
+ through um go to this one so like it
464
+
465
+ 00:06:37.639 --> 00:06:40.440
466
+ would be something
467
+
468
+ 00:06:40.639 --> 00:06:45.080
469
+ like uh for
470
+
471
+ 00:06:42.960 --> 00:06:47.480
472
+ all
473
+
474
+ 00:06:45.080 --> 00:06:50.669
475
+ X um
476
+
477
+ 00:06:47.480 --> 00:06:50.669
478
+ [Music]
479
+
480
+ 00:06:52.039 --> 00:07:00.400
481
+ mammal well X
482
+
483
+ 00:06:56.759 --> 00:07:03.520
484
+ implies have
485
+
486
+ 00:07:00.400 --> 00:07:07.560
487
+ X kidney or something like
488
+
489
+ 00:07:03.520 --> 00:07:09.280
490
+ that and then you would have other rules
491
+
492
+ 00:07:07.560 --> 00:07:11.879
493
+ and you can go through uh through
494
+
495
+ 00:07:09.280 --> 00:07:14.440
496
+ derivations and and other things like
497
+
498
+ 00:07:11.879 --> 00:07:16.120
499
+ this
500
+
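The kind of derivation sketched on the board (start from premises, apply universally quantified rules like "for all X, mammal(X) implies has(X, kidney)" until conclusions stop appearing) can be illustrated with a toy forward-chaining loop in Python; the predicates and entity names below are made up for illustration, not from any real system.

```python
# Toy forward chaining: apply "for all X: premise(X) -> conclusion(X)"
# rules to a fact base until no new facts can be derived (a fixed point).
facts = {("mammal", "mia"), ("mammal", "zed")}

# Each rule: if (premise, X) holds for some X, derive (conclusion, X).
rules = [("mammal", "has_kidney"),     # for all X: mammal(X) -> has_kidney(X)
         ("has_kidney", "has_organ")]  # rules can chain across steps

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, entity in list(facts):
            if pred == premise and (conclusion, entity) not in facts:
                facts.add((conclusion, entity))
                changed = True

print(("has_organ", "mia") in facts)  # True -- derived in two chained steps
```

The fixed-point loop is what makes multi-step derivations work: a conclusion from one rule can serve as the premise of another on the next pass.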
501
+ 00:07:14.440 --> 00:07:19.280
502
+ um
503
+
504
+ 00:07:16.120 --> 00:07:21.560
505
+ my favorite reference for this is this
506
+
507
+ 00:07:19.280 --> 00:07:24.599
508
+ Blackburn and Bos book right here it's
509
+
510
+ 00:07:21.560 --> 00:07:26.400
511
+ really well written um and it has like
512
+
513
+ 00:07:24.599 --> 00:07:28.039
514
+ lots of good examples it also explains
515
+
516
+ 00:07:26.400 --> 00:07:30.440
517
+ how you go through derivations and other
518
+
519
+ 00:07:28.039 --> 00:07:34.360
520
+ stuff like that
521
+
522
+ 00:07:30.440 --> 00:07:35.759
523
+ um and actually neural networks can do
524
+
525
+ 00:07:34.360 --> 00:07:37.039
526
+ this variety of reasoning through Chain
527
+
528
+ 00:07:35.759 --> 00:07:38.599
529
+ of Thought and other things I'm going to
530
+
531
+ 00:07:37.039 --> 00:07:40.120
532
+ talk about today but it's a very rough
533
+
534
+ 00:07:38.599 --> 00:07:43.960
535
+ approximation and it doesn't work
536
+
537
+ 00:07:40.120 --> 00:07:47.199
538
+ particularly well for saying like all
539
+
540
+ 00:07:43.960 --> 00:07:51.240
541
+ you know all people
542
+
543
+ 00:07:47.199 --> 00:07:53.599
544
+ are of a uh like things that apply to
545
+
546
+ 00:07:51.240 --> 00:07:57.240
547
+ all people or things that apply to sets
548
+
549
+ 00:07:53.599 --> 00:08:00.039
550
+ or other things like this so within
551
+
552
+ 00:07:57.240 --> 00:08:02.879
553
+ Prolog you could
554
+
555
+ 00:08:00.039 --> 00:08:06.520
556
+ take a knowledge base and ask the
557
+
558
+ 00:08:02.879 --> 00:08:11.960
559
+ knowledge base like do
560
+
561
+ 00:08:06.520 --> 00:08:12.800
562
+ all people who work at CMU as professors
563
+
564
+ 00:08:11.960 --> 00:08:15.840
565
+ have a
566
+
567
+ 00:08:12.800 --> 00:08:18.080
568
+ PhD and you could like actually examine
569
+
570
+ 00:08:15.840 --> 00:08:20.639
571
+ that based on the knowledge base uh
572
+
573
+ 00:08:18.080 --> 00:08:23.520
574
+ whereas even if you had
575
+
576
+ 00:08:20.639 --> 00:08:25.800
577
+ a language model that had access to
578
+
579
+ 00:08:23.520 --> 00:08:27.280
580
+ everybody's CVs it wouldn't necessarily
581
+
582
+ 00:08:25.800 --> 00:08:28.599
583
+ be able to answer that question and it
584
+
585
+ 00:08:27.280 --> 00:08:31.440
586
+ especially wouldn't be able to answer
587
+
588
+ 00:08:28.599 --> 00:08:31.440
589
+ that question if you were
590
+
591
+ 00:08:32.320 --> 00:08:37.760
592
+ um it wouldn't be able to answer that
593
+
594
+ 00:08:34.640 --> 00:08:42.880
595
+ question if there were like multiple
596
+
597
+ 00:08:37.760 --> 00:08:46.480
598
+ steps so did all people who are working
599
+
600
+ 00:08:42.880 --> 00:08:50.959
601
+ at CMU get their PhD after
602
+
603
+ 00:08:46.480 --> 00:08:52.959
604
+ 1990 or something like that um so and
605
+
606
+ 00:08:50.959 --> 00:08:54.680
607
+ the answer to that is obviously no but
608
+
609
+ 00:08:52.959 --> 00:08:56.519
610
+ uh this would be able to find the
611
+
612
+ 00:08:54.680 --> 00:08:58.120
613
+ counter-evidence to that whereas LLMs
614
+
615
+ 00:08:56.519 --> 00:09:00.000
616
+ would not be guaranteed to be able to do
617
+
618
+ 00:08:58.120 --> 00:09:02.800
619
+ that
620
+
621
+ 00:09:00.000 --> 00:09:04.279
622
+ so I I think this is really uh like a
623
+
624
+ 00:09:02.800 --> 00:09:06.760
625
+ nice thing to know but there's a couple
626
+
627
+ 00:09:04.279 --> 00:09:09.600
628
+ problems with it the first thing is this
629
+
630
+ 00:09:06.760 --> 00:09:12.519
631
+ really only traffics in like strictly
632
+
633
+ 00:09:09.600 --> 00:09:17.880
634
+ true or strictly false statements um and
635
+
636
+ 00:09:12.519 --> 00:09:20.560
637
+ that's a really big issue um so like if
638
+
639
+ 00:09:17.880 --> 00:09:22.959
640
+ anything's soft you start uh this sort
641
+
642
+ 00:09:20.560 --> 00:09:24.320
643
+ of formal reasoning starts breaking down
644
+
645
+ 00:09:22.959 --> 00:09:25.880
646
+ the second thing which actually is a
647
+
648
+ 00:09:24.320 --> 00:09:28.959
649
+ really big problem is once you start
650
+
651
+ 00:09:25.880 --> 00:09:30.600
652
+ dealing with more complex things you
653
+
654
+ 00:09:28.959 --> 00:09:32.560
655
+ don't realize it but there's always like
656
+
657
+ 00:09:30.600 --> 00:09:34.560
658
+ exceptions and exceptions to exceptions
659
+
660
+ 00:09:32.560 --> 00:09:36.240
661
+ and other things like that and actually
662
+
663
+ 00:09:34.560 --> 00:09:38.320
664
+ becomes very computationally expensive
665
+
666
+ 00:09:36.240 --> 00:09:41.640
667
+ to prove anything that's kind of like
668
+
669
+ 00:09:38.320 --> 00:09:43.279
670
+ non-trivial um and so because of that uh
671
+
672
+ 00:09:41.640 --> 00:09:45.839
673
+ I'm not actually going to cover it in
674
+
675
+ 00:09:43.279 --> 00:09:47.880
676
+ the lecture today but recently there are
677
+
678
+ 00:09:45.839 --> 00:09:50.880
679
+ um kind of search algorithms through
680
+
681
+ 00:09:47.880 --> 00:09:54.279
682
+ proof spaces that use uh like neural
683
+
684
+ 00:09:50.880 --> 00:09:55.880
685
+ models to speed up the search by
686
+
687
+ 00:09:54.279 --> 00:09:58.120
688
+ picking the best and most promising
689
+
690
+ 00:09:55.880 --> 00:10:00.800
691
+ hypotheses and uh for example Sean
692
+
693
+ 00:09:58.120 --> 00:10:02.800
694
+ Welleck uh here at CMU is working on that
695
+
696
+ 00:10:00.800 --> 00:10:04.800
697
+ for neural theorem proving where you
698
+
699
+ 00:10:02.800 --> 00:10:05.959
700
+ have uh like mathematical theorem
701
+
702
+ 00:10:04.800 --> 00:10:08.079
703
+ proving and then you use a neural
704
+
705
+ 00:10:05.959 --> 00:10:13.120
706
+ network to pick the best uh paths
707
+
708
+ 00:10:08.079 --> 00:10:14.880
709
+ through logical uh operations so um
710
+
711
+ 00:10:13.120 --> 00:10:19.279
712
+ that's kind of a combination of the more
713
+
714
+ 00:10:14.880 --> 00:10:22.920
715
+ classical and uh modern
716
+
717
+ 00:10:19.279 --> 00:10:26.240
718
+ methods then another thing that's useful
719
+
720
+ 00:10:22.920 --> 00:10:28.079
721
+ to talk about I think this isn't very
722
+
723
+ 00:10:26.240 --> 00:10:31.640
724
+ popular right now but I think it might
725
+
726
+ 00:10:28.079 --> 00:10:34.360
727
+ become more popular uh in the future
728
+
729
+ 00:10:31.640 --> 00:10:36.120
730
+ as we start hitting the limits of uh you
731
+
732
+ 00:10:34.360 --> 00:10:38.560
733
+ know what we can fit into long context
734
+
735
+ 00:10:36.120 --> 00:10:40.040
736
+ windows uh for neural models and stuff
737
+
738
+ 00:10:38.560 --> 00:10:42.600
739
+ like this is memory
740
+
741
+ 00:10:40.040 --> 00:10:48.600
742
+ networks and basically the way that
743
+
744
+ 00:10:42.600 --> 00:10:50.639
745
+ memory networks work is they have write
746
+
747
+ 00:10:48.600 --> 00:10:51.399
748
+ they have the ability to write and read
749
+
750
+ 00:10:50.639 --> 00:10:55.639
751
+ from
752
+
753
+ 00:10:51.399 --> 00:10:57.360
754
+ memory and so this figure is a little
755
+
756
+ 00:10:55.639 --> 00:11:00.440
757
+ bit complex here but
758
+
759
+ 00:10:57.360 --> 00:11:02.880
760
+ basically you have a query and then you
761
+
762
+ 00:11:00.440 --> 00:11:04.560
763
+ get the embedding of the query um you
764
+
765
+ 00:11:02.880 --> 00:11:06.760
766
+ take the inner product you get the
767
+
768
+ 00:11:04.560 --> 00:11:09.720
769
+ softmax of the inner product so this looks
770
+
771
+ 00:11:06.760 --> 00:11:11.040
772
+ like attention you look up embeddings
773
+
774
+ 00:11:09.720 --> 00:11:12.839
775
+ and you take the weighted sum of the
776
+
777
+ 00:11:11.040 --> 00:11:14.560
778
+ embeddings and you get the like summary
779
+
780
+ 00:11:12.839 --> 00:11:17.680
781
+ of the memory so this is basically
782
+
783
+ 00:11:14.560 --> 00:11:20.320
784
+ attention over a big memory
785
+
786
+ 00:11:17.680 --> 00:11:22.120
787
+ base but then uh memory networks also
788
+
789
+ 00:11:20.320 --> 00:11:24.000
790
+ have the ability to go in and update the
791
+
792
+ 00:11:22.120 --> 00:11:26.639
793
+ memory so they also have write
794
+
795
+ 00:11:24.000 --> 00:11:30.360
796
+ operations so you can read and write
797
+
798
+ 00:11:26.639 --> 00:11:34.320
799
+ from uh from the memory
800
+
801
+ 00:11:30.360 --> 00:11:36.279
802
+ base and so the reason why I say this
803
+
804
+ 00:11:34.320 --> 00:11:40.480
805
+ might become more popular is like one of
806
+
807
+ 00:11:36.279 --> 00:11:42.200
808
+ the big issues with large language
809
+
810
+ 00:11:40.480 --> 00:11:45.320
811
+ models nowadays is they don't get like
812
+
813
+ 00:11:42.200 --> 00:11:47.320
814
+ to continually update their memory um
815
+
816
+ 00:11:45.320 --> 00:11:50.279
817
+ and like one way you can do that is you
818
+
819
+ 00:11:47.320 --> 00:11:52.160
820
+ can just add text to the memory but
821
+
822
+ 00:11:50.279 --> 00:11:54.000
823
+ there are certain limits to that right
824
+
825
+ 00:11:52.160 --> 00:11:56.360
826
+ uh you know text isn't necessarily the
827
+
828
+ 00:11:54.000 --> 00:11:58.959
829
+ best way to encode all of the things
830
+
831
+ 00:11:56.360 --> 00:12:01.880
832
+ that you've seen in the past so I I feel
833
+
834
+ 00:11:58.959 --> 00:12:03.360
835
+ like this kind of architecture might be
836
+
837
+ 00:12:01.880 --> 00:12:04.920
838
+ um how to pin these sorts of
839
+
840
+ 00:12:03.360 --> 00:12:06.480
841
+ architectures onto language models might
842
+
843
+ 00:12:04.920 --> 00:12:08.639
844
+ be an interesting research direction for
845
+
846
+ 00:12:06.480 --> 00:12:08.639
847
+ the
848
+
849
+ 00:12:08.680 --> 00:12:15.360
850
+ future um another thing which I am not
851
+
852
+ 00:12:12.600 --> 00:12:16.720
853
+ going to talk about very much uh but
854
+
855
+ 00:12:15.360 --> 00:12:20.560
856
+ because we kind of already talked about
857
+
858
+ 00:12:16.720 --> 00:12:23.560
859
+ it in the code Generation Um area but
860
+
861
+ 00:12:20.560 --> 00:12:26.959
862
+ it's actually been around for a while is
863
+
864
+ 00:12:23.560 --> 00:12:30.600
865
+ solving questions with sort of symbolic
866
+
867
+ 00:12:26.959 --> 00:12:36.480
868
+ reasoning and the way it works
869
+
870
+ 00:12:30.600 --> 00:12:41.320
871
+ is for example you would have a
872
+
873
+ 00:12:36.480 --> 00:12:43.639
874
+ um you would have a text here and based
875
+
876
+ 00:12:41.320 --> 00:12:47.440
877
+ on the text you can run these sort of
878
+
879
+ 00:12:43.639 --> 00:12:50.440
880
+ symbolic operations like find and filter
881
+
882
+ 00:12:47.440 --> 00:12:52.720
883
+ and find the max number and relocate and
884
+
885
+ 00:12:50.440 --> 00:12:54.480
886
+ other things like this and this
887
+
888
+ 00:12:52.720 --> 00:12:58.040
889
+ explicitly
890
+
891
+ 00:12:54.480 --> 00:12:59.880
892
+ manipulates uh kind of the attention and
893
+
894
+ 00:12:58.040 --> 00:13:02.519
895
+ the um
896
+
897
+ 00:12:59.880 --> 00:13:03.839
898
+ you can do things like filtering down to
899
+
900
+ 00:13:02.519 --> 00:13:08.600
901
+ find the
902
+
903
+ 00:13:03.839 --> 00:13:11.040
904
+ most uh like highest largest number for
905
+
906
+ 00:13:08.600 --> 00:13:12.800
907
+ example or other things like this and
908
+
909
+ 00:13:11.040 --> 00:13:14.160
910
+ this is kind of interesting because like
911
+
912
+ 00:13:12.800 --> 00:13:17.240
913
+ some of the things that neural networks
914
+
915
+ 00:13:14.160 --> 00:13:20.360
916
+ are bad at are like finding the largest
917
+
918
+ 00:13:17.240 --> 00:13:21.600
919
+ number in a big data set or um finding
920
+
921
+ 00:13:20.360 --> 00:13:23.360
922
+ all of the things where something
923
+
924
+ 00:13:21.600 --> 00:13:26.240
925
+ applies and throwing out all of the
926
+
927
+ 00:13:23.360 --> 00:13:27.959
928
+ things where something doesn't apply so
929
+
930
+ 00:13:26.240 --> 00:13:29.560
931
+ again this isn't used super widely in
932
+
933
+ 00:13:27.959 --> 00:13:31.959
934
+ large language models right now because
935
+
936
+ 00:13:29.560 --> 00:13:33.920
937
+ I feel like um people have been focusing
938
+
939
+ 00:13:31.959 --> 00:13:36.440
940
+ on prompting
941
+
942
+ 00:13:33.920 --> 00:13:38.880
943
+ techniques uh in order to do this sort
944
+
945
+ 00:13:36.440 --> 00:13:41.199
946
+ of reasoning but I think this is another
947
+
948
+ 00:13:38.880 --> 00:13:43.320
949
+ thing that's worth thinking about taking
950
+
951
+ 00:13:41.199 --> 00:13:45.079
952
+ another close look at and seeing if
953
+
954
+ 00:13:43.320 --> 00:13:47.440
955
+ there are ways to incorporate it with
956
+
957
+ 00:13:45.079 --> 00:13:49.320
958
+ the current models because like
959
+
960
+ 00:13:47.440 --> 00:13:50.720
961
+ basically what I wanted to say is like
962
+
963
+ 00:13:49.320 --> 00:13:52.279
964
+ all of the things that I decided to
965
+
966
+ 00:13:50.720 --> 00:13:54.560
967
+ introduce here in this section are
968
+
969
+ 00:13:52.279 --> 00:13:57.600
970
+ things that current models are still not
971
+
972
+ 00:13:54.560 --> 00:14:00.800
973
+ particularly good at like reasoning taking
974
+
975
+ 00:13:57.600 --> 00:14:03.079
976
+ many steps over sets of
977
+
978
+ 00:14:00.800 --> 00:14:05.079
979
+ inputs um reading and writing from
980
+
981
+ 00:14:03.079 --> 00:14:09.839
982
+ memory so that you can remember things
983
+
984
+ 00:14:05.079 --> 00:14:11.720
985
+ over long periods and also um filtering
986
+
987
+ 00:14:09.839 --> 00:14:13.399
988
+ down large pieces of text into smaller
989
+
990
+ 00:14:11.720 --> 00:14:16.040
991
+ pieces of text to find relevant
992
+
993
+ 00:14:13.399 --> 00:14:17.560
994
+ information so um if any of those things
995
+
996
+ 00:14:16.040 --> 00:14:19.880
997
+ sound interesting you can take a look at
998
+
999
+ 00:14:17.560 --> 00:14:22.800
1000
+ this but um after this I'd like to go
1001
+
1002
+ 00:14:19.880 --> 00:14:24.399
1003
+ kind of into the you know main event
1004
+
1005
+ 00:14:22.800 --> 00:14:27.759
1006
+ where I talk about the stuff that people
1007
+
1008
+ 00:14:24.399 --> 00:14:31.040
1009
+ are actually using a lot nowadays um any
1010
+
1011
+ 00:14:27.759 --> 00:14:31.040
1012
+ questions about these three
1013
+
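The memory-network read step described above (embed the query, take inner products against the memory, softmax, then a weighted sum of the memory values) can be sketched in NumPy; the sizes and the append-style "write" here are illustrative assumptions, not the exact architecture from the memory networks paper.

```python
import numpy as np

# Toy memory-network read: score memory slots against the query embedding,
# softmax the scores, and return the weighted sum of memory values.
rng = np.random.default_rng(0)
memory_keys = rng.normal(size=(5, 8))    # 5 memory slots, embedding dim 8
memory_vals = rng.normal(size=(5, 8))
query = rng.normal(size=8)

scores = memory_keys @ query                      # inner products
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over slots
summary = weights @ memory_vals                   # weighted sum = "read"

# One simple form of a "write" operation: append a new slot to the memory.
memory_keys = np.vstack([memory_keys, query])
memory_vals = np.vstack([memory_vals, summary])
print(summary.shape, memory_keys.shape)  # (8,) (6, 8)
```

So the read is exactly attention over a big memory base, and the write is whatever update rule you choose for the slots (appending is the simplest possible choice here).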
1014
+ 00:14:33.000 --> 00:14:39.120
1015
+ okay cool um so now I'd like to go into
1016
+
1017
+ 00:14:36.399 --> 00:14:40.639
1018
+ Chain of Thought and variants and I
1019
+
1020
+ 00:14:39.120 --> 00:14:42.279
1021
+ actually have already talked about Chain
1022
+
1023
+ 00:14:40.639 --> 00:14:44.199
1024
+ of Thought in fact we've mentioned it a
1025
+
1026
+ 00:14:42.279 --> 00:14:47.720
1027
+ couple times um but just you know to
1028
+
1029
+ 00:14:44.199 --> 00:14:49.399
1030
+ remind everybody the basic idea is um
1031
+
1032
+ 00:14:47.720 --> 00:14:52.880
1033
+ compared to standard prompting where we
1034
+
1035
+ 00:14:49.399 --> 00:14:55.519
1036
+ have like a question um and an answer in
1037
+
1038
+ 00:14:52.880 --> 00:14:58.480
1039
+ Chain of Thought we have a question and
1040
+
1041
+ 00:14:55.519 --> 00:15:01.040
1042
+ then we have a derivation for the
1043
+
1044
+ 00:14:58.480 --> 00:15:02.440
1045
+ questions so like uh Roger started with
1046
+
1047
+ 00:15:01.040 --> 00:15:06.120
1048
+ five
1049
+
1050
+ 00:15:02.440 --> 00:15:09.040
1051
+ balls two can uh five balls two cans of
1052
+
1053
+ 00:15:06.120 --> 00:15:13.839
1054
+ three tennis balls each is six tennis balls 5
1055
+
1056
+ 00:15:09.040 --> 00:15:15.639
1057
+ plus 6 equals 11 the answer is 11 so um you
1058
+
1059
+ 00:15:13.839 --> 00:15:17.519
1060
+ add this to the prompt and by adding
1061
+
1062
+ 00:15:15.639 --> 00:15:19.240
1063
+ this to the prompt you get the model to
1064
+
1065
+ 00:15:17.519 --> 00:15:22.600
1066
+ uh also do these derivations at test
1067
+
1068
+ 00:15:19.240 --> 00:15:25.199
1069
+ time and this greatly improves some
1070
+
1071
+ 00:15:22.600 --> 00:15:27.759
1072
+ tasks it improves tasks where we can't
1073
+
1074
+ 00:15:25.199 --> 00:15:30.040
1075
+ like immediately predict the answer
1076
+
1077
+ 00:15:27.759 --> 00:15:32.000
1078
+ directly and then I also previously
1079
+
1080
+ 00:15:30.040 --> 00:15:33.440
1081
+ talked about zero shot Chain of Thought
1082
+
1083
+ 00:15:32.000 --> 00:15:35.880
1084
+ uh reasoning where we just prompt the
1085
+
1086
+ 00:15:33.440 --> 00:15:38.480
1087
+ model to with something like let's think
1088
+
1089
+ 00:15:35.880 --> 00:15:42.680
1090
+ step by step and then the model becomes
1091
+
1092
+ 00:15:38.480 --> 00:15:46.240
1093
+ able to do this uh Chain of Thought
1094
+
1095
+ 00:15:42.680 --> 00:15:48.279
1096
+ reasoning okay so that was review and
1097
+
1098
+ 00:15:46.240 --> 00:15:51.680
1099
+ now I'd like to talk about some of like
1100
+
1101
+ 00:15:48.279 --> 00:15:53.560
1102
+ more advanced methods that people use
1103
+
1104
+ 00:15:51.680 --> 00:15:55.079
1105
+ for uh reasoning as
1106
+
1107
+ 00:15:53.560 --> 00:15:58.040
1108
+ well
1109
+
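The two prompting styles just reviewed (few-shot Chain of Thought with a worked exemplar, and zero-shot Chain of Thought with "let's think step by step") can be sketched as plain prompt templates; the exemplar text follows the tennis-ball example, and the actual LLM call is omitted since this only shows the prompt construction.

```python
# Prompt scaffolds for few-shot and zero-shot Chain of Thought.
exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot(question):
    # The exemplar's written-out derivation nudges the model to derive too.
    return exemplar + f"Q: {question}\nA:"

def zero_shot_cot(question):
    # No exemplar: just prompt the model to reason before answering.
    return f"Q: {question}\nA: Let's think step by step."

prompt = few_shot_cot("A bakery sells 4 boxes of 6 muffins. How many muffins?")
print("The answer is 11" in prompt)  # True: the exemplar shows a derivation
```

Both templates work for the same reason: the generated derivation tokens give the model intermediate state to condition on before it commits to an answer.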
1110
+ 00:15:55.079 --> 00:15:59.959
1111
+ and this is by no means an exhaustive
1112
+
1113
+ 00:15:58.040 --> 00:16:01.800
1114
+ list they're just some of the ones that I
1115
+
1116
+ 00:15:59.959 --> 00:16:03.319
1117
+ found interesting so if you know other
1118
+
1119
+ 00:16:01.800 --> 00:16:04.839
1120
+ ones that you'd like to talk about or
1121
+
1122
+ 00:16:03.319 --> 00:16:07.720
1123
+ introduce to the class or something like
1124
+
1125
+ 00:16:04.839 --> 00:16:10.600
1126
+ that I'd also be happy to uh to hear uh
1127
+
1128
+ 00:16:07.720 --> 00:16:14.120
1129
+ which ones you like or have heard about
1130
+
1131
+ 00:16:10.600 --> 00:16:16.920
1132
+ but the first one is um self-ask and one
1133
+
1134
+ 00:16:14.120 --> 00:16:20.959
1135
+ of the issues with large language models
1136
+
1137
+ 00:16:16.920 --> 00:16:23.240
1138
+ nowadays is that they're not uh very
1139
+
1140
+ 00:16:20.959 --> 00:16:25.519
1141
+ good at asking follow-up questions or
1142
+
1143
+ 00:16:23.240 --> 00:16:27.839
1144
+ maybe not that they're not very good at
1145
+
1146
+ 00:16:25.519 --> 00:16:31.160
1147
+ it but just they're not trained to do it
1148
+
1149
+ 00:16:27.839 --> 00:16:32.880
1150
+ so like if you play around with ChatGPT
1151
+
1152
+ 00:16:31.160 --> 00:16:35.240
1153
+ I have never had ChatGPT ask me a
1154
+
1155
+ 00:16:32.880 --> 00:16:36.680
1156
+ follow-up question I don't think it's
1157
+
1158
+ 00:16:35.240 --> 00:16:38.319
1159
+ like it's not because large language
1160
+
1161
+ 00:16:36.680 --> 00:16:41.920
1162
+ models aren't capable of doing it it's
1163
+
1164
+ 00:16:38.319 --> 00:16:43.519
1165
+ just that they like OpenAI must
1166
+
1167
+ 00:16:41.920 --> 00:16:45.000
1168
+ think it's a bad user experience to have
1169
+
1170
+ 00:16:43.519 --> 00:16:47.680
1171
+ a language model that asks you follow up
1172
+
1173
+ 00:16:45.000 --> 00:16:51.319
1174
+ questions that's the only like you know
1175
+
1176
+ 00:16:47.680 --> 00:16:53.160
1177
+ reason I can think about it um but
1178
+
1179
+ 00:16:51.319 --> 00:16:56.199
1180
+ basically what self ask does is it
1181
+
1182
+ 00:16:53.160 --> 00:17:00.000
1183
+ explicitly prompts the model to
1184
+
1185
+ 00:16:56.199 --> 00:17:02.360
1186
+ ask if there are followup questions so
1187
+
1188
+ 00:17:00.000 --> 00:17:05.799
1189
+ here's an example on the left where the
1190
+
1191
+ 00:17:02.360 --> 00:17:11.240
1192
+ question is uh who lived longer Theodore
1193
+
1194
+ 00:17:05.799 --> 00:17:12.640
1195
+ Haecker or Harry Vaughan uh Watkins and
1196
+
1197
+ 00:17:11.240 --> 00:17:15.240
1198
+ basically it says are follow-up
1199
+
1200
+ 00:17:12.640 --> 00:17:17.679
1201
+ questions needed here yes and then the
1202
+
1203
+ 00:17:15.240 --> 00:17:20.319
1204
+ followup is how old was Theodore Haecker
1205
+
1206
+ 00:17:17.679 --> 00:17:23.640
1207
+ when he died and the intermediate answer
1208
+
1209
+ 00:17:20.319 --> 00:17:26.959
1210
+ is Theodore Haecker was 65 years old how
1211
+
1212
+ 00:17:23.640 --> 00:17:29.000
1213
+ old was Harry Vaughan Watkins um Harry Vaughan
1214
+
1215
+ 00:17:26.959 --> 00:17:32.400
1216
+ Watkins was 69 years old but so the
1217
+
1218
+ 00:17:29.000 --> 00:17:35.240
1219
+ final answer is Harry Vaughan Watkins and um
1220
+
1221
+ 00:17:32.400 --> 00:17:37.520
1222
+ in this particular paper this is just
1223
+
1224
+ 00:17:35.240 --> 00:17:42.520
1225
+ like another variety of Chain of Thought
1226
+
1227
+ 00:17:37.520 --> 00:17:44.720
1228
+ it's like not using it to incorporate
1229
+
1230
+ 00:17:42.520 --> 00:17:47.400
1231
+ any external information or anything
1232
+
1233
+ 00:17:44.720 --> 00:17:48.720
1234
+ like that it's just trying to more
1235
+
1236
+ 00:17:47.400 --> 00:17:52.360
1237
+ directly
1238
+
1239
+ 00:17:48.720 --> 00:17:53.840
1240
+ elicit um information from the model um
1241
+
1242
+ 00:17:52.360 --> 00:17:55.360
1243
+ but nonetheless they demonstrate that
1244
+
1245
+ 00:17:53.840 --> 00:17:57.760
1246
+ this is useful and then there's also
1247
+
1248
+ 00:17:55.360 --> 00:18:00.120
1249
+ other methods that actually try to look
1250
+
1251
+ 00:17:57.760 --> 00:18:02.240
1252
+ up information explicitly to answer these
1253
+
1254
+ 00:18:00.120 --> 00:18:05.280
1255
+ questions um which are even more
1256
+
1257
+ 00:18:02.240 --> 00:18:05.280
1258
+ powerful than what we have
1259
+
1260
+ 00:18:05.720 --> 00:18:13.200
1261
+ here um so that's what I'd like to
1262
+
1263
+ 00:18:09.960 --> 00:18:16.919
1264
+ introduce next and basically the idea um
1265
+
1266
+ 00:18:13.200 --> 00:18:19.760
1267
+ here is this is a method that instead of
1268
+
1269
+ 00:18:16.919 --> 00:18:22.880
1270
+ just doing Chain of Thought it retrieves
1271
+
1272
+ 00:18:19.760 --> 00:18:25.480
1273
+ relevant sentences when you're doing the
1274
+
1275
+ 00:18:22.880 --> 00:18:28.919
1276
+ Chain of Thought So like
1277
+
1278
+ 00:18:25.480 --> 00:18:30.880
1279
+ here um
1280
+
1281
+ 00:18:28.919 --> 00:18:32.960
1282
+ uh we have the followup are follow-ups
1283
+
1284
+ 00:18:30.880 --> 00:18:35.159
1285
+ needed here yes and then this is the
1286
+
1287
+ 00:18:32.960 --> 00:18:36.880
1288
+ followup but if the model itself doesn't
1289
+
1290
+ 00:18:35.159 --> 00:18:39.440
1291
+ know how old somebody was when they died
1292
+
1293
+ 00:18:36.880 --> 00:18:40.760
1294
+ then it won't be able to answer this so
1295
+
1296
+ 00:18:39.440 --> 00:18:44.400
1297
+ what they do in order to make this
1298
+
1299
+ 00:18:40.760 --> 00:18:47.200
1300
+ happen is they um do BM25-based
1301
+
1302
+ 00:18:44.400 --> 00:18:49.520
1303
+ retrieval over Wikipedia for each of the
1304
+
1305
+ 00:18:47.200 --> 00:18:51.760
1306
+ Chain of Thought uh answers and then
1307
+
1308
+ 00:18:49.520 --> 00:18:53.400
1309
+ they use the retrieved uh I think it's
1310
+
1311
+ 00:18:51.760 --> 00:18:56.039
1312
+ like 10 documents or something like that
1313
+
1314
+ 00:18:53.400 --> 00:18:59.640
1315
+ multiple retriev documents to prompt the
1316
+
1317
+ 00:18:56.039 --> 00:19:03.200
1318
+ model um to basically follow up with its
1319
+
1320
+ 00:18:59.640 --> 00:19:05.440
1321
+ Chain of Thought so this is another uh
1322
+
1323
+ 00:19:03.200 --> 00:19:07.880
1324
+ variety of things that you can do in
1325
+
1326
+ 00:19:05.440 --> 00:19:07.880
1327
+ order to
1328
+
1329
+ 00:19:10.720 --> 00:19:16.120
1330
+ improve
1331
+
1332
+ 00:19:13.120 --> 00:19:16.120
1333
+ cool
1334
+
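A rough scaffold for this retrieval-augmented self-ask loop might look like the following; the `llm` and `retrieve` functions are hypothetical stubs standing in for a real model and a BM25 index over Wikipedia, and the tiny hard-coded corpus just mirrors the lecture's example.

```python
# Sketch of self-ask with retrieval: after each follow-up question the
# scaffold retrieves evidence and appends it to the prompt as the
# "intermediate answer" before asking the model to continue.
def retrieve(question):
    corpus = {"Haecker": "Theodor Haecker was 65 years old when he died.",
              "Watkins": "Harry Vaughan Watkins was 69 years old when he died."}
    return [doc for key, doc in corpus.items() if key in question]

def llm(prompt):
    # Stub: a real model would generate the next follow-up or the answer.
    if "was 65" not in prompt:
        return "Follow up: How old was Theodor Haecker when he died?"
    if "was 69" not in prompt:
        return "Follow up: How old was Harry Vaughan Watkins when he died?"
    return "So the final answer is: Harry Vaughan Watkins"

prompt = "Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?"
for _ in range(5):
    step = llm(prompt)
    prompt += "\n" + step
    if step.startswith("So the final answer is"):
        break
    # Retrieval-augmented step: feed the evidence back into the prompt.
    prompt += "\nIntermediate answer: " + " ".join(retrieve(step))

print(prompt.splitlines()[-1])  # the final-answer line
```

The key design choice is that retrieval happens once per reasoning step rather than once per question, so facts the model lacks can enter the chain exactly where they are needed.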
1335
+ 00:19:16.400 --> 00:19:21.440
1336
+ um then another one that I'd like to
1337
+
1338
+ 00:19:18.960 --> 00:19:22.559
1339
+ talk about is uh multilingual Chain of
1340
+
1341
+ 00:19:21.440 --> 00:19:24.039
1342
+ Thought reasoning I'm going to be
1343
+
1344
+ 00:19:22.559 --> 00:19:28.000
1345
+ talking more about multilingual things
1346
+
1347
+ 00:19:24.039 --> 00:19:29.960
1348
+ in the multilingual class in a week but
1349
+
1350
+ 00:19:28.000 --> 00:19:33.559
1351
+ the interesting thing about multilingual
1352
+
1353
+ 00:19:29.960 --> 00:19:37.200
1354
+ Chain of Thought is we have a design
1355
+
1356
+ 00:19:33.559 --> 00:19:41.280
1357
+ decision right like do we want to just
1358
+
1359
+ 00:19:37.200 --> 00:19:44.000
1360
+ answer questions in the language that we
1361
+
1362
+ 00:19:41.280 --> 00:19:46.679
1363
+ are asking questions in like so if I ask
1364
+
1365
+ 00:19:44.000 --> 00:19:48.080
1366
+ a question in Japanese am I going to
1367
+
1368
+ 00:19:46.679 --> 00:19:49.840
1369
+ have it go through the whole chain of
1370
+
1371
+ 00:19:48.080 --> 00:19:52.720
1372
+ thought process in Japanese and then
1373
+
1374
+ 00:19:49.840 --> 00:19:55.840
1375
+ answer my question in Japanese or do I
1376
+
1377
+ 00:19:52.720 --> 00:19:57.120
1378
+ want it to uh somehow go through English
1379
+
1380
+ 00:19:55.840 --> 00:19:59.159
1381
+ because the model has been trained on
1382
+
1383
+ 00:19:57.120 --> 00:20:00.640
1384
+ lots of English and it has better
1385
+
1386
+ 00:19:59.159 --> 00:20:02.120
1387
+ it's like a better way to take advantage
1388
+
1389
+ 00:20:00.640 --> 00:20:04.840
1390
+ of its reasoning
1391
+
1392
+ 00:20:02.120 --> 00:20:07.200
1393
+ capabilities does anyone have an idea
1394
+
1395
+ 00:20:04.840 --> 00:20:07.200
1396
+ about the
1397
+
1398
+ 00:20:07.960 --> 00:20:12.480
1399
+ answer who thinks it's better to do it
1400
+
1401
+ 00:20:10.240 --> 00:20:15.360
1402
+ entirely in the the language that the
1403
+
1404
+ 00:20:12.480 --> 00:20:15.360
1405
+ question is asked
1406
+
1407
+ 00:20:15.640 --> 00:20:20.080
1408
+ in and who thinks it's better to do
1409
+
1410
+ 00:20:17.919 --> 00:20:23.000
1411
+ something in
1412
+
1413
+ 00:20:20.080 --> 00:20:28.200
1414
+ English
1415
+
1416
+ 00:20:23.000 --> 00:20:29.159
1417
+ okay so um basically the answer is do it
1418
+
1419
+ 00:20:28.200 --> 00:20:31.440
1420
+ in English
1421
+
1422
+ 00:20:29.159 --> 00:20:34.120
1423
+ um and maybe this
1424
+
1425
+ 00:20:31.440 --> 00:20:35.799
1426
+ is it might be a little bit dependent on
1427
+
1428
+ 00:20:34.120 --> 00:20:39.840
1429
+ the language but all of the languages
1430
+
1431
+ 00:20:35.799 --> 00:20:42.880
1432
+ they tested it's essentially uh that's
1433
+
1434
+ 00:20:39.840 --> 00:20:44.919
1435
+ the conclusion that they came to and
1436
+
1437
+ 00:20:42.880 --> 00:20:47.679
1438
+ it's pretty Stark in this particular
1439
+
1440
+ 00:20:44.919 --> 00:20:50.640
1441
+ paper this might change a little bit
1442
+
1443
+ 00:20:47.679 --> 00:20:52.960
1444
+ with um with more powerful models but I
1445
+
1446
+ 00:20:50.640 --> 00:20:57.360
1447
+ still would be very surprised if this is
1448
+
1449
+ 00:20:52.960 --> 00:21:00.440
1450
+ not like if this doesn't hold still so
1451
+
1452
+ 00:20:57.360 --> 00:21:04.440
1453
+ you can see it's like approximately on
1454
+
1455
+ 00:21:00.440 --> 00:21:08.200
1456
+ average uh seven-point increase in the
1457
+
1458
+ 00:21:04.440 --> 00:21:11.720
1459
+ results and just to to be clear here um
1460
+
1461
+ 00:21:08.200 --> 00:21:13.600
1462
+ we have native uh Chain of Thought So
1463
+
1464
+ 00:21:11.720 --> 00:21:16.039
1465
+ This is doing Chain of Thought in the in
1466
+
1467
+ 00:21:13.600 --> 00:21:17.799
1468
+ the language itself this is doing Chain
1469
+
1470
+ 00:21:16.039 --> 00:21:19.240
1471
+ of Thought in English but then answering
1472
+
1473
+ 00:21:17.799 --> 00:21:22.200
1474
+ in the language itself and this is just
1475
+
1476
+ 00:21:19.240 --> 00:21:23.799
1477
+ like translating everything into
1478
+
1479
+ 00:21:22.200 --> 00:21:27.440
1480
+ English
1481
+
1482
+ 00:21:23.799 --> 00:21:30.159
1483
+ um you can try this out too like if you
1484
+
1485
+ 00:21:27.440 --> 00:21:31.840
1486
+ uh if you speak another language you can um
1487
+
1488
+ 00:21:30.159 --> 00:21:34.200
1489
+ try to do it yourself when I try it in
1490
+
1491
+ 00:21:31.840 --> 00:21:36.200
1492
+ Japanese it's very clear that like the
1493
+
1494
+ 00:21:34.200 --> 00:21:38.640
1495
+ model seems more intelligent in English
1496
+
1497
+ 00:21:36.200 --> 00:21:41.559
1498
+ it just seems like it can do other
1499
+
1500
+ 00:21:38.640 --> 00:21:43.120
1501
+ things even though like intelligence uh
1502
+
1503
+ 00:21:41.559 --> 00:21:44.640
1504
+ shouldn't be a function of the language
1505
+
1506
+ 00:21:43.120 --> 00:21:47.120
1507
+ that you're asking a question in right
1508
+
1509
+ 00:21:44.640 --> 00:21:49.679
1510
+ like the model should have the ability
1511
+
1512
+ 00:21:47.120 --> 00:21:51.440
1513
+ to answer questions but because
1514
+
1515
+ 00:21:49.679 --> 00:21:53.000
1516
+ that's how humans work right our
1517
+
1518
+ 00:21:51.440 --> 00:21:54.520
1519
+ intelligence is kind of separated from
1520
+
1521
+ 00:21:53.000 --> 00:21:57.039
1522
+ our language how well we can express
1523
+
1524
+ 00:21:54.520 --> 00:22:00.480
1525
+ ourselves is a little bit different but
1526
+
1527
+ 00:21:57.039 --> 00:22:02.320
1528
+ um yeah for the final answer was it
1529
+
1530
+ 00:22:00.480 --> 00:22:04.840
1531
+ translated back to the original language
1532
+
1533
+ 00:22:02.320 --> 00:22:09.440
1534
+ and then evaluated for translate English
1535
+
1536
+ 00:22:04.840 --> 00:22:12.559
1537
+ I'm not 100% sure about this I think it
1538
+
1539
+ 00:22:09.440 --> 00:22:13.840
1540
+ was not so that might be a confounding
1541
+
1542
+ 00:22:12.559 --> 00:22:16.799
1543
+ factor for this one but it's not a
1544
+
1545
+ 00:22:13.840 --> 00:22:20.039
1546
+ confounding factor for this one anyway
1547
+
1548
+ 00:22:16.799 --> 00:22:20.039
1549
+ yeah any other
1550
+
1551
+ 00:22:20.679 --> 00:22:23.919
1552
+ questions Okay
1553
+
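The three setups compared above (native Chain of Thought, English Chain of Thought with a native answer, and translating everything into English) can be sketched as prompt templates; the Japanese question and the `translate` stub below are illustrative assumptions, not the paper's actual prompts.

```python
# Prompt templates for the three multilingual Chain-of-Thought setups.
question_ja = "ロジャーはテニスボールを5個持っています。"  # illustrative

def native_cot(q):
    # Reason and answer entirely in the question's language.
    return f"{q}\nステップごとに考えましょう。"

def english_cot(q):
    # Reason in English, then answer in the original language.
    return f"{q}\nLet's think step by step in English, then answer in Japanese."

def translate_en(q, translate):
    # Translate everything into English first (the strongest setting above).
    return f"{translate(q)}\nLet's think step by step."

# `translate` is a stub standing in for a real MT system.
prompt = translate_en(question_ja, lambda q: "Roger has 5 tennis balls.")
print(prompt)
```

The templates make the design decision concrete: the only difference between the settings is which language the intermediate reasoning tokens are generated in.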
1554
+ 00:22:24.200 --> 00:22:29.559
1555
+ cool so this is a pretty interesting
1556
+
1557
+ 00:22:26.799 --> 00:22:32.000
1558
+ result here um
1559
+
1560
+ 00:22:29.559 --> 00:22:34.120
1561
+ and the next kind of series of results
1562
+
1563
+ 00:22:32.000 --> 00:22:35.360
1564
+ are going to be based on the uh that I'm
1565
+
1566
+ 00:22:34.120 --> 00:22:36.919
1567
+ going to talk about are going to be
1568
+
1569
+ 00:22:35.360 --> 00:22:39.240
1570
+ based on the quality of the reasoning
1571
+
1572
+ 00:22:36.919 --> 00:22:43.480
1573
+ chains that the model uses in Chain of
1574
+
1575
+ 00:22:39.240 --> 00:22:45.520
1576
+ Thought and this one is a simple
1577
+
1578
+ 00:22:43.480 --> 00:22:46.600
1579
+ heuristic for improving the quality of
1580
+
1581
+ 00:22:45.520 --> 00:22:49.279
1582
+ the reasoning
1583
+
1584
+ 00:22:46.600 --> 00:22:50.640
1585
+ chains and um yeah one thing I should
1586
+
1587
+ 00:22:49.279 --> 00:22:52.480
1588
+ mention is that the quality of the
1589
+
1590
+ 00:22:50.640 --> 00:22:55.760
1591
+ reasoning chain is definitely connected
1592
+
1593
+ 00:22:52.480 --> 00:22:58.080
1594
+ to the uh quality of the output like
1595
+
1596
+ 00:22:55.760 --> 00:23:00.159
1597
+ sometimes that's not necessarily the case
1598
+
1599
+ 00:22:58.080 --> 00:23:04.679
1600
+ right it could just say a whole bunch of
1601
+
1602
+ 00:23:00.159 --> 00:23:07.799
1603
+ you know false like uh actually no maybe
1604
+
1605
+ 00:23:04.679 --> 00:23:07.799
1606
+ I'll I'll skip this
1607
+
1608
+ 00:23:08.200 --> 00:23:14.919
1609
+ one and go and and explain this one next
1610
+
1611
+ 00:23:11.919 --> 00:23:14.919
1612
+ so
1613
+
1614
+ 00:23:15.159 --> 00:23:19.039
1615
+ um yeah actually sorry the or the
1616
+
1617
+ 00:23:17.600 --> 00:23:20.520
1618
+ explanation ordering for this is a
1619
+
1620
+ 00:23:19.039 --> 00:23:25.360
1621
+ little bit hard but yeah I'll explain
1622
+
1623
+ 00:23:20.520 --> 00:23:26.840
1624
+ this one next so um very quickly um
1625
+
1626
+ 00:23:25.360 --> 00:23:29.640
1627
+ there's two ways that you could be
1628
+
1629
+ 00:23:26.840 --> 00:23:32.880
1630
+ reasoning one way you could be reasoning
1631
+
1632
+ 00:23:29.640 --> 00:23:35.000
1633
+ is doing an explanation first and then
1634
+
1635
+ 00:23:32.880 --> 00:23:36.720
1636
+ uh predicting the answer the other way
1637
+
1638
+ 00:23:35.000 --> 00:23:39.080
1639
+ you could do it is predicting the answer
1640
+
1641
+ 00:23:36.720 --> 00:23:43.039
1642
+ and then do it um then giving the
1643
+
1644
+ 00:23:39.080 --> 00:23:45.559
1645
+ explanation and in general if you have a
1646
+
1647
+ 00:23:43.039 --> 00:23:47.919
1648
+ reasonably strong model uh you know any
1649
+
1650
+ 00:23:45.559 --> 00:23:50.679
1651
+ of the modern kind of Frontier level
1652
+
1653
+ 00:23:47.919 --> 00:23:52.240
1654
+ models right now doing the explanation
1655
+
1656
+ 00:23:50.679 --> 00:23:54.039
1657
+ first and then making the prediction is
1658
+
1659
+ 00:23:52.240 --> 00:23:56.880
1660
+ better and the reason why is because
1661
+
1662
+ 00:23:54.039 --> 00:23:59.240
1663
+ Chain of Thought works and the model is
1664
+
1665
+ 00:23:56.880 --> 00:24:02.960
1666
+ able to break down the quest um the
1667
+
1668
+ 00:23:59.240 --> 00:24:07.279
1669
+ questions into kind of
1670
+
1671
+ 00:24:02.960 --> 00:24:10.159
1672
+ simpler uh it's able to break down the
1673
+
1674
+ 00:24:07.279 --> 00:24:11.520
1675
+ like the answer into like simp simpler
1676
+
1677
+ 00:24:10.159 --> 00:24:14.080
1678
+ questions for like mathematical
1679
+
1680
+ 00:24:11.520 --> 00:24:15.679
1681
+ reasoning or something like that um and
1682
+
1683
+ 00:24:14.080 --> 00:24:18.039
1684
+ then give me the answer so like for
1685
+
1686
+ 00:24:15.679 --> 00:24:20.000
1687
+ example for text-davinci-002 which was state
1688
+
1689
+ 00:24:18.039 --> 00:24:22.679
1690
+ of the art at the time of this writing you
1691
+
1692
+ 00:24:20.000 --> 00:24:24.360
1693
+ see a five-point boost from using um
1694
+
1695
+ 00:24:22.679 --> 00:24:29.080
1696
+ explanation first and then prediction
1697
+
1698
+ 00:24:24.360 --> 00:24:30.640
1699
+ after that um in accuracy
1700
+
1701
+ 00:24:29.080 --> 00:24:34.039
1702
+ but for the weaker models that was not
1703
+
1704
+ 00:24:30.640 --> 00:24:36.039
1705
+ the case so if you were using um GPT-3
1706
+
1707
+ 00:24:34.039 --> 00:24:38.720
1708
+ that wasn't trained for Chain of Thought
1709
+
1710
+ 00:24:36.039 --> 00:24:40.600
1711
+ or you were using OPT uh that was not
1712
+
1713
+ 00:24:38.720 --> 00:24:42.640
1714
+ the case but nowadays I think basically
1715
+
1716
+ 00:24:40.600 --> 00:24:45.279
1717
+ all models uh doing the explanation
1718
+
1719
+ 00:24:42.640 --> 00:24:48.120
1720
+ first and then the prediction is
1721
+
1722
+ 00:24:45.279 --> 00:24:49.640
1723
+ better um so going
1724
+
1725
+ 00:24:48.120 --> 00:24:51.640
1726
+ back
1727
+
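The two prompt orderings just described can be sketched as templates; the question and the exact wording of the instructions below are illustrative, not taken from the lecture or any specific paper.

```python
# Two orderings for eliciting reasoning from a language model.
# The question text and template wording here are illustrative only.
question = "A train travels 60 miles in 1.5 hours. What is its speed in mph?"

# (1) Explain first, then predict: the chain of thought is generated
# before the answer, so the answer is conditioned on the reasoning
# (usually better with reasonably strong models).
explain_then_predict = (
    f"Q: {question}\n"
    "Think step by step, then state the final answer.\n"
    "Reasoning:"
)

# (2) Predict first, then explain: the answer is generated before any
# reasoning, so it cannot benefit from the intermediate steps.
predict_then_explain = (
    f"Q: {question}\n"
    "State the final answer first, then explain your reasoning.\n"
    "Answer:"
)

print(explain_then_predict)
print(predict_then_explain)
```

The only difference between the two is which completion the model is asked to produce first; the five-point gap reported for text-davinci-002 comes entirely from that ordering.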
1728
+ 00:24:49.640 --> 00:24:53.559
1729
+ um another thing that people have
1730
+
1731
+ 00:24:51.640 --> 00:24:55.120
1732
+ noticed is like if your explanation is
1733
+
1734
+ 00:24:53.559 --> 00:24:56.520
1735
+ wrong your prediction also tends to be
1736
+
1737
+ 00:24:55.120 --> 00:24:58.120
1738
+ wrong so if you make mistakes in
1739
+
1740
+ 00:24:56.520 --> 00:25:00.520
1741
+ intermediate steps of your explanation
1742
+
1743
+ 00:24:58.120 --> 00:25:03.679
1744
+ it tends to mess up your final
1745
+
1746
+ 00:25:00.520 --> 00:25:06.000
1747
+ prediction um so like one of the
1748
+
1749
+ 00:25:03.679 --> 00:25:09.320
1750
+ interesting ways that people have found
1751
+
1752
+ 00:25:06.000 --> 00:25:11.559
1753
+ to improve the final the explanation
1754
+
1755
+ 00:25:09.320 --> 00:25:13.880
1756
+ quality is they just observe that if the
1757
+
1758
+ 00:25:11.559 --> 00:25:18.840
1759
+ explanations are longer they tend to be
1760
+
1761
+ 00:25:13.880 --> 00:25:20.960
1762
+ better it's uh kind of interesting but
1763
+
1764
+ 00:25:18.840 --> 00:25:23.000
1765
+ like if they give you more reasoning
1766
+
1767
+ 00:25:20.960 --> 00:25:25.000
1768
+ steps this tends to be more accurate and
1769
+
1770
+ 00:25:23.000 --> 00:25:27.320
1771
+ they actually demonstrate that in this
1772
+
1773
+ 00:25:25.000 --> 00:25:29.200
1774
+ paper where here's a simple reasoning
1775
+
1776
+ 00:25:27.320 --> 00:25:31.720
1777
+ chain here's a more complex reasoning
1778
+
1779
+ 00:25:29.200 --> 00:25:35.480
1780
+ chain and you actually see for exactly
1781
+
1782
+ 00:25:31.720 --> 00:25:36.760
1783
+ the same problem they get about a 15%
1784
+
1785
+ 00:25:35.480 --> 00:25:38.360
1786
+ boost and these are kind of like
1787
+
1788
+ 00:25:36.760 --> 00:25:39.960
1789
+ naturally occurring reasoning chains
1790
+
1791
+ 00:25:38.360 --> 00:25:41.520
1792
+ they didn't like train the model to give
1793
+
1794
+ 00:25:39.960 --> 00:25:43.919
1795
+ you longer reasoning chains or anything
1796
+
1797
+ 00:25:41.520 --> 00:25:45.279
1798
+ like that but amongst the naturally
1799
+
1800
+ 00:25:43.919 --> 00:25:46.840
1801
+ occurring reasoning chains the longer
1802
+
1803
+ 00:25:45.279 --> 00:25:50.480
1804
+ ones tend to be
1805
+
1806
+ 00:25:46.840 --> 00:25:53.159
1807
+ better and this fact could be simply
1808
+
1809
+ 00:25:50.480 --> 00:25:54.679
1810
+ used to improve accuracy um and so the
1811
+
1812
+ 00:25:53.159 --> 00:25:57.360
1813
+ way they did this is they just sampled
1814
+
1815
+ 00:25:54.679 --> 00:25:59.279
1816
+ multiple reasoning paths and then they
1817
+
1818
+ 00:25:57.360 --> 00:26:00.840
1819
+ performed self consistency over the
1820
+
1821
+ 00:25:59.279 --> 00:26:03.000
1822
+ longer reasoning paths so if you
1823
+
1824
+ 00:26:00.840 --> 00:26:05.240
1825
+ remember what self consistency is it's
1826
+
1827
+ 00:26:03.000 --> 00:26:07.240
1828
+ basically like you do majority voting
1829
+
1830
+ 00:26:05.240 --> 00:26:09.679
1831
+ over the answers for multiple reasoning
1832
+
1833
+ 00:26:07.240 --> 00:26:13.880
1834
+ paths so they threw out the lower
1835
+
1836
+ 00:26:09.679 --> 00:26:13.880
1837
+ quality ones and that improved overall
1838
+
1839
+ 00:26:14.399 --> 00:26:20.279
1840
+ accuracy so um yeah that's a thing that
1841
+
1842
+ 00:26:18.000 --> 00:26:20.279
1843
+ you can
1844
+
1845
+ 00:26:21.039 --> 00:26:25.960
1846
+ do
1847
+
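That length-filtered self-consistency procedure can be sketched as follows; the function name, the `keep_frac` parameter, and the toy samples are my own illustration rather than the paper's code.

```python
from collections import Counter

def length_filtered_self_consistency(samples, keep_frac=0.5):
    """Majority-vote the final answer over only the longer reasoning
    chains, following the heuristic that longer naturally occurring
    chains tend to be better. `samples` is a list of
    (reasoning_chain, answer) pairs; names here are illustrative."""
    # Rank sampled paths by chain length and keep only the longer ones.
    ranked = sorted(samples, key=lambda s: len(s[0]), reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_frac))]
    # Self-consistency = majority vote over the surviving answers.
    votes = Counter(answer for _, answer in kept)
    return votes.most_common(1)[0][0]

paths = [
    ("step1 step2 step3 step4", "42"),  # long chain
    ("step1 step2 step3", "42"),        # long-ish chain
    ("step1", "17"),                    # short chain, filtered out
    ("step1 step2", "17"),              # short chain, filtered out
]
print(length_filtered_self_consistency(paths))  # -> 42
```

Without the length filter, this example would be a 2-2 tie; throwing out the shorter (lower-quality) paths before voting is what breaks it.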
1848
+ 00:26:23.120 --> 00:26:28.880
1849
+ um so yeah going back to systematic
1850
+
1851
+ 00:26:25.960 --> 00:26:31.360
1852
+ studies of reasoning in llms
1853
+
1854
+ 00:26:28.880 --> 00:26:33.559
1855
+ um one of the big results that's
1856
+
1857
+ 00:26:31.360 --> 00:26:35.880
1858
+ actually really important to know about
1859
+
1860
+ 00:26:33.559 --> 00:26:39.039
1861
+ is th this sort of Chain of Thought
1862
+
1863
+ 00:26:35.880 --> 00:26:41.080
1864
+ reasoning um is considered to be an
1865
+
1866
+ 00:26:39.039 --> 00:26:43.520
1867
+ emergent ability
1868
+
1869
+ 00:26:41.080 --> 00:26:47.080
1870
+ in uh large language models and what we
1871
+
1872
+ 00:26:43.520 --> 00:26:49.360
1873
+ mean by an emergent ability is it's or
1874
+
1875
+ 00:26:47.080 --> 00:26:53.679
1876
+ what what the the name emergent ability
1877
+
1878
+ 00:26:49.360 --> 00:26:56.399
1879
+ typically refers to is that it is
1880
+
1881
+ 00:26:53.679 --> 00:26:58.640
1882
+ something that increases dramatically as
1883
+
1884
+ 00:26:56.399 --> 00:27:01.679
1885
+ the model size gets uh up up to a
1886
+
1887
+ 00:26:58.640 --> 00:27:03.200
1888
+ certain point so these actually I'm I'm
1889
+
1890
+ 00:27:01.679 --> 00:27:06.080
1891
+ really sorry I cut off the thing on the
1892
+
1893
+ 00:27:03.200 --> 00:27:07.360
1894
+ bottom here this is like open AI does
1895
+
1896
+ 00:27:06.080 --> 00:27:08.520
1897
+ this all the time to not tell you how
1898
+
1899
+ 00:27:07.360 --> 00:27:11.399
1900
+ many parameters they have in their
1901
+
1902
+ 00:27:08.520 --> 00:27:12.760
1903
+ models but I did not do it intentionally
1904
+
1905
+ 00:27:11.399 --> 00:27:15.360
1906
+ here because I think it's actually in
1907
+
1908
+ 00:27:12.760 --> 00:27:17.320
1909
+ here in the paper um but like these ones
1910
+
1911
+ 00:27:15.360 --> 00:27:19.399
1912
+ over here are kind of the like 175
1913
+
1914
+ 00:27:17.320 --> 00:27:20.640
1915
+ billion parameter models and like the
1916
+
1917
+ 00:27:19.399 --> 00:27:24.520
1918
+ the larger
1919
+
1920
+ 00:27:20.640 --> 00:27:25.960
1921
+ models um and what you see is like up
1922
+
1923
+ 00:27:24.520 --> 00:27:29.919
1924
+ until a certain point you get basically
1925
+
1926
+ 00:27:25.960 --> 00:27:33.919
1927
+ zero accuracy and then uh the outputs
1928
+
1929
+ 00:27:29.919 --> 00:27:37.000
1930
+ improve and so for a while people were
1931
+
1932
+ 00:27:33.919 --> 00:27:39.240
1933
+ really like confused about this like why
1934
+
1935
+ 00:27:37.000 --> 00:27:41.440
1936
+ why does this happen it feels like magic
1937
+
1938
+ 00:27:39.240 --> 00:27:44.279
1939
+ that you get a really you know powerful
1940
+
1941
+ 00:27:41.440 --> 00:27:46.679
1942
+ model and then suddenly it gets better
1943
+
1944
+ 00:27:44.279 --> 00:27:49.799
1945
+ uh uh like at the very
1946
+
1947
+ 00:27:46.679 --> 00:27:52.159
1948
+ end but actually there's a much simpler
1949
+
1950
+ 00:27:49.799 --> 00:27:53.760
1951
+ solution there's not that much magic
1952
+
1953
+ 00:27:52.159 --> 00:27:55.960
1954
+ to this
1955
+
1956
+ 00:27:53.760 --> 00:27:58.399
1957
+ and we've known about this for a little
1958
+
1959
+ 00:27:55.960 --> 00:28:00.919
1960
+ while but this paper from 2023 really
1961
+
1962
+ 00:27:58.399 --> 00:28:02.360
1963
+ like expressed it very clearly um so I
1964
+
1965
+ 00:28:00.919 --> 00:28:04.360
1966
+ highly recommend you take a look at this
1967
+
1968
+ 00:28:02.360 --> 00:28:07.720
1969
+ if you're interested in kind of like the
1970
+
1971
+ 00:28:04.360 --> 00:28:10.159
1972
+ emergent abilities in language models but
1973
+
1974
+ 00:28:07.720 --> 00:28:15.039
1975
+ basically the the thing about emergent
1976
+
1977
+ 00:28:10.159 --> 00:28:19.720
1978
+ abilities is that they're mostly
1979
+
1980
+ 00:28:15.039 --> 00:28:20.720
1981
+ a matter of how you um how you measure
1982
+
1983
+ 00:28:19.720 --> 00:28:22.519
1984
+ your
1985
+
1986
+ 00:28:20.720 --> 00:28:27.640
1987
+ models
1988
+
1989
+ 00:28:22.519 --> 00:28:30.120
1990
+ accuracy and so let's say as your model
1991
+
1992
+ 00:28:27.640 --> 00:28:30.120
1993
+ gets better
1994
+
1995
+ 00:28:39.039 --> 00:28:45.600
1996
+ it gets gradually better at predicting
1997
+
1998
+ 00:28:41.200 --> 00:28:45.600
1999
+ the like a reasonable next
2000
+
2001
+ 00:28:47.799 --> 00:28:54.760
2002
+ token so this is like a I don't know
2003
+
2004
+ 00:28:50.919 --> 00:28:59.120
2005
+ like 200 million parameter model 500
2006
+
2007
+ 00:28:54.760 --> 00:29:03.240
2008
+ million 1 billion 3 billion
2009
+
2010
+ 00:28:59.120 --> 00:29:06.600
2011
+ 7 billion and like 70 billion or
2012
+
2013
+ 00:29:03.240 --> 00:29:09.600
2014
+ something like that um and so this is
2015
+
2016
+ 00:29:06.600 --> 00:29:12.640
2017
+ like the next token prediction accuracy
2018
+
2019
+ 00:29:09.600 --> 00:29:14.320
2020
+ um or like the the accuracy of
2021
+
2022
+ 00:29:12.640 --> 00:29:16.279
2023
+ predicting a reasonable next token that
2024
+
2025
+ 00:29:14.320 --> 00:29:18.880
2026
+ won't result in your reasoning
2027
+
2028
+ 00:29:16.279 --> 00:29:20.000
2029
+ chain being wrong and making a mistake
2030
+
2031
+ 00:29:18.880 --> 00:29:24.200
2032
+ and
2033
+
2034
+ 00:29:20.000 --> 00:29:26.200
2035
+ so if you have an accuracy like this in
2036
+
2037
+ 00:29:24.200 --> 00:29:28.880
2038
+ order to get the correct answer like
2039
+
2040
+ 00:29:26.200 --> 00:29:30.559
2041
+ let's say there's about five or eight
2042
+
2043
+ 00:29:28.880 --> 00:29:33.519
2044
+ places where you could possibly make a
2045
+
2046
+ 00:29:30.559 --> 00:29:35.080
2047
+ mistake in the derivation like one
2048
+
2049
+ 00:29:33.519 --> 00:29:36.760
2050
+ common places to make a mistake in a
2051
+
2052
+ 00:29:35.080 --> 00:29:38.519
2053
+ derivation for math for example are
2054
+
2055
+ 00:29:36.760 --> 00:29:40.200
2056
+ where you predict a number like where
2057
+
2058
+ 00:29:38.519 --> 00:29:42.679
2059
+ you predict the result of an equation
2060
+
2061
+ 00:29:40.200 --> 00:29:44.120
2062
+ and you might have five reasoning steps
2063
+
2064
+ 00:29:42.679 --> 00:29:47.720
2065
+ where you might predict the result of an
2066
+
2067
+ 00:29:44.120 --> 00:29:53.039
2068
+ equation um and so if we do
2069
+
2070
+ 00:29:47.720 --> 00:29:53.039
2071
+ this let's exponentiate all of these by
2072
+
2073
+ 00:29:54.799 --> 00:29:58.799
2074
+ five um
2075
+
2076
+ 00:30:06.640 --> 00:30:16.120
2077
+ uh write python code to exponentiate
2078
+
2079
+ 00:30:11.200 --> 00:30:16.120
2080
+ these numbers by
2081
+
2082
+ 00:30:19.600 --> 00:30:27.559
2083
+ five I'm lazy enough that I just asked
2084
+
2085
+ 00:30:22.159 --> 00:30:27.559
2086
+ ChatGPT to do this for me now
2087
+
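The quick computation being handed to ChatGPT here can be sketched in a few lines; the per-token accuracies below are illustrative stand-ins, not the exact values on the slide.

```python
# Sketch of the lecture's arithmetic: if a reasoning chain has roughly
# five places where the model must predict a token correctly (e.g. five
# equation results), the whole-chain accuracy is approximately the
# per-token accuracy raised to the fifth power.
# The per-token accuracies are illustrative, not the slide's values.
per_token_accuracy = [0.20, 0.40, 0.50, 0.60, 0.75, 0.98]

chain_accuracy = [p ** 5 for p in per_token_accuracy]
for p, c in zip(per_token_accuracy, chain_accuracy):
    print(f"per-token {p:.2f} -> whole-chain {c:.1%}")
```

A smooth rise in per-token accuracy turns into chain accuracies that sit near zero (0.50^5 is about 3%, 0.75^5 about 24%) until the very largest model (0.98^5 is about 90%), which is exactly the "emergent" shape in the plots.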
2088
+ 00:30:30.080 --> 00:30:32.919
2089
+ and so if we do
2090
+
2091
+ 00:30:35.399 --> 00:30:39.840
2092
+ this go to Chat
2093
+
2094
+ 00:30:50.000 --> 00:30:58.360
2095
+ GPT so now we are getting something that
2096
+
2097
+ 00:30:54.760 --> 00:30:58.360
2098
+ looks like zero
2099
+
2100
+ 00:31:02.159 --> 00:31:07.960
2101
+ um basically zero basically
2102
+
2103
+ 00:31:05.639 --> 00:31:10.960
2104
+ zero
2105
+
2106
+ 00:31:07.960 --> 00:31:10.960
2107
+ uh
2108
+
2109
+ 00:31:13.399 --> 00:31:16.399
2110
+ 3%
2111
+
2112
+ 00:31:16.799 --> 00:31:22.440
2113
+ 23%
2114
+
2115
+ 00:31:19.080 --> 00:31:22.440
2116
+ 9% and
2117
+
2118
+ 00:31:22.559 --> 00:31:28.720
2119
+ 90% so what you can see is there's
2120
+
2121
+ 00:31:26.639 --> 00:31:30.600
2122
+ actually a pretty steady gradation of
2123
+
2124
+ 00:31:28.720 --> 00:31:33.120
2125
+ like the next token prediction accuracy
2126
+
2127
+ 00:31:30.600 --> 00:31:36.600
2128
+ here but if you need to predict multiple
2129
+
2130
+ 00:31:33.120 --> 00:31:38.919
2131
+ tokens correctly then it looks like it's
2132
+
2133
+ 00:31:36.600 --> 00:31:41.240
2134
+ doing basically nothing until you get up
2135
+
2136
+ 00:31:38.919 --> 00:31:43.600
2137
+ to like 75% next token accuracy and then
2138
+
2139
+ 00:31:41.240 --> 00:31:45.320
2140
+ it starts taking off so that's like uh
2141
+
2142
+ 00:31:43.600 --> 00:31:46.960
2143
+ what happens in emergent abilities and
2144
+
2145
+ 00:31:45.320 --> 00:31:49.159
2146
+ you'll notice that most things that are
2147
+
2148
+ 00:31:46.960 --> 00:31:50.880
2149
+ talking about emergent abilities are
2150
+
2151
+ 00:31:49.159 --> 00:31:53.559
2152
+ usually talking about some sort of Chain
2153
+
2154
+ 00:31:50.880 --> 00:31:55.799
2155
+ of Thought or some sort of reasoning uh
2156
+
2157
+ 00:31:53.559 --> 00:31:58.480
2158
+ reasoning accuracy even if that's not
2159
+
2160
+ 00:31:55.799 --> 00:32:00.480
2161
+ the case um even if they're just
2162
+
2163
+ 00:31:58.480 --> 00:32:02.639
2164
+ predicting a single token it can still
2165
+
2166
+ 00:32:00.480 --> 00:32:05.399
2167
+ happen because
2168
+
2169
+ 00:32:02.639 --> 00:32:08.559
2170
+ basically the probability of a single
2171
+
2172
+ 00:32:05.399 --> 00:32:11.639
2173
+ token can continue to go up smoothly but
2174
+
2175
+ 00:32:08.559 --> 00:32:13.240
2176
+ you only get the the token correct after
2177
+
2178
+ 00:32:11.639 --> 00:32:14.760
2179
+ the probability starts getting higher
2180
+
2181
+ 00:32:13.240 --> 00:32:18.320
2182
+ than all the others and that's also a
2183
+
2184
+ 00:32:14.760 --> 00:32:21.279
2185
+ discontinuous function so um so
2186
+
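The single-token version of this argument can also be sketched directly: the correct token's probability climbs smoothly, but greedy decoding only scores a hit once that probability exceeds every competitor, which is a step function. All numbers below are illustrative.

```python
import math

# Why a smooth metric can look "emergent" under greedy decoding:
# the correct token's probability rises smoothly with model scale,
# but you only get the token right once it beats all alternatives.
p_correct = [0.10, 0.20, 0.30, 0.40, 0.55, 0.80]  # grows smoothly
p_best_wrong = 0.45                                # strongest wrong token

log_likelihood = [math.log(p) for p in p_correct]    # smooth curve
greedy_hits = [p > p_best_wrong for p in p_correct]  # discontinuous jump

print(greedy_hits)  # -> [False, False, False, False, True, True]
```

Measured as log likelihood, every model in the sequence is visibly better than the last; measured as greedy accuracy, nothing appears to happen until the second-to-largest model.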
2187
+ 00:32:18.320 --> 00:32:23.080
2188
+ basically what this paper shows is like
2189
+
2190
+ 00:32:21.279 --> 00:32:26.440
2191
+ even if you have like the probability of
2192
+
2193
+ 00:32:23.080 --> 00:32:28.679
2194
+ the correct token going um the correct
2195
+
2196
+ 00:32:26.440 --> 00:32:30.639
2197
+ token going up gradually uh you can see
2198
+
2199
+ 00:32:28.679 --> 00:32:33.440
2200
+ this emergent ability based on how you
2201
+
2202
+ 00:32:30.639 --> 00:32:37.279
2203
+ uh measure it so um that's an important
2204
+
2205
+ 00:32:33.440 --> 00:32:38.960
2206
+ thing to realize about uh this another
2207
+
2208
+ 00:32:37.279 --> 00:32:41.080
2209
+ correl of this is like let's say you
2210
+
2211
+ 00:32:38.960 --> 00:32:44.679
2212
+ want to do interesting experiments about
2213
+
2214
+ 00:32:41.080 --> 00:32:45.960
2215
+ reasoning on um on smaller models like
2216
+
2217
+ 00:32:44.679 --> 00:32:47.279
2218
+ let's say you want to train a smaller
2219
+
2220
+ 00:32:45.960 --> 00:32:49.159
2221
+ model and see how it improves on
2222
+
2223
+ 00:32:47.279 --> 00:32:52.159
2224
+ reasoning I would definitely encourage
2225
+
2226
+ 00:32:49.159 --> 00:32:54.799
2227
+ you to measure not only accuracy because
2228
+
2229
+ 00:32:52.159 --> 00:32:57.279
2230
+ you might see like very little change in
2231
+
2232
+ 00:32:54.799 --> 00:32:58.720
2233
+ accuracy but also measure like log
2234
+
2235
+ 00:32:57.279 --> 00:33:00.360
2236
+ likelihood of reasoning chains or
2237
+
2238
+ 00:32:58.720 --> 00:33:02.960
2239
+ something like that because you'll see a
2240
+
2241
+ 00:33:00.360 --> 00:33:02.960
2242
+ a smoother
2243
+
2244
+ 00:33:03.799 --> 00:33:09.080
2245
+ curve cool um any questions about
2246
+
2247
+ 00:33:11.039 --> 00:33:17.240
2248
+ this okay um sounds
2249
+
2250
+ 00:33:14.720 --> 00:33:20.559
2251
+ good so I I talked a little bit about
2252
+
2253
+ 00:33:17.240 --> 00:33:23.120
2254
+ this um one one of the things here that
2255
+
2256
+ 00:33:20.559 --> 00:33:25.320
2257
+ I didn't talk about is this paper
2258
+
2259
+ 00:33:23.120 --> 00:33:28.159
2260
+ measures not just the accuracy of the
2261
+
2262
+ 00:33:25.320 --> 00:33:30.880
2263
+ answer with chain of thoughts um but it
2264
+
2265
+ 00:33:28.159 --> 00:33:35.840
2266
+ also measures the factuality of the
2267
+
2268
+ 00:33:30.880 --> 00:33:40.480
2269
+ explanation so basically um whether the
2270
+
2271
+ 00:33:35.840 --> 00:33:40.480
2272
+ explanation is a good explanation for
2273
+
2274
+ 00:33:40.760 --> 00:33:47.240
2275
+ the um whether the explanation is a good
2276
+
2277
+ 00:33:43.960 --> 00:33:50.039
2278
+ explanation for the actual
2279
+
2280
+ 00:33:47.240 --> 00:33:51.919
2281
+ derivation um and also the consistency
2282
+
2283
+ 00:33:50.039 --> 00:33:53.480
2284
+ of the answer in the explanation to
2285
+
2286
+ 00:33:51.919 --> 00:33:56.120
2287
+ figure out whether the answer and the
2288
+
2289
+ 00:33:53.480 --> 00:33:58.200
2290
+ explanation um match up with each other
2291
+
2292
+ 00:33:56.120 --> 00:33:59.600
2293
+ and they they did this with some uh
2294
+
2295
+ 00:33:58.200 --> 00:34:02.320
2296
+ synthetic data sets where you could
2297
+
2298
+ 00:33:59.600 --> 00:34:07.120
2299
+ actually measure the um the re the
2300
+
2301
+ 00:34:02.320 --> 00:34:10.399
2302
+ reasoning steps uh by using math so um
2303
+
2304
+ 00:34:07.120 --> 00:34:13.560
2305
+ what they were able to find is basically
2306
+
2307
+ 00:34:10.399 --> 00:34:15.760
2308
+ the answer and the explanation um
2309
+
2310
+ 00:34:13.560 --> 00:34:17.639
2311
+ when the answer in the explanation
2312
+
2313
+ 00:34:15.760 --> 00:34:22.079
2314
+ tended to be consistent especially for
2315
+
2316
+ 00:34:17.639 --> 00:34:23.760
2317
+ the stronger models and let's see yeah
2318
+
2319
+ 00:34:22.079 --> 00:34:25.399
2320
+ the the answer in the explanation tended
2321
+
2322
+ 00:34:23.760 --> 00:34:28.440
2323
+ to be consistent especially for the
2324
+
2325
+ 00:34:25.399 --> 00:34:30.879
2326
+ stronger models and um
2327
+
2328
+ 00:34:28.440 --> 00:34:33.000
2329
+ that also meant that if you had higher
2330
+
2331
+ 00:34:30.879 --> 00:34:35.839
2332
+ factuality in the explanation that
2333
+
2334
+ 00:34:33.000 --> 00:34:38.240
2335
+ translates into higher um you know
2336
+
2337
+ 00:34:35.839 --> 00:34:40.520
2338
+ factuality of the accuracy of the actual
2339
+
2340
+ 00:34:38.240 --> 00:34:43.159
2341
+ prediction um I would bet that these
2342
+
2343
+ 00:34:40.520 --> 00:34:45.240
2344
+ numbers are even higher uh nowadays I
2345
+
2346
+ 00:34:43.159 --> 00:34:49.040
2347
+ bet the consistency is even higher uh
2348
+
2349
+ 00:34:45.240 --> 00:34:49.040
2350
+ with more modern models than text-davinci-
2351
+
2352
+ 00:34:49.399 --> 00:34:53.200
2353
+ 002 and the reason being is like
2354
+
2355
+ 00:34:51.839 --> 00:34:54.760
2356
+ number one models are stronger number
2357
+
2358
+ 00:34:53.200 --> 00:34:56.560
2359
+ two all models are like trained for
2360
+
2361
+ 00:34:54.760 --> 00:35:00.960
2362
+ Chain of Thought pretty aggressively now
2363
+
2364
+ 00:34:56.560 --> 00:35:00.960
2365
+ so uh that would make the difference
2366
+
2367
+ 00:35:02.200 --> 00:35:08.640
2368
+ there cool um so the the other thing I'd
2369
+
2370
+ 00:35:07.000 --> 00:35:09.359
2371
+ like to talk about is training for Chain
2372
+
2373
+ 00:35:08.640 --> 00:35:13.079
2374
+ of
2375
+
2376
+ 00:35:09.359 --> 00:35:17.440
2377
+ Thought um so there's a fair amount of
2378
+
2379
+ 00:35:13.079 --> 00:35:19.200
2380
+ work in this general direction um from
2381
+
2382
+ 00:35:17.440 --> 00:35:23.040
2383
+ my point of view there's basically two
2384
+
2385
+ 00:35:19.200 --> 00:35:25.800
2386
+ ways that people do this nowadays um the
2387
+
2388
+ 00:35:23.040 --> 00:35:28.960
2389
+ first way is usually through generating
2390
+
2391
+ 00:35:25.800 --> 00:35:33.480
2392
+ lots of synthetic data that represents
2393
+
2394
+ 00:35:28.960 --> 00:35:37.800
2395
+ chains of thoughts and then using that
2396
+
2397
+ 00:35:33.480 --> 00:35:39.520
2398
+ to um to train models and this is the
2399
+
2400
+ 00:35:37.800 --> 00:35:41.839
2401
+ most famous version of this although
2402
+
2403
+ 00:35:39.520 --> 00:35:44.079
2404
+ this paper cites a lot of uh a lot of
2405
+
2406
+ 00:35:41.839 --> 00:35:45.760
2407
+ other ones but basically they generate a
2408
+
2409
+ 00:35:44.079 --> 00:35:48.280
2410
+ large and diverse uh Chain of Thought
2411
+
2412
+ 00:35:45.760 --> 00:35:51.240
2413
+ data set from GPT 3.5 and
2414
+
2415
+ 00:35:48.280 --> 00:35:53.200
2416
+ GPT-4 um it includes 5 million complex
2417
+
2418
+ 00:35:51.240 --> 00:35:55.640
2419
+ instructions I think they generated 1
2420
+
2421
+ 00:35:53.200 --> 00:35:59.000
2422
+ million from GPT-4 and 4 million from uh
2423
+
2424
+ 00:35:55.640 --> 00:36:01.640
2425
+ GPT 3.5 just because generating long
2426
+
2427
+ 00:35:59.000 --> 00:36:06.520
2428
+ sequences from GPT-4 is expensive and they
2429
+
2430
+ 00:36:01.640 --> 00:36:09.640
2431
+ didn't want to do that many um and
2432
+
2433
+ 00:36:06.520 --> 00:36:11.760
2434
+ then they uh achieved corresponding high
2435
+
2436
+ 00:36:09.640 --> 00:36:13.200
2437
+ accuracy on Chain of Thought related
2438
+
2439
+ 00:36:11.760 --> 00:36:16.200
2440
+ things compared to other data sets so
2441
+
2442
+ 00:36:13.200 --> 00:36:17.760
2443
+ compared to like Alpaca which is much uh
2444
+
2445
+ 00:36:16.200 --> 00:36:21.760
2446
+ smaller and doesn't have as much Chain
2447
+
2448
+ 00:36:17.760 --> 00:36:24.079
2449
+ of Thought and also um uh Vicuna which
2450
+
2451
+ 00:36:21.760 --> 00:36:26.640
2452
+ is similarly less focused on chain of
2453
+
2454
+ 00:36:24.079 --> 00:36:29.359
2455
+ thought they were able to do uh a good
2456
+
2457
+ 00:36:26.640 --> 00:36:31.599
2458
+ job
2459
+
2460
+ 00:36:29.359 --> 00:36:33.640
2461
+ um this paper was by Microsoft and they
2462
+
2463
+ 00:36:31.599 --> 00:36:36.960
2464
+ didn't actually release the Orca data
2465
+
2466
+ 00:36:33.640 --> 00:36:39.400
2467
+ set um for whatever reason uh legal
2468
+
2469
+ 00:36:36.960 --> 00:36:41.400
2470
+ legal or competitive reasons or whatever
2471
+
2472
+ 00:36:39.400 --> 00:36:43.000
2473
+ but there's another open Orca data set
2474
+
2475
+ 00:36:41.400 --> 00:36:44.359
2476
+ that you can download and use uh that
2477
+
2478
+ 00:36:43.000 --> 00:36:47.480
2479
+ attempts to replicate it and it's
2480
+
2481
+ 00:36:44.359 --> 00:36:50.440
2482
+ reasonably good so uh you you can uh
2483
+
2484
+ 00:36:47.480 --> 00:36:50.440
2485
+ keep that in mind if you're
2486
+
2487
+ 00:36:50.800 --> 00:36:59.520
2488
+ interested um this is another really
2489
+
2490
+ 00:36:53.280 --> 00:36:59.520
2491
+ interesting paper on uh trying to create
2492
+
2493
+ 00:37:00.160 --> 00:37:05.760
2494
+ assessments automatic assessments of how
2495
+
2496
+ 00:37:03.440 --> 00:37:09.880
2497
+ good chains of thought are and what they
2498
+
2499
+ 00:37:05.760 --> 00:37:13.079
2500
+ do essentially is it's relatively simple
2501
+
2502
+ 00:37:09.880 --> 00:37:15.200
2503
+ they get human feedback on each step of
2504
+
2505
+ 00:37:13.079 --> 00:37:17.760
2506
+ a derivation so they just basically ask
2507
+
2508
+ 00:37:15.200 --> 00:37:20.599
2509
+ people is this step of the derivation
2510
+
2511
+ 00:37:17.760 --> 00:37:22.160
2512
+ good and uh if the answer is yes then
2513
+
2514
+ 00:37:20.599 --> 00:37:24.760
2515
+ they give it a a smiley face if the
2516
+
2517
+ 00:37:22.160 --> 00:37:26.440
2518
+ answer is no they give it a frowny face
2519
+
2520
+ 00:37:24.760 --> 00:37:28.560
2521
+ and they use this to train a reward
2522
+
2523
+ 00:37:26.440 --> 00:37:32.000
2524
+ model where the reward model basically
2525
+
2526
+ 00:37:28.560 --> 00:37:34.760
2527
+ predicts whether each uh thing of the um
2528
+
2529
+ 00:37:32.000 --> 00:37:36.800
2530
+ each step of the derivation is good and
2531
+
2532
+ 00:37:34.760 --> 00:37:38.160
2533
+ so we have two examples over here I know
2534
+
2535
+ 00:37:36.800 --> 00:37:41.160
2536
+ this is really small you might be able
2537
+
2538
+ 00:37:38.160 --> 00:37:43.200
2539
+ to see it um either in the paper on uh
2540
+
2541
+ 00:37:41.160 --> 00:37:46.359
2542
+ the slides on the website but what we
2543
+
2544
+ 00:37:43.200 --> 00:37:49.000
2545
+ can see here is that it assesses each of
2546
+
2547
+ 00:37:46.359 --> 00:37:52.680
2548
+ these steps and uh checks that the
2549
+
2550
+ 00:37:49.000 --> 00:37:55.760
2551
+ answer is good um but it's also able to
2552
+
2553
+ 00:37:52.680 --> 00:37:57.119
2554
+ identify places where uh like steps are
2555
+
2556
+ 00:37:55.760 --> 00:37:59.560
2557
+ incorrect and then the final answer
2558
+
2559
+ 00:37:57.119 --> 00:38:02.560
2560
+ becomes incorrect and then they use this
2561
+
2562
+ 00:37:59.560 --> 00:38:04.440
2563
+ for training um a Chain of Thought style
2564
+
2565
+ 00:38:02.560 --> 00:38:06.319
2566
+ model so they have the model generate
2567
+
2568
+ 00:38:04.440 --> 00:38:08.520
2569
+ chains of thought and they assess them
2570
+
2571
+ 00:38:06.319 --> 00:38:10.079
2572
+ with the reward model and upweight
2573
+
2574
+ 00:38:08.520 --> 00:38:12.160
2575
+ answers that have good chains of thought
2576
+
2577
+ 00:38:10.079 --> 00:38:15.680
2578
+ and so the good thing about this is they
2579
+
2580
+ 00:38:12.160 --> 00:38:17.440
2581
+ actually don't need um they don't need
2582
+
2583
+ 00:38:15.680 --> 00:38:20.160
2584
+ the correct answers to train the model
2585
+
2586
+ 00:38:17.440 --> 00:38:21.640
2587
+ this way and because they don't need the
2588
+
2589
+ 00:38:20.160 --> 00:38:23.920
2590
+ correct answers to train the model this
2591
+
2592
+ 00:38:21.640 --> 00:38:26.640
2593
+ way they can also train the model on
2594
+
2595
+ 00:38:23.920 --> 00:38:29.200
2596
+ lots of other questions the reason why
2597
+
2598
+ 00:38:26.640 --> 00:38:31.520
2599
+ this works is because like Chain of
2600
+
2601
+ 00:38:29.200 --> 00:38:34.880
2602
+ Thought makes it easier to generate each
2603
+
2604
+ 00:38:31.520 --> 00:38:36.720
2605
+ of the steps in the derivation it's also
2606
+
2607
+ 00:38:34.880 --> 00:38:38.640
2608
+ easier to assess whether an individual
2609
+
2610
+ 00:38:36.720 --> 00:38:40.000
2611
+ step in a derivation is wrong then
2612
+
2613
+ 00:38:38.640 --> 00:38:42.960
2614
+ assess whether the answer is correct
2615
+
2616
+ 00:38:40.000 --> 00:38:45.319
2617
+ overall so um this feedback signal is
2618
+
2619
+ 00:38:42.960 --> 00:38:48.640
2620
+ easier to get model provided than it is
2621
+
2622
+ 00:38:45.319 --> 00:38:51.160
2623
+ for um uh like getting feedback on the
2624
+
2625
+ 00:38:48.640 --> 00:38:53.839
2626
+ answer itself yeah failure in one step
2627
+
2628
+ 00:38:51.160 --> 00:38:56.920
2629
+ causes all the other steps to fail yep
2630
+
2631
+ 00:38:53.839 --> 00:38:57.960
2632
+ you just assess the next steps based on
2633
+
2634
+ 00:38:56.920 --> 00:39:00.079
2635
+ the assumption
2636
+
2637
+ 00:38:57.960 --> 00:39:02.920
2638
+ the or do
2639
+
2640
+ 00:39:00.079 --> 00:39:05.240
2641
+ you I I don't think
2642
+
2643
+ 00:39:02.920 --> 00:39:07.599
2644
+ they I don't think they do that I think
2645
+
2646
+ 00:39:05.240 --> 00:39:10.119
2647
+ they um it it's a good question I'm not
2648
+
2649
+ 00:39:07.599 --> 00:39:12.160
2650
+ 100% sure about this but I think they um
2651
+
2652
+ 00:39:10.119 --> 00:39:14.280
2653
+ assess each one of the steps
2654
+
2655
+ 00:39:12.160 --> 00:39:15.920
2656
+ independently um and it's not
2657
+
2658
+ 00:39:14.280 --> 00:39:17.480
2659
+ necessarily the case that like failing
2660
+
2661
+ 00:39:15.920 --> 00:39:19.000
2662
+ on this step means the step is wrong
2663
+
2664
+ 00:39:17.480 --> 00:39:21.319
2665
+ right it could be just not using it at
2666
+
2667
+ 00:39:19.000 --> 00:39:25.240
2668
+ all also
2669
+
2670
+ 00:39:21.319 --> 00:39:25.240
2671
+ so um
2672
+
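The step-level feedback idea above can be sketched as a scoring rule: a (hypothetical) per-step reward model assigns each derivation step a probability of being good, and a chain's score combines the step scores, so chains containing a bad step get downweighted. The function names and the toy reward model are my own illustration, not the paper's code.

```python
# Minimal sketch of process-level (per-step) reward scoring.
# A step reward model returns P(step is good); a chain's score is
# the product of its step scores, so one bad step tanks the chain.
def score_chain(steps, step_reward_model):
    score = 1.0
    for step in steps:
        score *= step_reward_model(step)
    return score

# Toy stand-in for a trained reward model: it flags one specific
# arithmetic slip and trusts everything else.
def toy_step_rm(step):
    return 0.1 if "2 + 2 = 5" in step else 0.9

good = ["2 + 2 = 4", "4 * 3 = 12"]
bad = ["2 + 2 = 5", "5 * 3 = 15"]
print(score_chain(good, toy_step_rm) > score_chain(bad, toy_step_rm))  # -> True
```

Because the score is computed from the steps alone, sampled chains can be upweighted or downweighted for training without ever knowing the gold final answer, which is why this scales to questions with no labeled answers.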
2673
+ 00:39:25.440 --> 00:39:31.119
2674
+ cool so a final thing like to talk about
2675
+
2676
+ 00:39:28.160 --> 00:39:34.640
2677
+ which I think is kind of interesting um
2678
+
2679
+ 00:39:31.119 --> 00:39:37.040
2680
+ is abductive reasoning uh or learning
2681
+
2682
+ 00:39:34.640 --> 00:39:40.040
2683
+ explanations from
2684
+
2685
+ 00:39:37.040 --> 00:39:40.040
2686
+ data
2687
+
2688
+ 00:39:46.359 --> 00:39:49.359
2689
+ and
2690
+
2691
+ 00:39:52.440 --> 00:39:57.119
2692
+ sorry
2693
+
2694
+ 00:39:54.480 --> 00:40:00.760
2695
+ so basically the idea is can we find a
2696
+
2697
+ 00:39:57.119 --> 00:40:03.599
2698
+ rule that underlies a pattern in data
2699
+
2700
+ 00:40:00.760 --> 00:40:06.680
2701
+ and here are some examples of this the
2702
+
2703
+ 00:40:03.599 --> 00:40:11.680
2704
+ basic idea is if we have
2705
+
2706
+ 00:40:06.680 --> 00:40:16.599
2707
+ examples um which are like if I put
2708
+
2709
+ 00:40:11.680 --> 00:40:19.960
2710
+ a cylinder and a square a cylinder and a
2711
+
2712
+ 00:40:16.599 --> 00:40:22.119
2713
+ cube on uh this pink block I get a noise
2714
+
2715
+ 00:40:19.960 --> 00:40:25.440
2716
+ if I put just a cylinder on the pink
2717
+
2718
+ 00:40:22.119 --> 00:40:29.359
2719
+ block I don't get a noise and you want
2720
+
2721
+ 00:40:25.440 --> 00:40:31.800
2722
+ to discover underlying rules based on
2723
+
2724
+ 00:40:29.359 --> 00:40:33.160
2725
+ the data that you observed and so why
2726
+
2727
+ 00:40:31.800 --> 00:40:34.720
2728
+ would you want to do this there's a
2729
+
2730
+ 00:40:33.160 --> 00:40:38.000
2731
+ couple reasons why you would want to do
2732
+
2733
+ 00:40:34.720 --> 00:40:41.560
2734
+ this um the first reason why you would
2735
+
2736
+ 00:40:38.000 --> 00:40:42.920
2737
+ like to do this is because um you might
2738
+
2739
+ 00:40:41.560 --> 00:40:45.119
2740
+ want something that you can explain to
2741
+
2742
+ 00:40:42.920 --> 00:40:47.760
2743
+ humans right you can explain that this
2744
+
2745
+ 00:40:45.119 --> 00:40:51.240
2746
+ underlying pattern um exists in this
2747
+
2748
+ 00:40:47.760 --> 00:40:55.119
2749
+ data it explains why the
2750
+
2751
+ 00:40:51.240 --> 00:40:57.319
2752
+ data you know appears as it does appear
2753
+
2754
+ 00:40:55.119 --> 00:40:59.240
2755
+ and then humans can go in and analyze it
2756
+
2757
+ 00:40:57.319 --> 00:41:02.079
2758
+ or something like that so recently
2759
+
2760
+ 00:40:59.240 --> 00:41:03.880
2761
+ there's been a big focus on like using
2762
+
2763
+ 00:41:02.079 --> 00:41:06.480
2764
+ large language models for scientific
2765
+
2766
+ 00:41:03.880 --> 00:41:08.240
2767
+ inquiry and other things like that by
2768
+
2769
+ 00:41:06.480 --> 00:41:10.920
2770
+ coming up with good explanations for why
2771
+
2772
+ 00:41:08.240 --> 00:41:12.160
2773
+ data is the way it is so if we were able
2774
+
2775
+ 00:41:10.920 --> 00:41:15.599
2776
+ to do that that would be really
2777
+
2778
+ 00:41:12.160 --> 00:41:19.280
2779
+ interesting another thing is um language
2780
+
2781
+ 00:41:15.599 --> 00:41:22.960
2782
+ models are not particularly good
2783
+
2784
+ 00:41:19.280 --> 00:41:24.760
2785
+ at coming up with they're not
2786
+
2787
+ 00:41:22.960 --> 00:41:29.480
2788
+ particularly good at being consistent
2789
+
2790
+ 00:41:24.760 --> 00:41:33.640
2791
+ about difficult tasks across very large
2792
+
2793
+ 00:41:29.480 --> 00:41:35.319
2794
+ you know numbers of examples so if you
2795
+
2796
+ 00:41:33.640 --> 00:41:37.920
2797
+ could look at like all of the data at
2798
+
2799
+ 00:41:35.319 --> 00:41:41.240
2800
+ once infer general rules from them put
2801
+
2802
+ 00:41:37.920 --> 00:41:43.480
2803
+ those rules in a prompt and then apply
2804
+
2805
+ 00:41:41.240 --> 00:41:44.960
2806
+ that prompt to make predictions on new
2807
+
2808
+ 00:41:43.480 --> 00:41:47.880
2809
+ examples you might be able to raise your
2810
+
2811
+ 00:41:44.960 --> 00:41:49.760
2812
+ overall accuracy as well so it's kind of
2813
+
2814
+ 00:41:47.880 --> 00:41:52.480
2815
+ like you know that's how humans learn as
2816
+
2817
+ 00:41:49.760 --> 00:41:55.560
2818
+ well right we don't like just memorize
2819
+
2820
+ 00:41:52.480 --> 00:41:57.400
2821
+ each example um if we just look at a few
2822
+
2823
+ 00:41:55.560 --> 00:41:59.040
2824
+ examples then we might you know not
2825
+
2826
+ 00:41:57.400 --> 00:42:02.560
2827
+ generalize well to new examples so we
2828
+
2829
+ 00:41:59.040 --> 00:42:06.359
2830
+ kind of tried to abstract away general
2831
+
2832
+ 00:42:02.560 --> 00:42:08.160
2833
+ rules um so this is also similar to
2834
+
2835
+ 00:42:06.359 --> 00:42:10.200
2836
+ program induction from input output
2837
+
2838
+ 00:42:08.160 --> 00:42:12.240
2839
+ examples which I talked about during the code
2840
+
2841
+ 00:42:10.200 --> 00:42:14.040
2842
+ uh generation class so you have like
2843
+
2844
+ 00:42:12.240 --> 00:42:16.200
2845
+ input output examples and from them you
2846
+
2847
+ 00:42:14.040 --> 00:42:18.119
2848
+ would like to come up with uh general
2849
+
2850
+ 00:42:16.200 --> 00:42:19.920
2851
+ rules but this is a little bit more
2852
+
2853
+ 00:42:18.119 --> 00:42:21.920
2854
+ General it doesn't necessarily need to
2855
+
2856
+ 00:42:19.920 --> 00:42:24.160
2857
+ be a program that you're inducing it
2858
+
2859
+ 00:42:21.920 --> 00:42:25.920
2860
+ could be you know a grammar or it could
2861
+
2862
+ 00:42:24.160 --> 00:42:29.119
2863
+ be an explanation or it could be
2864
+
2865
+ 00:42:25.920 --> 00:42:29.119
2866
+ anything else like this
2867
+
2868
+ 00:42:30.079 --> 00:42:34.680
2869
+ um so there's a bit of work on rule
2870
+
2871
+ 00:42:31.960 --> 00:42:36.800
2872
+ induction with llms it's pretty recent
2873
+
2874
+ 00:42:34.680 --> 00:42:40.200
2875
+ work uh but I think it's pretty
2876
+
2877
+ 00:42:36.800 --> 00:42:43.400
2878
+ interesting so the first one is um
2879
+
2880
+ 00:42:40.200 --> 00:42:45.119
2881
+ hypothesis generation or the first step
2882
+
2883
+ 00:42:43.400 --> 00:42:47.839
2884
+ um of this particular work here is
2885
+
2886
+ 00:42:45.119 --> 00:42:53.280
2887
+ hypothesis generation and basically what
2888
+
2889
+ 00:42:47.839 --> 00:42:55.480
2890
+ it does is it takes all of these uh you
2891
+
2892
+ 00:42:53.280 --> 00:42:58.119
2893
+ know input output examples and from
2894
+
2895
+ 00:42:55.480 --> 00:43:01.680
2896
+ these input output examples it predicts
2897
+
2898
+ 00:42:58.119 --> 00:43:04.720
2899
+ these uh rules like the answer is always
2900
+
2901
+ 00:43:01.680 --> 00:43:06.720
2902
+ one or uh you want to pick the smallest
2903
+
2904
+ 00:43:04.720 --> 00:43:10.839
2905
+ one or you want to pick the first
2906
+
2907
+ 00:43:06.720 --> 00:43:12.880
2908
+ element and then you evaluate it um and
2909
+
2910
+ 00:43:10.839 --> 00:43:14.359
2911
+ so you pick the smallest one and you can
2912
+
2913
+ 00:43:12.880 --> 00:43:16.040
2914
+ either evaluate it using another
2915
+
2916
+ 00:43:14.359 --> 00:43:19.040
2917
+ language model or you can evaluate it
2918
+
2919
+ 00:43:16.040 --> 00:43:21.280
2920
+ using symbolic uh using a symbolic
2921
+
2922
+ 00:43:19.040 --> 00:43:23.359
2923
+ evaluator um if it's a program you could
2924
+
2925
+ 00:43:21.280 --> 00:43:24.680
2926
+ use a symbolic evaluator if it's a
2927
+
2928
+ 00:43:23.359 --> 00:43:28.559
2929
+ language model you could just ask the
2930
+
2931
+ 00:43:24.680 --> 00:43:30.960
2932
+ language model to pick you know
2933
+
2934
+ 00:43:28.559 --> 00:43:33.400
2935
+ an answer one always or pick the
2936
+
2937
+ 00:43:30.960 --> 00:43:35.400
2938
+ smallest one or pick the first element
2939
+
2940
+ 00:43:33.400 --> 00:43:37.480
2941
+ and then you get lots of outputs and
2942
+
2943
+ 00:43:35.400 --> 00:43:39.240
2944
+ then when you get lots of outputs you
2945
+
2946
+ 00:43:37.480 --> 00:43:42.079
2947
+ then can compare them against the
2948
+
2949
+ 00:43:39.240 --> 00:43:44.559
2950
+ expected outputs and verify whether the
2951
+
2952
+ 00:43:42.079 --> 00:43:47.920
2953
+ rule is correct verify whether the rule
2954
+
2955
+ 00:43:44.559 --> 00:43:50.160
2956
+ gives you the appropriate answer
2957
+
2958
+ 00:43:47.920 --> 00:43:53.599
2959
+ and once you've done that you can go
2960
+
2961
+ 00:43:50.160 --> 00:43:56.079
2962
+ back and do hypothesis refinement um uh
2963
+
2964
+ 00:43:53.599 --> 00:43:57.720
2965
+ and maybe even give this feedback about
2966
+
2967
+ 00:43:56.079 --> 00:44:00.079
2968
+ like what was wrong
2969
+
2970
+ 00:43:57.720 --> 00:44:03.280
2971
+ and gradually refine you know more
2972
+
2973
+ 00:44:00.079 --> 00:44:03.280
2974
+ accurate and more complex
2975
+
2976
+ 00:44:04.880 --> 00:44:11.040
2977
+ hypotheses this is another variant of
2978
+
2979
+ 00:44:07.720 --> 00:44:12.760
2980
+ this idea um which uses different
2981
+
2982
+ 00:44:11.040 --> 00:44:14.960
2983
+ methodology I think both are completely
2984
+
2985
+ 00:44:12.760 --> 00:44:17.920
2986
+ valid but um this one has a little bit
2987
+
2988
+ 00:44:14.960 --> 00:44:20.400
2989
+ higher data constraints so basically
2990
+
2991
+ 00:44:17.920 --> 00:44:23.160
2992
+ what we do is we use hypotheses in Chain
2993
+
2994
+ 00:44:20.400 --> 00:44:25.319
2995
+ of Thought reasoning and keep ones that
2996
+
2997
+ 00:44:23.160 --> 00:44:28.480
2998
+ result in correct
2999
+
3000
+ 00:44:25.319 --> 00:44:30.760
3001
+ answers so
3002
+
3003
+ 00:44:28.480 --> 00:44:35.880
3004
+ uh this is the step where they're trying
3005
+
3006
+ 00:44:30.760 --> 00:44:40.440
3007
+ to induce rules and so here this says um
3008
+
3009
+ 00:44:35.880 --> 00:44:42.599
3010
+ in base 9 what is 76 + 14 and they used
3011
+
3012
+ 00:44:40.440 --> 00:44:44.079
3013
+ base 9 here obviously because if it was
3014
+
3015
+ 00:44:42.599 --> 00:44:45.520
3016
+ in base 10 the language model would just
3017
+
3018
+ 00:44:44.079 --> 00:44:48.400
3019
+ solve the problem and it's not very
3020
+
3021
+ 00:44:45.520 --> 00:44:54.319
3022
+ interesting so uh they they did base 9
3023
+
3024
+ 00:44:48.400 --> 00:44:55.839
3025
+ addition and so the answer is um we have
3026
+
3027
+ 00:44:54.319 --> 00:45:00.280
3028
+ or the answer provided by the language
3029
+
3030
+ 00:44:55.839 --> 00:45:03.319
3031
+ model is we have 6 + 4 = 11 um the digit
3032
+
3033
+ 00:45:00.280 --> 00:45:07.480
3034
+ is 1 and the carry is 1 we have 7 + 1 +
3035
+
3036
+ 00:45:03.319 --> 00:45:09.480
3037
+ 1 = 10 the digit is zero and the carry is one
3038
+
3039
+ 00:45:07.480 --> 00:45:13.000
3040
+ a leading digit is one so the answer is
3041
+
3042
+ 00:45:09.480 --> 00:45:15.240
3043
+ 101 um and this verifies so they get the
3044
+
3045
+ 00:45:13.000 --> 00:45:17.240
3046
+ answer correct and so they know that
3047
+
3048
+ 00:45:15.240 --> 00:45:20.800
3049
+ they assume that this derivation is also
3050
+
3051
+ 00:45:17.240 --> 00:45:25.599
3052
+ correct and then they extract particular
3053
+
3054
+ 00:45:20.800 --> 00:45:28.200
3055
+ rules like 6 + 4 = 11 and 7 + 1 + 1 = 10
3056
+
3057
+ 00:45:25.599 --> 00:45:30.800
3058
+ um and they add this to the rule
3059
+
3060
+ 00:45:28.200 --> 00:45:32.960
3061
+ Library so then the question is how do
3062
+
3063
+ 00:45:30.800 --> 00:45:35.000
3064
+ they extract the rules the way they
3065
+
3066
+ 00:45:32.960 --> 00:45:37.920
3067
+ extract the rules is they have an in
3068
+
3069
+ 00:45:35.000 --> 00:45:40.760
3070
+ context prompt which surrounds the rules
3071
+
3072
+ 00:45:37.920 --> 00:45:43.520
3073
+ by basically XML tags that says this is
3074
+
3075
+ 00:45:40.760 --> 00:45:46.640
3076
+ a rule that should be extracted and so
3077
+
3078
+ 00:45:43.520 --> 00:45:48.400
3079
+ then um anything that is in an XML tag
3080
+
3081
+ 00:45:46.640 --> 00:45:50.960
3082
+ they when you get the correct answer
3083
+
3084
+ 00:45:48.400 --> 00:45:53.440
3085
+ they extract and add that to the rule
3086
+
3087
+ 00:45:50.960 --> 00:45:55.680
3088
+ library and then conversely like if the
3089
+
3090
+ 00:45:53.440 --> 00:45:57.800
3091
+ derivation um if the answer is wrong
3092
+
3093
+ 00:45:55.680 --> 00:45:59.920
3094
+ they just don't add it or they add it as
3095
+
3096
+ 00:45:57.800 --> 00:46:01.079
3097
+ a negative example and say this is an
3098
+
3099
+ 00:45:59.920 --> 00:46:04.119
3100
+ incorrect
3101
+
3102
+ 00:46:01.079 --> 00:46:05.839
3103
+ rule um and then in the final step where
3104
+
3105
+ 00:46:04.119 --> 00:46:07.480
3106
+ they do deductive reasoning they can
3107
+
3108
+ 00:46:05.839 --> 00:46:09.119
3109
+ then go ahead and use these rules and
3110
+
3111
+ 00:46:07.480 --> 00:46:11.640
3112
+ improve accuracy and they demonstrate
3113
+
3114
+ 00:46:09.119 --> 00:46:12.960
3115
+ that that helps so basically these are
3116
+
3117
+ 00:46:11.640 --> 00:46:14.520
3118
+ two different approaches one is
3119
+
3120
+ 00:46:12.960 --> 00:46:17.400
3121
+ extracting directly from the Chain of
3122
+
3123
+ 00:46:14.520 --> 00:46:18.880
3124
+ Thought the other is uh a priori trying
3125
+
3126
+ 00:46:17.400 --> 00:46:23.760
3127
+ to generate rules from the whole rule
3128
+
3129
+ 00:46:18.880 --> 00:46:27.480
3130
+ base and then um then verifying them um
3131
+
3132
+ 00:46:23.760 --> 00:46:31.000
3133
+ notably both of these require verifiers
3134
+
3135
+ 00:46:27.480 --> 00:46:33.839
3136
+ um and so in some recent work which uh I
3137
+
3138
+ 00:46:31.000 --> 00:46:36.040
3139
+ I hope will be on archive very soon uh
3140
+
3141
+ 00:46:33.839 --> 00:46:38.839
3142
+ we took a look at whether language
3143
+
3144
+ 00:46:36.040 --> 00:46:42.800
3145
+ models themselves can verify their own
3146
+
3147
+ 00:46:38.839 --> 00:46:46.079
3148
+ hypotheses and um so that removes the
3149
+
3150
+ 00:46:42.800 --> 00:46:48.000
3151
+ symbolic verifier here um by just asking
3152
+
3153
+ 00:46:46.079 --> 00:46:51.480
3154
+ the language model whether the output is
3155
+
3156
+ 00:46:48.000 --> 00:46:53.480
3157
+ correct or not and um we found that with
3158
+
3159
+ 00:46:51.480 --> 00:46:55.240
3160
+ very powerful language models like GPT-4
3161
+
3162
+ 00:46:53.480 --> 00:46:57.760
3163
+ you can actually do that as well so that
3164
+
3165
+ 00:46:55.240 --> 00:47:01.319
3166
+ removes the necessity to have
3167
+
3168
+ 00:46:57.760 --> 00:47:05.480
3169
+ a symbolic verifier in the loop as
3170
+
3171
+ 00:47:01.319 --> 00:47:08.200
3172
+ well cool um the reason why I wanted to
3173
+
3174
+ 00:47:05.480 --> 00:47:09.440
3175
+ introduce this is I don't know if like
3176
+
3177
+ 00:47:08.200 --> 00:47:12.359
3178
+ like it seems like all of these have
3179
+
3180
+ 00:47:09.440 --> 00:47:16.359
3181
+ been applied so far on kind of very toy
3182
+
3183
+ 00:47:12.359 --> 00:47:19.119
3184
+ examples like you know
3185
+
3186
+ 00:47:16.359 --> 00:47:22.240
3187
+ um like honestly I don't really care
3188
+
3189
+ 00:47:19.119 --> 00:47:25.920
3190
+ about whether I can play Tetris or um
3191
+
3192
+ 00:47:22.240 --> 00:47:27.920
3193
+ you know uh find the largest or smallest
3194
+
3195
+ 00:47:25.920 --> 00:47:30.880
3196
+ number within
3197
+
3198
+ 00:47:27.920 --> 00:47:33.720
3199
+ um you know a list or something like this
3200
+
3201
+ 00:47:30.880 --> 00:47:36.000
3202
+ but I think they have like really exciting
3203
+
3204
+ 00:47:33.720 --> 00:47:38.480
3205
+ possibilities for how we could extract
3206
+
3207
+ 00:47:36.000 --> 00:47:40.319
3208
+ more General patterns and like use these
3209
+
3210
+ 00:47:38.480 --> 00:47:41.720
3211
+ to improve language model based systems
3212
+
3213
+ 00:47:40.319 --> 00:47:43.599
3214
+ so I think it's a really exciting
3215
+
3216
+ 00:47:41.720 --> 00:47:48.000
3217
+ research
3218
+
3219
+ 00:47:43.599 --> 00:47:51.000
3220
+ Direction um cool any questions about
3221
+
3222
+ 00:47:48.000 --> 00:47:51.000
3223
+ this
3224
+
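[Editorial aside: the propose–verify–refine loop for rule induction described in this part of the lecture can be sketched roughly as below. This is a minimal sketch, not the papers' actual pipelines; `propose` and `apply_rule` are hypothetical stand-ins for LLM calls (or a symbolic evaluator), and the refinement signal is simplified to passing back the current best rule.]

```python
def induce_rule(examples, propose, apply_rule, n_rounds=3):
    """Propose candidate rules from input/output examples, keep the best
    verified one, and feed it back as context for refinement.
    `propose` and `apply_rule` are hypothetical stand-ins for LLM calls."""
    feedback = None
    best_rule, best_acc = None, -1.0
    for _ in range(n_rounds):
        for rule in propose(examples, feedback):
            # Verification step: apply the candidate rule to every input
            # and compare against the expected output.
            hits = [apply_rule(rule, x) == y for x, y in examples]
            acc = sum(hits) / len(examples)
            if acc > best_acc:
                best_rule, best_acc = rule, acc
        if best_acc == 1.0:
            break  # a rule explains all the examples; stop refining
        feedback = best_rule  # refine around the current best hypothesis
    return best_rule, best_acc
```

Used with toy rules like the lecture's examples ("the answer is always 1", "pick the smallest one", "pick the first element"), the verifier keeps whichever rule reproduces every observed output.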
3225
+ 00:47:54.240 --> 00:48:02.160
3226
+ yeah yeah so that's a good question
3227
+
3228
+ 00:47:58.160 --> 00:48:06.079
3229
+ um so I I think tool
3230
+
3231
+ 00:48:02.160 --> 00:48:09.359
3232
+ learning is maybe kind of a subset
3233
+
3234
+ 00:48:06.079 --> 00:48:12.319
3235
+ of this possibly like I feel like in
3236
+
3237
+ 00:48:09.359 --> 00:48:13.559
3238
+ tool learning you're learning functions
3239
+
3240
+ 00:48:12.319 --> 00:48:15.559
3241
+ that
3242
+
3243
+ 00:48:13.559 --> 00:48:17.559
3244
+ are I don't know if they are like good
3245
+
3246
+ 00:48:15.559 --> 00:48:19.680
3247
+ explanations of the data but at the very
3248
+
3249
+ 00:48:17.559 --> 00:48:23.119
3250
+ least they're like useful um they're
3251
+
3252
+ 00:48:19.680 --> 00:48:25.119
3253
+ useful rules for solving the task um so
3254
+
3255
+ 00:48:23.119 --> 00:48:26.880
3256
+ I I feel like they're approaching it
3257
+
3258
+ 00:48:25.119 --> 00:48:28.760
3259
+ from two different motivations but
3260
+
3261
+ 00:48:26.880 --> 00:48:30.960
3262
+ actually
3263
+
3264
+ 00:48:28.760 --> 00:48:33.559
3265
+ the methods that they're using are
3266
+
3267
+ 00:48:30.960 --> 00:48:36.240
3268
+ similar so like for example in our tool
3269
+
3270
+ 00:48:33.559 --> 00:48:38.559
3271
+ learning work Trove we generated like
3272
+
3273
+ 00:48:36.240 --> 00:48:42.240
3274
+ multiple options for tools and we kept
3275
+
3276
+ 00:48:38.559 --> 00:48:44.000
3277
+ the ones that had high self- consistency
3278
+
3279
+ 00:48:42.240 --> 00:48:46.800
3280
+ so that's kind of like the verifier step
3281
+
3282
+ 00:48:44.000 --> 00:48:49.040
3283
+ right and then um we threw away the ones
3284
+
3285
+ 00:48:46.800 --> 00:48:52.760
3286
+ that weren't useful so that helps make a
3287
+
3288
+ 00:48:49.040 --> 00:48:56.760
3289
+ concise rule set so
3290
+
3291
+ 00:48:52.760 --> 00:48:59.280
3292
+ yeah and then like could we use tools to
3293
+
3294
+ 00:48:56.760 --> 00:49:01.880
3295
+ [Music]
3296
+
3297
+ 00:48:59.280 --> 00:49:04.079
3298
+ attack kind of the more like conceptual
3299
+
3300
+ 00:49:01.880 --> 00:49:05.319
3301
+ reasoning stuff I I don't actually know
3302
+
3303
+ 00:49:04.079 --> 00:49:06.839
3304
+ uh the answer to that it's a good
3305
+
3306
+ 00:49:05.319 --> 00:49:10.599
3307
+ question
3308
+
3309
+ 00:49:06.839 --> 00:49:10.599
3310
+ yeah any any other
3311
+
3312
+ 00:49:11.240 --> 00:49:18.680
3313
+ things okay uh another final one that
3314
+
3315
+ 00:49:14.440 --> 00:49:21.680
3316
+ I'd like to introduce um this is really
3317
+
3318
+ 00:49:18.680 --> 00:49:23.839
3319
+ like I I really really like this paper
3320
+
3321
+ 00:49:21.680 --> 00:49:27.440
3322
+ um just from the point of view of its
3323
+
3324
+ 00:49:23.839 --> 00:49:29.880
3325
+ ambition and motivation um and
3326
+
3327
+ 00:49:27.440 --> 00:49:31.920
3328
+ the idea is that they want to learn
3329
+
3330
+ 00:49:29.880 --> 00:49:34.440
3331
+ differences between text
3332
+
3333
+ 00:49:31.920 --> 00:49:36.200
3334
+ Collections and why would you want to do
3335
+
3336
+ 00:49:34.440 --> 00:49:38.079
3337
+ this there's actually a ton of reasons
3338
+
3339
+ 00:49:36.200 --> 00:49:39.720
3340
+ why you would want to do this but the
3341
+
3342
+ 00:49:38.079 --> 00:49:44.720
3343
+ the best one that they give
3344
+
3345
+ 00:49:39.720 --> 00:49:44.720
3346
+ here is actually no sorry maybe I I
3347
+
3348
+ 00:49:46.440 --> 00:49:50.359
3349
+ didn't okay so this is a less
3350
+
3351
+ 00:49:48.480 --> 00:49:53.440
3352
+ interesting one the the more interesting
3353
+
3354
+ 00:49:50.359 --> 00:49:57.799
3355
+ one uh that they give in the paper is um
3356
+
3357
+ 00:49:53.440 --> 00:50:00.200
3358
+ examples of reports from patients who
3359
+
3360
+ 00:49:57.799 --> 00:50:04.200
3361
+ took an actual drug and took a
3362
+
3363
+ 00:50:00.200 --> 00:50:06.640
3364
+ placebo and so patients write about like
3365
+
3366
+ 00:50:04.200 --> 00:50:08.400
3367
+ their their symptoms or how they felt or
3368
+
3369
+ 00:50:06.640 --> 00:50:11.000
3370
+ they have checkups or things like that
3371
+
3372
+ 00:50:08.400 --> 00:50:13.839
3373
+ that are all written in natural language
3374
+
3375
+ 00:50:11.000 --> 00:50:16.319
3376
+ so one of the things that doctors try to
3377
+
3378
+ 00:50:13.839 --> 00:50:18.000
3379
+ do is they try to look at all of these
3380
+
3381
+ 00:50:16.319 --> 00:50:20.240
3382
+ reports and figure out if there's any
3383
+
3384
+ 00:50:18.000 --> 00:50:21.880
3385
+ like consistent difference between
3386
+
3387
+ 00:50:20.240 --> 00:50:25.079
3388
+ people who took a placebo and people who
3389
+
3390
+ 00:50:21.880 --> 00:50:27.359
3391
+ took an actual um actual drug and this
3392
+
3393
+ 00:50:25.079 --> 00:50:31.079
3394
+ is like a major part of medical trials
3395
+
3396
+ 00:50:27.359 --> 00:50:32.960
3397
+ right um and so the idea is like given
3398
+
3399
+ 00:50:31.079 --> 00:50:35.000
3400
+ all of the texts of people who took the
3401
+
3402
+ 00:50:32.960 --> 00:50:36.599
3403
+ drug given all the texts of people who
3404
+
3405
+ 00:50:35.000 --> 00:50:38.319
3406
+ of people who took the placebo could you
3407
+
3408
+ 00:50:36.599 --> 00:50:40.960
3409
+ automatically extract differences
3410
+
3411
+ 00:50:38.319 --> 00:50:45.000
3412
+ between them in some way and so the
3413
+
3414
+ 00:50:40.960 --> 00:50:47.760
3415
+ methodology that they use for this is um
3416
+
3417
+ 00:50:45.000 --> 00:50:51.359
3418
+ they have like group a uh the Manchester
3419
+
3420
+ 00:50:47.760 --> 00:50:53.240
3421
+ United soccer Squad welcomes Rising Star
3422
+
3423
+ 00:50:51.359 --> 00:50:54.599
3424
+ as Serena Williams joins the UCLA
3425
+
3426
+ 00:50:53.240 --> 00:50:56.920
3427
+ women's tennis roster and then you have
3428
+
3429
+ 00:50:54.599 --> 00:51:00.200
3430
+ like 20 more examples and then here you
3431
+
3432
+ 00:50:56.920 --> 00:51:03.480
3433
+ have Egypt's President uh at the African
3434
+
3435
+ 00:51:00.200 --> 00:51:07.200
3436
+ Union Summit um and other things
3437
+
3438
+ 00:51:03.480 --> 00:51:12.000
3439
+ like that in 20 examples uh not seen
3440
+
3441
+ 00:51:07.200 --> 00:51:14.359
3442
+ here and so then if I asked a question
3443
+
3444
+ 00:51:12.000 --> 00:51:16.359
3445
+ um the original data set includes news
3446
+
3447
+ 00:51:14.359 --> 00:51:18.680
3448
+ summaries the two corpora are generated
3449
+
3450
+ 00:51:16.359 --> 00:51:21.240
3451
+ based on when they were published uh
3452
+
3453
+ 00:51:18.680 --> 00:51:24.359
3454
+ samples from group a include news from
3455
+
3456
+ 00:51:21.240 --> 00:51:27.480
3457
+ 2007 while samples from Group B include
3458
+
3459
+ 00:51:24.359 --> 00:51:29.000
3460
+ news from 2008 I'm a journalist trying to
3461
+
3462
+ 00:51:27.480 --> 00:51:31.240
3463
+ understand what topics are popular
3464
+
3465
+ 00:51:29.000 --> 00:51:33.440
3466
+ across years please write a list of
3467
+
3468
+ 00:51:31.240 --> 00:51:35.280
3469
+ hypotheses separated by bullet points of
3470
+
3471
+ 00:51:33.440 --> 00:51:39.920
3472
+ how data points from group a differ from
3473
+
3474
+ 00:51:35.280 --> 00:51:42.400
3475
+ those of group b um and then formatting
3476
+
3477
+ 00:51:39.920 --> 00:51:44.160
3478
+ information
3479
+
3480
+ 00:51:42.400 --> 00:51:46.960
3481
+ um
3482
+
3483
+ 00:51:44.160 --> 00:51:49.680
3484
+ and so based on the two sentence groups
3485
+
3486
+ 00:51:46.960 --> 00:51:50.559
3487
+ A and B from the above, more sentences in
3488
+
3489
+ 00:51:49.680 --> 00:51:53.400
3490
+ group
3491
+
3492
+ 00:51:50.559 --> 00:51:55.240
3493
+ a mention a sports team or mention about
3494
+
3495
+ 00:51:53.400 --> 00:51:57.319
3496
+ academic relations or things like that
3497
+
3498
+ 00:51:55.240 --> 00:51:58.599
3499
+ and so what this allows you to do is it
3500
+
3501
+ 00:51:57.319 --> 00:52:00.319
3502
+ allows you to come up with a whole bunch
3503
+
3504
+ 00:51:58.599 --> 00:52:01.400
3505
+ of hypotheses about why one might be
3506
+
3507
+ 00:52:00.319 --> 00:52:04.920
3508
+ better than the
3509
+
3510
+ 00:52:01.400 --> 00:52:08.920
3511
+ other so the problem with this though is
3512
+
3513
+ 00:52:04.920 --> 00:52:10.880
3514
+ like because of language model you know
3515
+
3516
+ 00:52:08.920 --> 00:52:13.440
3517
+ limits number one they might just
3518
+
3519
+ 00:52:10.880 --> 00:52:17.119
3520
+ hallucinate things and be totally wrong
3521
+
3522
+ 00:52:13.440 --> 00:52:19.680
3523
+ um number two
3524
+
3525
+ 00:52:17.119 --> 00:52:21.040
3526
+ the size of the context so that they can
3527
+
3528
+ 00:52:19.680 --> 00:52:23.960
3529
+ take into account when making this
3530
+
3531
+ 00:52:21.040 --> 00:52:26.720
3532
+ decision is relatively small so the next
3533
+
3534
+ 00:52:23.960 --> 00:52:29.280
3535
+ thing that they do is then they have a a
3536
+
3537
+ 00:52:26.720 --> 00:52:32.119
3538
+ much larger Corpus of
3539
+
3540
+ 00:52:29.280 --> 00:52:33.200
3541
+ text um with like a thousand examples or
3542
+
3543
+ 00:52:32.119 --> 00:52:36.640
3544
+ something like
3545
+
3546
+ 00:52:33.200 --> 00:52:40.240
3547
+ this and then they treat each of these
3548
+
3549
+ 00:52:36.640 --> 00:52:42.680
3550
+ hypotheses as a
3551
+
3552
+ 00:52:40.240 --> 00:52:44.559
3553
+ classifier and then they go through all
3554
+
3555
+ 00:52:42.680 --> 00:52:47.480
3556
+ of the examples from Corpus one which is
3557
+
3558
+ 00:52:44.559 --> 00:52:50.480
3559
+ like maybe year 2007 and then
3560
+
3561
+ 00:52:47.480 --> 00:52:52.079
3562
+ Corpus 2 which is year 2008 and they ask
3563
+
3564
+ 00:52:50.480 --> 00:52:55.880
3565
+ the language model with respect to all
3566
+
3567
+ 00:52:52.079 --> 00:52:58.119
3568
+ of them um does this sentence mention a
3569
+
3570
+ 00:52:55.880 --> 00:53:01.400
3571
+ sports team recruiting a new
3572
+
3573
+ 00:52:58.119 --> 00:53:04.839
3574
+ member um and so you get a
3575
+
3576
+ 00:53:01.400 --> 00:53:04.839
3577
+ classification for each one of
3578
+
3579
+ 00:53:12.359 --> 00:53:17.440
3580
+ these and you get a certain number of
3581
+
3582
+ 00:53:14.520 --> 00:53:18.799
3583
+ ones and zeros and so once you have a
3584
+
3585
+ 00:53:17.440 --> 00:53:20.839
3586
+ certain number of ones and zeros what's
3587
+
3588
+ 00:53:18.799 --> 00:53:24.079
3589
+ the next thing that you would do
3590
+
3591
+ 00:53:20.839 --> 00:53:24.079
3592
+ here any
3593
+
3594
+ 00:53:24.880 --> 00:53:30.599
3595
+ ideas how do you tell there's like
3596
+
3597
+ 00:53:27.359 --> 00:53:30.599
3598
+ actually a difference between these
3599
+
3600
+ 00:53:36.520 --> 00:53:43.319
3601
+ two between two sets
3602
+
3603
+ 00:53:39.319 --> 00:53:45.920
3604
+ of numbers like one and
3605
+
3606
+ 00:53:43.319 --> 00:53:48.680
3607
+ zero a hint is you probably had to do
3608
+
3609
+ 00:53:45.920 --> 00:53:48.680
3610
+ this for assignment
3611
+
3612
+ 00:53:53.720 --> 00:53:58.520
3613
+ two yeah
3614
+
3615
+ 00:53:56.799 --> 00:54:01.200
3616
+ yeah exactly you you do a significance
3617
+
3618
+ 00:53:58.520 --> 00:54:04.200
3619
+ test between the two and so um what you
3620
+
3621
+ 00:54:01.200 --> 00:54:06.440
3622
+ can then do is you have lots of
3623
+
3624
+ 00:54:04.200 --> 00:54:08.839
3625
+ hypotheses you have lots of significance
3626
+
3627
+ 00:54:06.440 --> 00:54:11.040
3628
+ values you can order them by the
3629
+
3630
+ 00:54:08.839 --> 00:54:13.839
3631
+ significance value and say the most
3632
+
3633
+ 00:54:11.040 --> 00:54:17.559
3634
+ significance or the the difference with
3635
+
3636
+ 00:54:13.839 --> 00:54:19.160
3637
+ the like lowest P value between them is
3638
+
3639
+ 00:54:17.559 --> 00:54:20.480
3640
+ the one that's most likely to be an
3641
+
3642
+ 00:54:19.160 --> 00:54:26.520
3643
+ actual difference between the two and
3644
+
3645
+ 00:54:20.480 --> 00:54:29.079
3646
+ you can find um like uh the news in 2007
3647
+
3648
+ 00:54:26.520 --> 00:54:32.520
3649
+ indeed tended to talk about X more than
3650
+
3651
+ 00:54:29.079 --> 00:54:34.559
3652
+ uh than other things so I uh I actually
3653
+
3654
+ 00:54:32.520 --> 00:54:36.079
3655
+ used this in one of my uh one of my
3656
+
3657
+ 00:54:34.559 --> 00:54:39.520
3658
+ unrelated projects where I wanted to
3659
+
3660
+ 00:54:36.079 --> 00:54:42.680
3661
+ find the difference between um language
3662
+
3663
+ 00:54:39.520 --> 00:54:45.640
3664
+ models sentences that language models
3665
+
3666
+ 00:54:42.680 --> 00:54:47.839
3667
+ aligned well with human brain signals and
3668
+
3669
+ 00:54:45.640 --> 00:54:49.760
3670
+ sentences where language models didn't
3671
+
3672
+ 00:54:47.839 --> 00:54:52.559
3673
+ align well with human brain signals so
3674
+
3675
+ 00:54:49.760 --> 00:54:53.799
3676
+ we like we had some data of human brain
3677
+
3678
+ 00:54:52.559 --> 00:54:56.880
3679
+ signals and we had a measure of
3680
+
3681
+ 00:54:53.799 --> 00:54:58.240
3682
+ alignment um on each sentence and it
3683
+
3684
+ 00:54:56.880 --> 00:55:01.799
3685
+ actually found some pretty interesting
3686
+
3687
+ 00:54:58.240 --> 00:55:03.359
3688
+ hypotheses like um uh language models
3689
+
3690
+ 00:55:01.799 --> 00:55:06.200
3691
+ tend to align less well with human brain
3692
+
3693
+ 00:55:03.359 --> 00:55:07.319
3694
+ signals on metaphorical language or a
3695
+
3696
+ 00:55:06.200 --> 00:55:10.599
3697
+ language that had to do with
3698
+
3699
+ 00:55:07.319 --> 00:55:11.799
3700
+ interpersonal relations or um or other
3701
+
3702
+ 00:55:10.599 --> 00:55:15.200
3703
+ things like that and then we actually
3704
+
3705
+ 00:55:11.799 --> 00:55:17.559
3706
+ went in and pursued um you know these to
3707
+
3708
+ 00:55:15.200 --> 00:55:21.000
3709
+ examine them further and uh we didn't
3710
+
3711
+ 00:55:17.559 --> 00:55:22.680
3712
+ entirely rely on this um you know like
3713
+
3714
+ 00:55:21.000 --> 00:55:25.160
3715
+ significance test because I didn't quite
3716
+
3717
+ 00:55:22.680 --> 00:55:26.880
3718
+ trust language models that much to like
3719
+
3720
+ 00:55:25.160 --> 00:55:28.559
3721
+ shape my entire
3722
+
3723
+ 00:55:26.880 --> 00:55:29.880
3724
+ research agenda around them but we came
3725
+
3726
+ 00:55:28.559 --> 00:55:31.720
3727
+ up with other ways to measure it and
3728
+
3729
+ 00:55:29.880 --> 00:55:35.000
3730
+ some of the things checked out some of
3731
+
3732
+ 00:55:31.720 --> 00:55:36.799
3733
+ the things didn't check out so um again
3734
+
3735
+ 00:55:35.000 --> 00:55:38.760
3736
+ I think this general direction of like
3737
+
3738
+ 00:55:36.799 --> 00:55:41.720
3739
+ how can language models help us answer
3740
+
3741
+ 00:55:38.760 --> 00:55:43.760
3742
+ you know uh complex research questions
3743
+
3744
+ 00:55:41.720 --> 00:55:45.480
3745
+ that we wouldn't be able to easily or
3746
+
3747
+ 00:55:43.760 --> 00:55:47.960
3748
+ very efficiently that would normally
3749
+
3750
+ 00:55:45.480 --> 00:55:52.200
3751
+ require humans annotating lots of data
3752
+
3753
+ 00:55:47.960 --> 00:55:56.839
3754
+ is um an interesting topic as
3755
+
3756
+ 00:55:52.200 --> 00:55:56.839
3757
+ well cool um
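[Editorial aside: the last step of the corpus-difference pipeline discussed above — turning each hypothesis's per-document ones and zeros into a ranked list via a significance test — can be sketched as below. This is a minimal sketch assuming a pooled two-proportion z-test; the hypothesis names and 0/1 judgments here are made-up placeholders (in the described work those judgments come from an LLM acting as the classifier).]

```python
import math

def two_proportion_p(ones_a, n_a, ones_b, n_b):
    """Two-sided p-value of a two-proportion z-test with pooled SE."""
    p_a, p_b = ones_a / n_a, ones_b / n_b
    pooled = (ones_a + ones_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # identical constant judgments: no evidence of a difference
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def rank_hypotheses(scores_a, scores_b):
    """scores_*: {hypothesis: list of 0/1 judgments, one per document}.
    Returns hypotheses ordered by p-value, most significant first."""
    ranked = []
    for h in scores_a:
        a, b = scores_a[h], scores_b[h]
        ranked.append((two_proportion_p(sum(a), len(a), sum(b), len(b)), h))
    return [h for _, h in sorted(ranked)]
```

The hypothesis whose judgments differ most between the two corpora (lowest p-value) surfaces first, matching the "order them by the significance value" step described in the lecture.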
CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics.mp4 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2ba1e151ead63c27e40f5b4e74afe57d221e1c0234298fceb10208d60fa6783
3
+ size 91209687
CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/metadata.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "url": "https://www.youtube.com/watch?v=7Sse6P5xbEc",
3
+ "title": "CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics"
4
+ }