1
00:00:00,040 --> 00:00:03,880
so today I'm going to talk about
2
00:00:01,319 --> 00:00:06,680
retrieval and retrieval augmented
3
00:00:03,880 --> 00:00:09,040
generation so if we look at our standard
4
00:00:06,680 --> 00:00:10,880
prompting flow normally what we do is we
5
00:00:09,040 --> 00:00:14,160
combine together a prompt template with
6
00:00:10,880 --> 00:00:16,600
an input so if we say please answer this
7
00:00:14,160 --> 00:00:18,720
question I think Vin Diesel has been a
8
00:00:16,600 --> 00:00:21,000
voice actor for several characters in TV
9
00:00:18,720 --> 00:00:24,000
series do you know what their names
10
00:00:21,000 --> 00:00:25,400
are we could get a response from a
11
00:00:24,000 --> 00:00:26,840
language model but there are several
12
00:00:25,400 --> 00:00:30,840
problems with
13
00:00:26,840 --> 00:00:33,680
this the first is accuracy issues
14
00:00:30,840 --> 00:00:36,160
the models generally have a knowledge
15
00:00:33,680 --> 00:00:38,879
cut off so the parameters are usually
16
00:00:36,160 --> 00:00:41,120
only updated to a particular time so for
17
00:00:38,879 --> 00:00:43,200
example if a new Vin Diesel TV series
18
00:00:41,120 --> 00:00:44,960
comes out then the model that was
19
00:00:43,200 --> 00:00:47,440
trained up to a certain time point won't
20
00:00:44,960 --> 00:00:51,000
be able to know anything about
21
00:00:47,440 --> 00:00:53,600
it there's also issues of private data
22
00:00:51,000 --> 00:00:55,320
so data stored in private text or data
23
00:00:53,600 --> 00:00:57,840
repositories is not suitable for
24
00:00:55,320 --> 00:01:02,600
training for a number of reasons number
25
00:00:57,840 --> 00:01:05,199
one it's not available to particular
26
00:01:02,600 --> 00:01:07,799
language model training providers such
27
00:01:05,199 --> 00:01:10,720
as you know OpenAI or Google or anybody
28
00:01:07,799 --> 00:01:13,840
else like this the second thing is
29
00:01:10,720 --> 00:01:16,799
access control issues so even if you're
30
00:01:13,840 --> 00:01:17,840
within an organization that has lots of
31
00:01:16,799 --> 00:01:20,799
private data and you can train a
32
00:01:17,840 --> 00:01:22,600
language model on that certain people in
33
00:01:20,799 --> 00:01:24,200
the organization may have access to
34
00:01:22,600 --> 00:01:27,640
certain varieties of data and other
35
00:01:24,200 --> 00:01:29,400
people may not so it's not solely
36
00:01:27,640 --> 00:01:31,520
an issue of third party providers it's
37
00:01:29,400 --> 00:01:33,840
an issue of organization-level access
38
00:01:31,520 --> 00:01:36,159
control in
39
00:01:33,840 --> 00:01:38,920
general in addition there are learning
40
00:01:36,159 --> 00:01:40,320
failures so even for data that the model
41
00:01:38,920 --> 00:01:42,640
was trained on it might not be
42
00:01:40,320 --> 00:01:44,399
sufficient to get the right answer and
43
00:01:42,640 --> 00:01:47,799
this is particularly the case for very
44
00:01:44,399 --> 00:01:52,320
very large uh training data sets and
45
00:01:47,799 --> 00:01:53,920
models that are you know modestly sized
46
00:01:52,320 --> 00:01:55,880
because the models very often won't be
47
00:01:53,920 --> 00:01:58,360
able to learn from a single look at a
48
00:01:55,880 --> 00:02:02,039
particular fact or whatever else like
49
00:01:58,360 --> 00:02:02,039
this especially if it appeared early in
50
00:02:02,159 --> 00:02:08,160
training another thing is even if the
51
00:02:05,240 --> 00:02:10,599
answer is correct it might not be
52
00:02:08,160 --> 00:02:13,440
verifiable so you might want to be very
53
00:02:10,599 --> 00:02:15,000
sure that the model is not making any
54
00:02:13,440 --> 00:02:17,640
accuracy
55
00:02:15,000 --> 00:02:19,040
mistakes and so in order to do that very
56
00:02:17,640 --> 00:02:21,879
often a human will want to go back to
57
00:02:19,040 --> 00:02:21,879
the source of the
58
00:02:22,200 --> 00:02:27,319
data so to solve this there's a method
59
00:02:25,480 --> 00:02:29,200
called retrieval augmented generation
60
00:02:27,319 --> 00:02:30,280
which will also be the topic of our
61
00:02:29,200 --> 00:02:32,599
second assignment
62
00:02:30,280 --> 00:02:35,680
here and the way it works is you
63
00:02:32,599 --> 00:02:38,319
retrieve relevant passages
64
00:02:35,680 --> 00:02:40,680
efficiently ones that kind of entail the
65
00:02:38,319 --> 00:02:42,480
answer to a question and then read the
66
00:02:40,680 --> 00:02:46,080
passages to answer the
67
00:02:42,480 --> 00:02:48,599
query so we have documents like this we
68
00:02:46,080 --> 00:02:52,360
have a query based on the query we perform
69
00:02:48,599 --> 00:02:55,360
retrieval we get a whole bunch of uh
70
00:02:52,360 --> 00:02:57,560
passages we do reading and then we get
71
00:02:55,360 --> 00:02:57,560
the
72
00:02:58,280 --> 00:03:04,440
answer so this is in fact implemented in
73
00:03:01,720 --> 00:03:07,599
many or even most uh language modeling
74
00:03:04,440 --> 00:03:09,840
providers including OpenAI so to give
75
00:03:07,599 --> 00:03:11,480
an example I asked the question that I
76
00:03:09,840 --> 00:03:12,879
just said about Vin Diesel's voice
77
00:03:11,480 --> 00:03:16,599
acting in TV
78
00:03:12,879 --> 00:03:19,760
series and ChatGPT gave me an answer
79
00:03:16,599 --> 00:03:22,440
and you can see that ChatGPT's answer
80
00:03:19,760 --> 00:03:24,720
includes several places with quotes um
81
00:03:22,440 --> 00:03:28,159
the little blue quotes
82
00:03:24,720 --> 00:03:30,760
there and if you click on the quote it
83
00:03:28,159 --> 00:03:33,120
tells you where the information source
84
00:03:30,760 --> 00:03:35,000
came from and so this one says Behind
85
00:03:33,120 --> 00:03:37,760
The Voice Actors Vin
86
00:03:35,000 --> 00:03:39,920
Diesel and Behind The Voice Actors TV
87
00:03:37,760 --> 00:03:42,959
shows Big Mouth Vin
88
00:03:39,920 --> 00:03:45,640
Diesel now if we look
89
00:03:42,959 --> 00:03:48,640
closer into this answer we'll see that
90
00:03:45,640 --> 00:03:49,959
it's not perfect even though it is uh
91
00:03:48,640 --> 00:03:52,519
performing retrieval augmented
92
00:03:49,959 --> 00:03:54,840
generation so for example I only asked
93
00:03:52,519 --> 00:03:57,200
about TV series but it's giving me lots
94
00:03:54,840 --> 00:03:59,680
of things about movies where it says
95
00:03:57,200 --> 00:04:01,319
Groot in Guardians of the Galaxy volume
96
00:03:59,680 --> 00:04:04,480
3 2023
97
00:04:01,319 --> 00:04:07,200
movie and in fact uh Vin Diesel was not
98
00:04:04,480 --> 00:04:10,920
even voicing a character named Groot here
99
00:04:07,200 --> 00:04:13,480
so that's definitely an accuracy
100
00:04:10,920 --> 00:04:15,079
mistake and separately there's a place
101
00:04:13,480 --> 00:04:17,639
where it says additionally though the
102
00:04:15,079 --> 00:04:19,959
website for Big Mouth lists Vin Diesel it
103
00:04:17,639 --> 00:04:22,040
appears to be a misunderstanding or error
104
00:04:19,959 --> 00:04:25,360
as Nick Kroll is credited as the voice
105
00:04:22,040 --> 00:04:27,800
of Vin Diesel in that show so there
106
00:04:25,360 --> 00:04:30,039
actually Nick Kroll was acting as Vin
107
00:04:27,800 --> 00:04:32,800
Diesel but that's kind of a
108
00:04:30,039 --> 00:04:34,600
misunderstanding of the reader model but
109
00:04:32,800 --> 00:04:36,600
anyway you can get the general idea here
110
00:04:34,600 --> 00:04:40,199
you can also see that it's not perfect
111
00:04:36,600 --> 00:04:42,720
even for very strong models like GPT-4
112
00:04:40,199 --> 00:04:44,800
so now I'd like to go into the actual
113
00:04:42,720 --> 00:04:46,759
methodology that we use for this uh we
114
00:04:44,800 --> 00:04:50,360
have retrieval
115
00:04:46,759 --> 00:04:53,160
methods and for the retrieval methods we
116
00:04:50,360 --> 00:04:55,160
have uh quite a few different options
117
00:04:53,160 --> 00:04:57,960
I'm going to go through each one of them
118
00:04:55,160 --> 00:05:00,960
at a time so sparse retrieval document
119
00:04:57,960 --> 00:05:04,240
level dense retrieval token-level dense
120
00:05:00,960 --> 00:05:08,039
retrieval cross-encoder reranking and
121
00:05:04,240 --> 00:05:09,320
black-box retrieval so black-box retrieval
122
00:05:08,039 --> 00:05:11,280
I'm not really going to go into it a
123
00:05:09,320 --> 00:05:16,000
whole lot basically this is just asking
124
00:05:11,280 --> 00:05:17,560
a black-box search engine to retrieve
125
00:05:16,000 --> 00:05:20,000
you know the relevant context and
126
00:05:17,560 --> 00:05:22,560
getting the top several results
127
00:05:20,000 --> 00:05:24,039
nonetheless this is a pretty you know
128
00:05:22,560 --> 00:05:26,800
reasonable method to do it if you want
129
00:05:24,039 --> 00:05:29,080
to do search over you know lots of data
130
00:05:26,800 --> 00:05:32,759
that exists on the internet already and
131
00:05:29,080 --> 00:05:36,600
that in fact is what ChatGPT does it looks
132
00:05:32,759 --> 00:05:39,240
things up on Bing by generating a query to
133
00:05:36,600 --> 00:05:41,560
Bing so anyway let's go into the actual
134
00:05:39,240 --> 00:05:43,840
methods that you develop and control
135
00:05:41,560 --> 00:05:46,600
yourself so the first one is sparse
136
00:05:43,840 --> 00:05:48,479
retrieval and the way this works is you
137
00:05:46,600 --> 00:05:50,440
express the query and document as a
138
00:05:48,479 --> 00:05:53,680
sparse word frequency vector usually
139
00:05:50,440 --> 00:05:58,759
normalized by length and so if I ask a
140
00:05:53,680 --> 00:06:01,720
query what is NLP we get a vector where
141
00:05:58,759 --> 00:06:04,120
each row of the vector corresponds to a
142
00:06:01,720 --> 00:06:07,919
different
143
00:06:04,120 --> 00:06:12,960
token and we ask what is
144
00:06:07,919 --> 00:06:16,360
NLP and so uh the places for what NLP
145
00:06:12,960 --> 00:06:18,199
and is will all have a non-zero value
146
00:06:16,360 --> 00:06:20,199
and everything else will have a zero
147
00:06:18,199 --> 00:06:21,720
value and we also normalize by the
148
00:06:20,199 --> 00:06:24,120
length of the vector so we get something
149
00:06:21,720 --> 00:06:24,120
like
150
00:06:24,840 --> 00:06:28,440
0.33 0.33 0.33 then we have a whole bunch of
151
00:06:26,759 --> 00:06:30,720
documents so the first document says
152
00:06:28,440 --> 00:06:31,759
what is life candy is life someone really
153
00:06:30,720 --> 00:06:33,960
likes
154
00:06:31,759 --> 00:06:36,000
candy we also have another one that says
155
00:06:33,960 --> 00:06:38,360
NLP is an acronym for natural language
156
00:06:36,000 --> 00:06:39,479
processing so this is a pretty good uh
157
00:06:38,360 --> 00:06:42,479
you
158
00:06:39,479 --> 00:06:44,840
know answer to our
159
00:06:42,479 --> 00:06:48,039
question then we also have I like to do
160
00:06:44,840 --> 00:06:49,360
good research on NLP which is you know a
161
00:06:48,039 --> 00:06:51,360
nice sentiment but not a very good
162
00:06:49,360 --> 00:06:54,400
answer to our question I
163
00:06:51,360 --> 00:06:59,479
guess so if we look at the vectors here
164
00:06:54,400 --> 00:07:03,280
we have uh what and candy and is have uh
165
00:06:59,479 --> 00:07:07,120
a fairly high
166
00:07:03,280 --> 00:07:12,520
score and we have here NLP and is have a
167
00:07:07,120 --> 00:07:16,479
high score and NLP has a nonzero
168
00:07:12,520 --> 00:07:18,400
score so based on this we find the
169
00:07:16,479 --> 00:07:20,560
document with the highest
170
00:07:18,400 --> 00:07:22,039
inner product or cosine similarity in
171
00:07:20,560 --> 00:07:24,360
the document
172
00:07:22,039 --> 00:07:27,000
collection and so if we take the inner
173
00:07:24,360 --> 00:07:28,759
product between these vectors we
174
00:07:27,000 --> 00:07:31,280
actually see that the first one got the
175
00:07:28,759 --> 00:07:34,479
highest score because of its
176
00:07:31,280 --> 00:07:37,440
relatively high values for the words
177
00:07:34,479 --> 00:07:37,440
what and
178
00:07:38,160 --> 00:07:43,759
is
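As a concrete illustration of the scoring just described, here is a minimal Python sketch of length-normalized term-frequency vectors compared by inner product; the toy query and documents are the ones from the example above, and the tokenization is deliberately naive:

```python
from collections import Counter

def tf_vector(text):
    # Length-normalized term-frequency vector: count each token,
    # then divide by the total number of tokens in the text.
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return {tok: c / total for tok, c in counts.items()}

def score(query_vec, doc_vec):
    # Sparse inner product over the tokens shared by query and document.
    return sum(w * doc_vec.get(tok, 0.0) for tok, w in query_vec.items())

docs = [
    "what is life candy is life someone really likes candy",
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
]
query = "what is NLP"

q_vec = tf_vector(query)
ranked = sorted(docs, key=lambda d: score(q_vec, tf_vector(d)), reverse=True)
print(ranked[0])  # the first document wins on raw TF because of "what" and "is"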
179
00:07:40,199 --> 00:07:46,720
so as you can see common words like what
180
00:07:43,759 --> 00:07:49,000
and is can get a high score kind of
181
00:07:46,720 --> 00:07:51,800
regardless of whether a document is very
182
00:07:49,000 --> 00:07:53,919
relevant and so one way we can fix this
183
00:07:51,800 --> 00:07:55,960
is through something called term
184
00:07:53,919 --> 00:07:59,479
weighting and the way that term weighting
185
00:07:55,960 --> 00:08:02,680
works is in addition to having this
186
00:07:59,479 --> 00:08:04,599
vector that
187
00:08:02,680 --> 00:08:07,680
calculates
188
00:08:04,599 --> 00:08:10,680
the frequency within a particular
189
00:08:07,680 --> 00:08:13,639
document we also have an upweighting
190
00:08:10,680 --> 00:08:15,599
term that gives higher weight to low
191
00:08:13,639 --> 00:08:18,199
frequency words because low frequency
192
00:08:15,599 --> 00:08:20,280
words like NLP tend to be more
193
00:08:18,199 --> 00:08:22,759
informative about whether the document
194
00:08:20,280 --> 00:08:25,240
is relevant than high frequency words
195
00:08:22,759 --> 00:08:27,080
like what and is because these high
196
00:08:25,240 --> 00:08:31,320
frequency words like what and is could
197
00:08:27,080 --> 00:08:34,279
happen kind of regardless of whether
198
00:08:31,320 --> 00:08:36,680
the document is relevant to the
199
00:08:34,279 --> 00:08:41,800
particular terms the person is asking
200
00:08:36,680 --> 00:08:44,000
about so one well used and easy to
201
00:08:41,800 --> 00:08:46,560
understand version of this is TF-IDF
202
00:08:44,000 --> 00:08:48,839
or term frequency inverse document
203
00:08:46,560 --> 00:08:51,200
frequency so the way we define term
204
00:08:48,839 --> 00:08:52,959
frequency is exactly what I talked about
205
00:08:51,200 --> 00:08:56,959
before so it's basically the frequency
206
00:08:52,959 --> 00:08:59,839
of the term t in the document d
207
00:08:56,959 --> 00:09:01,640
normalized by the total term frequency
208
00:08:59,839 --> 00:09:03,680
within the document so that's what
209
00:09:01,640 --> 00:09:06,800
I already showed in the previous
210
00:09:03,680 --> 00:09:09,360
slide and then inverse document frequency is a
211
00:09:06,800 --> 00:09:13,760
little bit more involved but basically
212
00:09:09,360 --> 00:09:15,760
the way this works is we have log of the
213
00:09:13,760 --> 00:09:18,160
total number of documents in the
214
00:09:15,760 --> 00:09:24,040
collection divided
215
00:09:18,160 --> 00:09:26,760
by the number of documents that this
216
00:09:24,040 --> 00:09:30,279
term appears in
217
00:09:26,760 --> 00:09:33,360
and so if a term appears in many
218
00:09:30,279 --> 00:09:36,120
documents it will
219
00:09:33,360 --> 00:09:39,240
have a low IDF score uh one that's close
220
00:09:36,120 --> 00:09:41,519
to zero but if it rarely appears it will
221
00:09:39,240 --> 00:09:44,120
have a high IDF score so basically this
222
00:09:41,519 --> 00:09:45,040
is upweighting our infrequent terms and
223
00:09:44,120 --> 00:09:47,560
then for
224
00:09:45,040 --> 00:09:51,320
TF-IDF we basically multiply these two
225
00:09:47,560 --> 00:09:53,120
terms together and we upweight the low
226
00:09:51,320 --> 00:09:55,640
frequency words
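For reference, the quantities just described can be written out as follows (a minimal LaTeX rendering of the standard definitions; the notation here is mine, not copied from the slide):

```latex
\mathrm{TF}(t, d) = \frac{\mathrm{count}(t, d)}{\sum_{t'} \mathrm{count}(t', d)}, \qquad
\mathrm{IDF}(t) = \log \frac{|D|}{|\{d \in D : t \in d\}|}, \qquad
\mathrm{TFIDF}(t, d) = \mathrm{TF}(t, d) \cdot \mathrm{IDF}(t)
```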
227
00:09:53,120 --> 00:10:00,519
there's another version of this
228
00:09:55,640 --> 00:10:03,640
called BM25 that is widely used
229
00:10:00,519 --> 00:10:05,800
um this is more involved so I'm not
230
00:10:03,640 --> 00:10:08,120
going to go into all of the details but
231
00:10:05,800 --> 00:10:12,399
basically if you remember back to the
232
00:10:08,120 --> 00:10:13,720
lecture on count-based language models
233
00:10:12,399 --> 00:10:14,880
there were a bunch of smoothing
234
00:10:13,720 --> 00:10:18,839
techniques for these count-based
235
00:10:14,880 --> 00:10:21,839
language models and this uses uh kind of
236
00:10:18,839 --> 00:10:25,839
a multiplicative additive smoothing
237
00:10:21,839 --> 00:10:27,160
term to upweight things instead of using
238
00:10:25,839 --> 00:10:30,200
the term
239
00:10:27,160 --> 00:10:33,399
frequency and uh the actual formula is
240
00:10:30,200 --> 00:10:37,240
here k1 and b are kind of
241
00:10:33,399 --> 00:10:39,360
hyperparameters and avgdl is
242
00:10:37,240 --> 00:10:40,639
average document length the details of
243
00:10:39,360 --> 00:10:42,120
this are not really important but
244
00:10:40,639 --> 00:10:43,800
basically what you should know is that
245
00:10:42,120 --> 00:10:45,639
this is doing some smoothing on the term
246
00:10:43,800 --> 00:10:48,240
frequencies and you can look in more
247
00:10:45,639 --> 00:10:48,240
detail if you're interested
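For those who want the details anyway, this is the usual form of the BM25 scoring function (one common variant; the exact IDF definition and constants on the slide may differ slightly):

```latex
\mathrm{BM25}(q, d) = \sum_{t \in q} \mathrm{IDF}(t) \cdot
\frac{\mathrm{TF}(t, d)\,(k_1 + 1)}
     {\mathrm{TF}(t, d) + k_1\!\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}
```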
248
00:10:49,160 --> 00:10:54,920
so now that we have this sort
249
00:10:52,880 --> 00:10:57,959
of term
250
00:10:54,920 --> 00:11:00,320
based sparse vector we would like to
251
00:10:57,959 --> 00:11:03,320
use this to look up relevant documents
252
00:11:00,320 --> 00:11:06,000
in a collection very quickly because you
253
00:11:03,320 --> 00:11:08,000
know we might have a collection that's
254
00:11:06,000 --> 00:11:09,720
extremely large like as large as the
255
00:11:08,000 --> 00:11:12,320
entire internet like what Google is
256
00:11:09,720 --> 00:11:14,160
doing when it searches and so in order
257
00:11:12,320 --> 00:11:16,240
to solve this we need a data structure
258
00:11:14,160 --> 00:11:17,279
that allows for efficient sparse lookup
259
00:11:16,240 --> 00:11:19,480
of
260
00:11:17,279 --> 00:11:23,720
vectors and so we have all of these
261
00:11:19,480 --> 00:11:27,279
sparse vectors like this
262
00:11:23,720 --> 00:11:31,240
and we uh basically turn this into an
263
00:11:27,279 --> 00:11:34,720
index where we have something like a you
264
00:11:31,240 --> 00:11:37,920
know Python-style dictionary or map that
265
00:11:34,720 --> 00:11:41,079
has as its key each word we would
266
00:11:37,920 --> 00:11:45,000
like to look up and as its value
267
00:11:41,079 --> 00:11:48,480
the corresponding indices of the
268
00:11:45,000 --> 00:11:50,480
documents so for example what in our case
269
00:11:48,480 --> 00:11:54,200
here only appears in document one so it
270
00:11:50,480 --> 00:11:56,279
would point to document one candy uh
271
00:11:54,200 --> 00:11:58,560
also appears in document one NLP appears
272
00:11:56,279 --> 00:11:59,839
in two and three and so you can create
273
00:11:58,560 --> 00:12:02,760
this index like this and this is
274
00:11:59,839 --> 00:12:02,760
called an inverted
275
00:12:03,079 --> 00:12:08,760
index this is an important application
276
00:12:06,000 --> 00:12:11,600
of course so there's lots of software
277
00:12:08,760 --> 00:12:14,920
the most kind of typical software for this
278
00:12:11,600 --> 00:12:18,760
is Apache Lucene so if you want to build
279
00:12:14,920 --> 00:12:21,639
a big index uh to look up vectors using
280
00:12:18,760 --> 00:12:24,160
this sparse index like this you can uh
281
00:12:21,639 --> 00:12:24,160
take a look at Lucene
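To make the inverted index concrete, here is a minimal Python sketch of building and querying one (toy code over the example documents; this is not how Lucene works internally):

```python
from collections import defaultdict

docs = {
    1: "what is life candy is life someone really likes candy",
    2: "NLP is an acronym for natural language processing",
    3: "I like to do good research on NLP",
}

# Build the inverted index: each token maps to the set of documents containing it.
inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        inverted_index[token].add(doc_id)

# At query time only documents sharing at least one query token are candidates,
# so we never have to touch the rest of the collection.
query = "what is NLP"
candidates = set()
for token in query.lower().split():
    candidates |= inverted_index.get(token, set())
print(sorted(candidates))  # -> [1, 2, 3]
```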
282
00:12:26,160 --> 00:12:30,880
so the next thing I'd like to talk
283
00:12:28,399 --> 00:12:33,199
about is dense retrieval and the way
284
00:12:30,880 --> 00:12:36,000
dense retrieval works is you encode the
285
00:12:33,199 --> 00:12:37,240
document and query into a dense vector
286
00:12:36,000 --> 00:12:40,240
and find the nearest
287
00:12:37,240 --> 00:12:42,160
neighbor in order to do this encoding
288
00:12:40,240 --> 00:12:44,639
you can use a number of things you can
289
00:12:42,160 --> 00:12:47,440
use out of the box embeddings or you can
290
00:12:44,639 --> 00:12:49,959
use learned embeddings specifically
291
00:12:47,440 --> 00:12:53,519
created for the purpose of
292
00:12:49,959 --> 00:12:56,240
retrieval and so what we do is we take
293
00:12:53,519 --> 00:12:57,920
all of these uh documents here we
294
00:12:56,240 --> 00:12:59,920
convert them into embeddings using
295
00:12:57,920 --> 00:13:04,040
whatever embedding method that we want
296
00:12:59,920 --> 00:13:05,920
to use we then have a query and we take
297
00:13:04,040 --> 00:13:07,720
that query and we match it and find the
298
00:13:05,920 --> 00:13:10,040
nearest neighbor here
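Here is a minimal sketch of document-level dense retrieval with an off-the-shelf encoder; it assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint, which are my choices for illustration rather than anything prescribed in the lecture:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text encoder would do here

docs = [
    "What is life? Candy is life.",
    "NLP is an acronym for natural language processing.",
    "I like to do good research on NLP.",
]

# Encode the documents once (offline) and the query at search time.
doc_embs = model.encode(docs, normalize_embeddings=True)
query_emb = model.encode("What is NLP?", normalize_embeddings=True)

# With normalized vectors the inner product is the cosine similarity,
# so the highest-scoring document is the nearest neighbor.
scores = doc_embs @ query_emb
print(docs[int(np.argmax(scores))])
```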
299
00:13:07,720 --> 00:13:13,120
so if you're just using out of the
300
00:13:10,040 --> 00:13:14,839
box embeddings you don't need to
301
00:13:13,120 --> 00:13:15,880
do anything special for retrieval
302
00:13:14,839 --> 00:13:18,440
you can just take your favorite
303
00:13:15,880 --> 00:13:22,800
embeddings like the Sentence-BERT
304
00:13:18,440 --> 00:13:25,639
embeddings or the OpenAI Ada
305
00:13:22,800 --> 00:13:27,240
embeddings or something like this but
306
00:13:25,639 --> 00:13:29,519
actually the type of embeddings you need
307
00:13:27,240 --> 00:13:32,040
for retrieval are kind of
308
00:13:29,519 --> 00:13:33,519
very special and because of that it's
309
00:13:32,040 --> 00:13:36,160
important
310
00:13:33,519 --> 00:13:38,600
to if you're very serious about doing a
311
00:13:36,160 --> 00:13:39,800
good job of retrieval it's important to use
312
00:13:38,600 --> 00:13:41,360
embeddings that were specifically
313
00:13:39,800 --> 00:13:45,040
tailored for
314
00:13:41,360 --> 00:13:47,680
retrieval and the reason why it is
315
00:13:45,040 --> 00:13:50,079
important to do this is severalfold but
316
00:13:47,680 --> 00:13:53,800
the most intuitive way to think about it
317
00:13:50,079 --> 00:13:57,600
is if we think about uh the things that
318
00:13:53,800 --> 00:13:59,440
TF-IDF does TF-IDF is giving a very high
319
00:13:57,600 --> 00:14:03,000
weight to
320
00:13:59,440 --> 00:14:04,959
contentful words and rare words and
321
00:14:03,000 --> 00:14:06,639
we're not guaranteed that any random
322
00:14:04,959 --> 00:14:10,600
embedding that we get is going to do
323
00:14:06,639 --> 00:14:13,800
that so for example if we just take the
324
00:14:10,600 --> 00:14:16,160
average word embeddings of every word in
325
00:14:13,800 --> 00:14:20,160
a sequence it's going to give the same
326
00:14:16,160 --> 00:14:22,320
weight to all of the words um in the
327
00:14:20,160 --> 00:14:24,680
output and in fact common words tend to
328
00:14:22,320 --> 00:14:27,959
have slightly higher norms than
329
00:14:24,680 --> 00:14:29,639
infrequent words and so that would
330
00:14:27,959 --> 00:14:31,880
actually upweight common words which is
331
00:14:29,639 --> 00:14:34,639
kind of exactly the opposite thing we
332
00:14:31,880 --> 00:14:36,480
want so how do we learn retrieval
333
00:14:34,639 --> 00:14:39,160
oriented
334
00:14:36,480 --> 00:14:40,920
embeddings the normal way we do this is
335
00:14:39,160 --> 00:14:43,399
we select positive and negative
336
00:14:40,920 --> 00:14:46,839
documents and then train using a
337
00:14:43,399 --> 00:14:50,240
contrastive loss and so an example of
338
00:14:46,839 --> 00:14:52,519
this is we have a query and then we have
339
00:14:50,240 --> 00:14:55,519
negative documents for that query and we
340
00:14:52,519 --> 00:14:58,199
have positive documents for that query
341
00:14:55,519 --> 00:15:00,079
and we formulate a hinge loss or
342
00:14:58,199 --> 00:15:04,000
maybe some sort of probabilistic loss
343
00:15:00,079 --> 00:15:06,560
similar to the hinge loss and do fine
344
00:15:04,000 --> 00:15:06,560
tuning of the
345
00:15:07,160 --> 00:15:13,440
embeddings so if
346
00:15:09,399 --> 00:15:16,320
you have gold standard positive
347
00:15:13,440 --> 00:15:18,800
documents then this is relatively easy
348
00:15:16,320 --> 00:15:21,040
to train uh because you just need the
349
00:15:18,800 --> 00:15:23,800
positive documents and then you can get
350
00:15:21,040 --> 00:15:25,959
negative documents in a number of ways
351
00:15:23,800 --> 00:15:29,279
one common way of getting negative
352
00:15:25,959 --> 00:15:32,279
documents is you just form a batch of
353
00:15:29,279 --> 00:15:34,560
data and given that batch of data you
354
00:15:32,279 --> 00:15:37,480
take all of the other documents in the
355
00:15:34,560 --> 00:15:39,480
batch um all of the documents in the
356
00:15:37,480 --> 00:15:42,839
batch that are positive for some other
357
00:15:39,480 --> 00:15:46,399
query and you use those as negative
358
00:15:42,839 --> 00:15:49,000
documents so you sample 32 query
359
00:15:46,399 --> 00:15:50,759
document pairs you use the aligned ones
360
00:15:49,000 --> 00:15:53,759
as positive documents and then use the
361
00:15:50,759 --> 00:15:57,440
31 other ones as negative documents and
362
00:15:53,759 --> 00:16:00,279
this is both effective and efficient
363
00:15:57,440 --> 00:16:02,000
because you can kind of learn from the
364
00:16:00,279 --> 00:16:05,079
query document pairs all at the same
365
00:16:02,000 --> 00:16:05,079
time in an efficient implementation
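A minimal PyTorch sketch of this in-batch negative trick: with a batch of aligned query/document embeddings, the diagonal of the similarity matrix holds the positives and every other document in the batch serves as a negative. This assumes you already have a query encoder and a document encoder producing the embeddings; the random tensors below just stand in for them:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_embs, doc_embs, temperature=0.05):
    # query_embs, doc_embs: [batch, dim]; row i of each side is an aligned positive pair.
    q = F.normalize(query_embs, dim=-1)
    d = F.normalize(doc_embs, dim=-1)
    # sim[i, j] scores query i against document j; the diagonal holds the positives
    # and the other batch documents act as in-batch negatives.
    sim = q @ d.T / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)

# Random tensors stand in for the outputs of the query and document encoders.
queries = torch.randn(32, 768, requires_grad=True)
documents = torch.randn(32, 768, requires_grad=True)
loss = in_batch_contrastive_loss(queries, documents)
loss.backward()  # during fine-tuning these gradients flow back into both encoders
```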
366
00:16:05,680 --> 00:16:13,680
however this is not
367
00:16:09,160 --> 00:16:16,279
enough in many cases because that will
368
00:16:13,680 --> 00:16:19,040
end up having lots of very kind of
369
00:16:16,279 --> 00:16:20,440
obviously wrong documents because you
370
00:16:19,040 --> 00:16:23,120
know
371
00:16:20,440 --> 00:16:25,360
they're documents that are relevant for
372
00:16:23,120 --> 00:16:27,880
a completely different query and it's
373
00:16:25,360 --> 00:16:29,880
kind of easy to distinguish uh between
374
00:16:27,880 --> 00:16:32,319
those you can just look at superficial word
375
00:16:29,880 --> 00:16:34,519
overlap so another common thing to do
376
00:16:32,319 --> 00:16:35,759
when you're training these models is to
377
00:16:34,519 --> 00:16:38,160
get hard
378
00:16:35,759 --> 00:16:40,680
negatives so hard negatives are
379
00:16:38,160 --> 00:16:44,360
basically negative examples that look
380
00:16:40,680 --> 00:16:49,399
plausible but are actually wrong and
381
00:16:44,360 --> 00:16:53,199
so here uh this famous method called DPR
382
00:16:49,399 --> 00:16:55,880
it basically learns the encoders
383
00:16:53,199 --> 00:16:57,759
based on both in-batch negatives like I
384
00:16:55,880 --> 00:17:00,160
mentioned before and hard negatives that
385
00:16:57,759 --> 00:17:01,360
were created by looking up documents
386
00:17:00,160 --> 00:17:03,839
with
387
00:17:01,360 --> 00:17:06,039
BM25 and so the ones that were looked up
388
00:17:03,839 --> 00:17:07,640
by BM25 you know kind of look very
389
00:17:06,039 --> 00:17:10,039
similar superficially but they might
390
00:17:07,640 --> 00:17:12,400
have you know subtle errors in them for
391
00:17:10,039 --> 00:17:12,400
why they're
392
00:17:12,799 --> 00:17:17,160
inappropriate there's also methods to
393
00:17:15,679 --> 00:17:20,000
learn these
394
00:17:17,160 --> 00:17:23,199
retrievers without
395
00:17:20,000 --> 00:17:26,199
supervised data so one major bottleneck
396
00:17:23,199 --> 00:17:29,000
if you're taking the positive documents
397
00:17:26,199 --> 00:17:30,440
from human annotations of whether
398
00:17:29,000 --> 00:17:33,440
something is correct or not or human
399
00:17:30,440 --> 00:17:37,880
clickthrough logs or other things like
400
00:17:33,440 --> 00:17:40,640
this is that you need that data in order
401
00:17:37,880 --> 00:17:44,440
to start training a model so
402
00:17:40,640 --> 00:17:47,880
Contriever is another method that uses
403
00:17:44,440 --> 00:17:51,520
two random spans within a document as a
404
00:17:47,880 --> 00:17:54,440
positive pair and random spans from
405
00:17:51,520 --> 00:17:56,559
across documents as negative pairs and
406
00:17:54,440 --> 00:17:58,960
so this can be used for you know very
407
00:17:56,559 --> 00:18:00,039
very large scale initial pre-training of
408
00:17:58,960 --> 00:18:02,280
the
409
00:18:00,039 --> 00:18:04,520
models and then after you've done that
410
00:18:02,280 --> 00:18:06,840
large scale initial pre-training you can
411
00:18:04,520 --> 00:18:10,799
then go in and fine-tune it on you know
412
00:18:06,840 --> 00:18:10,799
actually annotated data to improve it
413
00:18:12,120 --> 00:18:18,799
further okay so we've talked about
414
00:18:15,159 --> 00:18:21,559
training these dense
415
00:18:18,799 --> 00:18:24,559
models these models that look at
416
00:18:21,559 --> 00:18:27,720
dense embedding overlap for nearest
417
00:18:24,559 --> 00:18:28,919
neighbors but the problem is in order to
418
00:18:27,720 --> 00:18:30,919
calculate this you would need to
419
00:18:28,919 --> 00:18:35,159
calculate it over a very very large
420
00:18:30,919 --> 00:18:37,960
document base and just taking a product
421
00:18:35,159 --> 00:18:40,480
between the query and all of the other
422
00:18:37,960 --> 00:18:42,400
documents in the document base is
423
00:18:40,480 --> 00:18:46,080
extremely
424
00:18:42,400 --> 00:18:48,080
costly and so in order to fix this there
425
00:18:46,080 --> 00:18:49,080
are methods for approximate nearest
426
00:18:48,080 --> 00:18:52,280
neighbor
427
00:18:49,080 --> 00:18:54,200
search and these are methods that allow
428
00:18:52,280 --> 00:18:57,360
you to retrieve embeddings that have the
429
00:18:54,200 --> 00:19:00,280
maximum inner product between them in
430
00:18:57,360 --> 00:19:02,520
sublinear time and because you're doing
431
00:19:00,280 --> 00:19:03,960
the maximum inner product this is also
432
00:19:02,520 --> 00:19:06,600
often called maximum inner product
433
00:19:03,960 --> 00:19:06,600
search or
434
00:19:06,679 --> 00:19:12,360
MIPS so I'm going to introduce at a
435
00:19:09,440 --> 00:19:15,360
very high level two common methods to do
436
00:19:12,360 --> 00:19:19,320
this the first one is locality sensitive
437
00:19:15,360 --> 00:19:22,440
hashing or this can also be called
438
00:19:19,320 --> 00:19:24,799
kind of an inverted index as well and what
439
00:19:22,440 --> 00:19:26,840
you do is you make partitions in
440
00:19:24,799 --> 00:19:29,320
continuous space and then you use it
441
00:19:26,840 --> 00:19:31,240
like an inverted index
442
00:19:29,320 --> 00:19:33,679
so let's say we have a whole bunch of
443
00:19:31,240 --> 00:19:34,919
embeddings uh I demonstrated two
444
00:19:33,679 --> 00:19:36,640
dimensional embeddings here but in
445
00:19:34,919 --> 00:19:38,440
reality this would be you know as large
446
00:19:36,640 --> 00:19:41,159
as your word
447
00:19:38,440 --> 00:19:42,880
embedding your query and document
448
00:19:41,159 --> 00:19:47,120
embedding space so this would be you
449
00:19:42,880 --> 00:19:49,760
know 512 or 1024 or something like that
450
00:19:47,120 --> 00:19:53,480
and what you do is you define a whole
451
00:19:49,760 --> 00:19:56,720
bunch of planes that separate these
452
00:19:53,480 --> 00:19:59,320
points into two spaces so if this is our
453
00:19:56,720 --> 00:20:02,520
first plane all the points above the
454
00:19:59,320 --> 00:20:04,280
plane will get a one for this partition
455
00:20:02,520 --> 00:20:06,799
and all the points below the plane will
456
00:20:04,280 --> 00:20:08,840
get a zero for this partition and we do
457
00:20:06,799 --> 00:20:12,400
it similarly we create a whole bunch
458
00:20:08,840 --> 00:20:15,840
of them and then based on this we can
459
00:20:12,400 --> 00:20:18,440
now assign sparse vectors depending on
460
00:20:15,840 --> 00:20:21,520
each of these planes so we have uh for
461
00:20:18,440 --> 00:20:24,000
example the top one is 1 0 0 because
462
00:20:21,520 --> 00:20:26,400
it's on the right side of the blue plane
463
00:20:24,000 --> 00:20:28,760
and the wrong side of the red and the
464
00:20:26,400 --> 00:20:30,679
green planes and then for the top right
465
00:20:28,760 --> 00:20:32,799
we have 1 0 1 because it's on the right
466
00:20:30,679 --> 00:20:37,159
side of the blue and the green planes and
467
00:20:32,799 --> 00:20:39,440
the wrong side of the red plane and so
468
00:20:37,159 --> 00:20:41,000
based on this now we have a sparse
469
00:20:39,440 --> 00:20:42,600
vector and we already know what to do
470
00:20:41,000 --> 00:20:44,640
with a sparse vector right we look it up
471
00:20:42,600 --> 00:20:49,039
in an inverted index just like we did
472
00:20:44,640 --> 00:20:51,520
for a sparse lookup
473
00:20:49,039 --> 00:20:54,520
table so that's one method
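A minimal sketch of the random-hyperplane idea just described: each plane is a random vector, the sign of the dot product says which side a point falls on, and the resulting bit pattern becomes the key into an inverted index (illustrative toy code; real ANN libraries add many refinements on top of this):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, num_planes = 512, 8
planes = rng.standard_normal((num_planes, dim))  # one random hyperplane per row

def lsh_key(vec):
    # 1 if the point is on the positive side of a plane, 0 otherwise.
    bits = (planes @ vec > 0).astype(int)
    return tuple(bits)

# Index: bucket each document embedding by its bit signature.
doc_embs = rng.standard_normal((1000, dim))
buckets = defaultdict(list)
for doc_id, emb in enumerate(doc_embs):
    buckets[lsh_key(emb)].append(doc_id)

# At query time, only the documents in the query's bucket are scored exactly.
query = rng.standard_normal(dim)
candidates = buckets[lsh_key(query)]
print(len(candidates), "candidates instead of", len(doc_embs))
```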
474
00:20:51,520 --> 00:20:57,799
another method uses a graph-based
475
00:20:54,520 --> 00:21:01,320
search and the basic idea behind this is
476
00:20:57,799 --> 00:21:02,480
that we create hubs uh and these hubs
477
00:21:01,320 --> 00:21:05,200
are kind
478
00:21:02,480 --> 00:21:07,960
of a small number of points that are
479
00:21:05,200 --> 00:21:09,440
close to other points in the space and
480
00:21:07,960 --> 00:21:10,880
so we create some hubs and then we
481
00:21:09,440 --> 00:21:12,200
search from there so if we have a
482
00:21:10,880 --> 00:21:16,880
similar
483
00:21:12,200 --> 00:21:19,159
looking uh set of points in the space we
484
00:21:16,880 --> 00:21:21,520
find these hubs which are something like
485
00:21:19,159 --> 00:21:24,880
cluster centroids and then based on the
486
00:21:21,520 --> 00:21:28,559
cluster centroids we then narrow down or
487
00:21:24,880 --> 00:21:31,200
we greatly reduce the number of
488
00:21:28,559 --> 00:21:33,400
points that we need to be looking at and
489
00:21:31,200 --> 00:21:36,960
then we search through only those points
490
00:21:33,400 --> 00:21:38,600
in a more kind of extensive manner and
491
00:21:36,960 --> 00:21:41,840
you can even turn this into a tree where
492
00:21:38,600 --> 00:21:43,760
you have hubs and then you have uh kind
493
00:21:41,840 --> 00:21:46,600
of mini hubs and then you have all the
494
00:21:43,760 --> 00:21:50,200
points so this allows you to do a kind
495
00:21:46,600 --> 00:21:50,200
of tree-based or graph-based
496
00:21:50,600 --> 00:21:55,840
search so obviously unless you're really
497
00:21:54,159 --> 00:21:57,039
excited about these algorithms this is
498
00:21:55,840 --> 00:22:00,080
something that you probably don't want
499
00:21:57,039 --> 00:22:01,440
to be implementing yourself um and the
500
00:22:00,080 --> 00:22:03,000
good news is there's lots of very good
501
00:22:01,440 --> 00:22:04,480
libraries that help you do this in fact
502
00:22:03,000 --> 00:22:08,799
there are so many libraries it's hard to
503
00:22:04,480 --> 00:22:11,960
manage them but some libraries that
504
00:22:08,799 --> 00:22:13,799
people very commonly use I think
505
00:22:11,960 --> 00:22:17,320
FAISS
506
00:22:13,799 --> 00:22:20,200
is a widely used one created by
507
00:22:17,320 --> 00:22:23,760
FAIR at Meta and Chroma DB is a
508
00:22:20,200 --> 00:22:27,720
separate one that is kind of an AI
509
00:22:23,760 --> 00:22:30,720
native embedding search database so
510
00:22:27,720 --> 00:22:30,720
both of those are good options
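A minimal FAISS usage sketch, assuming the faiss-cpu package; IndexFlatIP here does exact inner-product search, and FAISS also provides approximate indexes (for example IVF or HNSW variants) behind a very similar interface:

```python
import numpy as np
import faiss  # assumes faiss-cpu is installed

dim = 768
doc_embs = np.random.rand(10000, dim).astype("float32")

# Exact maximum-inner-product search over the document embeddings.
index = faiss.IndexFlatIP(dim)
index.add(doc_embs)

query = np.random.rand(1, dim).astype("float32")
scores, doc_ids = index.search(query, 5)  # top-5 documents for the query
print(doc_ids[0], scores[0])
```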
511
00:22:32,960 --> 00:22:41,120
even with intelligent training
512
00:22:37,880 --> 00:22:42,640
of dense embeddings however there still
513
00:22:41,120 --> 00:22:45,600
are
514
00:22:42,640 --> 00:22:48,240
problems and the biggest
515
00:22:45,600 --> 00:22:51,720
problem that you face when you're
516
00:22:48,240 --> 00:22:54,000
looking at something like uh cross
517
00:22:51,720 --> 00:22:56,880
encoders sorry when you're
518
00:22:54,000 --> 00:23:00,240
looking at dense embeddings is that in
519
00:22:56,880 --> 00:23:02,159
order to form a good dense embedding you
520
00:23:00,240 --> 00:23:03,840
need to kind of know in advance what
521
00:23:02,159 --> 00:23:05,799
you're looking for right because you're
522
00:23:03,840 --> 00:23:09,120
taking a long document you're condensing
523
00:23:05,799 --> 00:23:10,679
it down into a single embedding or a
524
00:23:09,120 --> 00:23:13,320
long passage and you're condensing it
525
00:23:10,679 --> 00:23:16,200
down to a single embedding and so if
526
00:23:13,320 --> 00:23:19,520
during that condensation process
527
00:23:16,200 --> 00:23:21,240
actually there's other information that
528
00:23:19,520 --> 00:23:23,159
is relevant to a query but you have to
529
00:23:21,240 --> 00:23:27,600
throw out because of the limited
530
00:23:23,159 --> 00:23:30,600
embedding capacity this causes you to
531
00:23:27,600 --> 00:23:32,320
you know essentially fail at um doing
532
00:23:30,600 --> 00:23:34,840
retrieval
533
00:23:32,320 --> 00:23:38,159
appropriately so there's a couple
534
00:23:34,840 --> 00:23:40,880
methods that can be used to fix this so
535
00:23:38,159 --> 00:23:42,279
the first method is in contrast to the
536
00:23:40,880 --> 00:23:44,159
bi-encoder which is what I've been
537
00:23:42,279 --> 00:23:47,000
talking about at this point where
538
00:23:44,159 --> 00:23:48,520
you kind of do full encoding of queries
539
00:23:47,000 --> 00:23:52,120
full encoding of documents and then do
540
00:23:48,520 --> 00:23:53,840
inner product search for a score uh you
541
00:23:52,120 --> 00:23:56,760
can use a cross-encoder and the way the
542
00:23:53,840 --> 00:23:58,559
cross-encoder works is you append the
543
00:23:56,760 --> 00:24:00,799
query and document and then you run them
544
00:23:58,559 --> 00:24:03,400
through a model like a Transformer model
545
00:24:00,799 --> 00:24:07,840
and you calculate the output
546
00:24:03,400 --> 00:24:09,880
score so
547
00:24:07,840 --> 00:24:12,480
this is great because it gives
548
00:24:09,880 --> 00:24:15,799
you maximum flexibility Transformer
549
00:24:12,480 --> 00:24:18,799
models are powerful you can uh assess
550
00:24:15,799 --> 00:24:20,520
relevance very well the problem with
551
00:24:18,799 --> 00:24:22,200
this is that it precludes approximate
552
00:24:20,520 --> 00:24:23,720
nearest neighbor lookup because now
553
00:24:22,200 --> 00:24:25,799
you're running through you know many
554
00:24:23,720 --> 00:24:28,880
many nonlinearities
555
00:24:25,799 --> 00:24:32,760
here so this can only be used for
556
00:24:28,880 --> 00:24:34,360
reranking documents or even if
557
00:24:32,760 --> 00:24:36,880
you're doing retrieval doing retrieval
558
00:24:34,360 --> 00:24:39,679
over a very very small number of
559
00:24:36,880 --> 00:24:41,960
documents but if you really want maximal
560
00:24:39,679 --> 00:24:44,080
accuracy I definitely would recommend uh
561
00:24:41,960 --> 00:24:45,720
doing something like this because it can
562
00:24:44,080 --> 00:24:47,960
allow you to do kind of a second pass
563
00:24:45,720 --> 00:24:49,360
filtering over the most relevant looking
564
00:24:47,960 --> 00:24:52,399
documents to identify the ones you
565
00:24:49,360 --> 00:24:52,399
really want to add to your context
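A minimal sketch of that second-pass reranking, using the sentence-transformers CrossEncoder wrapper; the model name is one commonly used MS MARCO reranker and is my choice for illustration, not something prescribed in the lecture:

```python
from sentence_transformers import CrossEncoder  # assumed dependency

# The cross-encoder sees the query and passage together, so it can judge
# relevance more precisely than a bi-encoder, but it must be run per pair.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "What TV series has Vin Diesel done voice acting for?"
candidates = [
    "Vin Diesel voiced Groot in Guardians of the Galaxy Vol. 3.",
    "Big Mouth is an animated series; Nick Kroll voices a character named Vin Diesel.",
    "Vin Diesel starred in the Fast & Furious films.",
]

# Score each (query, passage) pair, then keep only the top few for the reader.
scores = reranker.predict([(query, passage) for passage in candidates])
reranked = [p for _, p in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```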
566
00:24:54,240 --> 00:24:58,240
so then there are also
567
00:24:56,760 --> 00:25:01,360
approaches that are kind of in the
568
00:24:58,240 --> 00:25:02,159
middle of these two uh the most famous
569
00:25:01,360 --> 00:25:05,880
one is
570
00:25:02,159 --> 00:25:08,320
ColBERT and I call this token-level
571
00:25:05,880 --> 00:25:10,840
dense retrieval it's also called uh late
572
00:25:08,320 --> 00:25:12,720
interaction in the ColBERT paper but
573
00:25:10,840 --> 00:25:14,919
the way it works is you use
574
00:25:12,720 --> 00:25:18,440
contextualized representations of all
575
00:25:14,919 --> 00:25:19,440
query and document tokens to compute a
576
00:25:18,440 --> 00:25:23,559
retrieval
577
00:25:19,440 --> 00:25:26,919
score and so you do offline indexing of
578
00:25:23,559 --> 00:25:29,159
every token in the document and then
579
00:25:26,919 --> 00:25:31,399
based on this offline indexing of
580
00:25:29,159 --> 00:25:35,320
every token in the document you then
581
00:25:31,399 --> 00:25:38,760
have a query encoder and you do matching
582
00:25:35,320 --> 00:25:41,799
between each token in the query and the
583
00:25:38,760 --> 00:25:43,399
highest scoring tokens in each
584
00:25:41,799 --> 00:25:46,320
document
585
00:25:43,399 --> 00:25:48,399
and the reason why this is good is it
586
00:25:46,320 --> 00:25:49,600
still allows you to encode all of the
587
00:25:48,399 --> 00:25:52,120
tokens in the
588
00:25:49,600 --> 00:25:55,440
document but each of these
589
00:25:52,120 --> 00:25:59,679
similarity searches is still just
590
00:25:55,440 --> 00:26:03,559
a kind of maximum inner product search and
591
00:25:59,679 --> 00:26:06,279
because of this this allows you to do
592
00:26:03,559 --> 00:26:07,960
each of these searches efficiently and
593
00:26:06,279 --> 00:26:09,840
doesn't preclude you from running it
594
00:26:07,960 --> 00:26:12,919
over an entire database
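A minimal sketch of this late-interaction (MaxSim) scoring: each query token takes the maximum similarity over all document token embeddings, and those maxima are summed. This is a toy NumPy version of the scoring rule only, not the actual ColBERT implementation:

```python
import numpy as np

def maxsim_score(query_tok_embs, doc_tok_embs):
    # query_tok_embs: [num_query_tokens, dim], doc_tok_embs: [num_doc_tokens, dim]
    # Normalize so the inner product behaves like a cosine similarity.
    q = query_tok_embs / np.linalg.norm(query_tok_embs, axis=1, keepdims=True)
    d = doc_tok_embs / np.linalg.norm(doc_tok_embs, axis=1, keepdims=True)
    sim = q @ d.T                 # token-by-token similarity matrix
    return sim.max(axis=1).sum()  # best document match per query token, summed

rng = np.random.default_rng(0)
query_tokens = rng.standard_normal((5, 128))   # 5 query token embeddings
doc_tokens = rng.standard_normal((80, 128))    # 80 document token embeddings
print(maxsim_score(query_tokens, doc_tokens))
```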
595
00:26:09,840 --> 00:26:16,399
the downside to this method
596
00:26:12,919 --> 00:26:19,120
may already be obvious but in the
597
00:26:16,399 --> 00:26:22,200
traditional bi-encoder we have a single
598
00:26:19,120 --> 00:26:26,880
vector for each document but here we
599
00:26:22,200 --> 00:26:29,320
have one vector for um each token in the
600
00:26:26,880 --> 00:26:31,880
document so basically your vector
601
00:26:29,320 --> 00:26:34,399
database gets n times larger where n is
602
00:26:31,880 --> 00:26:36,679
the number of tokens in the document and
603
00:26:34,399 --> 00:26:38,080
there are certain methods to make this
604
00:26:36,679 --> 00:26:41,559
better like you can compress each
605
00:26:38,080 --> 00:26:42,960
document to a smaller number of vectors but
606
00:26:41,559 --> 00:26:45,880
still this is definitely going to be
607
00:26:42,960 --> 00:26:48,399
more costly than looking up each uh
608
00:26:45,880 --> 00:26:50,360
token so this is definitely something to
609
00:26:48,399 --> 00:26:53,520
consider if you want to get you know
610
00:26:50,360 --> 00:26:55,159
very good scores and ColBERT is a good
611
00:26:53,520 --> 00:26:59,600
implementation of that to start with if
612
00:26:55,159 --> 00:26:59,600
you're interested in trying it out
613
00:27:00,480 --> 00:27:07,000
so this is a final thing this is uh
614
00:27:03,080 --> 00:27:08,679
something that is a little bit uh
615
00:27:07,000 --> 00:27:10,080
different than all the other things I
616
00:27:08,679 --> 00:27:12,399
talked about before but I've used it
617
00:27:10,080 --> 00:27:15,840
myself and it actually can be pretty
618
00:27:12,399 --> 00:27:18,799
effective it was also made at CMU
619
00:27:15,840 --> 00:27:24,399
by Luyu Gao so I would like to promote our
620
00:27:18,799 --> 00:27:26,880
CMU work of course but the idea
621
00:27:24,399 --> 00:27:28,080
behind HyDE a hypothetical document
622
00:27:26,880 --> 00:27:30,320
embedding
623
00:27:28,080 --> 00:27:33,440
is that it's actually somewhat difficult
624
00:27:30,320 --> 00:27:36,200
to match a query and a document right
625
00:27:33,440 --> 00:27:38,919
because a query is a very short possibly
626
00:27:36,200 --> 00:27:42,240
ungrammatical piece of text that's asking a
627
00:27:38,919 --> 00:27:44,799
question and then a document is a very
628
00:27:42,240 --> 00:27:49,440
long piece of text that's written in a
629
00:27:44,799 --> 00:27:50,799
different prose style and you know
630
00:27:49,440 --> 00:27:53,159
it might have lots of irrelevant
631
00:27:50,799 --> 00:27:54,519
information or boilerplate or fluff
632
00:27:53,159 --> 00:27:57,640
or something like
633
00:27:54,519 --> 00:28:00,640
that so the idea behind a hypothetical
634
00:27:57,640 --> 00:28:03,120
document embedding is that it's easier
635
00:28:00,640 --> 00:28:05,279
to match a document to a document than
636
00:28:03,120 --> 00:28:08,159
it is to match a query to a
637
00:28:05,279 --> 00:28:10,159
document but the input to our model is a
638
00:28:08,159 --> 00:28:14,360
query right so what do we
639
00:28:10,159 --> 00:28:17,919
do and so essentially what we do is we
640
00:28:14,360 --> 00:28:20,399
then take a large language model we feed
641
00:28:17,919 --> 00:28:23,320
it in a query in a prompt and say
642
00:28:20,399 --> 00:28:25,399
generate a document that looks like it
643
00:28:23,320 --> 00:28:30,080
should be the answer to this
644
00:28:25,399 --> 00:28:32,120
query and so then the LLM goes in and
645
00:28:30,080 --> 00:28:34,440
it generates a document and hopefully
646
00:28:32,120 --> 00:28:38,440
this document looks more similar to the
647
00:28:34,440 --> 00:28:41,440
documents you want to retrieve than the
648
00:28:38,440 --> 00:28:44,039
original query does and I've
649
00:28:41,440 --> 00:28:47,240
actually found this to be relatively
650
00:28:44,039 --> 00:28:51,880
effective at improving accuracy
651
00:28:47,240 --> 00:28:53,200
on kind of difficult uh tasks especially
652
00:28:51,880 --> 00:28:55,840
ones that are out of domain from the
653
00:28:53,200 --> 00:28:58,000
trained models that I'm using
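A minimal sketch of the hypothetical document embedding idea; generate_with_llm and embed are hypothetical placeholders for whatever language model and embedding model you happen to be using, and doc_embs is assumed to be a NumPy array of precomputed document embeddings:

```python
def hyde_retrieve(query, generate_with_llm, embed, doc_embs, docs, k=5):
    # 1. Ask the LLM for a hypothetical document that answers the query.
    #    generate_with_llm is a placeholder: any text-generation call works here.
    prompt = f"Write a short passage that answers the question: {query}"
    hypothetical_doc = generate_with_llm(prompt)

    # 2. Embed the hypothetical document instead of the raw query, so we are
    #    matching document-like text against document-like text.
    hyde_emb = embed(hypothetical_doc)

    # 3. Ordinary dense retrieval, but with the hypothetical-document embedding.
    scores = doc_embs @ hyde_emb
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]
```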
654
00:28:55,840 --> 00:29:01,880
so I've gone through a whole bunch
655
00:28:58,000 --> 00:29:04,039
of methods and I would like to finish up
656
00:29:01,880 --> 00:29:05,679
this section by giving some insight
657
00:29:04,039 --> 00:29:11,399
about which one you should be
658
00:29:05,679 --> 00:29:14,559
using so my impression right now is
659
00:29:11,399 --> 00:29:17,760
that a good baseline to start out with is
660
00:29:14,559 --> 00:29:20,679
something like BM25 it's very easy to
661
00:29:17,760 --> 00:29:23,080
start out and compared to embedding
662
00:29:20,679 --> 00:29:26,120
based models it tends to be relatively
663
00:29:23,080 --> 00:29:28,279
robust to new domains so if you have a
664
00:29:26,120 --> 00:29:30,559
new domain you're more or less guaranteed
665
00:29:28,279 --> 00:29:32,240
that BM25 will give you some performance
666
00:29:30,559 --> 00:29:35,320
whereas embeddings may be really good
667
00:29:32,240 --> 00:29:38,399
but they may be really bad uh depending
668
00:29:35,320 --> 00:29:40,880
on how out of domain that is compared to
669
00:29:38,399 --> 00:29:42,799
your underlying embedding
670
00:29:40,880 --> 00:29:44,760
model
671
00:29:42,799 --> 00:29:48,039
so however if you want to get the
672
00:29:44,760 --> 00:29:51,080
highest accuracy definitely tuned models
673
00:29:48,039 --> 00:29:53,200
are going to be better and if you're not
674
00:29:51,080 --> 00:29:56,039
worried about computational efficiency
675
00:29:53,200 --> 00:29:58,480
using something like ColBERT with kind
676
00:29:56,039 --> 00:30:01,320
of token-level retrieval will
677
00:29:58,480 --> 00:30:05,559
definitely give you uh good accuracy
678
00:30:01,320 --> 00:30:08,559
here however there's better support for
679
00:30:05,559 --> 00:30:12,159
bi-encoder style models in kind of
680
00:30:08,559 --> 00:30:15,240
standard vector databases like FAISS and
681
00:30:12,159 --> 00:30:17,519
Chroma and other things like that so
682
00:30:15,240 --> 00:30:19,799
if you want a kind of easier method to
683
00:30:17,519 --> 00:30:23,279
get started very quickly then using a
684
00:30:19,799 --> 00:30:23,279
bi-encoder is probably the best way to
685
00:30:25,080 --> 00:30:31,080
go okay so now moving on to actual
686
00:30:28,279 --> 00:30:33,159
retrieval augmented generation models we
687
00:30:31,080 --> 00:30:38,360
have uh retriever reader
688
00:30:33,159 --> 00:30:40,880
models and the way these work is you
689
00:30:38,360 --> 00:30:43,279
basically the simplest way they can work
690
00:30:40,880 --> 00:30:45,799
is you basically just chain retrieval
691
00:30:43,279 --> 00:30:47,640
and reading together so you use an out of
692
00:30:45,799 --> 00:30:52,519
the box retriever and an out of the box
693
00:30:47,640 --> 00:30:54,039
reader model and you have your query uh
694
00:30:52,519 --> 00:30:56,159
you could for example look something up
695
00:30:54,039 --> 00:30:58,039
on Google get a whole bunch of passages
696
00:30:56,159 --> 00:30:59,760
and then feed them into a GPT model
697
00:30:58,039 --> 00:31:03,919
and get an
698
00:30:59,760 --> 00:31:06,960
answer this overall is quite effective
699
00:31:03,919 --> 00:31:09,159
it's easy to implement and it
700
00:31:06,960 --> 00:31:10,600
will give you decent results so
701
00:31:09,159 --> 00:31:15,480
definitely it's something worth
702
00:31:10,600 --> 00:31:20,720
thinking about uh for assignment two in
703
00:31:15,480 --> 00:31:24,799
the class you're required to
704
00:31:20,720 --> 00:31:26,679
only use uh kind of public models or
705
00:31:24,799 --> 00:31:29,760
open source implementations so you could
706
00:31:26,679 --> 00:31:34,360
still replace that with Apache Lucene
707
00:31:29,760 --> 00:31:36,360
and then any standard LLM
708
00:31:34,360 --> 00:31:39,159
and that could be you know Llama Llama
709
00:31:36,360 --> 00:31:41,600
Chat or Mistral or Mixtral or something
710
00:31:39,159 --> 00:31:45,360
like that so uh you could definitely
711
00:31:41,600 --> 00:31:48,120
feel free to do something like
712
00:31:45,360 --> 00:31:51,559
that of course the passages are
713
00:31:48,120 --> 00:31:53,200
concatenated to the context and so
714
00:31:51,559 --> 00:31:54,799
because the passages are concatenated to
715
00:31:53,200 --> 00:31:56,679
the context the context can get relatively
716
00:31:54,799 --> 00:31:58,399
long and expensive and other things like
717
00:31:56,679 --> 00:32:01,960
that but it's just something you have to
718
00:31:58,399 --> 00:32:01,960
deal with when you're using RAG
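A minimal sketch of chaining an off-the-shelf retriever and reader in this way; retrieve_top_k and call_llm are hypothetical placeholders standing in for, say, a BM25 or dense index lookup and any instruction-tuned LLM:

```python
def rag_answer(query, retrieve_top_k, call_llm, k=5):
    # 1. Retrieve: get the top-k passages for the query from whatever retriever
    #    you are using (search engine, BM25, dense index, ...).
    passages = retrieve_top_k(query, k)

    # 2. Concatenate the passages into the prompt. This is exactly where the
    #    context gets long and expensive, as noted above.
    context = "\n\n".join(f"Passage {i+1}: {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

    # 3. Read: let the language model generate the answer from the prompt.
    return call_llm(prompt)
```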
719
00:32:02,600 --> 00:32:07,480
there are also methods for retriever
720
00:32:05,799 --> 00:32:11,600
and generator end-to-end
721
00:32:07,480 --> 00:32:14,720
training so this is the paper actually
722
00:32:11,600 --> 00:32:17,600
where the name RAG came from and I'll
723
00:32:14,720 --> 00:32:20,200
use that as an example here uh but
724
00:32:17,600 --> 00:32:21,600
basically um there are several methods
725
00:32:20,200 --> 00:32:23,399
that propose to train the retriever and
726
00:32:21,600 --> 00:32:27,440
reader to improve
727
00:32:23,399 --> 00:32:31,240
accuracy and specifically the RAG paper by
728
00:32:27,440 --> 00:32:33,200
Lewis et al. the way it trained the
729
00:32:31,240 --> 00:32:35,639
reader was to maximize generation
730
00:32:33,200 --> 00:32:38,600
likelihood given a single retrieved
731
00:32:35,639 --> 00:32:40,279
document and for the retriever it
732
00:32:38,600 --> 00:32:41,880
maximized overall likelihood by
733
00:32:40,279 --> 00:32:44,480
optimizing the mixture weight over
734
00:32:41,880 --> 00:32:46,559
documents so here's kind of a
735
00:32:44,480 --> 00:32:50,480
schematic uh which is you have your
736
00:32:46,559 --> 00:32:54,039
query encoder you run the retriever
737
00:32:50,480 --> 00:32:57,760
with uh maximum inner product search it
738
00:32:54,039 --> 00:33:00,919
gives you several documents and each
739
00:32:57,760 --> 00:33:05,880
document has a score and then based on
740
00:33:00,919 --> 00:33:09,399
the documents and the scores you
741
00:33:05,880 --> 00:33:11,200
generate uh with each document in the
742
00:33:09,399 --> 00:33:15,360
context and
743
00:33:11,200 --> 00:33:17,080
then sum together the probabilities
744
00:33:15,360 --> 00:33:18,639
multiplied by the weights and I have the
745
00:33:17,080 --> 00:33:20,320
actual equations here because I think
746
00:33:18,639 --> 00:33:23,039
it'll be a little bit easier to
747
00:33:20,320 --> 00:33:25,760
understand after looking at the
748
00:33:23,039 --> 00:33:28,360
equations so generation is a mixture
749
00:33:25,760 --> 00:33:31,440
model and you pick a document and
750
00:33:28,360 --> 00:33:36,519
generate from the document this
751
00:33:31,440 --> 00:33:40,080
p of z given x is the probability of
752
00:33:36,519 --> 00:33:44,679
picking that document z given the query x
753
00:33:40,080 --> 00:33:48,880
and then this p theta given x z and all of the
754
00:33:44,679 --> 00:33:51,480
previous tokens is basically the uh
755
00:33:48,880 --> 00:33:54,840
probability of the next token given that
756
00:33:51,480 --> 00:33:56,559
you have this particular document so you
757
00:33:54,840 --> 00:34:00,840
can see that this is basically linearly
758
00:33:56,559 --> 00:34:00,840
interpolating between the multiple
759
00:34:01,559 --> 00:34:05,760
documents and if we look this can be
760
00:34:04,600 --> 00:34:09,039
considered the retriever and the
761
00:34:05,760 --> 00:34:09,039
generator the retriever and the
762
00:34:10,839 --> 00:34:16,119
reader one really important thing here
763
00:34:13,639 --> 00:34:17,760
that enables end-to-end training is
764
00:34:16,119 --> 00:34:19,639
they have this probability of the
765
00:34:17,760 --> 00:34:22,919
retriever be based on
766
00:34:19,639 --> 00:34:25,480
embeddings and so here we have the
767
00:34:22,919 --> 00:34:29,040
document embedding and the query
768
00:34:25,480 --> 00:34:31,440
embedding and the probability is
769
00:34:29,040 --> 00:34:33,320
proportional to the inner product of
770
00:34:31,440 --> 00:34:36,599
these exponentiated so you're basically
771
00:34:33,320 --> 00:34:38,839
taking a softmax over the inner
772
00:34:36,599 --> 00:34:40,599
product between the
773
00:34:38,839 --> 00:34:44,200
two
774
00:34:40,599 --> 00:34:47,919
and this adjusts the retriever to give
775
00:34:44,200 --> 00:34:49,560
higher similarities to helpful
776
00:34:47,919 --> 00:34:52,560
documents
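Putting the pieces just described into symbols (my reconstruction of the equations on the slide, following the notation of the RAG-token model in Lewis et al.):

```latex
p(y \mid x) \approx \prod_{i} \;\; \sum_{z \in \text{top-}k\left(p_\eta(\cdot \mid x)\right)}
    p_\eta(z \mid x)\, p_\theta\!\left(y_i \mid x, z, y_{1:i-1}\right),
\qquad
p_\eta(z \mid x) \propto \exp\!\big(\mathbf{d}(z)^{\top} \mathbf{q}(x)\big)
```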
777
00:34:49,560 --> 00:34:52,560
and
778
00:34:54,040 --> 00:35:02,800
so because the probability of the
779
00:34:59,800 --> 00:35:04,839
retriever model here is included in the
780
00:35:02,800 --> 00:35:07,160
end-to-end probability you don't actually
781
00:35:04,839 --> 00:35:10,680
need any annotations
782
00:35:07,160 --> 00:35:12,839
about which documents are useful you can
783
00:35:10,680 --> 00:35:16,680
just train all of this end to end and
784
00:35:12,839 --> 00:35:19,480
let backprop do its thing to update the
785
00:35:16,680 --> 00:35:22,640
the retriever as
786
00:35:19,480 --> 00:35:25,000
well one important issue when training
787
00:35:22,640 --> 00:35:27,480
models like this is that the search
788
00:35:25,000 --> 00:35:30,400
index will become stale so what do I
789
00:35:27,480 --> 00:35:34,760
mean by this if we go back to our
790
00:35:30,400 --> 00:35:34,760
previous uh thing about dense
791
00:35:35,480 --> 00:35:43,560
models creating this blue search index
792
00:35:39,800 --> 00:35:45,400
on the right side of the figure here is
793
00:35:43,560 --> 00:35:48,680
very costly so like let's say you want
794
00:35:45,400 --> 00:35:50,520
to embed a million documents or a
795
00:35:48,680 --> 00:35:55,240
billion documents if you're a big search
796
00:35:50,520 --> 00:35:58,200
engine company so doing this is very
797
00:35:55,240 --> 00:36:00,599
slow and
798
00:35:58,200 --> 00:36:01,920
in contrast doing lookup with kind of
799
00:36:00,599 --> 00:36:04,160
these approximate nearest neighbor
800
00:36:01,920 --> 00:36:05,440
searches is sublinear time or even you
801
00:36:04,160 --> 00:36:08,119
know log time so you can do it
802
00:36:05,440 --> 00:36:12,319
relatively quickly
803
00:36:08,119 --> 00:36:15,680
so it's fine to do lookup over this big
804
00:36:12,319 --> 00:36:17,520
index but if you start updating this
805
00:36:15,680 --> 00:36:19,920
document embedding you need to recreate
806
00:36:17,520 --> 00:36:23,760
the entire index and that would be you
807
00:36:19,920 --> 00:36:27,240
know very computationally costly so the
808
00:36:23,760 --> 00:36:30,119
solution to this proposed in this RAG
809
00:36:27,240 --> 00:36:33,640
paper by Lewis et al. is we only
810
00:36:30,119 --> 00:36:35,640
train the query embeddings and we keep
811
00:36:33,640 --> 00:36:39,640
the document embedding
812
00:36:35,640 --> 00:36:41,920
fixed there are other alternatives like
813
00:36:39,640 --> 00:36:45,000
there was a paper called REALM from
814
00:36:41,920 --> 00:36:48,040
early in retrieval-based modeling and in
815
00:36:45,000 --> 00:36:50,040
that method they basically had
816
00:36:48,040 --> 00:36:51,520
an asynchronous process that was going
817
00:36:50,040 --> 00:36:55,760
through and using the most recent
818
00:36:51,520 --> 00:36:59,960
document embedder to re-update the
819
00:36:55,760 --> 00:37:03,359
search index during training but that is
820
00:36:59,960 --> 00:37:05,960
uh you know kind of a very onerous
821
00:37:03,359 --> 00:37:07,800
process so I think it's quite common to
822
00:37:05,960 --> 00:37:11,000
use kind of a fixed document embedding
823
00:37:07,800 --> 00:37:11,000
and update only the
824
00:37:12,079 --> 00:37:17,720
queries another thing to think about is
825
00:37:14,359 --> 00:37:21,160
when do we do retrieval um so there's a
826
00:37:17,720 --> 00:37:23,079
bunch of different methods the rag paper
827
00:37:21,160 --> 00:37:24,440
that I mentioned before did this only
828
00:37:23,079 --> 00:37:26,359
once right at the very beginning of
829
00:37:24,440 --> 00:37:29,400
generation it grabbed a single document
830
00:37:26,359 --> 00:37:32,560
and generated the entire output this is
831
00:37:29,400 --> 00:37:34,800
the default method used by most
832
00:37:32,560 --> 00:37:37,240
systems however there's other options as
833
00:37:34,800 --> 00:37:39,640
well you can retrieve uh several times
834
00:37:37,240 --> 00:37:43,040
during generation as
835
00:37:39,640 --> 00:37:44,480
necessary and the way this works uh we
836
00:37:43,040 --> 00:37:46,280
can do this either by generating a
837
00:37:44,480 --> 00:37:48,480
search token uh saying that we should
838
00:37:46,280 --> 00:37:50,200
start searching or searching when the
839
00:37:48,480 --> 00:37:52,640
model is
840
00:37:50,200 --> 00:37:55,920
uncertain and another way is to do this
841
00:37:52,640 --> 00:37:58,079
every token so we can do this by finding
842
00:37:55,920 --> 00:37:59,760
similar final embeddings and using this
843
00:37:58,079 --> 00:38:02,240
to influence the
844
00:37:59,760 --> 00:38:04,720
probabilities or approximating attention
845
00:38:02,240 --> 00:38:06,440
with nearest neighbors so I'm going to
846
00:38:04,720 --> 00:38:08,920
explain about each of these in a bit
847
00:38:06,440 --> 00:38:12,480
more detail
848
00:38:08,920 --> 00:38:16,119
so triggering retrieval with token
849
00:38:12,480 --> 00:38:19,720
embeddings was proposed in Tool
850
00:38:16,119 --> 00:38:22,119
former by Schick et al. and the way it works is
851
00:38:19,720 --> 00:38:25,000
you generate tokens that trigger
852
00:38:22,119 --> 00:38:27,880
retrieval or other tools so in this
853
00:38:25,000 --> 00:38:30,079
particular method it uh had several
854
00:38:27,880 --> 00:38:32,000
tools including asking a QA model or
855
00:38:30,079 --> 00:38:34,800
getting a calculator or having a machine
856
00:38:32,000 --> 00:38:37,200
translation system but with respect to
857
00:38:34,800 --> 00:38:40,000
retrieval augmented generation it had
858
00:38:37,200 --> 00:38:41,560
this essentially Wiki search
859
00:38:40,000 --> 00:38:43,680
functionality that would look up
860
00:38:41,560 --> 00:38:46,680
something in Wikipedia and then use that
861
00:38:43,680 --> 00:38:46,680
to influence the final
862
00:38:46,760 --> 00:38:52,200
probabilities
863
00:38:48,800 --> 00:38:55,160
and the way this was trained is training
864
00:38:52,200 --> 00:38:59,800
was done in an iterative manner where it
865
00:38:55,160 --> 00:38:59,800
basically generated uh kind
866
00:39:00,000 --> 00:39:05,680
of examples of tools being useful and
867
00:39:04,359 --> 00:39:09,560
when the
868
00:39:05,680 --> 00:39:14,160
tools improve the probability of the
869
00:39:09,560 --> 00:39:16,119
following output then that would be kind
870
00:39:14,160 --> 00:39:19,560
of treated as a positive example and
871
00:39:16,119 --> 00:39:21,520
used to further train the model so this
872
00:39:19,560 --> 00:39:23,400
was really influential and in fact this
873
00:39:21,520 --> 00:39:27,000
is how things are implemented in chat
874
00:39:23,400 --> 00:39:29,319
GPT nowadays not only for um doing
875
00:39:27,000 --> 00:39:33,400
retrieval but also doing other tools
876
00:39:29,319 --> 00:39:35,200
like um for example uh generating code
877
00:39:33,400 --> 00:39:37,440
or generating images or other things
878
00:39:35,200 --> 00:39:37,440
like
879
00:39:38,200 --> 00:39:45,079
this another option is to trigger
880
00:39:40,920 --> 00:39:48,240
retrieval uh with uncertainty estimates
881
00:39:45,079 --> 00:39:52,280
so FLARE this is a paper by my student
882
00:39:48,240 --> 00:39:55,160
Zhengbao Jiang um where we try to generate
883
00:39:52,280 --> 00:39:58,560
content and then do retrieval if the
884
00:39:55,160 --> 00:40:01,800
language model certainty is low so
885
00:39:58,560 --> 00:40:05,599
here's a schematic of how this works but
886
00:40:01,800 --> 00:40:09,160
basically um if we have
887
00:40:05,599 --> 00:40:13,440
some uh retrieved documents we can say
888
00:40:09,160 --> 00:40:16,560
generate a summary about Joe Biden and
889
00:40:13,440 --> 00:40:19,560
when it generates a summary maybe for
890
00:40:16,560 --> 00:40:20,960
the first output um the language model
891
00:40:19,560 --> 00:40:22,960
has high
892
00:40:20,960 --> 00:40:24,240
confidence and because the language
893
00:40:22,960 --> 00:40:25,359
model has high confidence we just
894
00:40:24,240 --> 00:40:27,520
generate the
895
00:40:25,359 --> 00:40:29,599
output
896
00:40:27,520 --> 00:40:31,839
however in the next step it might
897
00:40:29,599 --> 00:40:33,599
generate something like saying Joe Biden
898
00:40:31,839 --> 00:40:35,680
attended the University of Pennsylvania
899
00:40:33,599 --> 00:40:37,160
where he earned a law degree but the
900
00:40:35,680 --> 00:40:39,000
model might not be very certain about
901
00:40:37,160 --> 00:40:41,560
this it might have a low probability of
902
00:40:39,000 --> 00:40:45,839
certain important entities and So based
903
00:40:41,560 --> 00:40:48,839
on this uh we then form a query where
904
00:40:45,839 --> 00:40:52,119
what we do is essentially we blank out
905
00:40:48,839 --> 00:40:55,079
the low probability parts of this and we
906
00:40:52,119 --> 00:40:57,200
do a search and so this is also a little
907
00:40:55,079 --> 00:41:00,240
bit like the hypothetical
908
00:40:57,200 --> 00:41:02,520
embeddings method where we basically create
909
00:41:00,240 --> 00:41:04,040
a document that we think will look
910
00:41:02,520 --> 00:41:07,119
similar to the document that we want to
911
00:41:04,040 --> 00:41:09,480
find we use that to create search
912
00:41:07,119 --> 00:41:11,359
results and then we generate the output
913
00:41:09,480 --> 00:41:13,880
and then we continue doing that and
914
00:41:11,359 --> 00:41:15,960
whenever we have a high confidence
915
00:41:13,880 --> 00:41:18,800
output like the one here we don't do any
916
00:41:15,960 --> 00:41:20,040
retrieval we just you know generate uh
917
00:41:18,800 --> 00:41:21,880
directly from the parameters of the
918
00:41:20,040 --> 00:41:23,960
model but whenever we have low
919
00:41:21,880 --> 00:41:27,400
confidence outputs we do the retrieval
920
00:41:23,960 --> 00:41:30,400
and base the output on this and so I
921
00:41:27,400 --> 00:41:33,119
think this is uh you know a nice method
922
00:41:30,400 --> 00:41:35,000
that could potentially be uh used the
923
00:41:33,119 --> 00:41:36,920
downside to that is you might sometimes
924
00:41:35,000 --> 00:41:38,920
need to generate twice because you would
925
00:41:36,920 --> 00:41:40,480
generate the output once and then find
926
00:41:38,920 --> 00:41:42,720
the low confidence parts and generate
927
00:41:40,480 --> 00:41:45,400
again but you know if you really care
928
00:41:42,720 --> 00:41:47,319
about the uh kind of quality of the
929
00:41:45,400 --> 00:41:49,640
output this is I think a reasonable
930
00:41:47,319 --> 00:41:49,640
thing to do
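One step of that loop might look something like the sketch below; next_sentence_with_probs, next_sentence_with_context, and the whitespace-level alignment of token probabilities are simplifying assumptions rather than the actual FLARE code.

def flare_step(lm, retriever, prompt, threshold=0.6):
    # Tentatively generate the next sentence along with per-token probabilities.
    sentence, token_probs = lm.next_sentence_with_probs(prompt)
    if min(token_probs) >= threshold:
        return sentence                               # confident: keep the sentence as-is
    # Low confidence: blank out the uncertain tokens and use the rest as a search query.
    query = " ".join(tok for tok, p in zip(sentence.split(), token_probs)
                     if p >= threshold)
    docs = retriever.search(query)
    return lm.next_sentence_with_context(prompt, docs)  # regenerate grounded in the docs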
931
00:41:50,160 --> 00:41:54,920
okay so now moving on to the token by
932
00:41:53,000 --> 00:41:59,800
token retrieval
933
00:41:54,920 --> 00:42:03,560
methods the kind of original or one of
934
00:41:59,800 --> 00:42:05,200
the methods that popularized this idea
935
00:42:03,560 --> 00:42:08,720
of token by token retrieval is something
936
00:42:05,200 --> 00:42:10,760
called kNN-LM and the way it works is it
937
00:42:08,720 --> 00:42:13,839
retrieves similar
938
00:42:10,760 --> 00:42:16,680
examples and then uses the following
939
00:42:13,839 --> 00:42:20,880
tokens from these
940
00:42:16,680 --> 00:42:23,800
examples and this is kind of like a very
941
00:42:20,880 --> 00:42:25,839
powerful count-based bigram model in a way
942
00:42:23,800 --> 00:42:28,440
so if you remember back to when we were
943
00:42:25,839 --> 00:42:32,920
talking about count-based n-gram models
944
00:42:28,440 --> 00:42:36,440
what we would do is we would take the
945
00:42:32,920 --> 00:42:39,400
previous token and we would calculate
946
00:42:36,440 --> 00:42:41,319
the probability of the next token by
947
00:42:39,400 --> 00:42:43,040
summing up together all of the next
948
00:42:41,319 --> 00:42:44,800
tokens and dividing by the total number
949
00:42:43,040 --> 00:42:49,240
of times that previous token occurred
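As a quick refresher, that count-based bigram estimate is just counting and dividing, along these lines:

from collections import Counter, defaultdict

def bigram_probs(tokens):
    # P(next | prev) = count(prev, next) / count(prev)
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {prev: {w: c / sum(nexts.values()) for w, c in nexts.items()}
            for prev, nexts in counts.items()}

# bigram_probs("he lives in hawaii he lives in illinois".split())["in"]
# -> {"hawaii": 0.5, "illinois": 0.5}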
950
00:42:44,800 --> 00:42:52,720
and so given that background uh
951
00:42:49,240 --> 00:42:56,760
we can talk about how the kNN-LM
952
00:42:52,720 --> 00:43:00,319
works so we have the text context X
953
00:42:56,760 --> 00:43:02,240
and we want to generate a Target output
954
00:43:00,319 --> 00:43:04,839
separately from this we have all of the
955
00:43:02,240 --> 00:43:06,440
training contexts so this is all of the
956
00:43:04,839 --> 00:43:09,920
contexts that appeared in our training
957
00:43:06,440 --> 00:43:13,520
data and we encode all of these training
958
00:43:09,920 --> 00:43:15,720
contexts specifically by calculating the
959
00:43:13,520 --> 00:43:18,559
representation of the final layer or
960
00:43:15,720 --> 00:43:21,119
near the final layer of the model and so
961
00:43:18,559 --> 00:43:23,200
we encode that as
962
00:43:21,119 --> 00:43:25,240
representations separately from that we
963
00:43:23,200 --> 00:43:27,920
remember the next word that appeared
964
00:43:25,240 --> 00:43:29,720
after this context
965
00:43:27,920 --> 00:43:32,920
so now we have a data store consisting
966
00:43:29,720 --> 00:43:35,040
of representations and next words we then
967
00:43:32,920 --> 00:43:38,440
take the representation of the current
968
00:43:35,040 --> 00:43:40,880
context and we calculate the distance
969
00:43:38,440 --> 00:43:43,400
between the current context and all of
970
00:43:40,880 --> 00:43:47,119
the other similar context in the
971
00:43:43,400 --> 00:43:49,839
database we take the nearest K so we
972
00:43:47,119 --> 00:43:52,440
take the top uh K examples here which
973
00:43:49,839 --> 00:43:55,240
would be Hawaii Illinois and
974
00:43:52,440 --> 00:43:57,520
Hawaii we then do uh some sort of
975
00:43:55,240 --> 00:44:01,440
normalization based on the
976
00:43:57,520 --> 00:44:05,200
distance and this gives us a probability
977
00:44:01,440 --> 00:44:06,680
distribution over all of the next tokens
978
00:44:05,200 --> 00:44:10,599
sometimes these tokens are duplicated
979
00:44:06,680 --> 00:44:13,599
multiple times and so we aggregate all
980
00:44:10,599 --> 00:44:15,800
of these counts to be Hawaii for example
981
00:44:13,599 --> 00:44:18,839
0.8 and Illinois
982
00:44:15,800 --> 00:44:21,839
0.2 and then we interpolate this with
983
00:44:18,839 --> 00:44:24,040
the probability given by the standard
984
00:44:21,839 --> 00:44:26,440
language model using an interpolation
985
00:44:24,040 --> 00:44:28,400
coefficient Lambda and this gives us our
986
00:44:26,440 --> 00:44:31,000
final probability
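Putting those steps together, a single kNN-LM prediction can be sketched as below; datastore_keys are the stored context vectors, datastore_next_ids the remembered next-token ids (a LongTensor of vocabulary indices), and lm_probs the parametric model's distribution, all assumed to be given.

import torch

def knn_lm_probs(context_vec, datastore_keys, datastore_next_ids, lm_probs,
                 k=8, temperature=1.0, lam=0.25):
    # Distances between the current context and every stored training context.
    dists = torch.cdist(context_vec[None, :], datastore_keys).squeeze(0)   # (N,)
    nearest = torch.topk(-dists, k)                   # the k smallest distances
    weights = torch.softmax(nearest.values / temperature, dim=0)
    # Aggregate weights of neighbors that share the same next token (e.g. two "Hawaii"s).
    p_knn = torch.zeros_like(lm_probs)
    p_knn.index_add_(0, datastore_next_ids[nearest.indices], weights)
    # Interpolate with the standard language model distribution using lambda.
    return lam * p_knn + (1.0 - lam) * lm_probs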
987
00:44:28,400 --> 00:44:34,559
so the nice thing about this
988
00:44:31,000 --> 00:44:38,000
is this allows us to explicitly ground
989
00:44:34,559 --> 00:44:42,079
our outputs in individual
990
00:44:38,000 --> 00:44:45,319
examples uh and it's a pretty effective
991
00:44:42,079 --> 00:44:48,760
way to improve the probability of models
992
00:44:45,319 --> 00:44:53,839
improve translation and other stuff like
993
00:44:48,760 --> 00:44:56,119
this the disadvantage of doing this is
994
00:44:53,839 --> 00:44:59,319
that it kind of adds
995
00:44:56,119 --> 00:45:01,800
an extra component to the model it adds
996
00:44:59,319 --> 00:45:05,440
extra
997
00:45:01,800 --> 00:45:08,520
um kind of hyperparameters like Lambda
998
00:45:05,440 --> 00:45:11,680
and things like this so it is a little
999
00:45:08,520 --> 00:45:16,960
bit finicky and it doesn't work in all
1000
00:45:11,680 --> 00:45:21,440
situations and so another method that we
1001
00:45:16,960 --> 00:45:23,559
uh proposed by Amanda Bertsch who gave
1002
00:45:21,440 --> 00:45:26,920
the uh previous lecture on generation in
1003
00:45:23,559 --> 00:45:29,240
this class is Unlimiformer and basically
1004
00:45:26,920 --> 00:45:32,680
what Unlimiformer does is it notes that
1005
00:45:29,240 --> 00:45:36,079
attention itself is an inner product
1006
00:45:32,680 --> 00:45:40,440
search and it does top-k
1007
00:45:36,079 --> 00:45:42,680
attention and the way we do this is we
1008
00:45:40,440 --> 00:45:45,160
first process the input with a sliding
1009
00:45:42,680 --> 00:45:47,480
window and then perform attention using
1010
00:45:45,160 --> 00:45:49,960
a vector index so if we have a really
1011
00:45:47,480 --> 00:45:54,280
long input that we want to encode what
1012
00:45:49,960 --> 00:45:56,559
we do is we first encode chunks so we
1013
00:45:54,280 --> 00:46:01,960
encode for example AB
1014
00:45:56,559 --> 00:46:03,839
then we encode CD and we encode EF we
1015
00:46:01,960 --> 00:46:06,240
concatenate them together into a big
1016
00:46:03,839 --> 00:46:07,800
index of one long input so in a way
1017
00:46:06,240 --> 00:46:10,920
this is similar to what they did in the
1018
00:46:07,800 --> 00:46:12,720
kNN-LM you know concatenate all of these
1019
00:46:10,920 --> 00:46:16,520
embeddings into a single
1020
00:46:12,720 --> 00:46:18,680
input but the difference is that this is
1021
00:46:16,520 --> 00:46:21,640
done with
1022
00:46:18,680 --> 00:46:24,280
um the values that we are attending to
1023
00:46:21,640 --> 00:46:27,559
as opposed to just the final
1024
00:46:24,280 --> 00:46:30,079
layer and
1025
00:46:27,559 --> 00:46:33,680
the interesting thing about this is now
1026
00:46:30,079 --> 00:46:36,200
we have an index of one long input and
1027
00:46:33,680 --> 00:46:39,800
when we want to do our next version of
1028
00:46:36,200 --> 00:46:42,240
attention we do kNN search from the
1029
00:46:39,800 --> 00:46:44,280
query we take the retrieved hidden
1030
00:46:42,240 --> 00:46:47,880
States and then we just do attention
1031
00:46:44,280 --> 00:46:50,440
over them so the nice thing about this
1032
00:46:47,880 --> 00:46:53,079
is in the extreme case this makes no
1033
00:46:50,440 --> 00:46:55,240
changes to the model what I mean by this
1034
00:46:53,079 --> 00:46:57,520
is let's say our input was small enough
1035
00:46:55,240 --> 00:47:02,240
that we could encode it in only a single
1036
00:46:57,520 --> 00:47:06,400
chunk and for kNN search we also did
1037
00:47:02,240 --> 00:47:09,559
um you know exact kNN
1038
00:47:06,400 --> 00:47:12,400
search over all of the embeddings in the
1039
00:47:09,559 --> 00:47:14,680
chunk in that case this would just be
1040
00:47:12,400 --> 00:47:16,520
normal attention it's exactly the same
1041
00:47:14,680 --> 00:47:18,640
as normal
1042
00:47:16,520 --> 00:47:20,160
attention however there are some
1043
00:47:18,640 --> 00:47:21,760
approximations that go into here like
1044
00:47:20,160 --> 00:47:24,000
when we encode chunks they might not be
1045
00:47:21,760 --> 00:47:26,359
exactly the same as if we encoded the
1046
00:47:24,000 --> 00:47:29,839
entire thing together and we're also
1047
00:47:26,359 --> 00:47:33,640
chopping off some of the values with
1048
00:47:29,839 --> 00:47:35,800
very low um kind of inner products and
1049
00:47:33,640 --> 00:47:37,400
so because of this there are some
1050
00:47:35,800 --> 00:47:38,760
approximations being made but in the
1051
00:47:37,400 --> 00:47:40,160
extreme case if we made no
1052
00:47:38,760 --> 00:47:41,880
approximations this would just be
1053
00:47:40,160 --> 00:47:44,359
exactly the same model as we were using
1054
00:47:41,880 --> 00:47:46,160
before so I find this pretty attractive
1055
00:47:44,359 --> 00:47:48,760
and uh you know empirically it gives
1056
00:47:46,160 --> 00:47:51,720
very good results over long
1057
00:47:48,760 --> 00:47:53,440
distances and you know we can always
1058
00:47:51,720 --> 00:47:56,240
make our approximations better and
1059
00:47:53,440 --> 00:47:57,680
improve this model as well so I think
1060
00:47:56,240 --> 00:48:00,960
this is an attractive method that you
1061
00:47:57,680 --> 00:48:00,960
might be interested in taking a look at
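The core of that idea reduces to a few lines; this is a simplified single-head sketch that ignores the chunked-encoding and projection details of the actual Unlimiformer implementation.

import torch

def topk_attention(query, key_index, value_index, k=32):
    # query: (d,); key_index, value_index: (N, d) hidden states of the very long input,
    # built once by encoding it chunk by chunk and concatenating the results.
    scores = key_index @ query                        # inner-product search over all keys
    top = torch.topk(scores, k)                       # keep only the k best-matching states
    attn = torch.softmax(top.values, dim=0)
    return attn @ value_index[top.indices]            # ordinary attention over the retrieved set

If k covers every stored state and the chunks were encoded exactly as one sequence, this is just regular attention, which is the sense in which the approximation can in principle be made exact.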
1062
00:48:02,240 --> 00:48:06,200
okay for the final part of this I'd
1063
00:48:04,559 --> 00:48:08,079
like to talk about long context
1064
00:48:06,200 --> 00:48:12,400
Transformers and these are models that
1065
00:48:08,079 --> 00:48:15,119
are explicitly trained in a way that
1066
00:48:12,400 --> 00:48:16,920
allows you to attend to longer contexts
1067
00:48:15,119 --> 00:48:18,839
in an efficient
1068
00:48:16,920 --> 00:48:21,960
manner
1069
00:48:18,839 --> 00:48:23,680
so one way that we can train over longer
1070
00:48:21,960 --> 00:48:25,880
context is just append all of the
1071
00:48:23,680 --> 00:48:28,040
context together and in fact shortly
1072
00:48:25,880 --> 00:48:32,200
after Transformers came out uh this
1073
00:48:28,040 --> 00:48:34,280
paper by Voita et al. demonstrated that um
1074
00:48:32,200 --> 00:48:36,160
doing this can learn you know
1075
00:48:34,280 --> 00:48:38,119
interesting document level phenomena so
1076
00:48:36,160 --> 00:48:40,440
it can identify when
1077
00:48:38,119 --> 00:48:42,480
multiple uh words refer to the same
1078
00:48:40,440 --> 00:48:43,680
thing or co-reference and other things
1079
00:48:42,480 --> 00:48:45,640
like
1080
00:48:43,680 --> 00:48:47,720
this however the problem with
1081
00:48:45,640 --> 00:48:51,119
Transformers is that computation is
1082
00:48:47,720 --> 00:48:52,799
quadratic in the sentence length because
1083
00:48:51,119 --> 00:48:54,599
you're multiplying all of the query
1084
00:48:52,799 --> 00:48:56,799
vectors by all of the key
1085
00:48:54,599 --> 00:48:59,480
vectors
1086
00:48:56,799 --> 00:49:02,799
and that basically causes a big problem
1087
00:48:59,480 --> 00:49:02,799
if your sequences become very
1088
00:49:03,480 --> 00:49:09,760
long so if we go back to what we did in
1089
00:49:07,480 --> 00:49:12,400
rnns uh from the very beginning of the
1090
00:49:09,760 --> 00:49:14,359
class in rnns they don't have this
1091
00:49:12,400 --> 00:49:16,280
problem because computation is linear in
1092
00:49:14,359 --> 00:49:20,440
the length of the sequence you just pass
1093
00:49:16,280 --> 00:49:22,200
along the RNN State and every single
1094
00:49:20,440 --> 00:49:23,839
time you do the same computation over it
1095
00:49:22,200 --> 00:49:26,559
so there's no quadratic term in
1096
00:49:23,839 --> 00:49:32,400
calculating rnns
1097
00:49:26,559 --> 00:49:34,880
another thing is that when doing rnns
1098
00:49:32,400 --> 00:49:37,680
you can actually pass state infinitely
1099
00:49:34,880 --> 00:49:39,040
during the forward pass by just
1100
00:49:37,680 --> 00:49:40,240
calculating the hidden State and then
1101
00:49:39,040 --> 00:49:42,119
throwing away the rest of the
1102
00:49:40,240 --> 00:49:43,359
computation graph that was used in
1103
00:49:42,119 --> 00:49:45,160
calculating that hidden State and
1104
00:49:43,359 --> 00:49:48,319
there's no approximation that goes on
1105
00:49:45,160 --> 00:49:49,680
there so unlike in Unlimiformer that I
1106
00:49:48,319 --> 00:49:51,640
was talking about before where we needed
1107
00:49:49,680 --> 00:49:54,119
to make approximations none need to be
1108
00:49:51,640 --> 00:49:56,400
made in this
1109
00:49:54,119 --> 00:50:00,200
case however there is a problem with
1110
00:49:56,400 --> 00:50:02,040
doing backprop uh because in order to
1111
00:50:00,200 --> 00:50:05,839
do backprop normally you maintain the
1112
00:50:02,040 --> 00:50:09,720
entire you know state of the computation
1113
00:50:05,839 --> 00:50:12,400
graph and so a common method to
1114
00:50:09,720 --> 00:50:15,280
fix this is basically you pass along the
1115
00:50:12,400 --> 00:50:16,920
RNN state from the previous sentence but
1116
00:50:15,280 --> 00:50:19,240
you just don't do backprop into the
1117
00:50:16,920 --> 00:50:21,200
previous sentence and this is called
1118
00:50:19,240 --> 00:50:24,040
truncated backprop or truncated back
1119
00:50:21,200 --> 00:50:27,280
propagation through time and this allows
1120
00:50:24,040 --> 00:50:30,160
you to essentially train models with
1121
00:50:27,280 --> 00:50:32,319
infinite context um or at least models
1122
00:50:30,160 --> 00:50:33,720
that can pass along context infinitely
1123
00:50:32,319 --> 00:50:36,359
even if you're not back propping into
1124
00:50:33,720 --> 00:50:36,359
they Cod ear
1125
00:50:37,480 --> 00:50:43,520
there so of course a problem with this
1126
00:50:40,720 --> 00:50:45,880
over long contexts is that
1127
00:50:43,520 --> 00:50:47,520
recurrent models can be slow due to the
1128
00:50:45,880 --> 00:50:51,400
kind of sequential dependence they're
1129
00:50:47,520 --> 00:50:54,280
not ideal for um you know running on
1130
00:50:51,400 --> 00:50:57,359
gpus or things like that and this is
1131
00:50:54,280 --> 00:51:01,960
improved by recent architectures like
1132
00:50:57,359 --> 00:51:05,359
Mamba and RWKV which are more conducive
1133
00:51:01,960 --> 00:51:07,079
to GPU-based training um while still
1134
00:51:05,359 --> 00:51:08,599
maintaining linear time complexity and
1135
00:51:07,079 --> 00:51:11,480
so I'm looking forward to talking about
1136
00:51:08,599 --> 00:51:11,480
that more in a future
1137
00:51:13,000 --> 00:51:17,559
class so actually if we take this idea
1138
00:51:15,880 --> 00:51:20,440
of truncated back propagation through
1139
00:51:17,559 --> 00:51:22,359
time this can also be applied to
1140
00:51:20,440 --> 00:51:25,440
Transformers and there's a really nice
1141
00:51:22,359 --> 00:51:27,880
paper Transformer-XL also created by
1142
00:51:25,440 --> 00:51:31,119
Zihang Dai who was formerly at
1143
00:51:27,880 --> 00:51:33,119
CMU and what this does is this attempts
1144
00:51:31,119 --> 00:51:35,760
to fix vectors from the previous
1145
00:51:33,119 --> 00:51:39,440
sentence so if we have a standard
1146
00:51:35,760 --> 00:51:40,720
Transformer uh in a Transformer-XL
1147
00:51:39,440 --> 00:51:44,640
normally what we do in the standard
1148
00:51:40,720 --> 00:51:48,480
Transformer is each Vector attends back
1149
00:51:44,640 --> 00:51:50,920
to all the other vectors in the current
1150
00:51:48,480 --> 00:51:53,839
context what Transformer-XL does
1151
00:51:50,920 --> 00:51:56,359
instead is when you have a new segment
1152
00:51:53,839 --> 00:51:58,960
that you want to do backprop
1153
00:51:56,359 --> 00:52:01,200
into um you have a new segment that you
1154
00:51:58,960 --> 00:52:03,960
want to basically train over you also
1155
00:52:01,200 --> 00:52:06,400
attend to all of the previous tokens in
1156
00:52:03,960 --> 00:52:07,640
the previous segment but you don't do
1157
00:52:06,400 --> 00:52:10,319
backprop into
1158
00:52:07,640 --> 00:52:12,079
them so this is essentially truncated
1159
00:52:10,319 --> 00:52:14,480
backpropagation through time from the
1160
00:52:12,079 --> 00:52:17,760
Transformer
1161
00:52:14,480 --> 00:52:19,520
perspective this is also really nice
1162
00:52:17,760 --> 00:52:21,200
because what it allows you to do is if
1163
00:52:19,520 --> 00:52:25,880
you have a multi-layer
1164
00:52:21,200 --> 00:52:27,720
Transformer it allows you to attend far
1165
00:52:25,880 --> 00:52:30,520
back so if you look at the last layer
1166
00:52:27,720 --> 00:52:33,520
it's attending um to things in the
1167
00:52:30,520 --> 00:52:36,599
previous context window but the second
1168
00:52:33,520 --> 00:52:39,760
to last layer is attending to things in
1169
00:52:36,599 --> 00:52:41,520
the um not just one context window
1170
00:52:39,760 --> 00:52:44,079
before but multiple context windows
1171
00:52:41,520 --> 00:52:45,760
before and actually this allows you to
1172
00:52:44,079 --> 00:52:47,880
very effectively attend to a very long
1173
00:52:45,760 --> 00:52:51,720
context because each time kind of the
1174
00:52:47,880 --> 00:52:54,799
context expands in an exponential
1175
00:52:51,720 --> 00:52:56,520
manner so um recently there's a popular
1176
00:52:54,799 --> 00:52:57,799
model called Mistral that I'm sure a lot
1177
00:52:56,520 --> 00:52:59,480
of people have heard about and this is
1178
00:52:57,799 --> 00:53:01,920
using sliding window attention which is
1179
00:52:59,480 --> 00:53:04,160
essentially the same mechanism proposed
1180
00:53:01,920 --> 00:53:09,240
by Transformer-XL so this method is
1181
00:53:04,160 --> 00:53:09,240
still uh used in very practical systems
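A stripped-down, single-head sketch of that segment-level recurrence is below: the previous segment's keys and values are cached and detached (the truncated-backprop part), and a causal mask lets current tokens see both the cache and their own prefix.

import torch

def segment_attention(q, k_cur, v_cur, k_prev, v_prev):
    # q, k_cur, v_cur: (L, d) for the current segment; k_prev, v_prev: (M, d) cached states.
    k = torch.cat([k_prev.detach(), k_cur], dim=0)    # no backprop into the previous segment
    v = torch.cat([v_prev.detach(), v_cur], dim=0)
    scores = q @ k.T / (q.shape[-1] ** 0.5)
    L, M = q.shape[0], k_prev.shape[0]
    # position i may attend to all M cached tokens plus current tokens up to i
    disallowed = torch.arange(k.shape[0])[None, :] > (torch.arange(L)[:, None] + M)
    scores = scores.masked_fill(disallowed, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v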
1182
00:53:10,400 --> 00:53:17,359
another paper that has been
1183
00:53:13,440 --> 00:53:19,319
pretty influential in this general area
1184
00:53:17,359 --> 00:53:21,079
is something called sparse
1185
00:53:19,319 --> 00:53:23,359
Transformers and the way sparse
1186
00:53:21,079 --> 00:53:25,960
Transformers work is instead of
1187
00:53:23,359 --> 00:53:29,520
attending to every single previous state
1188
00:53:25,960 --> 00:53:32,640
you attend to every nth previous
1189
00:53:29,520 --> 00:53:34,599
States and what this allows you to do is
1190
00:53:32,640 --> 00:53:37,119
this allows you to essentially create
1191
00:53:34,599 --> 00:53:40,319
something like the strided uh
1192
00:53:37,119 --> 00:53:42,079
convolutions or um pyramidal recurrent
1193
00:53:40,319 --> 00:53:45,520
neural networks that I talked about
1194
00:53:42,079 --> 00:53:49,760
earlier um so what this looks like
1195
00:53:45,520 --> 00:53:51,079
essentially is you have um this like if
1196
00:53:49,760 --> 00:53:54,880
you have a particular state it might
1197
00:53:51,079 --> 00:53:56,480
attend to all of the previous n tokens
1198
00:53:54,880 --> 00:54:00,240
but then it
1199
00:53:56,480 --> 00:54:04,400
also attends to all of the
1200
00:54:00,240 --> 00:54:06,880
previous um kind of M chunks so you kind
1201
00:54:04,400 --> 00:54:08,920
of have a combination of local and
1202
00:54:06,880 --> 00:54:11,640
Global
1203
00:54:08,920 --> 00:54:14,760
attention or not local and Global but
1204
00:54:11,640 --> 00:54:16,760
local and kind of longer range attention
1205
00:54:14,760 --> 00:54:18,760
and this can be very effective because
1206
00:54:16,760 --> 00:54:22,319
you can attend to you know much longer
1207
00:54:18,760 --> 00:54:24,079
context with a minimal increase in
1208
00:54:22,319 --> 00:54:26,520
computational
1209
00:54:24,079 --> 00:54:28,720
complexity
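One way to picture this is as a fixed attention mask that combines a local band with a strided long-range pattern; the sketch below is a simplified illustration in that spirit, not the exact sparse-Transformer kernels.

import torch

def sparse_attention_mask(seq_len, local=4, stride=8):
    # position i may attend to the previous `local` tokens plus every `stride`-th earlier token
    i = torch.arange(seq_len)[:, None]
    j = torch.arange(seq_len)[None, :]
    causal = j <= i
    local_ok = (i - j) < local
    strided_ok = (j % stride) == (stride - 1)         # periodic "summary" positions
    return causal & (local_ok | strided_ok)           # True = attention allowed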
1210
00:54:26,520 --> 00:54:31,160
so another method that's a little bit
1211
00:54:28,720 --> 00:54:32,960
like this uh or it's very similar in
1212
00:54:31,160 --> 00:54:34,359
spirit but slightly different in
1213
00:54:32,960 --> 00:54:35,599
implementation is something called the
1214
00:54:34,359 --> 00:54:37,520
compressive
1215
00:54:35,599 --> 00:54:40,400
Transformer and in the compressive
1216
00:54:37,520 --> 00:54:43,000
Transformer you also have this idea of a
1217
00:54:40,400 --> 00:54:44,319
local memory and then a longer term
1218
00:54:43,000 --> 00:54:47,200
compressed
1219
00:54:44,319 --> 00:54:50,799
memory but you have an explicit
1220
00:54:47,200 --> 00:54:54,319
compression step that
1221
00:54:50,799 --> 00:54:58,079
directly essentially generates this uh
1222
00:54:54,319 --> 00:55:00,960
compressed memory itself and so this is a
1223
00:54:58,079 --> 00:55:04,119
little bit more flexible I guess it
1224
00:55:00,960 --> 00:55:06,280
allows you to take all of the you know
1225
00:55:04,119 --> 00:55:09,000
relevant things from your local memory
1226
00:55:06,280 --> 00:55:12,000
and compress it down so it's another
1227
00:55:09,000 --> 00:55:12,000
method that's worth thinking about
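The compression step itself can be as simple as pooling; the paper compares several learned compression functions, so the mean pooling below is only a stand-in to show the shape of the idea.

import torch

def compress_old_states(old_states, rate=4):
    # Instead of discarding states that fall out of the local window,
    # squeeze every `rate` of them into one compressed memory slot.
    length, dim = old_states.shape
    usable = (length // rate) * rate
    return old_states[:usable].reshape(-1, rate, dim).mean(dim=1)   # (length // rate, dim)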
1228
00:55:12,760 --> 00:55:18,400
finally uh there are some very
1229
00:55:15,799 --> 00:55:20,200
interesting methods that do low rank
1230
00:55:18,400 --> 00:55:23,039
approximations for
1231
00:55:20,200 --> 00:55:25,920
Transformers and so calculating the
1232
00:55:23,039 --> 00:55:29,119
attention Matrix is expensive but this
1233
00:55:25,920 --> 00:55:31,640
is a matrix and because it's a matrix we
1234
00:55:29,119 --> 00:55:32,640
can also approximate it with a lower
1235
00:55:31,640 --> 00:55:35,480
rank
1236
00:55:32,640 --> 00:55:38,559
Matrix and there's a couple methods that
1237
00:55:35,480 --> 00:55:40,599
do things uh like this uh the first one
1238
00:55:38,559 --> 00:55:42,680
is something called Linformer which
1239
00:55:40,599 --> 00:55:44,520
adds low rank linear projections into
1240
00:55:42,680 --> 00:55:47,319
the model at appropriate
1241
00:55:44,520 --> 00:55:50,359
places and um there's another one called
1242
00:55:47,319 --> 00:55:52,200
Nyströmformer which approximates using the
1243
00:55:50,359 --> 00:55:54,440
Nyström method which is based on sampling
1244
00:55:52,200 --> 00:55:56,520
Landmark points but basically the
1245
00:55:54,440 --> 00:56:00,319
general idea behind this is normally
1246
00:55:56,520 --> 00:56:03,400
we do this kind of softmax over you know
1247
00:56:00,319 --> 00:56:06,240
a very large attention Vector but
1248
00:56:03,400 --> 00:56:08,440
instead we can approximate the softmax
1249
00:56:06,240 --> 00:56:11,520
by having some low rank vectors kind of
1250
00:56:08,440 --> 00:56:12,799
like what we used in LoRA and uh
1251
00:56:11,520 --> 00:56:16,440
nonetheless get a reasonable
1252
00:56:12,799 --> 00:56:16,440
approximation of the softmax used in attention
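Here is a simplified single-head sketch of the low-rank idea, with a hypothetical learned projection matrix over the sequence dimension in the style of Linformer:

import torch

def low_rank_attention(q, k, v, proj):
    # q, k, v: (L, d); proj: (r, L) learned projection with r << L.
    k_small = proj @ k                                # (r, d): compress L keys down to r
    v_small = proj @ v                                # (r, d)
    scores = q @ k_small.T / (q.shape[-1] ** 0.5)     # (L, r) instead of (L, L)
    return torch.softmax(scores, dim=-1) @ v_small    # cost now scales linearly with L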
1253
00:56:17,799 --> 00:56:24,039
okay so we're nearing the end of
1254
00:56:21,520 --> 00:56:26,000
what I want to talk about today and
1255
00:56:24,039 --> 00:56:29,720
finally the thing that I'd like to talk
1256
00:56:26,000 --> 00:56:33,240
about is benchmarks for long context models
1257
00:56:29,720 --> 00:56:35,000
and there's a few benchmarks one very
1258
00:56:33,240 --> 00:56:37,359
well-known one is something called long
1259
00:56:35,000 --> 00:56:40,599
range Arena this is a composite
1260
00:56:37,359 --> 00:56:43,000
Benchmark containing mostly non NLP
1261
00:56:40,599 --> 00:56:45,280
tasks and it's definitely used for long
1262
00:56:43,000 --> 00:56:46,760
sequence modeling but the results on the
1263
00:56:45,280 --> 00:56:49,400
long range Arena actually tend to
1264
00:56:46,760 --> 00:56:51,599
diverge uh somewhat from the results
1265
00:56:49,400 --> 00:56:54,440
that you get for long-distance language
1266
00:56:51,599 --> 00:56:56,520
modeling so in addition to this another
1267
00:56:54,440 --> 00:56:58,400
benchmark that I uh personally like and
1268
00:56:56,520 --> 00:57:01,960
have used a bit is something called
1269
00:56:58,400 --> 00:57:05,720
SCROLLS which uh combines together a
1270
00:57:01,960 --> 00:57:07,960
whole bunch of kind of QA style or
1271
00:57:05,720 --> 00:57:10,920
summarization style tasks that have very
1272
00:57:07,960 --> 00:57:13,280
long contexts including over narratives
1273
00:57:10,920 --> 00:57:15,680
or books or government reports or other
1274
00:57:13,280 --> 00:57:17,280
things like that so you can also take a
1275
00:57:15,680 --> 00:57:20,680
look at this if you're interested in
1276
00:57:17,280 --> 00:57:20,680
kind of benchmarking longer range
1277
00:57:21,839 --> 00:57:28,280
models okay the final thing I'd like to
1278
00:57:24,559 --> 00:57:30,280
talk about is now that we have retriever
1279
00:57:28,280 --> 00:57:31,680
models we have reader models we maybe
1280
00:57:30,280 --> 00:57:34,000
even have reader models that can
1281
00:57:31,680 --> 00:57:35,520
effectively use very long contexts like
1282
00:57:34,000 --> 00:57:37,880
the ones that we retrieve over whole
1283
00:57:35,520 --> 00:57:39,240
documents how do we effectively use them
1284
00:57:37,880 --> 00:57:43,640
in our
1285
00:57:39,240 --> 00:57:46,680
models so there was a very nice paper um
1286
00:57:43,640 --> 00:57:48,880
by Nelson Liu at Stanford about a
1287
00:57:46,680 --> 00:57:51,160
phenomenon that was termed lost in the
1288
00:57:48,880 --> 00:57:53,079
middle and basically what it does is it
1289
00:57:51,160 --> 00:57:55,119
demonstrates that many many different
1290
00:57:53,079 --> 00:57:57,720
models including state-of-the-art
1291
00:57:55,119 --> 00:58:00,799
models pay less attention to things in
1292
00:57:57,720 --> 00:58:03,960
the middle of long context windows and
1293
00:58:00,799 --> 00:58:06,760
so if we have an answer and we put it in
1294
00:58:03,960 --> 00:58:09,200
you know the first position in
1295
00:58:06,760 --> 00:58:12,280
you know a concatenated context or the
1296
00:58:09,200 --> 00:58:13,799
20th position in a concatenated context
1297
00:58:12,280 --> 00:58:15,240
it tends to attend more to the ones at
1298
00:58:13,799 --> 00:58:18,359
the beginning or the
1299
00:58:15,240 --> 00:58:19,480
end in contrast the ones in the middle
1300
00:58:18,359 --> 00:58:22,760
kind of get
1301
00:58:19,480 --> 00:58:26,680
lost hence the name lost in the middle
1302
00:58:22,760 --> 00:58:29,520
and the problem with this is you know if
1303
00:58:26,680 --> 00:58:32,480
we are doing something like retrieval and
1304
00:58:29,520 --> 00:58:34,160
Reading then that's maybe not such a
1305
00:58:32,480 --> 00:58:35,680
huge problem because we could just put
1306
00:58:34,160 --> 00:58:37,680
you know the highest scoring documents
1307
00:58:35,680 --> 00:58:39,920
at the beginning that might even be more
1308
00:58:37,680 --> 00:58:42,440
effective than uh you know concatenating
1309
00:58:39,920 --> 00:58:44,160
lots of low scoring documents together
1310
00:58:42,440 --> 00:58:45,559
but if we want to read a really long
1311
00:58:44,160 --> 00:58:48,839
document and synthesize something
1312
00:58:45,559 --> 00:58:52,200
without doing kind of another uh scoring
1313
00:58:48,839 --> 00:58:54,200
step uh that can be an issue and also
1314
00:58:52,200 --> 00:58:56,359
you know our retriever is not perfect so
1315
00:58:54,200 --> 00:58:58,799
we would like the reader
1316
00:58:56,359 --> 00:59:00,520
model to do a good job with the outputs
1317
00:58:58,799 --> 00:59:04,839
that it has
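Following the observation above that models attend most to the beginning and end of the context, one simple heuristic (not the paper's own method) is to interleave documents so the strongest ones land at both ends:

def order_docs_for_prompt(scored_docs):
    # scored_docs: list of (document, retriever_score) pairs
    ranked = sorted(scored_docs, key=lambda pair: pair[1], reverse=True)
    front, back = [], []
    for i, (doc, _score) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(doc)
    # the best documents end up at the start and end; the weakest sit in the middle
    return front + back[::-1]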
1318
00:59:00,520 --> 00:59:06,359
so there are methods uh to ensure
1319
00:59:04,839 --> 00:59:09,440
use of relevant
1320
00:59:06,359 --> 00:59:12,119
context so of course better retrievers
1321
00:59:09,440 --> 00:59:14,880
make more relevant context you can do
1322
00:59:12,119 --> 00:59:16,240
you know reranking or other things like
1323
00:59:14,880 --> 00:59:17,280
that and only include the context that
1324
00:59:16,240 --> 00:59:19,680
looks most
1325
00:59:17,280 --> 00:59:22,880
relevant um or you know refine your
1326
00:59:19,680 --> 00:59:25,200
reader model but there's also methods
1327
00:59:22,880 --> 00:59:28,720
that can decide whether context should
1328
00:59:25,200 --> 00:59:32,400
be used in the first place so um there
1329
00:59:28,720 --> 00:59:35,440
are methods uh to decide
1330
00:59:32,400 --> 00:59:37,559
whether to include passages or not and
1331
00:59:35,440 --> 00:59:39,920
also uh recently we proposed a method to
1332
00:59:37,559 --> 00:59:42,640
filter down to parts of retrieved
1333
00:59:39,920 --> 00:59:44,920
passages uh to have only appropriate
1334
00:59:42,640 --> 00:59:47,480
content and this is a model uh that we
1335
00:59:44,920 --> 00:59:49,319
called FILCO it basically filters the
1336
00:59:47,480 --> 00:59:52,160
context down to the most relevant
1337
00:59:49,319 --> 00:59:53,920
content that we think is appropriate and
1338
00:59:52,160 --> 00:59:56,960
that allows us to get better results
1339
00:59:53,920 --> 00:59:56,960
when it's fed to the generator
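The filter itself is a trained model, but the overall interface can be illustrated with a crude lexical-overlap stand-in like this:

def filter_passages(passages, query, keep=3):
    # Score each sentence by word overlap with the query and keep only the top few
    # before handing the filtered context to the generator.
    query_tokens = set(query.lower().split())
    sentences = [s for passage in passages for s in passage.split(". ")]
    ranked = sorted(sentences,
                    key=lambda s: len(query_tokens & set(s.lower().split())),
                    reverse=True)
    return " ".join(ranked[:keep])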
1340
00:59:57,079 --> 01:00:03,640
so that's all I have for today
1341
01:00:00,319 --> 01:00:06,200
um thank you for watching the video and
1342
01:00:03,640 --> 01:00:08,599
for people in the class I'll be happy to
1343
01:00:06,200 --> 01:00:13,079
take questions on Piazza or during the
1344
01:00:08,599 --> 01:00:13,079
office hours that I had planned thanks a
1345
01:00:15,319 --> 01:00:18,319
lot