|
WEBVTT |
|
|
|
00:00:00.040 --> 00:00:03.880 |
|
so today I'm going to talk about |
|
|
|
00:00:01.319 --> 00:00:06.680 |
|
retrieval and retrieval augmented |
|
|
|
00:00:03.880 --> 00:00:09.040 |
|
generation so if we look at our standard |
|
|
|
00:00:06.680 --> 00:00:10.880 |
|
prompting flow normally what we do is we |
|
|
|
00:00:09.040 --> 00:00:14.160 |
|
combine together a prompt template with |
|
|
|
00:00:10.880 --> 00:00:16.600 |
|
an input so if we say please answer this |
|
|
|
00:00:14.160 --> 00:00:18.720 |
|
question I think Vin Diesel has been a |
|
|
|
00:00:16.600 --> 00:00:21.000 |
|
voice actor for several pictures and TV
|
|
|
00:00:18.720 --> 00:00:24.000 |
|
series do you know what their names |
|
|
|
00:00:21.000 --> 00:00:25.400 |
|
are we could get a response from a |
|
|
|
00:00:24.000 --> 00:00:26.840 |
|
language model but there are several |
|
|
|
00:00:25.400 --> 00:00:30.840 |
|
problems with |
|
|
|
00:00:26.840 --> 00:00:33.680 |
|
this the first is accuracy issues |
|
|
|
00:00:30.840 --> 00:00:36.160 |
|
the models generally have a knowledge |
|
|
|
00:00:33.680 --> 00:00:38.879 |
|
cut off so the parameters are usually |
|
|
|
00:00:36.160 --> 00:00:41.120 |
|
only updated to a particular time so for |
|
|
|
00:00:38.879 --> 00:00:43.200 |
|
example if a new Vin Diesel TV series |
|
|
|
00:00:41.120 --> 00:00:44.960 |
|
comes out then the model that was |
|
|
|
00:00:43.200 --> 00:00:47.440 |
|
trained up to a certain time Point won't |
|
|
|
00:00:44.960 --> 00:00:51.000 |
|
be able to know anything about |
|
|
|
00:00:47.440 --> 00:00:53.600 |
|
it there's also issues of private data |
|
|
|
00:00:51.000 --> 00:00:55.320 |
|
so data stored in private text or data |
|
|
|
00:00:53.600 --> 00:00:57.840 |
|
repositories is not suitable for |
|
|
|
00:00:55.320 --> 00:01:02.600 |
|
training for a number of reasons number |
|
|
|
00:00:57.840 --> 00:01:05.199 |
|
one it's not available to to particular |
|
|
|
00:01:02.600 --> 00:01:07.799 |
|
language model training providers such |
|
|
|
00:01:05.199 --> 00:01:10.720 |
|
as you know open AI or Google or anybody |
|
|
|
00:01:07.799 --> 00:01:13.840 |
|
else like this the second thing is |
|
|
|
00:01:10.720 --> 00:01:16.799 |
|
Access Control issues so even if you're |
|
|
|
00:01:13.840 --> 00:01:17.840 |
|
within an organization that has lots of |
|
|
|
00:01:16.799 --> 00:01:20.799 |
|
private data and you can train a |
|
|
|
00:01:17.840 --> 00:01:22.600 |
|
language model on that certain people in |
|
|
|
00:01:20.799 --> 00:01:24.200 |
|
the organization may have access to |
|
|
|
00:01:22.600 --> 00:01:27.640 |
|
certain varieties of data and other |
|
|
|
00:01:24.200 --> 00:01:29.400 |
|
people may not so it's not just solely |
|
|
|
00:01:27.640 --> 00:01:31.520 |
|
an issue of third party providers it's |
|
|
|
00:01:29.400 --> 00:01:33.840 |
|
an issue of organization level Access |
|
|
|
00:01:31.520 --> 00:01:36.159 |
|
Control in |
|
|
|
00:01:33.840 --> 00:01:38.920 |
|
general in addition there are learning |
|
|
|
00:01:36.159 --> 00:01:40.320 |
|
failures so even for data that the model |
|
|
|
00:01:38.920 --> 00:01:42.640 |
|
was trained on it might not be |
|
|
|
00:01:40.320 --> 00:01:44.399 |
|
sufficient to get the right answer and |
|
|
|
00:01:42.640 --> 00:01:47.799 |
|
this is particularly the case for very |
|
|
|
00:01:44.399 --> 00:01:52.320 |
|
very large uh training data sets and |
|
|
|
00:01:47.799 --> 00:01:53.920 |
|
models that are you know modestly sized |
|
|
|
00:01:52.320 --> 00:01:55.880 |
|
because the models very often won't be |
|
|
|
00:01:53.920 --> 00:01:58.360 |
|
able to learn from a single look at a |
|
|
|
00:01:55.880 --> 00:02:02.039 |
|
particular fact or whatever else like
|
|
|
00:01:58.360 --> 00:02:02.039 |
|
this especially if it appeared early in
|
|
|
00:02:02.159 --> 00:02:08.160 |
|
training another thing is even if the |
|
|
|
00:02:05.240 --> 00:02:10.599 |
|
answer is correct it might not be |
|
|
|
00:02:08.160 --> 00:02:13.440 |
|
verifiable so you might want to be very |
|
|
|
00:02:10.599 --> 00:02:15.000 |
|
sure that the model is not making any |
|
|
|
00:02:13.440 --> 00:02:17.640 |
|
accuracy |
|
|
|
00:02:15.000 --> 00:02:19.040 |
|
problems and so in order to do that very |
|
|
|
00:02:17.640 --> 00:02:21.879 |
|
often a human will want to go back to |
|
|
|
00:02:19.040 --> 00:02:21.879 |
|
the source of the |
|
|
|
00:02:22.200 --> 00:02:27.319 |
|
data so to solve this there's a method |
|
|
|
00:02:25.480 --> 00:02:29.200 |
|
called retrieval augmented generation |
|
|
|
00:02:27.319 --> 00:02:30.280 |
|
which will also be the topic of our |
|
|
|
00:02:29.200 --> 00:02:32.599 |
|
second assignment |
|
|
|
00:02:30.280 --> 00:02:35.680 |
|
here and the way it works is you |
|
|
|
00:02:32.599 --> 00:02:38.319 |
|
retrieve relevant passages |
|
|
|
00:02:35.680 --> 00:02:40.680 |
|
efficiently ones that kind of entail the |
|
|
|
00:02:38.319 --> 00:02:42.480 |
|
answer to a question and then read the |
|
|
|
00:02:40.680 --> 00:02:46.080 |
|
passages to answer the |
|
|
|
00:02:42.480 --> 00:02:48.599 |
|
query so we have documents like this we |
|
|
|
00:02:46.080 --> 00:02:52.360 |
|
have a query based on the query we form |
|
|
|
00:02:48.599 --> 00:02:55.360 |
|
retrieval we get a whole bunch of uh |
|
|
|
00:02:52.360 --> 00:02:57.560 |
|
passages we do reading and then we get |
|
|
|
00:02:55.360 --> 00:02:57.560 |
|
the |
|
|
|
00:02:58.280 --> 00:03:04.440 |
|
answer so this is in fact implemented in |
|
|
|
00:03:01.720 --> 00:03:07.599 |
|
many or even most uh language modeling |
|
|
|
00:03:04.440 --> 00:03:09.840 |
|
providers including open AI so to give |
|
|
|
00:03:07.599 --> 00:03:11.480 |
|
an example I asked the question that I |
|
|
|
00:03:09.840 --> 00:03:12.879 |
|
just said about Vin Diesel's voice |
|
|
|
00:03:11.480 --> 00:03:16.599 |
|
acting and TV |
|
|
|
00:03:12.879 --> 00:03:19.760 |
|
series and ChatGPT gave me an answer
|
|
|
00:03:16.599 --> 00:03:22.440 |
|
and you can see that ChatGPT's answer
|
|
|
00:03:19.760 --> 00:03:24.720 |
|
includes several places with quotes um |
|
|
|
00:03:22.440 --> 00:03:28.159 |
|
the little blue quotes
|
|
|
00:03:24.720 --> 00:03:30.760 |
|
there and if you click on the quote it |
|
|
|
00:03:28.159 --> 00:03:33.120 |
|
tells you where the information Source |
|
|
|
00:03:30.760 --> 00:03:35.000 |
|
came from and so this one says Behind
|
|
|
00:03:33.120 --> 00:03:37.760 |
|
The Voice Actors: Vin
|
|
|
00:03:35.000 --> 00:03:39.920 |
|
Diesel, and Behind The Voice Actors TV
|
|
|
00:03:37.760 --> 00:03:42.959 |
|
Shows: Big Mouth, Vin
|
|
|
00:03:39.920 --> 00:03:45.640 |
|
Diesel. Now if we look
|
|
|
00:03:42.959 --> 00:03:48.640 |
|
closer into this answer we'll see that |
|
|
|
00:03:45.640 --> 00:03:49.959 |
|
it's not perfect even though it is uh |
|
|
|
00:03:48.640 --> 00:03:52.519 |
|
performing retrieval augmented |
|
|
|
00:03:49.959 --> 00:03:54.840 |
|
Generations so for example I only asked |
|
|
|
00:03:52.519 --> 00:03:57.200 |
|
about TV series but it's giving me lots |
|
|
|
00:03:54.840 --> 00:03:59.680 |
|
of things about movies where it says |
|
|
|
00:03:57.200 --> 00:04:01.319 |
|
Groot in Guardians of the Galaxy volume |
|
|
|
00:03:59.680 --> 00:04:04.480 |
|
3 2023 |
|
|
|
00:04:01.319 --> 00:04:07.200 |
|
movie and in fact uh Vin Diesel was not |
|
|
|
00:04:04.480 --> 00:04:10.920 |
|
even voicing a character named gut here |
|
|
|
00:04:07.200 --> 00:04:13.480 |
|
so that's definitely an accuracy |
|
|
|
00:04:10.920 --> 00:04:15.079 |
|
mistake and separately there's a place |
|
|
|
00:04:13.480 --> 00:04:17.639 |
|
where it says additionally though the |
|
|
|
00:04:15.079 --> 00:04:19.959 |
|
website for Big Mouth lists Vin Diesel it
|
|
|
00:04:17.639 --> 00:04:22.040 |
|
appears to be a misunderstanding or error
|
|
|
00:04:19.959 --> 00:04:25.360 |
|
as Nick Kroll is credited as the voice
|
|
|
00:04:22.040 --> 00:04:27.800 |
|
of Vin Diesel in that show so there |
|
|
|
00:04:25.360 --> 00:04:30.039 |
|
actually Nick Kroll was acting as Vin
|
|
|
00:04:27.800 --> 00:04:32.800 |
|
Diesel but that's kind of a
|
|
|
00:04:30.039 --> 00:04:34.600 |
|
misunderstanding of the reader model but |
|
|
|
00:04:32.800 --> 00:04:36.600 |
|
anyway you can get the general idea here |
|
|
|
00:04:34.600 --> 00:04:40.199 |
|
you can also see that it's not perfect |
|
|
|
00:04:36.600 --> 00:04:42.720 |
|
even for very strong models like GPT
|
|
|
00:04:40.199 --> 00:04:44.800 |
|
4 so now I'd like to go into the actual |
|
|
|
00:04:42.720 --> 00:04:46.759 |
|
methodology that we use for this uh we |
|
|
|
00:04:44.800 --> 00:04:50.360 |
|
have retrieval |
|
|
|
00:04:46.759 --> 00:04:53.160 |
|
methods and for the retrieval methods we |
|
|
|
00:04:50.360 --> 00:04:55.160 |
|
have uh quite a few different options |
|
|
|
00:04:53.160 --> 00:04:57.960 |
|
I'm going to go through each one of them |
|
|
|
00:04:55.160 --> 00:05:00.960 |
|
at a time so sparse retrieval document |
|
|
|
00:04:57.960 --> 00:05:04.240 |
|
level dense retrieval, token-level dense
|
|
|
00:05:00.960 --> 00:05:08.039 |
|
retrieval, cross-encoder reranking, and
|
|
|
00:05:04.240 --> 00:05:09.320 |
|
blackbox retrieval so blackbox retrieval |
|
|
|
00:05:08.039 --> 00:05:11.280 |
|
I'm not really going to go into it a |
|
|
|
00:05:09.320 --> 00:05:16.000 |
|
whole lot basically this is just asking |
|
|
|
00:05:11.280 --> 00:05:17.560 |
|
a blackbox search engine to retrieve uh |
|
|
|
00:05:16.000 --> 00:05:20.000 |
|
you know the relevant context and |
|
|
|
00:05:17.560 --> 00:05:22.560 |
|
getting the top several results |
|
|
|
00:05:20.000 --> 00:05:24.039 |
|
nonetheless this is a pretty you know |
|
|
|
00:05:22.560 --> 00:05:26.800 |
|
reasonable method to do it if you want |
|
|
|
00:05:24.039 --> 00:05:29.080 |
|
to do search over you know lots of data |
|
|
|
00:05:26.800 --> 00:05:32.759 |
|
that exists on the internet already and |
|
|
|
00:05:29.080 --> 00:05:36.600 |
|
that in fact is what ChatGPT does, it looks
|
|
|
00:05:32.759 --> 00:05:39.240 |
|
up on Bing by generating a query to |
|
|
|
00:05:36.600 --> 00:05:41.560 |
|
Bing so anyway let's go into the actual |
|
|
|
00:05:39.240 --> 00:05:43.840 |
|
methods that you develop and control |
|
|
|
00:05:41.560 --> 00:05:46.600 |
|
yourself so the first one is sparse |
|
|
|
00:05:43.840 --> 00:05:48.479 |
|
retrieval and the way this works is you |
|
|
|
00:05:46.600 --> 00:05:50.440 |
|
express the query and document as a |
|
|
|
00:05:48.479 --> 00:05:53.680 |
|
sparse word frequency Vector usually |
|
|
|
00:05:50.440 --> 00:05:58.759 |
|
normalized by length and so if I ask uh |
|
|
|
00:05:53.680 --> 00:06:01.720 |
|
query what is NLP we get a vector where |
|
|
|
00:05:58.759 --> 00:06:04.120 |
|
each row of the vector corresponds to a
|
|
|
00:06:01.720 --> 00:06:07.919 |
|
different |
|
|
|
00:06:04.120 --> 00:06:12.960 |
|
token and we asked what is |
|
|
|
00:06:07.919 --> 00:06:16.360 |
|
NLP and so uh the places for what NLP |
|
|
|
00:06:12.960 --> 00:06:18.199 |
|
and is will all have a non-zero value |
|
|
|
00:06:16.360 --> 00:06:20.199 |
|
and everything else will have a zero |
|
|
|
00:06:18.199 --> 00:06:21.720 |
|
value and we also normalize by the |
|
|
|
00:06:20.199 --> 00:06:24.120 |
|
length of the vector so we get something
|
|
|
00:06:21.720 --> 00:06:24.120 |
|
like |
|
|
|
00:06:24.840 --> 00:06:28.440 |
|
0.33, 0.33, 0.33. Then we have a whole bunch of
|
|
|
00:06:26.759 --> 00:06:30.720 |
|
documents so the first document says |
|
|
|
00:06:28.440 --> 00:06:31.759 |
|
what is life, candy is life, someone really
|
|
|
00:06:30.720 --> 00:06:33.960 |
|
likes |
|
|
|
00:06:31.759 --> 00:06:36.000 |
|
candy we also have another one that says |
|
|
|
00:06:33.960 --> 00:06:38.360 |
|
NLP is an acronym for natural language
|
|
|
00:06:36.000 --> 00:06:39.479 |
|
processing so this is a pretty good uh |
|
|
|
00:06:38.360 --> 00:06:42.479 |
|
you |
|
|
|
00:06:39.479 --> 00:06:44.840 |
|
know answer to our |
|
|
|
00:06:42.479 --> 00:06:48.039 |
|
question then we also have I like to do |
|
|
|
00:06:44.840 --> 00:06:49.360 |
|
good research on NLP which is you know a |
|
|
|
00:06:48.039 --> 00:06:51.360 |
|
nice sentiment but not a very good |
|
|
|
00:06:49.360 --> 00:06:54.400 |
|
answer to our question I |
|
|
|
00:06:51.360 --> 00:06:59.479 |
|
guess so if we look at the vectors here |
|
|
|
00:06:54.400 --> 00:07:03.280 |
|
we have uh what and candy and is have uh |
|
|
|
00:06:59.479 --> 00:07:07.120 |
|
a fairly high |
|
|
|
00:07:03.280 --> 00:07:12.520 |
|
score and we have here NLP and is have a |
|
|
|
00:07:07.120 --> 00:07:16.479 |
|
high score and NLP has a nonzero
|
|
|
00:07:12.520 --> 00:07:18.400 |
|
score. So based on this we find the
|
|
|
00:07:16.479 --> 00:07:20.560 |
|
document with the highest
|
|
|
00:07:18.400 --> 00:07:22.039 |
|
inner product or cosine similarity in |
|
|
|
00:07:20.560 --> 00:07:24.360 |
|
the document |
|
|
|
00:07:22.039 --> 00:07:27.000 |
|
collection and so if we take the inner |
|
|
|
00:07:24.360 --> 00:07:28.759 |
|
product between these vectors we |
|
|
|
00:07:27.000 --> 00:07:31.280 |
|
actually see that the first one got the |
|
|
|
00:07:28.759 --> 00:07:34.479 |
|
highest score because of its |
|
|
|
00:07:31.280 --> 00:07:37.440 |
|
relatively High values for the words |
|
|
|
00:07:34.479 --> 00:07:37.440 |
|
what and |
|
|
|
00:07:38.160 --> 00:07:43.759 |
|
is |
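
As a rough sketch of what this term-frequency scoring looks like in code (not from the lecture, just an illustration using only the Python standard library and a toy whitespace tokenizer):

```python
from collections import Counter

def tf_vector(text):
    # Sparse word-frequency vector, normalized by the length of the text.
    tokens = text.lower().split()
    counts = Counter(tokens)
    return {tok: c / len(tokens) for tok, c in counts.items()}

def inner_product(vec_a, vec_b):
    # Sparse inner product: only tokens present in both vectors contribute.
    return sum(w * vec_b.get(tok, 0.0) for tok, w in vec_a.items())

docs = [
    "what is life candy is life someone really likes candy",
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
]
query = tf_vector("what is NLP")
scores = [inner_product(query, tf_vector(d)) for d in docs]
print(scores)  # the first document scores highest, mostly on "what" and "is"
```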
|
|
|
00:07:40.199 --> 00:07:46.720 |
|
so as you can see common words like what |
|
|
|
00:07:43.759 --> 00:07:49.000 |
|
and is can get a high score kind of |
|
|
|
00:07:46.720 --> 00:07:51.800 |
|
regardless of whether a document is very |
|
|
|
00:07:49.000 --> 00:07:53.919 |
|
relevant and so one way we can fix this |
|
|
|
00:07:51.800 --> 00:07:55.960 |
|
is through something called term |
|
|
|
00:07:53.919 --> 00:07:59.479 |
|
weighting and the way that term weighting
|
|
|
00:07:55.960 --> 00:08:02.680 |
|
works is in addition to having this |
|
|
|
00:07:59.479 --> 00:08:04.599 |
|
Vector that |
|
|
|
00:08:02.680 --> 00:08:07.680 |
|
calculates |
|
|
|
00:08:04.599 --> 00:08:10.680 |
|
the frequency within a particular |
|
|
|
00:08:07.680 --> 00:08:13.639 |
|
document we also have an upweighting |
|
|
|
00:08:10.680 --> 00:08:15.599 |
|
term that gives higher weight to low |
|
|
|
00:08:13.639 --> 00:08:18.199 |
|
frequency words because low frequency |
|
|
|
00:08:15.599 --> 00:08:20.280 |
|
words like NLP tend to be more |
|
|
|
00:08:18.199 --> 00:08:22.759 |
|
informative about whether the document |
|
|
|
00:08:20.280 --> 00:08:25.240 |
|
is relevant than high frequency words |
|
|
|
00:08:22.759 --> 00:08:27.080 |
|
like 'what' and 'is', because these high
|
|
|
00:08:25.240 --> 00:08:31.320 |
|
frequency words like 'what' and 'is' could
|
|
|
00:08:27.080 --> 00:08:34.279 |
|
happen kind of regardless of whether
|
|
|
00:08:31.320 --> 00:08:36.680 |
|
the document is relevant to the
|
|
|
00:08:34.279 --> 00:08:41.800 |
|
particular terms the person is asking |
|
|
|
00:08:36.680 --> 00:08:44.000 |
|
about so one well used and easy to |
|
|
|
00:08:41.800 --> 00:08:46.560 |
|
understand version of this is TF-IDF
|
|
|
00:08:44.000 --> 00:08:48.839 |
|
or term frequency-inverse document
|
|
|
00:08:46.560 --> 00:08:51.200 |
|
frequency. So the way we define term
|
|
|
00:08:48.839 --> 00:08:52.959 |
|
frequency is exactly what I talked about |
|
|
|
00:08:51.200 --> 00:08:56.959 |
|
before so it's basically the frequency |
|
|
|
00:08:52.959 --> 00:08:59.839 |
|
of the term t in the document d
|
|
|
00:08:56.959 --> 00:09:01.640 |
|
normalized by the total term frequency |
|
|
|
00:08:59.839 --> 00:09:03.680 |
|
within the document so that that's what |
|
|
|
00:09:01.640 --> 00:09:06.800 |
|
I already showed in the previous |
|
|
|
00:09:03.680 --> 00:09:09.360 |
|
slide and then inverse document frequency is a
|
|
|
00:09:06.800 --> 00:09:13.760 |
|
little bit more involved but basically |
|
|
|
00:09:09.360 --> 00:09:15.760 |
|
the way this works is we have log of the |
|
|
|
00:09:13.760 --> 00:09:18.160 |
|
total number of documents in the |
|
|
|
00:09:15.760 --> 00:09:24.040 |
|
collection divided |
|
|
|
00:09:18.160 --> 00:09:26.760 |
|
by the number of documents that this
|
|
|
00:09:24.040 --> 00:09:30.279 |
|
term appeared in so if a term
|
|
|
00:09:26.760 --> 00:09:33.360 |
|
appears in many documents it
|
|
|
00:09:30.279 --> 00:09:36.120 |
|
will
|
|
|
00:09:33.360 --> 00:09:39.240 |
|
have a low IDF score uh one that's close |
|
|
|
00:09:36.120 --> 00:09:41.519 |
|
to zero but if it rarely appears it will |
|
|
|
00:09:39.240 --> 00:09:44.120 |
|
have a high IDF score so basically this |
|
|
|
00:09:41.519 --> 00:09:45.040 |
|
is upweighting our infrequent terms and
|
|
|
00:09:44.120 --> 00:09:47.560 |
|
then for |
|
|
|
00:09:45.040 --> 00:09:51.320 |
|
TF-IDF we basically multiply these two
|
|
|
00:09:47.560 --> 00:09:53.120 |
|
terms together and we upweight the low |
|
|
|
00:09:51.320 --> 00:09:55.640 |
|
frequency words
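
Here is a minimal sketch of TF-IDF scoring that follows the definitions just given (TF normalized by document length, IDF as the log of the collection size over document frequency); it is only an illustration on a toy collection:

```python
import math

def tf(term, doc_tokens):
    # Term frequency, normalized by document length.
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, collection):
    # log(number of documents / number of documents containing the term).
    n_containing = sum(1 for doc in collection if term in doc)
    return math.log(len(collection) / n_containing) if n_containing else 0.0

def tfidf_score(query_tokens, doc_tokens, collection):
    # Inner product of the TF-IDF weighted query and document vectors.
    return sum(
        tf(t, query_tokens) * idf(t, collection) * tf(t, doc_tokens) * idf(t, collection)
        for t in set(query_tokens)
    )

collection = [d.lower().split() for d in [
    "what is life candy is life someone really likes candy",
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
]]
query = "what is NLP".lower().split()
print([tfidf_score(query, doc, collection) for doc in collection])
# Note: in this tiny toy collection "what" is itself a rare word, so the effect
# is muted; over a realistic collection, words like "what" and "is" get IDF
# values near zero and stop dominating the score.
```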
|
|
|
00:09:53.120 --> 00:10:00.519 |
|
there's another version of this
|
|
|
00:09:55.640 --> 00:10:03.640 |
|
called BM25 that is widely used
|
|
|
00:10:00.519 --> 00:10:05.800 |
|
um this is more involved so I'm not |
|
|
|
00:10:03.640 --> 00:10:08.120 |
|
going to go into all of the details but |
|
|
|
00:10:05.800 --> 00:10:12.399 |
|
basically if you remember back to the |
|
|
|
00:10:08.120 --> 00:10:13.720 |
|
lecture on count-based language models |
|
|
|
00:10:12.399 --> 00:10:14.880 |
|
there were a bunch of smoothing |
|
|
|
00:10:13.720 --> 00:10:18.839 |
|
techniques for these count-based |
|
|
|
00:10:14.880 --> 00:10:21.839 |
|
language models and this uses uh kind of |
|
|
|
00:10:18.839 --> 00:10:25.839 |
|
a multiplicative additive smoothing
|
|
|
00:10:21.839 --> 00:10:27.160 |
|
term to upweight things instead of using
|
|
|
00:10:25.839 --> 00:10:30.200 |
|
the term |
|
|
|
00:10:27.160 --> 00:10:33.399 |
|
frequency and uh the actual formula is |
|
|
|
00:10:30.200 --> 00:10:37.240 |
|
here K and B are kind of |
|
|
|
00:10:33.399 --> 00:10:39.360 |
|
hyperparameters and avgdl is the
|
|
|
00:10:37.240 --> 00:10:40.639 |
|
average document length the details of |
|
|
|
00:10:39.360 --> 00:10:42.120 |
|
this are not really important but |
|
|
|
00:10:40.639 --> 00:10:43.800 |
|
basically what you should know is that |
|
|
|
00:10:42.120 --> 00:10:45.639 |
|
this is doing some smoothing on the term |
|
|
|
00:10:43.800 --> 00:10:48.240 |
|
frequencies and you can look in more |
|
|
|
00:10:45.639 --> 00:10:48.240 |
|
detail if you're interested
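
For reference, a sketch of one common form of BM25 consistent with the description above; the exact IDF variant and the default values of the k1 and b hyperparameters differ between implementations, so treat this as illustrative rather than canonical:

```python
import math
from collections import Counter

def bm25_score(query_tokens, doc_tokens, collection, k1=1.5, b=0.75):
    # avgdl is the average document length over the collection.
    avgdl = sum(len(d) for d in collection) / len(collection)
    counts = Counter(doc_tokens)
    score = 0.0
    for t in set(query_tokens):
        n_containing = sum(1 for d in collection if t in d)
        if n_containing == 0:
            continue
        idf = math.log((len(collection) - n_containing + 0.5) / (n_containing + 0.5) + 1)
        f = counts[t]
        # The denominator smooths the raw term frequency and normalizes by the
        # document length relative to avgdl, controlled by k1 and b.
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_tokens) / avgdl))
    return score
```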
|
|
|
00:10:49.160 --> 00:10:54.920 |
|
so now that we have this sort
|
|
|
00:10:52.880 --> 00:10:57.959 |
|
of term |
|
|
|
00:10:54.920 --> 00:11:00.320 |
|
based sparse vector we would like to
|
|
|
00:10:57.959 --> 00:11:03.320 |
|
use this to look up relevant documents |
|
|
|
00:11:00.320 --> 00:11:06.000 |
|
in a collection very quickly because you |
|
|
|
00:11:03.320 --> 00:11:08.000 |
|
know we might have a collection that's |
|
|
|
00:11:06.000 --> 00:11:09.720 |
|
extremely large like as large as the |
|
|
|
00:11:08.000 --> 00:11:12.320 |
|
entire internet like what Google is |
|
|
|
00:11:09.720 --> 00:11:14.160 |
|
doing when it searches and so in order |
|
|
|
00:11:12.320 --> 00:11:16.240 |
|
to solve this we need a data structure |
|
|
|
00:11:14.160 --> 00:11:17.279 |
|
that allows for efficient sparse lookup |
|
|
|
00:11:16.240 --> 00:11:19.480 |
|
of |
|
|
|
00:11:17.279 --> 00:11:23.720 |
|
vectors and so we have all of these |
|
|
|
00:11:19.480 --> 00:11:27.279 |
|
sparse vectors like this |
|
|
|
00:11:23.720 --> 00:11:31.240 |
|
and we uh basically turn this into an |
|
|
|
00:11:27.279 --> 00:11:34.720 |
|
index where we have something like a you |
|
|
|
00:11:31.240 --> 00:11:37.920 |
|
know python style dictionary or map that |
|
|
|
00:11:34.720 --> 00:11:41.079 |
|
has as its key each word we would
|
|
|
00:11:37.920 --> 00:11:45.000 |
|
like to look up and as its value
|
|
|
00:11:41.079 --> 00:11:48.480 |
|
the corresponding um index of that |
|
|
|
00:11:45.000 --> 00:11:50.480 |
|
document so for example what in our case |
|
|
|
00:11:48.480 --> 00:11:54.200 |
|
here only appears in document one so it |
|
|
|
00:11:50.480 --> 00:11:56.279 |
|
would point to document one candy uh |
|
|
|
00:11:54.200 --> 00:11:58.560 |
|
also appears in document one NLP appears |
|
|
|
00:11:56.279 --> 00:11:59.839 |
|
in two and three and so you can create |
|
|
|
00:11:58.560 --> 00:12:02.760 |
|
this index like this and this is
|
|
|
00:11:59.839 --> 00:12:02.760 |
|
called an inverted index
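
A minimal sketch of building and using such an inverted index in plain Python (real systems like Lucene add compression, scoring, and much more):

```python
from collections import defaultdict

def build_inverted_index(docs):
    # Map each word to the set of document ids that contain it.
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for tok in set(text.lower().split()):
            index[tok].add(doc_id)
    return index

docs = [
    "what is life candy is life someone really likes candy",
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
]
index = build_inverted_index(docs)
print(index["what"])   # {0}
print(index["nlp"])    # {1, 2}

# At query time, only documents sharing at least one query word need scoring.
query = "what is NLP".lower().split()
candidates = set().union(*(index.get(t, set()) for t in query))
print(sorted(candidates))   # [0, 1, 2]
```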
|
|
|
00:12:03.079 --> 00:12:08.760 |
|
this is an important application
|
|
|
00:12:06.000 --> 00:12:11.600 |
|
of course so there's lots of software |
|
|
|
00:12:08.760 --> 00:12:14.920 |
|
the most kind of typical software for this
|
|
|
00:12:11.600 --> 00:12:18.760 |
|
is Apache Lucene so if you want to build
|
|
|
00:12:14.920 --> 00:12:21.639 |
|
a big index uh to look up vectors using |
|
|
|
00:12:18.760 --> 00:12:24.160 |
|
this sparse index like this you can uh |
|
|
|
00:12:21.639 --> 00:12:24.160 |
|
take a look at |
|
|
|
00:12:26.160 --> 00:12:30.880 |
|
Lucene. So the next thing I'd like to talk
|
|
|
00:12:28.399 --> 00:12:33.199 |
|
about is dense retrieval and the way |
|
|
|
00:12:30.880 --> 00:12:36.000 |
|
dense retrieval works is you encode the |
|
|
|
00:12:33.199 --> 00:12:37.240 |
|
document and query into a dense vector
|
|
|
00:12:36.000 --> 00:12:40.240 |
|
and find the nearest |
|
|
|
00:12:37.240 --> 00:12:42.160 |
|
neighbor in order to do this encoding |
|
|
|
00:12:40.240 --> 00:12:44.639 |
|
you can use a number of things you can |
|
|
|
00:12:42.160 --> 00:12:47.440 |
|
use out of the box embeddings or you can |
|
|
|
00:12:44.639 --> 00:12:49.959 |
|
use learned embeddings specifically |
|
|
|
00:12:47.440 --> 00:12:53.519 |
|
created for the purpose of |
|
|
|
00:12:49.959 --> 00:12:56.240 |
|
retrieving and so what we do is we take |
|
|
|
00:12:53.519 --> 00:12:57.920 |
|
all of these uh documents here we |
|
|
|
00:12:56.240 --> 00:12:59.920 |
|
convert them into embeddings using |
|
|
|
00:12:57.920 --> 00:13:04.040 |
|
whatever embedding method that we want |
|
|
|
00:12:59.920 --> 00:13:05.920 |
|
to use we then have a query and we take |
|
|
|
00:13:04.040 --> 00:13:07.720 |
|
that query and we match it and find the |
|
|
|
00:13:05.920 --> 00:13:10.040 |
|
nearest neighbor here
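
A minimal sketch of this document-level dense retrieval flow, assuming the sentence-transformers package; "all-MiniLM-L6-v2" is just one example of an off-the-shelf embedding model, not the one used in the lecture:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
    "What is life? Candy is life.",
]
doc_embs = model.encode(docs, normalize_embeddings=True)      # (num_docs, dim)
query_emb = model.encode(["what is NLP"], normalize_embeddings=True)[0]

# With normalized embeddings, the inner product is the cosine similarity.
scores = doc_embs @ query_emb
print(docs[int(np.argmax(scores))])
```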
|
|
|
00:13:07.720 --> 00:13:13.120 |
|
so if you're just using out of the
|
|
|
00:13:10.040 --> 00:13:14.839 |
|
box embeddings you don't need to um you |
|
|
|
00:13:13.120 --> 00:13:15.880 |
|
know do anything special for retrieval |
|
|
|
00:13:14.839 --> 00:13:18.440 |
|
you can just take your favorite |
|
|
|
00:13:15.880 --> 00:13:22.800 |
|
embeddings like the Sentence-BERT
|
|
|
00:13:18.440 --> 00:13:25.639 |
|
embeddings or the OpenAI Ada
|
|
|
00:13:22.800 --> 00:13:27.240 |
|
embeddings or something like this but |
|
|
|
00:13:25.639 --> 00:13:29.519 |
|
actually the type of embeddings you need |
|
|
|
00:13:27.240 --> 00:13:32.040 |
|
for retrieval are kind of |
|
|
|
00:13:29.519 --> 00:13:33.519 |
|
very special and because of that it's |
|
|
|
00:13:32.040 --> 00:13:36.160 |
|
important |
|
|
|
00:13:33.519 --> 00:13:38.600 |
|
to if you're very serious about doing a |
|
|
|
00:13:36.160 --> 00:13:39.800 |
|
good job of retrieval it's important to use
|
|
|
00:13:38.600 --> 00:13:41.360 |
|
embeddings that were specifically |
|
|
|
00:13:39.800 --> 00:13:45.040 |
|
tailored for |
|
|
|
00:13:41.360 --> 00:13:47.680 |
|
retrieval and the reason why it is |
|
|
|
00:13:45.040 --> 00:13:50.079 |
|
important to do this is severalfold but |
|
|
|
00:13:47.680 --> 00:13:53.800 |
|
the most intuitive way to think about it |
|
|
|
00:13:50.079 --> 00:13:57.600 |
|
is if we think about uh the things that |
|
|
|
00:13:53.800 --> 00:13:59.440 |
|
TF-IDF does, TF-IDF is giving a very high
|
|
|
00:13:57.600 --> 00:14:03.000 |
|
weight to |
|
|
|
00:13:59.440 --> 00:14:04.959 |
|
contentful words and rare words and |
|
|
|
00:14:03.000 --> 00:14:06.639 |
|
we're not guaranteed that any random |
|
|
|
00:14:04.959 --> 00:14:10.600 |
|
embedding that we get is going to do |
|
|
|
00:14:06.639 --> 00:14:13.800 |
|
that so for example if we just take the |
|
|
|
00:14:10.600 --> 00:14:16.160 |
|
average word embeddings of every word in |
|
|
|
00:14:13.800 --> 00:14:20.160 |
|
a sequence it's going to give the same |
|
|
|
00:14:16.160 --> 00:14:22.320 |
|
weight to all of the words um in the |
|
|
|
00:14:20.160 --> 00:14:24.680 |
|
output and in fact common words tend to |
|
|
|
00:14:22.320 --> 00:14:27.959 |
|
have slightly higher Norms than |
|
|
|
00:14:24.680 --> 00:14:29.639 |
|
infrequent words and so that would |
|
|
|
00:14:27.959 --> 00:14:31.880 |
|
actually upweight common words which is
|
|
|
00:14:29.639 --> 00:14:34.639 |
|
kind of exactly the opposite thing we |
|
|
|
00:14:31.880 --> 00:14:36.480 |
|
want so how do we learn retrieval |
|
|
|
00:14:34.639 --> 00:14:39.160 |
|
oriented |
|
|
|
00:14:36.480 --> 00:14:40.920 |
|
embeddings the normal way we do this is |
|
|
|
00:14:39.160 --> 00:14:43.399 |
|
we select positive and negative |
|
|
|
00:14:40.920 --> 00:14:46.839 |
|
documents and then train using a |
|
|
|
00:14:43.399 --> 00:14:50.240 |
|
contrastive loss and so an example of |
|
|
|
00:14:46.839 --> 00:14:52.519 |
|
this is we have a query and then we have |
|
|
|
00:14:50.240 --> 00:14:55.519 |
|
negative documents for that query and we |
|
|
|
00:14:52.519 --> 00:14:58.199 |
|
have positive documents for that query |
|
|
|
00:14:55.519 --> 00:15:00.079 |
|
and we formulate a hinge loss or
|
|
|
00:14:58.199 --> 00:15:04.000 |
|
maybe some sort of probabilistic loss |
|
|
|
00:15:00.079 --> 00:15:06.560 |
|
similar to the hinge loss and do fine
|
|
|
00:15:04.000 --> 00:15:06.560 |
|
tuning of the |
|
|
|
00:15:07.160 --> 00:15:13.440 |
|
embeddings so if |
|
|
|
00:15:09.399 --> 00:15:16.320 |
|
you have gold standard positive |
|
|
|
00:15:13.440 --> 00:15:18.800 |
|
documents then this is relatively easy |
|
|
|
00:15:16.320 --> 00:15:21.040 |
|
to train uh because you just need the |
|
|
|
00:15:18.800 --> 00:15:23.800 |
|
positive documents and then you can get |
|
|
|
00:15:21.040 --> 00:15:25.959 |
|
Negative documents in a number of ways |
|
|
|
00:15:23.800 --> 00:15:29.279 |
|
one common way of getting negative |
|
|
|
00:15:25.959 --> 00:15:32.279 |
|
documents is you just form a batch of |
|
|
|
00:15:29.279 --> 00:15:34.560 |
|
data and given that batch of data you |
|
|
|
00:15:32.279 --> 00:15:37.480 |
|
take all of the other documents in the |
|
|
|
00:15:34.560 --> 00:15:39.480 |
|
batch um all of the documents in the |
|
|
|
00:15:37.480 --> 00:15:42.839 |
|
batch that are positive for some other |
|
|
|
00:15:39.480 --> 00:15:46.399 |
|
query and you use those as negative |
|
|
|
00:15:42.839 --> 00:15:49.000 |
|
documents so you sample 32 query |
|
|
|
00:15:46.399 --> 00:15:50.759 |
|
document pairs you use the aligned ones |
|
|
|
00:15:49.000 --> 00:15:53.759 |
|
as positive documents and then use the |
|
|
|
00:15:50.759 --> 00:15:57.440 |
|
31 other ones as negative documents and |
|
|
|
00:15:53.759 --> 00:16:00.279 |
|
this is both effective and efficient |
|
|
|
00:15:57.440 --> 00:16:02.000 |
|
because you can kind of learned from the |
|
|
|
00:16:00.279 --> 00:16:05.079 |
|
query document pairs all at the same |
|
|
|
00:16:02.000 --> 00:16:05.079 |
|
time in an efficient implementation
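
A minimal sketch of this in-batch negatives objective, assuming PyTorch and query/document encoders (not shown) that each produce one vector per input:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_vecs, doc_vecs):
    # query_vecs, doc_vecs: (batch, dim). Row i of doc_vecs is the positive
    # document for query i; the other rows in the batch act as negatives.
    scores = query_vecs @ doc_vecs.T              # (batch, batch) similarity matrix
    labels = torch.arange(scores.size(0))         # the diagonal holds the gold pairs
    return F.cross_entropy(scores, labels)

# e.g. a batch of 32 query-document pairs with 512-dimensional embeddings:
q = torch.randn(32, 512, requires_grad=True)
d = torch.randn(32, 512, requires_grad=True)
loss = in_batch_contrastive_loss(q, d)
loss.backward()
```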
|
|
|
00:16:05.680 --> 00:16:13.680 |
|
however this is not
|
|
|
00:16:09.160 --> 00:16:16.279 |
|
enough in many cases because that will |
|
|
|
00:16:13.680 --> 00:16:19.040 |
|
end up having lots of very kind of |
|
|
|
00:16:16.279 --> 00:16:20.440 |
|
obviously wrong documents because you |
|
|
|
00:16:19.040 --> 00:16:23.120 |
|
know |
|
|
|
00:16:20.440 --> 00:16:25.360 |
|
they're documents that are relevant for |
|
|
|
00:16:23.120 --> 00:16:27.880 |
|
a completely different query and it's |
|
|
|
00:16:25.360 --> 00:16:29.880 |
|
kind of easy to distinguish uh between |
|
|
|
00:16:27.880 --> 00:16:32.319 |
|
those, you can just look at superficial word
|
|
|
00:16:29.880 --> 00:16:34.519 |
|
overlap so another common thing to do |
|
|
|
00:16:32.319 --> 00:16:35.759 |
|
when you're training these models is to |
|
|
|
00:16:34.519 --> 00:16:38.160 |
|
get hard |
|
|
|
00:16:35.759 --> 00:16:40.680 |
|
negatives so hard negatives are |
|
|
|
00:16:38.160 --> 00:16:44.360 |
|
basically negative examples that look |
|
|
|
00:16:40.680 --> 00:16:49.399 |
|
plausible but are actually wrong and |
|
|
|
00:16:44.360 --> 00:16:53.199 |
|
so here uh this famous method called DPR |
|
|
|
00:16:49.399 --> 00:16:55.880 |
|
basically learns the encoders
|
|
|
00:16:53.199 --> 00:16:57.759 |
|
based on both inbatch negatives like I |
|
|
|
00:16:55.880 --> 00:17:00.160 |
|
mentioned before and hard negatives that |
|
|
|
00:16:57.759 --> 00:17:01.360 |
|
were created by looking up documents |
|
|
|
00:17:00.160 --> 00:17:03.839 |
|
with |
|
|
|
00:17:01.360 --> 00:17:06.039 |
|
bm25 and so the ones that were looked up |
|
|
|
00:17:03.839 --> 00:17:07.640 |
|
by bm25 you know kind of look very |
|
|
|
00:17:06.039 --> 00:17:10.039 |
|
similar superficially but they might |
|
|
|
00:17:07.640 --> 00:17:12.400 |
|
have you know subtle errors in them for |
|
|
|
00:17:10.039 --> 00:17:12.400 |
|
why they're |
|
|
|
00:17:12.799 --> 00:17:17.160 |
|
inappropriate there's also methods to |
|
|
|
00:17:15.679 --> 00:17:20.000 |
|
learn these |
|
|
|
00:17:17.160 --> 00:17:23.199 |
|
retrievers based on kind of
|
|
|
00:17:20.000 --> 00:17:26.199 |
|
unsupervised data so one major bottleneck
|
|
|
00:17:23.199 --> 00:17:29.000 |
|
if you're taking the positive documents |
|
|
|
00:17:26.199 --> 00:17:30.440 |
|
from Human annotations of whether |
|
|
|
00:17:29.000 --> 00:17:33.440 |
|
something is correct or not or human |
|
|
|
00:17:30.440 --> 00:17:37.880 |
|
clickthrough logs or other things like |
|
|
|
00:17:33.440 --> 00:17:40.640 |
|
this is that you need that data in order |
|
|
|
00:17:37.880 --> 00:17:44.440 |
|
to start training a model so
|
|
|
00:17:40.640 --> 00:17:47.880 |
|
Contriever is another method that uses
|
|
|
00:17:44.440 --> 00:17:51.520 |
|
two random spans within a document as a
|
|
|
00:17:47.880 --> 00:17:54.440 |
|
positive pair and random spans from |
|
|
|
00:17:51.520 --> 00:17:56.559 |
|
across documents as negative pairs and
|
|
|
00:17:54.440 --> 00:17:58.960 |
|
so this can be used for you know very |
|
|
|
00:17:56.559 --> 00:18:00.039 |
|
very large scale initial pre-training of |
|
|
|
00:17:58.960 --> 00:18:02.280 |
|
the |
|
|
|
00:18:00.039 --> 00:18:04.520 |
|
models and then after you've done that |
|
|
|
00:18:02.280 --> 00:18:06.840 |
|
large scale initial pre-training you can |
|
|
|
00:18:04.520 --> 00:18:10.799 |
|
then go in and fine-tune it on you know |
|
|
|
00:18:06.840 --> 00:18:10.799 |
|
actually annotated data to improve it
|
|
|
00:18:12.120 --> 00:18:18.799 |
|
further Okay so we've talked about |
|
|
|
00:18:15.159 --> 00:18:21.559 |
|
training these dense dot-product
|
|
|
00:18:18.799 --> 00:18:24.559 |
|
models these uh models that look at |
|
|
|
00:18:21.559 --> 00:18:27.720 |
|
dense embedding overlap for nearest |
|
|
|
00:18:24.559 --> 00:18:28.919 |
|
neighbors but the problem is in order to |
|
|
|
00:18:27.720 --> 00:18:30.919 |
|
calculate this you would need to |
|
|
|
00:18:28.919 --> 00:18:35.159 |
|
calculate it over a very very large |
|
|
|
00:18:30.919 --> 00:18:37.960 |
|
document base and just taking a product |
|
|
|
00:18:35.159 --> 00:18:40.480 |
|
between the query and all of the other |
|
|
|
00:18:37.960 --> 00:18:42.400 |
|
documents in the document base is |
|
|
|
00:18:40.480 --> 00:18:46.080 |
|
extremely |
|
|
|
00:18:42.400 --> 00:18:48.080 |
|
costly and so in order to fix this there |
|
|
|
00:18:46.080 --> 00:18:49.080 |
|
are methods for approximate nearest |
|
|
|
00:18:48.080 --> 00:18:52.280 |
|
neighbor |
|
|
|
00:18:49.080 --> 00:18:54.200 |
|
search and these are methods that allow |
|
|
|
00:18:52.280 --> 00:18:57.360 |
|
you to retrieve embeddings that have the |
|
|
|
00:18:54.200 --> 00:19:00.280 |
|
maximum inner product between them in |
|
|
|
00:18:57.360 --> 00:19:02.520 |
|
sublinear time and because you're doing |
|
|
|
00:19:00.280 --> 00:19:03.960 |
|
the maximum inner product this is also |
|
|
|
00:19:02.520 --> 00:19:06.600 |
|
often called maximum inner product |
|
|
|
00:19:03.960 --> 00:19:06.600 |
|
search or |
|
|
|
00:19:06.679 --> 00:19:12.360 |
|
MIPS. So I'm going to introduce at a
|
|
|
00:19:09.440 --> 00:19:15.360 |
|
very high level two common methods to do |
|
|
|
00:19:12.360 --> 00:19:19.320 |
|
this the first one is locality sensitive |
|
|
|
00:19:15.360 --> 00:19:22.440 |
|
hashing, or this can also be called
|
|
|
00:19:19.320 --> 00:19:24.799 |
|
kind of inverted index as well and what |
|
|
|
00:19:22.440 --> 00:19:26.840 |
|
you do is you make partitions in |
|
|
|
00:19:24.799 --> 00:19:29.320 |
|
continuous space and then you use it |
|
|
|
00:19:26.840 --> 00:19:31.240 |
|
like an inverted index |
|
|
|
00:19:29.320 --> 00:19:33.679 |
|
so let's say we have a whole bunch of |
|
|
|
00:19:31.240 --> 00:19:34.919 |
|
embeddings uh I demonstrated two |
|
|
|
00:19:33.679 --> 00:19:36.640 |
|
dimensional embeddings here but in |
|
|
|
00:19:34.919 --> 00:19:38.440 |
|
reality this would be you know as large |
|
|
|
00:19:36.640 --> 00:19:41.159 |
|
as your word |
|
|
|
00:19:38.440 --> 00:19:42.880 |
|
embedding your query and document |
|
|
|
00:19:41.159 --> 00:19:47.120 |
|
embedding space so this would be you |
|
|
|
00:19:42.880 --> 00:19:49.760 |
|
know 512 or 1024 or something like that |
|
|
|
00:19:47.120 --> 00:19:53.480 |
|
and what you do is you define a whole |
|
|
|
00:19:49.760 --> 00:19:56.720 |
|
bunch of planes that separate these |
|
|
|
00:19:53.480 --> 00:19:59.320 |
|
points into two spaces so if this is our |
|
|
|
00:19:56.720 --> 00:20:02.520 |
|
first plane all the points above the |
|
|
|
00:19:59.320 --> 00:20:04.280 |
|
plane will get a one for this partition |
|
|
|
00:20:02.520 --> 00:20:06.799 |
|
and all the points below the plane will |
|
|
|
00:20:04.280 --> 00:20:08.840 |
|
get a zero for this partition and we do |
|
|
|
00:20:06.799 --> 00:20:12.400 |
|
it similarly, we create a whole bunch
|
|
|
00:20:08.840 --> 00:20:15.840 |
|
of them and then based on this we can |
|
|
|
00:20:12.400 --> 00:20:18.440 |
|
now assign sparse vectors depending on |
|
|
|
00:20:15.840 --> 00:20:21.520 |
|
each of these planes so we have uh for |
|
|
|
00:20:18.440 --> 00:20:24.000 |
|
example the top one, 1 0 0, because
|
|
|
00:20:21.520 --> 00:20:26.400 |
|
it's on the right side of the blue plane |
|
|
|
00:20:24.000 --> 00:20:28.760 |
|
and the um wrong side of the red and the |
|
|
|
00:20:26.400 --> 00:20:30.679 |
|
green planes and then for the top right |
|
|
|
00:20:28.760 --> 00:20:32.799 |
|
we have 1 0 1 because it's on the right
|
|
|
00:20:30.679 --> 00:20:37.159 |
|
side of the blue and the green planes and
|
|
|
00:20:32.799 --> 00:20:39.440 |
|
the wrong side of the red plane and So |
|
|
|
00:20:37.159 --> 00:20:41.000 |
|
based on this now we have a sparse |
|
|
|
00:20:39.440 --> 00:20:42.600 |
|
vector and we already know what to do |
|
|
|
00:20:41.000 --> 00:20:44.640 |
|
with a sparse Vector right we look it up |
|
|
|
00:20:42.600 --> 00:20:49.039 |
|
in an inverted index just like we did |
|
|
|
00:20:44.640 --> 00:20:51.520 |
|
for a sparse um you know sparse lookup |
|
|
|
00:20:49.039 --> 00:20:54.520 |
|
table so that's one method
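
A minimal sketch of this random-hyperplane style of locality sensitive hashing: each plane contributes one bit, and vectors that hash to the same bit pattern land in the same bucket of an inverted index.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_planes = 512, 8
planes = rng.normal(size=(n_planes, dim))   # one random hyperplane per bit

def lsh_key(vec):
    # 1 if the point is on the positive side of a plane, 0 otherwise.
    bits = (planes @ vec > 0).astype(int)
    return "".join(map(str, bits))

# Offline: bucket every document embedding by its bit pattern.
doc_embs = rng.normal(size=(10000, dim))
buckets = defaultdict(list)
for doc_id, emb in enumerate(doc_embs):
    buckets[lsh_key(emb)].append(doc_id)

# Query time: only documents in the query's bucket are scored exactly.
query = rng.normal(size=dim)
candidates = buckets[lsh_key(query)]
print(len(candidates), "candidates out of", len(doc_embs))
```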
|
|
|
00:20:51.520 --> 00:20:57.799 |
|
another method uses a graph-based
|
|
|
00:20:54.520 --> 00:21:01.320 |
|
search and the basic idea behind this is |
|
|
|
00:20:57.799 --> 00:21:02.480 |
|
that we create hubs uh and these hubs |
|
|
|
00:21:01.320 --> 00:21:05.200 |
|
are kind |
|
|
|
00:21:02.480 --> 00:21:07.960 |
|
of a small number of points that are |
|
|
|
00:21:05.200 --> 00:21:09.440 |
|
close to other points in the space and |
|
|
|
00:21:07.960 --> 00:21:10.880 |
|
so we create some hubs and then we |
|
|
|
00:21:09.440 --> 00:21:12.200 |
|
search from there so if we have a |
|
|
|
00:21:10.880 --> 00:21:16.880 |
|
similar |
|
|
|
00:21:12.200 --> 00:21:19.159 |
|
looking uh set of points in the space we |
|
|
|
00:21:16.880 --> 00:21:21.520 |
|
find these hubs which are something like |
|
|
|
00:21:19.159 --> 00:21:24.880 |
|
cluster centroids and then based on the |
|
|
|
00:21:21.520 --> 00:21:28.559 |
|
cluster centroids we then narrow down or
|
|
|
00:21:24.880 --> 00:21:31.200 |
|
we greatly reduce the number of |
|
|
|
00:21:28.559 --> 00:21:33.400 |
|
points that we need to be looking at and |
|
|
|
00:21:31.200 --> 00:21:36.960 |
|
then we search through only those points |
|
|
|
00:21:33.400 --> 00:21:38.600 |
|
in a more kind of extensive Manner and |
|
|
|
00:21:36.960 --> 00:21:41.840 |
|
you can even turn this into a tree where |
|
|
|
00:21:38.600 --> 00:21:43.760 |
|
you have hubs and then you have uh kind |
|
|
|
00:21:41.840 --> 00:21:46.600 |
|
of mini hubs and then you have all the |
|
|
|
00:21:43.760 --> 00:21:50.200 |
|
points so this allows you to do a kind |
|
|
|
00:21:46.600 --> 00:21:50.200 |
|
of tree based or graph based |
|
|
|
00:21:50.600 --> 00:21:55.840 |
|
search so obviously unless you're really |
|
|
|
00:21:54.159 --> 00:21:57.039 |
|
excited about these algorithms this is |
|
|
|
00:21:55.840 --> 00:22:00.080 |
|
something that you probably don't want |
|
|
|
00:21:57.039 --> 00:22:01.440 |
|
to be implementing yourself um and the |
|
|
|
00:22:00.080 --> 00:22:03.000 |
|
good news is there's lots of very good |
|
|
|
00:22:01.440 --> 00:22:04.480 |
|
libraries that help you do this in fact |
|
|
|
00:22:03.000 --> 00:22:08.799 |
|
there are so many libraries it's hard to |
|
|
|
00:22:04.480 --> 00:22:11.960 |
|
manage them but some libraries that |
|
|
|
00:22:08.799 --> 00:22:13.799 |
|
people very commonly use, I think
|
|
|
00:22:11.960 --> 00:22:17.320 |
|
FAISS
|
|
|
00:22:13.799 --> 00:22:20.200 |
|
is a widely used one created by
|
|
|
00:22:17.320 --> 00:22:23.760 |
|
FAIR at Meta and Chroma DB is a
|
|
|
00:22:20.200 --> 00:22:27.720 |
|
separate one uh that is kind of an AI |
|
|
|
00:22:23.760 --> 00:22:30.720 |
|
native uh embedding search database so |
|
|
|
00:22:27.720 --> 00:22:30.720 |
|
both of those are good options
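
As one illustration, a minimal FAISS usage sketch (assuming the faiss-cpu package): IndexFlatIP does exact maximum inner product search, while other index types such as IVF or HNSW trade a little accuracy for sublinear search time.

```python
import numpy as np
import faiss

dim = 512
doc_embs = np.random.randn(10000, dim).astype("float32")
index = faiss.IndexFlatIP(dim)        # inner-product (MIPS) index
index.add(doc_embs)                   # index the document embeddings offline

query = np.random.randn(1, dim).astype("float32")
scores, doc_ids = index.search(query, 5)   # top-5 documents for the query
print(doc_ids[0], scores[0])
```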
|
|
|
00:22:32.960 --> 00:22:41.120 |
|
even with intelligent training
|
|
|
00:22:37.880 --> 00:22:42.640 |
|
of dense embeddings however there still |
|
|
|
00:22:41.120 --> 00:22:45.600 |
|
are |
|
|
|
00:22:42.640 --> 00:22:48.240 |
|
problems and the biggest |
|
|
|
00:22:45.600 --> 00:22:51.720 |
|
problem that you face when you're |
|
|
|
00:22:48.240 --> 00:22:54.000 |
|
looking at something like uh cross |
|
|
|
00:22:51.720 --> 00:22:56.880 |
|
encoders, sorry, when you're
|
|
|
00:22:54.000 --> 00:23:00.240 |
|
looking at dense embeddings is that in |
|
|
|
00:22:56.880 --> 00:23:02.159 |
|
order to form a good dense embedding you |
|
|
|
00:23:00.240 --> 00:23:03.840 |
|
need to kind of know in advance what |
|
|
|
00:23:02.159 --> 00:23:05.799 |
|
you're looking for right because you're |
|
|
|
00:23:03.840 --> 00:23:09.120 |
|
taking a long document you're condensing |
|
|
|
00:23:05.799 --> 00:23:10.679 |
|
it down into a single embedding, or a
|
|
|
00:23:09.120 --> 00:23:13.320 |
|
long passage and you're condensing it |
|
|
|
00:23:10.679 --> 00:23:16.200 |
|
down to a single embedding and so if |
|
|
|
00:23:13.320 --> 00:23:19.520 |
|
during that condensation process
|
|
|
00:23:16.200 --> 00:23:21.240 |
|
actually there's other information that |
|
|
|
00:23:19.520 --> 00:23:23.159 |
|
is relevant to a query but you have to |
|
|
|
00:23:21.240 --> 00:23:27.600 |
|
throw out because of the limited |
|
|
|
00:23:23.159 --> 00:23:30.600 |
|
embedding capacity this causes you to |
|
|
|
00:23:27.600 --> 00:23:32.320 |
|
you know essentially fail at um doing |
|
|
|
00:23:30.600 --> 00:23:34.840 |
|
retrieval |
|
|
|
00:23:32.320 --> 00:23:38.159 |
|
appropriately so there's a couple |
|
|
|
00:23:34.840 --> 00:23:40.880 |
|
methods that can be used to fix this so |
|
|
|
00:23:38.159 --> 00:23:42.279 |
|
the first method is in contrast to the |
|
|
|
00:23:40.880 --> 00:23:44.159 |
|
bi-encoder which is what I've been
|
|
|
00:23:42.279 --> 00:23:47.000 |
|
talking about up to this point where
|
|
|
00:23:44.159 --> 00:23:48.520 |
|
you kind of do full encoding of queries |
|
|
|
00:23:47.000 --> 00:23:52.120 |
|
full encoding of documents and then do |
|
|
|
00:23:48.520 --> 00:23:53.840 |
|
inner product search for a score uh you |
|
|
|
00:23:52.120 --> 00:23:56.760 |
|
can use a cross-encoder and the way the
|
|
|
00:23:53.840 --> 00:23:58.559 |
|
cross-encoder works is you append the
|
|
|
00:23:56.760 --> 00:24:00.799 |
|
query and document and then you run them |
|
|
|
00:23:58.559 --> 00:24:03.400 |
|
through a model like a Transformer model |
|
|
|
00:24:00.799 --> 00:24:07.840 |
|
and you calculate the output |
|
|
|
00:24:03.400 --> 00:24:09.880 |
|
score so the problem with this um so |
|
|
|
00:24:07.840 --> 00:24:12.480 |
|
this is great because it gives
|
|
|
00:24:09.880 --> 00:24:15.799 |
|
you maximum flexibility um Transformer |
|
|
|
00:24:12.480 --> 00:24:18.799 |
|
models are powerful you can uh assess |
|
|
|
00:24:15.799 --> 00:24:20.520 |
|
relevance very well the problem with |
|
|
|
00:24:18.799 --> 00:24:22.200 |
|
this is this precludes approximate |
|
|
|
00:24:20.520 --> 00:24:23.720 |
|
nearest neighbor lookup because now |
|
|
|
00:24:22.200 --> 00:24:25.799 |
|
you're running through you know many |
|
|
|
00:24:23.720 --> 00:24:28.880 |
|
many nonlinearities |
|
|
|
00:24:25.799 --> 00:24:32.760 |
|
here so this can only be used for
|
|
|
00:24:28.880 --> 00:24:34.360 |
|
reranking documents, or even if
|
|
|
00:24:32.760 --> 00:24:36.880 |
|
you're doing retrieval doing retrieval |
|
|
|
00:24:34.360 --> 00:24:39.679 |
|
over a very very small number of |
|
|
|
00:24:36.880 --> 00:24:41.960 |
|
documents but if you really want maximal |
|
|
|
00:24:39.679 --> 00:24:44.080 |
|
accuracy I definitely would recommend uh |
|
|
|
00:24:41.960 --> 00:24:45.720 |
|
doing something like this because it can |
|
|
|
00:24:44.080 --> 00:24:47.960 |
|
allow you to do kind of a second pass |
|
|
|
00:24:45.720 --> 00:24:49.360 |
|
filtering over the most relevant looking |
|
|
|
00:24:47.960 --> 00:24:52.399 |
|
documents to identify the ones you |
|
|
|
00:24:49.360 --> 00:24:52.399 |
|
really want to add to your context
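
A minimal sketch of second-pass cross-encoder reranking, assuming the sentence-transformers package; the checkpoint name here is just one example of a publicly available reranker:

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "what is NLP"
candidates = [   # e.g. the top passages from BM25 or a bi-encoder first pass
    "NLP is an acronym for natural language processing",
    "I like to do good research on NLP",
    "What is life? Candy is life.",
]
# The cross-encoder sees the query and passage together and outputs one score.
scores = reranker.predict([(query, passage) for passage in candidates])
reranked = [p for _, p in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```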
|
|
|
00:24:54.240 --> 00:24:58.240 |
|
so then there are also
|
|
|
00:24:56.760 --> 00:25:01.360 |
|
approaches that are kind kind of in the |
|
|
|
00:24:58.240 --> 00:25:02.159 |
|
middle of these two uh the most famous |
|
|
|
00:25:01.360 --> 00:25:05.880 |
|
one is |
|
|
|
00:25:02.159 --> 00:25:08.320 |
|
ColBERT and I call this token-level
|
|
|
00:25:05.880 --> 00:25:10.840 |
|
dense retrieval it's also called uh late |
|
|
|
00:25:08.320 --> 00:25:12.720 |
|
interaction in the ColBERT paper but
|
|
|
00:25:10.840 --> 00:25:14.919 |
|
the way it works is you use |
|
|
|
00:25:12.720 --> 00:25:18.440 |
|
contextualized representations of all |
|
|
|
00:25:14.919 --> 00:25:19.440 |
|
query and document tokens to compute a |
|
|
|
00:25:18.440 --> 00:25:23.559 |
|
retrieval |
|
|
|
00:25:19.440 --> 00:25:26.919 |
|
score and so you do offline indexing of |
|
|
|
00:25:23.559 --> 00:25:29.159 |
|
every token in the document and then |
|
|
|
00:25:26.919 --> 00:25:31.399 |
|
based on this offline indexing of
|
|
|
00:25:29.159 --> 00:25:35.320 |
|
every token in the document you then |
|
|
|
00:25:31.399 --> 00:25:38.760 |
|
have a query encoder and you do matching |
|
|
|
00:25:35.320 --> 00:25:41.799 |
|
between each token in the query and the |
|
|
|
00:25:38.760 --> 00:25:43.399 |
|
highest scoring tokens in each |
|
|
|
00:25:41.799 --> 00:25:46.320 |
|
document |
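
The scoring step just described is often written as a "MaxSim" operation; here is a minimal sketch, assuming we already have one contextualized vector per token for the query and for each document (e.g. from a BERT-style encoder) stored as NumPy arrays:

```python
import numpy as np

def maxsim_score(query_token_vecs, doc_token_vecs):
    # query_token_vecs: (n_query_tokens, dim), doc_token_vecs: (n_doc_tokens, dim)
    sims = query_token_vecs @ doc_token_vecs.T   # all token-to-token similarities
    # For each query token take its best-matching document token, then sum.
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
query_tokens = rng.normal(size=(5, 128))
docs = [rng.normal(size=(40, 128)), rng.normal(size=(60, 128))]
scores = [maxsim_score(query_tokens, d) for d in docs]
print(int(np.argmax(scores)))
```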
|
|
|
00:25:43.399 --> 00:25:48.399 |
|
and the reason why this is good is it |
|
|
|
00:25:46.320 --> 00:25:49.600 |
|
still allows you to encode all of the |
|
|
|
00:25:48.399 --> 00:25:52.120 |
|
tokens in the |
|
|
|
00:25:49.600 --> 00:25:55.440 |
|
document, but each of these
|
|
|
00:25:52.120 --> 00:25:59.679 |
|
similarity searches is still just |
|
|
|
00:25:55.440 --> 00:26:03.559 |
|
a kind of maximum inner product search and
|
|
|
00:25:59.679 --> 00:26:06.279 |
|
because of this this allows you to do |
|
|
|
00:26:03.559 --> 00:26:07.960 |
|
each of these searches efficiently and |
|
|
|
00:26:06.279 --> 00:26:09.840 |
|
doesn't preclude you from running it |
|
|
|
00:26:07.960 --> 00:26:12.919 |
|
over an entire |
|
|
|
00:26:09.840 --> 00:26:16.399 |
|
database the downside to this method uh |
|
|
|
00:26:12.919 --> 00:26:19.120 |
|
may already be obvious but in the |
|
|
|
00:26:16.399 --> 00:26:22.200 |
|
traditional bi-encoder we have a single
|
|
|
00:26:19.120 --> 00:26:26.880 |
|
Vector for each document but here we |
|
|
|
00:26:22.200 --> 00:26:29.320 |
|
have one vector for um each token in the |
|
|
|
00:26:26.880 --> 00:26:31.880 |
|
document so basically your vector
|
|
|
00:26:29.320 --> 00:26:34.399 |
|
database gets n times larger where n is |
|
|
|
00:26:31.880 --> 00:26:36.679 |
|
the number of tokens in the document and |
|
|
|
00:26:34.399 --> 00:26:38.080 |
|
there are certain methods to make this |
|
|
|
00:26:36.679 --> 00:26:41.559 |
|
better like you can compress each |
|
|
|
00:26:38.080 --> 00:26:42.960 |
|
document to a smaller n but
|
|
|
00:26:41.559 --> 00:26:45.880 |
|
still this is definitely going to be |
|
|
|
00:26:42.960 --> 00:26:48.399 |
|
more costly than looking up each uh |
|
|
|
00:26:45.880 --> 00:26:50.360 |
|
token so this is definitely something to |
|
|
|
00:26:48.399 --> 00:26:53.520 |
|
consider if you want to get you know |
|
|
|
00:26:50.360 --> 00:26:55.159 |
|
very good scores and ColBERT is a good
|
|
|
00:26:53.520 --> 00:26:59.600 |
|
implementation of that to start with if |
|
|
|
00:26:55.159 --> 00:26:59.600 |
|
you're interested in trying it out |
|
|
|
00:27:00.480 --> 00:27:07.000 |
|
so this is a final thing this is uh |
|
|
|
00:27:03.080 --> 00:27:08.679 |
|
something that is a little bit uh |
|
|
|
00:27:07.000 --> 00:27:10.080 |
|
different than all the other things I I |
|
|
|
00:27:08.679 --> 00:27:12.399 |
|
talked about before but I've used it |
|
|
|
00:27:10.080 --> 00:27:15.840 |
|
myself and it actually can be pretty |
|
|
|
00:27:12.399 --> 00:27:18.799 |
|
effective, it was also made at CMU
|
|
|
00:27:15.840 --> 00:27:24.399 |
|
by Luyu Gao so I would like to promote our
|
|
|
00:27:18.799 --> 00:27:26.880 |
|
CMU work of course but the idea
|
|
|
00:27:24.399 --> 00:27:28.080 |
|
behind a hypothetical document
|
|
|
00:27:26.880 --> 00:27:30.320 |
|
embedding |
|
|
|
00:27:28.080 --> 00:27:33.440 |
|
is that it's actually somewhat difficult |
|
|
|
00:27:30.320 --> 00:27:36.200 |
|
to match a query and a document right |
|
|
|
00:27:33.440 --> 00:27:38.919 |
|
because a query is a very short possibly |
|
|
|
00:27:36.200 --> 00:27:42.240 |
|
ungrammatical output that's asking a |
|
|
|
00:27:38.919 --> 00:27:44.799 |
|
question and then a document is a very |
|
|
|
00:27:42.240 --> 00:27:49.440 |
|
long output that's written in a |
|
|
|
00:27:44.799 --> 00:27:50.799 |
|
different prose style and you know
|
|
|
00:27:49.440 --> 00:27:53.159 |
|
it might have lots of irrelevant |
|
|
|
00:27:50.799 --> 00:27:54.519 |
|
information or boilerplate or fluff
|
|
|
00:27:53.159 --> 00:27:57.640 |
|
or something like |
|
|
|
00:27:54.519 --> 00:28:00.640 |
|
that so the idea behind a hypothetical |
|
|
|
00:27:57.640 --> 00:28:03.120 |
|
document embedding is that it's easier
|
|
|
00:28:00.640 --> 00:28:05.279 |
|
to match a document to a document than
|
|
|
00:28:03.120 --> 00:28:08.159 |
|
it is to match a query to a
|
|
|
00:28:05.279 --> 00:28:10.159 |
|
document but the input to our model is a |
|
|
|
00:28:08.159 --> 00:28:14.360 |
|
query right so what do we |
|
|
|
00:28:10.159 --> 00:28:17.919 |
|
do and so essentially what we do is we |
|
|
|
00:28:14.360 --> 00:28:20.399 |
|
then take a large language model we feed |
|
|
|
00:28:17.919 --> 00:28:23.320 |
|
it in a query in a prompt and say |
|
|
|
00:28:20.399 --> 00:28:25.399 |
|
generate a document that looks like it |
|
|
|
00:28:23.320 --> 00:28:30.080 |
|
should be the answer to this |
|
|
|
00:28:25.399 --> 00:28:32.120 |
|
query and so then the LLM goes in and
|
|
|
00:28:30.080 --> 00:28:34.440 |
|
it generates a document and hopefully |
|
|
|
00:28:32.120 --> 00:28:38.440 |
|
this document looks more similar to the |
|
|
|
00:28:34.440 --> 00:28:41.440 |
|
documents you want to retrieve than the |
|
|
|
00:28:38.440 --> 00:28:44.039 |
|
um than the original query does and I've |
|
|
|
00:28:41.440 --> 00:28:47.240 |
|
actually found this to be relatively |
|
|
|
00:28:44.039 --> 00:28:51.880 |
|
effective at improving accuracy |
|
|
|
00:28:47.240 --> 00:28:53.200 |
|
on kind of difficult uh tasks especially |
|
|
|
00:28:51.880 --> 00:28:55.840 |
|
ones that are out of domain from the |
|
|
|
00:28:53.200 --> 00:28:58.000 |
|
trained models that I'm using
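
A minimal sketch of the hypothetical document embedding flow just described; generate() and embed() are placeholders for whatever LLM and embedding model you actually use, and doc_embs is assumed to be a NumPy matrix of document embeddings built offline:

```python
def hyde_retrieve(query, generate, embed, doc_embs, k=5):
    # 1. Ask the LLM to write a document that plausibly answers the query.
    hypothetical_doc = generate(
        f"Write a short passage that answers the question: {query}"
    )
    # 2. Embed the hypothetical document instead of the raw query.
    q_vec = embed(hypothetical_doc)
    # 3. Do ordinary dense retrieval against the real document embeddings.
    scores = doc_embs @ q_vec
    return scores.argsort()[::-1][:k]
```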
|
|
|
00:28:55.840 --> 00:29:01.880 |
|
so I've gone through a whole bunch
|
|
|
00:28:58.000 --> 00:29:04.039 |
|
of methods and I would like to finish up |
|
|
|
00:29:01.880 --> 00:29:05.679 |
|
this section by giving some insight |
|
|
|
00:29:04.039 --> 00:29:11.399 |
|
about which one you should be |
|
|
|
00:29:05.679 --> 00:29:14.559 |
|
using so my impression right now is |
|
|
|
00:29:11.399 --> 00:29:17.760 |
|
that a good baseline to start out with is
|
|
|
00:29:14.559 --> 00:29:20.679 |
|
something like bm25 it's very easy to |
|
|
|
00:29:17.760 --> 00:29:23.080 |
|
start out and compared to embedding |
|
|
|
00:29:20.679 --> 00:29:26.120 |
|
based models it tends to be relatively |
|
|
|
00:29:23.080 --> 00:29:28.279 |
|
robust to new domains so if you have a |
|
|
|
00:29:26.120 --> 00:29:30.559 |
|
new domain you're more or less guaranteed
|
|
|
00:29:28.279 --> 00:29:32.240 |
|
that bm25 will give you some performance |
|
|
|
00:29:30.559 --> 00:29:35.320 |
|
whereas embeddings may be really good |
|
|
|
00:29:32.240 --> 00:29:38.399 |
|
but they may be really bad uh depending |
|
|
|
00:29:35.320 --> 00:29:40.880 |
|
on how out of domain that is compared to |
|
|
|
00:29:38.399 --> 00:29:42.799 |
|
your underlying embedding |
|
|
|
00:29:40.880 --> 00:29:44.760 |
|
model |
|
|
|
00:29:42.799 --> 00:29:48.039 |
|
so however if you want to get the |
|
|
|
00:29:44.760 --> 00:29:51.080 |
|
highest accuracy definitely tuned models |
|
|
|
00:29:48.039 --> 00:29:53.200 |
|
are going to be better and if you're not |
|
|
|
00:29:51.080 --> 00:29:56.039 |
|
worried about computation efficiency |
|
|
|
00:29:53.200 --> 00:29:58.480 |
|
using something like ColBERT with kind
|
|
|
00:29:56.039 --> 00:30:01.320 |
|
of the token level retrieval will |
|
|
|
00:29:58.480 --> 00:30:05.559 |
|
definitely give you uh good accuracy |
|
|
|
00:30:01.320 --> 00:30:08.559 |
|
here however there's better support for |
|
|
|
00:30:05.559 --> 00:30:12.159 |
|
bi-encoder style models in kind of
|
|
|
00:30:08.559 --> 00:30:15.240 |
|
standard vector databases like FAISS and
|
|
|
00:30:12.159 --> 00:30:17.519 |
|
uh chroma and other things like that so |
|
|
|
00:30:15.240 --> 00:30:19.799 |
|
if you want a kind of easier method to |
|
|
|
00:30:17.519 --> 00:30:23.279 |
|
get started very quickly then using a bi-
|
|
|
00:30:19.799 --> 00:30:23.279 |
|
encoder is probably the best way to |
|
|
|
00:30:25.080 --> 00:30:31.080 |
|
go okay so now moving on to actual |
|
|
|
00:30:28.279 --> 00:30:33.159 |
|
retrieval augmented generation models we |
|
|
|
00:30:31.080 --> 00:30:38.360 |
|
have uh retriever reader |
|
|
|
00:30:33.159 --> 00:30:40.880 |
|
models and the way these work is you |
|
|
|
00:30:38.360 --> 00:30:43.279 |
|
basically the simplest way they can work |
|
|
|
00:30:40.880 --> 00:30:45.799 |
|
is you basically just chain retrieval |
|
|
|
00:30:43.279 --> 00:30:47.640 |
|
and reading together so you use an out-of-the-box
|
|
|
00:30:45.799 --> 00:30:52.519 |
|
retriever and an out-of-the-box
|
|
|
00:30:47.640 --> 00:30:54.039 |
|
reader model and you have your query uh |
|
|
|
00:30:52.519 --> 00:30:56.159 |
|
you could for example look something up |
|
|
|
00:30:54.039 --> 00:30:58.039 |
|
on Google get a whole bunch of passages |
|
|
|
00:30:56.159 --> 00:30:59.760 |
|
and then feed them into a GPT model
|
|
|
00:30:58.039 --> 00:31:03.919 |
|
and get an answer
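
A minimal sketch of this retrieve-then-read chaining; retrieve() and llm() are placeholders for whatever retriever (BM25, a bi-encoder, a search API, ...) and language model you actually use:

```python
def retrieve_and_read(query, retrieve, llm, k=3):
    passages = retrieve(query, k)                 # top-k relevant passages
    context = "\n\n".join(passages)               # concatenate them into the prompt
    prompt = (
        "Answer the question based on the passages below.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return llm(prompt)
```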
|
|
|
00:30:59.760 --> 00:31:06.960 |
|
this overall is quite effective
|
|
|
00:31:03.919 --> 00:31:09.159 |
|
it's easy to implement and it
|
|
|
00:31:06.960 --> 00:31:10.600 |
|
will give you decent results so |
|
|
|
00:31:09.159 --> 00:31:15.480 |
|
definitely it's something worth
|
|
|
00:31:10.600 --> 00:31:20.720 |
|
thinking about for assignment two in
|
|
|
00:31:15.480 --> 00:31:24.799 |
|
the class you're required to
|
|
|
00:31:20.720 --> 00:31:26.679 |
|
only use uh kind of public models or |
|
|
|
00:31:24.799 --> 00:31:29.760 |
|
open source implementations so you could |
|
|
|
00:31:26.679 --> 00:31:34.360 |
|
still replace that with Apache Lucene
|
|
|
00:31:29.760 --> 00:31:36.360 |
|
and then you know any standard LLM
|
|
|
00:31:34.360 --> 00:31:39.159 |
|
and that could be you know Llama, Llama
|
|
|
00:31:36.360 --> 00:31:41.600 |
|
Chat, or Mistral or Mixtral or something
|
|
|
00:31:39.159 --> 00:31:45.360 |
|
like that so uh you could definitely |
|
|
|
00:31:41.600 --> 00:31:48.120 |
|
feel free to do something like
|
|
|
00:31:45.360 --> 00:31:51.559 |
|
that um of course the passages are |
|
|
|
00:31:48.120 --> 00:31:53.200 |
|
concatenated to the context and so |
|
|
|
00:31:51.559 --> 00:31:54.799 |
|
because the passages are concatenated to |
|
|
|
00:31:53.200 --> 00:31:56.679 |
|
context the context can get relatively
|
|
|
00:31:54.799 --> 00:31:58.399 |
|
long and expensive and other things like |
|
|
|
00:31:56.679 --> 00:32:01.960 |
|
that but it's just something you have to |
|
|
|
00:31:58.399 --> 00:32:01.960 |
|
deal with when you're using |
|
|
|
00:32:02.600 --> 00:32:07.480 |
|
RAG. There are methods also for retriever
|
|
|
00:32:05.799 --> 00:32:11.600 |
|
and generator end-to-end
|
|
|
00:32:07.480 --> 00:32:14.720 |
|
training so this is the paper actually |
|
|
|
00:32:11.600 --> 00:32:17.600 |
|
where the name rag came from and I'll |
|
|
|
00:32:14.720 --> 00:32:20.200 |
|
use that as an example here uh but |
|
|
|
00:32:17.600 --> 00:32:21.600 |
|
basically um there are several methods |
|
|
|
00:32:20.200 --> 00:32:23.399 |
|
that propose to train the retriever and
|
|
|
00:32:21.600 --> 00:32:27.440 |
|
reader to improve |
|
|
|
00:32:23.399 --> 00:32:31.240 |
|
accuracy and specifically the RAG paper by
|
|
|
00:32:27.440 --> 00:32:33.200 |
|
Lewis et al., the way it trained the
|
|
|
00:32:31.240 --> 00:32:35.639 |
|
reader was to maximize generation |
|
|
|
00:32:33.200 --> 00:32:38.600 |
|
likelihood given a single retrieved |
|
|
|
00:32:35.639 --> 00:32:40.279 |
|
document and for the retriever it |
|
|
|
00:32:38.600 --> 00:32:41.880 |
|
maximized overall likelihood by |
|
|
|
00:32:40.279 --> 00:32:44.480 |
|
optimizing the mixture weight over |
|
|
|
00:32:41.880 --> 00:32:46.559 |
|
documents so here's kind of a
|
|
|
00:32:44.480 --> 00:32:50.480 |
|
schematic uh which is you have your |
|
|
|
00:32:46.559 --> 00:32:54.039 |
|
query encoder um you run the Retriever |
|
|
|
00:32:50.480 --> 00:32:57.760 |
|
with uh maximum inner product search it |
|
|
|
00:32:54.039 --> 00:33:00.919 |
|
gives you several documents and each |
|
|
|
00:32:57.760 --> 00:33:05.880 |
|
document has a score and then based on |
|
|
|
00:33:00.919 --> 00:33:09.399 |
|
the documents and the scores you |
|
|
|
00:33:05.880 --> 00:33:11.200 |
|
generate uh with each document in the |
|
|
|
00:33:09.399 --> 00:33:15.360 |
|
context and |
|
|
|
00:33:11.200 --> 00:33:17.080 |
|
then sum together the probabilities |
|
|
|
00:33:15.360 --> 00:33:18.639 |
|
multiplied by the weights and I have the |
|
|
|
00:33:17.080 --> 00:33:20.320 |
|
actual equations here because I think |
|
|
|
00:33:18.639 --> 00:33:23.039 |
|
it'll be a little bit easier to |
|
|
|
00:33:20.320 --> 00:33:25.760 |
|
understand after looking at the |
|
|
|
00:33:23.039 --> 00:33:28.360 |
|
equations so generation is a mixture |
|
|
|
00:33:25.760 --> 00:33:31.440 |
|
model and you pick a document and |
|
|
|
00:33:28.360 --> 00:33:36.519 |
|
generate from the document this |
|
|
|
00:33:31.440 --> 00:33:40.080 |
|
P(z | x) is the probability of
|
|
|
00:33:36.519 --> 00:33:44.679 |
|
picking that document given the query X |
|
|
|
00:33:40.080 --> 00:33:48.880 |
|
and then this P_theta given x, z, and all of the
|
|
|
00:33:44.679 --> 00:33:51.480 |
|
previous tokens is basically the uh |
|
|
|
00:33:48.880 --> 00:33:54.840 |
|
probability of the next token given that |
|
|
|
00:33:51.480 --> 00:33:56.559 |
|
you have this particular document so you |
|
|
|
00:33:54.840 --> 00:34:00.840 |
|
can see that this is basically linearly |
|
|
|
00:33:56.559 --> 00:34:00.840 |
|
interpolating between the multiple documents.
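
Written out (this is my reconstruction of the equation being described, essentially the RAG-Token style marginalization; z ranges over the retrieved documents, eta are the retriever parameters, and theta the generator parameters):

```latex
% Generation as a mixture over retrieved documents z:
% pick a document with probability p_eta(z|x), then generate the next token from it.
p(y_i \mid x, y_{<i}) \;=\; \sum_{z} \, p_{\eta}(z \mid x)\; p_{\theta}(y_i \mid x, z, y_{<i})
```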
|
|
|
00:34:01.559 --> 00:34:05.760 |
|
And if we look, this can be
|
|
|
00:34:04.600 --> 00:34:09.039 |
|
considered the Retriever and the |
|
|
|
00:34:05.760 --> 00:34:09.039 |
|
generator, that is, the retriever and the
|
|
|
00:34:10.839 --> 00:34:16.119 |
|
reader. One really important thing here,
|
|
|
00:34:13.639 --> 00:34:17.760 |
|
uh, that enables end-to-end training, is
|
|
|
00:34:16.119 --> 00:34:19.639 |
|
they have this probability of the |
|
|
|
00:34:17.760 --> 00:34:22.919 |
|
retriever be based on |
|
|
|
00:34:19.639 --> 00:34:25.480 |
|
embeddings and so here we have the |
|
|
|
00:34:22.919 --> 00:34:29.040 |
|
document embedding and the query |
|
|
|
00:34:25.480 --> 00:34:31.440 |
|
embedding and the probability is |
|
|
|
00:34:29.040 --> 00:34:33.320 |
|
proportional to the inner product of |
|
|
|
00:34:31.440 --> 00:34:36.599 |
|
these exponentiated so you're basically |
|
|
|
00:34:33.320 --> 00:34:38.839 |
|
taking a softmax over, uh, the inner
|
|
|
00:34:36.599 --> 00:34:40.599 |
|
product between the |
|
|
|
00:34:38.839 --> 00:34:44.200 |
|
two |
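
As a formula (again my notation for what is being described: d(z) is the document embedding, q(x) the query embedding):

```latex
% Retriever distribution: a softmax over inner products between
% the document embedding d(z) and the query embedding q(x).
p_{\eta}(z \mid x) \;\propto\; \exp\!\big(\mathbf{d}(z)^{\top}\mathbf{q}(x)\big)
```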
|
|
|
00:34:40.599 --> 00:34:47.919 |
|
and this adjusts the retriever to give |
|
|
|
00:34:44.200 --> 00:34:49.560 |
|
higher similarities to helpful |
|
|
|
00:34:47.919 --> 00:34:52.560 |
|
documents |
|
|
|
00:34:49.560 --> 00:34:52.560 |
|
and |
|
|
|
00:34:54.040 --> 00:35:02.800 |
|
so because the probability of the
|
|
|
00:34:59.800 --> 00:35:04.839 |
|
retriever model here is included in the |
|
|
|
00:35:02.800 --> 00:35:07.160 |
|
end-to-end probability you don't actually
|
|
|
00:35:04.839 --> 00:35:10.680 |
|
need any annotations |
|
|
|
00:35:07.160 --> 00:35:12.839 |
|
about which documents are useful you can |
|
|
|
00:35:10.680 --> 00:35:16.680 |
|
just train all of this end to end and |
|
|
|
00:35:12.839 --> 00:35:19.480 |
|
let backprop do its thing to update the
|
|
|
00:35:16.680 --> 00:35:22.640 |
|
uh the retriever as |
|
|
|
00:35:19.480 --> 00:35:25.000 |
|
well one important issue when training |
|
|
|
00:35:22.640 --> 00:35:27.480 |
|
models like this is that the search |
|
|
|
00:35:25.000 --> 00:35:30.400 |
|
index will become stale so what do I |
|
|
|
00:35:27.480 --> 00:35:34.760 |
|
mean by this if we go back to our |
|
|
|
00:35:30.400 --> 00:35:34.760 |
|
previous uh thing about dense |
|
|
|
00:35:35.480 --> 00:35:43.560 |
|
models creating this blue search index |
|
|
|
00:35:39.800 --> 00:35:45.400 |
|
on the right side of the figure here is |
|
|
|
00:35:43.560 --> 00:35:48.680 |
|
very costly so like let's say you want |
|
|
|
00:35:45.400 --> 00:35:50.520 |
|
to embed a million documents or a |
|
|
|
00:35:48.680 --> 00:35:55.240 |
|
billion documents if you're a big search |
|
|
|
00:35:50.520 --> 00:35:58.200 |
|
engine company so doing this is very |
|
|
|
00:35:55.240 --> 00:36:00.599 |
|
slow and |
|
|
|
00:35:58.200 --> 00:36:01.920 |
|
in contrast doing lookup with kind of |
|
|
|
00:36:00.599 --> 00:36:04.160 |
|
these approximate nearest neighbor |
|
|
|
00:36:01.920 --> 00:36:05.440 |
|
searches is sublinear time or even you |
|
|
|
00:36:04.160 --> 00:36:08.119 |
|
know log time so you can do it |
|
|
|
00:36:05.440 --> 00:36:12.319 |
|
relatively quickly |
|
|
|
00:36:08.119 --> 00:36:15.680 |
|
so it's fine to do lookup over this big |
|
|
|
00:36:12.319 --> 00:36:17.520 |
|
index but if you start updating this |
|
|
|
00:36:15.680 --> 00:36:19.920 |
|
document embedding you need to recreate |
|
|
|
00:36:17.520 --> 00:36:23.760 |
|
the entire index and that would be you |
|
|
|
00:36:19.920 --> 00:36:27.240 |
|
know very computationally costly so the |
|
|
|
00:36:23.760 --> 00:36:30.119 |
|
solution to this proposed in the RAG
|
|
|
00:36:27.240 --> 00:36:33.640 |
|
paper by Lewis et al. is, uh, we only
|
|
|
00:36:30.119 --> 00:36:35.640 |
|
train the query embeddings and we keep |
|
|
|
00:36:33.640 --> 00:36:39.640 |
|
the document embeddings
|
|
|
00:36:35.640 --> 00:36:41.920 |
|
fixed. There are other alternatives, like, um,
|
|
|
00:36:39.640 --> 00:36:45.000 |
|
there was a paper called REALM, uh, from
|
|
|
00:36:41.920 --> 00:36:48.040 |
|
early in retrieval base modeling and in |
|
|
|
00:36:45.000 --> 00:36:50.040 |
|
that method they basically had
|
|
|
00:36:48.040 --> 00:36:51.520 |
|
an asynchronous process that was going |
|
|
|
00:36:50.040 --> 00:36:55.760 |
|
through and using the most recent |
|
|
|
00:36:51.520 --> 00:36:59.960 |
|
document embedder to re-update the
|
|
|
00:36:55.760 --> 00:37:03.359 |
|
search index during training but that is |
|
|
|
00:36:59.960 --> 00:37:05.960 |
|
uh you know kind of a very onerous |
|
|
|
00:37:03.359 --> 00:37:07.800 |
|
process so I think it's quite common to |
|
|
|
00:37:05.960 --> 00:37:11.000 |
|
use kind of a fixed document embedding |
|
|
|
00:37:07.800 --> 00:37:11.000 |
|
and update only the queries.
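
In PyTorch terms, that training choice looks roughly like the sketch below: freeze the document encoder so the precomputed index stays valid, and give the optimizer only the query encoder's parameters. The encoder objects here are placeholders, not a specific library's API.

```python
# Sketch: keep document embeddings fixed, train only the query encoder.
# `doc_encoder` and `query_encoder` are placeholder nn.Module encoders.
import torch

def make_query_only_optimizer(doc_encoder: torch.nn.Module,
                              query_encoder: torch.nn.Module,
                              lr: float = 1e-5) -> torch.optim.Optimizer:
    for p in doc_encoder.parameters():
        p.requires_grad = False        # the index built from these embeddings stays valid
    # Only the query-side parameters ever get gradient updates.
    return torch.optim.AdamW(query_encoder.parameters(), lr=lr)
```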
|
|
|
00:37:12.079 --> 00:37:17.720 |
|
Another thing to think about is
|
|
|
00:37:14.359 --> 00:37:21.160 |
|
when do we do retrieval um so there's a |
|
|
|
00:37:17.720 --> 00:37:23.079 |
|
bunch of different methods. The RAG paper
|
|
|
00:37:21.160 --> 00:37:24.440 |
|
that I mentioned before did this only |
|
|
|
00:37:23.079 --> 00:37:26.359 |
|
once right at the very beginning of |
|
|
|
00:37:24.440 --> 00:37:29.400 |
|
generation it grabbed a single document |
|
|
|
00:37:26.359 --> 00:37:32.560 |
|
and generated the entire output this is |
|
|
|
00:37:29.400 --> 00:37:34.800 |
|
the default method used by most |
|
|
|
00:37:32.560 --> 00:37:37.240 |
|
systems however there's other options as |
|
|
|
00:37:34.800 --> 00:37:39.640 |
|
well you can retrieve uh several times |
|
|
|
00:37:37.240 --> 00:37:43.040 |
|
during generation as |
|
|
|
00:37:39.640 --> 00:37:44.480 |
|
necessary and the way this works uh we |
|
|
|
00:37:43.040 --> 00:37:46.280 |
|
can do this either by generating a |
|
|
|
00:37:44.480 --> 00:37:48.480 |
|
search token uh saying that we should |
|
|
|
00:37:46.280 --> 00:37:50.200 |
|
start searching or searching when the |
|
|
|
00:37:48.480 --> 00:37:52.640 |
|
model is |
|
|
|
00:37:50.200 --> 00:37:55.920 |
|
uncertain and another way is to do this |
|
|
|
00:37:52.640 --> 00:37:58.079 |
|
every token so we can do this by finding |
|
|
|
00:37:55.920 --> 00:37:59.760 |
|
similar final embeddings and using this |
|
|
|
00:37:58.079 --> 00:38:02.240 |
|
to influence the |
|
|
|
00:37:59.760 --> 00:38:04.720 |
|
probabilities or approximating attention |
|
|
|
00:38:02.240 --> 00:38:06.440 |
|
with nearest neighbors so I'm going to |
|
|
|
00:38:04.720 --> 00:38:08.920 |
|
explain about each of these in a bit |
|
|
|
00:38:06.440 --> 00:38:12.480 |
|
more detail |
|
|
|
00:38:08.920 --> 00:38:16.119 |
|
So triggering retrieval with token
|
|
|
00:38:12.480 --> 00:38:19.720 |
|
embeddings, um, was proposed in Toolformer
|
|
|
00:38:16.119 --> 00:38:22.119 |
|
by Schick et al., and the way it works is
|
|
|
00:38:19.720 --> 00:38:25.000 |
|
you generate tokens that trigger
|
|
|
00:38:22.119 --> 00:38:27.880 |
|
retrieval or other tools so in this |
|
|
|
00:38:25.000 --> 00:38:30.079 |
|
particular method it uh had several |
|
|
|
00:38:27.880 --> 00:38:32.000 |
|
tools including asking a QA model or |
|
|
|
00:38:30.079 --> 00:38:34.800 |
|
getting a calculator or having a machine |
|
|
|
00:38:32.000 --> 00:38:37.200 |
|
translation system but with respect to |
|
|
|
00:38:34.800 --> 00:38:40.000 |
|
retrieval augmented generation it had |
|
|
|
00:38:37.200 --> 00:38:41.560 |
|
this essentially Wiki search |
|
|
|
00:38:40.000 --> 00:38:43.680 |
|
functionality that would look up |
|
|
|
00:38:41.560 --> 00:38:46.680 |
|
something in Wikipedia and then use that |
|
|
|
00:38:43.680 --> 00:38:46.680 |
|
to influence the final |
|
|
|
00:38:46.760 --> 00:38:52.200 |
|
probabilities |
|
|
|
00:38:48.800 --> 00:38:55.160 |
|
and the way this was trained is training |
|
|
|
00:38:52.200 --> 00:38:59.800 |
|
was done in an iterative manner, where it
|
|
|
00:38:55.160 --> 00:38:59.800 |
|
basically generated uh kind |
|
|
|
00:39:00.000 --> 00:39:05.680 |
|
of examples of tools being useful and |
|
|
|
00:39:04.359 --> 00:39:09.560 |
|
when the |
|
|
|
00:39:05.680 --> 00:39:14.160 |
|
tools improve the probability of the |
|
|
|
00:39:09.560 --> 00:39:16.119 |
|
following output then that would be kind |
|
|
|
00:39:14.160 --> 00:39:19.560 |
|
of treated as a positive example and |
|
|
|
00:39:16.119 --> 00:39:21.520 |
|
used to further train the model so this |
|
|
|
00:39:19.560 --> 00:39:23.400 |
|
was really influential and in fact this |
|
|
|
00:39:21.520 --> 00:39:27.000 |
|
is how things are implemented in
|
|
|
00:39:23.400 --> 00:39:29.319 |
|
ChatGPT nowadays, not only for, um, doing
|
|
|
00:39:27.000 --> 00:39:33.400 |
|
retrieval but also doing other tools |
|
|
|
00:39:29.319 --> 00:39:35.200 |
|
like um for example uh generating code |
|
|
|
00:39:33.400 --> 00:39:37.440 |
|
or generating images or other things |
|
|
|
00:39:35.200 --> 00:39:37.440 |
|
like this.
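
Here is a toy sketch of the token-triggered idea, not Toolformer's actual implementation: generate until the model emits a special search tag, run the retriever on the query inside the tag, splice the result back into the text, and continue. The tag format and the `generate` and `search` helpers are assumptions for illustration.

```python
import re

# Assumed tag format, e.g. "[SEARCH(Vin Diesel TV series)]"; real systems use their own markup.
SEARCH_TAG = re.compile(r"\[SEARCH\((.*?)\)\]")

def generate_with_search_token(prompt: str, generate, search, max_rounds: int = 5) -> str:
    """`generate(text)` returns a continuation string; `search(q)` returns a passage string."""
    text = prompt
    for _ in range(max_rounds):
        text += generate(text)
        match = SEARCH_TAG.search(text)
        if match is None:
            break                                   # no tool call emitted; we are done
        passage = search(match.group(1))            # run retrieval on the emitted query
        # Replace the tag with the retrieved evidence and keep generating.
        text = text[:match.start()] + f"[RESULT: {passage}]" + text[match.end():]
    return text
```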
|
|
|
00:39:38.200 --> 00:39:45.079 |
|
Another option is to trigger
|
|
|
00:39:40.920 --> 00:39:48.240 |
|
retrieval uh with uncertainty estimates |
|
|
|
00:39:45.079 --> 00:39:52.280 |
|
So FLARE, this is a paper by my student
|
|
|
00:39:48.240 --> 00:39:55.160 |
|
Zhengbao Jiang, um, where we try to generate
|
|
|
00:39:52.280 --> 00:39:58.560 |
|
content and then do retrieval if the |
|
|
|
00:39:55.160 --> 00:40:01.800 |
|
language model certainty is low so |
|
|
|
00:39:58.560 --> 00:40:05.599 |
|
here's a schematic of how this works but |
|
|
|
00:40:01.800 --> 00:40:09.160 |
|
basically um if we have |
|
|
|
00:40:05.599 --> 00:40:13.440 |
|
some uh retrieved documents we can say |
|
|
|
00:40:09.160 --> 00:40:16.560 |
|
generate a a summary about Joe Biden and |
|
|
|
00:40:13.440 --> 00:40:19.560 |
|
when it generates a summary maybe for |
|
|
|
00:40:16.560 --> 00:40:20.960 |
|
the first output um the language model |
|
|
|
00:40:19.560 --> 00:40:22.960 |
|
has high |
|
|
|
00:40:20.960 --> 00:40:24.240 |
|
confidence and because the language |
|
|
|
00:40:22.960 --> 00:40:25.359 |
|
model has high confidence we just |
|
|
|
00:40:24.240 --> 00:40:27.520 |
|
generate the |
|
|
|
00:40:25.359 --> 00:40:29.599 |
|
output |
|
|
|
00:40:27.520 --> 00:40:31.839 |
|
However, in the next step it might
|
|
|
00:40:29.599 --> 00:40:33.599 |
|
generate something like saying Joe Biden |
|
|
|
00:40:31.839 --> 00:40:35.680 |
|
attended the University of Pennsylvania |
|
|
|
00:40:33.599 --> 00:40:37.160 |
|
where he earned a law degree but the |
|
|
|
00:40:35.680 --> 00:40:39.000 |
|
model might not be very certain about |
|
|
|
00:40:37.160 --> 00:40:41.560 |
|
this it might have a low probability of |
|
|
|
00:40:39.000 --> 00:40:45.839 |
|
certain important entities and So based |
|
|
|
00:40:41.560 --> 00:40:48.839 |
|
on this, uh, we then form a query, where
|
|
|
00:40:45.839 --> 00:40:52.119 |
|
what we do is essentially we blank out |
|
|
|
00:40:48.839 --> 00:40:55.079 |
|
the low probability parts of this and we |
|
|
|
00:40:52.119 --> 00:40:57.200 |
|
do a search and so this is also a little |
|
|
|
00:40:55.079 --> 00:41:00.240 |
|
bit like the hypothetical |
|
|
|
00:40:57.200 --> 00:41:02.520 |
|
document embeddings method, where we basically create
|
|
|
00:41:00.240 --> 00:41:04.040 |
|
a document that we think will look |
|
|
|
00:41:02.520 --> 00:41:07.119 |
|
similar to the document that we want to |
|
|
|
00:41:04.040 --> 00:41:09.480 |
|
find we use that to create search |
|
|
|
00:41:07.119 --> 00:41:11.359 |
|
results and then we generate the output |
|
|
|
00:41:09.480 --> 00:41:13.880 |
|
and then we continue doing that and |
|
|
|
00:41:11.359 --> 00:41:15.960 |
|
whenever we have a high confidence |
|
|
|
00:41:13.880 --> 00:41:18.800 |
|
output like the one here we don't do any |
|
|
|
00:41:15.960 --> 00:41:20.040 |
|
retrieval we just you know generate uh |
|
|
|
00:41:18.800 --> 00:41:21.880 |
|
directly from the parameters of the |
|
|
|
00:41:20.040 --> 00:41:23.960 |
|
model but whenever we have low |
|
|
|
00:41:21.880 --> 00:41:27.400 |
|
confidence outputs we do the retrieval |
|
|
|
00:41:23.960 --> 00:41:30.400 |
|
and base the output on this. And so I
|
|
|
00:41:27.400 --> 00:41:33.119 |
|
think this is uh you know a nice method |
|
|
|
00:41:30.400 --> 00:41:35.000 |
|
that could potentially be uh used the |
|
|
|
00:41:33.119 --> 00:41:36.920 |
|
downside to that is you might sometimes |
|
|
|
00:41:35.000 --> 00:41:38.920 |
|
need to generate twice because you would |
|
|
|
00:41:36.920 --> 00:41:40.480 |
|
generate the output once and then find |
|
|
|
00:41:38.920 --> 00:41:42.720 |
|
the low confidence parts and generate |
|
|
|
00:41:40.480 --> 00:41:45.400 |
|
again but you know if you really care |
|
|
|
00:41:42.720 --> 00:41:47.319 |
|
about the uh kind of quality of the |
|
|
|
00:41:45.400 --> 00:41:49.640 |
|
output this is I think a reasonable |
|
|
|
00:41:47.319 --> 00:41:49.640 |
|
thing to do.
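
A rough sketch of the confidence check at the core of this, simplified from how FLARE actually operates: tentatively generate the next sentence with per-token probabilities, and if any token is below a threshold, drop the low-confidence tokens, use the rest as a query, and regenerate with the retrieved evidence in the context. `generate_with_scores`, `search`, and `generate` are assumed helpers, and joining tokens by plain concatenation assumes they carry their own whitespace.

```python
def uncertainty_triggered_step(context: str, generate_with_scores, search, generate,
                               threshold: float = 0.6) -> str:
    """One step: tentatively generate a sentence, regenerate with retrieval if uncertain."""
    tokens, probs = generate_with_scores(context)    # tentative next sentence + token probabilities
    if min(probs) >= threshold:
        return "".join(tokens)                        # confident: keep the tentative sentence as-is
    # Low confidence: blank out the uncertain tokens and use the rest as the search query.
    query = "".join(t for t, p in zip(tokens, probs) if p >= threshold)
    evidence = search(query)
    return generate(f"{evidence}\n{context}")         # regenerate grounded in the retrieved evidence
```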
|
|
|
00:41:50.160 --> 00:41:54.920 |
|
Okay, so now moving on to the token-by-
|
|
|
00:41:53.000 --> 00:41:59.800 |
|
token retrieval |
|
|
|
00:41:54.920 --> 00:42:03.560 |
|
methods the kind of original or one of |
|
|
|
00:41:59.800 --> 00:42:05.200 |
|
the methods that popularized this idea |
|
|
|
00:42:03.560 --> 00:42:08.720 |
|
of token by token retrieval is something |
|
|
|
00:42:05.200 --> 00:42:10.760 |
|
called kNN-LM, and the way it works is it
|
|
|
00:42:08.720 --> 00:42:13.839 |
|
retrieves similar |
|
|
|
00:42:10.760 --> 00:42:16.680 |
|
examples and then uses the following |
|
|
|
00:42:13.839 --> 00:42:20.880 |
|
tokens from these |
|
|
|
00:42:16.680 --> 00:42:23.800 |
|
examples and this is kind of like a very |
|
|
|
00:42:20.880 --> 00:42:25.839 |
|
powerful count-based bigram model, in a way.
|
|
|
00:42:23.800 --> 00:42:28.440 |
|
so if you remember back to when we were |
|
|
|
00:42:25.839 --> 00:42:32.920 |
|
talking about count-based n-gram models,
|
|
|
00:42:28.440 --> 00:42:36.440 |
|
what we would do is we would take the |
|
|
|
00:42:32.920 --> 00:42:39.400 |
|
previous token and we would calculate |
|
|
|
00:42:36.440 --> 00:42:41.319 |
|
the probability of the next token by |
|
|
|
00:42:39.400 --> 00:42:43.040 |
|
summing up together all of the next |
|
|
|
00:42:41.319 --> 00:42:44.800 |
|
tokens and dividing by the total number |
|
|
|
00:42:43.040 --> 00:42:49.240 |
|
of times that previous token |
|
|
|
00:42:44.800 --> 00:42:52.720 |
|
occurred and so given that background uh |
|
|
|
00:42:49.240 --> 00:42:56.760 |
|
we can talk about how the kNN-LM
|
|
|
00:42:52.720 --> 00:43:00.319 |
|
works. So we have the test context x,
|
|
|
00:42:56.760 --> 00:43:02.240 |
|
and we want to generate a Target output |
|
|
|
00:43:00.319 --> 00:43:04.839 |
|
separately from this we have all of the |
|
|
|
00:43:02.240 --> 00:43:06.440 |
|
training contexts so this is all of the |
|
|
|
00:43:04.839 --> 00:43:09.920 |
|
contexts that appeared in our training |
|
|
|
00:43:06.440 --> 00:43:13.520 |
|
data and we encode all of these training |
|
|
|
00:43:09.920 --> 00:43:15.720 |
|
contexts specifically by calculating the |
|
|
|
00:43:13.520 --> 00:43:18.559 |
|
representation of the final layer or |
|
|
|
00:43:15.720 --> 00:43:21.119 |
|
near the final layer of the model and so |
|
|
|
00:43:18.559 --> 00:43:23.200 |
|
we encode that as |
|
|
|
00:43:21.119 --> 00:43:25.240 |
|
representations separately from that we |
|
|
|
00:43:23.200 --> 00:43:27.920 |
|
remember the next word that appeared |
|
|
|
00:43:25.240 --> 00:43:29.720 |
|
after this context.
|
|
|
00:43:27.920 --> 00:43:32.920 |
|
so now we have a data store consisting |
|
|
|
00:43:29.720 --> 00:43:35.040 |
|
of representations and next words. We then
|
|
|
00:43:32.920 --> 00:43:38.440 |
|
take the representation of the current |
|
|
|
00:43:35.040 --> 00:43:40.880 |
|
context and we calculate the distance |
|
|
|
00:43:38.440 --> 00:43:43.400 |
|
between the current context and all of |
|
|
|
00:43:40.880 --> 00:43:47.119 |
|
the other similar context in the |
|
|
|
00:43:43.400 --> 00:43:49.839 |
|
database we take the nearest K so we |
|
|
|
00:43:47.119 --> 00:43:52.440 |
|
take the top uh K examples here which |
|
|
|
00:43:49.839 --> 00:43:55.240 |
|
would be Hawaii Illinois and |
|
|
|
00:43:52.440 --> 00:43:57.520 |
|
Hawaii we then do uh some sort of |
|
|
|
00:43:55.240 --> 00:44:01.440 |
|
normalization based on the |
|
|
|
00:43:57.520 --> 00:44:05.200 |
|
distance and this gives us a probability |
|
|
|
00:44:01.440 --> 00:44:06.680 |
|
distribution over all of the next tokens |
|
|
|
00:44:05.200 --> 00:44:10.599 |
|
sometimes these tokens are duplicated |
|
|
|
00:44:06.680 --> 00:44:13.599 |
|
multiple times and so we aggregate all |
|
|
|
00:44:10.599 --> 00:44:15.800 |
|
of these counts to be Hawaii for example |
|
|
|
00:44:13.599 --> 00:44:18.839 |
|
0.8 and Illinois |
|
|
|
00:44:15.800 --> 00:44:21.839 |
|
0.2 and then we interpolate this with |
|
|
|
00:44:18.839 --> 00:44:24.040 |
|
the probability given by the standard |
|
|
|
00:44:21.839 --> 00:44:26.440 |
|
language model using an interpolation |
|
|
|
00:44:24.040 --> 00:44:28.400 |
|
coefficient Lambda and this gives us our |
|
|
|
00:44:26.440 --> 00:44:31.000 |
|
final probability.
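
In equations, the interpolation being described looks roughly like this (my notation: f(x) is the hidden representation of the current context, N is the set of retrieved key-value neighbors, and d is the distance used in the normalization step):

```latex
% kNN distribution: softmax over negative distances to the retrieved neighbors,
% aggregating mass for repeated next words (e.g. Hawaii 0.8, Illinois 0.2).
p_{\mathrm{kNN}}(y \mid x) \;\propto\; \sum_{(k_i, v_i) \in \mathcal{N}} \mathbb{1}[y = v_i]\, \exp\!\big(-d(k_i, f(x))\big)

% Final distribution: interpolate with the base language model using coefficient lambda.
p(y \mid x) \;=\; \lambda\, p_{\mathrm{kNN}}(y \mid x) \;+\; (1 - \lambda)\, p_{\mathrm{LM}}(y \mid x)
```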
|
|
|
00:44:28.400 --> 00:44:34.559 |
|
So the nice thing about this
|
|
|
00:44:31.000 --> 00:44:38.000 |
|
is this allows us to explicitly ground |
|
|
|
00:44:34.559 --> 00:44:42.079 |
|
our outputs in individual |
|
|
|
00:44:38.000 --> 00:44:45.319 |
|
examples uh and it's a pretty effective |
|
|
|
00:44:42.079 --> 00:44:48.760 |
|
way to improve the probability of models |
|
|
|
00:44:45.319 --> 00:44:53.839 |
|
improve translation and other stuff like |
|
|
|
00:44:48.760 --> 00:44:56.119 |
|
this the disadvantage of doing this is |
|
|
|
00:44:53.839 --> 00:44:59.319 |
|
that it kind of adds
|
|
|
00:44:56.119 --> 00:45:01.800 |
|
an extra component of the model it adds |
|
|
|
00:44:59.319 --> 00:45:05.440 |
|
extra |
|
|
|
00:45:01.800 --> 00:45:08.520 |
|
um kind of hyperparameters like Lambda |
|
|
|
00:45:05.440 --> 00:45:11.680 |
|
and things like this so it is a little |
|
|
|
00:45:08.520 --> 00:45:16.960 |
|
bit finicky and it doesn't work in all |
|
|
|
00:45:11.680 --> 00:45:21.440 |
|
situations. And so another method that was,
|
|
|
00:45:16.960 --> 00:45:23.559 |
|
uh, proposed by Amanda Bertsch, who gave
|
|
|
00:45:21.440 --> 00:45:26.920 |
|
the uh previous lecture on generation in |
|
|
|
00:45:23.559 --> 00:45:29.240 |
|
this class, is Unlimiformer. And basically
|
|
|
00:45:26.920 --> 00:45:32.680 |
|
what Unlimiformer does is it notes that
|
|
|
00:45:29.240 --> 00:45:36.079 |
|
attention itself is an inner product
|
|
|
00:45:32.680 --> 00:45:40.440 |
|
search, and it does top-k
|
|
|
00:45:36.079 --> 00:45:42.680 |
|
attention and the way we do this is we |
|
|
|
00:45:40.440 --> 00:45:45.160 |
|
first process the input with a sliding |
|
|
|
00:45:42.680 --> 00:45:47.480 |
|
window and then perform attention using |
|
|
|
00:45:45.160 --> 00:45:49.960 |
|
a vector index so if we have a really |
|
|
|
00:45:47.480 --> 00:45:54.280 |
|
long input that we want to encode what |
|
|
|
00:45:49.960 --> 00:45:56.559 |
|
we do is we first encode chunks so we |
|
|
|
00:45:54.280 --> 00:46:01.960 |
|
encode for example AB |
|
|
|
00:45:56.559 --> 00:46:03.839 |
|
then we encode CD and we encode EF we |
|
|
|
00:46:01.960 --> 00:46:06.240 |
|
concatenate them together into a big |
|
|
|
00:46:03.839 --> 00:46:07.800 |
|
index of one long input. So in a way
|
|
|
00:46:06.240 --> 00:46:10.920 |
|
this is similar to what they did in the |
|
|
|
00:46:07.800 --> 00:46:12.720 |
|
kNN-LM, you know, concatenating all of these
|
|
|
00:46:10.920 --> 00:46:16.520 |
|
embeddings into a single |
|
|
|
00:46:12.720 --> 00:46:18.680 |
|
input but the difference is that this is |
|
|
|
00:46:16.520 --> 00:46:21.640 |
|
done with |
|
|
|
00:46:18.680 --> 00:46:24.280 |
|
um the values that we are attending to |
|
|
|
00:46:21.640 --> 00:46:27.559 |
|
as opposed to just the final |
|
|
|
00:46:24.280 --> 00:46:30.079 |
|
layer and |
|
|
|
00:46:27.559 --> 00:46:33.680 |
|
the interesting thing about this is now |
|
|
|
00:46:30.079 --> 00:46:36.200 |
|
we have an index of one long input and |
|
|
|
00:46:33.680 --> 00:46:39.800 |
|
when we want to do our next version of |
|
|
|
00:46:36.200 --> 00:46:42.240 |
|
attention, we do kNN search from the
|
|
|
00:46:39.800 --> 00:46:44.280 |
|
query we take the retrieved hidden |
|
|
|
00:46:42.240 --> 00:46:47.880 |
|
States and then we just do attention |
|
|
|
00:46:44.280 --> 00:46:50.440 |
|
over them.
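
Here is a toy numerical sketch of that retrieval-as-attention step, a simplification rather than Unlimiformer's actual code: keep an index of hidden states from all encoded chunks, take the top-k by inner product with the query, and run an ordinary softmax only over those retrieved states.

```python
import numpy as np

def topk_attention(query: np.ndarray, index_keys: np.ndarray,
                   index_values: np.ndarray, k: int = 16) -> np.ndarray:
    """query: (d,); index_keys / index_values: (N, d) hidden states from all encoded chunks."""
    scores = index_keys @ query                        # inner-product search over the whole index
    k = min(k, len(scores))
    top = np.argpartition(-scores, k - 1)[:k]          # indices of the k highest-scoring keys
    weights = np.exp(scores[top] - scores[top].max())  # softmax over only the retrieved states
    weights /= weights.sum()
    return weights @ index_values[top]                 # attention output using the top-k states
```

With k equal to the full index size and a single encoded chunk, this reduces to essentially ordinary attention, which is the exactness argument made next.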
|
|
|
00:46:47.880 --> 00:46:53.079 |
|
So the nice thing about this is, in the extreme case, this makes no
|
|
|
00:46:50.440 --> 00:46:55.240 |
|
changes to the model what I mean by this |
|
|
|
00:46:53.079 --> 00:46:57.520 |
|
is let's say our input was small enough |
|
|
|
00:46:55.240 --> 00:47:02.240 |
|
that we could encode it in only a single
|
|
|
00:46:57.520 --> 00:47:06.400 |
|
chunk, and for kNN search we also did,
|
|
|
00:47:02.240 --> 00:47:09.559 |
|
um, you know, exact kNN
|
|
|
00:47:06.400 --> 00:47:12.400 |
|
search over all of the embeddings in the |
|
|
|
00:47:09.559 --> 00:47:14.680 |
|
chunk. In that case this would just be
|
|
|
00:47:12.400 --> 00:47:16.520 |
|
normal attention it's exactly the same |
|
|
|
00:47:14.680 --> 00:47:18.640 |
|
as normal |
|
|
|
00:47:16.520 --> 00:47:20.160 |
|
attention however there are some |
|
|
|
00:47:18.640 --> 00:47:21.760 |
|
approximations that go into here like |
|
|
|
00:47:20.160 --> 00:47:24.000 |
|
when we encode chunks they might not be |
|
|
|
00:47:21.760 --> 00:47:26.359 |
|
exactly the same as if we encoded the |
|
|
|
00:47:24.000 --> 00:47:29.839 |
|
entire thing together and we're also |
|
|
|
00:47:26.359 --> 00:47:33.640 |
|
chopping off some of the values with |
|
|
|
00:47:29.839 --> 00:47:35.800 |
|
very low um kind of inner products and |
|
|
|
00:47:33.640 --> 00:47:37.400 |
|
so because of this there are some |
|
|
|
00:47:35.800 --> 00:47:38.760 |
|
approximations being made but in the |
|
|
|
00:47:37.400 --> 00:47:40.160 |
|
extreme case if we made no |
|
|
|
00:47:38.760 --> 00:47:41.880 |
|
approximations this would just be |
|
|
|
00:47:40.160 --> 00:47:44.359 |
|
exactly the same model as we were using |
|
|
|
00:47:41.880 --> 00:47:46.160 |
|
before so I find this pretty attractive |
|
|
|
00:47:44.359 --> 00:47:48.760 |
|
and uh you know empirically it gives |
|
|
|
00:47:46.160 --> 00:47:51.720 |
|
very good results over long |
|
|
|
00:47:48.760 --> 00:47:53.440 |
|
distances and you know we can always |
|
|
|
00:47:51.720 --> 00:47:56.240 |
|
make our approximations better and |
|
|
|
00:47:53.440 --> 00:47:57.680 |
|
improve this model as well so I I think |
|
|
|
00:47:56.240 --> 00:48:00.960 |
|
this is an attractive method that you
|
|
|
00:47:57.680 --> 00:48:00.960 |
|
might be interested in taking a look |
|
|
|
00:48:02.240 --> 00:48:06.200 |
|
at okay for the final part of this I'd |
|
|
|
00:48:04.559 --> 00:48:08.079 |
|
like to talk about long context |
|
|
|
00:48:06.200 --> 00:48:12.400 |
|
Transformers and these are models that |
|
|
|
00:48:08.079 --> 00:48:15.119 |
|
are explicitly trained in a way that |
|
|
|
00:48:12.400 --> 00:48:16.920 |
|
allows you to attend to longer contexts |
|
|
|
00:48:15.119 --> 00:48:18.839 |
|
in an efficient |
|
|
|
00:48:16.920 --> 00:48:21.960 |
|
manner |
|
|
|
00:48:18.839 --> 00:48:23.680 |
|
so one way that we can train over longer |
|
|
|
00:48:21.960 --> 00:48:25.880 |
|
context is just append all of the |
|
|
|
00:48:23.680 --> 00:48:28.040 |
|
context together and in fact shortly |
|
|
|
00:48:25.880 --> 00:48:32.200 |
|
after Transformers came out uh this |
|
|
|
00:48:28.040 --> 00:48:34.280 |
|
paper by Voita et al. demonstrated that, um,
|
|
|
00:48:32.200 --> 00:48:36.160 |
|
doing this can learn, you know,
|
|
|
00:48:34.280 --> 00:48:38.119 |
|
interesting document level phenomena so |
|
|
|
00:48:36.160 --> 00:48:40.440 |
|
it can identify when |
|
|
|
00:48:38.119 --> 00:48:42.480 |
|
multiple uh words refer to the same |
|
|
|
00:48:40.440 --> 00:48:43.680 |
|
thing or co-reference and other things |
|
|
|
00:48:42.480 --> 00:48:45.640 |
|
like |
|
|
|
00:48:43.680 --> 00:48:47.720 |
|
this however the problem with |
|
|
|
00:48:45.640 --> 00:48:51.119 |
|
Transformers is that computation is |
|
|
|
00:48:47.720 --> 00:48:52.799 |
|
quadratic in the sentence length because |
|
|
|
00:48:51.119 --> 00:48:54.599 |
|
you're multiplying all of the query |
|
|
|
00:48:52.799 --> 00:48:56.799 |
|
vectors by all of the key |
|
|
|
00:48:54.599 --> 00:48:59.480 |
|
vectors |
|
|
|
00:48:56.799 --> 00:49:02.799 |
|
and that basically causes a big problem |
|
|
|
00:48:59.480 --> 00:49:02.799 |
|
if your sequences become very |
|
|
|
00:49:03.480 --> 00:49:09.760 |
|
long so if we go back to what we did in |
|
|
|
00:49:07.480 --> 00:49:12.400 |
|
RNNs, uh, from the very beginning of the
|
|
|
00:49:09.760 --> 00:49:14.359 |
|
class, RNNs don't have this
|
|
|
00:49:12.400 --> 00:49:16.280 |
|
problem because computation is linear in |
|
|
|
00:49:14.359 --> 00:49:20.440 |
|
the length of the sequence you just pass |
|
|
|
00:49:16.280 --> 00:49:22.200 |
|
along the RNN State and every single |
|
|
|
00:49:20.440 --> 00:49:23.839 |
|
time you do the same computation over it |
|
|
|
00:49:22.200 --> 00:49:26.559 |
|
so there's no quadratic term in |
|
|
|
00:49:23.839 --> 00:49:32.400 |
|
calculating RNNs.
|
|
|
00:49:26.559 --> 00:49:34.880 |
|
Another thing is that when doing RNNs
|
|
|
00:49:32.400 --> 00:49:37.680 |
|
you can actually pass state along infinitely
|
|
|
00:49:34.880 --> 00:49:39.040 |
|
during the forward pass by just |
|
|
|
00:49:37.680 --> 00:49:40.240 |
|
calculating the hidden State and then |
|
|
|
00:49:39.040 --> 00:49:42.119 |
|
throwing away the rest of the |
|
|
|
00:49:40.240 --> 00:49:43.359 |
|
computation graph that was used in |
|
|
|
00:49:42.119 --> 00:49:45.160 |
|
calculating that hidden State and |
|
|
|
00:49:43.359 --> 00:49:48.319 |
|
there's no approximation that goes on |
|
|
|
00:49:45.160 --> 00:49:49.680 |
|
there. So unlike in Unlimiformer, that I
|
|
|
00:49:48.319 --> 00:49:51.640 |
|
was talking about before where we needed |
|
|
|
00:49:49.680 --> 00:49:54.119 |
|
to make approximations none need to be |
|
|
|
00:49:51.640 --> 00:49:56.400 |
|
made in this |
|
|
|
00:49:54.119 --> 00:50:00.200 |
|
case however there is a problem with |
|
|
|
00:49:56.400 --> 00:50:02.040 |
|
doing backprop, uh, because in order to
|
|
|
00:50:00.200 --> 00:50:05.839 |
|
do backprop normally you maintain the
|
|
|
00:50:02.040 --> 00:50:09.720 |
|
entire you know state of the computation |
|
|
|
00:50:05.839 --> 00:50:12.400 |
|
graph. And so a common method to
|
|
|
00:50:09.720 --> 00:50:15.280 |
|
fix this is basically you pass along the |
|
|
|
00:50:12.400 --> 00:50:16.920 |
|
RNN state from the previous sentence but |
|
|
|
00:50:15.280 --> 00:50:19.240 |
|
you just don't do backprop into the
|
|
|
00:50:16.920 --> 00:50:21.200 |
|
previous sentence and this is called |
|
|
|
00:50:19.240 --> 00:50:24.040 |
|
truncated backprop, or truncated back
|
|
|
00:50:21.200 --> 00:50:27.280 |
|
propagation through time and this allows |
|
|
|
00:50:24.040 --> 00:50:30.160 |
|
you to essentially train models with |
|
|
|
00:50:27.280 --> 00:50:32.319 |
|
infinite context um or at least models |
|
|
|
00:50:30.160 --> 00:50:33.720 |
|
that can pass along context infinitely |
|
|
|
00:50:32.319 --> 00:50:36.359 |
|
even if you're not back propping into |
|
|
|
00:50:33.720 --> 00:50:36.359 |
|
the earlier states there.
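
A minimal PyTorch-style sketch of truncated backpropagation through time as just described: carry the hidden state across segments, but detach it so gradients stop at the segment boundary. The model, data iterator, and loss function here are placeholders.

```python
import torch

def train_truncated_bptt(rnn: torch.nn.RNN, head: torch.nn.Linear,
                         segments, optimizer, loss_fn):
    """`segments` yields (inputs, targets): (seq_len, batch, dim) and (seq_len, batch)."""
    hidden = None
    for inputs, targets in segments:
        output, hidden = rnn(inputs, hidden)            # state flows in from the previous segment
        loss = loss_fn(head(output).flatten(0, 1), targets.flatten())
        optimizer.zero_grad()
        loss.backward()                                 # gradients stay within this segment
        optimizer.step()
        hidden = hidden.detach()                        # pass the state forward, but cut the graph
    return hidden
```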
|
|
|
00:50:37.480 --> 00:50:43.520 |
|
So of course, a problem with this
|
|
|
00:50:40.720 --> 00:50:45.880 |
|
over long contexts is that recurrent, uh,
|
|
|
00:50:43.520 --> 00:50:47.520 |
|
recurrent models can be slow due to the |
|
|
|
00:50:45.880 --> 00:50:51.400 |
|
kind of sequential dependence they're |
|
|
|
00:50:47.520 --> 00:50:54.280 |
|
not ideal for um you know running on |
|
|
|
00:50:51.400 --> 00:50:57.359 |
|
gpus or things like that and this is |
|
|
|
00:50:54.280 --> 00:51:01.960 |
|
improved by recent architectures like |
|
|
|
00:50:57.359 --> 00:51:05.359 |
|
Mamba and RWKV, which are more conducive
|
|
|
00:51:01.960 --> 00:51:07.079 |
|
to GPU Based training um while still |
|
|
|
00:51:05.359 --> 00:51:08.599 |
|
maintaining linear time complexity and |
|
|
|
00:51:07.079 --> 00:51:11.480 |
|
so I'm looking forward to talking about |
|
|
|
00:51:08.599 --> 00:51:11.480 |
|
that more in a future |
|
|
|
00:51:13.000 --> 00:51:17.559 |
|
class so actually if we take this idea |
|
|
|
00:51:15.880 --> 00:51:20.440 |
|
of truncated back propagation through |
|
|
|
00:51:17.559 --> 00:51:22.359 |
|
time this can also be applied to |
|
|
|
00:51:20.440 --> 00:51:25.440 |
|
Transformers and there's a really nice |
|
|
|
00:51:22.359 --> 00:51:27.880 |
|
paper, Transformer-XL, also created by
|
|
|
00:51:25.440 --> 00:51:31.119 |
|
Zihang Dai, who was formerly at
|
|
|
00:51:27.880 --> 00:51:33.119 |
|
CMU and what this does is this attempts |
|
|
|
00:51:31.119 --> 00:51:35.760 |
|
to fix vectors from the previous |
|
|
|
00:51:33.119 --> 00:51:39.440 |
|
sentence so if we have a standard |
|
|
|
00:51:35.760 --> 00:51:40.720 |
|
Transformer uh in a Transformer XL |
|
|
|
00:51:39.440 --> 00:51:44.640 |
|
normally what we do in the standard |
|
|
|
00:51:40.720 --> 00:51:48.480 |
|
Transformer is each Vector attends back |
|
|
|
00:51:44.640 --> 00:51:50.920 |
|
to all the other vectors in the current |
|
|
|
00:51:48.480 --> 00:51:53.839 |
|
context. What Transformer-XL does
|
|
|
00:51:50.920 --> 00:51:56.359 |
|
instead is when you have a new segment |
|
|
|
00:51:53.839 --> 00:51:58.960 |
|
that you want to do backprop
|
|
|
00:51:56.359 --> 00:52:01.200 |
|
into, um, a new segment that you
|
|
|
00:51:58.960 --> 00:52:03.960 |
|
want to basically train over you also |
|
|
|
00:52:01.200 --> 00:52:06.400 |
|
attend to all of the previous tokens in |
|
|
|
00:52:03.960 --> 00:52:07.640 |
|
the previous segment but you don't do |
|
|
|
00:52:06.400 --> 00:52:10.319 |
|
backprop into
|
|
|
00:52:07.640 --> 00:52:12.079 |
|
them so this is essentially truncated |
|
|
|
00:52:10.319 --> 00:52:14.480 |
|
backpropagation through time from the |
|
|
|
00:52:12.079 --> 00:52:17.760 |
|
Transformer perspective.
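
A simplified sketch of that segment-level recurrence (it omits Transformer-XL's relative position encodings, multiple heads, and causal masking, so treat it as the idea rather than the method): the current segment attends over the concatenation of cached, gradient-detached states from the previous segment and its own states, and its states then become the cache for the next segment.

```python
import torch
import torch.nn.functional as F
from typing import Optional, Tuple

def segment_attention(h: torch.Tensor, mems: Optional[torch.Tensor],
                      wq: torch.nn.Linear, wk: torch.nn.Linear,
                      wv: torch.nn.Linear) -> Tuple[torch.Tensor, torch.Tensor]:
    """h: (seg_len, d) current segment; mems: (mem_len, d) cached previous segment (no grad)."""
    context = h if mems is None else torch.cat([mems, h], dim=0)
    q, k, v = wq(h), wk(context), wv(context)
    att = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)  # attend to memory + current segment
    out = att @ v
    new_mems = h.detach()          # cached for the next segment; no backprop into it later
    return out, new_mems
```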
|
|
|
00:52:14.480 --> 00:52:19.520 |
|
This is also really nice,
|
|
|
00:52:17.760 --> 00:52:21.200 |
|
because what it allows you to do is if |
|
|
|
00:52:19.520 --> 00:52:25.880 |
|
you have a multi-layer |
|
|
|
00:52:21.200 --> 00:52:27.720 |
|
Transformer it allows you to attend far |
|
|
|
00:52:25.880 --> 00:52:30.520 |
|
back so if you look at the last layer |
|
|
|
00:52:27.720 --> 00:52:33.520 |
|
it's attending um to things in the |
|
|
|
00:52:30.520 --> 00:52:36.599 |
|
previous context window but the second |
|
|
|
00:52:33.520 --> 00:52:39.760 |
|
to last layer is attending to things in |
|
|
|
00:52:36.599 --> 00:52:41.520 |
|
the um not just one context window |
|
|
|
00:52:39.760 --> 00:52:44.079 |
|
before but multiple context windows |
|
|
|
00:52:41.520 --> 00:52:45.760 |
|
before and actually this allows you to |
|
|
|
00:52:44.079 --> 00:52:47.880 |
|
very effectively attend a very long |
|
|
|
00:52:45.760 --> 00:52:51.720 |
|
context because each time kind of the |
|
|
|
00:52:47.880 --> 00:52:54.799 |
|
context expands in an exponential |
|
|
|
00:52:51.720 --> 00:52:56.520 |
|
manner so um recently there's a popular |
|
|
|
00:52:54.799 --> 00:52:57.799 |
|
model called Mistral that I'm sure a lot
|
|
|
00:52:56.520 --> 00:52:59.480 |
|
of people have heard about and this is |
|
|
|
00:52:57.799 --> 00:53:01.920 |
|
using sliding window attention which is |
|
|
|
00:52:59.480 --> 00:53:04.160 |
|
essentially the same mechanism proposed |
|
|
|
00:53:01.920 --> 00:53:09.240 |
|
by Transformer-XL. So this method is
|
|
|
00:53:04.160 --> 00:53:09.240 |
|
still uh used in uh very practical |
|
|
|
00:53:10.400 --> 00:53:17.359 |
|
systems another paper that has been |
|
|
|
00:53:13.440 --> 00:53:19.319 |
|
pretty influential in this general area |
|
|
|
00:53:17.359 --> 00:53:21.079 |
|
is something called sparse |
|
|
|
00:53:19.319 --> 00:53:23.359 |
|
Transformers and the way sparse |
|
|
|
00:53:21.079 --> 00:53:25.960 |
|
Transformers work is instead of |
|
|
|
00:53:23.359 --> 00:53:29.520 |
|
attending to every single previous state |
|
|
|
00:53:25.960 --> 00:53:32.640 |
|
you attend to every n-th previous
|
|
|
00:53:29.520 --> 00:53:34.599 |
|
state. And what this allows you to do is
|
|
|
00:53:32.640 --> 00:53:37.119 |
|
this allows you to essentially create |
|
|
|
00:53:34.599 --> 00:53:40.319 |
|
something like the strided uh |
|
|
|
00:53:37.119 --> 00:53:42.079 |
|
convolutions or um pyramidal recurrent |
|
|
|
00:53:40.319 --> 00:53:45.520 |
|
neural networks that I talked about |
|
|
|
00:53:42.079 --> 00:53:49.760 |
|
earlier um so what this looks like |
|
|
|
00:53:45.520 --> 00:53:51.079 |
|
essentially, is, um, like, if
|
|
|
00:53:49.760 --> 00:53:54.880 |
|
you have a particular state it might |
|
|
|
00:53:51.079 --> 00:53:56.480 |
|
attend to all of the previous n tokens,
|
|
|
00:53:54.880 --> 00:54:00.240 |
|
but then it |
|
|
|
00:53:56.480 --> 00:54:04.400 |
|
also attends to all of the |
|
|
|
00:54:00.240 --> 00:54:06.880 |
|
previous, um, kind of, m chunks. So you kind
|
|
|
00:54:04.400 --> 00:54:08.920 |
|
of have a combination of local and |
|
|
|
00:54:06.880 --> 00:54:11.640 |
|
Global |
|
|
|
00:54:08.920 --> 00:54:14.760 |
|
attention or not local and Global but |
|
|
|
00:54:11.640 --> 00:54:16.760 |
|
local and kind of longer range attention |
|
|
|
00:54:14.760 --> 00:54:18.760 |
|
and this can be very effective because |
|
|
|
00:54:16.760 --> 00:54:22.319 |
|
you can attend to you know much longer |
|
|
|
00:54:18.760 --> 00:54:24.079 |
|
context with a minimal increase in
|
|
|
00:54:22.319 --> 00:54:26.520 |
|
computational |
|
|
|
00:54:24.079 --> 00:54:28.720 |
|
complexity |
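
Below is a small sketch of one such pattern, in the spirit of the strided attention in Sparse Transformers (the paper's exact patterns differ): each position attends to a local causal window plus every stride-th earlier position.

```python
import numpy as np

def strided_attention_mask(n: int, window: int = 4, stride: int = 4) -> np.ndarray:
    """Boolean (n, n) mask: True where query position i may attend to key position j <= i."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, max(0, i - window + 1):i + 1] = True   # local causal window
        mask[i, np.arange(0, i + 1, stride)] = True    # plus every stride-th earlier position
    return mask

# Each row allows O(window + n / stride) positions rather than O(n),
# which is where the savings over full causal attention come from.
```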
|
|
|
00:54:26.520 --> 00:54:31.160 |
|
so another method that's a little bit |
|
|
|
00:54:28.720 --> 00:54:32.960 |
|
like this uh or it's very similar in |
|
|
|
00:54:31.160 --> 00:54:34.359 |
|
spirit but slightly different in |
|
|
|
00:54:32.960 --> 00:54:35.599 |
|
implementation is something called the |
|
|
|
00:54:34.359 --> 00:54:37.520 |
|
compressive |
|
|
|
00:54:35.599 --> 00:54:40.400 |
|
Transformer and in the compressive |
|
|
|
00:54:37.520 --> 00:54:43.000 |
|
Transformer you also have this idea of a |
|
|
|
00:54:40.400 --> 00:54:44.319 |
|
local memory and then a longer term |
|
|
|
00:54:43.000 --> 00:54:47.200 |
|
compressed |
|
|
|
00:54:44.319 --> 00:54:50.799 |
|
memory but you have an explicit |
|
|
|
00:54:47.200 --> 00:54:54.319 |
|
compression step that |
|
|
|
00:54:50.799 --> 00:54:58.079 |
|
directly essentially generates this uh |
|
|
|
00:54:54.319 --> 00:55:00.960 |
|
compressed memory itself, and so this is a
|
|
|
00:54:58.079 --> 00:55:04.119 |
|
little bit more flexible I guess it |
|
|
|
00:55:00.960 --> 00:55:06.280 |
|
allows you to take all of the you know |
|
|
|
00:55:04.119 --> 00:55:09.000 |
|
relevant things from your local memory |
|
|
|
00:55:06.280 --> 00:55:12.000 |
|
and compress it down so it's another |
|
|
|
00:55:09.000 --> 00:55:12.000 |
|
method that's worth thinking |
|
|
|
00:55:12.760 --> 00:55:18.400 |
|
about finally uh there are some very |
|
|
|
00:55:15.799 --> 00:55:20.200 |
|
interesting methods that do low rank |
|
|
|
00:55:18.400 --> 00:55:23.039 |
|
approximations for |
|
|
|
00:55:20.200 --> 00:55:25.920 |
|
Transformers and so calculating the |
|
|
|
00:55:23.039 --> 00:55:29.119 |
|
attention Matrix is expensive but this |
|
|
|
00:55:25.920 --> 00:55:31.640 |
|
is a matrix and because it's a matrix we |
|
|
|
00:55:29.119 --> 00:55:32.640 |
|
can also approximate it with a lower |
|
|
|
00:55:31.640 --> 00:55:35.480 |
|
rank |
|
|
|
00:55:32.640 --> 00:55:38.559 |
|
Matrix and there's a couple methods that |
|
|
|
00:55:35.480 --> 00:55:40.599 |
|
do things uh like this uh the first one |
|
|
|
00:55:38.559 --> 00:55:42.680 |
|
is something called Linformer, which
|
|
|
00:55:40.599 --> 00:55:44.520 |
|
adds low rank linear projections into |
|
|
|
00:55:42.680 --> 00:55:47.319 |
|
the model at appropriate |
|
|
|
00:55:44.520 --> 00:55:50.359 |
|
places and um there's another one called |
|
|
|
00:55:47.319 --> 00:55:52.200 |
|
Nyströmformer, which approximates using the
|
|
|
00:55:50.359 --> 00:55:54.440 |
|
Nyström method, which is based on sampling
|
|
|
00:55:52.200 --> 00:55:56.520 |
|
Landmark points but basically the |
|
|
|
00:55:54.440 --> 00:56:00.319 |
|
general idea behind this is: normally
|
|
|
00:55:56.520 --> 00:56:03.400 |
|
we do this kind of softmax over you know |
|
|
|
00:56:00.319 --> 00:56:06.240 |
|
a very large attention Vector but |
|
|
|
00:56:03.400 --> 00:56:08.440 |
|
instead we can approximate the softmax |
|
|
|
00:56:06.240 --> 00:56:11.520 |
|
by having some low rank vectors kind of |
|
|
|
00:56:08.440 --> 00:56:12.799 |
|
like what we used in LoRA, and, uh,
|
|
|
00:56:11.520 --> 00:56:16.440 |
|
nonetheless get a reasonable |
|
|
|
00:56:12.799 --> 00:56:16.440 |
|
approximation of the softmax used in attention.
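
For example, the Linformer-style approximation can be written as projecting the length dimension of the keys and values down to a small k before the softmax (E and F are the learned k-by-n projection matrices; this is a paraphrase of the idea, so see the paper for the exact per-head formulation):

```latex
% Full attention softmax(Q K^T / sqrt(d)) V costs O(n^2) in the sequence length n.
% Project K and V along the length dimension with E, F in R^{k x n}, giving roughly O(n k):
\mathrm{Attention}(Q, K, V) \;\approx\; \mathrm{softmax}\!\left(\frac{Q\,(E K)^{\top}}{\sqrt{d}}\right)(F V)
```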
|
|
|
00:56:17.799 --> 00:56:24.039 |
|
Okay, so we're nearing the end of
|
|
|
00:56:21.520 --> 00:56:26.000 |
|
what I want to talk about today and |
|
|
|
00:56:24.039 --> 00:56:29.720 |
|
finally the thing that I'd like to talk |
|
|
|
00:56:26.000 --> 00:56:33.240 |
|
about is benchmarks for long-context models,
|
|
|
00:56:29.720 --> 00:56:35.000 |
|
and there's a few benchmarks one very |
|
|
|
00:56:33.240 --> 00:56:37.359 |
|
well-known one is something called long |
|
|
|
00:56:35.000 --> 00:56:40.599 |
|
range Arena this is a composite |
|
|
|
00:56:37.359 --> 00:56:43.000 |
|
Benchmark containing mostly non NLP |
|
|
|
00:56:40.599 --> 00:56:45.280 |
|
tasks and it's definitely used for long |
|
|
|
00:56:43.000 --> 00:56:46.760 |
|
sequence modeling but the results on the |
|
|
|
00:56:45.280 --> 00:56:49.400 |
|
long range Arena actually tend to |
|
|
|
00:56:46.760 --> 00:56:51.599 |
|
diverge uh somewhat from the results |
|
|
|
00:56:49.400 --> 00:56:54.440 |
|
that you get for longdistance language |
|
|
|
00:56:51.599 --> 00:56:56.520 |
|
modeling so in addition to this another |
|
|
|
00:56:54.440 --> 00:56:58.400 |
|
benchmark that I uh personally like and |
|
|
|
00:56:56.520 --> 00:57:01.960 |
|
have used a bit is something called |
|
|
|
00:56:58.400 --> 00:57:05.720 |
|
SCROLLS, which, uh, combines together a
|
|
|
00:57:01.960 --> 00:57:07.960 |
|
whole bunch of kind of QA style or |
|
|
|
00:57:05.720 --> 00:57:10.920 |
|
summarization style tasks that have very |
|
|
|
00:57:07.960 --> 00:57:13.280 |
|
long contexts including over narratives |
|
|
|
00:57:10.920 --> 00:57:15.680 |
|
or books or government reports or other |
|
|
|
00:57:13.280 --> 00:57:17.280 |
|
things like that so you can also take a |
|
|
|
00:57:15.680 --> 00:57:20.680 |
|
look at this if you're interested in |
|
|
|
00:57:17.280 --> 00:57:20.680 |
|
kind of benchmarking longer range |
|
|
|
00:57:21.839 --> 00:57:28.280 |
|
models okay the final thing I'd like to |
|
|
|
00:57:24.559 --> 00:57:30.280 |
|
talk about is now that we have retriever |
|
|
|
00:57:28.280 --> 00:57:31.680 |
|
models we have reader models we maybe |
|
|
|
00:57:30.280 --> 00:57:34.000 |
|
even have reader models that can |
|
|
|
00:57:31.680 --> 00:57:35.520 |
|
effectively use very long contexts like |
|
|
|
00:57:34.000 --> 00:57:37.880 |
|
the ones that we retrieve over whole |
|
|
|
00:57:35.520 --> 00:57:39.240 |
|
documents how do we effectively use them |
|
|
|
00:57:37.880 --> 00:57:43.640 |
|
in our |
|
|
|
00:57:39.240 --> 00:57:46.680 |
|
models so there was a very nice paper um |
|
|
|
00:57:43.640 --> 00:57:48.880 |
|
by Nelson Liu at Stanford about a
|
|
|
00:57:46.680 --> 00:57:51.160 |
|
phenomenon that was called lost in the
|
|
|
00:57:48.880 --> 00:57:53.079 |
|
middle and basically what it does is it |
|
|
|
00:57:51.160 --> 00:57:55.119 |
|
demonstrates that many many different |
|
|
|
00:57:53.079 --> 00:57:57.720 |
|
models, including state-of-the-art
|
|
|
00:57:55.119 --> 00:58:00.799 |
|
models pay less attention to things in |
|
|
|
00:57:57.720 --> 00:58:03.960 |
|
the middle of long context windows and |
|
|
|
00:58:00.799 --> 00:58:06.760 |
|
so if we have an answer and we put it in |
|
|
|
00:58:03.960 --> 00:58:09.200 |
|
you know, the first position in
|
|
|
00:58:06.760 --> 00:58:12.280 |
|
you know a concatenated context or the |
|
|
|
00:58:09.200 --> 00:58:13.799 |
|
20th position in a concatenated context |
|
|
|
00:58:12.280 --> 00:58:15.240 |
|
it tends to attend more to the ones at |
|
|
|
00:58:13.799 --> 00:58:18.359 |
|
the beginning or the |
|
|
|
00:58:15.240 --> 00:58:19.480 |
|
end in contrast the ones in the middle |
|
|
|
00:58:18.359 --> 00:58:22.760 |
|
kind of get |
|
|
|
00:58:19.480 --> 00:58:26.680 |
|
lost hence the name lost in the middle |
|
|
|
00:58:22.760 --> 00:58:29.520 |
|
and the problem with this is you know if |
|
|
|
00:58:26.680 --> 00:58:32.480 |
|
we are doing something like retrieval and
|
|
|
00:58:29.520 --> 00:58:34.160 |
|
reading, then that's maybe not such a
|
|
|
00:58:32.480 --> 00:58:35.680 |
|
huge problem because we could just put |
|
|
|
00:58:34.160 --> 00:58:37.680 |
|
you know the highest scoring documents |
|
|
|
00:58:35.680 --> 00:58:39.920 |
|
at the beginning that might even be more |
|
|
|
00:58:37.680 --> 00:58:42.440 |
|
effective than uh you know concatenating |
|
|
|
00:58:39.920 --> 00:58:44.160 |
|
lots of low scoring documents together |
|
|
|
00:58:42.440 --> 00:58:45.559 |
|
but if we want to read a really long |
|
|
|
00:58:44.160 --> 00:58:48.839 |
|
document and synthesize something |
|
|
|
00:58:45.559 --> 00:58:52.200 |
|
without doing kind of another uh scoring |
|
|
|
00:58:48.839 --> 00:58:54.200 |
|
step uh that can be an issue and also |
|
|
|
00:58:52.200 --> 00:58:56.359 |
|
you know our retriever is not perfect so |
|
|
|
00:58:54.200 --> 00:58:58.799 |
|
we would like the model to the reader |
|
|
|
00:58:56.359 --> 00:59:00.520 |
|
model to do a good job with the outputs |
|
|
|
00:58:58.799 --> 00:59:04.839 |
|
that it |
|
|
|
00:59:00.520 --> 00:59:06.359 |
|
has so there are methods uh to ensure |
|
|
|
00:59:04.839 --> 00:59:09.440 |
|
use of relevant |
|
|
|
00:59:06.359 --> 00:59:12.119 |
|
context so of course better retrievers |
|
|
|
00:59:09.440 --> 00:59:14.880 |
|
make more relevant context you can do |
|
|
|
00:59:12.119 --> 00:59:16.240 |
|
you know reranking or other things like |
|
|
|
00:59:14.880 --> 00:59:17.280 |
|
that and only include the context that |
|
|
|
00:59:16.240 --> 00:59:19.680 |
|
looks most |
|
|
|
00:59:17.280 --> 00:59:22.880 |
|
relevant um or you know refine your |
|
|
|
00:59:19.680 --> 00:59:25.200 |
|
reader model but there's also methods |
|
|
|
00:59:22.880 --> 00:59:28.720 |
|
that can decide whether context should
|
|
|
00:59:25.200 --> 00:59:32.400 |
|
be used in the first place so um there |
|
|
|
00:59:28.720 --> 00:59:35.440 |
|
are methods uh to decide whether to use |
|
|
|
00:59:32.400 --> 00:59:37.559 |
|
whether to include passages or not and |
|
|
|
00:59:35.440 --> 00:59:39.920 |
|
also uh recently we proposed a method to |
|
|
|
00:59:37.559 --> 00:59:42.640 |
|
filter down to parts of retrieve |
|
|
|
00:59:39.920 --> 00:59:44.920 |
|
passages uh to have only appropriate |
|
|
|
00:59:42.640 --> 00:59:47.480 |
|
content and this is a model uh that we |
|
|
|
00:59:44.920 --> 00:59:49.319 |
|
called FILCO. It basically filters the
|
|
|
00:59:47.480 --> 00:59:52.160 |
|
context down to the most relevant |
|
|
|
00:59:49.319 --> 00:59:53.920 |
|
content that we think is appropriate and |
|
|
|
00:59:52.160 --> 00:59:56.960 |
|
that allows us to get better results |
|
|
|
00:59:53.920 --> 00:59:56.960 |
|
when it's fed to the |
|
|
|
00:59:57.079 --> 01:00:03.640 |
|
generator so that's all I have for today |
|
|
|
01:00:00.319 --> 01:00:06.200 |
|
um thank you for watching the video and |
|
|
|
01:00:03.640 --> 01:00:08.599 |
|
for people in the class I'll be happy to |
|
|
|
01:00:06.200 --> 01:00:13.079 |
|
take questions on Piazza or during the
|
|
|
01:00:08.599 --> 01:00:13.079 |
|
office hours that I had planned thanks a |
|
|
|
01:00:15.319 --> 01:00:18.319 |
|
lot |
|
|