WEBVTT
00:00:02.720 --> 00:00:06.720
yeah today I'll talk about fine tuning
00:00:04.400 --> 00:00:09.599
and instruction tuning uh so this is
00:00:06.720 --> 00:00:12.679
kind of the first step in the pipeline
00:00:09.599 --> 00:00:14.480
of steps that people use to prepare
00:00:12.679 --> 00:00:16.320
models to be ready to be used as
00:00:14.480 --> 00:00:20.760
chatbots like you know what you see in
00:00:16.320 --> 00:00:22.880
ChatGPT or, uh, Gemini, or whatever else
00:00:20.760 --> 00:00:26.240
you want to be
00:00:22.880 --> 00:00:28.240
using and what
00:00:26.240 --> 00:00:29.679
this basically takes
00:00:28.240 --> 00:00:32.160
advantage of is that we have many many
00:00:29.679 --> 00:00:33.200
different tasks that we can be solving
00:00:32.160 --> 00:00:35.960
in
00:00:33.200 --> 00:00:37.160
NLP and each requires different
00:00:35.960 --> 00:00:40.440
varieties of
00:00:37.160 --> 00:00:42.680
data. So up until this point we've
00:00:40.440 --> 00:00:46.239
talked a lot about the varieties of
00:00:42.680 --> 00:00:47.520
tasks that only require text uh such as
00:00:46.239 --> 00:00:51.600
language
00:00:47.520 --> 00:00:54.280
modeling and then we also have other
00:00:51.600 --> 00:00:56.160
varieties of tasks that require only
00:00:54.280 --> 00:00:58.160
naturally occurring data so like data
00:00:56.160 --> 00:01:01.600
that we don't actually have to create by
00:00:58.160 --> 00:01:04.560
hand, or that we don't have to
00:01:01.600 --> 00:01:08.240
create by hand for the purpose of
00:01:04.560 --> 00:01:10.840
training, like, language models or, uh,
00:01:08.240 --> 00:01:12.680
NLP models and this includes stuff like
00:01:10.840 --> 00:01:14.280
machine translation and the reason why
00:01:12.680 --> 00:01:16.240
we have lots of machine translation data
00:01:14.280 --> 00:01:19.479
is people do translation anyway even if
00:01:16.240 --> 00:01:20.799
we didn't have, like, ChatGPT or Google
00:01:19.479 --> 00:01:22.600
Translate or something, people would be
00:01:20.799 --> 00:01:24.920
doing translation a lot of this data can
00:01:22.600 --> 00:01:27.400
be used to train
00:01:24.920 --> 00:01:29.640
models then other things are hand
00:01:27.400 --> 00:01:33.040
labeled data and so this is like for a
00:01:29.640 --> 00:01:35.159
lot of things like question answering or
00:01:33.040 --> 00:01:37.280
um other
00:01:35.159 --> 00:01:40.000
tasks that you need to create data like
00:01:37.280 --> 00:01:42.399
named entity recognition or stuff like this
00:01:40.000 --> 00:01:44.079
there that data really mostly isn't
00:01:42.399 --> 00:01:46.159
naturally occurring so we need to go in
00:01:44.079 --> 00:01:47.960
and actually create it by hand in order
00:01:46.159 --> 00:01:50.399
to do
00:01:47.960 --> 00:01:53.280
training so like one of the interesting
00:01:50.399 --> 00:01:54.840
things about, you know, the whole paradigm
00:01:53.280 --> 00:01:57.960
of training language models over the
00:01:54.840 --> 00:02:00.880
past several years is that we have
00:01:57.960 --> 00:02:03.439
been remarkably successful in getting
00:02:00.880 --> 00:02:07.640
models to work at a very large number of
00:02:03.439 --> 00:02:09.319
tasks by training only on text so you
00:02:07.640 --> 00:02:11.920
know we train something like llama we
00:02:09.319 --> 00:02:13.720
train something like the early GPT models
00:02:11.920 --> 00:02:16.360
that were trained only on text without
00:02:13.720 --> 00:02:19.560
uh very much supervised training
00:02:16.360 --> 00:02:21.680
data. And the reason why is, like, what
00:02:19.560 --> 00:02:23.920
I mentioned last class which is like
00:02:21.680 --> 00:02:27.239
actually a lot of data on the internet
00:02:23.920 --> 00:02:28.760
just occurs in this form anyway so we
00:02:27.239 --> 00:02:31.519
have
00:02:28.760 --> 00:02:34.840
uh things
00:02:31.519 --> 00:02:36.519
like phrase books that appear online and
00:02:34.840 --> 00:02:38.959
these phrase books weren't explicitly
00:02:36.519 --> 00:02:41.000
created it's machine translation data or
00:02:38.959 --> 00:02:43.519
translation data even but they appear
00:02:41.000 --> 00:02:46.519
online and there's actually a
00:02:43.519 --> 00:02:46.519
paper
00:02:49.879 --> 00:02:53.959
um that examines
00:02:58.519 --> 00:03:03.159
this. I didn't cite it in the slides, but
00:03:01.440 --> 00:03:08.440
it's a kind of interesting paper
00:03:03.159 --> 00:03:10.200
from ACL this year where they find that
00:03:08.440 --> 00:03:12.680
despite the fact that there's a
00:03:10.200 --> 00:03:14.920
language model that was trained on just
00:03:12.680 --> 00:03:17.360
you know random data from the web they
00:03:14.920 --> 00:03:20.799
found over 30 million translation pairs
00:03:17.360 --> 00:03:22.959
across at least 44 languages um in this
00:03:20.799 --> 00:03:25.920
data that was just, like, scraped from the web,
00:03:22.959 --> 00:03:28.080
not you know explicitly for translation
00:03:25.920 --> 00:03:32.000
and so there's lots of other examples of
00:03:28.080 --> 00:03:35.239
this uh you know question pairs from FAQ
00:03:32.000 --> 00:03:38.280
pages on sites or other things like that
00:03:35.239 --> 00:03:41.319
so but anyway yeah getting back to the
00:03:38.280 --> 00:03:43.959
original uh the original thing here in
00:03:41.319 --> 00:03:47.120
many cases uh your models will have
00:03:43.959 --> 00:03:48.640
already been exposed to some data uh
00:03:47.120 --> 00:03:51.319
there's some naturally occurring data
00:03:48.640 --> 00:03:53.239
that you can Harvest and curate in an
00:03:51.319 --> 00:03:54.720
appropriate way and then sometimes if
00:03:53.239 --> 00:03:56.720
you really want models to do something
00:03:54.720 --> 00:03:57.799
well, you can do hand labeling, but that's very
00:03:56.720 --> 00:04:00.959
expensive and you're not going to be
00:03:57.799 --> 00:04:04.720
able to create very much data
00:04:00.959 --> 00:04:07.319
so one very funny thing is, uh, I was
00:04:04.720 --> 00:04:10.079
playing around with GPT for
00:04:07.319 --> 00:04:11.879
translation and I asked it to translate
00:04:10.079 --> 00:04:15.239
from English to
00:04:11.879 --> 00:04:17.079
Japanese and it did really well most of
00:04:15.239 --> 00:04:19.639
the time it did you know very good
00:04:17.079 --> 00:04:23.880
translations on English to Japanese and
00:04:19.639 --> 00:04:25.160
like 900 out of a thousand examples;
00:04:23.880 --> 00:04:26.680
sometimes it just got it wrong because
00:04:25.160 --> 00:04:28.199
it's not a perfect translation system
00:04:26.680 --> 00:04:30.560
but every once in a while it would
00:04:28.199 --> 00:04:32.320
translate into Japanese uh which is in
00:04:30.560 --> 00:04:35.280
Japanese characters and then it would
00:04:32.320 --> 00:04:36.280
translate into romanized Japanese into
00:04:35.280 --> 00:04:41.160
like the
00:04:36.280 --> 00:04:42.520
pronunciation um so no Japanese
00:04:41.160 --> 00:04:44.080
translator that you ask to translate
00:04:42.520 --> 00:04:45.759
into Japanese would ever do that that
00:04:44.080 --> 00:04:47.039
would be like extremely unprofessional
00:04:45.759 --> 00:04:48.639
right? You know, you're saying,
00:04:47.039 --> 00:04:51.720
please translate this into Japanese for
00:04:48.639 --> 00:04:55.360
Japanese speakers. But why would GPT do
00:04:51.720 --> 00:04:55.360
this anyone have any
00:04:56.639 --> 00:05:02.240
ideas yeah someone on the internet
00:05:00.600 --> 00:05:05.240
yeah someone on the internet did it that
00:05:02.240 --> 00:05:07.199
way so a lot of the times when you have
00:05:05.240 --> 00:05:08.560
incidental training data on the internet
00:05:07.199 --> 00:05:10.280
it would be from like phrase books and
00:05:08.560 --> 00:05:12.800
people who are trying to teach Japanese
00:05:10.280 --> 00:05:14.880
for example so every once in a while
00:05:12.800 --> 00:05:16.840
like GPT got the idea that it should be
00:05:14.880 --> 00:05:18.840
translating like it did in a phrase book
00:05:16.840 --> 00:05:21.199
for Japanese Learners as opposed to you
00:05:18.840 --> 00:05:23.759
know like actually English to Japanese
00:05:21.199 --> 00:05:26.400
translations so the problem is if you're
00:05:23.759 --> 00:05:28.560
learning only on this language modeling
00:05:26.400 --> 00:05:29.919
based text you might get exactly what
00:05:28.560 --> 00:05:31.440
you want, but every once in a while
00:05:29.919 --> 00:05:34.160
you'll get something completely crazy
00:05:31.440 --> 00:05:36.919
that you never expected to happen so uh
00:05:34.160 --> 00:05:39.639
that's the problem with just relying on
00:05:36.919 --> 00:05:39.639
base language
00:05:41.280 --> 00:05:47.240
models. So all the methods that I'm
00:05:44.880 --> 00:05:50.560
going to be talking about here uh fall
00:05:47.240 --> 00:05:52.600
under the class of multitask learning,
00:05:50.560 --> 00:05:54.600
and so multitask learning is training
00:05:52.600 --> 00:05:57.759
models to do well on multiple tasks at
00:05:54.600 --> 00:05:59.160
once um just to give an example uh you
00:05:57.759 --> 00:06:01.400
could have this as an example and you
00:05:59.160 --> 00:06:02.919
could be doing language modeling on it
00:06:01.400 --> 00:06:04.720
you could also be training a model to do
00:06:02.919 --> 00:06:06.720
tagging on it and other things like this
00:06:04.720 --> 00:06:10.319
and exactly how you do this can be
00:06:06.720 --> 00:06:13.560
different but the important thing is
00:06:10.319 --> 00:06:15.599
that you have some shared parameters
00:06:13.560 --> 00:06:17.840
between the models that are trained on
00:06:15.599 --> 00:06:19.280
all tasks and if you're just training a
00:06:17.840 --> 00:06:21.360
big language model then you'll probably
00:06:19.280 --> 00:06:25.440
be sharing all of the parameters if
00:06:21.360 --> 00:06:27.199
you're training a uh something like Bert
00:06:25.440 --> 00:06:29.080
or like you're you're pre-training and
00:06:27.199 --> 00:06:31.000
then fine-tuning, you might train the
00:06:29.080 --> 00:06:32.800
body of the model on multiple tasks but
00:06:31.000 --> 00:06:35.479
have a separate classification head for
00:06:32.800 --> 00:06:37.520
different tasks so there's different
00:06:35.479 --> 00:06:40.880
ways you can do that but the basic idea
00:06:37.520 --> 00:06:40.880
is that you need to have lots of shared
00:06:40.960 --> 00:06:46.280
parameters um one easy way to do this uh
00:06:44.160 --> 00:06:49.479
the very simplest way to do this is to
00:06:46.280 --> 00:06:51.800
train the model and sample one mini-
00:06:49.479 --> 00:06:53.520
batch for one task, another mini-batch
00:06:51.800 --> 00:06:55.720
for another task and just alternate
00:06:53.520 --> 00:06:58.400
between them or alternate between them
00:06:55.720 --> 00:07:01.400
and sample four from one task and one
00:06:58.400 --> 00:07:03.879
from another task. So, uh, it's often as
00:07:01.400 --> 00:07:03.879
simple as that.
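(A minimal sketch of that alternating mini-batch scheme; the model, heads, data, and shapes below are all made up for illustration, not from the lecture:)

```python
import torch
import torch.nn as nn

# Shared encoder with two task-specific heads: the "shared parameters" idea.
shared = nn.Linear(16, 32)     # shared across all tasks
lm_head = nn.Linear(32, 100)   # language modeling head (toy vocabulary of 100)
tag_head = nn.Linear(32, 5)    # tagging head (5 toy tags)
opt = torch.optim.Adam(
    list(shared.parameters()) + list(lm_head.parameters())
    + list(tag_head.parameters()), lr=1e-3)

def step(head, x, y):
    # One mini-batch update through the shared encoder plus one task head.
    loss = nn.functional.cross_entropy(head(shared(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

for _ in range(10):
    # Alternate: one LM mini-batch, then one tagging mini-batch.
    # A skewed ratio (say, 99 LM batches per tagging batch) just
    # changes how the two streams are interleaved.
    step(lm_head, torch.randn(8, 16), torch.randint(0, 100, (8,)))
    step(tag_head, torch.randn(8, 16), torch.randint(0, 5, (8,)))
```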
00:07:04.199 --> 00:07:08.599
Or you can
00:07:06.840 --> 00:07:11.319
just mix all of the data together so if
00:07:08.599 --> 00:07:12.639
you're doing like text um everything is
00:07:11.319 --> 00:07:15.280
text based then you don't even need to
00:07:12.639 --> 00:07:15.280
worry about mini
00:07:15.560 --> 00:07:21.440
batches. Cool, so separately from this, uh,
00:07:18.759 --> 00:07:23.960
pre-train and fine-tune: in pre-train
00:07:21.440 --> 00:07:26.360
and fine-tune you first train on one
00:07:23.960 --> 00:07:28.240
task and then on another and the way
00:07:26.360 --> 00:07:30.599
this works is you first train for
00:07:28.240 --> 00:07:31.960
example, a language modeling objective, and
00:07:30.599 --> 00:07:35.199
then after you're done training the
00:07:31.960 --> 00:07:37.440
language modeling objective you uh you
00:07:35.199 --> 00:07:41.360
train on something else, like tagging.
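(Continuing the same toy sketch, reusing its hypothetical `step`, `lm_head`, and `tag_head`: pre-train-and-fine-tune is just two sequential phases instead of interleaved batches.)

```python
# Phase 1: train only on the language modeling objective.
for _ in range(100):
    step(lm_head, torch.randn(8, 16), torch.randint(0, 100, (8,)))

# Phase 2: once LM training is done, keep the shared parameters and
# continue training on the downstream task (tagging here).
for _ in range(10):
    step(tag_head, torch.randn(8, 16), torch.randint(0, 5, (8,)))
```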
00:07:37.440 --> 00:07:43.520
And there's several reasons why
00:07:41.360 --> 00:07:45.199
you might want to do this. Does anyone
00:07:43.520 --> 00:07:48.479
have an idea about why you might want to
00:07:45.199 --> 00:07:50.720
do this as opposed to something like
00:07:48.479 --> 00:07:53.319
standard multitask learning where you do
00:07:50.720 --> 00:07:57.000
both of them at the same time this is a
00:07:53.319 --> 00:07:57.000
straightforward question perhaps
00:07:57.479 --> 00:08:03.520
but now when I say straightforward I
00:08:00.039 --> 00:08:03.520
don't mean easy, I mean not a trick
00:08:03.599 --> 00:08:06.800
question any
00:08:09.039 --> 00:08:15.120
ideas um okay how many of you have
00:08:11.800 --> 00:08:17.960
trained uh a 70 billion parameter
00:08:15.120 --> 00:08:17.960
language model from
00:08:18.960 --> 00:08:23.080
scratch I see somebody I see somebody
00:08:21.360 --> 00:08:27.680
saying maybe so that's actually pretty
00:08:23.080 --> 00:08:27.680
impressive. But, um, why
00:08:27.720 --> 00:08:31.240
not yeah
00:08:31.800 --> 00:08:35.440
yeah, it's unbelievably
00:08:33.680 --> 00:08:37.320
expensive and a waste of resources yeah
00:08:35.440 --> 00:08:39.440
so like if everybody was doing it it
00:08:37.320 --> 00:08:41.240
would be a waste of resources so we
00:08:39.440 --> 00:08:42.479
actually benefit a lot by a very small
00:08:41.240 --> 00:08:45.600
number of people doing this pre-
00:08:42.479 --> 00:08:48.240
training and then the rest of us doing
00:08:45.600 --> 00:08:50.560
you know fine tuning uh on a smaller
00:08:48.240 --> 00:08:53.320
amount of data so if you were doing all
00:08:50.560 --> 00:08:55.040
the multitasking uh from scratch then
00:08:53.320 --> 00:08:57.600
that could be a
00:08:55.040 --> 00:09:01.200
waste does anyone have an idea why you
00:08:57.600 --> 00:09:01.200
might not want to do this
00:09:02.640 --> 00:09:06.800
Or actually, there's some other
00:09:04.079 --> 00:09:08.240
reasons why you might want to do this um
00:09:06.800 --> 00:09:10.480
another reason why you might want to do
00:09:08.240 --> 00:09:13.240
this is for example if your pre-training
00:09:10.480 --> 00:09:15.040
data is big and messy uh like for
00:09:13.240 --> 00:09:17.600
example if your pre-training data is all
00:09:15.040 --> 00:09:20.600
of the internet, and the internet
00:09:17.600 --> 00:09:22.000
contains like lots of toxic text and
00:09:20.600 --> 00:09:23.640
text that's in a format that you don't
00:09:22.000 --> 00:09:25.959
want you can still train on it and learn
00:09:23.640 --> 00:09:28.800
from it but then fine-tuning can you
00:09:25.959 --> 00:09:32.000
know make your model safer or uh remove
00:09:28.800 --> 00:09:33.360
toxicity or other things like that as well. So does
00:09:32.000 --> 00:09:34.480
anyone have an idea why you might not
00:09:33.360 --> 00:09:38.480
want to do
00:09:34.480 --> 00:09:38.480
this. This is a trickier
00:09:40.200 --> 00:09:43.440
question any
00:09:45.320 --> 00:09:49.720
ideas? Or, to put it in a different way,
00:09:48.079 --> 00:09:52.880
why you might want to do standard
00:09:49.720 --> 00:09:56.000
multitasking instead of this yeah just
00:09:52.880 --> 00:09:59.279
again. So if you don't have much tagging
00:09:56.000 --> 00:10:01.480
data for example then you might consider
00:09:59.279 --> 00:10:01.480
like
00:10:02.399 --> 00:10:10.200
doing uh so if you have lots of tagging
00:10:06.480 --> 00:10:12.320
data... yeah, so I
00:10:10.200 --> 00:10:13.560
think, basically, this is a good
00:10:12.320 --> 00:10:15.880
point so if you don't have lots of
00:10:13.560 --> 00:10:17.240
tagging data um you might have much much
00:10:15.880 --> 00:10:18.800
more language modeling data than you
00:10:17.240 --> 00:10:21.200
have tagging data so it's a better idea
00:10:18.800 --> 00:10:24.959
to train more on it. That is true, but you
00:10:21.200 --> 00:10:26.519
could sample like 99 mini batches of uh
00:10:24.959 --> 00:10:29.480
of language modeling data and one mini-
00:10:26.519 --> 00:10:31.399
batch of tagging data, or 999 of language
00:10:29.480 --> 00:10:34.480
modeling data.
00:10:31.399 --> 00:10:37.040
So you're going in a good
00:10:34.480 --> 00:10:40.040
direction anything
00:10:37.040 --> 00:10:40.040
else
00:10:44.639 --> 00:10:50.800
yeah uh so if your pre-training data has
00:10:48.959 --> 00:10:52.000
certain biases, you might inherit them. Do
00:10:50.800 --> 00:10:54.240
you think that's a bigger problem with
00:10:52.000 --> 00:10:56.839
pre-training, or pre-training and fine-tuning,
00:10:54.240 --> 00:10:56.839
or standard
00:10:58.040 --> 00:11:01.040
multitask learning?
00:11:18.600 --> 00:11:23.240
Yeah, um, so you might lose some
00:11:21.320 --> 00:11:25.560
of the information that exists in the
00:11:23.240 --> 00:11:27.480
multitask data set I think that's pretty
00:11:25.560 --> 00:11:29.560
close to what I'm going to say so let me
00:11:27.480 --> 00:11:30.920
um, let me just go ahead and give the answer;
00:11:29.560 --> 00:11:35.160
hopefully everybody had time to think
00:11:30.920 --> 00:11:37.279
about it but um this is a paper that we
00:11:35.160 --> 00:11:40.320
wrote previously and basically one
00:11:37.279 --> 00:11:41.320
interesting thing is that you actually
00:11:40.320 --> 00:11:44.560
do
00:11:41.320 --> 00:11:47.320
better um you do better if you train on
00:11:44.560 --> 00:11:50.160
multiple tasks at the same time and our
00:11:47.320 --> 00:11:51.480
hypothesis about why, the reason, uh, you
00:11:50.160 --> 00:11:53.279
do better on the end task that you
00:11:51.480 --> 00:11:55.200
finally want to do well on compared to
00:11:53.279 --> 00:11:58.079
pre-training and fine tuning and our
00:11:55.200 --> 00:12:01.079
hypothesis about this um which I've also
00:11:58.079 --> 00:12:03.160
seen in a few other works, is: if you
00:12:01.079 --> 00:12:05.040
pre-train on the task that you finally
00:12:03.160 --> 00:12:07.959
want to solve while you're also solving
00:12:05.040 --> 00:12:12.120
the language modeling task
00:12:07.959 --> 00:12:14.279
essentially the model is learning
00:12:12.120 --> 00:12:17.000
representations that are useful for both
00:12:14.279 --> 00:12:18.839
at the same time as opposed to if you're
00:12:17.000 --> 00:12:20.680
training on the language modeling task
00:12:18.839 --> 00:12:22.079
it will be learning representations that
00:12:20.680 --> 00:12:24.440
are useful for the language modeling
00:12:22.079 --> 00:12:26.079
task but not necessarily focusing on the
00:12:24.440 --> 00:12:28.639
representations that would be useful for
00:12:26.079 --> 00:12:31.360
the end task. So, like, for example, if you're
00:12:28.639 --> 00:12:33.040
jointly training on sentiment analysis
00:12:31.360 --> 00:12:34.600
and language modeling the
00:12:33.040 --> 00:12:36.600
representations that are useful for
00:12:34.600 --> 00:12:39.600
sentiment analysis will be more salient
00:12:36.600 --> 00:12:43.199
in the model, essentially,
00:12:39.600 --> 00:12:45.160
and so, um, that will be particularly a
00:12:43.199 --> 00:12:46.639
problem when you
00:12:45.160 --> 00:12:49.560
have a
00:12:46.639 --> 00:12:51.519
like, a varied optimization landscape with
00:12:49.560 --> 00:12:53.199
multiple local optima, and the language
00:12:51.519 --> 00:12:55.199
modeling might not get you into the
00:12:53.199 --> 00:12:57.480
global optimum, uh, that you want for the
00:12:55.199 --> 00:12:59.279
end task that you're solving there's
00:12:57.480 --> 00:13:02.519
also another interesting paper from
00:12:59.279 --> 00:13:04.120
Anthropic, more recently than ours, that
00:13:02.519 --> 00:13:05.399
shows something a little bit similar
00:13:04.120 --> 00:13:06.279
specifically from the point of view of
00:13:05.399 --> 00:13:08.760
safety
00:13:06.279 --> 00:13:12.000
training and they demonstrate that if
00:13:08.760 --> 00:13:14.040
you start out by having a concept of
00:13:12.000 --> 00:13:17.279
safety early in your training you're
00:13:14.040 --> 00:13:19.600
able to reach better um better final
00:13:17.279 --> 00:13:21.000
results than if you start safety
00:13:19.600 --> 00:13:23.760
training after you trained your model
00:13:21.000 --> 00:13:26.480
for a while so um and this is
00:13:23.760 --> 00:13:28.880
particularly for things like toxicity, too.
00:13:26.480 --> 00:13:30.920
So there are downsides to pre-training and
00:13:28.880 --> 00:13:32.720
fine-tuning, but the upside of, you know,
00:13:30.920 --> 00:13:34.360
spending lots of compute once and then fine-
00:13:32.720 --> 00:13:36.440
tuning for lots of different you know
00:13:34.360 --> 00:13:40.839
downstream tasks is, like, large enough
00:13:36.440 --> 00:13:40.839
that that's still the standard not
00:13:41.160 --> 00:13:49.720
this um any questions about
00:13:44.920 --> 00:13:49.720
that okay cool let's uh let's move
00:13:49.959 --> 00:13:55.040
on um so we talked about prompting
00:13:53.199 --> 00:13:57.399
before I'm just going to go over that
00:13:55.040 --> 00:13:59.920
very quickly, you know, just to say it for
00:13:57.399 --> 00:14:03.079
completeness but when we're prompting uh
00:13:59.920 --> 00:14:04.839
we have an encoder uh we train it on
00:14:03.079 --> 00:14:07.000
language modeling or whatever else but
00:14:04.839 --> 00:14:10.399
then we freeze it and then we specify
00:14:07.000 --> 00:14:13.240
the task by a prefix like
00:14:10.399 --> 00:14:15.000
this. And what instruction tuning
00:14:13.240 --> 00:14:17.240
does is instruction tuning is like a
00:14:15.000 --> 00:14:20.839
combination of fine-tuning and prompting
00:14:17.240 --> 00:14:23.160
and so what we do is we pre-train and
00:14:20.839 --> 00:14:27.360
then we
00:14:23.160 --> 00:14:29.040
oh sorry, uh, I guess I failed to update
00:14:27.360 --> 00:14:31.440
the figure here, so this is just a
00:14:29.040 --> 00:14:37.199
figure for fine-tuning. So normally what you
00:14:31.440 --> 00:14:39.519
do is you um you have a prompt for one
00:14:37.199 --> 00:14:42.480
task a prompt for another task a prompt
00:14:39.519 --> 00:14:45.440
for another task and then you uh train
00:14:42.480 --> 00:14:47.040
your model specifically so that it does
00:14:45.440 --> 00:14:49.079
good completions of those prompts, and I'll
00:14:47.040 --> 00:14:51.680
give some actual examples of that, right?
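(For concreteness, one hypothetical instruction-tuning instance might be formatted like this; the field names and template are illustrative, not from any particular dataset:)

```python
# One training example: an instruction-style prompt plus the target completion.
example = {
    "instruction": "Translate the following sentence into Japanese.",
    "input": "Where is the station?",
    "output": "駅はどこですか?",
}

prompt = (f"Instruction: {example['instruction']}\n"
          f"Input: {example['input']}\n"
          f"Response:")
target = " " + example["output"]
# Training then maximizes the likelihood of `target` given `prompt`.
```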
00:14:49.079 --> 00:14:54.199
So yeah, sorry, the figure, uh, I will
00:14:51.680 --> 00:14:54.199
need to fix
00:14:57.680 --> 00:15:03.079
it
00:15:00.399 --> 00:15:03.079
just taking a
00:15:03.800 --> 00:15:11.399
note
00:15:06.560 --> 00:15:14.560
um okay so we haven't really covered
00:15:11.399 --> 00:15:16.279
um fine tuning yet in general so I want
00:15:14.560 --> 00:15:18.240
to talk a little bit about what we do uh
00:15:16.279 --> 00:15:20.160
for fine tuning and particularly what we
00:15:18.240 --> 00:15:22.079
do for fine tuning very large models
00:15:20.160 --> 00:15:23.639
because I think that's what a lot of
00:15:22.079 --> 00:15:27.680
people want to do
00:15:23.639 --> 00:15:30.360
nowadays. So, for full fine-tuning: um, full
00:15:27.680 --> 00:15:31.120
fine-tuning is relatively easy. Uh, what
00:15:30.360 --> 00:15:35.120
we
00:15:31.120 --> 00:15:36.920
do, um, is easy conceptually, hard in practice.
00:15:35.120 --> 00:15:40.360
so what we do is we simply continue
00:15:36.920 --> 00:15:43.480
training the language model on uh
00:15:40.360 --> 00:15:45.839
whatever data we want to be fitting to
00:15:43.480 --> 00:15:47.240
so this could be like translation pairs
00:15:45.839 --> 00:15:49.199
it could be question answering pairs it
00:15:47.240 --> 00:15:52.000
could be anything else like
00:15:49.199 --> 00:15:53.839
that um but the issue is depending on
00:15:52.000 --> 00:15:56.720
the method that you're using to optimize
00:15:53.839 --> 00:15:59.120
your model uh the method can take lots
00:15:56.720 --> 00:16:00.959
of memory and also in some some cases it
00:15:59.120 --> 00:16:02.319
can be relatively unstable compared to
00:16:00.959 --> 00:16:04.240
some other alternatives that I'm going
00:16:02.319 --> 00:16:07.079
to talk about in a
00:16:04.240 --> 00:16:10.440
bit and just to give an example uh
00:16:07.079 --> 00:16:13.560
training a 65 billion parameter model uh
00:16:10.440 --> 00:16:16.319
which is the largest version of LLaMA 1,
00:16:13.560 --> 00:16:18.880
uh, with 16-bit mixed precision, actually
00:16:16.319 --> 00:16:21.759
takes uh much more memory than you would
00:16:18.880 --> 00:16:26.440
expect uh if you haven't done this
00:16:21.759 --> 00:16:29.240
before so if you look at the amount of
00:16:26.440 --> 00:16:32.160
memory required for
00:16:29.240 --> 00:16:34.120
holding the model in the first place if
00:16:32.160 --> 00:16:38.040
we have 65 billion
00:16:34.120 --> 00:16:40.120
parameters, uh, times two bytes, that would be
00:16:38.040 --> 00:16:43.160
130 gigabytes of memory already so
00:16:40.120 --> 00:16:47.079
that's already a lot of memory right but
00:16:43.160 --> 00:16:49.639
if we want to do um if we want to hold
00:16:47.079 --> 00:16:52.399
both the parameters and the gradients of
00:16:49.639 --> 00:16:55.839
the model um obviously we need to double
00:16:52.399 --> 00:16:58.240
the number of, uh, bytes here, so we
00:16:55.839 --> 00:16:59.880
double: we also have 130 gigabytes for the
00:16:58.240 --> 00:17:01.880
param...
00:16:59.880 --> 00:17:04.160
uh, sorry, for the
00:17:01.880 --> 00:17:06.240
gradients then we have the optimizer and
00:17:04.160 --> 00:17:09.039
this could be an optimizer like Adam. If
00:17:06.240 --> 00:17:10.959
people remember, Adam has first moments
00:17:09.039 --> 00:17:12.360
and second moments so it has the mean
00:17:10.959 --> 00:17:13.280
and something that looks like
00:17:12.360 --> 00:17:15.160
the
00:17:13.280 --> 00:17:17.839
variance
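(For reference, the two per-parameter states Adam keeps are exponential moving averages of the gradient and the squared gradient; this is standard Adam, not specific to the paper discussed next:)

```python
beta1, beta2 = 0.9, 0.999             # standard Adam defaults
m, v, g = 0.0, 0.0, 0.5               # state and an example gradient, one parameter
m = beta1 * m + (1 - beta1) * g       # first moment: running mean of gradients
v = beta2 * v + (1 - beta2) * g ** 2  # second moment: looks like a variance
```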
00:17:15.160 --> 00:17:20.079
and
00:17:17.839 --> 00:17:21.240
these at least according to this paper
00:17:20.079 --> 00:17:25.280
from
00:17:21.240 --> 00:17:28.760
2019 uh needed to be stored in 32
00:17:25.280 --> 00:17:31.520
bits so um these needed to be stored in
00:17:28.760 --> 00:17:33.960
uh 32 bits of memory because if you
00:17:31.520 --> 00:17:35.480
stored them in smaller amounts of memory
00:17:33.960 --> 00:17:39.000
they would have underflow issues
00:17:35.480 --> 00:17:40.640
overflow issues and uh basically uh the
00:17:39.000 --> 00:17:43.960
numerical precision would destabilize
00:17:40.640 --> 00:17:47.000
your training and then in addition the
00:17:43.960 --> 00:17:49.440
parameters also needed to be stored in
00:17:47.000 --> 00:17:51.760
uh, 32 bits, so you needed a 32-bit
00:17:49.440 --> 00:17:54.280
copy of the
00:17:51.760 --> 00:17:55.919
parameters this is just the parameters
00:17:54.280 --> 00:17:57.320
of the model and then separately from
00:17:55.919 --> 00:17:59.320
that you also need to do the forward and
00:17:57.320 --> 00:18:01.039
backward passes and so if you do the
00:17:59.320 --> 00:18:04.640
forward and backward passes depending on
00:18:01.039 --> 00:18:07.520
how big your batch size is how many uh
00:18:04.640 --> 00:18:09.120
tokens you have in each instance this
00:18:07.520 --> 00:18:11.559
could take you know significant amounts
00:18:09.120 --> 00:18:14.679
of memory too like 100 to 200
00:18:11.559 --> 00:18:17.679
gigabytes so overall this would take
00:18:14.679 --> 00:18:21.240
around 1,000 to 1,400 gigabytes of GPU
00:18:17.679 --> 00:18:24.520
memory in the very naive scenario and
00:18:21.240 --> 00:18:27.360
this is, uh, not that great.
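(A back-of-the-envelope version of that accounting, following the byte counts just described:)

```python
params = 65e9                    # LLaMA-1 65B

weights_16bit = 2 * params       # 16-bit weights
grads_16bit   = 2 * params       # 16-bit gradients
adam_moments  = 8 * params       # two fp32 Adam moments, 4 bytes each
master_fp32   = 4 * params       # 32-bit copy of the parameters

total = weights_16bit + grads_16bit + adam_moments + master_fp32
print(total / 1e9)               # 1040.0 GB, before activations
# Forward/backward activations add very roughly another 100-200 GB,
# which is where the ~1,000-1,400 GB figure comes from.
```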
00:18:24.520 --> 00:18:30.440
Now, this paper was written in
00:18:27.360 --> 00:18:33.880
2019, and there have been some, uh,
00:18:30.440 --> 00:18:36.440
advances since then in optimizing models
00:18:33.880 --> 00:18:37.720
so to give some examples of things that
00:18:36.440 --> 00:18:39.400
can be
00:18:37.720 --> 00:18:43.000
fixed
00:18:39.400 --> 00:18:47.520
previously when we were using
00:18:43.000 --> 00:18:49.280
fp16, uh, so like the regular, uh, standard
00:18:47.520 --> 00:18:53.280
floating-point numbers like we use on
00:18:49.280 --> 00:18:55.400
our CPU. You needed 32-bit
00:18:53.280 --> 00:18:57.840
floats, uh, 32-bit floats, to make this
00:18:55.400 --> 00:19:01.080
stable now it's pretty standard to use
00:18:57.840 --> 00:19:04.799
BF16, uh, brain float 16, like I talked
00:19:01.080 --> 00:19:06.559
about earlier in the uh in the class and
00:19:04.799 --> 00:19:08.799
because of that this can be made more
00:19:06.559 --> 00:19:11.880
stable so you can reduce this to things
00:19:08.799 --> 00:19:15.919
like two bytes instead of four bytes uh
00:19:11.880 --> 00:19:17.159
we can also, uh, if we do that, we
00:19:15.919 --> 00:19:18.760
don't need this extra copy of the
00:19:17.159 --> 00:19:21.760
parameters so we can get away with about
00:19:18.760 --> 00:19:24.039
eight bytes per uh parameter we want to
00:19:21.760 --> 00:19:26.480
optimize but that's still you know a lot
00:19:24.039 --> 00:19:29.000
of memory; that's over 500 gigabytes of memory, uh,
00:19:26.480 --> 00:19:30.360
for a 65-billion-parameter model,
00:19:29.000 --> 00:19:32.960
and the forward and backward pass is
00:19:30.360 --> 00:19:35.120
still in play as well. So basically what
00:19:32.960 --> 00:19:38.159
I want to say is: full fine-tuning is, uh,
00:19:35.120 --> 00:19:42.400
pretty memory intensive
00:19:38.159 --> 00:19:47.480
and if we look at how big a standard GPU
00:19:42.400 --> 00:19:49.679
is I took some specs here the memory is
00:19:47.480 --> 00:19:53.039
uh the memory is just the memory on the
00:19:49.679 --> 00:19:55.840
GPU the cost I did a very unscientific
00:19:53.039 --> 00:19:58.280
thing of, uh, Googling the price on Amazon
00:19:55.840 --> 00:20:01.240
and and take a look at the price of the
00:19:58.280 --> 00:20:04.000
GPU here and then on the right side this
00:20:01.240 --> 00:20:08.000
is uh the types of cloud machines that
00:20:04.000 --> 00:20:09.880
support these gpus and in this class uh
00:20:08.000 --> 00:20:13.559
a lot of people are using Google Colab,
00:20:09.880 --> 00:20:15.640
I think for your uh for your current
00:20:13.559 --> 00:20:17.640
assignment and soon we'll have AWS
00:20:15.640 --> 00:20:20.080
credits for everybody so you can use AWS
00:20:17.640 --> 00:20:22.039
machines so if you look at the gpus that
00:20:20.080 --> 00:20:26.880
are available we have things everywhere
00:20:22.039 --> 00:20:29.799
from 24 gigabytes, uh, 32 gigabytes, 40
00:20:26.880 --> 00:20:33.520
to 80 gigabytes uh 48
00:20:29.799 --> 00:20:35.760
gigabytes. Um, or on your Mac, the GPU and CPU
00:20:33.520 --> 00:20:40.000
memory is shared
00:20:35.760 --> 00:20:42.720
and basically what we can see is that
00:20:40.000 --> 00:20:44.760
there's no GPU with 130 gigabytes of
00:20:42.720 --> 00:20:47.039
memory right so none of them can do this
00:20:44.760 --> 00:20:49.400
with a single
00:20:47.039 --> 00:20:52.000
GPU uh there's also a bunch of other
00:20:49.400 --> 00:20:54.960
hardware options like AMD GPUs, Google
00:20:52.000 --> 00:20:58.640
TPUs, special-purpose, uh, training hardware
00:20:54.960 --> 00:20:59.760
like Cerebras, AWS Trainium, etc., but I think for
00:20:58.640 --> 00:21:01.120
the purpose of this class you're
00:20:59.760 --> 00:21:04.520
probably going to use standard Hardware
00:21:01.120 --> 00:21:07.679
like this so anyway like that model will
00:21:04.520 --> 00:21:10.720
not fit on any, or that fine-tuning will
00:21:07.679 --> 00:21:15.000
not fit on any GPU that you have access
00:21:10.720 --> 00:21:15.000
to um any questions about
00:21:16.200 --> 00:21:19.200
this
00:21:21.360 --> 00:21:28.880
yeah so a lot of these are created
00:21:25.080 --> 00:21:30.360
specifically for training neural networks,
00:21:28.880 --> 00:21:32.799
so they're like really really good at
00:21:30.360 --> 00:21:37.360
the things you need to be training neural
00:21:32.799 --> 00:21:39.600
networks for um I haven't actually used
00:21:37.360 --> 00:21:43.000
any of these so I I can't like endorse
00:21:39.600 --> 00:21:44.120
or disendorse any of them, but they're made to
00:21:43.000 --> 00:21:46.640
be like really good at training
00:21:44.120 --> 00:21:48.320
Transformer language models, or, like, the
00:21:46.640 --> 00:21:50.960
specific thing that everybody wants to
00:21:48.320 --> 00:21:52.320
train uh the disadvantage is if you
00:21:50.960 --> 00:21:54.720
start wanting to be like a little bit
00:21:52.320 --> 00:21:57.840
more creative than you know what they
00:21:54.720 --> 00:22:00.159
imagined it might not support that so um
00:21:57.840 --> 00:22:02.200
then that's also a problem with tpus
00:22:00.159 --> 00:22:03.919
tpus are very good at certain things
00:22:02.200 --> 00:22:05.600
like they're very good at like batch
00:22:03.919 --> 00:22:08.480
large operations but they're less good
00:22:05.600 --> 00:22:10.679
at nimbly executing dynamic computation
00:22:08.480 --> 00:22:12.720
graphs and stuff so from that point of
00:22:10.679 --> 00:22:15.360
view I think most people in research
00:22:12.720 --> 00:22:15.360
still stick
00:22:15.679 --> 00:22:22.000
to GPUs. Um, one thing I should mention is
00:22:18.799 --> 00:22:25.000
the AMD GPUs. Uh, a lot of people have
00:22:22.000 --> 00:22:27.080
started using them in, like, 2023, 2024;
00:22:25.000 --> 00:22:28.480
like I think previously uh it was kind
00:22:27.080 --> 00:22:30.120
of an Nvidia
00:22:28.480 --> 00:22:32.880
one-horse race, but I've heard more and
00:22:30.120 --> 00:22:36.720
more people using AMDs, and they're
00:22:32.880 --> 00:22:39.919
not priced up quite as much. So,
00:22:36.720 --> 00:22:39.919
um any other
00:22:47.919 --> 00:22:53.279
questions? Um, so, training models, like, if
00:22:51.799 --> 00:22:57.240
they're training pre-training models
00:22:53.279 --> 00:23:01.960
they're using like a thousand, 2,000,
00:22:57.240 --> 00:23:05.279
4,000 or something. Um, like, Meta
00:23:01.960 --> 00:23:07.360
just announced that they
00:23:05.279 --> 00:23:12.480
got
00:23:07.360 --> 00:23:14.760
350,000 H100s or something like this, and
00:23:12.480 --> 00:23:18.360
in case you are too lazy to
00:23:14.760 --> 00:23:20.559
calculate, that's about, um, 10 to 20
00:23:18.360 --> 00:23:24.360
billion
00:23:20.559 --> 00:23:25.840
dollars. It's a lot of money. Um, and I'm sure
00:23:24.360 --> 00:23:28.159
not all of them are being used to train
00:23:25.840 --> 00:23:29.640
a model uh you know a lot of them are
00:23:28.159 --> 00:23:32.520
used for model serving and stuff like
00:23:29.640 --> 00:23:34.240
that. So, um, there's a reason why
00:23:32.520 --> 00:23:36.360
we're not all pre-training models right
00:23:34.240 --> 00:23:38.159
you know um it's a big it's a big effort
00:23:36.360 --> 00:23:43.640
it's very expensive
00:23:38.159 --> 00:23:43.640
so cool any other uh any other
00:23:44.320 --> 00:23:50.400
questions cool okay so how can we
00:23:48.240 --> 00:23:52.039
overcome this uh the first way we can
00:23:50.400 --> 00:23:53.919
overcome this is using things like
00:23:52.039 --> 00:23:56.919
multi-gpu
00:23:53.919 --> 00:23:59.279
training and uh one solution is just to
00:23:56.919 --> 00:24:02.600
throw more hardware at the models and
00:23:59.279 --> 00:24:06.159
distribute the models over multiple
00:24:02.600 --> 00:24:08.760
places and the canonical or the most
00:24:06.159 --> 00:24:10.400
well-known version of this that still
00:24:08.760 --> 00:24:12.159
many many people use when they're
00:24:10.400 --> 00:24:14.799
pre-training or fine-tuning language models
00:24:12.159 --> 00:24:16.679
is something called DeepSpeed ZeRO, and
00:24:14.799 --> 00:24:18.760
the way DeepSpeed ZeRO works is it
00:24:16.679 --> 00:24:19.720
works by partitioning optimization over
00:24:18.760 --> 00:24:22.559
different
00:24:19.720 --> 00:24:25.399
devices and
00:24:22.559 --> 00:24:28.640
so there's different stages of Deep
00:24:25.399 --> 00:24:31.799
Speed ZeRO. Uh, the first one is
00:24:28.640 --> 00:24:35.880
this one right here and this says 2 + 2
00:24:31.799 --> 00:24:39.399
+ K where K is the size of the optimizer
00:24:35.880 --> 00:24:41.919
state that I had here. So two, uh, two
00:24:39.399 --> 00:24:44.600
bytes, two bytes, plus all of the bytes
00:24:41.919 --> 00:24:44.600
required for
00:24:44.880 --> 00:24:49.360
this. And the blue is the first two, the
00:24:47.840 --> 00:24:50.720
orange is the second two, and the green
00:24:49.360 --> 00:24:54.279
is the third
00:24:50.720 --> 00:24:56.559
one. And so basically the baseline is you
00:24:54.279 --> 00:24:59.399
hold all of these on each
00:24:56.559 --> 00:25:01.320
GPU. The second thing is you
00:24:59.399 --> 00:25:03.279
partition the optimizer state across
00:25:01.320 --> 00:25:06.039
different GPUs, and because optimizer
00:25:03.279 --> 00:25:08.200
state is generally larger or at least as
00:25:06.039 --> 00:25:10.440
large as all of the others this can
00:25:08.200 --> 00:25:13.919
reduce memory requirements significantly
00:25:10.440 --> 00:25:16.000
so this, um, went from 120 gigabytes for
00:25:13.919 --> 00:25:19.240
whatever model they were doing there to
00:25:16.000 --> 00:25:22.799
31 gigabytes.
00:25:19.240 --> 00:25:26.600
um so this was a 7.5 billion parameter
00:25:22.799 --> 00:25:29.600
model, and they had, let's
00:25:26.600 --> 00:25:34.120
see, yeah, they had 64 devices, so they
00:25:29.600 --> 00:25:36.640
went down from 120 to 31 um this is with
00:25:34.120 --> 00:25:38.799
12 bytes for their optimizer state, like
00:25:36.640 --> 00:25:40.480
I said here um but actually we can get
00:25:38.799 --> 00:25:43.399
away with four bytes for the optimizer
00:25:40.480 --> 00:25:46.120
state. So actually you can train a seven-
00:25:43.399 --> 00:25:49.200
uh, billion-parameter model reasonably easily on
00:25:46.120 --> 00:25:52.360
you know one or two devices uh one or
00:25:49.200 --> 00:25:55.200
several devices now with
00:25:52.360 --> 00:25:57.159
this. So this is called stage one: this is
00:25:55.200 --> 00:26:00.320
partitioning the optimizer
00:25:57.159 --> 00:26:02.440
state. Stage two, this is
00:26:00.320 --> 00:26:04.640
partitioning the optimizer state and the
00:26:02.440 --> 00:26:06.880
gradients. Partitioning the optimizer state
00:26:04.640 --> 00:26:09.600
is
00:26:06.880 --> 00:26:13.679
actually relatively, like, harmless; it
00:26:09.600 --> 00:26:15.600
doesn't slow things down too much. Um, partitioning
00:26:13.679 --> 00:26:17.799
the gradients gets a little bit more
00:26:15.600 --> 00:26:20.520
tricky because you start having to uh
00:26:17.799 --> 00:26:22.320
move things between devices a lot and
00:26:20.520 --> 00:26:25.880
then uh if you do this for the
00:26:22.320 --> 00:26:28.159
parameters you can uh you can do even
00:26:25.880 --> 00:26:30.760
more so you can get it to like
00:26:28.159 --> 00:26:32.399
ridiculously small uh values here but
00:26:30.760 --> 00:26:35.279
this is going to be very expensive in
00:26:32.399 --> 00:26:37.919
terms of uh you know moving things
00:26:35.279 --> 00:26:41.360
around so that you can calculate your
00:26:37.919 --> 00:26:43.159
gradients. So I'd say that, by default,
00:26:41.360 --> 00:26:45.720
if you can, go to DeepSpeed with, like,
00:26:43.159 --> 00:26:48.799
stage one or stage two you can spread
00:26:45.720 --> 00:26:52.940
this out across different, uh, devices in training.
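(A sketch of the ZeRO paper's per-GPU accounting: 2 bytes per parameter for 16-bit weights, 2 for gradients, and K = 12 for mixed-precision Adam state, checked against its 7.5-billion-parameter, 64-GPU example:)

```python
def zero_mem_per_gpu(psi, n_gpus, k=12, stage=0):
    """Approximate bytes per GPU for a model with psi parameters."""
    weights, grads, opt = 2 * psi, 2 * psi, k * psi
    if stage >= 1:
        opt = opt / n_gpus           # stage 1: partition optimizer state
    if stage >= 2:
        grads = grads / n_gpus       # stage 2: also partition gradients
    if stage >= 3:
        weights = weights / n_gpus   # stage 3: also partition parameters
    return weights + grads + opt

for s in range(4):
    print(s, round(zero_mem_per_gpu(7.5e9, 64, stage=s) / 1e9, 1))
# 0 120.0   1 31.4   2 16.6   3 1.9   (gigabytes per GPU)
```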
00:26:48.799 --> 00:26:56.019
[Audience asks a question, partially inaudible]
00:27:02.520 --> 00:27:09.200
Does your central device... um, your
00:27:05.720 --> 00:27:10.640
central device can basically be your CPU
00:27:09.200 --> 00:27:13.520
When you say multi-device, sorry, do you
00:27:10.640 --> 00:27:16.520
mean multi-GPU, or do you mean
00:27:13.520 --> 00:27:16.520
multi
00:27:20.640 --> 00:27:26.240
okay able
00:27:23.640 --> 00:27:27.919
to... I don't think so. I mean, it
00:27:26.240 --> 00:27:30.320
depends on the implementation but not
00:27:27.919 --> 00:27:32.360
theoretically, anyway, and DeepSpeed
00:27:30.320 --> 00:27:34.640
does that for
00:27:32.360 --> 00:27:36.880
you. Yeah, otherwise you'd have lots
00:27:34.640 --> 00:27:40.159
of trouble like getting a machine that
00:27:36.880 --> 00:27:43.600
had you know a thousand gigabytes
00:27:40.159 --> 00:27:46.039
in it. Um, so yeah, I would suggest definitely
00:27:43.600 --> 00:27:48.720
using something like, uh, DeepSpeed. But
00:27:46.039 --> 00:27:51.080
actually a lot of uh a lot of libraries
00:27:48.720 --> 00:27:53.960
use DeepSpeed under the hood also, so
00:27:51.080 --> 00:27:56.720
things like, um, uh, Hugging Face
00:27:53.960 --> 00:27:59.720
Accelerate, GPT-NeoX, other things like
00:27:56.720 --> 00:28:01.039
this. They all, uh, interface, many of
00:27:59.720 --> 00:28:03.760
them interface to DeepSpeed or
00:28:01.039 --> 00:28:06.039
something similar to it so uh whatever
00:28:03.760 --> 00:28:09.080
Library you're using for uh training
00:28:06.039 --> 00:28:12.000
like this, you can do it. I don't have a
00:28:09.080 --> 00:28:14.640
list but there's a whole bunch of them
00:28:12.000 --> 00:28:16.960
you can either use DeepSpeed, uh, or
00:28:14.640 --> 00:28:20.480
things like Hugging Face Accelerate, TRL.
00:28:16.960 --> 00:28:23.640
I think we might have a TRL um uh
00:28:20.480 --> 00:28:26.000
recitation later uh also I haven't used
00:28:23.640 --> 00:28:28.960
it myself or worked with people who used
00:28:26.000 --> 00:28:31.120
it, but Axolotl, a lot of people are using
00:28:28.960 --> 00:28:33.799
it. Um, so, uh, maybe we could come up with
00:28:31.120 --> 00:28:33.799
a list of those
00:28:37.480 --> 00:28:43.039
later
00:28:39.760 --> 00:28:44.799
so the other option that you can use is
00:28:43.039 --> 00:28:48.399
don't tune all of the parameters of the
00:28:44.799 --> 00:28:51.399
model but just some of them and this is
00:28:48.399 --> 00:28:54.039
really popular nowadays because this
00:28:51.399 --> 00:28:57.799
further improves your ability to train
00:28:54.039 --> 00:29:01.240
on many different uh you know uh data
00:28:57.799 --> 00:29:03.120
sets without huge uh gpus or without
00:29:01.240 --> 00:29:06.919
many many GPU
00:29:03.120 --> 00:29:08.519
devices and so the first one is
00:29:06.919 --> 00:29:10.399
something like prefix tuning so I
00:29:08.519 --> 00:29:12.240
already talked about this last time
00:29:10.399 --> 00:29:13.679
prefix tuning is like a bridge between
00:29:12.240 --> 00:29:17.480
parameter-efficient fine-tuning and
00:29:13.679 --> 00:29:21.640
prompting, right? So it tunes
00:29:17.480 --> 00:29:21.640
one prefix for each of the
00:29:22.799 --> 00:29:28.480
layers so the next one that I'd like to
00:29:25.320 --> 00:29:32.840
talk about is adapters and adapters
00:29:28.480 --> 00:29:37.559
basically look like this so what you do
00:29:32.840 --> 00:29:40.000
is you have your standard Transformer
00:29:37.559 --> 00:29:41.440
architecture uh which um has you know
00:29:40.000 --> 00:29:47.480
like multi-headed
00:29:41.440 --> 00:29:47.480
attention, um, and other things like this.
00:29:47.760 --> 00:29:51.360
and yeah this is written in a slightly
00:29:50.159 --> 00:29:53.200
different way than I wrote the
00:29:51.360 --> 00:29:56.200
Transformer diagram but it's saying the
00:29:53.200 --> 00:29:59.960
same things so multi-headed attention
00:29:56.200 --> 00:30:03.399
this is, uh, kind of your Q, K,
00:29:59.960 --> 00:30:06.679
and V matrices, and then this is your O
00:30:03.399 --> 00:30:09.240
matrix, um, in the Transformer
00:30:06.679 --> 00:30:10.600
architecture so this is what we were
00:30:09.240 --> 00:30:13.279
calling multi-head attention in the
00:30:10.600 --> 00:30:15.440
previous diagram this says 2x feed
00:30:13.279 --> 00:30:17.960
forward layer it's basically 2x linear
00:30:15.440 --> 00:30:21.039
layer with a sandwiched nonlinearity so
00:30:17.960 --> 00:30:25.000
it's basically a feed-forward block. So
00:30:21.039 --> 00:30:27.679
this is just the standard um the
00:30:25.000 --> 00:30:30.039
standard like Transformer so what
00:30:27.679 --> 00:30:33.600
adapters do is they add yet another
00:30:30.039 --> 00:30:35.000
layer right here and you freeze the
00:30:33.600 --> 00:30:37.000
things that are in gray here, like the
00:30:35.000 --> 00:30:40.000
feed forward layer feed forward layer
00:30:37.000 --> 00:30:41.200
multi-headed attention but only train
00:30:40.000 --> 00:30:44.399
this
00:30:41.200 --> 00:30:46.760
adapter and the way the adapter works is
00:30:44.399 --> 00:30:49.880
you have a standard
00:30:46.760 --> 00:30:52.760
large representation Vector here and you
00:30:49.880 --> 00:30:55.000
have a feed forward down projection that
00:30:52.760 --> 00:30:58.000
down projects to a very small number of
00:30:55.000 --> 00:30:59.679
nodes here, and then you have a nonlinearity,
00:30:58.000 --> 00:31:02.000
and then you have a feed forward up
00:30:59.679 --> 00:31:04.679
projection that projects it back to the
00:31:02.000 --> 00:31:08.720
standard space and this
00:31:04.679 --> 00:31:13.840
is uh included within the uh residual
00:31:08.720 --> 00:31:13.840
layer here and so
00:31:14.440 --> 00:31:21.320
ideally this will project down from, like,
00:31:17.519 --> 00:31:21.320
512 to something like
00:31:23.559 --> 00:31:31.519
16, and then back up to 512.
00:31:27.919 --> 00:31:35.679
So if it was just a 512 by 512 matrix,
00:31:31.519 --> 00:31:36.720
that would be 2 to the 9 times 2 to the 9, right, so you get
00:31:35.679 --> 00:31:41.399
two to
00:31:36.720 --> 00:31:41.399
the 18 parameters.
00:31:49.200 --> 00:31:59.159
Yeah, whereas, um, this is only 2 to the
00:31:56.159 --> 00:31:59.159
4,
00:31:59.440 --> 00:32:08.360
so if you have this, that would be 2 to the 9
00:32:03.200 --> 00:32:13.720
plus 4 plus 1, which is 2 to the
00:32:08.360 --> 00:32:15.720
14. Um, so you would have 16 times fewer
00:32:13.720 --> 00:32:17.799
parameters for the adapters than you
00:32:15.720 --> 00:32:21.200
would have for the, uh, full
00:32:17.799 --> 00:32:24.080
matrix. And then if we, instead of
00:32:21.200 --> 00:32:25.760
using 16 we just did two or one or
00:32:24.080 --> 00:32:30.000
something like that it would be you know
00:32:25.760 --> 00:32:33.519
much much less so basically uh by making
00:32:30.000 --> 00:32:34.600
these matrices, or these
00:32:33.519 --> 00:32:38.840
vectors
00:32:34.600 --> 00:32:44.360
very, um, very skinny, this allows us to
00:32:38.840 --> 00:32:44.360
minimize the additional parameters.
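(A minimal sketch of one adapter block, matching the 512 to 16 to 512 example. Names and details are illustrative; the published version makes additional choices, like where to place layer norm.)

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model=512, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # feed-forward down-projection
        self.up = nn.Linear(bottleneck, d_model)    # feed-forward up-projection

    def forward(self, h):
        # Down-project, nonlinearity, up-project, inside a residual connection.
        return h + self.up(torch.relu(self.down(h)))

adapter = Adapter()
print(sum(p.numel() for p in adapter.parameters()))
# 16912: about 2**14 weights plus biases, versus 2**18 for a full 512x512 matrix.
```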
00:32:47.080 --> 00:32:52.159
So are there any, uh, any
00:32:49.519 --> 00:32:52.159
questions about
00:32:52.519 --> 00:32:55.519
this
00:32:56.039 --> 00:32:59.039
yeah
00:33:02.200 --> 00:33:05.440
yeah so why do they make it smaller and
00:33:03.919 --> 00:33:06.880
then larger the main reason why they
00:33:05.440 --> 00:33:08.159
make it smaller and then larger is
00:33:06.880 --> 00:33:09.799
because that's a way to reduce the
00:33:08.159 --> 00:33:12.480
parameter count so if they kept it the
00:33:09.799 --> 00:33:14.320
same size um if they kept it the same
00:33:12.480 --> 00:33:17.159
size, it would be 2 to the 18, but you would
00:33:14.320 --> 00:33:18.799
actually have two of them uh you would
00:33:17.159 --> 00:33:21.639
have two of them so you'd have even more
00:33:18.799 --> 00:33:21.639
parameters
00:33:24.399 --> 00:33:30.399
But would it hurt the performance? Uh,
00:33:28.720 --> 00:33:31.919
so making them smaller would hurt the
00:33:30.399 --> 00:33:34.440
performance if you had lots and lots of
00:33:31.919 --> 00:33:36.320
training data so if you have lots and
00:33:34.440 --> 00:33:39.000
lots of training data you would benefit
00:33:36.320 --> 00:33:41.440
by making the adapter Dimension larger
00:33:39.000 --> 00:33:43.279
and uh just you know fitting fitting
00:33:41.440 --> 00:33:45.080
very well but if you have lots and lots
00:33:43.279 --> 00:33:47.919
of training data and you have the memory
00:33:45.080 --> 00:33:49.080
that allows you to train a larger model
00:33:47.919 --> 00:33:50.440
then you might as well just train the
00:33:49.080 --> 00:33:51.679
whole model itself you might as well do
00:33:50.440 --> 00:33:53.960
full fine
00:33:51.679 --> 00:33:56.200
tuning. There's two advantages to
00:33:53.960 --> 00:33:58.120
parameter-efficient, uh, fine-tuning
00:33:56.200 --> 00:34:00.279
methods uh the first one is that they
00:33:58.120 --> 00:34:01.960
reduce memory like I mentioned here
00:34:00.279 --> 00:34:03.799
reduce the memory for the parameters
00:34:01.960 --> 00:34:05.679
you're training also because there's
00:34:03.799 --> 00:34:07.320
fewer parameters it's harder to like
00:34:05.679 --> 00:34:08.960
overfit so if you have very small
00:34:07.320 --> 00:34:12.320
training data full fine tuning can
00:34:08.960 --> 00:34:14.399
overfit and become unstable but because
00:34:12.320 --> 00:34:18.000
this has fewer parameters it
00:34:14.399 --> 00:34:21.040
essentially is less easy to overfit and
00:34:18.000 --> 00:34:21.040
will generalize better
00:34:24.599 --> 00:34:29.359
often. So when you fine-tune, you only
00:34:27.159 --> 00:34:30.679
fine-tune the parameters of the adapters
00:34:29.359 --> 00:34:32.440
and so we assume that we have a
00:34:30.679 --> 00:34:34.200
pre-trained model like the gray parts
00:34:32.440 --> 00:34:36.480
are pre-trained and then we fine tune
00:34:34.200 --> 00:34:36.480
just
00:34:37.960 --> 00:34:40.960
that
00:34:43.720 --> 00:34:51.760
Okay, so, very good
00:34:48.040 --> 00:34:51.760
question.
00:34:53.760 --> 00:34:59.280
So the question was: even
00:34:57.880 --> 00:35:00.760
though we are only fine tuning the
00:34:59.280 --> 00:35:02.320
adapter layers we still need to store
00:35:00.760 --> 00:35:04.760
the gradients of the other layers right
00:35:02.320 --> 00:35:09.480
so we still need to store this part
00:35:04.760 --> 00:35:13.320
that's actually not the case um
00:35:09.480 --> 00:35:15.000
so when you are doing backprop, you
00:35:13.320 --> 00:35:18.680
only need to do backprop into the
00:35:15.000 --> 00:35:20.839
parts of the model that are on the path
00:35:18.680 --> 00:35:23.240
to the gradients that you want to be
00:35:20.839 --> 00:35:25.800
updated so like for
00:35:23.240 --> 00:35:28.760
example if I
00:35:25.800 --> 00:35:32.160
write
00:35:28.760 --> 00:35:32.160
if I write the computation
00:35:55.800 --> 00:35:58.800
graph
00:36:22.599 --> 00:36:28.240
so this is like the computation graph of
00:36:25.720 --> 00:36:32.240
a
00:36:28.240 --> 00:36:34.319
um, an attention block. So we get our loss,
00:36:32.240 --> 00:36:36.400
like the gradient from the loss is
00:36:34.319 --> 00:36:41.160
flowing in
00:36:36.400 --> 00:36:44.000
here and so it goes
00:36:41.160 --> 00:36:47.640
back to the feed-forward network, to the
00:36:44.000 --> 00:36:49.200
adapter to the attention and then here
00:36:47.640 --> 00:36:51.119
so we definitely need to pass it back
00:36:49.200 --> 00:36:53.880
through the layers so we get to you know
00:36:51.119 --> 00:36:56.160
like the earlier layers and stuff we
00:36:53.880 --> 00:36:57.720
don't actually need to pass it into this
00:36:56.160 --> 00:36:59.400
into the weights of the attention
00:36:57.720 --> 00:37:01.280
because we're not we're not updating
00:36:59.400 --> 00:37:02.520
them so we don't really need to even
00:37:01.280 --> 00:37:04.640
calculate the gradients of the weights
00:37:02.520 --> 00:37:07.800
of the attention we also don't need to
00:37:04.640 --> 00:37:09.160
calculate the gradient of this um but we
00:37:07.800 --> 00:37:11.280
do need to calculate the gradient of
00:37:09.160 --> 00:37:14.240
this because we're updating it so
00:37:11.280 --> 00:37:15.839
basically um you don't even need to do
00:37:14.240 --> 00:37:19.800
backprop in the parts that you can just
00:37:15.839 --> 00:37:21.560
cut off without updating. Yeah, so forward
00:37:19.800 --> 00:37:23.200
you do need to you know use them
00:37:21.560 --> 00:37:25.440
obviously to calculate the forward path
00:37:23.200 --> 00:37:27.560
so by like being smart about that you
00:37:25.440 --> 00:37:31.119
can fix that there's also something
00:37:27.560 --> 00:37:33.319
called um uh checkpointing like
00:37:31.119 --> 00:37:34.920
computation graph checkpointing or
00:37:33.319 --> 00:37:36.720
forward pass or backward pass
00:37:34.920 --> 00:37:38.640
checkpointing where basically what you
00:37:36.720 --> 00:37:40.040
do is you calculate part part of the way
00:37:38.640 --> 00:37:41.359
through the graph and then throw out the
00:37:40.040 --> 00:37:45.280
intermediate
00:37:41.359 --> 00:37:47.760
calculation um and so for example you
00:37:45.280 --> 00:37:50.000
might you might do the forward pass all
00:37:47.760 --> 00:37:52.240
the way up to here and then throw out
00:37:50.000 --> 00:37:53.720
all the intermediate States and then
00:37:52.240 --> 00:37:55.400
recalculate them when you're doing the
00:37:53.720 --> 00:37:57.240
backward pass and so there's like lots
00:37:55.400 --> 00:37:58.920
of tricky things that we can do
00:37:57.240 --> 00:38:01.920
like, squeeze your memory, you
00:37:58.920 --> 00:38:01.920
know.
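(A sketch of the freezing logic just described, reusing the hypothetical Adapter class from above: once a layer's parameters have requires_grad set to False, autograd never computes or stores weight gradients for it. For the checkpointing trick, torch.utils.checkpoint.checkpoint is the corresponding PyTorch utility that recomputes intermediate activations during the backward pass.)

```python
import torch
import torch.nn as nn

backbone = nn.Linear(512, 512)       # stands in for the pretrained, frozen layers
for p in backbone.parameters():
    p.requires_grad = False          # frozen: no weight gradients computed or stored

model = nn.Sequential(backbone, Adapter())
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

loss = model(torch.randn(8, 512)).sum()
loss.backward()                      # gradients exist only for the adapter
opt.step()
```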
00:38:02.839 --> 00:38:10.200
Yeah, how? Uh, great question.
00:38:06.079 --> 00:38:12.079
Um, do I have that on the slide? Maybe
00:38:10.200 --> 00:38:17.119
not
00:38:12.079 --> 00:38:19.599
Um, so one way that you can do it, this is
00:38:17.119 --> 00:38:22.960
from LoRA, but the same idea is
00:38:19.599 --> 00:38:25.960
basically there. So in LoRA you do the
00:38:22.960 --> 00:38:28.920
upscaling with a zero matrix, you
00:38:25.960 --> 00:38:30.599
initialize it to a zero matrix, and the
00:38:28.920 --> 00:38:34.680
downscaling you can initialize it to
00:38:30.599 --> 00:38:38.000
zero or, like, some random... no,
00:38:34.680 --> 00:38:40.000
actually, this needs to be random. Uh, and
00:38:38.000 --> 00:38:42.480
so the reason why this is zero is
00:38:40.000 --> 00:38:46.839
because then if you don't do anything it
00:38:42.480 --> 00:38:51.119
will just stay the same right so uh so
00:38:46.839 --> 00:38:51.119
that is, uh, the standard
00:38:52.839 --> 00:38:57.440
way
00:38:54.520 --> 00:38:58.960
cool okay so um another thing that I
00:38:57.440 --> 00:39:00.839
want to mention this is a kind of
00:38:58.960 --> 00:39:03.800
interesting technique it's not super
00:39:00.839 --> 00:39:06.359
standard but I I like it so I'm going to
00:39:03.800 --> 00:39:08.880
uh going to talk about it anyway this is
00:39:06.359 --> 00:39:10.760
something called adapter fusion, and the
00:39:08.880 --> 00:39:13.240
basic idea is to learn an adapter for
00:39:10.760 --> 00:39:16.040
various tasks and combine them
00:39:13.240 --> 00:39:17.880
together and so instead of having just
00:39:16.040 --> 00:39:19.400
your adapter layer you have multiple
00:39:17.880 --> 00:39:20.880
adapters and then you have adapter
00:39:19.400 --> 00:39:22.400
Fusion up
00:39:20.880 --> 00:39:26.680
here
00:39:22.400 --> 00:39:28.000
and the basic idea is uh an adapter is
00:39:26.680 --> 00:39:30.560
just you know what I wrote on the
00:39:28.000 --> 00:39:33.599
previous slide, but adapter fusion is
00:39:30.560 --> 00:39:36.000
attention over adapters so you can
00:39:33.599 --> 00:39:39.720
decide which adapter to use in which
00:39:36.000 --> 00:39:42.160
case and each of the adapters is trained
00:39:39.720 --> 00:39:44.800
separately on like task specific data so
00:39:42.160 --> 00:39:47.200
you have uh data from lots of question
00:39:44.800 --> 00:39:49.119
answering data sets and you train a
00:39:47.200 --> 00:39:50.640
question answering adapter you have data
00:39:49.119 --> 00:39:53.160
from
00:39:50.640 --> 00:39:54.880
uh I don't know translation data sets
00:39:53.160 --> 00:39:57.560
and you train a translation adapter you
00:39:54.880 --> 00:40:00.440
have uh other things like that
00:39:57.560 --> 00:40:03.920
and so then when you actually use them
00:40:00.440 --> 00:40:06.400
you do attention over which adapter to
00:40:03.920 --> 00:40:08.880
use and then uh take the value from that
00:40:06.400 --> 00:40:10.520
adapter. And I kind of like this idea
00:40:08.880 --> 00:40:12.560
because it allows you to you know train
00:40:10.520 --> 00:40:15.200
modules that are useful for a particular
00:40:12.560 --> 00:40:17.680
task and then decide which one to use at
00:40:15.200 --> 00:40:19.319
any particular point so uh I think
00:40:17.680 --> 00:40:22.040
there's lots of creative things that we
00:40:19.319 --> 00:40:24.599
could do with this there's also um
00:40:22.040 --> 00:40:26.560
multilingual versions so you train
00:40:24.599 --> 00:40:28.520
adapters for individual languages and
00:40:26.560 --> 00:40:30.119
you train adapters for individual tasks
00:40:28.520 --> 00:40:32.200
and then you combine them together too
00:40:30.119 --> 00:40:34.079
so if that's interesting you can take a
00:40:32.200 --> 00:40:36.319
look at that
00:40:34.079 --> 00:40:37.960
paper in a way this is kind of like a
00:40:36.319 --> 00:40:39.200
mixture of experts model if you've heard
00:40:37.960 --> 00:40:40.599
of that we're going to talk about that
00:40:39.200 --> 00:40:42.760
in a future class so I won't go into
00:40:40.599 --> 00:40:45.760
lots of detail but um I wanted to talk
00:40:42.760 --> 00:40:48.079
about it here before we talk more about
00:40:45.760 --> 00:40:52.160
this
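A sketch of the attention-over-adapters idea (this is illustrative, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attend over the outputs of several task-trained adapters and mix them."""
    def __init__(self, d_model, adapters):
        super().__init__()
        self.adapters = nn.ModuleList(adapters)   # each maps d_model -> d_model
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, h):
        # Each separately-trained adapter proposes an output:
        # stack to (batch, n_adapters, d_model).
        outs = torch.stack([a(h) for a in self.adapters], dim=1)
        q = self.q(h).unsqueeze(1)                # (batch, 1, d)
        k, v = self.k(outs), self.v(outs)         # (batch, n, d)
        # Attention weights decide which adapter(s) to use for this input.
        att = torch.softmax((q * k).sum(-1) / k.size(-1) ** 0.5, dim=-1)
        return (att.unsqueeze(-1) * v).sum(dim=1)  # weighted mix of adapters
```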
00:40:48.079 --> 00:40:54.480
cool okay so now I want to go into
00:40:52.160 --> 00:40:56.440
talking about LoRA and LoRA is very
00:40:54.480 --> 00:40:57.560
popular it's very likely that you've
00:40:56.440 --> 00:41:02.000
heard of it
00:40:57.560 --> 00:41:03.960
nowadays um the way LoRA works is very
00:41:02.000 --> 00:41:05.800
similar conceptually to adapters but it
00:41:03.960 --> 00:41:09.000
has an important implementation
00:41:05.800 --> 00:41:14.680
difference and the difference is
00:41:09.000 --> 00:41:17.560
that in contrast to adapters which had
00:41:14.680 --> 00:41:20.720
a um in contrast to adapters which had a
00:41:17.560 --> 00:41:23.599
nonlinear layer here LoRA has no
00:41:20.720 --> 00:41:27.000
nonlinear layer so basically what it is
00:41:23.599 --> 00:41:29.560
doing is it is uh taking
00:41:27.000 --> 00:41:32.880
a downscale matrix and an upscale uh
00:41:29.560 --> 00:41:36.440
downscale matrix and an upscale matrix and
00:41:32.880 --> 00:41:38.319
just doing a linear transform with them
00:41:36.440 --> 00:41:42.560
and
00:41:38.319 --> 00:41:44.560
so in this graph or in this figure here
00:41:42.560 --> 00:41:46.520
which I took from the LoRA paper it's
00:41:44.560 --> 00:41:48.480
actually showing them as like separate
00:41:46.520 --> 00:41:50.040
computation paths it's showing like you
00:41:48.480 --> 00:41:54.119
use a normal matrix and then you use the
00:41:50.040 --> 00:41:56.079
LoRA matrix separately but actually um
00:41:54.119 --> 00:41:59.240
you can just add them together and you
00:41:56.079 --> 00:42:01.200
get the equivalent result so you add
00:41:59.240 --> 00:42:04.319
this matrix times this matrix into the
00:42:01.200 --> 00:42:05.960
pre-trained weights and that gives you the
00:42:04.319 --> 00:42:07.960
same result as if you calculated them
00:42:05.960 --> 00:42:12.599
separately and then added them
00:42:07.960 --> 00:42:14.319
afterwards so why is LoRA so popular uh
00:42:12.599 --> 00:42:16.599
I would say LoRA is so popular because
00:42:14.319 --> 00:42:18.760
it's super convenient after you finished
00:42:16.599 --> 00:42:19.920
training with LoRA because after you
00:42:18.760 --> 00:42:22.680
finished training with LoRA you can
00:42:19.920 --> 00:42:25.040
just add the learned matrices back
00:42:22.680 --> 00:42:26.440
into the original weight Matrix and you
00:42:25.040 --> 00:42:27.800
have a model that's exactly the same
00:42:26.440 --> 00:42:29.280
shape it doesn't have any other
00:42:27.800 --> 00:42:31.839
components you don't need any different
00:42:29.280 --> 00:42:34.760
code path you just have updated
00:42:31.839 --> 00:42:36.640
parameters and that contrasts to
00:42:34.760 --> 00:42:38.359
adapters because in adapters you
00:42:36.640 --> 00:42:39.760
actually need to add extra model
00:42:38.359 --> 00:42:43.599
components you have to have different
00:42:39.760 --> 00:42:46.160
PyTorch code to implement this so um I
00:42:43.599 --> 00:42:48.359
think that's the big reason why LoRA is
00:42:46.160 --> 00:42:48.359
so
00:42:48.880 --> 00:42:53.920
popular it's not actually that complicated
00:42:51.359 --> 00:42:55.160
it's pretty simple but um it's important
00:42:53.920 --> 00:42:56.960
to
00:42:55.160 --> 00:42:58.800
know
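The merge he's describing is one line of linear algebra (a sketch; names are illustrative):

```python
import torch

def merge_lora(W, A, B, scale=1.0):
    """Fold a trained LoRA pair back into the pretrained weight.
    W: (d_out, d_in) pretrained matrix, frozen during training.
    A: (r, d_in) downscale matrix, B: (d_out, r) upscale matrix.
    After merging, the model has exactly its original shape and code path."""
    return W + scale * (B @ A)

# Equivalence: (W + scale * B @ A) @ x == W @ x + scale * B @ (A @ x),
# so computing the two paths separately and adding gives the same result.
```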
00:42:56.960 --> 00:43:02.160
cool
00:42:58.800 --> 00:43:05.839
um so another popular thing uh that you
00:43:02.160 --> 00:43:07.359
might have heard of is QLoRA and QLoRA
00:43:05.839 --> 00:43:10.440
combines together
00:43:07.359 --> 00:43:11.760
quantization um with parameter efficient
00:43:10.440 --> 00:43:13.480
tuning and we're going to talk a lot
00:43:11.760 --> 00:43:17.040
more about quantization in a future
00:43:13.480 --> 00:43:18.760
class in maybe a week or so but
00:43:17.040 --> 00:43:21.720
basically there are ways to compress the
00:43:18.760 --> 00:43:25.640
model down to not be in like 16 bits but
00:43:21.720 --> 00:43:27.319
be in like four bits and um so if each
00:43:25.640 --> 00:43:31.720
parameter is in four bits that makes the
00:43:27.319 --> 00:43:31.720
model very very compact and
00:43:32.240 --> 00:43:40.240
so if we go back to our calculation in
00:43:35.599 --> 00:43:44.640
this previous slide uh if we
00:43:40.240 --> 00:43:48.000
had if we had a 16-bit model to fit
00:43:44.640 --> 00:43:49.839
Llama uh in your memory you needed 130
00:43:48.000 --> 00:43:54.160
gigabytes but like let's say we have a
00:43:49.839 --> 00:43:56.880
4-bit model suddenly it's not 130 it's uh
00:43:54.160 --> 00:44:00.559
something closer to 32 and a half I
00:43:56.880 --> 00:44:03.880
guess and 32 and a half
00:44:00.559 --> 00:44:07.960
actually fits on a lot of hardware it
00:44:03.880 --> 00:44:12.119
fits on A100s or H100s easily it also
00:44:07.960 --> 00:44:16.119
fits on these like less expensive GPUs I
00:44:12.119 --> 00:44:17.599
mean less expensive might be you know
00:44:16.119 --> 00:44:19.559
relative it's still very expensive but
00:44:17.599 --> 00:44:21.559
it'll also fit on your Mac probably if you
00:44:19.559 --> 00:44:22.960
have a Mac with a fair amount of memory
00:44:21.559 --> 00:44:27.480
so you could just run it on a local
00:44:22.960 --> 00:44:32.559
machine in your CPU memory also
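The arithmetic there, made explicit (a quick back-of-the-envelope sketch; 65 billion parameters is the Llama size being discussed):

```python
params = 65e9                      # Llama 65B, roughly

bytes_fp16 = params * 2            # 16 bits = 2 bytes per parameter
bytes_4bit = params * 0.5          # 4 bits  = 0.5 bytes per parameter

print(bytes_fp16 / 1e9)            # ~130 GB: needs multiple large GPUs
print(bytes_4bit / 1e9)            # ~32.5 GB: fits on one A100/H100
```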
00:44:27.480 --> 00:44:34.559
um so basically the idea is we compress
00:44:32.559 --> 00:44:36.720
down the model to be much smaller so the
00:44:34.559 --> 00:44:41.559
forward and backward um so the
00:44:36.720 --> 00:44:45.000
parameters are small and then we have a
00:44:41.559 --> 00:44:47.000
very very compact LoRA layer
00:44:45.000 --> 00:44:48.280
which doesn't take very
00:44:47.000 --> 00:44:51.079
much memory
00:44:48.280 --> 00:44:53.480
itself and that allows us to basically
00:44:51.079 --> 00:44:58.280
train a model on you know commodity
00:44:53.480 --> 00:45:00.119
hardware like a 48 gigabyte uh GPU or
00:44:58.280 --> 00:45:02.599
uh something like your your MacBook or
00:45:00.119 --> 00:45:05.880
something like that and it it also has
00:45:02.599 --> 00:45:07.400
uh like paging to page things from CPU
00:45:05.880 --> 00:45:10.760
to GPU memory to make it even more
00:45:07.400 --> 00:45:12.359
efficient but uh basically that's the
00:45:10.760 --> 00:45:15.880
general
00:45:12.359 --> 00:45:18.000
idea so um I definitely if you want to
00:45:15.880 --> 00:45:19.520
train a large model on limited hardware
00:45:18.000 --> 00:45:21.480
I'd recommend this if you're not
00:45:19.520 --> 00:45:23.880
training a super large model like 65
00:45:21.480 --> 00:45:25.960
B I think just LoRA should be fine
00:45:23.880 --> 00:45:28.319
like you can probably train a 7B model
00:45:25.960 --> 00:45:31.400
or a 1B model with just LoRA and that
00:45:28.319 --> 00:45:36.000
should be doable on a single GPU
00:45:31.400 --> 00:45:36.000
cool uh any questions about
00:45:41.079 --> 00:45:48.000
this does low precision not cause any
00:45:43.680 --> 00:45:49.559
problems it definitely is something you need to be
00:45:48.000 --> 00:45:51.680
a little bit concerned about it but
00:45:49.559 --> 00:45:53.440
you're not doing optimization in low
00:45:51.680 --> 00:45:55.359
precision you're just keeping the
00:45:53.440 --> 00:45:59.040
original model in low precision so from
00:45:55.359 --> 00:46:01.119
that point of view it's you know it's
00:45:59.040 --> 00:46:03.599
manageable I guess
00:46:01.119 --> 00:46:06.880
so and you can also look at the QLoRA
00:46:03.599 --> 00:46:08.400
paper they have very extensive
00:46:06.880 --> 00:46:10.680
experiments
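As a sketch of how people typically run this with the Hugging Face stack (the model name and LoRA hyperparameters here are illustrative; see the QLoRA paper and repo for the exact recipe):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model in 4-bit (NF4) precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # illustrative choice
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA matrices on top of the quantized weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the LoRA params train
```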
00:46:08.400 --> 00:46:14.040
cool um a final one that I'd like to
00:46:10.680 --> 00:46:15.880
talk about is BitFit um this is very
00:46:14.040 --> 00:46:17.680
very simple you basically just train the
00:46:15.880 --> 00:46:22.440
biases of the model for any model that
00:46:17.680 --> 00:46:24.520
has biases uh this also can fit uh
00:46:22.440 --> 00:46:26.119
models it's very simple because you
00:46:24.520 --> 00:46:28.359
don't even need to change you don't need
00:46:26.119 --> 00:46:30.000
to add any extra code uh you just need
00:46:28.359 --> 00:46:33.520
to freeze all the parameters except the
00:46:30.000 --> 00:46:36.160
biases so from that point of view it's
00:46:33.520 --> 00:46:38.559
very
00:46:36.160 --> 00:46:40.520
easy
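In code, BitFit really is that small (a sketch assuming a PyTorch model whose bias parameters follow the usual naming convention):

```python
def apply_bitfit(model):
    # Freeze everything except bias terms; checking the parameter name
    # for "bias" is the conventional PyTorch naming heuristic.
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
```

so I talked about this a little bit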
00:46:38.559 --> 00:46:41.319
last time but I think not everybody
00:46:40.520 --> 00:46:43.400
had a
00:46:41.319 --> 00:46:44.960
full understanding of all the parameter
00:46:43.400 --> 00:46:48.760
efficient tuning methods to understand
00:46:44.960 --> 00:46:50.839
this well um but we had a paper where we
00:46:48.760 --> 00:46:52.559
basically looked at all of these tuning
00:46:50.839 --> 00:46:56.280
methods and we kind of decomposed them
00:46:52.559 --> 00:46:59.240
into several different design components
00:46:56.280 --> 00:47:01.839
and actually um maybe I'll
00:46:59.240 --> 00:47:04.319
also pull up
00:47:01.839 --> 00:47:07.440
the table that we have of this that
00:47:04.319 --> 00:47:07.440
might be even easier to
00:47:14.079 --> 00:47:17.079
follow
00:47:20.839 --> 00:47:25.800
so basically there's different
00:47:23.599 --> 00:47:27.960
things that you can look at with respect
00:47:25.800 --> 00:47:30.160
to parameter efficient tuning methods
00:47:27.960 --> 00:47:33.000
there's the functional form of the
00:47:30.160 --> 00:47:36.680
nonlinearity that you're using there's
00:47:33.000 --> 00:47:38.280
the place where you insert the module
00:47:36.680 --> 00:47:39.760
there's how you modify the
00:47:38.280 --> 00:47:41.200
representation and then there's a
00:47:39.760 --> 00:47:42.880
composition function for how you take
00:47:41.200 --> 00:47:44.559
the modified representation and add it
00:47:42.880 --> 00:47:48.040
into the original
00:47:44.559 --> 00:47:49.800
representation so if you if you want to
00:47:48.040 --> 00:47:52.559
take a look at the table you can take a
00:47:49.800 --> 00:47:55.319
look at this it's also in the references
00:47:52.559 --> 00:47:56.359
but basically what we can find is that
00:47:55.319 --> 00:47:59.680
things like
00:47:56.359 --> 00:48:01.800
adapters uh LoRA and prefix tuning are
00:47:59.680 --> 00:48:04.280
actually very uh very similar to each
00:48:01.800 --> 00:48:07.119
other but the difference being where do
00:48:04.280 --> 00:48:09.079
you get the original representation that
00:48:07.119 --> 00:48:11.839
you're feeding in so adapters generally
00:48:09.079 --> 00:48:15.040
get it from after the module that
00:48:11.839 --> 00:48:17.160
you're uh adapting prefix tuning gets it
00:48:15.040 --> 00:48:19.800
from before LoRA also gets it from
00:48:17.160 --> 00:48:23.559
before also what's the nonlinearity it's a
00:48:19.800 --> 00:48:25.440
ReLU a softmax or nothing um LoRA
00:48:23.559 --> 00:48:27.599
actually this isn't really mentioned in
00:48:25.440 --> 00:48:29.200
the paper but it is uh like actually
00:48:27.599 --> 00:48:31.920
implemented in the code there's also a
00:48:29.200 --> 00:48:33.680
scalar scaling factor here uh which is a
00:48:31.920 --> 00:48:36.280
hyperparameter so that's something to
00:48:33.680 --> 00:48:37.640
be aware of um and so basically by
00:48:36.280 --> 00:48:40.079
breaking these down you can number one
00:48:37.640 --> 00:48:42.359
better understand each of the uh modules
00:48:40.079 --> 00:48:44.280
and how they or each of the methods and
00:48:42.359 --> 00:48:47.200
how they interact with each
00:48:44.280 --> 00:48:48.760
other and also uh what we show in this
00:48:47.200 --> 00:48:51.680
paper is that this understanding can
00:48:48.760 --> 00:48:53.119
lead you to you know new variants that
00:48:51.680 --> 00:48:56.400
can be more effective than any of the
00:48:53.119 --> 00:48:59.160
existing variants and so we proposed two
00:48:56.400 --> 00:49:00.880
things called the parallel adapter and
00:48:59.160 --> 00:49:04.400
uh the scaled parallel adapter and we
00:49:00.880 --> 00:49:06.559
demonstrate that they get better
00:49:04.400 --> 00:49:09.760
results so then the question is which
00:49:06.559 --> 00:49:11.200
one to choose um for convenience LoRA
00:49:09.760 --> 00:49:13.799
and BitFit don't change the model
00:49:11.200 --> 00:49:15.920
architecture so if you don't really care
00:49:13.799 --> 00:49:17.319
about like the absolute best accuracy
00:49:15.920 --> 00:49:20.079
out of these tuning methods I would
00:49:17.319 --> 00:49:22.119
definitely recommend um you use
00:49:20.079 --> 00:49:24.960
something like this it's definitely the
00:49:22.119 --> 00:49:27.640
easiest thing after you're done training
00:49:24.960 --> 00:49:29.960
for accuracy uh one thing that we found
00:49:27.640 --> 00:49:31.920
in our paper for simpler tasks it really
00:49:29.960 --> 00:49:33.559
actually doesn't matter very much so if
00:49:31.920 --> 00:49:35.480
you're just doing classification tasks
00:49:33.559 --> 00:49:37.440
even something super simple like BitFit
00:49:35.480 --> 00:49:38.280
is rather competitive with all of the
00:49:37.440 --> 00:49:41.319
other
00:49:38.280 --> 00:49:43.880
methods for more complex tasks and a
00:49:41.319 --> 00:49:46.680
small parameter budget uh we found
00:49:43.880 --> 00:49:49.960
prefix tuning to do a pretty good job uh
00:49:46.680 --> 00:49:52.359
this is not a like Universal finding but
00:49:49.960 --> 00:49:54.319
it's what we found in our paper and then
00:49:52.359 --> 00:49:57.319
for more complex tasks plus larger
00:49:54.319 --> 00:50:00.079
parameter budgets um adapters or some
00:49:57.319 --> 00:50:03.400
sort of mixture of multiple methods can
00:50:00.079 --> 00:50:04.720
be can give you better results so again
00:50:03.400 --> 00:50:07.160
all of this is in the paper if you want to
00:50:04.720 --> 00:50:07.160
look at more
00:50:07.960 --> 00:50:14.359
details
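For reference, a sketch of the scaled parallel adapter in that framework (names and the values of r and s are illustrative hyperparameters):

```python
import torch.nn as nn

class ScaledParallelAdapter(nn.Module):
    """Parallel insertion: the adapter reads the *input* of the wrapped
    module, and its output is scaled by s before being added back."""
    def __init__(self, module, d_model, r=64, s=4.0):
        super().__init__()
        self.module = module                 # e.g. a frozen FFN sublayer
        self.down = nn.Linear(d_model, r)    # downscale
        self.up = nn.Linear(r, d_model)      # upscale
        self.act = nn.ReLU()                 # the functional form
        self.s = s                           # scalar composition factor

    def forward(self, x):
        # composition: module output + scaled adapter output
        return self.module(x) + self.s * self.up(self.act(self.down(x)))
```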
00:50:10.200 --> 00:50:16.000
cool okay so any any questions about
00:50:14.359 --> 00:50:18.880
that
00:50:16.000 --> 00:50:20.920
or okay uh next I'm going to go through
00:50:18.880 --> 00:50:22.440
some NLP tasks and the reason why I'm
00:50:20.920 --> 00:50:23.640
going to go through some NLP tasks is
00:50:22.440 --> 00:50:25.240
because when we're fine-tuning we need
00:50:23.640 --> 00:50:26.680
to be fine-tuning towards individual
00:50:25.240 --> 00:50:29.400
tasks we want to
00:50:26.680 --> 00:50:30.760
solve um and so basic fine tuning we
00:50:29.400 --> 00:50:32.400
build a model that's good at performing
00:50:30.760 --> 00:50:34.160
a single task instruction tuning we
00:50:32.400 --> 00:50:35.640
build a generalist model that
00:50:34.160 --> 00:50:37.240
is good at many
00:50:35.640 --> 00:50:40.040
tasks
00:50:37.240 --> 00:50:41.799
um and what I want to go through now is
00:50:40.040 --> 00:50:46.119
I want to go through some tasks that
00:50:41.799 --> 00:50:48.520
I've seen people use number one being
00:50:46.119 --> 00:50:50.720
really important in like actual
00:50:48.520 --> 00:50:52.559
applications of NLP models in industry
00:50:50.720 --> 00:50:54.760
but number two what what is the set of
00:50:52.559 --> 00:50:56.680
tasks that people use to evaluate general
00:50:54.760 --> 00:51:00.000
models so like if you look at the GPT
00:50:56.680 --> 00:51:01.400
papers or you look at the Gemini paper
00:51:00.000 --> 00:51:02.960
what is the set of tasks that they're
00:51:01.400 --> 00:51:06.400
using to demonstrate that their models
00:51:02.960 --> 00:51:07.599
work well so the first one is context
00:51:06.400 --> 00:51:11.000
free question
00:51:07.599 --> 00:51:13.880
answering also called closed-book QA
00:51:11.000 --> 00:51:15.640
basically this requires answering a
00:51:13.880 --> 00:51:17.720
question without any specific grounding
00:51:15.640 --> 00:51:19.480
into documents it's also what happens
00:51:17.720 --> 00:51:21.119
when chat GPT answers your questions
00:51:19.480 --> 00:51:22.799
without looking something up on the web
00:51:21.119 --> 00:51:25.160
for
00:51:22.799 --> 00:51:26.920
example an example data set that lots of
00:51:25.160 --> 00:51:30.920
people use is something called
00:51:26.920 --> 00:51:33.119
MMLU um this is uh a massive multitask
00:51:30.920 --> 00:51:35.920
language understanding data set and it
00:51:33.119 --> 00:51:38.559
has questions in a number of relatively
00:51:35.920 --> 00:51:42.599
difficult areas like professional law so
00:51:38.559 --> 00:51:45.079
this is asking what happens when a
00:51:42.599 --> 00:51:47.920
salesman ignores a trespassers will
00:51:45.079 --> 00:51:52.000
be prosecuted sign and enters a
00:51:47.920 --> 00:51:54.839
hermit's house he drives up the driveway
00:51:52.000 --> 00:51:56.319
and an explosive charge explodes the
00:51:54.839 --> 00:51:58.319
salesman was injured can the salesman
00:51:56.319 --> 00:52:01.960
recover damages from The
00:51:58.319 --> 00:52:03.880
hermit so I would not be able to
00:52:01.960 --> 00:52:06.480
answer this with you know certainty
00:52:03.880 --> 00:52:08.799
because I'm not a lawyer um the answer
00:52:06.480 --> 00:52:10.720
is yes if the hermit was responsible for
00:52:08.799 --> 00:52:13.240
the explosive charge under the driveway
00:52:10.720 --> 00:52:15.200
so now you know uh you can collect
00:52:13.240 --> 00:52:17.559
damages if somebody tries to blow you up
00:52:15.200 --> 00:52:20.559
when you trespass on their
00:52:17.559 --> 00:52:22.880
property but uh yeah and this has lots
00:52:20.559 --> 00:52:25.079
and lots of categories like
00:52:22.880 --> 00:52:27.000
this the next thing is contextual
00:52:25.079 --> 00:52:29.720
question answering and this is
00:52:27.000 --> 00:52:30.839
uh question answering uh grounded in
00:52:29.720 --> 00:52:34.440
actual
00:52:30.839 --> 00:52:35.640
context um one example data set that a
00:52:34.440 --> 00:52:38.839
lot of people use is something called
00:52:35.640 --> 00:52:40.680
natural questions and this is uh
00:52:38.839 --> 00:52:43.200
questions grounded in a Wikipedia
00:52:40.680 --> 00:52:46.440
document or the Wikipedia document
00:52:43.200 --> 00:52:48.079
collection so grounded in a Wikipedia
00:52:46.440 --> 00:52:49.440
document means they give you the actual
00:52:48.079 --> 00:52:50.559
document you should be answering the
00:52:49.440 --> 00:52:52.559
question about and then you need to
00:52:50.559 --> 00:52:55.640
answer the question about
00:52:52.559 --> 00:52:57.440
it this is often called machine reading
00:52:55.640 --> 00:52:59.960
because you expect it to like read and
00:52:57.440 --> 00:53:02.599
answer questions about the
00:52:59.960 --> 00:53:04.799
document or it could be okay we're going
00:53:02.599 --> 00:53:06.400
to give you all of Wikipedia please
00:53:04.799 --> 00:53:10.280
provide us the answer to this question
00:53:06.400 --> 00:53:11.880
and this is uh often called uh retrieval
00:53:10.280 --> 00:53:14.319
based question answering or retrieval
00:53:11.880 --> 00:53:18.000
augmented one variety of retrieval
00:53:14.319 --> 00:53:21.960
augmented generation or RAG so this is
00:53:18.000 --> 00:53:23.520
really really important um I think
00:53:21.960 --> 00:53:25.880
many people that I talked to who want to
00:53:23.520 --> 00:53:29.079
build actual systems
00:53:25.880 --> 00:53:31.400
from language models or NLP systems are
00:53:29.079 --> 00:53:34.319
trying to do this sort of
00:53:31.400 --> 00:53:36.680
thing the second most popular thing that
00:53:34.319 --> 00:53:39.040
I talk to people who are trying to build
00:53:36.680 --> 00:53:41.960
uh like NLP systems of some variety is
00:53:39.040 --> 00:53:45.119
code generation and basically this is
00:53:41.960 --> 00:53:47.440
simply generating code like Python or SQL
00:53:45.119 --> 00:53:50.160
from a natural language
00:53:47.440 --> 00:53:52.799
command uh the most popular data set for
00:53:50.160 --> 00:53:55.359
this is something called HumanEval and
00:53:52.799 --> 00:53:56.920
basically it has questions about
00:53:55.359 --> 00:53:58.720
how you do things
00:53:56.920 --> 00:54:00.440
with the Python standard library like
00:53:58.720 --> 00:54:04.799
return a list with elements incremented
00:54:00.440 --> 00:54:08.160
by one um
00:54:04.799 --> 00:54:09.880
it gives you the text and several
00:54:08.160 --> 00:54:11.119
examples of what the inputs and outputs
00:54:09.880 --> 00:54:12.480
should be and you're supposed to return
00:54:11.119 --> 00:54:14.040
a program like this and this is a
00:54:12.480 --> 00:54:16.680
simpler version of this there's also
00:54:14.040 --> 00:54:19.079
more complex ones one thing I should
00:54:16.680 --> 00:54:21.760
note um this is an area that I do a lot
00:54:19.079 --> 00:54:24.119
of research in HumanEval is a very
00:54:21.760 --> 00:54:26.079
simple uh example of this it doesn't use
00:54:24.119 --> 00:54:27.280
any external Library it doesn't use
00:54:26.079 --> 00:54:29.920
context and other stuff like that
00:54:27.280 --> 00:54:31.839
there's a lot of other more interesting
00:54:29.920 --> 00:54:33.400
data sets also so if you're working on
00:54:31.839 --> 00:54:36.839
code generation I can recommend those as
00:54:33.400 --> 00:54:36.839
well and I'll do that later in the class
00:54:38.000 --> 00:54:43.599
too cool next is uh summarization and
00:54:41.839 --> 00:54:45.319
summarization uh there's a couple
00:54:43.599 --> 00:54:47.480
varieties of this one is single document
00:54:45.319 --> 00:54:49.359
summarization another is multi-document
00:54:47.480 --> 00:54:50.920
summarization uh single document
00:54:49.359 --> 00:54:53.240
compresses a longer document to a
00:54:50.920 --> 00:54:57.040
shorter one multi-document compresses
00:54:53.240 --> 00:54:59.799
multiple documents into one
00:54:57.040 --> 00:55:02.319
um honestly right now single document
00:54:59.799 --> 00:55:05.000
summarization in English works pretty
00:55:02.319 --> 00:55:07.079
well out of the box uh it's not perfect
00:55:05.000 --> 00:55:09.480
but it's close enough to being perfect
00:55:07.079 --> 00:55:10.720
that um I've worked in summarization
00:55:09.480 --> 00:55:12.760
before and I don't know if there's a
00:55:10.720 --> 00:55:15.000
whole lot more that we can do there of
00:55:12.760 --> 00:55:16.400
course multilingual is interesting
00:55:15.000 --> 00:55:18.319
multi-document summarization is
00:55:16.400 --> 00:55:19.920
definitely not solved um
00:55:18.319 --> 00:55:22.160
multi-document summarization is when you
00:55:19.920 --> 00:55:23.960
have lots of documents about a
00:55:22.160 --> 00:55:25.920
particular topic and you want to
00:55:23.960 --> 00:55:29.039
summarize them down into a coherent
00:55:25.920 --> 00:55:31.480
summary of that topic one example of
00:55:29.039 --> 00:55:34.039
that is WikiSum this is a data set where
00:55:31.480 --> 00:55:37.319
you're provided with all of the links to
00:55:34.039 --> 00:55:39.680
pages about a Wikipedia article and
00:55:37.319 --> 00:55:41.400
you're expected to generate the first
00:55:39.680 --> 00:55:44.200
paragraph or few paragraphs of the
00:55:41.400 --> 00:55:48.039
article and so you're expected to take
00:55:44.200 --> 00:55:50.000
like lots of noisy you know incoherent
00:55:48.039 --> 00:55:52.160
articles about Barack Obama and actually
00:55:50.000 --> 00:55:55.039
write a Barack Obama article something like
00:55:52.160 --> 00:55:57.680
this uh some other example interesting
00:55:55.039 --> 00:56:00.680
tasks for this include things like uh
00:55:57.680 --> 00:56:02.400
survey generation for papers or
00:56:00.680 --> 00:56:05.480
something like that you want to know
00:56:02.400 --> 00:56:07.920
everything about a scientific topic or
00:56:05.480 --> 00:56:10.400
um generating
00:56:07.920 --> 00:56:12.599
a report of all the things that happened
00:56:10.400 --> 00:56:14.839
in the stock market today or something
00:56:12.599 --> 00:56:17.720
like that you know there's lots of uh
00:56:14.839 --> 00:56:17.720
places where this could be
00:56:18.240 --> 00:56:23.359
useful another class of tasks is
00:56:20.520 --> 00:56:25.400
information extraction um there's lots
00:56:23.359 --> 00:56:27.799
of examples of this but basically they
00:56:25.400 --> 00:56:31.319
all boil down to extracting some sort of
00:56:27.799 --> 00:56:33.200
information in structured format uh from
00:56:31.319 --> 00:56:35.240
text and this is things like entity
00:56:33.200 --> 00:56:37.960
recognition identifying which words are
00:56:35.240 --> 00:56:40.920
entities entity linking linking entities
00:56:37.960 --> 00:56:42.799
to a knowledge base entity co-reference
00:56:40.920 --> 00:56:45.319
finding which entities in an input
00:56:42.799 --> 00:56:47.440
correspond to each other uh event
00:56:45.319 --> 00:56:49.079
recognition linking co-reference so all
00:56:47.440 --> 00:56:50.799
of the same things except doing it for
00:56:49.079 --> 00:56:53.839
events instead of
00:56:50.799 --> 00:56:55.480
entities um an example data set is uh
00:56:53.839 --> 00:56:57.119
something called OntoNotes it's an older
00:56:55.480 --> 00:56:59.280
data set but it has all these things
00:56:57.119 --> 00:57:00.680
annotated and you can extract things
00:56:59.280 --> 00:57:03.119
from this there's lots of other data
00:57:00.680 --> 00:57:04.839
sets for this too also kind of more
00:57:03.119 --> 00:57:07.440
in general you can think of like what if
00:57:04.839 --> 00:57:09.680
I gave you an Excel sheet uh could you
00:57:07.440 --> 00:57:11.319
go and like fill in an Excel or Google
00:57:09.680 --> 00:57:12.880
sheet could you go and fill in all of
00:57:11.319 --> 00:57:14.760
the columns in the sheet uh
00:57:12.880 --> 00:57:18.160
appropriately given all the information
00:57:14.760 --> 00:57:22.000
on the internet so um this is a a pretty
00:57:18.160 --> 00:57:22.000
important task category as
00:57:22.079 --> 00:57:26.599
well translation so I don't really need to
00:57:25.160 --> 00:57:30.319
talk that much about it it's translating
00:57:26.599 --> 00:57:32.319
from one language to another um for both
00:57:30.319 --> 00:57:34.039
translation and summarization uh
00:57:32.319 --> 00:57:35.960
evaluation is kind of tricky I'll talk
00:57:34.039 --> 00:57:38.680
about this uh in the future but
00:57:35.960 --> 00:57:41.559
basically uh you assess quality based on
00:57:38.680 --> 00:57:45.079
similarity to some sort of reference
00:57:41.559 --> 00:57:46.960
uh using things like BLEU score or uh
00:57:45.079 --> 00:57:49.680
neural
00:57:46.960 --> 00:57:51.160
metrics an example of this uh which I
00:57:49.680 --> 00:57:52.760
think is actually a really nice example
00:57:51.160 --> 00:57:56.200
is something called the Flores data set
00:57:52.760 --> 00:57:59.480
and this is a translation of several uh
00:57:56.200 --> 00:57:59.480
or like a thousand
00:57:59.520 --> 00:58:05.799
Wikipedia not a thousand but like several
00:58:03.079 --> 00:58:07.960
quite a few Wikipedia articles into 101
00:58:05.799 --> 00:58:09.400
languages the reason why I like this
00:58:07.960 --> 00:58:10.960
data set a lot is because if you could
00:58:09.400 --> 00:58:12.720
translate into all of these languages
00:58:10.960 --> 00:58:14.799
you would be able to you know Aid
00:58:12.720 --> 00:58:16.720
information dissemination across the
00:58:14.799 --> 00:58:20.640
world make access to information more
00:58:16.720 --> 00:58:23.799
equitable so I like this data set
00:58:20.640 --> 00:58:25.440
well separately from this there are
00:58:23.799 --> 00:58:27.480
general purpose benchmarks these
00:58:25.440 --> 00:58:31.119
benchmarks are not really for the
00:58:27.480 --> 00:58:33.559
purpose of evaluating any specific task
00:58:31.119 --> 00:58:35.280
that people think is actually useful but
00:58:33.559 --> 00:58:38.200
rather trying to test the language
00:58:35.280 --> 00:58:41.119
abilities of language models themselves
00:58:38.200 --> 00:58:44.480
a typical example of this is BIG-bench
00:58:41.119 --> 00:58:46.640
and this contains a whole bunch of tasks
00:58:44.480 --> 00:58:48.720
that uh test you know different
00:58:46.640 --> 00:58:50.240
abilities I have some examples here
00:58:48.720 --> 00:58:52.440
these are very small so you might need
00:58:50.240 --> 00:58:54.359
to look at the slides to see them but
00:58:52.440 --> 00:58:57.760
for example this is tracking shuffled
00:58:54.359 --> 00:59:00.359
objects like um Alice Bob and Claire are
00:58:57.760 --> 00:59:01.880
friends uh who occasionally trade books
00:59:00.359 --> 00:59:04.119
at the start of the semester each one
00:59:01.880 --> 00:59:05.640
has a new book then they trade then they
00:59:04.119 --> 00:59:09.039
trade then they trade then they trade
00:59:05.640 --> 00:59:11.599
which one does Bob have um today is
00:59:09.039 --> 00:59:13.640
Christmas Eve of 1937 what is the date
00:59:11.599 --> 00:59:17.599
tomorrow and you need to write it in the
00:59:13.640 --> 00:59:20.359
appropriate format um Sherry tells the
00:59:17.599 --> 00:59:22.960
truth Vernell says Sherry tells the truth
00:59:20.359 --> 00:59:25.240
Alexis says Vernell lies Michaela says
00:59:22.960 --> 00:59:26.880
Alexis tells the truth Elanor says
00:59:25.240 --> 00:59:29.880
Michaela tells the truth does Elanor
00:59:26.880 --> 00:59:31.119
tell the truth um hope you all got that
00:59:29.880 --> 00:59:34.319
one
00:59:31.119 --> 00:59:37.440
right um so like it's just these kind of
00:59:34.319 --> 00:59:38.880
exercises and like when you look at how
00:59:37.440 --> 00:59:40.520
language models are being evaluated
00:59:38.880 --> 00:59:42.559
they're being evaluated against like
00:59:40.520 --> 00:59:44.400
many of these tasks not all of them
00:59:42.559 --> 00:59:47.200
necessarily but many of them I think
00:59:44.400 --> 00:59:48.920
Gemini evaluated with respect to
00:59:47.200 --> 00:59:51.680
all of these task categories except
00:59:48.920 --> 00:59:53.799
information extraction maybe so um these
00:59:51.680 --> 00:59:56.640
are kind of typical task categories that
00:59:53.799 --> 00:59:56.640
people look at
00:59:57.039 --> 01:00:00.680
cool um any questions about
01:00:02.359 --> 01:00:06.680
this nice okay uh yeah
01:00:09.400 --> 01:00:14.400
sorry how how do they ensure that
01:00:12.880 --> 01:00:18.280
similar data does not appear in the
01:00:14.400 --> 01:00:19.680
training data so people have tried uh a
01:00:18.280 --> 01:00:20.920
bunch of different things this is
01:00:19.680 --> 01:00:23.240
actually this actually might be a good
01:00:20.920 --> 01:00:25.480
thing to talk about uh at some point
01:00:23.240 --> 01:00:30.559
when we talk about data curation or things
01:00:25.480 --> 01:00:33.720
like this um the first thing is uh you
01:00:30.559 --> 01:00:35.920
actually create the data so similar is
01:00:33.720 --> 01:00:39.119
actually okay right because you
01:00:35.920 --> 01:00:42.160
know if it appears everywhere on the
01:00:39.119 --> 01:00:44.599
internet GPT-4 will learn it the problem
01:00:42.160 --> 01:00:47.680
is like if the exact same thing appears
01:00:44.599 --> 01:00:49.520
um so number one how do we prevent this
01:00:47.680 --> 01:00:52.520
from happening number two how do we even
01:00:49.520 --> 01:00:54.000
tell that it did happen um so some
01:00:52.520 --> 01:00:56.280
things that people do to tell that it
01:00:54.000 --> 01:01:00.319
did happen is they make small
01:00:56.280 --> 01:01:04.119
perturbations to the test data and
01:01:00.319 --> 01:01:06.200
test whether that like drops the model
01:01:04.119 --> 01:01:07.680
score by a whole lot there was actually
01:01:06.200 --> 01:01:09.119
a paper I don't know if I can find it
01:01:07.680 --> 01:01:12.280
immediately but there was a paper
01:01:09.119 --> 01:01:16.520
recently that just like swapped the
01:01:12.280 --> 01:01:18.280
order of answer options in MMLU and saw
01:01:16.520 --> 01:01:21.880
that the accuracy went down for some
01:01:18.280 --> 01:01:22.880
language models so that should have no
01:01:21.880 --> 01:01:24.119
that should make no difference
01:01:22.880 --> 01:01:26.960
whatsoever because you're just changing
01:01:24.119 --> 01:01:29.000
the order of answers but it caused
01:01:26.960 --> 01:01:30.880
accuracy to go down if that's the case
01:01:29.000 --> 01:01:33.920
it's a pretty clear sign that it's
01:01:30.880 --> 01:01:35.760
leaking um other things that people do
01:01:33.920 --> 01:01:38.480
are change the input a little bit so
01:01:35.760 --> 01:01:39.880
like change the number in a math problem
01:01:38.480 --> 01:01:41.760
uh to be a slightly different value and
01:01:39.880 --> 01:01:43.640
see if that hurts the accuracy overall
01:01:41.760 --> 01:01:45.200
and like making these little
01:01:43.640 --> 01:01:46.280
perturbations that shouldn't change the
01:01:45.200 --> 01:01:47.640
accuracy and then if they do
01:01:46.280 --> 01:01:49.920
significantly then you think there's a
01:01:47.640 --> 01:01:53.119
problem so I think that's a basic tool
01:01:49.920 --> 01:01:56.079
to diagnose this how do you prevent it
01:01:53.119 --> 01:01:58.160
from happening um there's really simple
01:01:56.079 --> 01:02:04.039
and silly things that you can do
01:01:58.160 --> 01:02:07.240
like uh zip the file and like put a
01:02:04.039 --> 01:02:09.119
password on the file um and then like a
01:02:07.240 --> 01:02:12.440
scraper even if it's scraping all of
01:02:09.119 --> 01:02:14.680
GitHub for training data it won't scrape
01:02:12.440 --> 01:02:16.960
your zipped and password protected file
01:02:14.680 --> 01:02:19.279
right so that's kind of a first line of
01:02:16.960 --> 01:02:21.799
defense it doesn't work if someone puts
01:02:19.279 --> 01:02:25.200
it like you know someone puts it
01:02:21.799 --> 01:02:26.839
somewhere uh in unzipped formats so
01:02:25.200 --> 01:02:28.039
that's a problem but you know there
01:02:26.839 --> 01:02:29.839
there are things that you can do like
01:02:28.039 --> 01:02:31.799
that another thing you can do is just
01:02:29.839 --> 01:02:34.440
not reveal your data whatsoever so you
01:02:31.799 --> 01:02:36.160
can keep a private version of the data
01:02:34.440 --> 01:02:39.319
um you can keep a private version of the
01:02:36.160 --> 01:02:42.359
data and not you know let anybody else
01:02:39.319 --> 01:02:45.000
see the outputs so yeah it's pretty
01:02:42.359 --> 01:02:45.000
tricky
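A sketch of the answer-shuffling probe he describes (`model_answer` is a hypothetical stand-in for however you query the model; it should return the index of the chosen option):

```python
import random

def shuffle_probe(model_answer, question, choices, gold_idx, trials=20):
    """If shuffling the option order tanks accuracy, suspect contamination:
    the reordering carries no information about the right answer."""
    correct = 0
    for _ in range(trials):
        perm = list(range(len(choices)))
        random.shuffle(perm)
        pred = model_answer(question, [choices[i] for i in perm])
        correct += (perm[pred] == gold_idx)   # map back to the original index
    return correct / trials
```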
01:02:48.279 --> 01:02:54.920
yeah how do you control for task
01:02:50.520 --> 01:02:56.480
complexity that's a great question um
01:02:54.920 --> 01:02:59.119
I don't think there's any really good
01:02:56.480 --> 01:03:01.400
definition of task complexity yet um
01:02:59.119 --> 01:03:04.559
some things that you can do are control
01:03:01.400 --> 01:03:07.520
for like length or control
01:03:04.559 --> 01:03:11.400
for um you know the
01:03:07.520 --> 01:03:15.119
number the number of hops that are
01:03:11.400 --> 01:03:19.839
required in like multihop reasoning um
01:03:15.119 --> 01:03:19.839
there's actually one really interesting
01:03:21.760 --> 01:03:26.880
work that tries to do um not control
01:03:25.440 --> 01:03:29.880
necessarily but at
01:03:26.880 --> 01:03:29.880
least
01:03:32.720 --> 01:03:39.359
evaluate there's actually a
01:03:35.640 --> 01:03:42.480
couple so what this tries to do is this
01:03:39.359 --> 01:03:44.160
tries to break down questions into kind
01:03:42.480 --> 01:03:49.039
of like operations that you would need
01:03:44.160 --> 01:03:51.640
to do to solve them so it's like
01:03:49.039 --> 01:03:53.920
uh which keywords have been contained by
01:03:51.640 --> 01:03:55.279
more than 100 ACL papers and they say
01:03:53.920 --> 01:03:56.839
okay first you need to select then you
01:03:55.279 --> 01:03:58.799
need to filter then you need to project
01:03:56.839 --> 01:04:00.760
and stuff like that so they try to at
01:03:58.799 --> 01:04:03.520
least express the level of complexity in
01:04:00.760 --> 01:04:06.680
this way um there's also another one
01:04:03.520 --> 01:04:08.920
that's not so much on real
01:04:06.680 --> 01:04:10.960
data sorry this is called the
01:04:08.920 --> 01:04:13.079
break Benchmark if you're
01:04:10.960 --> 01:04:16.440
interested there was also a more recent
01:04:13.079 --> 01:04:16.440
paper that I
01:04:17.599 --> 01:04:22.960
liked that tried to do something
01:04:20.559 --> 01:04:22.960
somewhat
01:04:23.200 --> 01:04:26.200
similar
01:04:27.160 --> 01:04:31.920
um where they express they come up with
01:04:29.760 --> 01:04:34.480
like math or programming problems and
01:04:31.920 --> 01:04:36.279
try to express them as a graph and then
01:04:34.480 --> 01:04:37.799
do some examination of how Transformer
01:04:36.279 --> 01:04:39.760
models do on things of different
01:04:37.799 --> 01:04:40.920
complexity I think the problem is
01:04:39.760 --> 01:04:43.039
there's so many different things that
01:04:40.920 --> 01:04:44.720
could make something hard or easy uh
01:04:43.039 --> 01:04:47.000
there's also like is it in distribution
01:04:44.720 --> 01:04:50.640
or out of distribution um from the point
01:04:47.000 --> 01:04:53.119
of view of topic or language or speaking
01:04:50.640 --> 01:04:55.640
style or things like that um and
01:04:53.119 --> 01:04:58.839
actually I think uh we're going to talk
01:04:55.640 --> 01:05:00.440
about this in the debugging and
01:04:58.839 --> 01:05:01.880
evaluation lecture but like one of the
01:05:00.440 --> 01:05:03.799
things I really like to do is I like to
01:05:01.880 --> 01:05:05.119
subsegment the data and look at
01:05:03.799 --> 01:05:06.960
different subsegments of the data
01:05:05.119 --> 01:05:09.920
where I think the subsegments will
01:05:06.960 --> 01:05:11.880
affect accuracy by a lot and basically
01:05:09.920 --> 01:05:14.359
anything that you could subsegment on
01:05:11.880 --> 01:05:17.720
is like something that determines
01:05:14.359 --> 01:05:19.279
difficulty so um yeah lots to
01:05:17.720 --> 01:05:21.960
say about that
01:05:19.279 --> 01:05:24.520
basically cool any other
01:05:21.960 --> 01:05:26.119
questions okay um if not let me get on
01:05:24.520 --> 01:05:27.680
to instruction tuning I don't have a
01:05:26.119 --> 01:05:30.079
whole lot about instruction tuning
01:05:27.680 --> 01:05:31.720
because it's uh you know conceptually
01:05:30.079 --> 01:05:34.799
pretty simple but I I would like to talk
01:05:31.720 --> 01:05:37.640
about all of it so basic instruction
01:05:34.799 --> 01:05:39.359
tuning uh was proposed almost
01:05:37.640 --> 01:05:41.920
simultaneously by people at Google and
01:05:39.359 --> 01:05:45.799
people at hugging face uh the way it
01:05:41.920 --> 01:05:49.760
works is you have
01:05:45.799 --> 01:05:54.960
tasks and you train on lots of tasks
01:05:49.760 --> 01:05:57.240
where you append the prompt and you uh
01:05:54.960 --> 01:05:58.599
you append a prompt you append the input
01:05:57.240 --> 01:06:01.400
and then you just try to train to
01:05:58.599 --> 01:06:03.160
generate the output and so this is this
01:06:01.400 --> 01:06:04.480
contrast from like base language model
01:06:03.160 --> 01:06:06.039
training because you're still training a
01:06:04.480 --> 01:06:08.480
language model based on a prompt and an
01:06:06.039 --> 01:06:10.559
output but you're
01:06:08.480 --> 01:06:12.400
specifically formatting them in a
01:06:10.559 --> 01:06:14.680
particular way so it corresponds to
01:06:12.400 --> 01:06:16.119
solving tasks it's essentially
01:06:14.680 --> 01:06:17.640
supervised training but supervised
01:06:16.119 --> 01:06:19.200
training over many many tasks fine
01:06:17.640 --> 01:06:21.520
tuning over many many
01:06:19.200 --> 01:06:25.480
tasks
01:06:21.520 --> 01:06:29.359
papers showed was that basically if you
01:06:25.480 --> 01:06:31.000
do this instruction tuning you do well
01:06:29.359 --> 01:06:32.920
not only on the tasks that you trained
01:06:31.000 --> 01:06:36.640
on but also on new tasks that you didn't
01:06:32.920 --> 01:06:38.720
train on um and so this is now really
01:06:36.640 --> 01:06:40.960
like important it's Incorporated in
01:06:38.720 --> 01:06:43.279
every serious language model that's used
01:06:40.960 --> 01:06:48.720
in a kind of like production setting
01:06:43.279 --> 01:06:52.599
nowadays and all um yeah so uh I think
01:06:48.720 --> 01:06:52.599
that's the basic idea
01:06:53.000 --> 01:06:57.520
here
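Concretely, the training examples are just linearized, roughly like this (the template is illustrative; every dataset and model family uses its own):

```python
def format_example(instruction, input_text, output):
    prompt = (f"Instruction: {instruction}\n"
              f"Input: {input_text}\n"
              f"Output: ")
    # Train the language model on prompt + output, typically computing
    # the loss only on the output tokens.
    return prompt + output
```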
01:06:55.160 --> 01:06:59.480
you can also do things like learn to in
01:06:57.520 --> 01:07:02.160
context learn so we talked about in
01:06:59.480 --> 01:07:05.160
context learning so in context learning
01:07:02.160 --> 01:07:07.799
instead of uh giving just a prompt you
01:07:05.160 --> 01:07:09.960
give training examples in the context
01:07:07.799 --> 01:07:12.240
and so that's what you do in this paper
01:07:09.960 --> 01:07:14.400
here as well you sample a whole bunch of
01:07:12.240 --> 01:07:17.880
training examples you append them to the
01:07:14.400 --> 01:07:19.720
context and then you train the model and
01:07:17.880 --> 01:07:21.359
so why is this good this is good because
01:07:19.720 --> 01:07:24.400
it will train a model that's better in
01:07:21.359 --> 01:07:26.160
context learning basically so if you um
01:07:24.400 --> 01:07:29.480
if you want to provide these training
01:07:26.160 --> 01:07:29.480
examples then you can train it like
01:07:30.039 --> 01:07:35.480
that so these are the two basic ways of
01:07:32.680 --> 01:07:37.039
doing instruction tuning um all all came
01:07:35.480 --> 01:07:40.920
out around the same
01:07:37.039 --> 01:07:43.400
time um there are a bunch of data sets
01:07:40.920 --> 01:07:45.440
that people have compiled and you
01:07:43.400 --> 01:07:47.160
probably if you want to do any sort of
01:07:45.440 --> 01:07:48.599
instruction tuning you probably want to
01:07:47.160 --> 01:07:50.680
use one of these data sets because
01:07:48.599 --> 01:07:52.920
compiling together a bunch of data sets
01:07:50.680 --> 01:07:55.720
is just annoying it's not hard but it's
01:07:52.920 --> 01:07:59.319
annoying um and
01:07:55.720 --> 01:08:01.520
so I very highly recommend
01:07:59.319 --> 01:08:03.960
this paper on the Flan collection
01:08:01.520 --> 01:08:05.480
because it gives a good uh summary it
01:08:03.960 --> 01:08:08.079
has this really nice table that breaks
01:08:05.480 --> 01:08:10.960
them down based on like um what's the
01:08:08.079 --> 01:08:14.440
name of the data set uh what is the size
01:08:10.960 --> 01:08:17.880
of the training data um what prompts do
01:08:14.440 --> 01:08:20.040
they use zero shot or few shot uh so like
01:08:17.880 --> 01:08:21.799
few shot is in-context learning like I
01:08:20.040 --> 01:08:24.719
mentioned before how many tasks are
01:08:21.799 --> 01:08:26.799
there um and what detailed methods do
01:08:24.719 --> 01:08:28.480
they use so you can take a look at this
01:08:26.799 --> 01:08:30.520
some very popular ones that lots of
01:08:28.480 --> 01:08:33.920
people use are things like the Flan
01:08:30.520 --> 01:08:36.520
collection from here also uh Natural
01:08:33.920 --> 01:08:40.640
Instructions is a very popular one uh
01:08:36.520 --> 01:08:43.040
that people still use a lot um and Self-
01:08:40.640 --> 01:08:46.560
Instruct is a popular one that I'll
01:08:43.040 --> 01:08:46.560
talk about in in a
01:08:47.640 --> 01:08:53.159
second cool um so the second thing that
01:08:50.960 --> 01:08:55.359
I want to talk about is instruction
01:08:53.159 --> 01:08:57.359
tuned models or the next thing I want to
01:08:55.359 --> 01:08:59.120
talk about is instruction tun models
01:08:57.359 --> 01:09:01.600
these are examples of models that like I
01:08:59.120 --> 01:09:05.560
can recommend you use now in 2024
01:09:01.600 --> 01:09:10.839
they're like good models to use um
01:09:05.560 --> 01:09:12.279
Flan-T5 I think is a very good model
01:09:10.839 --> 01:09:16.199
especially it's a very good model for
01:09:12.279 --> 01:09:19.679
its size and it comes in various sizes
01:09:16.199 --> 01:09:20.880
uh from smaller models to uh those up to
01:09:19.679 --> 01:09:23.199
11 billion
01:09:20.880 --> 01:09:25.839
parameters and it's an encoder decoder
01:09:23.199 --> 01:09:29.279
model based on T5 that was trained on
01:09:25.839 --> 01:09:32.080
lots of data my impression is that this
01:09:29.279 --> 01:09:34.920
is a model that's like consistently good
01:09:32.080 --> 01:09:38.400
at anything that's like a simple input
01:09:34.920 --> 01:09:40.759
output style task not like a chat task
01:09:38.400 --> 01:09:43.040
um so if you just have input output you
01:09:40.759 --> 01:09:45.839
want to do like uh code generation you
01:09:43.040 --> 01:09:47.319
want to do maybe not code generation you
01:09:45.839 --> 01:09:49.199
want to do like summarization or other
01:09:47.319 --> 01:09:52.640
things like that that's a good model to
01:09:49.199 --> 01:09:55.560
use um another one is Llama 2 Chat so
01:09:52.640 --> 01:09:58.120
Llama 2 Chat was instruction tuned and
01:09:55.560 --> 01:10:02.719
uh kind of tuned with human preferences
01:09:58.120 --> 01:10:05.600
but it it is quite good at following
01:10:02.719 --> 01:10:07.520
instructions and then there's also
01:10:05.600 --> 01:10:10.600
excuse me Mixtral Instruct and these are
01:10:07.520 --> 01:10:13.360
both decoder only models Mixtral is a
01:10:10.600 --> 01:10:17.280
decoder only mixture of experts model
01:10:13.360 --> 01:10:19.400
Mistral is smaller and quite strong so I
01:10:17.280 --> 01:10:20.920
would recommend that you consider this
01:10:19.400 --> 01:10:24.480
maybe as a default if you want a decoder
01:10:20.920 --> 01:10:26.840
only model and then Flan-T5 if you
01:10:24.480 --> 01:10:26.840
need an encoder
01:10:28.840 --> 01:10:33.800
decoder
01:10:30.480 --> 01:10:35.719
cool um the final thing I'd like to talk
01:10:33.800 --> 01:10:37.000
about a little bit um and then we're
01:10:35.719 --> 01:10:39.239
also going to talk about it a bit more
01:10:37.000 --> 01:10:42.000
in the distillation class is data set
01:10:39.239 --> 01:10:43.440
generation so it's possible to
01:10:42.000 --> 01:10:46.440
automatically generate instruction
01:10:43.440 --> 01:10:48.199
tuning data sets and the first or
01:10:46.440 --> 01:10:51.560
typical example of this is Self-
01:10:48.199 --> 01:10:55.080
Instruct and the way Self-Instruct works
01:10:51.560 --> 01:10:56.840
is you have uh a bunch of seed tasks
01:10:55.080 --> 01:10:59.640
that have one instruction and one
01:10:56.840 --> 01:11:02.560
instance per task you throw them into
01:10:59.640 --> 01:11:05.960
the task pool and then based on this you
01:11:02.560 --> 01:11:07.239
do prompting to try to generate new
01:11:05.960 --> 01:11:11.159
tasks
01:11:07.239 --> 01:11:14.440
basically and um you identify what type
01:11:11.159 --> 01:11:18.640
of uh what type of task it is and then
01:11:14.440 --> 01:11:19.640
based on the task you generate uh inputs
01:11:18.640 --> 01:11:22.440
and
01:11:19.640 --> 01:11:24.400
outputs and from these inputs and
01:11:22.440 --> 01:11:26.199
outputs they do a little bit of minimal
01:11:24.400 --> 01:11:27.800
filtering to D duplicate the data set
01:11:26.199 --> 01:11:29.640
and also remove things that require like
01:11:27.800 --> 01:11:31.679
visual information and other stuff like
01:11:29.640 --> 01:11:34.080
that and then feed that back into the
01:11:31.679 --> 01:11:36.560
task pool so basically like they start
01:11:34.080 --> 01:11:38.159
with 175 examples and then they expand
01:11:36.560 --> 01:11:40.520
this data set to be very large to cover
01:11:38.159 --> 01:11:45.320
many many different tasks
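The loop, in very rough pseudocode (a sketch only; `llm` is a stand-in for the generator model, and the real pipeline also classifies the task type and does ROUGE-based deduplication and filtering):

```python
import random

def self_instruct(seed_tasks, llm, n_rounds, k=8):
    pool = list(seed_tasks)                  # e.g. the 175 seed tasks
    for _ in range(n_rounds):
        demos = random.sample(pool, min(k, len(pool)))
        prompt = ("Here are some tasks:\n" + "\n".join(demos)
                  + "\nCome up with a new task:")
        candidate = llm(prompt)
        if candidate not in pool:            # minimal dedup / filtering
            pool.append(candidate)
    return pool
```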
01:11:40.520 --> 01:11:46.520
um so uh this is pretty influential and
01:11:45.320 --> 01:11:49.679
like one interesting thing that they
01:11:46.520 --> 01:11:52.560
showed here is that you can improve the
01:11:49.679 --> 01:11:55.960
model that was used to generate uh these
01:11:52.560 --> 01:11:58.600
itself um so basically they took this
01:11:55.960 --> 01:12:01.719
and they used it to fine-tune uh GPT-3
01:11:58.600 --> 01:12:04.679
basically um they used GPT-3 to generate
01:12:01.719 --> 01:12:04.679
the data and they used it to
01:12:04.760 --> 01:12:11.639
fine-tune it um some other more recent examples
01:12:07.920 --> 01:12:15.600
are Chain of Thought um uh tuning for
01:12:11.639 --> 01:12:17.320
Chain of Thought so um Orca is a nice
01:12:15.600 --> 01:12:20.840
example of this this is uh something
01:12:17.320 --> 01:12:23.120
where they generated explanations for
01:12:20.840 --> 01:12:24.679
why um for why the model made a
01:12:23.120 --> 01:12:27.159
particular decision and then they use
01:12:24.679 --> 01:12:30.400
that to train models uh and improve
01:12:27.159 --> 01:12:30.400
their essentially reasoning
01:12:31.120 --> 01:12:37.280
capabilities another interesting example
01:12:34.159 --> 01:12:38.880
is uh something called Evol-Instruct and
01:12:37.280 --> 01:12:40.760
basically the idea here is they start
01:12:38.880 --> 01:12:43.440
out with a seed set of instructions from
01:12:40.760 --> 01:12:45.800
any data set that you want to be using
01:12:43.440 --> 01:12:48.239
and they modify those instructions to
01:12:45.800 --> 01:12:50.480
make them more complex so they say okay
01:12:48.239 --> 01:12:52.920
this is too easy let's make this harder
01:12:50.480 --> 01:12:55.679
um and that makes it possible to uh
01:12:52.920 --> 01:12:58.320
improve uh the ability of models to
01:12:55.679 --> 01:13:00.440
solve complex problems so this is
01:12:58.320 --> 01:13:02.120
actually a really popular you know area
01:13:00.440 --> 01:13:04.080
overall nowadays I I'm not going to do
01:13:02.120 --> 01:13:06.960
it justice in one slide so we'll talk a
01:13:04.080 --> 01:13:09.199
bit more about it later but um uh this
01:13:06.960 --> 01:13:11.440
is the the general
01:13:09.199 --> 01:13:14.280
idea
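The core of it can be sketched as a single rewriting prompt (illustrative; the paper defines several distinct "evolution" operations):

```python
EVOLVE_PROMPT = (
    "Rewrite the following instruction to make it harder to solve, for "
    "example by adding constraints or requiring more reasoning steps, "
    "while keeping it answerable:\n\n{instruction}"
)

def evolve(llm, instruction):
    # llm is a stand-in for whatever strong model generates the rewrite
    return llm(EVOLVE_PROMPT.format(instruction=instruction))
```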
01:13:11.440 --> 01:13:18.159
cool and yeah that's all I have for
01:13:14.280 --> 01:13:18.159
today uh any questions or
01:13:20.679 --> 01:13:25.360
[inaudible audience question]
01:13:26.760 --> 01:13:31.199
oh yeah yeah sorry sorry very very good
01:13:29.480 --> 01:13:32.880
question and I actually wanted to put
01:13:31.199 --> 01:13:34.960
that on my slide but I just realized I
01:13:32.880 --> 01:13:36.800
forgot so thank you for prompting me um
01:13:34.960 --> 01:13:38.920
so when would you want to do basic uh
01:13:36.800 --> 01:13:40.760
single task fine tuning versus
01:13:38.920 --> 01:13:45.199
instruction
01:13:40.760 --> 01:13:47.880
tuning if you have a very carefully like
01:13:45.199 --> 01:13:49.360
if you have a very clear task definition
01:13:47.880 --> 01:13:51.280
and you have lots of training data doing
01:13:49.360 --> 01:13:53.440
full fine tuning can be good for a
01:13:51.280 --> 01:13:56.120
number of reasons number one you can
01:13:53.440 --> 01:13:57.800
get maybe slightly superior accuracy
01:13:56.120 --> 01:14:00.280
with bigger models but you can get much
01:13:57.800 --> 01:14:01.719
superior accuracy with smaller models
01:14:00.280 --> 01:14:04.120
because smaller models don't have the
01:14:01.719 --> 01:14:07.960
capacity to like do really really well
01:14:04.120 --> 01:14:10.280
on lots of different tasks so um I think
01:14:07.960 --> 01:14:12.760
you'll see you know some some
01:14:10.280 --> 01:14:14.840
improvement maybe a somewhat marginal
01:14:12.760 --> 01:14:17.560
improvement on bigger models but you'll
01:14:14.840 --> 01:14:20.040
see a big improvement on smaller models
01:14:17.560 --> 01:14:22.440
and there have been some
01:14:20.040 --> 01:14:24.639
interesting results on this recently
01:14:22.440 --> 01:14:27.000
like there's a really strong text-to-SQL
01:14:24.639 --> 01:14:28.760
model that was based on Llama 7B that
01:14:27.000 --> 01:14:32.639
was just trained on tons and tons of
01:14:28.760 --> 01:14:35.520
text-to-SQL data for example um and so
01:14:32.639 --> 01:14:38.520
there's certain tasks where it's really
01:14:35.520 --> 01:14:41.520
important another example is
01:14:38.520 --> 01:14:45.120
um on translation
01:14:41.520 --> 01:14:48.280
tasks uh there's a model called NLLB which
01:14:45.120 --> 01:14:50.880
is 3.3 billion parameters and it's
01:14:48.280 --> 01:14:53.560
competitive with GPT-4 on very
01:14:50.880 --> 01:14:55.000
large uh on very large languages with
01:14:53.560 --> 01:14:57.199
lots of pretraining data and way better than
01:14:55.000 --> 01:15:00.080
GPT-4 on languages with less pretraining
01:14:57.199 --> 01:15:01.800
data so um it just shows how like if you
01:15:00.080 --> 01:15:03.880
very carefully work on a special purpose
01:15:01.800 --> 01:15:05.800
model even if it's very small compared
01:15:03.880 --> 01:15:08.280
to the bigger model you can still do a
01:15:05.800 --> 01:15:10.560
really good job so I think that's the
01:15:08.280 --> 01:15:13.440
biggest
01:15:10.560 --> 01:15:15.199
one um another thing is
01:15:13.440 --> 01:15:16.600
if you have a very fixed format and you
01:15:15.199 --> 01:15:17.880
always want something in a format you
01:15:16.600 --> 01:15:25.199
might want to be
01:15:17.880 --> 01:15:25.199
doing that [inaudible audience question about the previous slide]
01:15:37.639 --> 01:15:43.840
well you are inputting at least 175
01:15:41.400 --> 01:15:47.280
seed ones and
01:15:43.840 --> 01:15:49.080
um you know you're sampling from the
01:15:47.280 --> 01:15:51.320
model you're asking it to generate new
01:15:49.080 --> 01:15:52.400
instructions so if you have a a model
01:15:51.320 --> 01:15:54.000
that's good enough at following
01:15:52.400 --> 01:15:56.320
instructions it'll be able to generate
01:15:54.000 --> 01:15:56.320
something
01:16:00.400 --> 01:16:05.400
new for
01:16:02.400 --> 01:16:07.600
this yeah they have a class I believe
01:16:05.400 --> 01:16:12.560
they have a classifier that says it will
01:16:07.600 --> 01:16:12.560
be one of these two yeah
01:16:15.000 --> 01:16:18.639
it can be
01:16:20.280 --> 01:16:26.760
both yeah well but also the
01:16:24.080 --> 01:16:28.440
um the seed task can be input first and
01:16:26.760 --> 01:16:31.520
output first and you're like generating
01:16:28.440 --> 01:16:34.760
a new instruction for the LM
01:16:31.520 --> 01:16:36.199
here so this this is from the task pool
01:16:34.760 --> 01:16:37.960
but you're asking the LM to generate a
01:16:36.199 --> 01:16:40.120
new
01:16:37.960 --> 01:16:44.800
instruction
01:16:40.120 --> 01:16:44.800
yeah cool and anything
01:16:45.320 --> 01:16:51.239
else okay um yeah that that's all we
01:16:48.719 --> 01:16:54.239
have for today so thank
01:16:51.239 --> 01:16:54.239
you