|
WEBVTT |
|
|
|
00:00:00.760 --> 00:00:07.240 |
|
hey everyone so I'd like to get
|
|
|
00:00:03.279 --> 00:00:09.320 |
|
started the first thing is that um I |
|
|
|
00:00:07.240 --> 00:00:11.160 |
|
heard from the AWS people that they
|
|
|
00:00:09.320 --> 00:00:14.440 |
|
started the |
|
|
|
00:00:11.160 --> 00:00:17.840 |
|
process of |
|
|
|
00:00:14.440 --> 00:00:19.400 |
|
getting things issued on the 26th which |
|
|
|
00:00:17.840 --> 00:00:21.480 |
|
is three days ago so you should be |
|
|
|
00:00:19.400 --> 00:00:23.560 |
|
getting it soon uh for reference I |
|
|
|
00:00:21.480 --> 00:00:25.599 |
|
submitted the form about seven days |
|
|
|
00:00:23.560 --> 00:00:28.359 |
|
before that so they're moving very |
|
|
|
00:00:25.599 --> 00:00:29.599 |
|
slowly but I think you should have AWS |
|
|
|
00:00:28.359 --> 00:00:31.920 |
|
credits by the end of the week if you |
|
|
|
00:00:29.599 --> 00:00:35.120 |
|
need them to run uh GPU machines or |
|
|
|
00:00:31.920 --> 00:00:37.960 |
|
stuff like that the moment you get AWS |
|
|
|
00:00:35.120 --> 00:00:39.960 |
|
credits or maybe even before you get AWS |
|
|
|
00:00:37.960 --> 00:00:43.320 |
|
credits I might suggest that you try to |
|
|
|
00:00:39.960 --> 00:00:46.760 |
|
start uh a GPU machine like a P2 machine |
|
|
|
00:00:43.320 --> 00:00:49.160 |
|
or something like that because um |
|
|
|
00:00:46.760 --> 00:00:51.760 |
|
sometimes you need to file for a limit |
|
|
|
00:00:49.160 --> 00:00:53.640 |
|
increase uh to get a P2 machine and that |
|
|
|
00:00:51.760 --> 00:00:55.879 |
|
also takes a little bit of time so I I |
|
|
|
00:00:53.640 --> 00:00:59.160 |
|
would suggest that you uh you take a |
|
|
|
00:00:55.879 --> 00:01:01.160 |
|
look at doing that um so you go to like |
|
|
|
00:00:59.160 --> 00:01:02.800 |
|
if you're using AWS if you're not using |
|
|
|
00:01:01.160 --> 00:01:05.119 |
|
AWS it doesn't matter but if you're |
|
|
|
00:01:02.800 --> 00:01:08.119 |
|
using AWS you can go to launch instance |
|
|
|
00:01:05.119 --> 00:01:11.520 |
|
and try to launch a p2.xlarge machine
|
|
|
00:01:08.119 --> 00:01:13.159 |
|
or something like that so uh but yeah |
|
|
|
00:01:11.520 --> 00:01:14.920 |
|
anyway hopefully that will be done soon |
|
|
|
00:01:13.159 --> 00:01:16.600 |
|
I'm sorry about the delay on this they |
|
|
|
00:01:14.920 --> 00:01:21.400 |
|
said it would take seven days and it's |
|
|
|
00:01:16.600 --> 00:01:24.280 |
|
taken almost twice that now so my
|
|
|
00:01:21.400 --> 00:01:26.439 |
|
apologies any other uh things before we |
|
|
|
00:01:24.280 --> 00:01:26.439 |
|
get |
|
|
|
00:01:28.759 --> 00:01:34.520 |
|
started um okay I I don't see any so |
|
|
|
00:01:31.920 --> 00:01:37.280 |
|
I'll go ahead with this um I have |
|
|
|
00:01:34.520 --> 00:01:39.240 |
|
slightly fewer slides today so I might |
|
|
|
00:01:37.280 --> 00:01:40.960 |
|
go a little bit off the slides and talk |
|
|
|
00:01:39.240 --> 00:01:44.759 |
|
about papers and stuff or we might |
|
|
|
00:01:40.960 --> 00:01:46.920 |
|
finish early uh either way so um but |
|
|
|
00:01:44.759 --> 00:01:48.439 |
|
what I would like to talk about is um |
|
|
|
00:01:46.920 --> 00:01:53.320 |
|
combining multiple |
|
|
|
00:01:48.439 --> 00:01:55.479 |
|
models and this is uh really important |
|
|
|
00:01:53.320 --> 00:01:57.520 |
|
and useful if you want to get like an |
|
|
|
00:01:55.479 --> 00:02:00.719 |
|
extra few points of |
|
|
|
00:01:57.520 --> 00:02:03.159 |
|
accuracy uh for anything basically |
|
|
|
00:02:00.719 --> 00:02:04.039 |
|
because it's a pretty reliable way to |
|
|
|
00:02:03.159 --> 00:02:06.960 |
|
get |
|
|
|
00:02:04.039 --> 00:02:08.879 |
|
improvements um and there's a a bunch of |
|
|
|
00:02:06.960 --> 00:02:11.239 |
|
different kind of related but different |
|
|
|
00:02:08.879 --> 00:02:13.680 |
|
topics that I'm going to talk about |
|
|
|
00:02:11.239 --> 00:02:15.519 |
|
today but anyway the the basic |
|
|
|
00:02:13.680 --> 00:02:19.239 |
|
background is that we have many models |
|
|
|
00:02:15.519 --> 00:02:22.920 |
|
uh that exist and the reason why we have |
|
|
|
00:02:19.239 --> 00:02:25.840 |
|
many models that exist is multifold
|
|
|
00:02:22.920 --> 00:02:28.160 |
|
number one we could have different model |
|
|
|
00:02:25.840 --> 00:02:30.080 |
|
architectures um and we could also have |
|
|
|
00:02:28.160 --> 00:02:34.440 |
|
different initializations of those model |
|
|
|
00:02:30.080 --> 00:02:37.879 |
|
architectures so um normally you know if |
|
|
|
00:02:34.440 --> 00:02:40.319 |
|
we do initialization we will
|
|
|
00:02:37.879 --> 00:02:42.360 |
|
initialize our model architecture like |
|
|
|
00:02:40.319 --> 00:02:44.680 |
|
let's say we initialize a Llama
|
|
|
00:02:42.360 --> 00:02:45.920 |
|
architecture uh we start out with random |
|
|
|
00:02:44.680 --> 00:02:49.319 |
|
7B |
|
|
|
00:02:45.920 --> 00:02:52.879 |
|
parameters and then we train and we get |
|
|
|
00:02:49.319 --> 00:02:53.840 |
|
Llama 7B from our pre-training or
|
|
|
00:02:52.879 --> 00:02:57.280 |
|
Llama
|
|
|
00:02:53.840 --> 00:02:58.599 |
|
2 7B we might initialize another model
|
|
|
00:02:57.280 --> 00:03:00.599 |
|
this could be you know the same |
|
|
|
00:02:58.599 --> 00:03:02.360 |
|
architecture or a different architecture
|
|
|
00:03:00.599 --> 00:03:04.840 |
|
train it on the same data or different |
|
|
|
00:03:02.360 --> 00:03:07.000 |
|
data and get something like Mistral
|
|
|
00:03:04.840 --> 00:03:08.599 |
|
Mistral 7B in this case actually maybe
|
|
|
00:03:07.000 --> 00:03:10.080 |
|
these are I should have indicated that |
|
|
|
00:03:08.599 --> 00:03:11.680 |
|
these are different architectures but |
|
|
|
00:03:10.080 --> 00:03:13.879 |
|
you know we get a different pre-trained
|
|
|
00:03:11.680 --> 00:03:15.599 |
|
model and of course uh we could also |
|
|
|
00:03:13.879 --> 00:03:18.640 |
|
make it bigger or smaller or whatever |
|
|
|
00:03:15.599 --> 00:03:21.720 |
|
else and then we get Llama 2 70B over
|
|
|
00:03:18.640 --> 00:03:23.519 |
|
here and then after we do that there's a |
|
|
|
00:03:21.720 --> 00:03:25.319 |
|
lot of fine tuning that goes on |
|
|
|
00:03:23.519 --> 00:03:29.360 |
|
according to different strategies so we |
|
|
|
00:03:25.319 --> 00:03:32.640 |
|
have you know Llama 2 7B Instruct
|
|
|
00:03:29.360 --> 00:03:37.760 |
|
Vicuna 7B version
|
|
|
00:03:32.640 --> 00:03:41.000 |
|
1.5 Mistral 7B Instruct Nous
|
|
|
00:03:37.760 --> 00:03:45.239 |
|
Hermes 2 Mistral 7B or Llama 2 70B
|
|
|
00:03:41.000 --> 00:03:47.239 |
|
Instruct so we have a variety of
|
|
|
00:03:45.239 --> 00:03:49.400 |
|
architectures a variety of random |
|
|
|
00:03:47.239 --> 00:03:51.480 |
|
initializations of those architectures a |
|
|
|
00:03:49.400 --> 00:03:54.799 |
|
variety of pre-trained models due to
|
|
|
00:03:51.480 --> 00:03:57.439 |
|
pre-training data or base models and |
|
|
|
00:03:54.799 --> 00:03:58.920 |
|
then a variety of fine-tuned models and
|
|
|
00:03:57.439 --> 00:04:01.120 |
|
so we have this kind of like branching |
|
|
|
00:03:58.920 --> 00:04:02.959 |
|
tree basically |
|
|
|
00:04:01.120 --> 00:04:04.319 |
|
um the reason why this is important is |
|
|
|
00:04:02.959 --> 00:04:06.680 |
|
because when we're combining multiple |
|
|
|
00:04:04.319 --> 00:04:08.400 |
|
models together some of the methods are |
|
|
|
00:04:06.680 --> 00:04:09.959 |
|
applicable to completely different |
|
|
|
00:04:08.400 --> 00:04:12.439 |
|
models some of the methods are only |
|
|
|
00:04:09.959 --> 00:04:15.000 |
|
applicable to models that share the same |
|
|
|
00:04:12.439 --> 00:04:16.720 |
|
architecture and some of them are only |
|
|
|
00:04:15.000 --> 00:04:19.199 |
|
applicable to models that share the same |
|
|
|
00:04:16.720 --> 00:04:20.959 |
|
initialization and training trajectory |
|
|
|
00:04:19.199 --> 00:04:23.680 |
|
and so I'll try to distinguish between |
|
|
|
00:04:20.959 --> 00:04:23.680 |
|
those as we go |
|
|
|
00:04:24.040 --> 00:04:27.919 |
|
forward |
|
|
|
00:04:25.560 --> 00:04:29.960 |
|
cool so the first thing I I'll talk |
|
|
|
00:04:27.919 --> 00:04:32.600 |
|
about is model ensembling and and |
|
|
|
00:04:29.960 --> 00:04:34.320 |
|
ensembling is kind of the a very general |
|
|
|
00:04:32.600 --> 00:04:37.600 |
|
technique that you can use in a lot of |
|
|
|
00:04:34.320 --> 00:04:39.360 |
|
different uh ways but it has its |
|
|
|
00:04:37.600 --> 00:04:43.039 |
|
disadvantages as |
|
|
|
00:04:39.360 --> 00:04:47.199 |
|
well so basically ensembling is combining
|
|
|
00:04:43.039 --> 00:04:50.320 |
|
the predictions from multiple models |
|
|
|
00:04:47.199 --> 00:04:52.400 |
|
and the easiest way to do this ignore |
|
|
|
00:04:50.320 --> 00:04:53.800 |
|
the LSTM here this is just any sequence
|
|
|
00:04:52.400 --> 00:04:56.320 |
|
modeling thing it's because the slides |
|
|
|
00:04:53.800 --> 00:05:00.120 |
|
are old but like let's say this is a
|
|
|
00:04:56.320 --> 00:05:03.360 |
|
Transformer it is calculating the |
|
|
|
00:05:00.120 --> 00:05:05.600 |
|
current decoder state and you make a
|
|
|
00:05:03.360 --> 00:05:07.600 |
|
prediction and this one is calculating
|
|
|
00:05:05.600 --> 00:05:09.199 |
|
its current decoder state and
|
|
|
00:05:07.600 --> 00:05:11.560 |
|
making a
|
|
|
00:05:09.199 --> 00:05:13.039 |
|
prediction and based on some combination |
|
|
|
00:05:11.560 --> 00:05:17.120 |
|
of the two predictions you decide what |
|
|
|
00:05:13.039 --> 00:05:17.120 |
|
you actually want to output at the next
|
|
|
00:05:17.680 --> 00:05:23.840 |
|
step so why would we want to do this um |
|
|
|
00:05:22.080 --> 00:05:25.880 |
|
does anyone have any ideas why we want |
|
|
|
00:05:23.840 --> 00:05:28.639 |
|
to use two models instead of using one |
|
|
|
00:05:25.880 --> 00:05:31.639 |
|
model or just using the best |
|
|
|
00:05:28.639 --> 00:05:31.639 |
|
model |
|
|
|
00:05:32.319 --> 00:05:36.440 |
|
or maybe in what situations we would |
|
|
|
00:05:34.520 --> 00:05:39.440 |
|
want to do |
|
|
|
00:05:36.440 --> 00:05:39.440 |
|
this |
|
|
|
00:05:45.400 --> 00:05:50.319 |
|
yeah and what what's the advantage of |
|
|
|
00:05:47.960 --> 00:05:50.319 |
|
doing |
|
|
|
00:05:51.600 --> 00:05:57.000 |
|
that yeah it reduces the bias kind of
|
|
|
00:05:54.800 --> 00:05:57.000 |
|
yeah |
|
|
|
00:05:58.639 --> 00:06:01.639 |
|
sure |
|
|
|
00:06:28.560 --> 00:06:31.560 |
|
m |
|
|
|
00:06:35.400 --> 00:06:40.360 |
|
yeah so um I I'll repeat all of these I |
|
|
|
00:06:38.599 --> 00:06:43.960 |
|
think all of these are correct so number |
|
|
|
00:06:40.360 --> 00:06:47.479 |
|
one it reduces the bias caused by
|
|
|
00:06:43.960 --> 00:06:49.199 |
|
a single model number two it's
|
|
|
00:06:47.479 --> 00:06:52.199 |
|
kind of like a Bayesian perspective which
|
|
|
00:06:49.199 --> 00:06:54.000 |
|
I'll talk about in a second and then |
|
|
|
00:06:52.199 --> 00:06:56.039 |
|
number three we have different models |
|
|
|
00:06:54.000 --> 00:06:58.520 |
|
and models are better at some things and |
|
|
|
00:06:56.039 --> 00:07:00.400 |
|
worse at other things |
|
|
|
00:06:58.520 --> 00:07:02.720 |
|
um |
|
|
|
00:07:00.400 --> 00:07:05.960 |
|
so talking about the better at some |
|
|
|
00:07:02.720 --> 00:07:08.319 |
|
things and worse at other things um the |
|
|
|
00:07:05.960 --> 00:07:10.960 |
|
basic idea behind ensembling is that the
|
|
|
00:07:08.319 --> 00:07:14.240 |
|
errors that models make tend to
|
|
|
00:07:10.960 --> 00:07:15.840 |
|
not be consistent they tend to not be
|
|
|
00:07:14.240 --> 00:07:21.520 |
|
as consistent as when the model is |
|
|
|
00:07:15.840 --> 00:07:24.800 |
|
getting it correct so we might have
|
|
|
00:07:21.520 --> 00:07:26.160 |
|
one model that says
|
|
|
00:07:24.800 --> 00:07:28.199 |
|
like let's say we just have really |
|
|
|
00:07:26.160 --> 00:07:30.680 |
|
really bad models this is kind of a |
|
|
|
00:07:28.199 --> 00:07:31.720 |
|
really um |
|
|
|
00:07:30.680 --> 00:07:35.960 |
|
obvious |
|
|
|
00:07:31.720 --> 00:07:38.440 |
|
example but we have like the dog
|
|
|
00:07:35.960 --> 00:07:42.639 |
|
barks and then |
|
|
|
00:07:38.440 --> 00:07:46.039 |
|
runs and then dives or something like
|
|
|
00:07:42.639 --> 00:07:49.000 |
|
that and we have one model that
|
|
|
00:07:46.039 --> 00:07:50.560 |
|
just had tons of stuff about diving in |
|
|
|
00:07:49.000 --> 00:07:52.120 |
|
its training data another model that had |
|
|
|
00:07:50.560 --> 00:07:54.240 |
|
tons of stuff about running in its |
|
|
|
00:07:52.120 --> 00:07:56.560 |
|
training data or marathons or
|
|
|
00:07:54.240 --> 00:08:00.039 |
|
something in its training data so we'll get
|
|
|
00:07:56.560 --> 00:08:01.800 |
|
model one and model one will give
|
|
|
00:08:00.039 --> 00:08:06.240 |
|
like a probability of like |
|
|
|
00:08:01.800 --> 00:08:08.280 |
|
0.3 maybe 0.4 and |
|
|
|
00:08:06.240 --> 00:08:10.360 |
|
0.05 and then we'll have another one |
|
|
|
00:08:08.280 --> 00:08:13.039 |
|
over here that's like |
|
|
|
00:08:10.360 --> 00:08:17.319 |
|
0.32 |
|
|
|
00:08:13.039 --> 00:08:19.759 |
|
sorry
|
|
|
00:08:17.319 --> 00:08:23.039 |
|
0.05 and |
|
|
|
00:08:19.759 --> 00:08:25.759 |
|
0.41 or something like this and so when |
|
|
|
00:08:23.039 --> 00:08:27.639 |
|
you average the two together you tend to |
|
|
|
00:08:25.759 --> 00:08:29.240 |
|
get the right answer more often because |
|
|
|
00:08:27.639 --> 00:08:31.720 |
|
kind of the mistakes that they make tend |
|
|
|
00:08:29.240 --> 00:08:33.479 |
|
to be less correlated than the probability
|
|
|
00:08:31.720 --> 00:08:35.880 |
|
of getting it correct and of course it's not
|
|
|
00:08:33.479 --> 00:08:38.200 |
|
perfect because ensembled models are not
|
|
|
00:08:35.880 --> 00:08:39.880 |
|
perfect but this is a general tendency
|
|
|
00:08:38.200 --> 00:08:42.240 |
|
that we see a lot in |
|
|
|
00:08:39.880 --> 00:08:45.959 |
|
models |
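
As a rough sketch of the averaging just described, here is a tiny PyTorch example; the three candidate words follow the slide's example, but the probability values are made up for illustration:

    import torch

    # Hypothetical next-token probabilities for "the dog ___" from two models.
    # The remaining probability mass sits on other words, omitted here.
    vocab = ["barks", "runs", "dives"]
    p_model_1 = torch.tensor([0.30, 0.40, 0.05])
    p_model_2 = torch.tensor([0.32, 0.05, 0.41])

    # Uniform ensemble: average the two distributions.
    p_ensemble = (p_model_1 + p_model_2) / 2
    print({w: round(float(p), 3) for w, p in zip(vocab, p_ensemble)})
    # A word that only one model happens to dislike is no longer buried,
    # because the two models' errors are not perfectly correlated.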
|
|
|
00:08:42.240 --> 00:08:47.720 |
|
and it's because of this it kind
|
|
|
00:08:45.959 --> 00:08:52.320 |
|
of smooths over the idiosyncrasies of
|
|
|
00:08:47.720 --> 00:08:54.800 |
|
the models you can even just ensemble
|
|
|
00:08:52.320 --> 00:08:57.519 |
|
models from different checkpoints and |
|
|
|
00:08:54.800 --> 00:08:58.959 |
|
that still gives you improvements and so |
|
|
|
00:08:57.519 --> 00:09:00.560 |
|
when you ensemble models from different
|
|
|
00:08:58.959 --> 00:09:02.600 |
|
checkpoints it's basically just what |
|
|
|
00:09:00.560 --> 00:09:05.920 |
|
data did they see most recently and that |
|
|
|
00:09:02.600 --> 00:09:07.839 |
|
also smooths over you know the fact
|
|
|
00:09:05.920 --> 00:09:10.600 |
|
that like this model happened to see |
|
|
|
00:09:07.839 --> 00:09:13.000 |
|
some data more recently and so it's less |
|
|
|
00:09:10.600 --> 00:09:16.120 |
|
you know biased towards doing
|
|
|
00:09:13.000 --> 00:09:18.440 |
|
that so this is a pretty effective
|
|
|
00:09:16.120 --> 00:09:20.079 |
|
method this is one of the few methods |
|
|
|
00:09:18.440 --> 00:09:21.959 |
|
that I know is going to improve my |
|
|
|
00:09:20.079 --> 00:09:25.120 |
|
accuracy almost every time like there's |
|
|
|
00:09:21.959 --> 00:09:27.880 |
|
a bunch of methods that you can apply um |
|
|
|
00:09:25.120 --> 00:09:29.680 |
|
and with ensembling it's very rare for me
|
|
|
00:09:27.880 --> 00:09:31.959 |
|
to ensemble two models together and not get
|
|
|
00:09:29.680 --> 00:09:34.839 |
|
a boost in accuracy in some way so it's |
|
|
|
00:09:31.959 --> 00:09:34.839 |
|
a good thing to do
|
|
|
00:09:35.600 --> 00:09:41.040 |
|
there's two main ways to combine
|
|
|
00:09:38.680 --> 00:09:42.560 |
|
models together and both of them are |
|
|
|
00:09:41.040 --> 00:09:45.800 |
|
useful in different |
|
|
|
00:09:42.560 --> 00:09:48.079 |
|
situations the first one is linear |
|
|
|
00:09:45.800 --> 00:09:49.600 |
|
interpolation and when you do linear |
|
|
|
00:09:48.079 --> 00:09:51.240 |
|
interpolation basically what you're |
|
|
|
00:09:49.600 --> 00:09:53.720 |
|
doing is you're taking the weighted |
|
|
|
00:09:51.240 --> 00:09:56.839 |
|
average of model |
|
|
|
00:09:53.720 --> 00:10:00.360 |
|
probabilities and the way that looks |
|
|
|
00:09:56.839 --> 00:10:04.040 |
|
mathematically is like this um this is a |
|
|
|
00:10:00.360 --> 00:10:05.680 |
|
probability according to the model M so |
|
|
|
00:10:04.040 --> 00:10:08.000 |
|
this is just you know the probability of |
|
|
|
00:10:05.680 --> 00:10:11.720 |
|
the next token according to model M this |
|
|
|
00:10:08.000 --> 00:10:13.200 |
|
is the probability of selecting model M |
|
|
|
00:10:11.720 --> 00:10:18.040 |
|
so you talked a little bit about the |
|
|
|
00:10:13.200 --> 00:10:19.920 |
|
Bayesian approach to this and this is
|
|
|
00:10:18.040 --> 00:10:23.519 |
|
basically saying what is the probability |
|
|
|
00:10:19.920 --> 00:10:26.519 |
|
that the parameters of model M |
|
|
|
00:10:23.519 --> 00:10:30.320 |
|
are the ones that we want to be choosing |
|
|
|
00:10:26.519 --> 00:10:32.680 |
|
at this particular time step and
|
|
|
00:10:30.320 --> 00:10:34.640 |
|
then we will calculate this and
|
|
|
00:10:32.680 --> 00:10:38.120 |
|
so then you take the sum over this and |
|
|
|
00:10:34.640 --> 00:10:38.120 |
|
this gives you the next |
|
|
|
00:10:39.560 --> 00:10:44.800 |
|
probability for the second term you can |
|
|
|
00:10:42.639 --> 00:10:47.120 |
|
do this in two ways the most common way |
|
|
|
00:10:44.800 --> 00:10:51.800 |
|
to do this is just to have this be a |
|
|
|
00:10:47.120 --> 00:10:55.279 |
|
constant so you basically
|
|
|
00:10:51.800 --> 00:10:55.279 |
|
define mixture
|
|
|
00:10:55.920 --> 00:11:01.240 |
|
weights uh which are like um |
|
|
|
00:11:08.480 --> 00:11:13.480 |
|
where the sum of the mixture weights is |
|
|
|
00:11:10.760 --> 00:11:16.160 |
|
equal to one and this is always between |
|
|
|
00:11:13.480 --> 00:11:18.639 |
|
zero and one and so if you do this then |
|
|
|
00:11:16.160 --> 00:11:21.000 |
|
this is just constant and you can uh |
|
|
|
00:11:18.639 --> 00:11:23.519 |
|
interpolate them together with constant weights but
|
|
|
00:11:21.000 --> 00:11:25.680 |
|
you can also actually explicitly model |
|
|
|
00:11:23.519 --> 00:11:27.240 |
|
this probability and say oh I'm |
|
|
|
00:11:25.680 --> 00:11:30.279 |
|
currently in a situation where I really |
|
|
|
00:11:27.240 --> 00:11:31.880 |
|
think model M will do a good job of uh |
|
|
|
00:11:30.279 --> 00:11:33.440 |
|
you know predicting the probability so I |
|
|
|
00:11:31.880 --> 00:11:36.160 |
|
want to put most of my probability on |
|
|
|
00:11:33.440 --> 00:11:39.000 |
|
model M so you can actually learn this |
|
|
|
00:11:36.160 --> 00:11:40.079 |
|
dynamically as well um and so if you |
|
|
|
00:11:39.000 --> 00:11:44.360 |
|
have |
|
|
|
00:11:40.079 --> 00:11:45.920 |
|
this actually is rather practical
|
|
|
00:11:44.360 --> 00:11:47.120 |
|
and easy to do because what you can do |
|
|
|
00:11:45.920 --> 00:11:48.920 |
|
is you can just calculate the |
|
|
|
00:11:47.120 --> 00:11:51.399 |
|
probability according to each model at |
|
|
|
00:11:48.920 --> 00:11:53.120 |
|
each time step and train this model |
|
|
|
00:11:51.399 --> 00:11:55.519 |
|
separately without loading these models |
|
|
|
00:11:53.120 --> 00:11:59.399 |
|
into memory at the time of training
|
|
|
00:11:55.519 --> 00:12:00.959 |
|
those models so yeah this is something
|
|
|
00:11:59.399 --> 00:12:04.800 |
|
you can do as |
|
|
|
00:12:00.959 --> 00:12:04.800 |
|
well any questions about |
|
|
|
00:12:06.680 --> 00:12:11.920 |
|
this |
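
A minimal sketch of that linear interpolation, i.e. P(y) = sum over m of P(m) * P_m(y), with constant mixture weights; the function and variable names are placeholders, not anything from the slides:

    import torch
    import torch.nn.functional as F

    def linear_interpolate(prob_dists, mixture_logits):
        # prob_dists: list of [vocab_size] probability tensors, one per model
        # mixture_logits: unconstrained [num_models] tensor; the softmax makes
        # the mixture weights positive and sum to one, as required
        weights = F.softmax(mixture_logits, dim=-1)            # P(m)
        stacked = torch.stack(prob_dists, dim=0)               # [num_models, vocab_size]
        return (weights.unsqueeze(-1) * stacked).sum(dim=0)    # sum_m P(m) * P_m(y)

    # Two toy next-token distributions over a 3-word vocabulary (made-up numbers).
    p1 = torch.tensor([0.30, 0.40, 0.30])
    p2 = torch.tensor([0.32, 0.05, 0.63])
    p_mix = linear_interpolate([p1, p2], torch.tensor([0.0, 0.0]))  # equal weights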
|
|
|
00:12:08.519 --> 00:12:14.000 |
|
Okay cool so the other option is log |
|
|
|
00:12:11.920 --> 00:12:15.800 |
|
linear interpolation and so in linear
|
|
|
00:12:14.000 --> 00:12:18.680 |
|
interpolation you're taking a linear |
|
|
|
00:12:15.800 --> 00:12:22.040 |
|
combination of the probabilities of each |
|
|
|
00:12:18.680 --> 00:12:24.959 |
|
model in log linear interpolation you're
|
|
|
00:12:22.040 --> 00:12:26.079 |
|
combining together the log probabilities |
|
|
|
00:12:24.959 --> 00:12:29.519 |
|
of each |
|
|
|
00:12:26.079 --> 00:12:32.639 |
|
model and then renormalizing so that
|
|
|
00:12:29.519 --> 00:12:34.920 |
|
you get an actual
|
|
|
00:12:32.639 --> 00:12:37.760 |
|
probabilistic output so basically what |
|
|
|
00:12:34.920 --> 00:12:40.720 |
|
you do is you have this uh interpolation |
|
|
|
00:12:37.760 --> 00:12:44.040 |
|
coefficient like I had before but you're |
|
|
|
00:12:40.720 --> 00:12:44.040 |
|
combining together the log |
|
|
|
00:12:44.639 --> 00:12:49.639 |
|
probabilities and so here we need to |
|
|
|
00:12:47.680 --> 00:12:51.320 |
|
take the
|
|
|
00:12:49.639 --> 00:12:53.760 |
|
softmax
|
|
|
00:12:51.320 --> 00:12:55.760 |
|
um thinking back here I didn't take the |
|
|
|
00:12:53.760 --> 00:12:58.120 |
|
softmax does anyone have an idea why I |
|
|
|
00:12:55.760 --> 00:13:02.000 |
|
didn't take the
|
|
|
00:12:58.120 --> 00:13:02.000 |
|
softmax or why I didn't need
|
|
|
00:13:08.160 --> 00:13:12.199 |
|
to or why I need to
|
|
|
00:13:21.600 --> 00:13:27.680 |
|
here yeah |
|
|
|
00:13:23.639 --> 00:13:30.440 |
|
so this probability is guaranteed to be between zero and
|
|
|
00:13:27.680 --> 00:13:32.240 |
|
one and add up to one this probability |
|
|
|
00:13:30.440 --> 00:13:33.760 |
|
is also guaranteed to be between zero and one
|
|
|
00:13:32.240 --> 00:13:35.680 |
|
and add up to one and then when you |
|
|
|
00:13:33.760 --> 00:13:37.120 |
|
multiply those together uh you can do a |
|
|
|
00:13:35.680 --> 00:13:39.160 |
|
little bit of math and demonstrate that |
|
|
|
00:13:37.120 --> 00:13:41.440 |
|
the resulting thing will be between zero |
|
|
|
00:13:39.160 --> 00:13:42.839 |
|
and one and add up to one that's not the |
|
|
|
00:13:41.440 --> 00:13:44.399 |
|
case anymore when we start doing things |
|
|
|
00:13:42.839 --> 00:13:47.639 |
|
in log space because it's just not a |
|
|
|
00:13:44.399 --> 00:13:50.160 |
|
linear function anyway so um you need to |
|
|
|
00:13:47.639 --> 00:13:51.959 |
|
renormalize like this luckily this is |
|
|
|
00:13:50.160 --> 00:13:54.920 |
|
super easy like anything else you do in |
|
|
|
00:13:51.959 --> 00:13:56.959 |
|
PyTorch you just add things together
|
|
|
00:13:54.920 --> 00:13:59.320 |
|
and take a softmax and you'll
|
|
|
00:13:56.959 --> 00:14:02.519 |
|
get an output but you do need to do that
|
|
|
00:13:59.320 --> 00:14:05.279 |
|
otherwise you're going to get something |
|
|
|
00:14:02.519 --> 00:14:07.279 |
|
weird um the interpolation coefficient |
|
|
|
00:14:05.279 --> 00:14:09.639 |
|
here also can be set to a constant so |
|
|
|
00:14:07.279 --> 00:14:12.759 |
|
you could learn it kind of
|
|
|
00:14:09.639 --> 00:14:15.320 |
|
dynamically or it could be set separately
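
A minimal sketch of the log-linear version, matching the "add the log probabilities and take a softmax" recipe just described; the coefficients here are constants, and all names are placeholders:

    import torch
    import torch.nn.functional as F

    def log_linear_interpolate(log_prob_dists, coeffs):
        # log_prob_dists: list of [vocab_size] log-probability tensors, one per model
        # coeffs: [num_models] interpolation coefficients (constant or learned)
        stacked = torch.stack(log_prob_dists, dim=0)             # [num_models, vocab_size]
        combined = (coeffs.unsqueeze(-1) * stacked).sum(dim=0)   # weighted sum of log-probs
        return F.softmax(combined, dim=-1)                       # renormalize into a distribution

    log_p1 = F.log_softmax(torch.randn(5), dim=-1)  # stand-ins for two models' outputs
    log_p2 = F.log_softmax(torch.randn(5), dim=-1)
    p = log_linear_interpolate([log_p1, log_p2], torch.tensor([0.5, 0.5]))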
|
|
|
00:14:12.759 --> 00:14:17.720 |
|
cool and these actually have
|
|
|
00:14:15.320 --> 00:14:19.639 |
|
different meaning oh sorry go ahead you |
|
|
|
00:14:17.720 --> 00:14:23.880 |
|
T on |
|
|
|
00:14:19.639 --> 00:14:26.759 |
|
the Yeah Yeah so basically the |
|
|
|
00:14:23.880 --> 00:14:29.880 |
|
way you would do this is you
|
|
|
00:14:26.759 --> 00:14:32.399 |
|
would have either |
|
|
|
00:14:29.880 --> 00:14:33.920 |
|
the same model you would either take
|
|
|
00:14:32.399 --> 00:14:35.279 |
|
representations from one of these |
|
|
|
00:14:33.920 --> 00:14:37.480 |
|
language models or you would take |
|
|
|
00:14:35.279 --> 00:14:38.440 |
|
representations from another model and |
|
|
|
00:14:37.480 --> 00:14:41.639 |
|
you would |
|
|
|
00:14:38.440 --> 00:14:43.959 |
|
just have a model that |
|
|
|
00:14:41.639 --> 00:14:46.480 |
|
predicts uh what this interpolation |
|
|
|
00:14:43.959 --> 00:14:48.279 |
|
coefficient would be and the |
|
|
|
00:14:46.480 --> 00:14:49.720 |
|
optimization objective for that |
|
|
|
00:14:48.279 --> 00:14:52.759 |
|
interpolation coefficient is just |
|
|
|
00:14:49.720 --> 00:14:56.120 |
|
maximizing the probability |
|
|
|
00:14:52.759 --> 00:14:59.600 |
|
of whatever so this could also be good
|
|
|
00:14:56.120 --> 00:15:01.839 |
|
because this interpolation coefficient |
|
|
|
00:14:59.600 --> 00:15:07.160 |
|
only like let's say you're interpolating |
|
|
|
00:15:01.839 --> 00:15:09.399 |
|
two models together it has one degree of |
|
|
|
00:15:07.160 --> 00:15:13.320 |
|
freedom at each time step right because
|
|
|
00:15:09.399 --> 00:15:15.320 |
|
you're only predicting a probability um |
|
|
|
00:15:13.320 --> 00:15:17.839 |
|
if you have five models
|
|
|
00:15:15.320 --> 00:15:20.240 |
|
you basically would be doing
|
|
|
00:15:17.839 --> 00:15:24.199 |
|
a softmax over
|
|
|
00:15:20.240 --> 00:15:25.519 |
|
five outputs and that's a lot fewer
|
|
|
00:15:24.199 --> 00:15:27.600 |
|
that's a lot fewer than the whole |
|
|
|
00:15:25.519 --> 00:15:29.880 |
|
vocabulary right and so
|
|
|
00:15:27.600 --> 00:15:31.639 |
|
learning a good interpolation
|
|
|
00:15:29.880 --> 00:15:34.160 |
|
coefficient is relatively easy compared |
|
|
|
00:15:31.639 --> 00:15:35.800 |
|
to learning what word to predict next |
|
|
|
00:15:34.160 --> 00:15:36.880 |
|
and because of this you could actually |
|
|
|
00:15:35.800 --> 00:15:39.759 |
|
tune |
|
|
|
00:15:36.880 --> 00:15:42.880 |
|
this sorry you could tune this
|
|
|
00:15:39.759 --> 00:15:44.600 |
|
probability on a very small data set and |
|
|
|
00:15:42.880 --> 00:15:46.959 |
|
you could even have it be context |
|
|
|
00:15:44.600 --> 00:15:48.480 |
|
independent so you could just be you |
|
|
|
00:15:46.959 --> 00:15:51.399 |
|
know |
|
|
|
00:15:48.480 --> 00:15:55.880 |
|
calculating literally five
|
|
|
00:15:51.399 --> 00:15:57.399 |
|
parameters here um and so because of |
|
|
|
00:15:55.880 --> 00:16:00.319 |
|
that like let's say you have a special |
|
|
|
00:15:57.399 --> 00:16:02.639 |
|
domain or a special task where you have |
|
|
|
00:16:00.319 --> 00:16:04.920 |
|
like 50 training examples or something |
|
|
|
00:16:02.639 --> 00:16:07.399 |
|
like that or you know 100 training |
|
|
|
00:16:04.920 --> 00:16:08.959 |
|
examples you can learn this |
|
|
|
00:16:07.399 --> 00:16:12.480 |
|
interpolation coefficient very |
|
|
|
00:16:08.959 --> 00:16:15.880 |
|
effectively on just a very
|
|
|
00:16:12.480 --> 00:16:18.120 |
|
small number of training examples um but |
|
|
|
00:16:15.880 --> 00:16:20.000 |
|
like it could be very useful because |
|
|
|
00:16:18.120 --> 00:16:23.920 |
|
like let's say you have a special domain |
|
|
|
00:16:20.000 --> 00:16:25.639 |
|
medical language model that's 1.3 |
|
|
|
00:16:23.920 --> 00:16:27.759 |
|
billion parameters that you trained |
|
|
|
00:16:25.639 --> 00:16:29.639 |
|
yourself and then you have a 70 billion |
|
|
|
00:16:27.759 --> 00:16:31.079 |
|
parameter language model |
|
|
|
00:16:29.639 --> 00:16:33.680 |
|
that's like really good at modeling |
|
|
|
00:16:31.079 --> 00:16:35.399 |
|
general English so then you could
|
|
|
00:16:33.680 --> 00:16:39.120 |
|
learn the interpolation coefficient |
|
|
|
00:16:35.399 --> 00:16:40.600 |
|
between those two such that um the large |
|
|
|
00:16:39.120 --> 00:16:41.800 |
|
general purpose language model will be |
|
|
|
00:16:40.600 --> 00:16:43.959 |
|
generating all of the kind of |
|
|
|
00:16:41.800 --> 00:16:46.360 |
|
grammatical stuff but whenever you |
|
|
|
00:16:43.959 --> 00:16:48.480 |
|
switch over to modeling technical terms |
|
|
|
00:16:46.360 --> 00:16:50.040 |
|
from the medical domain then it learns |
|
|
|
00:16:48.480 --> 00:16:52.480 |
|
to upweight the medical language model |
|
|
|
00:16:50.040 --> 00:16:54.199 |
|
or something so this can be quite uh |
|
|
|
00:16:52.480 --> 00:16:57.000 |
|
this can be quite effective if you have |
|
|
|
00:16:54.199 --> 00:17:00.839 |
|
a limited amount of data that you want |
|
|
|
00:16:57.000 --> 00:17:00.839 |
|
to use for tuning this
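
A rough sketch of what tuning that coefficient on a small in-domain set might look like, assuming the probability each model assigns to the gold next token has already been computed offline, so neither model needs to be in memory while tuning; the shapes and names here are hypothetical:

    import torch
    import torch.nn.functional as F

    # Precomputed probabilities of the gold next token at each position,
    # one column per model (e.g. a general-purpose LM and a medical LM).
    # Shape: [num_tokens, num_models]; the random values below are placeholders.
    gold_token_probs = torch.rand(1000, 2)

    mixture_logits = torch.zeros(2, requires_grad=True)   # context-independent weights
    opt = torch.optim.Adam([mixture_logits], lr=0.1)

    for _ in range(200):
        weights = F.softmax(mixture_logits, dim=-1)
        p_interp = (gold_token_probs * weights).sum(dim=-1)   # interpolated P(gold token)
        loss = -torch.log(p_interp + 1e-9).mean()             # maximize likelihood
        opt.zero_grad()
        loss.backward()
        opt.step()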
|
|
|
00:17:01.240 --> 00:17:05.600 |
|
um any other questions about that |
|
|
|
00:17:09.079 --> 00:17:14.880 |
|
yeah yeah I'm just gonna talk about that |
|
|
|
00:17:11.760 --> 00:17:17.640 |
|
next so linear versus log linear you can |
|
|
|
00:17:14.880 --> 00:17:20.880 |
|
actually think of this in terms of logic and
|
|
|
00:17:17.640 --> 00:17:23.640 |
|
what I mean by that is um linear is kind |
|
|
|
00:17:20.880 --> 00:17:26.640 |
|
of like a logical OR it tries to come up
|
|
|
00:17:23.640 --> 00:17:29.600 |
|
with examples where either one of the |
|
|
|
00:17:26.640 --> 00:17:31.679 |
|
two assigns a high probability so we |
|
|
|
00:17:29.600 --> 00:17:36.200 |
|
have the example of like bark |
|
|
|
00:17:31.679 --> 00:17:36.200 |
|
run and
|
|
|
00:17:55.640 --> 00:18:03.840 |
|
dive so if we take the average of these
|
|
|
00:18:00.360 --> 00:18:03.840 |
|
two in linear |
|
|
|
00:18:04.120 --> 00:18:10.240 |
|
space this would be |
|
|
|
00:18:07.159 --> 00:18:13.679 |
|
0.2 this would be |
|
|
|
00:18:10.240 --> 00:18:17.240 |
|
0.26 and this would |
|
|
|
00:18:13.679 --> 00:18:17.240 |
|
be um |
|
|
|
00:18:17.400 --> 00:18:26.280 |
|
0.21 and so a linear combination
|
|
|
00:18:21.480 --> 00:18:28.600 |
|
between the two will find run to be the |
|
|
|
00:18:26.280 --> 00:18:30.600 |
|
highest scoring one because on the left |
|
|
|
00:18:28.600 --> 00:18:32.280 |
|
side we have one model that really likes |
|
|
|
00:18:30.600 --> 00:18:33.159 |
|
this output and we have another model |
|
|
|
00:18:32.280 --> 00:18:35.159 |
|
that |
|
|
|
00:18:33.159 --> 00:18:39.280 |
|
doesn't |
|
|
|
00:18:35.159 --> 00:18:42.159 |
|
this can be good at using
|
|
|
00:18:39.280 --> 00:18:44.440 |
|
models that capture uh different traits |
|
|
|
00:18:42.159 --> 00:18:47.679 |
|
or it can also be useful if like for |
|
|
|
00:18:44.440 --> 00:18:49.840 |
|
example you have a small
|
|
|
00:18:47.679 --> 00:18:52.320 |
|
model that really
|
|
|
00:18:49.840 --> 00:18:53.840 |
|
captures like very specific vocabulary |
|
|
|
00:18:52.320 --> 00:18:55.520 |
|
and you want to upweight that specific
|
|
|
00:18:53.840 --> 00:18:56.799 |
|
vocabulary that gets a really low |
|
|
|
00:18:55.520 --> 00:18:57.720 |
|
probability according to a general |
|
|
|
00:18:56.799 --> 00:19:01.360 |
|
purpose |
|
|
|
00:18:57.720 --> 00:19:03.200 |
|
model um this is also necessary when any |
|
|
|
00:19:01.360 --> 00:19:04.520 |
|
model can assign zero probabilities so |
|
|
|
00:19:03.200 --> 00:19:06.720 |
|
if you have like an example of |
|
|
|
00:19:04.520 --> 00:19:10.080 |
|
vocabulary that isn't included in the |
|
|
|
00:19:06.720 --> 00:19:11.159 |
|
like vocabulary of another model or
|
|
|
00:19:10.080 --> 00:19:14.280 |
|
you have models with different |
|
|
|
00:19:11.159 --> 00:19:17.200 |
|
vocabularies it's necessary to do this |
|
|
|
00:19:14.280 --> 00:19:19.200 |
|
log linear is more like a logical AND
|
|
|
00:19:17.200 --> 00:19:22.240 |
|
so the interpolated model only likes |
|
|
|
00:19:19.200 --> 00:19:23.799 |
|
choices where all the models agree and |
|
|
|
00:19:22.240 --> 00:19:25.640 |
|
this is particularly good when you want |
|
|
|
00:19:23.799 --> 00:19:27.440 |
|
to restrict possible answers like you |
|
|
|
00:19:25.640 --> 00:19:29.280 |
|
want to have one model be able to say no |
|
|
|
00:19:27.440 --> 00:19:32.080 |
|
I really don't like this so never output |
|
|
|
00:19:29.280 --> 00:19:34.200 |
|
it so um for example if you wanted to |
|
|
|
00:19:32.080 --> 00:19:37.360 |
|
train a model that you knew was very |
|
|
|
00:19:34.200 --> 00:19:38.919 |
|
averse to toxic language and prevent
|
|
|
00:19:37.360 --> 00:19:42.600 |
|
the model from outputting toxic language |
|
|
|
00:19:38.919 --> 00:19:45.200 |
|
you could use log linear models so I
|
|
|
00:19:42.600 --> 00:19:47.559 |
|
can't unfortunately uh calculate logs |
|
|
|
00:19:45.200 --> 00:19:50.080 |
|
and exponents in my head well enough to |
|
|
|
00:19:47.559 --> 00:19:51.600 |
|
to decide this but I'm sure that a
|
|
|
00:19:50.080 --> 00:19:53.840 |
|
linear |
|
|
|
00:19:51.600 --> 00:19:56.840 |
|
model the linear model would pick the |
|
|
|
00:19:53.840 --> 00:19:59.600 |
|
first one here and the log linear |
|
|
|
00:19:56.840 --> 00:20:01.679 |
|
model would pick the second one because |
|
|
|
00:19:59.600 --> 00:20:05.640 |
|
the second one has a very low score here |
|
|
|
00:20:01.679 --> 00:20:08.640 |
|
so that would be downweighted
|
|
|
00:20:05.640 --> 00:20:08.640 |
|
by |
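
Since the logs and exponents are hard to do in one's head, here is a tiny sketch that makes the OR-versus-AND contrast concrete; the two candidate outputs and their probabilities are invented for illustration:

    import torch
    import torch.nn.functional as F

    # Candidate A is loved by model 1 but nearly vetoed by model 2;
    # candidate B is merely acceptable to both. The remaining probability
    # mass is on other words, omitted here.
    p1 = torch.tensor([0.60, 0.25])
    p2 = torch.tensor([0.02, 0.25])

    linear = (p1 + p2) / 2                                        # OR-like behaviour
    log_linear = F.softmax(0.5 * (p1.log() + p2.log()), dim=-1)   # AND-like behaviour

    print(linear.argmax().item())      # 0: one enthusiastic model is enough
    print(log_linear.argmax().item())  # 1: the near-zero score acts like a veto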
|
|
|
00:20:16.919 --> 00:20:20.640 |
|
yeah yeah so |
|
|
|
00:20:25.840 --> 00:20:31.000 |
|
yeah and if there's any chance of
|
|
|
00:20:28.760 --> 00:20:34.159 |
|
assigning zero probability according to |
|
|
|
00:20:31.000 --> 00:20:36.520 |
|
a language model then really you can't |
|
|
|
00:20:34.159 --> 00:20:38.200 |
|
even test that language model on
|
|
|
00:20:36.520 --> 00:20:42.120 |
|
that test set |
|
|
|
00:20:38.200 --> 00:20:43.640 |
|
um so the issue becomes like let's say |
|
|
|
00:20:42.120 --> 00:20:45.559 |
|
you have two models with different |
|
|
|
00:20:43.640 --> 00:20:47.080 |
|
vocabularies if you have two models with
|
|
|
00:20:45.559 --> 00:20:49.080 |
|
different vocabularies it becomes very
|
|
|
00:20:47.080 --> 00:20:50.559 |
|
tricky how to reconcile those two but |
|
|
|
00:20:49.080 --> 00:20:53.440 |
|
you could do linear interpolation |
|
|
|
00:20:50.559 --> 00:20:55.200 |
|
between them like match the
|
|
|
00:20:53.440 --> 00:20:57.559 |
|
output vocabularies that they do have |
|
|
|
00:20:55.200 --> 00:21:00.120 |
|
and then just not worry about the fact |
|
|
|
00:20:57.559 --> 00:21:02.760 |
|
that the vocabularies are disjoint
|
|
|
00:21:00.120 --> 00:21:05.039 |
|
and because one will assign a zero |
|
|
|
00:21:02.760 --> 00:21:07.280 |
|
probability to those vocabulary items |
|
|
|
00:21:05.039 --> 00:21:12.240 |
|
but the other one is fine so you can |
|
|
|
00:21:07.280 --> 00:21:14.919 |
|
just do that but in general it
|
|
|
00:21:12.240 --> 00:21:16.480 |
|
will be very tricky to try to get two |
|
|
|
00:21:14.919 --> 00:21:18.559 |
|
models with different vocabularies to |
|
|
|
00:21:16.480 --> 00:21:21.480 |
|
play together nicely so I would
|
|
|
00:21:18.559 --> 00:21:22.919 |
|
suggest thinking
|
|
|
00:21:21.480 --> 00:21:25.600 |
|
seriously about whether you need to do |
|
|
|
00:21:22.919 --> 00:21:31.360 |
|
that or not before you start out but |
|
|
|
00:21:25.600 --> 00:21:31.360 |
|
yeah yes are there any
|
|
|
00:21:35.559 --> 00:21:40.960 |
|
other |
|
|
|
00:21:38.039 --> 00:21:43.360 |
|
um you could definitely so the question |
|
|
|
00:21:40.960 --> 00:21:45.000 |
|
is are there any other types of |
|
|
|
00:21:43.360 --> 00:21:47.760 |
|
interpolation that have other types of |
|
|
|
00:21:45.000 --> 00:21:50.159 |
|
logical components like XOR or NOR
|
|
|
00:21:47.760 --> 00:21:52.840 |
|
you could definitely come up with one uh |
|
|
|
00:21:50.159 --> 00:21:55.440 |
|
I am struggling a little bit to think
|
|
|
00:21:52.840 --> 00:21:57.520 |
|
about when you would want to do that but |
|
|
|
00:21:55.440 --> 00:22:02.840 |
|
I'm sure |
|
|
|
00:21:57.520 --> 00:22:05.840 |
|
you could is the inherent assumption that the
|
|
|
00:22:02.840 --> 00:22:05.840 |
|
errors are not correlated
|
|
|
00:22:09.120 --> 00:22:14.480 |
|
so what if the errors are not
|
|
|
00:22:12.640 --> 00:22:15.919 |
|
what if the errors are correlated so |
|
|
|
00:22:14.480 --> 00:22:18.200 |
|
think about what happens if the errors |
|
|
|
00:22:15.919 --> 00:22:20.000 |
|
are perfectly correlated um which is |
|
|
|
00:22:18.200 --> 00:22:25.840 |
|
when you're using the same model in two |
|
|
|
00:22:20.000 --> 00:22:25.840 |
|
parts of the ensemble so you
|
|
|
00:22:27.000 --> 00:22:30.520 |
|
literally these
|
|
|
00:22:29.159 --> 00:22:32.679 |
|
model one and model two are the same |
|
|
|
00:22:30.520 --> 00:22:36.720 |
|
model if that's the case nothing happens |
|
|
|
00:22:32.679 --> 00:22:39.200 |
|
it doesn't get worse um and |
|
|
|
00:22:36.720 --> 00:22:43.039 |
|
so of course because this is machine |
|
|
|
00:22:39.200 --> 00:22:45.080 |
|
learning there's no guarantee like you |
|
|
|
00:22:43.039 --> 00:22:47.559 |
|
know unless we make some assumptions |
|
|
|
00:22:45.080 --> 00:22:49.200 |
|
about the relationship between like the |
|
|
|
00:22:47.559 --> 00:22:52.279 |
|
training set and the test set or the |
|
|
|
00:22:49.200 --> 00:22:53.760 |
|
model's errors in the test set you can
|
|
|
00:22:52.279 --> 00:22:57.039 |
|
always do something that will make your |
|
|
|
00:22:53.760 --> 00:22:59.240 |
|
accuracy worse um like let's say we flip |
|
|
|
00:22:57.039 --> 00:23:00.360 |
|
the labels of a binary classifier
|
|
|
00:22:59.240 --> 00:23:03.120 |
|
no matter what you do you're going to |
|
|
|
00:23:00.360 --> 00:23:06.320 |
|
make your accuracy worse but |
|
|
|
00:23:03.120 --> 00:23:09.000 |
|
um no matter what the normal thing you |
|
|
|
00:23:06.320 --> 00:23:10.640 |
|
would do if it
|
|
|
00:23:09.000 --> 00:23:12.480 |
|
would normally improve accuracy it would
|
|
|
00:23:10.640 --> 00:23:14.760 |
|
decrease your accuracy but under
|
|
|
00:23:12.480 --> 00:23:16.080 |
|
pretty reasonable assumptions it's |
|
|
|
00:23:14.760 --> 00:23:20.400 |
|
mostly going to be the case that errors |
|
|
|
00:23:16.080 --> 00:23:22.320 |
|
are decorrelated to some extent
|
|
|
00:23:20.400 --> 00:23:25.559 |
|
so |
|
|
|
00:23:22.320 --> 00:23:30.440 |
|
yeah and because of that ensembling
|
|
|
00:23:25.559 --> 00:23:30.440 |
|
usually helps yeah |
|
|
|
00:23:36.120 --> 00:23:42.019 |
|
um about which one |
|
|
|
00:23:38.760 --> 00:23:42.019 |
|
[Music] |
|
|
|
00:23:53.559 --> 00:24:01.240 |
|
which let me make sure I didn't mess it |
|
|
|
00:23:55.640 --> 00:24:01.240 |
|
up on the slides okay so in my
|
|
|
00:24:06.960 --> 00:24:13.120 |
|
example yeah yeah |
|
|
|
00:24:09.640 --> 00:24:13.120 |
|
yeah sorry about |
|
|
|
00:24:14.360 --> 00:24:19.320 |
|
that because this is where the
|
|
|
00:24:17.039 --> 00:24:21.840 |
|
average is higher and then this is |
|
|
|
00:24:19.320 --> 00:24:27.200 |
|
the one thank
|
|
|
00:24:21.840 --> 00:24:29.039 |
|
you cool any other
|
|
|
00:24:27.200 --> 00:24:31.840 |
|
questions okay |
|
|
|
00:24:29.039 --> 00:24:34.440 |
|
okay so |
|
|
|
00:24:31.840 --> 00:24:36.320 |
|
um another thing I should point out is |
|
|
|
00:24:34.440 --> 00:24:39.600 |
|
that we don't |
|
|
|
00:24:36.320 --> 00:24:41.840 |
|
necessarily need to use models only as |
|
|
|
00:24:39.600 --> 00:24:44.080 |
|
positive evidence so if you're using log |
|
|
|
00:24:41.840 --> 00:24:46.039 |
|
linear interpolation actually your |
|
|
|
00:24:44.080 --> 00:24:49.919 |
|
interpolation coefficients do not need |
|
|
|
00:24:46.039 --> 00:24:52.520 |
|
to be positive they can also be negative |
|
|
|
00:24:49.919 --> 00:24:55.360 |
|
and you can have uh things where you |
|
|
|
00:24:52.520 --> 00:24:57.840 |
|
penalize the probabilities given by a |
|
|
|
00:24:55.360 --> 00:24:59.679 |
|
particular model and this has actually |
|
|
|
00:24:57.840 --> 00:25:01.520 |
|
been used for a long time it was |
|
|
|
00:24:59.679 --> 00:25:04.440 |
|
actually used in machine translation |
|
|
|
00:25:01.520 --> 00:25:08.840 |
|
since like uh 2005 or something like |
|
|
|
00:25:04.440 --> 00:25:11.480 |
|
this but the basic idea is um that you |
|
|
|
00:25:08.840 --> 00:25:13.600 |
|
have some models that serve as negative |
|
|
|
00:25:11.480 --> 00:25:15.559 |
|
evidence so you have kind of a core |
|
|
|
00:25:13.600 --> 00:25:17.880 |
|
model this might be your really strong |
|
|
|
00:25:15.559 --> 00:25:21.520 |
|
general purpose language model you have |
|
|
|
00:25:17.880 --> 00:25:23.080 |
|
a positive uh model which is the model |
|
|
|
00:25:21.520 --> 00:25:25.240 |
|
that you want to kind of boost up and |
|
|
|
00:25:23.080 --> 00:25:27.320 |
|
improve and a negative model which you |
|
|
|
00:25:25.240 --> 00:25:31.159 |
|
want to |
|
|
|
00:25:27.320 --> 00:25:33.679 |
|
decrease and um one example of this is |
|
|
|
00:25:31.159 --> 00:25:36.760 |
|
in uh a paper that we did in |
|
|
|
00:25:33.679 --> 00:25:40.159 |
|
2019 um the core was a machine |
|
|
|
00:25:36.760 --> 00:25:42.960 |
|
translation model and the negative model |
|
|
|
00:25:40.159 --> 00:25:44.880 |
|
is an out-of-domain language model and
|
|
|
00:25:42.960 --> 00:25:46.960 |
|
the positive model is an in-domain
|
|
|
00:25:44.880 --> 00:25:51.039 |
|
language model and so the idea behind |
|
|
|
00:25:46.960 --> 00:25:53.880 |
|
this is a machine translation model um |
|
|
|
00:25:51.039 --> 00:25:55.600 |
|
you have to train it on machine |
|
|
|
00:25:53.880 --> 00:25:58.320 |
|
translation data and machine translation |
|
|
|
00:25:55.600 --> 00:26:00.640 |
|
data is not very easy to get for |
|
|
|
00:25:58.320 --> 00:26:02.360 |
|
particular domains for example um you |
|
|
|
00:26:00.640 --> 00:26:03.880 |
|
might only have machine translation data |
|
|
|
00:26:02.360 --> 00:26:06.919 |
|
in the news domain and you actually want |
|
|
|
00:26:03.880 --> 00:26:09.240 |
|
to be uh doing uh translation in the |
|
|
|
00:26:06.919 --> 00:26:12.720 |
|
medical domain or something so what you |
|
|
|
00:26:09.240 --> 00:26:14.640 |
|
do is you have your positive model here |
|
|
|
00:26:12.720 --> 00:26:17.600 |
|
this is a machine
|
|
|
00:26:14.640 --> 00:26:19.919 |
|
translation model this could be a news |
|
|
|
00:26:17.600 --> 00:26:21.320 |
|
domain or sorry this could be a medical |
|
|
|
00:26:19.919 --> 00:26:22.919 |
|
domain language model and this could be |
|
|
|
00:26:21.320 --> 00:26:24.360 |
|
a news domain language model so you're |
|
|
|
00:26:22.919 --> 00:26:25.840 |
|
subtracting out the news domain |
|
|
|
00:26:24.360 --> 00:26:27.600 |
|
probabilities and adding in medical |
|
|
|
00:26:25.840 --> 00:26:30.240 |
|
domain probabilities to move it in that
|
|
|
00:26:27.600 --> 00:26:30.240 |
|
direction |
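
One way to sketch this core-plus-positive-minus-negative combination; the coefficient, names, and exact form are placeholders rather than the formulation from the paper:

    import torch
    import torch.nn.functional as F

    def combine_with_negative_evidence(core_logp, pos_logp, neg_logp, alpha=0.3):
        # core_logp: log-probs from the main model (e.g. the MT system)
        # pos_logp:  log-probs from the in-domain (e.g. medical) language model
        # neg_logp:  log-probs from the out-of-domain (e.g. news) language model
        # alpha > 0 boosts the positive model and penalizes the negative one
        combined = core_logp + alpha * pos_logp - alpha * neg_logp
        return F.softmax(combined, dim=-1)   # renormalize into a distribution

    core = F.log_softmax(torch.randn(8), dim=-1)   # toy vocabularies of size 8
    pos = F.log_softmax(torch.randn(8), dim=-1)
    neg = F.log_softmax(torch.randn(8), dim=-1)
    p = combine_with_negative_evidence(core, pos, neg)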
|
|
|
00:26:30.440 --> 00:26:36.799 |
|
um another example of this is uh |
|
|
|
00:26:32.919 --> 00:26:40.000 |
|
something called D experts or
|
|
|
00:26:36.799 --> 00:26:43.440 |
|
DExperts and the idea here is you
|
|
|
00:26:40.000 --> 00:26:46.120 |
|
have a strong language model as your |
|
|
|
00:26:43.440 --> 00:26:48.320 |
|
core and then as negative you have a |
|
|
|
00:26:46.120 --> 00:26:50.240 |
|
weak toxic language model so it was |
|
|
|
00:26:48.320 --> 00:26:52.760 |
|
trained on lots of like bad texts
|
|
|
00:26:50.240 --> 00:26:55.799 |
|
that you don't want to be generating and |
|
|
|
00:26:52.760 --> 00:26:57.159 |
|
the positive is a weak non-toxic |
|
|
|
00:26:55.799 --> 00:26:59.279 |
|
language model that was trained on lots |
|
|
|
00:26:57.159 --> 00:27:03.200 |
|
of like innocuous
|
|
|
00:26:59.279 --> 00:27:04.399 |
|
posts so that would help you detoxify |
|
|
|
00:27:03.200 --> 00:27:06.679 |
|
the outputs of the |
|
|
|
00:27:04.399 --> 00:27:09.799 |
|
language model so there's lots of examples of
|
|
|
00:27:06.679 --> 00:27:09.799 |
|
things like this that you can do |
|
|
|
00:27:10.720 --> 00:27:15.880 |
|
through |
|
|
|
00:27:12.880 --> 00:27:15.880 |
|
yeah |
|
|
|
00:27:19.320 --> 00:27:25.880 |
|
yeah um so the positive in the machine |
|
|
|
00:27:22.840 --> 00:27:27.679 |
|
translation example so this is
|
|
|
00:27:25.880 --> 00:27:31.760 |
|
a machine translation model where the |
|
|
|
00:27:27.679 --> 00:27:34.080 |
|
input is like in English and the output
|
|
|
00:27:31.760 --> 00:27:37.880 |
|
is in Japanese something like |
|
|
|
00:27:34.080 --> 00:27:39.679 |
|
that this is only trained on Japanese |
|
|
|
00:27:37.880 --> 00:27:42.919 |
|
but it's trained on like medical |
|
|
|
00:27:39.679 --> 00:27:44.440 |
|
Japanese for example and the news-domain one
|
|
|
00:27:42.919 --> 00:27:48.480 |
|
this is a language model that was |
|
|
|
00:27:44.440 --> 00:27:50.600 |
|
trained on like news domain Japanese
|
|
|
00:27:48.480 --> 00:27:54.039 |
|
or it could even literally just be |
|
|
|
00:27:50.600 --> 00:27:56.360 |
|
trained on the target side of the machine
|
|
|
00:27:54.039 --> 00:28:00.120 |
|
translation data so it's trying to remove
|
|
|
00:27:56.360 --> 00:28:00.120 |
|
the language modeling component from the |
|
|
|
00:28:03.720 --> 00:28:06.720 |
|
cool |
|
|
|
00:28:06.880 --> 00:28:11.480 |
|
okay so another thing that I should |
|
|
|
00:28:09.880 --> 00:28:14.720 |
|
point out I didn't actually put it on |
|
|
|
00:28:11.480 --> 00:28:18.399 |
|
the slides is um there's a lot of other |
|
|
|
00:28:14.720 --> 00:28:19.640 |
|
ways to get multiple models and um I |
|
|
|
00:28:18.399 --> 00:28:22.600 |
|
think a lot of people are probably |
|
|
|
00:28:19.640 --> 00:28:23.559 |
|
familiar with Dropout um it's a method |
|
|
|
00:28:22.600 --> 00:28:27.120 |
|
for |
|
|
|
00:28:23.559 --> 00:28:29.080 |
|
regularizing um it's a method for |
|
|
|
00:28:27.120 --> 00:28:31.120 |
|
regularizing |
|
|
|
00:28:29.080 --> 00:28:33.760 |
|
neural networks or deep learning models |
|
|
|
00:28:31.120 --> 00:28:37.279 |
|
in general and basically the idea is |
|
|
|
00:28:33.760 --> 00:28:41.840 |
|
every once in a while um during training |
|
|
|
00:28:37.279 --> 00:28:45.720 |
|
you drop out some portion of the
|
|
|
00:28:41.840 --> 00:28:48.919 |
|
nodes in the neural network model and |
|
|
|
00:28:45.720 --> 00:28:51.320 |
|
you can actually drop |
|
|
|
00:28:48.919 --> 00:28:52.640 |
|
out and normally what you do is at test |
|
|
|
00:28:51.320 --> 00:28:53.919 |
|
time then you just don't drop out |
|
|
|
00:28:52.640 --> 00:28:56.039 |
|
anything and you use the whole neural |
|
|
|
00:28:53.919 --> 00:28:59.960 |
|
network model but another thing you can |
|
|
|
00:28:56.039 --> 00:29:02.559 |
|
do is you can drop out at test time drop
|
|
|
00:28:59.960 --> 00:29:04.679 |
|
out five times and combine those |
|
|
|
00:29:02.559 --> 00:29:06.600 |
|
different models together through ensembling
|
|
|
00:29:04.679 --> 00:29:10.600 |
|
and that's actually something uh that |
|
|
|
00:29:06.600 --> 00:29:14.480 |
|
people tried in the Dropout
|
|
|
00:29:10.600 --> 00:29:17.600 |
|
paper and this is one way to get |
|
|
|
00:29:14.480 --> 00:29:19.640 |
|
multiple models uh and actually you can |
|
|
|
00:29:17.600 --> 00:29:21.919 |
|
demonstrate that this helps the original |
|
|
|
00:29:19.640 --> 00:29:24.519 |
|
motivation behind Dropout was precisely |
|
|
|
00:29:21.919 --> 00:29:26.279 |
|
coming from this idea of |
|
|
|
00:29:24.519 --> 00:29:29.080 |
|
ensembling |
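
A minimal sketch of that test-time trick: keep dropout switched on at inference, run several stochastic forward passes, and average the resulting distributions; the tiny network here is just a stand-in:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                          nn.Dropout(p=0.5), nn.Linear(32, 4))

    x = torch.randn(1, 16)
    model.train()          # keeps the Dropout layer active even though we are predicting
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(5)])
    prediction = probs.mean(dim=0)   # ensemble of five "thinned" networks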
|
|
|
00:29:26.279 --> 00:29:31.399 |
|
another method |
|
|
|
00:29:29.080 --> 00:29:34.799 |
|
that has been around for a very long |
|
|
|
00:29:31.399 --> 00:29:37.760 |
|
time another ensembling method is
|
|
|
00:29:34.799 --> 00:29:41.919 |
|
bagging and basically the way bagging |
|
|
|
00:29:37.760 --> 00:29:41.919 |
|
works is you have a data |
|
|
|
00:29:44.000 --> 00:29:50.159 |
|
set like this and you just resample the |
|
|
|
00:29:47.519 --> 00:29:52.919 |
|
data set so you sample all of the data
|
|
|
00:29:50.159 --> 00:29:55.200 |
|
with replacement and you get another
|
|
|
00:29:52.919 --> 00:29:57.799 |
|
data set of equal size and then you |
|
|
|
00:29:55.200 --> 00:29:58.559 |
|
train on this but you do that like 10 |
|
|
|
00:29:57.799 --> 00:30:00.120 |
|
times |
|
|
|
00:29:58.559 --> 00:30:02.679 |
|
and you train 10 different models and |
|
|
|
00:30:00.120 --> 00:30:04.360 |
|
then you ensemble those models together
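
A rough sketch of bagging; the train_model function and the dataset are hypothetical stand-ins:

    import random

    def bagging_ensemble(dataset, train_model, num_models=10):
        # dataset: list of training examples; train_model: any function that
        # takes a list of examples and returns a trained model (hypothetical).
        models = []
        for _ in range(num_models):
            # Resample with replacement to get a new dataset of the same size.
            resampled = [random.choice(dataset) for _ in range(len(dataset))]
            models.append(train_model(resampled))
        return models   # ensemble these models' predictions at test time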
|
|
|
00:30:02.679 --> 00:30:06.000 |
|
so this is another way to get multiple |
|
|
|
00:30:04.360 --> 00:30:07.519 |
|
models and both of these still improve |
|
|
|
00:30:06.000 --> 00:30:09.640 |
|
your robustness because they basically |
|
|
|
00:30:07.519 --> 00:30:11.440 |
|
get a different view on the data so they |
|
|
|
00:30:09.640 --> 00:30:13.440 |
|
smooth over some of the |
|
|
|
00:30:11.440 --> 00:30:15.360 |
|
idiosyncrasies um and as I mentioned |
|
|
|
00:30:13.440 --> 00:30:17.960 |
|
before you can also get multiple models |
|
|
|
00:30:15.360 --> 00:30:20.120 |
|
from different checkpoints and then uh |
|
|
|
00:30:17.960 --> 00:30:22.159 |
|
put them together and all of these |
|
|
|
00:30:20.120 --> 00:30:24.159 |
|
methods are pretty related
|
|
|
00:30:22.159 --> 00:30:25.960 |
|
basically what they're doing is they're |
|
|
|
00:30:24.159 --> 00:30:28.279 |
|
taking advantage of the fact that you |
|
|
|
00:30:25.960 --> 00:30:29.919 |
|
have particular models that saw |
|
|
|
00:30:28.279 --> 00:30:32.760 |
|
different data or saw data in a |
|
|
|
00:30:29.919 --> 00:30:34.120 |
|
different order or different nodes saw |
|
|
|
00:30:32.760 --> 00:30:35.679 |
|
different parts of the data because you |
|
|
|
00:30:34.120 --> 00:30:37.799 |
|
dropped out some of the nodes when they |
|
|
|
00:30:35.679 --> 00:30:41.840 |
|
were backpropping on particular
|
|
|
00:30:37.799 --> 00:30:44.840 |
|
varieties of the data so um even things |
|
|
|
00:30:41.840 --> 00:30:46.720 |
|
like this can give you models that are |
|
|
|
00:30:44.840 --> 00:30:49.760 |
|
different enough to help when
|
|
|
00:30:46.720 --> 00:30:49.760 |
|
you're ensembling or
|
|
|
00:30:52.559 --> 00:30:59.360 |
|
combining and then of course um you can |
|
|
|
00:30:56.919 --> 00:31:00.799 |
|
also |
|
|
|
00:30:59.360 --> 00:31:02.480 |
|
combine
|
|
|
00:31:00.799 --> 00:31:06.960 |
|
together like very different models like |
|
|
|
00:31:02.480 --> 00:31:06.960 |
|
this and that also works in different |
|
|
|
00:31:07.240 --> 00:31:11.159 |
|
ways |
|
|
|
00:31:09.000 --> 00:31:13.039 |
|
cool part of the reason why I wanted to |
|
|
|
00:31:11.159 --> 00:31:15.320 |
|
mention Dropout in
|
|
|
00:31:13.039 --> 00:31:17.120 |
|
particular is that there are also other
|
|
|
00:31:15.320 --> 00:31:19.240 |
|
efficient methods for using multiple |
|
|
|
00:31:17.120 --> 00:31:22.000 |
|
models so the big problem with |
|
|
|
00:31:19.240 --> 00:31:25.399 |
|
ensembling is the cost |
|
|
|
00:31:22.000 --> 00:31:27.159 |
|
and simple ensembling is very expensive |
|
|
|
00:31:25.399 --> 00:31:29.240 |
|
because it requires you to run multiple |
|
|
|
00:31:27.159 --> 00:31:30.519 |
|
models at test time at inference
|
|
|
00:31:29.240 --> 00:31:33.720 |
|
time and this is something you don't |
|
|
|
00:31:30.519 --> 00:31:35.279 |
|
want to be doing if you're you know |
|
|
|
00:31:33.720 --> 00:31:38.679 |
|
deploying a service or something because |
|
|
|
00:31:35.279 --> 00:31:41.080 |
|
it like linearly increases your cost by |
|
|
|
00:31:38.679 --> 00:31:45.200 |
|
the number of models that you're
|
|
|
00:31:41.080 --> 00:31:47.799 |
|
running and it requires both N times
|
|
|
00:31:45.200 --> 00:31:50.120 |
|
of computation and N times of memory
|
|
|
00:31:47.799 --> 00:31:51.720 |
|
and memory is actually probably the |
|
|
|
00:31:50.120 --> 00:31:54.279 |
|
worst thing because you need to deploy |
|
|
|
00:31:51.720 --> 00:31:58.159 |
|
extra GPU machines and other stuff like |
|
|
|
00:31:54.279 --> 00:31:59.880 |
|
that so um the question is is there any |
|
|
|
00:31:58.159 --> 00:32:03.279 |
|
way we can get some of the benefits of |
|
|
|
00:31:59.880 --> 00:32:06.519 |
|
ensembling without having to create
|
|
|
00:32:03.279 --> 00:32:07.320 |
|
multiple models and luckily the answer |
|
|
|
00:32:06.519 --> 00:32:09.240 |
|
is |
|
|
|
00:32:07.320 --> 00:32:11.919 |
|
yes |
|
|
|
00:32:09.240 --> 00:32:13.960 |
|
the easiest method for doing
|
|
|
00:32:11.919 --> 00:32:16.600 |
|
so is something called parameter |
|
|
|
00:32:13.960 --> 00:32:18.399 |
|
averaging and basically what you do is |
|
|
|
00:32:16.600 --> 00:32:21.960 |
|
you just average the parameters of |
|
|
|
00:32:18.399 --> 00:32:26.039 |
|
multiple models together um this only |
|
|
|
00:32:21.960 --> 00:32:29.200 |
|
works under certain conditions so does |
|
|
|
00:32:26.039 --> 00:32:31.120 |
|
does anyone know what these
|
|
|
00:32:29.200 --> 00:32:33.320 |
|
conditions might be there's a few |
|
|
|
00:32:31.120 --> 00:32:35.919 |
|
obvious ones and maybe a few slightly |
|
|
|
00:32:33.320 --> 00:32:35.919 |
|
less obvious |
|
|
|
00:32:36.039 --> 00:32:40.799 |
|
ones so like first question do you think |
|
|
|
00:32:38.799 --> 00:32:41.919 |
|
you could combine together do you think |
|
|
|
00:32:40.799 --> 00:32:45.880 |
|
you could average together the |
|
|
|
00:32:41.919 --> 00:32:45.880 |
|
parameters of Llama 7B and Llama
|
|
|
00:32:46.440 --> 00:32:52.639 |
|
70B
|
|
|
00:32:48.480 --> 00:32:52.639 |
|
no the answer is no but why |
|
|
|
00:32:54.480 --> 00:32:58.440 |
|
not I mean what does that even mean in |
|
|
|
00:32:56.760 --> 00:33:00.480 |
|
the first place right like they have |
|
|
|
00:32:58.440 --> 00:33:02.799 |
|
totally different numbers of parameters |
|
|
|
00:33:00.480 --> 00:33:05.840 |
|
you wouldn't be able to find a one-
|
|
|
00:33:02.799 --> 00:33:07.840 |
|
to-one association between like 7
|
|
|
00:33:05.840 --> 00:33:12.320 |
|
billion parameters and 70 billion |
|
|
|
00:33:07.840 --> 00:33:16.880 |
|
parameters um what about averaging |
|
|
|
00:33:12.320 --> 00:33:19.399 |
|
together let's say Llama 7B and
|
|
|
00:33:16.880 --> 00:33:19.399 |
|
Mistral
|
|
|
00:33:23.080 --> 00:33:29.760 |
|
7B yes no yeah I'm guessing that like for
|
|
|
00:33:27.440 --> 00:33:29.760 |
|
the |
|
|
|
00:33:33.760 --> 00:33:38.120 |
|
yeah for different architectures
|
|
|
00:33:36.760 --> 00:33:41.799 |
|
the parameters could mean different |
|
|
|
00:33:38.120 --> 00:33:44.159 |
|
things and even if the architecture is |
|
|
|
00:33:41.799 --> 00:33:45.880 |
|
exactly the same if your random
|
|
|
00:33:44.159 --> 00:33:49.880 |
|
initialization is different then that |
|
|
|
00:33:45.880 --> 00:33:52.360 |
|
would be disastrous because basically
|
|
|
00:33:49.880 --> 00:33:54.760 |
|
in neural networks there's no inherent |
|
|
|
00:33:52.360 --> 00:33:58.559 |
|
meaning to like parameter number one |
|
|
|
00:33:54.760 --> 00:34:01.919 |
|
right and there's the idea of permutation
|
|
|
00:33:58.559 --> 00:34:06.679 |
|
invariance which is
|
|
|
00:34:01.919 --> 00:34:07.639 |
|
you could like randomly swap all of
|
|
|
00:34:06.679 --> 00:34:10.280 |
|
the |
|
|
|
00:34:07.639 --> 00:34:12.079 |
|
dimensions within a neural
|
|
|
00:34:10.280 --> 00:34:14.760 |
|
network and get exactly the same |
|
|
|
00:34:12.079 --> 00:34:17.919 |
|
function |
|
|
|
00:34:14.760 --> 00:34:22.560 |
|
uh as long as kind |
|
|
|
00:34:17.919 --> 00:34:24.839 |
|
of in layer number one you swap and then |
|
|
|
00:34:22.560 --> 00:34:30.359 |
|
also swap the inputs in the next layer
|
|
|
00:34:24.839 --> 00:34:30.359 |
|
as well so you know
|
|
|
00:34:30.960 --> 00:34:36.399 |
|
if you have a weight matrix that
|
|
|
00:34:33.679 --> 00:34:40.800 |
|
results in the outputs being
|
|
|
00:34:36.399 --> 00:34:49.639 |
|
ordered like one two three four
|
|
|
00:34:40.800 --> 00:34:54.159 |
|
five or two one three five four as long as
|
|
|
00:34:49.639 --> 00:34:55.720 |
|
you also swap the input
|
|
|
00:34:54.159 --> 00:34:58.400 |
|
dimensions of this weight matrix you get
|
|
|
00:34:55.720 --> 00:35:01.520 |
|
exactly the same thing because they're just
|
|
|
00:34:58.400 --> 00:35:04.200 |
|
linear combinations of the parameters |
|
|
|
00:35:01.520 --> 00:35:06.480 |
|
together so neural networks have this |
|
|
|
00:35:04.200 --> 00:35:08.599 |
|
feature of permutation invariance so
|
|
|
00:35:06.480 --> 00:35:11.800 |
|
models that were trained from like |
|
|
|
00:35:08.599 --> 00:35:13.280 |
|
different uh different initializations |
|
|
|
00:35:11.800 --> 00:35:15.040 |
|
won't be able to be combined together in |
|
|
|
00:35:13.280 --> 00:35:18.320 |
|
this way
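
A minimal sketch of this permutation-invariance property, as a toy PyTorch example (the sizes here are arbitrary and just for illustration): permuting the hidden units of a two-layer MLP, and permuting the next layer's input columns the same way, yields exactly the same function.

```python
# Toy illustration: permuting the hidden units of a 2-layer MLP gives
# exactly the same function, as long as the next layer's input columns
# are permuted the same way.
import torch

d_in, d_hidden, d_out = 4, 8, 3
W1, b1 = torch.randn(d_hidden, d_in), torch.randn(d_hidden)
W2, b2 = torch.randn(d_out, d_hidden), torch.randn(d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ torch.relu(W1 @ x + b1) + b2

perm = torch.randperm(d_hidden)   # arbitrary reordering of the hidden units
x = torch.randn(d_in)
y_orig = mlp(x, W1, b1, W2, b2)
y_perm = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(torch.allclose(y_orig, y_perm))  # True: identical function
```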
|
|
|
00:35:15.040 --> 00:35:20.079 |
|
um but the good thing
|
|
|
00:35:18.320 --> 00:35:21.359 |
|
is actually we have a whole bunch of |
|
|
|
00:35:20.079 --> 00:35:25.320 |
|
models that come from the same |
|
|
|
00:35:21.359 --> 00:35:26.720 |
|
pre-trained model right uh so we we have |
|
|
|
00:35:25.320 --> 00:35:28.640 |
|
this initialization here this |
|
|
|
00:35:26.720 --> 00:35:31.280 |
|
initialization was used to train Llama
|
|
|
00:35:28.640 --> 00:35:32.920 |
|
2 7B but now we have like hundreds
|
|
|
00:35:31.280 --> 00:35:34.440 |
|
hundreds of models that are derived from
|
|
|
00:35:32.920 --> 00:35:37.400 |
|
Llama 2 we have hundreds of models that
|
|
|
00:35:34.440 --> 00:35:39.599 |
|
are derived from Mistral and there all of the
|
|
|
00:35:37.400 --> 00:35:40.920 |
|
dimensions actually mean the same thing |
|
|
|
00:35:39.599 --> 00:35:43.280 |
|
because they're derived from the same |
|
|
|
00:35:40.920 --> 00:35:46.680 |
|
parameters in the first place so those |
|
|
|
00:35:43.280 --> 00:35:48.119 |
|
ones we can average together and um |
|
|
|
00:35:46.680 --> 00:35:50.359 |
|
there's basically two ways that we can |
|
|
|
00:35:48.119 --> 00:35:53.520 |
|
do this uh one is by averaging together |
|
|
|
00:35:50.359 --> 00:35:55.240 |
|
multiple checkpoints during training so |
|
|
|
00:35:53.520 --> 00:35:57.960 |
|
originally this was the big thing that |
|
|
|
00:35:55.240 --> 00:36:00.359 |
|
people did uh like you would train a model
|
|
|
00:35:57.960 --> 00:36:02.119 |
|
from scratch for a really long time but |
|
|
|
00:36:00.359 --> 00:36:03.920 |
|
then you would take the final five |
|
|
|
00:36:02.119 --> 00:36:07.520 |
|
checkpoints and you would just average |
|
|
|
00:36:03.920 --> 00:36:09.280 |
|
them together and this helps reduce some |
|
|
|
00:36:07.520 --> 00:36:11.040 |
|
of the noise that you get from |
|
|
|
00:36:09.280 --> 00:36:13.839 |
|
stochastic gradient descent and can |
|
|
|
00:36:11.040 --> 00:36:15.520 |
|
improve your overall accuracy if you're |
|
|
|
00:36:13.839 --> 00:36:17.280 |
|
fine-tuning any models this is something |
|
|
|
00:36:15.520 --> 00:36:18.680 |
|
you can do also uh because you're |
|
|
|
00:36:17.280 --> 00:36:19.800 |
|
probably going to be saving checkpoints |
|
|
|
00:36:18.680 --> 00:36:21.160 |
|
you can just take the best five |
|
|
|
00:36:19.800 --> 00:36:23.079 |
|
checkpoints and average them together |
|
|
|
00:36:21.160 --> 00:36:27.280 |
|
and that actually can improve your |
|
|
|
00:36:23.079 --> 00:36:28.160 |
|
accuracy quite a bit
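
As a concrete sketch of checkpoint averaging (the file names below are hypothetical; this assumes all checkpoints come from the same training run, so their parameters line up):

```python
# Average the state dicts of the last few checkpoints from one run.
import torch

paths = ["ckpt_08.pt", "ckpt_09.pt", "ckpt_10.pt"]   # hypothetical file names
state_dicts = [torch.load(p, map_location="cpu") for p in paths]

avg = {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
       for k in state_dicts[0]}

# model.load_state_dict(avg)  # then evaluate the averaged model
```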
|
|
|
00:36:27.280 --> 00:36:31.520 |
|
um another thing is fine-
|
|
|
00:36:28.160 --> 00:36:32.880 |
|
tuned model merging so you fine-tune um in
|
|
|
00:36:31.520 --> 00:36:35.000 |
|
several ways and then merge them |
|
|
|
00:36:32.880 --> 00:36:39.079 |
|
together and so for example we might |
|
|
|
00:36:35.000 --> 00:36:41.240 |
|
take Llama 2 7B instruct and um Vicuna 7B
|
|
|
00:36:39.079 --> 00:36:44.760 |
|
1.5 and merge them together with some
|
|
|
00:36:41.240 --> 00:36:47.599 |
|
weights and uh we could you |
|
|
|
00:36:44.760 --> 00:36:50.319 |
|
know smooth over their idiosyncrasies
|
|
|
00:36:47.599 --> 00:36:52.520 |
|
and get better results |
|
|
|
00:36:50.319 --> 00:36:56.280 |
|
too |
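
The weighted merge described here is just an interpolation of the two fine-tuned models' parameters; a minimal sketch, where the file names and the 0.5 weight are only illustrative and both models are assumed to share the same base model:

```python
# Weighted average of two models fine-tuned from the same base model.
import torch

sd_a = torch.load("llama2_7b_instruct.pt", map_location="cpu")  # hypothetical
sd_b = torch.load("vicuna_7b_v1.5.pt", map_location="cpu")      # hypothetical
lam = 0.5  # merging weight; usually tuned on a dev set

merged = {k: lam * sd_a[k].float() + (1 - lam) * sd_b[k].float() for k in sd_a}
```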
|
|
|
00:36:52.520 --> 00:36:56.280 |
|
cool uh any questions |
|
|
|
00:36:56.520 --> 00:36:59.520 |
|
here |
|
|
|
00:37:00.920 --> 00:37:03.119 |
|
oh |
|
|
|
00:37:04.680 --> 00:37:11.920 |
|
yeah want to so I just |
|
|
|
00:37:09.680 --> 00:37:14.079 |
|
came |
|
|
|
00:37:11.920 --> 00:37:19.040 |
|
non I |
|
|
|
00:37:14.079 --> 00:37:19.040 |
|
use like those different chain and |
|
|
|
00:37:19.640 --> 00:37:23.319 |
|
just |
|
|
|
00:37:21.160 --> 00:37:26.640 |
|
I pretty |
|
|
|
00:37:23.319 --> 00:37:29.520 |
|
efficient because on the same model you |
|
|
|
00:37:26.640 --> 00:37:29.520 |
|
get |
|
|
|
00:37:35.640 --> 00:37:40.839 |
|
yeah so would this would this parameter |
|
|
|
00:37:38.000 --> 00:37:46.119 |
|
averaging be a good method for uh making
|
|
|
00:37:40.839 --> 00:37:49.839 |
|
a model less toxic for example the |
|
|
|
00:37:46.119 --> 00:37:53.200 |
|
answer is a little bit trickier there I |
|
|
|
00:37:49.839 --> 00:37:56.119 |
|
guess because um I I feel like this is |
|
|
|
00:37:53.200 --> 00:37:58.160 |
|
good for mixing two models together so |
|
|
|
00:37:56.119 --> 00:38:01.400 |
|
if you're mixing your |
|
|
|
00:37:58.160 --> 00:38:03.359 |
|
like non-toxicity tuned model or your |
|
|
|
00:38:01.400 --> 00:38:06.079 |
|
safety tuned model with the original |
|
|
|
00:38:03.359 --> 00:38:07.520 |
|
base model that was not uh safety tuned |
|
|
|
00:38:06.079 --> 00:38:08.800 |
|
or something like that then you might |
|
|
|
00:38:07.520 --> 00:38:11.240 |
|
get something in the middle so you might |
|
|
|
00:38:08.800 --> 00:38:13.319 |
|
get something that's less safe than the |
|
|
|
00:38:11.240 --> 00:38:18.720 |
|
uh like the model that was tuned to not |
|
|
|
00:38:13.319 --> 00:38:21.400 |
|
be toxic so it might be uh yeah I'm not |
|
|
|
00:38:18.720 --> 00:38:23.920 |
|
sure but like let's say you let's say |
|
|
|
00:38:21.400 --> 00:38:26.240 |
|
you have a model that somebody |
|
|
|
00:38:23.920 --> 00:38:28.640 |
|
else did like a really good job |
|
|
|
00:38:26.240 --> 00:38:31.359 |
|
instruction tuning for you |
|
|
|
00:38:28.640 --> 00:38:33.640 |
|
um and anytime you start using safety |
|
|
|
00:38:31.359 --> 00:38:35.560 |
|
tuning on it you like hurt the |
|
|
|
00:38:33.640 --> 00:38:38.680 |
|
instruction tuning like the model gets |
|
|
|
00:38:35.560 --> 00:38:40.560 |
|
worse I could see a world where you take |
|
|
|
00:38:38.680 --> 00:38:43.000 |
|
the base model the same base model you |
|
|
|
00:38:40.560 --> 00:38:45.280 |
|
take Llama 2 7B you train like a less
|
|
|
00:38:43.000 --> 00:38:47.480 |
|
toxic version of Llama 2 7B and then do
|
|
|
00:38:45.280 --> 00:38:51.319 |
|
parameter averaging with the like well |
|
|
|
00:38:47.480 --> 00:38:53.160 |
|
instruction tuned model um that might |
|
|
|
00:38:51.319 --> 00:38:55.359 |
|
work that might make something that's |
|
|
|
00:38:53.160 --> 00:38:57.560 |
|
more safe and like not much worse |
|
|
|
00:38:55.359 --> 00:39:01.440 |
|
instruction tuned so there's definitely I
|
|
|
00:38:57.560 --> 00:39:01.440 |
|
think creative things that you can do |
|
|
|
00:39:01.520 --> 00:39:08.400 |
|
that um maybe I'll go directly into the |
|
|
|
00:39:04.960 --> 00:39:11.480 |
|
methods um |
|
|
|
00:39:08.400 --> 00:39:13.240 |
|
so uh there's a few uh recent papers on |
|
|
|
00:39:11.480 --> 00:39:16.000 |
|
this like this method has been around |
|
|
|
00:39:13.240 --> 00:39:17.880 |
|
for a long time since at least 1996 but |
|
|
|
00:39:16.000 --> 00:39:20.880 |
|
uh recently people have examined it a |
|
|
|
00:39:17.880 --> 00:39:24.800 |
|
lot in the context of uh kind of modern |
|
|
|
00:39:20.880 --> 00:39:27.400 |
|
networks and uh this paper model soup uh |
|
|
|
00:39:24.800 --> 00:39:29.000 |
|
examines two strategies the first one is |
|
|
|
00:39:27.400 --> 00:39:31.400 |
|
uniform averaging where you just average |
|
|
|
00:39:29.000 --> 00:39:33.560 |
|
all the parameters together uh like as |
|
|
|
00:39:31.400 --> 00:39:35.480 |
|
you would expect but they also have a |
|
|
|
00:39:33.560 --> 00:39:38.319 |
|
greedy averaging method and basically |
|
|
|
00:39:35.480 --> 00:39:40.240 |
|
what they do here is they add one model |
|
|
|
00:39:38.319 --> 00:39:42.119 |
|
and check if the whole like averaged |
|
|
|
00:39:40.240 --> 00:39:43.680 |
|
model improves and then only if the |
|
|
|
00:39:42.119 --> 00:39:45.760 |
|
whole averaged model improves do they |
|
|
|
00:39:43.680 --> 00:39:49.040 |
|
keep that model otherwise they throw it |
|
|
|
00:39:45.760 --> 00:39:52.960 |
|
out and then they um they don't use it
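
A rough sketch of that greedy procedure (the evaluate function and the candidate state dicts are placeholders you would supply; the actual paper also sorts the candidates by held-out accuracy before starting):

```python
# Greedy "model soup" sketch: add a model to the soup only if the averaged
# soup improves held-out accuracy.
import torch

def average(state_dicts):
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}

def greedy_soup(candidates, evaluate):
    # candidates: list of state dicts; evaluate: held-out accuracy of a state dict
    soup = [candidates[0]]
    best = evaluate(average(soup))
    for sd in candidates[1:]:
        score = evaluate(average(soup + [sd]))
        if score >= best:          # keep the model only if the soup improves
            soup.append(sd)
            best = score
    return average(soup)
```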
|
|
|
00:39:49.040 --> 00:39:54.520 |
|
so what they demonstrate uh this is a
|
|
|
00:39:52.960 --> 00:39:57.560 |
|
little bit small but basically the |
|
|
|
00:39:54.520 --> 00:40:00.520 |
|
purple star here is uh when they use
|
|
|
00:39:57.560 --> 00:40:02.480 |
|
greedy averaging and then the blue |
|
|
|
00:40:00.520 --> 00:40:05.119 |
|
circle here is when they use the uniform |
|
|
|
00:40:02.480 --> 00:40:08.280 |
|
averaging and then green is all of the |
|
|
|
00:40:05.119 --> 00:40:09.960 |
|
models that they they put into this |
|
|
|
00:40:08.280 --> 00:40:12.560 |
|
average |
|
|
|
00:40:09.960 --> 00:40:16.680 |
|
and what they found |
|
|
|
00:40:12.560 --> 00:40:18.480 |
|
is this is average uh accuracy on ImageNet
|
|
|
00:40:16.680 --> 00:40:22.400 |
|
which is the thing that they they
|
|
|
00:40:18.480 --> 00:40:25.160 |
|
used in deciding which models to merge |
|
|
|
00:40:22.400 --> 00:40:26.920 |
|
in greedily and then this is on |
|
|
|
00:40:25.160 --> 00:40:28.640 |
|
distribution shifts so this is on other |
|
|
|
00:40:26.920 --> 00:40:31.119 |
|
data sets other than the ones they use |
|
|
|
00:40:28.640 --> 00:40:33.040 |
|
specifically for training and what you |
|
|
|
00:40:31.119 --> 00:40:34.720 |
|
can see is the greedy averaging method |
|
|
|
00:40:33.040 --> 00:40:38.720 |
|
does |
|
|
|
00:40:34.720 --> 00:40:40.839 |
|
better um than the best single model on |
|
|
|
00:40:38.720 --> 00:40:42.319 |
|
the data set that they used to decide |
|
|
|
00:40:40.839 --> 00:40:44.800 |
|
that greedy |
|
|
|
00:40:42.319 --> 00:40:46.560 |
|
average the uniform average actually |
|
|
|
00:40:44.800 --> 00:40:48.359 |
|
does worse than the best model so you |
|
|
|
00:40:46.560 --> 00:40:50.960 |
|
would actually be better off for ImageNet
|
|
|
00:40:48.359 --> 00:40:52.960 |
|
accuracy to just use the best model
|
|
|
00:40:50.960 --> 00:40:56.000 |
|
but it's more robust so on the |
|
|
|
00:40:52.960 --> 00:40:57.319 |
|
distribution shift like data set it |
|
|
|
00:40:56.000 --> 00:41:00.000 |
|
actually does better than any of the
|
|
|
00:40:57.319 --> 00:41:02.280 |
|
models so um you can see that there's |
|
|
|
00:41:00.000 --> 00:41:04.720 |
|
kind of trade-offs between choosing |
|
|
|
00:41:02.280 --> 00:41:06.480 |
|
those |
|
|
|
00:41:04.720 --> 00:41:09.319 |
|
essentially |
|
|
|
00:41:06.480 --> 00:41:12.040 |
|
um whoops that's a that's a typo that |
|
|
|
00:41:09.319 --> 00:41:15.760 |
|
should be ensembling but um they also |
|
|
|
00:41:12.040 --> 00:41:18.440 |
|
demonstrate that um averaging is |
|
|
|
00:41:15.760 --> 00:41:22.720 |
|
correlated with ensembling so this is |
|
|
|
00:41:18.440 --> 00:41:25.200 |
|
the um ImageNet accuracy of the parameter
|
|
|
00:41:22.720 --> 00:41:27.000 |
|
averaged model this is ImageNet accuracy
|
|
|
00:41:25.200 --> 00:41:30.200 |
|
of the ensemble so this is actually I
|
|
|
00:41:27.000 --> 00:41:33.720 |
|
think a really interesting figure um what
|
|
|
00:41:30.200 --> 00:41:36.440 |
|
it shows is that there's a pretty strong |
|
|
|
00:41:33.720 --> 00:41:38.760 |
|
correlation between the two averaging is |
|
|
|
00:41:36.440 --> 00:41:41.400 |
|
almost never better than ensembling the |
|
|
|
00:41:38.760 --> 00:41:44.800 |
|
two together but it's faster of course |
|
|
|
00:41:41.400 --> 00:41:48.119 |
|
so it's better because it's faster and |
|
|
|
00:41:44.800 --> 00:41:50.000 |
|
there are situations where the ensemble
|
|
|
00:41:48.119 --> 00:41:51.680 |
|
is much better than the average model so |
|
|
|
00:41:50.000 --> 00:41:55.720 |
|
like the average model hurts the |
|
|
|
00:41:51.680 --> 00:41:58.560 |
|
averaging hurts um ensembling does not hurt
|
|
|
00:41:55.720 --> 00:42:01.319 |
|
so what this shows you is parameter |
|
|
|
00:41:58.560 --> 00:42:03.119 |
|
averaging is is safe and it nearly |
|
|
|
00:42:01.319 --> 00:42:04.359 |
|
approximates model ensembling most of
|
|
|
00:42:03.119 --> 00:42:06.720 |
|
the time but there are cases where it |
|
|
|
00:42:04.359 --> 00:42:08.119 |
|
doesn't so you do need to be a little |
|
|
|
00:42:06.720 --> 00:42:11.720 |
|
bit careful and it might hurt your |
|
|
|
00:42:08.119 --> 00:42:11.720 |
|
accuracy in some cases |
|
|
|
00:42:16.680 --> 00:42:21.520 |
|
yeah oh yeah sorry very good point yes |
|
|
|
00:42:19.280 --> 00:42:21.520 |
|
it's |
|
|
|
00:42:22.319 --> 00:42:29.119 |
|
parallel yeah
|
|
|
00:42:26.119 --> 00:42:29.119 |
|
this |
|
|
|
00:42:36.480 --> 00:42:41.520 |
|
um how do you know |
|
|
|
00:42:39.400 --> 00:42:45.720 |
|
it's |
|
|
|
00:42:41.520 --> 00:42:48.280 |
|
particular yeah so notably all of these |
|
|
|
00:42:45.720 --> 00:42:48.280 |
|
are |
|
|
|
00:42:48.800 --> 00:42:52.240 |
|
initialized it's been a little while |
|
|
|
00:42:50.800 --> 00:42:54.079 |
|
since I read this but I know all of |
|
|
|
00:42:52.240 --> 00:42:56.520 |
|
these were initialized from a model that |
|
|
|
00:42:54.079 --> 00:42:58.160 |
|
was already pretty good on ImageNet
|
|
|
00:42:56.520 --> 00:43:01.760 |
|
and then they were tuned in different |
|
|
|
00:42:58.160 --> 00:43:03.800 |
|
ways I guess and so this I think this |
|
|
|
00:43:01.760 --> 00:43:05.319 |
|
might be initialized with a model that |
|
|
|
00:43:03.800 --> 00:43:09.160 |
|
was trained on a different data set or |
|
|
|
00:43:05.319 --> 00:43:10.160 |
|
something like that um and so they are |
|
|
|
00:43:09.160 --> 00:43:12.480 |
|
all starting from the same |
|
|
|
00:43:10.160 --> 00:43:14.480 |
|
initialization so parameter uh
|
|
|
00:43:12.480 --> 00:43:16.599 |
|
permutation invariance is not an issue
|
|
|
00:43:14.480 --> 00:43:19.200 |
|
there because they're starting from the |
|
|
|
00:43:16.599 --> 00:43:23.480 |
|
pre-trained model um but despite the fact that it's
|
|
|
00:43:19.200 --> 00:43:26.520 |
|
not a problem there are there are cases |
|
|
|
00:43:23.480 --> 00:43:29.119 |
|
where like averaging is detrimental |
|
|
|
00:43:26.520 --> 00:43:29.119 |
|
compared to |
|
|
|
00:43:32.839 --> 00:43:37.559 |
|
um okay so |
|
|
|
00:43:42.800 --> 00:43:45.800 |
|
yeah |
|
|
|
00:43:51.720 --> 00:43:54.720 |
|
yep |
|
|
|
00:43:56.040 --> 00:43:59.040 |
|
y |
|
|
|
00:44:07.079 --> 00:44:10.079 |
|
okay |
|
|
|
00:44:26.040 --> 00:44:29.040 |
|
y |
|
|
|
00:44:46.319 --> 00:44:52.520 |
|
yeah so that's a great question um I'll |
|
|
|
00:44:48.240 --> 00:44:54.920 |
|
just repeat it which is um the these |
|
|
|
00:44:52.520 --> 00:44:57.520 |
|
experiments were done on CNNs or ImageNet
|
|
|
00:44:54.920 --> 00:44:59.280 |
|
like uh CNN-based ImageNet
|
|
|
00:44:57.520 --> 00:45:01.119 |
|
classifiers is there something different |
|
|
|
00:44:59.280 --> 00:45:04.040 |
|
than Transformers particularly because |
|
|
|
00:45:01.119 --> 00:45:06.240 |
|
Transformer representations tend to be |
|
|
|
00:45:04.040 --> 00:45:09.000 |
|
uh like very concentrated in particular |
|
|
|
00:45:06.240 --> 00:45:11.359 |
|
parts of the space that's an excellent |
|
|
|
00:45:09.000 --> 00:45:14.040 |
|
question um what I do know is a lot of |
|
|
|
00:45:11.359 --> 00:45:15.319 |
|
people do merge together Transformer |
|
|
|
00:45:14.040 --> 00:45:18.319 |
|
models in fact if you look at the |
|
|
|
00:45:15.319 --> 00:45:20.079 |
|
hugging face leaderboard there's like |
|
|
|
00:45:18.319 --> 00:45:22.240 |
|
something and something merged together
|
|
|
00:45:20.079 --> 00:45:24.200 |
|
like all over the leaderboard and it
|
|
|
00:45:22.240 --> 00:45:25.960 |
|
does tend to improve accuracy so I I |
|
|
|
00:45:24.200 --> 00:45:27.480 |
|
know it is definitely effective for |
|
|
|
00:45:25.960 --> 00:45:28.559 |
|
Transformers |
|
|
|
00:45:27.480 --> 00:45:32.040 |
|
however are
|
|
|
00:45:28.559 --> 00:45:34.640 |
|
there specific model like parameter |
|
|
|
00:45:32.040 --> 00:45:37.040 |
|
averaging or model merging methods that |
|
|
|
00:45:34.640 --> 00:45:38.599 |
|
could improve accuracy by taking |
|
|
|
00:45:37.040 --> 00:45:40.680 |
|
advantage of the fact that Transformers |
|
|
|
00:45:38.599 --> 00:45:42.480 |
|
behaving a certain way I think that's
|
|
|
00:45:40.680 --> 00:45:44.920 |
|
totally possible and you know it would |
|
|
|
00:45:42.480 --> 00:45:48.800 |
|
be an interesting research direction um
|
|
|
00:45:44.920 --> 00:45:51.680 |
|
I'm not familiar enough with that |
|
|
|
00:45:48.800 --> 00:45:53.359 |
|
particular part myself to say oh I have |
|
|
|
00:45:51.680 --> 00:45:55.160 |
|
this great idea that you should work on |
|
|
|
00:45:53.359 --> 00:45:55.920 |
|
but I think if you're interested in it |
|
|
|
00:45:55.160 --> 00:45:58.160 |
|
you |
|
|
|
00:45:55.920 --> 00:46:00.280 |
|
definitely |
|
|
|
00:45:58.160 --> 00:46:05.240 |
|
cool anything |
|
|
|
00:46:00.280 --> 00:46:08.920 |
|
else okay so there's also the idea of uh
|
|
|
00:46:05.240 --> 00:46:12.440 |
|
task vectors and um basically task |
|
|
|
00:46:08.920 --> 00:46:15.280 |
|
vectors here we are just merging |
|
|
|
00:46:12.440 --> 00:46:17.280 |
|
together two models by taking the |
|
|
|
00:46:15.280 --> 00:46:18.280 |
|
parameters of the models and averaging |
|
|
|
00:46:17.280 --> 00:46:22.079 |
|
them |
|
|
|
00:46:18.280 --> 00:46:24.480 |
|
together task vectors and other related |
|
|
|
00:46:22.079 --> 00:46:26.040 |
|
works specifically take advantage of the |
|
|
|
00:46:24.480 --> 00:46:27.640 |
|
fact that we're looking at different |
|
|
|
00:46:26.040 --> 00:46:29.160 |
|
fine-tuned models |
|
|
|
00:46:27.640 --> 00:46:31.480 |
|
and so these are models where we have a |
|
|
|
00:46:29.160 --> 00:46:33.920 |
|
base model and we know that uh that we |
|
|
|
00:46:31.480 --> 00:46:35.760 |
|
fine-tuned from this base model and the |
|
|
|
00:46:33.920 --> 00:46:38.480 |
|
basic idea is that we have our base |
|
|
|
00:46:35.760 --> 00:46:40.319 |
|
model here and the task vector is the
|
|
|
00:46:38.480 --> 00:46:43.280 |
|
difference between the base model's
|
|
|
00:46:40.319 --> 00:46:45.559 |
|
uh parameters and the uh fine-
|
|
|
00:46:43.280 --> 00:46:49.480 |
|
tuned model's parameters so that's what
|
|
|
00:46:45.559 --> 00:46:52.720 |
|
they define as a task vector um what
|
|
|
00:46:49.480 --> 00:46:56.000 |
|
does this allow us to do this allows us |
|
|
|
00:46:52.720 --> 00:46:58.040 |
|
to do a number of interesting things um |
|
|
|
00:46:56.000 --> 00:47:02.359 |
|
the first one |
|
|
|
00:46:58.040 --> 00:47:05.119 |
|
is that we can actually subtract out uh |
|
|
|
00:47:02.359 --> 00:47:08.960 |
|
quote unquote tasks that we don't want |
|
|
|
00:47:05.119 --> 00:47:11.559 |
|
so like let's say we had a model that |
|
|
|
00:47:08.960 --> 00:47:13.440 |
|
was trained on lots of toxic text or we |
|
|
|
00:47:11.559 --> 00:47:15.760 |
|
had a model that was trained on lots of |
|
|
|
00:47:13.440 --> 00:47:18.760 |
|
private text or something like that we |
|
|
|
00:47:15.760 --> 00:47:22.040 |
|
could actually subtract out the task |
|
|
|
00:47:18.760 --> 00:47:24.240 |
|
vector from this and basically attempt
|
|
|
00:47:22.040 --> 00:47:27.480 |
|
to remove the model's ability to uh do |
|
|
|
00:47:24.240 --> 00:47:31.240 |
|
that sort of thing um you can also
|
|
|
00:47:27.480 --> 00:47:36.040 |
|
take two task vectors and combine them |
|
|
|
00:47:31.240 --> 00:47:39.280 |
|
together and uh like get the model uh |
|
|
|
00:47:36.040 --> 00:47:42.200 |
|
from the combination of the two um this |
|
|
|
00:47:39.280 --> 00:47:44.280 |
|
isn't exactly the same as averaging the |
|
|
|
00:47:42.200 --> 00:47:45.440 |
|
parameters because if you average the |
|
|
|
00:47:44.280 --> 00:47:47.400 |
|
parameters you would probably get |
|
|
|
00:47:45.440 --> 00:47:49.160 |
|
something in the middle right here but |
|
|
|
00:47:47.400 --> 00:47:50.440 |
|
if you average the two vectors or add |
|
|
|
00:47:49.160 --> 00:47:52.040 |
|
the two vectors together you would get |
|
|
|
00:47:50.440 --> 00:47:53.760 |
|
something over here actually sorry if |
|
|
|
00:47:52.040 --> 00:47:56.520 |
|
you average the vectors maybe it's the |
|
|
|
00:47:53.760 --> 00:47:58.119 |
|
same so you could like add together the |
|
|
|
00:47:56.520 --> 00:47:59.480 |
|
two vectors and and that would be |
|
|
|
00:47:58.119 --> 00:48:01.640 |
|
something different than taking the |
|
|
|
00:47:59.480 --> 00:48:05.280 |
|
average so it gives you a little bit |
|
|
|
00:48:01.640 --> 00:48:07.720 |
|
more flexibility about things to do |
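
A minimal sketch of task-vector arithmetic (the file names are hypothetical; the scaling factor is usually tuned on a dev set):

```python
# Task-vector sketch: a task vector is (fine-tuned params - base params).
# You can add task vectors to combine skills, or subtract one to try to
# remove a behavior.
import torch

base = torch.load("base_model.pt", map_location="cpu")          # hypothetical
ft_a = torch.load("finetuned_task_a.pt", map_location="cpu")    # hypothetical
ft_b = torch.load("finetuned_task_b.pt", map_location="cpu")    # hypothetical

tau_a = {k: ft_a[k].float() - base[k].float() for k in base}
tau_b = {k: ft_b[k].float() - base[k].float() for k in base}

lam = 1.0  # scaling factor for the task vectors
# Add both tasks:
combined = {k: base[k].float() + lam * (tau_a[k] + tau_b[k]) for k in base}
# Negate a task (e.g. try to remove the behavior learned for task B):
negated = {k: base[k].float() - lam * tau_b[k] for k in base}
```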
|
|
|
00:48:05.280 --> 00:48:09.599 |
|
um and another thing this allows you to |
|
|
|
00:48:07.720 --> 00:48:12.920 |
|
do is this allows you to try to resolve |
|
|
|
00:48:09.599 --> 00:48:15.400 |
|
conflicts between um vectors of |
|
|
|
00:48:12.920 --> 00:48:19.720 |
|
different tasks and so this is an |
|
|
|
00:48:15.400 --> 00:48:22.480 |
|
illustration of this method here
|
|
|
00:48:19.720 --> 00:48:25.680 |
|
and this has three tasks basically it |
|
|
|
00:48:22.480 --> 00:48:27.720 |
|
has model one model two model three and |
|
|
|
00:48:25.680 --> 00:48:29.920 |
|
each of them has vectors and you'll see |
|
|
|
00:48:27.720 --> 00:48:32.880 |
|
that in some cases these vectors |
|
|
|
00:48:29.920 --> 00:48:34.599 |
|
conflict so we have like pink going up |
|
|
|
00:48:32.880 --> 00:48:36.079 |
|
we have yellow and purple going down we |
|
|
|
00:48:34.599 --> 00:48:37.800 |
|
have yellow going up we have pink and |
|
|
|
00:48:36.079 --> 00:48:40.720 |
|
purple going down etc |
|
|
|
00:48:37.800 --> 00:48:43.040 |
|
etc and what this does is this |
|
|
|
00:48:40.720 --> 00:48:45.960 |
|
identifies the vectors that are uh |
|
|
|
00:48:43.040 --> 00:48:48.040 |
|
pointing the most strongly in particular |
|
|
|
00:48:45.960 --> 00:48:50.440 |
|
directions and then it resolves |
|
|
|
00:48:48.040 --> 00:48:52.240 |
|
conflicts between them and comes up with |
|
|
|
00:48:50.440 --> 00:48:54.559 |
|
a vector that tries to move in a |
|
|
|
00:48:52.240 --> 00:48:55.920 |
|
direction that improves all of the tasks |
|
|
|
00:48:54.559 --> 00:48:59.319 |
|
at the same time and they demonstrate |
|
|
|
00:48:55.920 --> 00:49:01.480 |
|
that this is a better method for um kind
|
|
|
00:48:59.319 --> 00:49:04.599 |
|
of improving the ability to do all of |
|
|
|
00:49:01.480 --> 00:49:09.599 |
|
the tasks compared to just averaging |
|
|
|
00:49:04.599 --> 00:49:09.599 |
|
things together so yeah first |
|
|
|
00:49:11.920 --> 00:49:15.559 |
|
first example like it just
|
|
|
00:49:16.880 --> 00:49:23.640 |
|
add yeah so this is |
|
|
|
00:49:20.680 --> 00:49:25.760 |
|
um yeah you could move it more in that |
|
|
|
00:49:23.640 --> 00:49:27.319 |
|
direction it there's obviously no |
|
|
|
00:49:25.760 --> 00:49:29.720 |
|
guarantee that it would make it better |
|
|
|
00:49:27.319 --> 00:49:32.319 |
|
but it might make it more extreme at |
|
|
|
00:49:29.720 --> 00:49:35.760 |
|
least so uh |
|
|
|
00:49:32.319 --> 00:49:35.760 |
|
yeah any other |
|
|
|
00:49:36.680 --> 00:49:39.960 |
|
questions all |
|
|
|
00:49:55.640 --> 00:49:58.640 |
|
yes |
|
|
|
00:50:25.640 --> 00:50:28.640 |
|
one |
|
|
|
00:50:32.319 --> 00:50:37.240 |
|
yeah yeah so this is a a great question |
|
|
|
00:50:35.599 --> 00:50:38.760 |
|
um I can explain a little bit I'm not |
|
|
|
00:50:37.240 --> 00:50:40.760 |
|
going to talk about meta-learning
|
|
|
00:50:38.760 --> 00:50:42.680 |
|
extensively in this class but just to |
|
|
|
00:50:40.760 --> 00:50:46.040 |
|
give a very quick primer for people who |
|
|
|
00:50:42.680 --> 00:50:46.040 |
|
don't know about it |
|
|
|
00:50:55.640 --> 00:50:58.640 |
|
um |
|
|
|
00:51:00.359 --> 00:51:06.040 |
|
this is an example of a paper on meta-
|
|
|
00:51:03.319 --> 00:51:09.559 |
|
learning for low-resource machine
|
|
|
00:51:06.040 --> 00:51:12.680 |
|
translation um I you can take a look at |
|
|
|
00:51:09.559 --> 00:51:16.200 |
|
this paper um or not take a look at this |
|
|
|
00:51:12.680 --> 00:51:17.760 |
|
paper um uh but the reason why I wanted |
|
|
|
00:51:16.200 --> 00:51:20.799 |
|
to look at this paper is because it has |
|
|
|
00:51:17.760 --> 00:51:25.160 |
|
a good um uh it has a good illustration |
|
|
|
00:51:20.799 --> 00:51:27.200 |
|
of what meta-learning is and basically
|
|
|
00:51:25.160 --> 00:51:29.160 |
|
um if we |
|
|
|
00:51:27.200 --> 00:51:33.839 |
|
are doing transfer learning from a |
|
|
|
00:51:29.160 --> 00:51:35.880 |
|
single task what we do is we have like a |
|
|
|
00:51:33.839 --> 00:51:37.960 |
|
Spanish English machine translation |
|
|
|
00:51:35.880 --> 00:51:41.839 |
|
system and then we fine-tune it to try |
|
|
|
00:51:37.960 --> 00:51:45.280 |
|
to hit like to try to be a good Romanian |
|
|
|
00:51:41.839 --> 00:51:48.680 |
|
uh English or Latvian English system if
|
|
|
00:51:45.280 --> 00:51:50.400 |
|
we're doing multitask learning um or |
|
|
|
00:51:48.680 --> 00:51:53.079 |
|
which also could be equivalent to like |
|
|
|
00:51:50.400 --> 00:51:55.680 |
|
instruction tuning for example we have |
|
|
|
00:51:53.079 --> 00:51:57.680 |
|
uh French uh Spanish and Portuguese we |
|
|
|
00:51:55.680 --> 00:52:03.319 |
|
train on all of them and then we
|
|
|
00:51:57.680 --> 00:52:06.520 |
|
fine-tune to uh to be a good Romanian uh |
|
|
|
00:52:03.319 --> 00:52:09.240 |
|
translator or Latvian uh
|
|
|
00:52:06.520 --> 00:52:10.760 |
|
translator whereas meta-learning what
|
|
|
00:52:09.240 --> 00:52:12.119 |
|
it's trying to do is it's trying to |
|
|
|
00:52:10.760 --> 00:52:14.680 |
|
learn a good |
|
|
|
00:52:12.119 --> 00:52:17.480 |
|
initialization that makes it easy to |
|
|
|
00:52:14.680 --> 00:52:21.280 |
|
fine-tune to try to come up with a model |
|
|
|
00:52:17.480 --> 00:52:23.839 |
|
that is good uh for fine-tuning into new |
|
|
|
00:52:21.280 --> 00:52:29.040 |
|
tasks |
|
|
|
00:52:23.839 --> 00:52:32.200 |
|
um the way you do this is basically um |
|
|
|
00:52:29.040 --> 00:52:36.599 |
|
you have two |
|
|
|
00:52:32.200 --> 00:52:39.400 |
|
steps um of gradient descent and so you |
|
|
|
00:52:36.599 --> 00:52:42.400 |
|
have a first step where you uh train the |
|
|
|
00:52:39.400 --> 00:52:42.400 |
|
model |
|
|
|
00:52:42.599 --> 00:52:50.160 |
|
um where you have an update on like data |
|
|
|
00:52:47.119 --> 00:52:50.160 |
|
from French for |
|
|
|
00:52:55.440 --> 00:53:02.400 |
|
example |
|
|
|
00:52:57.920 --> 00:53:02.400 |
|
and then you have another |
|
|
|
00:53:04.640 --> 00:53:10.599 |
|
update um where you train on like black |
|
|
|
00:53:07.880 --> 00:53:10.599 |
|
or something like |
|
|
|
00:53:12.559 --> 00:53:17.040 |
|
this and this is a very informal very |
|
|
|
00:53:15.599 --> 00:53:18.200 |
|
informal description there's a lot of |
|
|
|
00:53:17.040 --> 00:53:19.599 |
|
stuff we could talk about here I could |
|
|
|
00:53:18.200 --> 00:53:22.119 |
|
have a whole class on this but we're not |
|
|
|
00:53:19.599 --> 00:53:27.200 |
|
going to um I don't have one planned at |
|
|
|
00:53:22.119 --> 00:53:28.559 |
|
the moment um and so you uh you update once
|
|
|
00:53:27.200 --> 00:53:30.319 |
|
and then you update again and you |
|
|
|
00:53:28.559 --> 00:53:33.400 |
|
differentiate through this update |
|
|
|
00:53:30.319 --> 00:53:35.160 |
|
process uh so that this becomes like |
|
|
|
00:53:33.400 --> 00:53:37.440 |
|
essentially a good initialization for |
|
|
|
00:53:35.160 --> 00:53:40.640 |
|
training on other languages or for other |
|
|
|
00:53:37.440 --> 00:53:43.000 |
|
tasks or things like that |
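
A rough first-order sketch of that two-step procedure (the full method he describes also backpropagates through the inner update; this simplification skips that second derivative, and it assumes the model returns a scalar loss when called on a batch):

```python
# First-order MAML-style meta-training step: adapt on each task's support
# data, then accumulate gradients of the adapted model on the query data
# and use them to update the shared initialization.
import copy
import torch

def meta_step(model, tasks, inner_lr=1e-2, meta_lr=1e-3):
    # tasks: list of (support_batch, query_batch) pairs, e.g. one per language
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support_batch, query_batch in tasks:
        fast = copy.deepcopy(model)
        # First update: one gradient step on the task's own training data
        inner_loss = fast(support_batch)           # assumed to return a scalar loss
        grads = torch.autograd.grad(inner_loss, list(fast.parameters()))
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Second update: evaluate the adapted model on held-out task data
        outer_loss = fast(query_batch)
        outer_grads = torch.autograd.grad(outer_loss, list(fast.parameters()))
        for mg, og in zip(meta_grads, outer_grads):
            mg += og
    with torch.no_grad():                          # move the shared initialization
        for p, mg in zip(model.parameters(), meta_grads):
            p -= meta_lr * mg / len(tasks)
```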
|
|
|
00:53:40.640 --> 00:53:44.920 |
|
um now going back to the original |
|
|
|
00:53:43.000 --> 00:53:46.240 |
|
question the original question is is |
|
|
|
00:53:44.920 --> 00:53:50.000 |
|
there a connection between meta-
|
|
|
00:53:46.240 --> 00:53:50.000 |
|
learning and these uh task
|
|
|
00:53:54.720 --> 00:53:58.440 |
|
vectors I'm not |
|
|
|
00:53:59.079 --> 00:54:03.720 |
|
100% sure about that because I think |
|
|
|
00:54:01.760 --> 00:54:06.599 |
|
these task vectors are generally created
|
|
|
00:54:03.720 --> 00:54:08.480 |
|
post hoc and so they're not like there's
|
|
|
00:54:06.599 --> 00:54:12.680 |
|
no explicit learning step to try to make |
|
|
|
00:54:08.480 --> 00:54:14.440 |
|
them uh you know generalize well um one |
|
|
|
00:54:12.680 --> 00:54:15.960 |
|
one thing that maybe might be |
|
|
|
00:54:14.440 --> 00:54:18.559 |
|
interesting to people this is a paper |
|
|
|
00:54:15.960 --> 00:54:23.040 |
|
that we like literally just put on |
|
|
|
00:54:18.559 --> 00:54:23.040 |
|
arXiv about last week
|
|
|
00:54:25.359 --> 00:54:28.359 |
|
um |
|
|
|
00:54:34.520 --> 00:54:39.880 |
|
and we didn't actually use meta-
|
|
|
00:54:36.400 --> 00:54:41.960 |
|
learning in this uh in this paper um |
|
|
|
00:54:39.880 --> 00:54:44.520 |
|
just because meta-learning actually is
|
|
|
00:54:41.960 --> 00:54:46.160 |
|
hard to implement uh because you need to |
|
|
|
00:54:44.520 --> 00:54:48.680 |
|
do this kind of double differentiation |
|
|
|
00:54:46.160 --> 00:54:50.720 |
|
and can become very very expensive for |
|
|
|
00:54:48.680 --> 00:54:52.839 |
|
large models but we did something a |
|
|
|
00:54:50.720 --> 00:54:55.920 |
|
little bit motivated by |
|
|
|
00:54:52.839 --> 00:54:59.680 |
|
um uh by meta-learning and what we did
|
|
|
00:54:55.920 --> 00:55:01.280 |
|
is we took a pre-trained LM and normally |
|
|
|
00:54:59.680 --> 00:55:04.359 |
|
what you do is something like continued |
|
|
|
00:55:01.280 --> 00:55:06.799 |
|
pre-training on new documents to learn |
|
|
|
00:55:04.359 --> 00:55:10.160 |
|
knowledge from the new documents or |
|
|
|
00:55:06.799 --> 00:55:12.200 |
|
maybe um instruction tuning including |
|
|
|
00:55:10.160 --> 00:55:15.960 |
|
instruction tuning on data on documents |
|
|
|
00:55:12.200 --> 00:55:17.520 |
|
about the kind of uh data that you would |
|
|
|
00:55:15.960 --> 00:55:18.880 |
|
want to be answering questions about so |
|
|
|
00:55:17.520 --> 00:55:20.640 |
|
like let's say you're trying to train a |
|
|
|
00:55:18.880 --> 00:55:23.000 |
|
medical language model you might train |
|
|
|
00:55:20.640 --> 00:55:26.680 |
|
on lots of medical documents but what we |
|
|
|
00:55:23.000 --> 00:55:29.839 |
|
did here is we had a step where we train |
|
|
|
00:55:26.680 --> 00:55:33.720 |
|
in advance
|
|
|
00:55:29.839 --> 00:55:38.079 |
|
on question answer pairs and
|
|
|
00:55:33.720 --> 00:55:40.400 |
|
documents from another domain and then |
|
|
|
00:55:38.079 --> 00:55:43.359 |
|
we have a step after that where we train |
|
|
|
00:55:40.400 --> 00:55:46.400 |
|
on documents from the domain we want to |
|
|
|
00:55:43.359 --> 00:55:48.400 |
|
answer on so like we might train on |
|
|
|
00:55:46.400 --> 00:55:51.079 |
|
Wikipedia question answer pairs and
|
|
|
00:55:48.400 --> 00:55:52.559 |
|
Wikipedia documents and then in the |
|
|
|
00:55:51.079 --> 00:55:54.079 |
|
second step we would train on medical |
|
|
|
00:55:52.559 --> 00:55:56.680 |
|
documents and we demonstrate that |
|
|
|
00:55:54.079 --> 00:55:58.880 |
|
basically this allows the model to do a |
|
|
|
00:55:56.680 --> 00:56:00.880 |
|
better job of question answering over |
|
|
|
00:55:58.880 --> 00:56:03.640 |
|
these uh documents that we fine-tune on
|
|
|
00:56:00.880 --> 00:56:05.000 |
|
over here and so kind of going back to |
|
|
|
00:56:03.640 --> 00:56:06.760 |
|
the meta-learning paper that I talked
|
|
|
00:56:05.000 --> 00:56:08.359 |
|
about before the meta-learning paper
|
|
|
00:56:06.760 --> 00:56:10.640 |
|
tries to get the parameters in a good |
|
|
|
00:56:08.359 --> 00:56:12.559 |
|
space so that after you fine-tune on
|
|
|
00:56:10.640 --> 00:56:15.520 |
|
another data set you do a good job of |
|
|
|
00:56:12.559 --> 00:56:17.799 |
|
that in this paper our motivation is |
|
|
|
00:56:15.520 --> 00:56:20.359 |
|
that the model kind of learns that when |
|
|
|
00:56:17.799 --> 00:56:22.039 |
|
you train on documents you should be |
|
|
|
00:56:20.359 --> 00:56:24.079 |
|
able to answer questions about those |
|
|
|
00:56:22.039 --> 00:56:25.480 |
|
documents and so when you get a new set |
|
|
|
00:56:24.079 --> 00:56:27.200 |
|
of documents it's kind of in a good part |
|
|
|
00:56:25.480 --> 00:56:31.079 |
|
of the parameter space to make that easy |
|
|
|
00:56:27.200 --> 00:56:33.520 |
|
to do so um if that if meta-learning is
|
|
|
00:56:31.079 --> 00:56:34.640 |
|
interesting um there are tutorials on |
|
|
|
00:56:33.520 --> 00:56:37.119 |
|
meta-learning that I could probably
|
|
|
00:56:34.640 --> 00:56:39.599 |
|
share and then um if you're interested |
|
|
|
00:56:37.119 --> 00:56:42.599 |
|
in kind of like learning knowledge from
|
|
|
00:56:39.599 --> 00:56:45.039 |
|
uh learning knowledge
|
|
|
00:56:42.599 --> 00:56:46.079 |
|
from continued pre-training or something |
|
|
|
00:56:45.039 --> 00:56:47.400 |
|
like that you could take a look at this |
|
|
|
00:56:46.079 --> 00:56:49.920 |
|
right there as |
|
|
|
00:56:47.400 --> 00:56:54.480 |
|
well uh |
|
|
|
00:56:49.920 --> 00:56:54.480 |
|
cool any questions about that |
|
|
|
00:56:55.240 --> 00:57:00.880 |
|
or |
|
|
|
00:56:57.599 --> 00:57:02.480 |
|
okay cool I I'll jump on this so anyway |
|
|
|
00:57:00.880 --> 00:57:05.520 |
|
um I talked about several methods for |
|
|
|
00:57:02.480 --> 00:57:07.520 |
|
merging models together um there's a |
|
|
|
00:57:05.520 --> 00:57:09.440 |
|
popular toolkit called mergekit that
|
|
|
00:57:07.520 --> 00:57:10.960 |
|
makes it relatively easy to do this it |
|
|
|
00:57:09.440 --> 00:57:13.280 |
|
implements a lot of the methods that I
|
|
|
00:57:10.960 --> 00:57:17.160 |
|
talked about here including uh the |
|
|
|
00:57:13.280 --> 00:57:19.880 |
|
linear methods um uh the task arithmetic |
|
|
|
00:57:17.160 --> 00:57:23.079 |
|
method and TIES uh so I talked about
|
|
|
00:57:19.880 --> 00:57:25.480 |
|
these there is kind of like an expansion
|
|
|
00:57:23.079 --> 00:57:27.240 |
|
on this so if you want to merge together |
|
|
|
00:57:25.480 --> 00:57:28.760 |
|
models it's relatively easy to do from a
|
|
|
00:57:27.240 --> 00:57:30.760 |
|
software standpoint so you can
|
|
|
00:57:28.760 --> 00:57:35.119 |
|
take a look at |
|
|
|
00:57:30.760 --> 00:57:38.000 |
|
that um another really simple thing uh |
|
|
|
00:57:35.119 --> 00:57:39.880 |
|
is uh distilling ensembles and so we |
|
|
|
00:57:38.000 --> 00:57:43.039 |
|
already talked about distillation the |
|
|
|
00:57:39.880 --> 00:57:45.599 |
|
idea is simple um |
|
|
|
00:57:43.039 --> 00:57:47.680 |
|
you so parameter averaging only really |
|
|
|
00:57:45.599 --> 00:57:49.200 |
|
works for models within the same run uh |
|
|
|
00:57:47.680 --> 00:57:51.760 |
|
same model architecture same |
|
|
|
00:57:49.200 --> 00:57:54.280 |
|
initialization so knowledge distillation |
|
|
|
00:57:51.760 --> 00:57:55.559 |
|
uh trains a model to copy the ensemble
|
|
|
00:57:54.280 --> 00:57:57.359 |
|
and so it tries to match the |
|
|
|
00:57:55.559 --> 00:57:59.119 |
|
distribution over the predicted words |
|
|
|
00:57:57.359 --> 00:58:00.760 |
|
for an |
|
|
|
00:57:59.119 --> 00:58:05.319 |
|
ensemble
|
|
|
00:58:00.760 --> 00:58:07.799 |
|
um and so this allows the model to make |
|
|
|
00:58:05.319 --> 00:58:09.079 |
|
the same you know good predictions as |
|
|
|
00:58:07.799 --> 00:58:11.079 |
|
the ensemble make the same bad
|
|
|
00:58:09.079 --> 00:58:12.799 |
|
predictions as the ensemble it just allows
|
|
|
00:58:11.079 --> 00:58:14.799 |
|
you to learn more efficiently just like |
|
|
|
00:58:12.799 --> 00:58:16.680 |
|
distillation does in general and they |
|
|
|
00:58:14.799 --> 00:58:18.960 |
|
actually model distillation the original |
|
|
|
00:58:16.680 --> 00:58:22.240 |
|
motivation for it when Jeff Hinton |
|
|
|
00:58:18.960 --> 00:58:24.599 |
|
proposed it in 2015 in in this paper was |
|
|
|
00:58:22.240 --> 00:58:25.680 |
|
to copy an ensemble now we use it for a |
|
|
|
00:58:24.599 --> 00:58:27.039 |
|
lot of other things like in the |
|
|
|
00:58:25.680 --> 00:58:31.160 |
|
distillation |
|
|
|
00:58:27.039 --> 00:58:31.160 |
|
like we did earlier in the class but that was the
|
|
|
00:58:34.119 --> 00:58:39.599 |
|
original motivation
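
A minimal sketch of distilling an ensemble (the models, data batch, and optimizer are placeholders; this assumes each model returns logits over the vocabulary for a batch):

```python
# Train the student to match the averaged output distribution of the
# teacher ensemble, using KL divergence as the distillation loss.
import torch
import torch.nn.functional as F

def distill_step(student, teachers, batch, optimizer, temperature=1.0):
    with torch.no_grad():
        # Average the teachers' predicted distributions over the vocabulary
        teacher_probs = torch.stack(
            [F.softmax(t(batch) / temperature, dim=-1) for t in teachers]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(batch) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```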
|
|
|
00:58:35.760 --> 00:58:42.640 |
|
um next I'll move on to sparse mixture |
|
|
|
00:58:39.599 --> 00:58:44.960 |
|
of experts models and this is really |
|
|
|
00:58:42.640 --> 00:58:47.599 |
|
important uh this is used in a lot of |
|
|
|
00:58:44.960 --> 00:58:51.319 |
|
modern models it's allegedly used in GPT-
|
|
|
00:58:47.599 --> 00:58:53.160 |
|
4 um and it is uh definitely used in |
|
|
|
00:58:51.319 --> 00:58:55.280 |
|
Mixtral uh which is kind of one of the
|
|
|
00:58:53.160 --> 00:58:58.039 |
|
state-of-the-art open models so I think
|
|
|
00:58:55.280 --> 00:58:58.039 |
|
it's a good thing to know |
|
|
|
00:58:59.880 --> 00:59:05.720 |
|
um what these do is they take advantage |
|
|
|
00:59:02.680 --> 00:59:08.160 |
|
of sparse computation so if you think |
|
|
|
00:59:05.720 --> 00:59:09.359 |
|
about what happens when you do a scalar |
|
|
|
00:59:08.160 --> 00:59:12.760 |
|
tensor |
|
|
|
00:59:09.359 --> 00:59:14.720 |
|
multiply where the scalar is zero and
|
|
|
00:59:12.760 --> 00:59:17.160 |
|
basically the result of the entire |
|
|
|
00:59:14.720 --> 00:59:19.680 |
|
resulting tensor is guaranteed to be |
|
|
|
00:59:17.160 --> 00:59:21.440 |
|
zero and so you don't even need to do |
|
|
|
00:59:19.680 --> 00:59:25.440 |
|
the computation you don't need to even |
|
|
|
00:59:21.440 --> 00:59:27.520 |
|
bother um and so this manifests itself |
|
|
|
00:59:25.440 --> 00:59:30.240 |
|
in a bunch of different places in modern |
|
|
|
00:59:27.520 --> 00:59:35.000 |
|
models um the first one could be single |
|
|
|
00:59:30.240 --> 00:59:38.400 |
|
rows in a matrix multiply so um if you |
|
|
|
00:59:35.000 --> 00:59:40.480 |
|
have a big matrix multiply like
|
|
|
00:59:38.400 --> 00:59:44.240 |
|
this |
|
|
|
00:59:40.480 --> 00:59:47.880 |
|
um or matrix-vector multiply like this
|
|
|
00:59:44.240 --> 00:59:50.200 |
|
um and some of the rows are zero then uh |
|
|
|
00:59:47.880 --> 00:59:54.559 |
|
that that's one place where it |
|
|
|
00:59:50.200 --> 00:59:58.200 |
|
happens um you can also uh do this |
|
|
|
00:59:54.559 --> 01:00:00.119 |
|
with zeros in not just rows but
|
|
|
00:59:58.200 --> 01:00:02.200 |
|
also larger |
|
|
|
01:00:00.119 --> 01:00:05.799 |
|
tensors um and you can even do it in |
|
|
|
01:00:02.200 --> 01:00:07.599 |
|
whole models in an ensemble so um the |
|
|
|
01:00:05.799 --> 01:00:10.799 |
|
first one this can be optimized |
|
|
|
01:00:07.599 --> 01:00:13.880 |
|
automatically by the GPU um the second one
|
|
|
01:00:10.799 --> 01:00:15.400 |
|
this often occurs in uh sparse mixture |
|
|
|
01:00:13.880 --> 01:00:18.000 |
|
of experts |
|
|
|
01:00:15.400 --> 01:00:19.400 |
|
models and the final one uh basically |
|
|
|
01:00:18.000 --> 01:00:21.880 |
|
you just don't need to even use the |
|
|
|
01:00:19.400 --> 01:00:24.119 |
|
model in an ensemble so if you somehow
|
|
|
01:00:21.880 --> 01:00:25.640 |
|
optimize an ensemble and it turns out |
|
|
|
01:00:24.119 --> 01:00:27.599 |
|
that the probability of one of the |
|
|
|
01:00:25.640 --> 01:00:29.680 |
|
models is zero you just can throw it out |
|
|
|
01:00:27.599 --> 01:00:33.640 |
|
and not use it at |
|
|
|
01:00:29.680 --> 01:00:36.839 |
|
all so um GPU level sparsity |
|
|
|
01:00:33.640 --> 01:00:39.839 |
|
support uh Nvidia GPUs support a bunch
|
|
|
01:00:36.839 --> 01:00:42.559 |
|
of different types of sparsity and uh |
|
|
|
01:00:39.839 --> 01:00:44.599 |
|
the people the wonderful people at |
|
|
|
01:00:42.559 --> 01:00:48.280 |
|
Nvidia have worked hard to make the |
|
|
|
01:00:44.599 --> 01:00:51.319 |
|
support uh work to some extent anyway |
|
|
|
01:00:48.280 --> 01:00:53.119 |
|
and uh there's a library called cuSPARSE and
|
|
|
01:00:51.319 --> 01:00:56.119 |
|
this is used in PyTorch and all these
|
|
|
01:00:53.119 --> 01:00:58.280 |
|
other things as well and just to give |
|
|
|
01:00:56.119 --> 01:01:01.240 |
|
example a vector-matrix multiply with a
|
|
|
01:00:58.280 --> 01:01:03.240 |
|
sparse vector um such as one that comes
|
|
|
01:01:01.240 --> 01:01:06.160 |
|
from a ReLU activation basically what
|
|
|
01:01:03.240 --> 01:01:09.319 |
|
happens is let's say you only have three |
|
|
|
01:01:06.160 --> 01:01:11.799 |
|
uh parts of this vector that are active
|
|
|
01:01:09.319 --> 01:01:15.240 |
|
um you actually just don't need to
|
|
|
01:01:11.799 --> 01:01:18.200 |
|
uh calculate any of the columns here so |
|
|
|
01:01:15.240 --> 01:01:19.720 |
|
that makes your life relatively |
|
|
|
01:01:18.200 --> 01:01:22.880 |
|
easy |
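
A tiny sketch of why a sparse activation vector saves work in a vector-matrix multiply: only the rows of the weight matrix paired with nonzero entries of the vector contribute, so the rest can be skipped entirely (the sizes here are arbitrary):

```python
import torch

d, k = 1024, 4096
W = torch.randn(d, k)
x = torch.relu(torch.randn(d))        # ReLU output: many entries are exactly 0
x[torch.rand(d) < 0.9] = 0.0          # make it very sparse for the example

dense = x @ W                          # full computation

nz = x.nonzero(as_tuple=True)[0]       # indices of the active units
sparse = x[nz] @ W[nz]                 # only touch the needed rows of W
print(torch.allclose(dense, sparse, atol=1e-5))  # same result, less work
```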
|
|
|
01:01:19.720 --> 01:01:24.480 |
|
um but the specific thing that I wanted |
|
|
|
01:01:22.880 --> 01:01:26.640 |
|
to talk about is a sparsely gated |
|
|
|
01:01:24.480 --> 01:01:29.799 |
|
mixture of experts layer because this is |
|
|
|
01:01:26.640 --> 01:01:33.960 |
|
uh what is used in Mixtral and probably uh
|
|
|
01:01:29.799 --> 01:01:38.200 |
|
the GPT models as well and what you do |
|
|
|
01:01:33.960 --> 01:01:41.760 |
|
is you have a feed-forward network and
|
|
|
01:01:38.200 --> 01:01:41.760 |
|
normally a feed-forward network in a
|
|
|
01:01:43.640 --> 01:01:52.119 |
|
Transformer is this like really wide |
|
|
|
01:01:49.319 --> 01:01:57.240 |
|
thing this huge wide feed-forward
|
|
|
01:01:52.119 --> 01:01:59.359 |
|
network um that you use to extract a
|
|
|
01:01:57.240 --> 01:02:00.520 |
|
whole bunch of features at each layer |
|
|
|
01:01:59.359 --> 01:02:02.640 |
|
and that's where a lot of the |
|
|
|
01:02:00.520 --> 01:02:05.799 |
|
computation in a Transformer
|
|
|
01:02:02.640 --> 01:02:10.079 |
|
happens um and what sparsely gated |
|
|
|
01:02:05.799 --> 01:02:13.079 |
|
mixture of uh experts layers do is they |
|
|
|
01:02:10.079 --> 01:02:15.640 |
|
first have this gating network here
|
|
|
01:02:13.079 --> 01:02:17.880 |
|
where it calculates uh mixture |
|
|
|
01:02:15.640 --> 01:02:21.119 |
|
probability but the mixture probability |
|
|
|
01:02:17.880 --> 01:02:23.039 |
|
is zero for many or most of the
|
|
|
01:02:21.119 --> 01:02:26.880 |
|
parts of this feed-forward
|
|
|
01:02:23.039 --> 01:02:28.760 |
|
network and so for the ones where it's
|
|
|
01:02:26.880 --> 01:02:31.319 |
|
zero you just don't calculate |
|
|
|
01:02:28.760 --> 01:02:34.319 |
|
it um and then when you mix them |
|
|
|
01:02:31.319 --> 01:02:37.359 |
|
together you use the mixture weights and
|
|
|
01:02:34.319 --> 01:02:39.520 |
|
this is actually really simple um it's |
|
|
|
01:02:37.359 --> 01:02:42.400 |
|
like several lines of PyTorch code maybe
|
|
|
01:02:39.520 --> 01:02:45.319 |
|
like seven or eight lines of PyTorch
|
|
|
01:02:42.400 --> 01:02:48.720 |
|
code but the basic uh idea here is you |
|
|
|
01:02:45.319 --> 01:02:50.599 |
|
have um this gating function where you |
|
|
|
01:02:48.720 --> 01:02:52.799 |
|
calculate the gating function based on |
|
|
|
01:02:50.599 --> 01:02:53.640 |
|
the input and then you have this keep |
|
|
|
01:02:52.799 --> 01:02:56.720 |
|
top |
|
|
|
01:02:53.640 --> 01:02:58.319 |
|
K uh operation and then you take the |
|
|
|
01:02:56.720 --> 01:03:02.559 |
|
softmax over
|
|
|
01:02:58.319 --> 01:03:04.359 |
|
this and the keep top K operation is if |
|
|
|
01:03:02.559 --> 01:03:06.160 |
|
the value is within the top K you just |
|
|
|
01:03:04.359 --> 01:03:07.319 |
|
keep it and if it's not in the top K you |
|
|
|
01:03:06.160 --> 01:03:11.960 |
|
don't keep |
|
|
|
01:03:07.319 --> 01:03:13.119 |
|
it so that that's all basically
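
A simplified per-token sketch in the spirit of such a layer (real implementations add batching and load-balancing tricks; the sizes here are arbitrary): the gate scores the experts, only the top-k are kept and softmaxed, and only those experts are actually computed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model, d_ff, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                      # x: [d_model], one token for simplicity
        logits = self.gate(x)                  # gating scores for every expert
        topk = logits.topk(self.k)             # keep top-k, drop the rest
        weights = F.softmax(topk.values, dim=-1)
        out = torch.zeros_like(x)
        for w, idx in zip(weights, topk.indices):
            out = out + w * self.experts[int(idx)](x)   # only k experts are run
        return out
```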
|
|
|
01:03:11.960 --> 01:03:14.760 |
|
but what's great about this is then you
|
|
|
01:03:13.119 --> 01:03:17.799 |
|
don't have to calculate like many of |
|
|
|
01:03:14.760 --> 01:03:20.119 |
|
them and so for example um uh if you |
|
|
|
01:03:17.799 --> 01:03:22.640 |
|
keep the top two out of eight you reduce |
|
|
|
01:03:20.119 --> 01:03:26.760 |
|
your calcul uh your computation by four |
|
|
|
01:03:22.640 --> 01:03:30.000 |
|
times for this part so |
|
|
|
01:03:26.760 --> 01:03:33.000 |
|
um any any questions |
|
|
|
01:03:30.000 --> 01:03:33.000 |
|
here |
|
|
|
01:03:54.720 --> 01:03:57.720 |
|
yeah |
|
|
|
01:04:03.160 --> 01:04:07.039 |
|
um sorry what what exactly do you mean |
|
|
|
01:04:05.559 --> 01:04:09.400 |
|
by easy to parallelize are you talking
|
|
|
01:04:07.039 --> 01:04:12.400 |
|
about like a GPU can calculate lots of |
|
|
|
01:04:09.400 --> 01:04:15.680 |
|
things at the same time yeah so I think |
|
|
|
01:04:12.400 --> 01:04:17.720 |
|
if you have a very small model um you're |
|
|
|
01:04:15.680 --> 01:04:21.680 |
|
actually not going to get as much from |
|
|
|
01:04:17.720 --> 01:04:25.079 |
|
this uh because you're not you're |
|
|
|
01:04:21.680 --> 01:04:26.359 |
|
essentially not bound by computation uh |
|
|
|
01:04:25.079 --> 01:04:27.880 |
|
like you're bound more by memory |
|
|
|
01:04:26.359 --> 01:04:29.079 |
|
movement and the GPU and other stuff |
|
|
|
01:04:27.880 --> 01:04:30.520 |
|
like that but once you start getting up |
|
|
|
01:04:29.079 --> 01:04:32.920 |
|
to the bigger models you actually are |
|
|
|
01:04:30.520 --> 01:04:34.640 |
|
bound by computation so reducing your |
|
|
|
01:04:32.920 --> 01:04:37.039 |
|
computation by four actually is a big |
|
|
|
01:04:34.640 --> 01:04:42.559 |
|
one so it's a really really good |
|
|
|
01:04:37.039 --> 01:04:42.559 |
|
question um any any other questions |
|
|
|
01:04:44.039 --> 01:04:50.520 |
|
yeah so so this will |
|
|
|
01:04:48.240 --> 01:04:53.160 |
|
um probably |
|
|
|
01:04:50.520 --> 01:04:56.039 |
|
be |
|
|
|
01:04:53.160 --> 01:04:59.279 |
|
just oh sorry I I don't have this here |
|
|
|
01:04:56.039 --> 01:05:01.760 |
|
but this will often be a linear layer
|
|
|
01:04:59.279 --> 01:05:01.760 |
|
followed by a |
|
|
|
01:05:03.039 --> 01:05:08.000 |
|
softmax um or or actually no it doesn't
|
|
|
01:05:06.359 --> 01:05:10.520 |
|
even need to be followed by softmax it
|
|
|
01:05:08.000 --> 01:05:10.520 |
|
could just be a |
|
|
|
01:05:12.520 --> 01:05:17.920 |
|
linear and I think actually I didn't put |
|
|
|
01:05:14.960 --> 01:05:19.680 |
|
it on this slide but I have the in the |
|
|
|
01:05:17.920 --> 01:05:21.359 |
|
references on the website I have the |
|
|
|
01:05:19.680 --> 01:05:22.760 |
|
actual implementation in Mixtral you
|
|
|
01:05:21.359 --> 01:05:25.279 |
|
can go in and look at it it's really |
|
|
|
01:05:22.760 --> 01:05:27.160 |
|
simple um one thing I didn't put on here |
|
|
|
01:05:25.279 --> 01:05:31.000 |
|
um which actually uh relates to the |
|
|
|
01:05:27.160 --> 01:05:32.920 |
|
question before is hardware-wise this
|
|
|
01:05:31.000 --> 01:05:34.799 |
|
implementation is tricky if you do |
|
|
|
01:05:32.920 --> 01:05:37.599 |
|
batching um and the reason why it's
|
|
|
01:05:34.799 --> 01:05:39.480 |
|
tricky if you do batching is because um
|
|
|
01:05:37.599 --> 01:05:43.000 |
|
different experts will be active for |
|
|
|
01:05:39.480 --> 01:05:45.240 |
|
different like parts of the batch so if |
|
|
|
01:05:43.000 --> 01:05:48.559 |
|
you do that you need to do some tricky |
|
|
|
01:05:45.240 --> 01:05:48.559 |
|
stuff uh there's |
|
|
|
01:05:54.640 --> 01:05:57.640 |
|
this |
|
|
|
01:06:03.240 --> 01:06:12.039 |
|
like so much of AI research nowadays uh |
|
|
|
01:06:08.200 --> 01:06:12.039 |
|
the best resource for this is social |
|
|
|
01:06:13.680 --> 01:06:20.000 |
|
media so this is uh there's a kind of |
|
|
|
01:06:16.880 --> 01:06:23.240 |
|
interesting discussion of |
|
|
|
01:06:20.000 --> 01:06:25.359 |
|
this um if you search for like gpt-fast
|
|
|
01:06:23.240 --> 01:06:28.400 |
|
Mixtral on Twitter it it talks about
|
|
|
01:06:25.359 --> 01:06:30.200 |
|
this but basically there's a bunch of uh |
|
|
|
01:06:28.400 --> 01:06:32.680 |
|
little little things you need to pay |
|
|
|
01:06:30.200 --> 01:06:34.760 |
|
attention to um and ways that you can do |
|
|
|
01:06:32.680 --> 01:06:36.960 |
|
tricks to make this work fast on GPU |
|
|
|
01:06:34.760 --> 01:06:40.000 |
|
which also kind of uh addresses the |
|
|
|
01:06:36.960 --> 01:06:42.359 |
|
concern so you can look for Horace He's
|
|
|
01:06:40.000 --> 01:06:44.200 |
|
discussion |
|
|
|
01:06:42.359 --> 01:06:46.680 |
|
of this
|
|
|
01:06:44.200 --> 01:06:49.000 |
|
cool |
|
|
|
01:06:46.680 --> 01:06:50.799 |
|
um so the final thing I'd like to talk |
|
|
|
01:06:49.000 --> 01:06:52.480 |
|
about in the last 10 minutes is pipeline |
|
|
|
01:06:50.799 --> 01:06:55.359 |
|
systems |
|
|
|
01:06:52.480 --> 01:06:57.039 |
|
um and pipeline systems are systems |
|
|
|
01:06:55.359 --> 01:07:00.279 |
|
where we |
|
|
|
01:06:57.039 --> 01:07:02.319 |
|
have models that basically the output of |
|
|
|
01:07:00.279 --> 01:07:05.319 |
|
one model becomes the input of another |
|
|
|
01:07:02.319 --> 01:07:05.319 |
|
model |
|
|
|
01:07:05.599 --> 01:07:10.359 |
|
and to give an example of this a |
|
|
|
01:07:08.200 --> 01:07:13.480 |
|
cascaded system is basically a system |
|
|
|
01:07:10.359 --> 01:07:15.119 |
|
like this where you uh take the output |
|
|
|
01:07:13.480 --> 01:07:16.960 |
|
of one system and then you feed it into |
|
|
|
01:07:15.119 --> 01:07:19.640 |
|
the input of another system so a very |
|
|
|
01:07:16.960 --> 01:07:22.880 |
|
stereotypical example of this is speech
|
|
|
01:07:19.640 --> 01:07:25.559 |
|
translation um where you have speech and
|
|
|
01:07:22.880 --> 01:07:27.720 |
|
then you uh do speech recognition into |
|
|
|
01:07:25.559 --> 01:07:29.319 |
|
text and then text you do machine |
|
|
|
01:07:27.720 --> 01:07:32.160 |
|
translation into another |
|
|
|
01:07:29.319 --> 01:07:33.920 |
|
language |
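
A minimal sketch of the cascade, where transcribe and translate are placeholders for whatever ASR and MT systems you plug in:

```python
# Cascaded speech translation: the output of the ASR model becomes the
# input of the MT model.
def transcribe(audio):
    raise NotImplementedError  # placeholder for a speech recognizer

def translate(text, target_lang="ja"):
    raise NotImplementedError  # placeholder for a machine translation model

def cascaded_speech_translation(audio):
    source_text = transcribe(audio)            # stage 1: speech -> source text
    return translate(source_text)              # stage 2: source text -> target text
```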
|
|
|
01:07:32.160 --> 01:07:36.440 |
|
and |
|
|
|
01:07:33.920 --> 01:07:39.039 |
|
um one of the frustrating things about |
|
|
|
01:07:36.440 --> 01:07:43.000 |
|
speech translation is these systems are |
|
|
|
01:07:39.039 --> 01:07:45.799 |
|
stubbornly better uh for a long time |
|
|
|
01:07:43.000 --> 01:07:47.680 |
|
than many systems that try to do end to |
|
|
|
01:07:45.799 --> 01:07:49.960 |
|
end like speech to text in another |
|
|
|
01:07:47.680 --> 01:07:52.160 |
|
language there's a couple reasons for |
|
|
|
01:07:49.960 --> 01:07:54.440 |
|
this does anyone have an idea why what |
|
|
|
01:07:52.160 --> 01:07:57.039 |
|
one of those reasons might |
|
|
|
01:07:54.440 --> 01:07:58.839 |
|
be |
|
|
|
01:07:57.039 --> 01:08:01.559 |
|
yeah the |
|
|
|
01:07:58.839 --> 01:08:05.279 |
|
data |
|
|
|
01:08:01.559 --> 01:08:08.680 |
|
exactly so data data availability
|
|
|
01:08:05.279 --> 01:08:10.920 |
|
is way better for speech to text in the |
|
|
|
01:08:08.680 --> 01:08:13.319 |
|
same language and text to text in |
|
|
|
01:08:10.920 --> 01:08:15.720 |
|
another language than it is for uh |
|
|
|
01:08:13.319 --> 01:08:17.759 |
|
speech to text in another language
|
|
|
01:08:15.720 --> 01:08:19.319 |
|
because there just aren't large data |
|
|
|
01:08:17.759 --> 01:08:21.679 |
|
sets that have speech and text in many |
|
|
|
01:08:19.319 --> 01:08:25.719 |
|
languages so there's a bunch of tricks |
|
|
|
01:08:21.679 --> 01:08:31.759 |
|
that you can do uh to you know fix this |
|
|
|
01:08:25.719 --> 01:08:34.239 |
|
but still it it's uh you know uh tricky |
|
|
|
01:08:31.759 --> 01:08:36.120 |
|
and there's a couple other reasons |
|
|
|
01:08:34.239 --> 01:08:38.159 |
|
another reason is like actually speech |
|
|
|
01:08:36.120 --> 01:08:39.319 |
|
to text in the same language is just a |
|
|
|
01:08:38.159 --> 01:08:42.520 |
|
much more |
|
|
|
01:08:39.319 --> 01:08:45.359 |
|
straightforward task um and so it's a |
|
|
|
01:08:42.520 --> 01:08:47.839 |
|
bit easier to learn another thing is |
|
|
|
01:08:45.359 --> 01:08:50.839 |
|
interpretability and the reason why |
|
|
|
01:08:47.839 --> 01:08:52.120 |
|
interpretability is important is |
|
|
|
01:08:50.839 --> 01:08:54.920 |
|
basically |
|
|
|
01:08:52.120 --> 01:08:56.640 |
|
like if I'm talking to you in a |
|
|
|
01:08:54.920 --> 01:08:58.000 |
|
different language like you speak a |
|
|
|
01:08:56.640 --> 01:09:00.319 |
|
different language I'm talking to you |
|
|
|
01:08:58.000 --> 01:09:02.679 |
|
through a speech translation system I |
|
|
|
01:09:00.319 --> 01:09:05.799 |
|
actually want to know if the speech |
|
|
|
01:09:02.679 --> 01:09:07.600 |
|
recognition worked because I know if the |
|
|
|
01:09:05.799 --> 01:09:08.920 |
|
speech recognition didn't work then I'll |
|
|
|
01:09:07.600 --> 01:09:10.440 |
|
I'm pretty sure that the translation |
|
|
|
01:09:08.920 --> 01:09:11.920 |
|
didn't work either right and I can |
|
|
|
01:09:10.440 --> 01:09:14.880 |
|
verify the speech recognition but I |
|
|
|
01:09:11.920 --> 01:09:16.199 |
|
can't verify the translation so um
|
|
|
01:09:14.880 --> 01:09:18.279 |
|
there's other reasons why you might want |
|
|
|
01:09:16.199 --> 01:09:20.239 |
|
a cascade system other than just like
|
|
|
01:09:18.279 --> 01:09:22.440 |
|
accuracy or or other things like that |
|
|
|
01:09:20.239 --> 01:09:25.880 |
|
but this is a thing we definitely |
|
|
|
01:09:22.440 --> 01:09:29.120 |
|
do um there's another idea of stacking |
|
|
|
01:09:25.880 --> 01:09:32.560 |
|
and stacking is um very similar to
|
|
|
01:09:29.120 --> 01:09:34.560 |
|
cascading but it allows you to take two
|
|
|
01:09:32.560 --> 01:09:37.120 |
|
different models for the same task but |
|
|
|
01:09:34.560 --> 01:09:39.400 |
|
with predictions in different ways so |
|
|
|
01:09:37.120 --> 01:09:41.120 |
|
just taking another um |
|
|
|
01:09:39.400 --> 01:09:43.600 |
|
example |
|
|
|
01:09:41.120 --> 01:09:45.040 |
|
uh actually maybe maybe ignore the |
|
|
|
01:09:43.600 --> 01:09:47.159 |
|
example I have here but we could just |
|
|
|
01:09:45.040 --> 01:09:50.679 |
|
take the example of speech uh |
|
|
|
01:09:47.159 --> 01:09:53.000 |
|
translation um the speech translation |
|
|
|
01:09:50.679 --> 01:09:55.760 |
|
model uh we would first do speech |
|
|
|
01:09:53.000 --> 01:09:57.520 |
|
recognition into like let's say English |
|
|
|
01:09:55.760 --> 01:09:59.640 |
|
and then we would do translation and the |
|
|
|
01:09:57.520 --> 01:10:03.840 |
|
input to the translation model would be |
|
|
|
01:09:59.640 --> 01:10:05.560 |
|
speech in English um text in English and |
|
|
|
01:10:03.840 --> 01:10:07.320 |
|
we would generate the output in Japanese |
|
|
|
01:10:05.560 --> 01:10:10.080 |
|
so it would take both the speech and the |
|
|
|
01:10:07.320 --> 01:10:12.920 |
|
text uh when it was doing translation |
|
|
|
01:10:10.080 --> 01:10:14.840 |
|
and that would allow it to number one |
|
|
|
01:10:12.920 --> 01:10:17.719 |
|
basically get a second opinion about |
|
|
|
01:10:14.840 --> 01:10:21.080 |
|
whether the transcription was correct |
|
|
|
01:10:17.719 --> 01:10:23.800 |
|
but also like let's say there was |
|
|
|
01:10:21.080 --> 01:10:26.440 |
|
some unique information that only |
|
|
|
01:10:23.800 --> 01:10:29.480 |
|
appeared in the |
|
|
|
01:10:26.440 --> 01:10:31.679 |
|
um uh that only appeared in the speech |
|
|
|
01:10:29.480 --> 01:10:34.840 |
|
so just to give an example I read the |
|
|
|
01:10:31.679 --> 01:10:37.040 |
|
book I read the book are both |
|
|
|
01:10:34.840 --> 01:10:38.640 |
|
transcribed exactly the same way and |
|
|
|
01:10:37.040 --> 01:10:41.679 |
|
they're different translations obviously |
|
|
|
01:10:38.640 --> 01:10:42.920 |
|
because one is uh you know present or |
|
|
|
01:10:41.679 --> 01:10:45.560 |
|
present tense and the other is past |
|
|
|
01:10:42.920 --> 01:10:47.239 |
|
tense so there are examples where uh |
|
|
|
01:10:45.560 --> 01:10:51.600 |
|
adding a cascaded system would lose |
|
|
|
01:10:47.239 --> 01:10:51.600 |
|
information and a stacked system would |
|
|
|
01:10:53.400 --> 01:10:57.679 |
|
another thing is the idea of refinement I
|
|
|
01:10:56.440 --> 01:10:59.480 |
|
think this is actually really |
|
|
|
01:10:57.679 --> 01:11:01.000 |
|
interesting because large language |
|
|
|
01:10:59.480 --> 01:11:03.920 |
|
models have opened up a whole bunch of |
|
|
|
01:11:01.000 --> 01:11:05.640 |
|
possibilities for us in this space um |
|
|
|
01:11:03.920 --> 01:11:07.760 |
|
this is like cascading and stacking but |
|
|
|
01:11:05.640 --> 01:11:09.640 |
|
it can be done multiple times and it
|
|
|
01:11:07.760 --> 01:11:12.960 |
|
can be done multiple times with the same |
|
|
|
01:11:09.640 --> 01:11:15.040 |
|
model so um we have an input we feed it |
|
|
|
01:11:12.960 --> 01:11:17.320 |
|
into the model we get an output and then |
|
|
|
01:11:15.040 --> 01:11:19.360 |
|
we feed the output back in and gradually |
|
|
|
01:11:17.320 --> 01:11:23.080 |
|
refine it and make it better and |
|
|
|
01:11:19.360 --> 01:11:24.760 |
|
better and the first time this was done |
|
|
|
01:11:23.080 --> 01:11:27.440 |
|
in neural networks was through something |
|
|
|
01:11:24.760 --> 01:11:29.679 |
|
called deliberation networks and basically
|
|
|
01:11:27.440 --> 01:11:32.360 |
|
deliberation networks what they do is |
|
|
|
01:11:29.679 --> 01:11:33.760 |
|
they uh take in an output and then they |
|
|
|
01:11:32.360 --> 01:11:34.920 |
|
just gradually refine it to make it |
|
|
|
01:11:33.760 --> 01:11:37.280 |
|
better and better they used a |
|
|
|
01:11:34.920 --> 01:11:39.159 |
|
reinforcement learning algorithm to do |
|
|
|
01:11:37.280 --> 01:11:41.159 |
|
this where you generated the output and |
|
|
|
01:11:39.159 --> 01:11:43.600 |
|
then um improved it
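A minimal sketch of that two-pass deliberation idea, with `first_pass` and `second_pass` as stand-ins for trained models (this is not the original paper's implementation, and the reinforcement-learning training objective is omitted):

```python
# Deliberation-style iterative refinement (sketch only).
# A first-pass model produces a draft; a second-pass model conditions on the
# source AND the draft and emits a revised output; this can be repeated.

def first_pass(source):
    return "rough first-pass translation"      # placeholder model

def second_pass(source, draft):
    return draft + " (refined)"                # placeholder refinement model

def deliberate(source, n_rounds=2):
    output = first_pass(source)
    for _ in range(n_rounds):
        output = second_pass(source, output)   # feed the output back in
    return output

if __name__ == "__main__":
    print(deliberate("source sentence"))
```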
|
|
|
01:11:41.159 --> 01:11:46.719 |
|
another thing that's really popular
|
|
|
01:11:43.600 --> 01:11:48.280 |
|
nowadays is uh diffusion models and I |
|
|
|
01:11:46.719 --> 01:11:50.400 |
|
haven't quite decided whether I'll have |
|
|
|
01:11:48.280 --> 01:11:51.880 |
|
time to cover diffusion models in depth |
|
|
|
01:11:50.400 --> 01:11:54.880 |
|
but basically the way a diffusion model |
|
|
|
01:11:51.880 --> 01:11:55.880 |
|
works is very similar you start out with |
|
|
|
01:11:54.880 --> 01:11:57.239 |
|
nothing |
|
|
|
01:11:55.880 --> 01:11:59.840 |
|
and then you gradually make it better |
|
|
|
01:11:57.239 --> 01:12:01.360 |
|
and better um the key difference between |
|
|
|
01:11:59.840 --> 01:12:03.520 |
|
deliberation networks and diffusion |
|
|
|
01:12:01.360 --> 01:12:05.520 |
|
models is diffusion models um you can |
|
|
|
01:12:03.520 --> 01:12:08.600 |
|
train from scratch by basically noising |
|
|
|
01:12:05.520 --> 01:12:10.600 |
|
the input uh applying noise to the input |
|
|
|
01:12:08.600 --> 01:12:12.880 |
|
um in training very efficiently and |
|
|
|
01:12:10.600 --> 01:12:15.639 |
|
these are very widely used |
|
|
|
01:12:12.880 --> 01:12:18.199 |
|
in image generation they're not super |
|
|
|
01:12:15.639 --> 01:12:20.120 |
|
widely used in text just because regular |
|
|
|
01:12:18.199 --> 01:12:22.840 |
|
autoregressive models are so good for
|
|
|
01:12:20.120 --> 01:12:24.159 |
|
text um but there are a few efforts to |
|
|
|
01:12:22.840 --> 01:12:26.880 |
|
do that
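As a very rough illustration of the training trick just mentioned, here is a tiny numpy sketch of the forward noising step used in standard DDPM-style training; the noise schedule values are illustrative and the denoising network is a placeholder:

```python
import numpy as np

# Sketch of diffusion training efficiency: we can jump to any noise level t
# in closed form and train a network to predict the noise that was added,
# instead of running the whole refinement chain during training.

alphas_bar = np.linspace(0.99, 0.01, 100)  # illustrative noise schedule

def noise_sample(x0, t, rng):
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

def predict_noise(xt, t):
    return np.zeros_like(xt)  # placeholder for the denoising network

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(8)     # stands in for an image or an embedding
    xt, eps = noise_sample(x0, t=50, rng=rng)
    loss = np.mean((predict_noise(xt, 50) - eps) ** 2)  # noise-prediction loss
    print("loss at t=50:", loss)
```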
|
|
|
01:12:24.159 --> 01:12:30.920 |
|
and then a final one is self-refine
|
|
|
01:12:26.880 --> 01:12:35.120 |
|
and the idea behind self-refine
|
|
|
01:12:30.920 --> 01:12:39.400 |
|
is you um actually maybe I can open the |
|
|
|
01:12:35.120 --> 01:12:39.400 |
|
paper because the paper has a good |
|
|
|
01:12:54.120 --> 01:12:58.239 |
|
figure |
|
|
|
01:12:56.280 --> 01:13:02.679 |
|
actually I thought it had a good |
|
|
|
01:12:58.239 --> 01:13:05.600 |
|
figure um yeah so maybe this is a figure |
|
|
|
01:13:02.679 --> 01:13:08.639 |
|
um so basically uh what you do is you |
|
|
|
01:13:05.600 --> 01:13:10.639 |
|
feed in the input you generate an output |
|
|
|
01:13:08.639 --> 01:13:12.679 |
|
and then you ask the model to give you |
|
|
|
01:13:10.639 --> 01:13:15.520 |
|
feedback on the output and say yes this |
|
|
|
01:13:12.679 --> 01:13:16.760 |
|
output is good or um like let's say |
|
|
|
01:13:15.520 --> 01:13:19.679 |
|
you're doing code generation it could |
|
|
|
01:13:16.760 --> 01:13:21.920 |
|
say no this output has an error in it um |
|
|
|
01:13:19.679 --> 01:13:24.719 |
|
this is a problem with your output and |
|
|
|
01:13:21.920 --> 01:13:27.840 |
|
then you feed in both the output and the |
|
|
|
01:13:24.719 --> 01:13:29.480 |
|
feedback back uh and ask the model to |
|
|
|
01:13:27.840 --> 01:13:32.239 |
|
refine its output and you do this over |
|
|
|
01:13:29.480 --> 01:13:35.280 |
|
and over again and this allows you to uh |
|
|
|
01:13:32.239 --> 01:13:36.840 |
|
improve the output and uh this has
|
|
|
01:13:35.280 --> 01:13:39.600 |
|
ended up being pretty effective in a |
|
|
|
01:13:36.840 --> 01:13:41.159 |
|
pretty wide range of tasks one caveat
|
|
|
01:13:39.600 --> 01:13:44.040 |
|
about this is your model has to be |
|
|
|
01:13:41.159 --> 01:13:47.000 |
|
really good for this to work so um only |
|
|
|
01:13:44.040 --> 01:13:49.239 |
|
models kind of on the level of GPT-4 not
|
|
|
01:13:47.000 --> 01:13:52.000 |
|
on the level of GPT-3.5 have the ability
|
|
|
01:13:49.239 --> 01:13:54.040 |
|
to do this pretty consistently so it is |
|
|
|
01:13:52.000 --> 01:13:57.040 |
|
something you need to be aware |
|
|
|
01:13:54.040 --> 01:13:57.040 |
|
of |
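A minimal sketch of that generate-feedback-refine loop, assuming a generic `llm(prompt)` helper (hypothetical; the actual self-refine work uses task-specific prompts and stopping criteria):

```python
# Self-refine loop sketch with a hypothetical llm(prompt) call.
# The same model generates an output, critiques it, and revises it given its
# own feedback, stopping when the feedback says the output is acceptable.

def llm(prompt):
    return "DONE"  # placeholder for a call to a strong language model

def self_refine(task, max_iters=3):
    output = llm(f"Solve the following task:\n{task}")
    for _ in range(max_iters):
        feedback = llm(f"Task:\n{task}\nOutput:\n{output}\n"
                       "Give concrete feedback, or reply DONE if it is correct.")
        if "DONE" in feedback:
            break
        output = llm(f"Task:\n{task}\nOutput:\n{output}\nFeedback:\n{feedback}\n"
                     "Revise the output to address the feedback.")
    return output

if __name__ == "__main__":
    print(self_refine("Write a function that reverses a string."))
```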
|
|
|
01:13:59.760 --> 01:14:03.600 |
|
cool yep that's all I had for today
|
|
|
01:14:02.400 --> 01:14:06.600 |
|
I'm happy |
|
|
|
01:14:03.600 --> 01:14:06.600 |
|
to |
|
|
|
01:14:07.159 --> 01:14:10.159 |
|
take questions
|
|
|
01:14:20.600 --> 01:14:27.320 |
|
yep yep this is a great question so
|
|
|
01:14:23.920 --> 01:14:28.840 |
|
if stacking has the potential to address
|
|
|
01:14:27.320 --> 01:14:32.120 |
|
information loss why would we ever |
|
|
|
01:14:28.840 --> 01:14:33.840 |
|
choose a cascade model I think basically
|
|
|
01:14:32.120 --> 01:14:37.440 |
|
there's potentially two reasons one |
|
|
|
01:14:33.840 --> 01:14:39.199 |
|
reason is um data availability so in |
|
|
|
01:14:37.440 --> 01:14:42.639 |
|
order to train a stacked model you |
|
|
|
01:14:39.199 --> 01:14:43.430 |
|
obviously need the outputs I guess you |
|
|
|
01:14:42.639 --> 01:14:44.639 |
|
could |
|
|
|
|
|
|
01:14:44.639 --> 01:14:50.880 |
|
um yeah I guess you could run |
|
|
|
01:14:48.440 --> 01:14:53.199 |
|
the speech recognizer and generate outputs for every
|
|
|
01:14:50.880 --> 01:14:54.840 |
|
training example you have um but you |
|
|
|
01:14:53.199 --> 01:14:55.840 |
|
would need to do that so you would need |
|
|
|
01:14:54.840 --> 01:14:58.639 |
|
to
|
|
|
01:14:55.840 --> 01:14:59.920 |
|
run speech recognition for every example |
|
|
|
01:14:58.639 --> 01:15:02.760 |
|
and you also |
|
|
|
01:14:59.920 --> 01:15:05.199 |
|
couldn't use any examples
|
|
|
01:15:02.760 --> 01:15:07.600 |
|
where you don't have the original input |
|
|
|
01:15:05.199 --> 01:15:10.320 |
|
so you couldn't use text to text |
|
|
|
01:15:07.600 --> 01:15:12.239 |
|
examples unless you like synthesize |
|
|
|
01:15:10.320 --> 01:15:14.159 |
|
speech from text for machine translation |
|
|
|
01:15:12.239 --> 01:15:15.840 |
|
for example so that makes it a little bit
|
|
|
01:15:14.159 --> 01:15:17.360 |
|
more tricky due to the data requirements |
|
|
|
01:15:15.840 --> 01:15:19.239 |
|
but that's not |
|
|
|
01:15:17.360 --> 01:15:22.560 |
|
insurmountable the second reason is |
|
|
|
01:15:19.239 --> 01:15:24.400 |
|
complexity and efficiency so you know |
|
|
|
01:15:22.560 --> 01:15:27.920 |
|
you do have to come up with a model that |
|
|
|
01:15:24.400 --> 01:15:29.520 |
|
takes in speech and text and run it and
|
|
|
01:15:27.920 --> 01:15:30.920 |
|
it might be easier just to hook together |
|
|
|
01:15:29.520 --> 01:15:34.719 |
|
a speech recognition model with a
|
|
|
01:15:30.920 --> 01:15:37.920 |
|
translation model so but like I think overall
|
|
|
01:15:34.719 --> 01:15:39.639 |
|
I like these methods I think these
|
|
|
01:15:37.920 --> 01:15:41.159 |
|
are good methods to use if
|
|
|
01:15:39.639 --> 01:15:42.480 |
|
you're thinking about using a cascade
|
|
|
01:15:41.159 --> 01:15:44.199 |
|
system you should definitely consider |
|
|
|
01:15:42.480 --> 01:15:47.199 |
|
using a stacked system
|
|
|
01:15:44.199 --> 01:15:47.199 |
|
instead
|
|
|
01:15:52.080 --> 01:15:56.960 |
|
yeah yeah can you measure the |
|
|
|
01:15:55.159 --> 01:15:59.400 |
|
contribution of each component to an |
|
|
|
01:15:56.960 --> 01:16:00.639 |
|
ensemble um the very very easy way to do |
|
|
|
01:15:59.400 --> 01:16:02.199 |
|
that is look at the interpolation |
|
|
|
01:16:00.639 --> 01:16:05.360 |
|
coefficients if you train the |
|
|
|
01:16:02.199 --> 01:16:06.800 |
|
interpolation coefficients um otherwise |
|
|
|
01:16:05.360 --> 01:16:08.920 |
|
I guess it depends on what you mean by |
|
|
|
01:16:06.800 --> 01:16:10.480 |
|
each contribution but I you know looking |
|
|
|
01:16:08.920 --> 01:16:12.280 |
|
at the interpolation coefficients is a |
|
|
|
01:16:10.480 --> 01:16:16.320 |
|
pretty good way to do it
|
|
|
01:16:12.280 --> 01:16:16.320 |
|
or also just see how much the accuracy changes
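For instance, here is a tiny sketch (all numbers made up) of a linearly interpolated ensemble where the trained coefficients give a rough read on each member's contribution:

```python
import numpy as np

# Linearly interpolated ensemble over class probabilities (illustrative only).
# If the interpolation coefficients were tuned on held-out data, their sizes
# give a rough measure of how much each member contributes to the ensemble.

p_model_a = np.array([0.7, 0.2, 0.1])   # member A's predicted distribution
p_model_b = np.array([0.4, 0.4, 0.2])   # member B's predicted distribution
weights   = np.array([0.8, 0.2])        # (hypothetical) learned coefficients

p_ensemble = weights[0] * p_model_a + weights[1] * p_model_b
print("ensemble distribution:", p_ensemble)
print("rough per-member contribution:", weights)
```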
|
|
|
01:16:21.480 --> 01:16:27.400 |
|
is iterative refinement the
|
|
|
01:16:24.159 --> 01:16:30.199 |
|
same idea as boosting in traditional |
|
|
|
01:16:27.400 --> 01:16:30.199 |
|
like machine learning
|
|
|
01:16:30.320 --> 01:16:34.920 |
|
systems I think it's a little bit
|
|
|
01:16:32.920 --> 01:16:36.520 |
|
different um because iterative |
|
|
|
01:16:34.920 --> 01:16:38.920 |
|
refinement what I'm talking about here |
|
|
|
01:16:36.520 --> 01:16:41.120 |
|
it's usually taking in the output like |
|
|
|
01:16:38.920 --> 01:16:43.320 |
|
rather complex output of a system and |
|
|
|
01:16:41.120 --> 01:16:44.920 |
|
modifying it so you're not just |
|
|
|
01:16:43.320 --> 01:16:47.080 |
|
modifying the |
|
|
|
01:16:44.920 --> 01:16:49.880 |
|
probabilities of like a single |
|
|
|
01:16:47.080 --> 01:16:53.080 |
|
classifier you're modifying the actual |
|
|
|
01:16:49.880 --> 01:16:55.960 |
|
outputs that were generated then from |
|
|
|
01:16:53.080 --> 01:16:59.560 |
|
the point of view of a boosting |
|
|
|
01:16:55.960 --> 01:17:02.560 |
|
model over a single categorical output |
|
|
|
01:16:59.560 --> 01:17:04.520 |
|
it might actually be similar or the same |
|
|
|
01:17:02.560 --> 01:17:06.480 |
|
but this is more like uh you you |
|
|
|
01:17:04.520 --> 01:17:08.159 |
|
generated a textual output and then you |
|
|
|
01:17:06.480 --> 01:17:10.400 |
|
feed in the textual output to the other |
|
|
|
01:17:08.159 --> 01:17:12.120 |
|
model and refine it like generate a new
|
|
|
01:17:10.400 --> 01:17:14.239 |
|
textual output so I feel like it's a lot |
|
|
|
01:17:12.120 --> 01:17:18.639 |
|
more |
|
|
|
01:17:14.239 --> 01:17:18.639 |
|
complex cool okay thanks a lot
|
|
|
01:17:18.840 --> 01:17:21.840 |
|
everyone |
|
|