WEBVTT
00:00:00.399 --> 00:00:04.120
so this time I'm going to be talking
00:00:02.080 --> 00:00:05.799
about language modeling uh obviously
00:00:04.120 --> 00:00:07.240
language modeling is a big topic and I'm
00:00:05.799 --> 00:00:09.880
not going to be able to cover it all in
00:00:07.240 --> 00:00:11.320
one class but this is kind of the basics
00:00:09.880 --> 00:00:13.080
of uh what does it mean to build a
00:00:11.320 --> 00:00:15.320
language model what is a language model
00:00:13.080 --> 00:00:18.439
how do we evaluate language models and
00:00:15.320 --> 00:00:19.920
other stuff like that and around the end
00:00:18.439 --> 00:00:21.320
I'm going to talk a little bit about
00:00:19.920 --> 00:00:23.039
efficiently implementing things in
00:00:21.320 --> 00:00:25.080
neural networks it's not directly
00:00:23.039 --> 00:00:27.760
related to language models but it's very
00:00:25.080 --> 00:00:31.200
important to know how to do uh to solve
00:00:27.760 --> 00:00:34.200
your assignments so I'll cover both
00:00:31.200 --> 00:00:34.200
is that
00:00:34.239 --> 00:00:38.480
cool okay so the first thing I'd like to
00:00:36.760 --> 00:00:41.239
talk about is generative versus
00:00:38.480 --> 00:00:43.000
discriminative models and the reason why
00:00:41.239 --> 00:00:45.280
is up until now we've been talking about
00:00:43.000 --> 00:00:47.559
discriminative models and these are
00:00:45.280 --> 00:00:49.640
models uh that are mainly designed to
00:00:47.559 --> 00:00:53.800
calculate the probability of a latent
00:00:49.640 --> 00:00:56.039
trait uh given the data and so this is
00:00:53.800 --> 00:00:58.800
uh P of Y given X where Y is the latent
00:00:56.039 --> 00:01:00.800
trait we want to calculate and X is uh
00:00:58.800 --> 00:01:04.760
the input data that we're calculating it
00:01:00.800 --> 00:01:07.799
over so just review from last class what
00:01:04.760 --> 00:01:10.240
was X from last class from the example
00:01:07.799 --> 00:01:10.240
in last
00:01:11.360 --> 00:01:15.880
class
00:01:13.040 --> 00:01:18.280
anybody yeah some text yeah and then
00:01:15.880 --> 00:01:18.280
what was
00:01:20.400 --> 00:01:26.119
why it shouldn't be too
00:01:23.799 --> 00:01:27.920
hard yeah it was a category or a
00:01:26.119 --> 00:01:31.680
sentiment label precisely in the
00:01:27.920 --> 00:01:33.399
sentiment analysis tasks so so um a
00:01:31.680 --> 00:01:35.560
generative model on the other hand is a
00:01:33.399 --> 00:01:38.840
model that calculates the probability of
00:01:35.560 --> 00:01:40.880
data itself and is not specifically
00:01:38.840 --> 00:01:43.439
conditional and there's a couple of
00:01:40.880 --> 00:01:45.439
varieties um this isn't like super
00:01:43.439 --> 00:01:48.280
standard terminology I just uh wrote it
00:01:45.439 --> 00:01:51.520
myself but here we have a standalone
00:01:48.280 --> 00:01:54.360
probability of P of X and we can also
00:01:51.520 --> 00:01:58.000
calculate the joint probability P of X
00:01:54.360 --> 00:01:58.000
and Y
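The distinction being drawn here can be sketched with toy counts; the tiny (text, label) dataset and function names below are purely illustrative assumptions, not from the lecture.

```python
# Toy sketch of discriminative vs. generative quantities estimated by
# counting; the tiny (text, label) dataset here is purely illustrative.
from collections import Counter

data = [("good movie", "pos"), ("great movie", "pos"), ("bad movie", "neg")]
n = len(data)

joint_counts = Counter(data)               # for the joint P(X, Y)
x_counts = Counter(x for x, _ in data)     # for the standalone P(X)

def p_joint(x, y):
    # generative: P(X = x, Y = y)
    return joint_counts[(x, y)] / n

def p_x(x):
    # generative: P(X = x), marginalizing over the label
    return x_counts[x] / n

def p_y_given_x(y, x):
    # discriminative: P(Y = y | X = x); assumes x was seen at least once
    return p_joint(x, y) / p_x(x)
```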
00:01:58.159 --> 00:02:02.880
so probabilistic language models
00:02:01.079 --> 00:02:06.640
basically what they do is they calculate
00:02:02.880 --> 00:02:08.560
this uh probability usually uh we think
00:02:06.640 --> 00:02:10.360
of it as a standalone probability of P
00:02:08.560 --> 00:02:11.800
of X where X is something like a
00:02:10.360 --> 00:02:15.160
sentence or a
00:02:11.800 --> 00:02:16.920
document and it's a generative model
00:02:15.160 --> 00:02:19.640
that calculates the probability of
00:02:16.920 --> 00:02:22.360
language recently the definition of
00:02:19.640 --> 00:02:23.959
language model has expanded a little bit
00:02:22.360 --> 00:02:26.160
so now
00:02:23.959 --> 00:02:28.640
um people also call things that
00:02:26.160 --> 00:02:31.080
calculate the probability of text and
00:02:28.640 --> 00:02:35.200
images as like multimodal language
00:02:31.080 --> 00:02:38.160
models or uh what are some of the other
00:02:35.200 --> 00:02:40.480
ones yeah I think that's the main the
00:02:38.160 --> 00:02:42.840
main exception to this rule usually
00:02:40.480 --> 00:02:45.080
usually it's calculating either of text
00:02:42.840 --> 00:02:47.680
or over text in some multimodal data but
00:02:45.080 --> 00:02:47.680
for now we're going to
00:02:48.800 --> 00:02:54.200
consider
00:02:50.319 --> 00:02:56.440
um then there's kind of two fundamental
00:02:54.200 --> 00:02:58.159
operations that we perform with LMS
00:02:56.440 --> 00:03:00.519
almost everything else we do with LMS
00:02:58.159 --> 00:03:03.640
can be considered like one of these two
00:03:00.519 --> 00:03:05.319
types of things the first thing is calc
00:03:03.640 --> 00:03:06.440
scoring sentences or calculating the
00:03:05.319 --> 00:03:09.599
probability of
00:03:06.440 --> 00:03:12.280
sentences and this
00:03:09.599 --> 00:03:14.720
is uh for example if we calculate the
00:03:12.280 --> 00:03:16.400
probability of Jane went to the store uh
00:03:14.720 --> 00:03:19.000
this would have a high probability
00:03:16.400 --> 00:03:20.879
ideally um and if we have this kind of
00:03:19.000 --> 00:03:23.400
word salad like this this would be given
00:03:20.879 --> 00:03:26.080
a low probability uh according to a
00:03:23.400 --> 00:03:28.000
English language model if we had a
00:03:26.080 --> 00:03:30.000
Chinese language model ideally it would
00:03:28.000 --> 00:03:31.319
also probably give a low probability to the first
00:03:30.000 --> 00:03:32.879
sentence too because it's a language
00:03:31.319 --> 00:03:35.000
model of natural Chinese and not of
00:03:32.879 --> 00:03:36.200
natural English so there's also
00:03:35.000 --> 00:03:37.360
different types of language models
00:03:36.200 --> 00:03:38.400
depending on the type of data you train
00:03:37.360 --> 00:03:41.360
on
00:03:38.400 --> 00:03:43.599
um another thing I can do is generate
00:03:41.360 --> 00:03:45.239
sentences and we'll talk more about the
00:03:43.599 --> 00:03:48.280
different methods for generating
00:03:45.239 --> 00:03:50.319
sentences but typically they fall into
00:03:48.280 --> 00:03:51.799
one of two categories one is sampling
00:03:50.319 --> 00:03:53.200
like this where you try to sample a
00:03:51.799 --> 00:03:55.480
sentence from the probability
00:03:53.200 --> 00:03:57.280
distribution of the language model
00:03:55.480 --> 00:03:58.360
possibly with some modifications to the
00:03:57.280 --> 00:04:00.760
probability
00:03:58.360 --> 00:04:03.079
distribution um the other thing which I
00:04:00.760 --> 00:04:04.760
didn't write on the slide is uh finding
00:04:03.079 --> 00:04:07.439
the highest scoring sentence according
00:04:04.760 --> 00:04:09.760
to the language model um and we do both
00:04:07.439 --> 00:04:09.760
of those
00:04:10.560 --> 00:04:17.600
so more concretely how can we apply
00:04:15.199 --> 00:04:21.199
these these can be applied to answer
00:04:17.600 --> 00:04:23.840
questions so for example um if we have a
00:04:21.199 --> 00:04:27.240
multiple choice question we can score
00:04:23.840 --> 00:04:30.639
possible multiple choice answers and uh
00:04:27.240 --> 00:04:32.880
the way we do this is we calculate
00:04:30.639 --> 00:04:35.440
we first
00:04:32.880 --> 00:04:38.440
take uh like we have
00:04:35.440 --> 00:04:38.440
like
00:04:38.560 --> 00:04:43.919
um
00:04:40.960 --> 00:04:46.919
where is
00:04:43.919 --> 00:04:46.919
CMU
00:04:47.560 --> 00:04:51.600
located um
00:04:51.960 --> 00:04:59.560
that's and actually maybe prepend this
00:04:54.560 --> 00:05:01.360
all again to an A here and then we say X
00:04:59.560 --> 00:05:05.800
X1 is equal to
00:05:01.360 --> 00:05:07.520
this and then we have X2 which is
00:05:05.800 --> 00:05:09.720
Q
00:05:07.520 --> 00:05:12.479
where is
00:05:09.720 --> 00:05:14.120
CMU
00:05:12.479 --> 00:05:18.080
located
00:05:14.120 --> 00:05:19.720
a um what's something
00:05:18.080 --> 00:05:21.960
plausible
00:05:19.720 --> 00:05:24.560
uh what was
00:05:21.960 --> 00:05:26.319
it okay now now you're going to make it
00:05:24.560 --> 00:05:27.960
tricky and make me talk about when we
00:05:26.319 --> 00:05:29.960
have multiple right answers and how we
00:05:27.960 --> 00:05:31.759
evaluate and stuff let let's ignore that
00:05:29.960 --> 00:05:35.080
for now let's say New
00:05:31.759 --> 00:05:37.199
York it's not located in New York is
00:05:35.080 --> 00:05:40.560
it
00:05:37.199 --> 00:05:40.560
okay let's say
00:05:40.960 --> 00:05:45.199
Birmingham hopefully there's no CMU
00:05:43.199 --> 00:05:47.120
affiliate in Birmingham I think we're
00:05:45.199 --> 00:05:49.000
we're pretty safe so um and then you would
00:05:47.120 --> 00:05:53.880
just calculate the probability of X1 and
00:05:49.000 --> 00:05:56.440
the probability of X2 X3 X4 Etc and um
00:05:53.880 --> 00:06:01.479
then pick the highest scoring one and
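The score-then-argmax procedure just described can be sketched as below. Note `log_prob` is a stand-in toy unigram scorer and the training string, prompt, and candidates are all illustrative assumptions; a real system would query an actual language model instead.

```python
# Sketch of multiple-choice QA by scoring: plug each candidate answer into
# the prompt, score the full string, keep the argmax. `log_prob` is a
# stand-in toy unigram scorer; a real system would query an actual LM.
import math
from collections import Counter

train = "cmu is located in pittsburgh pittsburgh is in pennsylvania".split()
counts = Counter(train)
total = sum(counts.values())

def log_prob(text):
    # toy scorer: sum of per-word log-probabilities; unseen word -> -inf
    lp = 0.0
    for w in text.lower().split():
        if counts[w] == 0:
            return float("-inf")
        lp += math.log(counts[w] / total)
    return lp

prompt = "cmu is located in"
candidates = ["pittsburgh", "new york", "birmingham"]
best = max(candidates, key=lambda a: log_prob(f"{prompt} {a}"))
```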
00:05:56.440 --> 00:06:01.479
actually um there's a famous
00:06:03.199 --> 00:06:07.440
there's a famous uh leaderboard for
00:06:05.840 --> 00:06:08.759
language models that probably a lot of
00:06:07.440 --> 00:06:09.759
people know about it's called the open
00:06:08.759 --> 00:06:13.120
llm
00:06:09.759 --> 00:06:15.639
leaderboard and a lot of these tasks
00:06:13.120 --> 00:06:17.319
here basically correspond to doing
00:06:15.639 --> 00:06:21.000
something like that like HellaSwag is
00:06:17.319 --> 00:06:22.599
kind of a multiple choice uh is a
00:06:21.000 --> 00:06:24.160
multiple choice question answering thing
00:06:22.599 --> 00:06:27.880
about common sense where they calculate
00:06:24.160 --> 00:06:30.280
it by scoring uh scoring the
00:06:27.880 --> 00:06:31.880
outputs so that's a very common way to
00:06:30.280 --> 00:06:35.000
use language
00:06:31.880 --> 00:06:36.960
models um another thing is generating a
00:06:35.000 --> 00:06:40.080
continuation of a question prompt so
00:06:36.960 --> 00:06:42.639
basically this is when you uh
00:06:40.080 --> 00:06:44.759
sample and so what you would do is you
00:06:42.639 --> 00:06:48.440
would prompt the
00:06:44.759 --> 00:06:50.560
model with this uh X here and then you
00:06:48.440 --> 00:06:53.800
would ask it to generate either the most
00:06:50.560 --> 00:06:56.400
likely uh completion or generate um
00:06:53.800 --> 00:06:58.960
sample multiple completions to get the
00:06:56.400 --> 00:07:00.720
answer so this is very common uh people
00:06:58.960 --> 00:07:03.759
are very familiar with this there's lots
00:07:00.720 --> 00:07:07.160
of other uh things you can do though so
00:07:03.759 --> 00:07:09.400
um you can classify text and there's a
00:07:07.160 --> 00:07:12.720
couple ways you can do this uh one way
00:07:09.400 --> 00:07:15.960
you can do this is um like let's say we
00:07:12.720 --> 00:07:15.960
have a sentiment sentence
00:07:16.160 --> 00:07:21.520
here
00:07:17.759 --> 00:07:25.440
um you can say uh
00:07:21.520 --> 00:07:30.919
this is
00:07:25.440 --> 00:07:33.919
great and then you can say um
00:07:30.919 --> 00:07:37.680
star
00:07:33.919 --> 00:07:38.879
rating five or something like that and
00:07:37.680 --> 00:07:41.400
then you could also have star rating
00:07:38.879 --> 00:07:43.680
four star rating three star rating two
00:07:41.400 --> 00:07:45.080
star rating one and calculate the
00:07:43.680 --> 00:07:46.639
probability of all of these and find
00:07:45.080 --> 00:07:50.360
which one has the highest probability so
00:07:46.639 --> 00:07:51.800
this is a a common way you can do things
00:07:50.360 --> 00:07:54.319
another thing you can do which is kind
00:07:51.800 --> 00:07:55.240
of interesting and um there are papers
00:07:54.319 --> 00:07:58.319
on this but they're kind of
00:07:55.240 --> 00:08:00.800
underexplored is you can do like star
00:07:58.319 --> 00:08:04.800
rating
00:08:00.800 --> 00:08:04.800
five and then
00:08:04.879 --> 00:08:13.280
generate generate the output um and so
00:08:10.319 --> 00:08:15.039
that basically says Okay I I want a
00:08:13.280 --> 00:08:16.680
positive sentence now I'm going to score
00:08:15.039 --> 00:08:19.120
the actual review and see whether that
00:08:16.680 --> 00:08:22.319
matches my like conception of a positive
00:08:19.120 --> 00:08:24.080
sentence and there's a few uh papers
00:08:22.319 --> 00:08:25.680
that do
00:08:24.080 --> 00:08:28.240
this
00:08:25.680 --> 00:08:31.240
um let
00:08:28.240 --> 00:08:31.240
me
00:08:34.640 --> 00:08:38.760
this is a kind of older one and then
00:08:36.240 --> 00:08:42.080
there's another more recent one by Sewon
00:08:38.760 --> 00:08:43.839
Min I believe um uh but they demonstrate
00:08:42.080 --> 00:08:45.480
how you can do both generative and
00:08:43.839 --> 00:08:47.600
discriminative classification in this
00:08:45.480 --> 00:08:51.760
way so that's another thing that you can
00:08:47.600 --> 00:08:51.760
do uh with language
00:08:53.279 --> 00:08:56.839
models and then the other thing you can
00:08:55.200 --> 00:08:59.000
do is you can generate the label given a
00:08:56.839 --> 00:09:00.680
classification prompt so you say this
00:09:00.680 --> 00:09:05.720
is great star rating and then
00:09:00.680 --> 00:09:05.720
generate five
00:09:03.079 --> 00:09:09.320
whatever finally um you can do things
00:09:05.720 --> 00:09:10.920
like correct a grammar so uh for example
00:09:09.320 --> 00:09:12.560
if you score the probability of each
00:09:10.920 --> 00:09:14.839
word and you find words that are really
00:09:12.560 --> 00:09:17.760
low probability then you can uh replace
00:09:14.839 --> 00:09:20.160
them with higher probability words um or
00:09:17.760 --> 00:09:21.720
you could ask a model please paraphrase
00:09:20.160 --> 00:09:24.000
this output and it will paraphrase it
00:09:21.720 --> 00:09:27.640
into something that gives you uh you
00:09:24.000 --> 00:09:30.720
know that has better grammar so basically
00:09:27.640 --> 00:09:33.079
like as I said language models are very
00:09:30.720 --> 00:09:34.600
diverse um and they can do a ton of
00:09:33.079 --> 00:09:35.680
different things but most of them boil
00:09:34.600 --> 00:09:38.440
down to doing one of these two
00:09:35.680 --> 00:09:42.079
operations scoring or
00:09:38.440 --> 00:09:42.079
generating any questions
00:09:42.480 --> 00:09:47.600
s
00:09:44.640 --> 00:09:50.000
okay so next I I want to talk about a
00:09:47.600 --> 00:09:52.279
specific type of language models uh
00:09:50.000 --> 00:09:54.240
autoregressive language models and
00:09:52.279 --> 00:09:56.720
autoregressive language models are language
00:09:54.240 --> 00:10:00.240
models that specifically calculate this
00:09:56.720 --> 00:10:02.320
probability um in a fashion where you
00:10:00.240 --> 00:10:03.680
calculate the probability of one token
00:10:02.320 --> 00:10:05.519
and then you calculate the probability
00:10:03.680 --> 00:10:07.680
of the next token given the previous
00:10:05.519 --> 00:10:10.519
token the probability of the third token
00:10:07.680 --> 00:10:13.760
given the previous two tokens almost
00:10:10.519 --> 00:10:18.600
always this happens left to right um or
00:10:13.760 --> 00:10:20.519
start to finish um and so this is the
00:10:18.600 --> 00:10:25.000
next token here this is a context where
00:10:20.519 --> 00:10:28.440
usually um the context is the previous
00:10:25.000 --> 00:10:29.640
tokens Can anyone think of a time when
00:10:28.440 --> 00:10:32.440
you might want to do
00:10:29.640 --> 00:10:37.839
right to left instead of left to
00:10:32.440 --> 00:10:40.399
right yeah language that's from right to
00:10:37.839 --> 00:10:41.680
yeah that's actually exactly what I what
00:10:40.399 --> 00:10:43.079
I was looking for so if you have a
00:10:41.680 --> 00:10:46.839
language that's written from right to
00:10:43.079 --> 00:10:49.320
left actually uh things like uh Arabic
00:10:46.839 --> 00:10:51.360
and Hebrew are written right to left so
00:10:49.320 --> 00:10:53.720
um both of those are
00:10:51.360 --> 00:10:56.360
chronologically like earlier to later
00:10:53.720 --> 00:10:59.399
because you know if if you're thinking
00:10:56.360 --> 00:11:01.079
about how people speak um the the first
00:10:59.399 --> 00:11:02.440
word that an English speaker speaks is
00:11:01.079 --> 00:11:04.000
on the left just because that's the way
00:11:02.440 --> 00:11:06.079
you write it but the first word that an
00:11:04.000 --> 00:11:09.639
Arabic speaker speaks is on the the
00:11:06.079 --> 00:11:12.360
right because chronologically that's uh
00:11:09.639 --> 00:11:13.519
that's how it works um there's other
00:11:12.360 --> 00:11:16.320
reasons why you might want to do right
00:11:13.519 --> 00:11:17.839
to left but uh it's not really that left
00:11:16.320 --> 00:11:21.720
to right is important it's that like
00:11:17.839 --> 00:11:24.440
start to finish is important in spoken
00:11:21.720 --> 00:11:27.880
language so um one thing I should
00:11:24.440 --> 00:11:30.240
mention here is that this is just a rule
00:11:27.880 --> 00:11:31.560
of probability that if you have multiple
00:11:30.240 --> 00:11:33.720
variables and you're calculating the
00:11:31.560 --> 00:11:35.760
joint probability of variables the
00:11:33.720 --> 00:11:38.000
probability of all of the variables
00:11:35.760 --> 00:11:40.240
together is equal to this probability
00:11:38.000 --> 00:11:41.920
here so we're not making any
00:11:40.240 --> 00:11:44.399
approximations we're not making any
00:11:41.920 --> 00:11:46.959
compromises in order to do this but it
00:11:44.399 --> 00:11:51.639
all hinges on whether we can predict
00:11:46.959 --> 00:11:53.440
this probability um accurately uh
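The exact chain-rule decomposition being described can be sketched with counts. The toy corpus here is an illustrative assumption; a neural LM would *predict* each conditional rather than look it up from prefix counts.

```python
# Chain-rule sketch: P(x_1..x_n) = prod_t P(x_t | x_1..x_{t-1}),
# computed here from exact prefix counts on a toy corpus, with no
# approximation of the context.
from collections import Counter

corpus = [("<s>", "this", "is", "great"), ("<s>", "this", "is", "fine")]
prefix_counts = Counter()
for sent in corpus:
    for t in range(1, len(sent) + 1):
        prefix_counts[sent[:t]] += 1

def p_sentence(sent):
    # multiply P(x_t | x_1..x_{t-1}) = count(x_1..x_t) / count(x_1..x_{t-1});
    # only valid for prefixes actually seen in the toy corpus
    p = 1.0
    for t in range(1, len(sent)):
        p *= prefix_counts[sent[:t + 1]] / prefix_counts[sent[:t]]
    return p
```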
00:11:51.639 --> 00:11:56.160
actually another question does anybody
00:11:53.440 --> 00:11:57.800
know why we do this decomposition why
00:11:56.160 --> 00:12:00.959
don't we just try to predict the
00:11:57.800 --> 00:12:00.959
probability of x
00:12:02.120 --> 00:12:05.399
directly any
00:12:07.680 --> 00:12:12.760
ideas uh of big X sorry uh why don't we
00:12:11.079 --> 00:12:17.560
try to calculate the probability of this
00:12:12.760 --> 00:12:21.360
is great directly without decomposing it
00:12:17.560 --> 00:12:21.360
into the individual tokens
00:12:25.519 --> 00:12:31.560
possibility it could be word salad if
00:12:27.760 --> 00:12:35.279
you did it in a in a particular way yes
00:12:31.560 --> 00:12:35.279
um so that that's a good point
00:12:39.519 --> 00:12:47.000
yeah yeah so for example we talked about
00:12:43.760 --> 00:12:50.120
um uh we'll talk about
00:12:47.000 --> 00:12:51.920
models um or I I mentioned this briefly
00:12:50.120 --> 00:12:54.000
last time I can mention it in more
00:12:51.920 --> 00:12:55.639
detail this time but this is great we
00:12:54.000 --> 00:12:59.880
probably have never seen this before
00:12:55.639 --> 00:13:01.399
right so if we predict only things that
00:12:59.880 --> 00:13:03.199
we've seen before if we only assign a
00:13:01.399 --> 00:13:04.600
non-zero probability to the things we've
00:13:03.199 --> 00:13:06.000
seen before there's going to be lots of
00:13:04.600 --> 00:13:07.079
sentences that we've never seen before
00:13:06.000 --> 00:13:10.000
it makes it
00:13:07.079 --> 00:13:13.760
super sparse um that that's basically close
00:13:10.000 --> 00:13:16.399
to what I wanted to say so um the reason
00:13:13.760 --> 00:13:18.040
why we don't typically do it with um
00:13:16.399 --> 00:13:21.240
predicting the whole sentence directly
00:13:18.040 --> 00:13:22.800
is because if we think about the size of
00:13:21.240 --> 00:13:24.959
the classification problem we need to
00:13:22.800 --> 00:13:27.880
solve in order to predict the next word
00:13:24.959 --> 00:13:30.320
it's a v uh where V is the size of the
00:13:27.880 --> 00:13:33.120
vocabulary but the size of the
00:13:30.320 --> 00:13:35.399
classification problem that we need to
00:13:33.120 --> 00:13:38.040
um we need to solve if we predict
00:13:35.399 --> 00:13:40.079
everything directly is V to the N where
00:13:38.040 --> 00:13:42.240
n is the length of the sequence and
00:13:40.079 --> 00:13:45.240
that's just huge the vocabulary is so
00:13:42.240 --> 00:13:48.440
big that it's hard to kind of uh know
00:13:45.240 --> 00:13:51.000
how we handle that so basically by doing
00:13:48.440 --> 00:13:53.160
this sort of decomposition we decompose
00:13:51.000 --> 00:13:56.440
this into uh
00:13:53.160 --> 00:13:58.120
n um prediction problems of size V and
00:13:56.440 --> 00:13:59.519
that's kind of just a lot more
00:13:58.120 --> 00:14:03.079
manageable for from the point of view of
00:13:59.519 --> 00:14:06.000
how we train uh know how we train
00:14:03.079 --> 00:14:09.399
models um that being said there are
00:14:06.000 --> 00:14:11.360
other Alternatives um something very
00:14:09.399 --> 00:14:13.920
widely known uh very widely used is
00:14:11.360 --> 00:14:16.440
called a masked language model um a masked
00:14:13.920 --> 00:14:19.480
language model is something like BERT or
00:14:16.440 --> 00:14:21.680
DeBERTa or RoBERTa or all of these models
00:14:19.480 --> 00:14:25.000
that you might have heard if you've been
00:14:21.680 --> 00:14:28.279
in NLP for more than two years I guess
00:14:25.000 --> 00:14:30.680
um and basically what they do is they
00:14:28.279 --> 00:14:30.680
predict
00:14:32.199 --> 00:14:37.480
uh they like mask out this word and they
00:14:34.839 --> 00:14:39.480
predict the middle word so they mask out
00:14:37.480 --> 00:14:41.440
is and then try to predict that given
00:14:39.480 --> 00:14:45.320
all the other words the problem with
00:14:41.440 --> 00:14:48.959
these models is uh twofold number one
00:14:45.320 --> 00:14:51.880
they don't actually give you a uh good
00:14:48.959 --> 00:14:55.399
probability here uh like a a properly
00:14:51.880 --> 00:14:57.800
formed probability here
00:14:55.399 --> 00:14:59.160
because this is true only as long as
00:14:57.800 --> 00:15:01.920
you're only conditioning on things that
00:14:59.160 --> 00:15:03.480
you've previously generated so that
00:15:01.920 --> 00:15:04.839
they're not actually true language
00:15:03.480 --> 00:15:06.920
models from the point of view of being
00:15:04.839 --> 00:15:10.040
able to easily predict the probability
00:15:06.920 --> 00:15:11.399
of a sequence um and also it's hard to
00:15:10.040 --> 00:15:13.399
generate from them because you need to
00:15:11.399 --> 00:15:15.440
generate in some order and masked language
00:15:13.399 --> 00:15:17.600
models don't specify a canonical order
00:15:15.440 --> 00:15:19.120
so they're good for some things like
00:15:17.600 --> 00:15:21.720
calculating representations of the
00:15:19.120 --> 00:15:22.920
output but they're not useful uh they're
00:15:21.720 --> 00:15:25.240
not as useful for
00:15:22.920 --> 00:15:26.880
Generation Um there's also energy based
00:15:25.240 --> 00:15:28.759
language models which basically create a
00:15:26.880 --> 00:15:30.000
scoring function that's not necessarily
00:15:28.759 --> 00:15:31.279
left to right or right to left or
00:15:30.000 --> 00:15:33.120
anything like that but that's very
00:15:31.279 --> 00:15:34.639
Advanced um if you're interested in them
00:15:33.120 --> 00:15:36.319
I can talk more about them that we'll
00:15:34.639 --> 00:15:38.920
skip
00:15:36.319 --> 00:15:41.600
them and um also all of the language
00:15:38.920 --> 00:15:45.639
models that you hear about nowadays GPT
00:15:41.600 --> 00:15:48.800
uh llama whatever else are all autoregressive
00:15:45.639 --> 00:15:52.880
models cool so I'm going to go into the
00:15:48.800 --> 00:15:52.880
very um any questions about that
00:15:57.600 --> 00:16:00.600
yeah
00:16:00.680 --> 00:16:04.160
yeah so in masked language models the
00:16:02.680 --> 00:16:06.000
question was in masked language models
00:16:04.160 --> 00:16:08.360
couldn't you just mask out the last
00:16:06.000 --> 00:16:10.759
token and predict that sure you could do
00:16:08.360 --> 00:16:13.079
that but there it's just not trained
00:16:10.759 --> 00:16:14.720
that way so it won't do a very good job
00:16:13.079 --> 00:16:16.880
if you always trained it that way it's
00:16:14.720 --> 00:16:18.160
an autoregressive language model so
00:16:16.880 --> 00:16:22.240
you're you're back to where you were in
00:16:18.160 --> 00:16:24.800
the first place um cool so now we I'll
00:16:22.240 --> 00:16:26.399
talk about unigram language models and
00:16:24.800 --> 00:16:29.319
so the simplest language models are
00:16:26.399 --> 00:16:33.560
count-based unigram language models and
00:16:29.319 --> 00:16:35.319
the way they work is um basically we
00:16:33.560 --> 00:16:38.519
want to calculate this probability
00:16:35.319 --> 00:16:41.240
conditioned on all the previous ones and
00:16:38.519 --> 00:16:42.360
the way we do this is we just say
00:16:41.240 --> 00:16:45.680
actually we're not going to worry about
00:16:42.360 --> 00:16:48.759
the order at all and we're just going to
00:16:45.680 --> 00:16:52.240
uh predict the probability of the next
00:16:48.759 --> 00:16:55.279
word uh independently of all the other
00:16:52.240 --> 00:16:57.519
words so if you have something like this
00:16:55.279 --> 00:16:59.720
it's actually extremely easy to predict
00:16:57.519 --> 00:17:02.480
the probability of this word and the way
00:16:59.720 --> 00:17:04.280
you do this is you just count up the
00:17:02.480 --> 00:17:08.360
number of times this word appeared in
00:17:04.280 --> 00:17:10.480
the training data set and divide by the
00:17:08.360 --> 00:17:12.559
uh divide by the total number of words
00:17:10.480 --> 00:17:14.240
in the training data set and now you have a
00:17:12.559 --> 00:17:15.959
language model this is like language
00:17:14.240 --> 00:17:17.760
model 101 it's the easiest possible
00:17:15.959 --> 00:17:19.520
language model you can write in you know
00:17:17.760 --> 00:17:21.120
three lines of python
00:17:19.520 --> 00:17:25.039
basically
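The count-and-divide recipe just described really is only a few lines; this minimal sketch uses an illustrative toy training string rather than a real data set.

```python
# Count-based unigram LM: count each word in the training data, divide by
# the total number of words. Toy training data, purely illustrative.
from collections import Counter

train_tokens = "the cat sat on the mat the dog sat".split()
counts = Counter(train_tokens)
total = len(train_tokens)

def p_unigram(word):
    # P(word), independent of position and context; 0 for unseen words
    return counts[word] / total

def p_sentence(tokens):
    # unigram independence assumption: product of the word probabilities
    p = 1.0
    for w in tokens:
        p *= p_unigram(w)
    return p
```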
00:17:21.120 --> 00:17:28.480
um so it has a few problems uh the first
00:17:25.039 --> 00:17:31.120
problem with this language model is um
00:17:28.480 --> 00:17:32.960
handling unknown words so what happens
00:17:31.120 --> 00:17:38.679
if you have a word that you've never
00:17:32.960 --> 00:17:41.000
seen before um in this language model
00:17:38.679 --> 00:17:42.240
here what is the probability of any
00:17:41.000 --> 00:17:44.720
sequence that has a word that you've
00:17:42.240 --> 00:17:47.440
never seen before yeah the probability
00:17:44.720 --> 00:17:49.240
of the sequence gets zero so there might
00:17:47.440 --> 00:17:51.120
not be such a big problem for generating
00:17:49.240 --> 00:17:52.480
things from the language model because
00:17:51.120 --> 00:17:54.520
you know maybe it's fine if you only
00:17:52.480 --> 00:17:55.960
generate words that you've seen before
00:17:54.520 --> 00:17:57.679
uh but it is definitely a problem of
00:17:55.960 --> 00:17:59.720
scoring things with the language model
00:17:57.679 --> 00:18:02.039
and it's also a problem of uh for
00:17:59.720 --> 00:18:04.440
something like translation if you get an
00:18:02.039 --> 00:18:05.840
unknown word uh when you're translating
00:18:04.440 --> 00:18:07.799
something then you would like to be able
00:18:05.840 --> 00:18:11.320
to translate it reasonably but you can't
00:18:07.799 --> 00:18:13.799
do that so um that's an issue so how do
00:18:11.320 --> 00:18:15.840
we how do we fix this um there's a
00:18:13.799 --> 00:18:17.640
couple options the first option is to
00:18:15.840 --> 00:18:19.440
segment to characters and subwords and
00:18:17.640 --> 00:18:21.720
this is now the preferred option that
00:18:19.440 --> 00:18:24.360
most people use nowadays uh just run
00:18:21.720 --> 00:18:26.840
SentencePiece to segment your vocabulary
00:18:24.360 --> 00:18:28.400
and you're all set you're you'll now no
00:18:26.840 --> 00:18:29.679
longer have any unknown words because
00:18:28.400 --> 00:18:30.840
all the unknown words get split into
00:18:29.679 --> 00:18:33.559
shorter
00:18:30.840 --> 00:18:36.240
units there's also other options that
00:18:33.559 --> 00:18:37.919
you can use if you're uh very interested
00:18:36.240 --> 00:18:41.280
in or serious about this and want to
00:18:37.919 --> 00:18:43.720
handle this like uh as part of a
00:18:41.280 --> 00:18:45.960
research project or something like this
00:18:43.720 --> 00:18:48.520
and uh the way you can do this is you
00:18:45.960 --> 00:18:50.120
can build an unknown word model and an
00:18:48.520 --> 00:18:52.200
unknown word model basically what it
00:18:50.120 --> 00:18:54.520
does is it uh predicts the probability
00:18:52.200 --> 00:18:56.200
of unknown words using characters and
00:18:54.520 --> 00:18:59.559
then it models the probability of words
00:18:56.200 --> 00:19:01.159
using words and so now you can you have
00:18:59.559 --> 00:19:02.559
kind of like a hierarchical model where
00:19:01.159 --> 00:19:03.919
you first try to predict words and then
00:19:02.559 --> 00:19:06.720
if you can't predict words you predict
00:19:03.919 --> 00:19:08.960
unknown words so this isn't used as widely
00:19:06.720 --> 00:19:11.520
anymore but it's worth thinking about uh
00:19:08.960 --> 00:19:11.520
or knowing
00:19:11.840 --> 00:19:20.880
about okay uh so a second detail um a
00:19:17.200 --> 00:19:22.799
parameter uh so parameterizing in log
00:19:20.880 --> 00:19:25.880
space
00:19:22.799 --> 00:19:28.400
so the um multiplication of
00:19:25.880 --> 00:19:29.840
probabilities can be reexpressed as the
00:19:28.400 --> 00:19:31.840
addition of log
00:19:29.840 --> 00:19:34.159
probabilities uh so this is really
00:19:31.840 --> 00:19:35.720
important and this is widely used in all
00:19:34.159 --> 00:19:37.520
language models whether they're unigram
00:19:35.720 --> 00:19:39.640
language models or or neural language
00:19:37.520 --> 00:19:41.799
models there's actually a very simple
00:19:39.640 --> 00:19:45.440
reason why we why we do it this way does
00:19:41.799 --> 00:19:45.440
anybody uh know the
00:19:46.440 --> 00:19:52.679
answer what would happen if we
00:19:48.280 --> 00:19:56.720
multiplied uh let's say uh 30 30 tokens
00:19:52.679 --> 00:20:00.360
worth of probabilities together um
00:19:56.720 --> 00:20:02.120
yeah uh yeah too too small um so
00:20:00.360 --> 00:20:06.120
basically the problem is numerical
00:20:02.120 --> 00:20:07.520
underflow um so modern computers if if
00:20:06.120 --> 00:20:08.840
we weren't doing this on a computer and
00:20:07.520 --> 00:20:11.240
we were just doing math it wouldn't
00:20:08.840 --> 00:20:14.280
matter at all um but because we're doing
00:20:11.240 --> 00:20:17.280
it on a computer uh we
00:20:14.280 --> 00:20:17.280
have
00:20:20.880 --> 00:20:26.000
ours we have our
00:20:23.000 --> 00:20:26.000
32bit
00:20:27.159 --> 00:20:30.159
float
00:20:32.320 --> 00:20:37.720
where we have uh the exponent in the the
00:20:35.799 --> 00:20:40.159
fraction over here so the largest the
00:20:37.720 --> 00:20:41.960
exponent can get is limited by the
00:20:40.159 --> 00:20:45.880
number of exponent bits that we have in
00:20:41.960 --> 00:20:48.039
a 32-bit float and um if that's the case
00:20:45.880 --> 00:20:52.480
I forget exactly how large it is it's
00:20:48.039 --> 00:20:53.440
like yeah something like 10 to the minus 38 is that
00:20:52.480 --> 00:20:56.640
that
00:20:53.440 --> 00:20:58.520
right yeah but anyway like if the number
00:20:56.640 --> 00:21:00.640
gets too small you'll underflow it goes
00:20:58.520 --> 00:21:02.400
to zero and you'll get a zero
00:21:00.640 --> 00:21:05.720
probability despite the fact that it's
00:21:02.400 --> 00:21:07.640
not actually zero so um that's usually
00:21:05.720 --> 00:21:09.440
why we do this it's also a little bit
00:21:07.640 --> 00:21:12.960
easier for people just to look at like
00:21:09.440 --> 00:21:15.200
minus 30 instead of looking to something
00:21:12.960 --> 00:21:19.960
something times 10 to the minus 30 or
00:21:15.200 --> 00:21:24.520
something so uh that is why we normally
00:21:19.960 --> 00:21:27.159
use log probabilities um another thing that you can note is
00:21:24.520 --> 00:21:28.760
uh you can treat each of these in a
00:21:27.159 --> 00:21:31.360
unigram model you can treat each of
00:21:28.760 --> 00:21:37.039
these as parameters so we talked about
00:21:31.360 --> 00:21:39.640
parameters of a model uh like
00:21:37.039 --> 00:21:41.120
a bag of words model and we can
00:21:39.640 --> 00:21:44.080
similarly treat these unigram
00:21:41.120 --> 00:21:47.760
probabilities as parameters so um how
00:21:44.080 --> 00:21:47.760
many parameters does a unigram model
00:21:48.080 --> 00:21:51.320
have any
00:21:57.039 --> 00:22:02.400
ideas
00:21:59.600 --> 00:22:04.440
yeah yeah exactly parameters equal to
00:22:02.400 --> 00:22:08.120
the size of the vocabulary so this one's
00:22:04.440 --> 00:22:10.880
easy and then we can go to
00:22:08.120 --> 00:22:13.880
the slightly less easy ones
00:22:10.880 --> 00:22:16.039
there so anyway this is a unigram model
00:22:13.880 --> 00:22:17.960
uh it's it's not too hard um you
00:22:16.039 --> 00:22:20.480
basically count up and divide and then
00:22:17.960 --> 00:22:22.720
you add the probabilities here you
00:22:20.480 --> 00:22:25.440
could easily do it in a short Python
00:22:22.720 --> 00:22:28.400
program
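As a rough sketch of that count-up-and-divide recipe (the toy corpus here is invented for illustration), with scoring done by summing log probabilities, the underflow-avoiding trick from a moment ago:

```python
import math
from collections import Counter

# Toy corpus, invented for illustration; any whitespace-tokenized text works.
corpus = "this is an example this is an example of a unigram model".split()

# Count up and divide: P(w) = count(w) / total number of tokens.
counts = Counter(corpus)
total = sum(counts.values())
probs = {w: c / total for w, c in counts.items()}

def sequence_log_prob(tokens):
    """Score a sequence by adding log probabilities instead of
    multiplying raw probabilities, so long sequences don't underflow."""
    return sum(math.log(probs[t]) for t in tokens)

score = sequence_log_prob(["this", "is", "an", "example"])
```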
00:22:25.440 --> 00:22:31.600
higher order n-gram models um what these
00:22:28.400 --> 00:22:35.520
do is they essentially limit the context
00:22:31.600 --> 00:22:40.240
length to a length of N and then they
00:22:35.520 --> 00:22:42.600
count and divide so the way it works
00:22:40.240 --> 00:22:45.559
here maybe this is a little bit uh
00:22:42.600 --> 00:22:47.320
tricky but I can show an example so what
00:22:45.559 --> 00:22:49.840
we do is we count up the number of times
00:22:47.320 --> 00:22:51.320
we've seen 'this is an example' and then
00:22:49.840 --> 00:22:53.480
we divide by the number of times we've
00:22:51.320 --> 00:22:55.960
seen 'this is an' and that's the
00:22:53.480 --> 00:22:56.960
probability of 'example' given the
00:22:55.960 --> 00:22:58.720
previous
00:22:56.960 --> 00:23:00.559
context
00:22:58.720 --> 00:23:02.039
so the problem with this is anytime we
00:23:00.559 --> 00:23:03.400
get a sequence that we've never seen
00:23:02.039 --> 00:23:04.960
before like we would like to model
00:23:03.400 --> 00:23:07.200
longer sequences to make this more
00:23:04.960 --> 00:23:08.600
accurate but anytime we
00:23:07.200 --> 00:23:10.720
get a sequence that we've never seen
00:23:08.600 --> 00:23:12.919
before um it will get a probability of
00:23:10.720 --> 00:23:15.919
zero simply because this count on top
00:23:12.919 --> 00:23:19.919
of here will be zero so the way that uh
00:23:15.919 --> 00:23:22.640
n-gram language models work with this uh
00:23:19.919 --> 00:23:27.320
handle this is they fall back to
00:23:22.640 --> 00:23:31.840
shorter uh n-gram models so um this
00:23:27.320 --> 00:23:33.480
model sorry when I say n-gram uh n is the
00:23:31.840 --> 00:23:35.520
length of the context so this is a
00:23:33.480 --> 00:23:37.679
four-gram model here because the top context is
00:23:35.520 --> 00:23:40.520
four so the four-gram model would
00:23:37.679 --> 00:23:46.640
calculate this and then interpolate it
00:23:40.520 --> 00:23:48.640
like this with a trigram model
00:23:46.640 --> 00:23:50.400
uh and then the trigram model itself
00:23:48.640 --> 00:23:51.720
would interpolate with the bigram model
00:23:50.400 --> 00:23:53.440
the bigram model would interpolate with
00:23:51.720 --> 00:23:56.880
the unigram
00:23:53.440 --> 00:23:59.880
model oh this one oh
00:23:56.880 --> 00:23:59.880
okay
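To make that chain of interpolations concrete, here is a minimal sketch of just one step, a bigram model interpolated with a unigram model (the corpus and the lambda value are invented for illustration; a real implementation would chain this down through every order):

```python
from collections import Counter

# Hypothetical corpus for illustration.
corpus = "the box is on the box the box is red".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
total = sum(unigram_counts.values())

def p_unigram(w):
    return unigram_counts[w] / total

def p_bigram(w, prev, lam=0.8):
    """Bigram estimate interpolated with the unigram model:
    lam * P_bigram_MLE + (1 - lam) * P_unigram.
    Unseen contexts fall back entirely to the unigram distribution."""
    context = unigram_counts[prev]
    if context == 0:
        return p_unigram(w)
    mle = bigram_counts[(prev, w)] / context
    return lam * mle + (1 - lam) * p_unigram(w)

p = p_bigram("box", "the")  # 0.8 * 1.0 + 0.2 * 0.3 = 0.86
```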
00:24:02.159 --> 00:24:05.440
um one
00:24:07.039 --> 00:24:12.320
second could you uh help get it from the
00:24:10.000 --> 00:24:12.320
lock
00:24:26.799 --> 00:24:29.799
box
00:24:43.640 --> 00:24:50.200
um okay sorry
00:24:46.880 --> 00:24:53.640
so getting bad
00:24:50.200 --> 00:24:56.640
here just
00:24:53.640 --> 00:24:56.640
actually
00:24:56.760 --> 00:25:02.559
okay uh oh wow that's a lot
00:25:02.960 --> 00:25:12.080
better cool okay so
00:25:08.279 --> 00:25:14.159
um so this is uh how we deal with the
00:25:12.080 --> 00:25:18.799
fact that models can
00:25:14.159 --> 00:25:23.919
be um more precise but
00:25:18.799 --> 00:25:26.679
more sparse and less precise but less
00:25:23.919 --> 00:25:28.720
sparse this is also another concept that
00:25:26.679 --> 00:25:31.039
we're going to talk about more later uh
00:25:28.720 --> 00:25:33.240
in another class but this is a variety
00:25:31.039 --> 00:25:33.240
of
00:25:33.679 --> 00:25:38.440
ensembling where we have different
00:25:35.960 --> 00:25:40.360
models that are good at different things
00:25:38.440 --> 00:25:42.279
and we combine them together so this is
00:25:40.360 --> 00:25:44.760
the first instance that you would see of
00:25:42.279 --> 00:25:46.159
this there are other instances of this
00:25:44.760 --> 00:25:50.320
but the reason why I mentioned that this
00:25:46.159 --> 00:25:51.840
is a variety of ensembling is actually
00:25:50.320 --> 00:25:55.520
you're probably not going to be using
00:25:51.840 --> 00:25:57.840
n-gram models super widely unless you
00:25:55.520 --> 00:26:00.520
really want to process huge data sets
00:25:57.840 --> 00:26:02.399
because that is one advantage of them
00:26:00.520 --> 00:26:03.960
but some of these smoothing methods
00:26:02.399 --> 00:26:05.720
actually might be interesting even if
00:26:03.960 --> 00:26:10.520
you're using other models and ensembling
00:26:05.720 --> 00:26:10.520
them together so
00:26:10.600 --> 00:26:15.679
so in order to decide this
00:26:13.679 --> 00:26:19.559
interpolation coefficient one way we can
00:26:15.679 --> 00:26:23.440
do it is just set a fixed
00:26:19.559 --> 00:26:26.039
amount of probability that we use for
00:26:23.440 --> 00:26:29.000
every time so we could say that
00:26:26.039 --> 00:26:32.000
we always set this Lambda to 0.8 and
00:26:29.000 --> 00:26:34.320
always set this 1 minus lambda
00:26:32.000 --> 00:26:36.559
to 0.2 and interpolate those two
00:26:34.320 --> 00:26:39.120
together but actually there's more
00:26:36.559 --> 00:26:42.240
sophisticated methods of doing this and
00:26:39.120 --> 00:26:44.080
so one way of doing this is uh called
00:26:42.240 --> 00:26:47.240
additive
00:26:44.080 --> 00:26:50.600
smoothing excuse me and the way that
00:26:47.240 --> 00:26:54.039
additive smoothing works is um basically
00:26:50.600 --> 00:26:54.919
we add alpha to the top and
00:26:54.039 --> 00:26:58.000
the
00:26:54.919 --> 00:27:02.159
bottom and the reason why this is slightly
00:26:58.000 --> 00:27:06.279
different is as our counts get
00:27:02.159 --> 00:27:10.799
larger we start to approach the true
00:27:06.279 --> 00:27:10.799
distribution so just to give an
00:27:12.080 --> 00:27:19.480
example let's say we have uh the
00:27:17.640 --> 00:27:21.640
box
00:27:19.480 --> 00:27:26.279
is
00:27:21.640 --> 00:27:26.279
um let's say initially we
00:27:26.520 --> 00:27:29.520
have
00:27:31.159 --> 00:27:37.600
uh let's say our alpha is
00:27:33.840 --> 00:27:43.559
one so initially if we have
00:27:37.600 --> 00:27:47.320
nothing um if we have no evidence for
00:27:43.559 --> 00:27:47.320
our sorry I I
00:27:49.720 --> 00:27:54.960
realize let's say this is
00:27:52.640 --> 00:27:56.840
our fallback
00:27:54.960 --> 00:27:59.240
distribution um where this is a
00:27:56.840 --> 00:28:01.880
probability of 0.5 this is a
00:27:59.240 --> 00:28:03.360
probability of 0.3 and this is a
00:28:01.880 --> 00:28:06.559
probability of
00:28:03.360 --> 00:28:09.919
0.2 so now let's talk about our bigram
00:28:06.559 --> 00:28:13.399
model um and our bigram
00:28:09.919 --> 00:28:18.000
model has counts which is the
00:28:13.399 --> 00:28:18.000
'the box' and 'the
00:28:19.039 --> 00:28:24.480
is' so if we do something like this then
00:28:22.720 --> 00:28:26.720
um initially we have no counts like
00:28:24.480 --> 00:28:28.159
let's say we have no data uh about
00:28:26.720 --> 00:28:30.760
this distribution
00:28:28.159 --> 00:28:33.200
um our counts would be zero and our
00:28:30.760 --> 00:28:35.919
Alpha would be
00:28:33.200 --> 00:28:37.840
one and so we would just fall back to
00:28:35.919 --> 00:28:40.960
this distribution we just have like one
00:28:37.840 --> 00:28:43.320
times this distribution
00:28:40.960 --> 00:28:45.679
let's say then we have one piece of
00:28:43.320 --> 00:28:48.640
evidence and once we have one piece of
00:28:45.679 --> 00:28:52.279
evidence now this would be
00:28:48.640 --> 00:28:53.960
0.33 um and this would be alpha equal
00:28:52.279 --> 00:28:56.399
to 1 so we'd have
00:28:53.960 --> 00:28:58.679
0.5 times
00:28:56.399 --> 00:29:00.399
0.33
00:28:58.679 --> 00:29:04.039
uh and
00:29:00.399 --> 00:29:07.720
0.5 times
00:29:04.039 --> 00:29:10.840
0.3 uh is the probability of 'the box'
00:29:07.720 --> 00:29:12.840
because um basically we have one
00:29:10.840 --> 00:29:14.720
piece of evidence and we are adding a
00:29:12.840 --> 00:29:17.080
count of one to the lower order
00:29:14.720 --> 00:29:18.320
distribution then if we increase our
00:29:17.080 --> 00:29:24.159
count
00:29:18.320 --> 00:29:24.159
here um now we rely more
00:29:24.880 --> 00:29:30.960
strongly sorry that would be wrong
00:29:27.720 --> 00:29:32.399
so now we rely more strongly on the
00:29:30.960 --> 00:29:33.880
higher order distribution because we
00:29:32.399 --> 00:29:37.039
have more evidence for the higher order
00:29:33.880 --> 00:29:39.610
distribution so basically in this case
00:29:37.039 --> 00:29:41.240
um the probability
00:29:39.610 --> 00:29:44.559
of lambda,
00:29:41.240 --> 00:29:48.200
which I showed
00:29:44.559 --> 00:29:52.000
before is equal to the sum of the
00:29:48.200 --> 00:29:54.200
counts
00:29:52.000 --> 00:29:56.480
over the sum of the counts plus
00:29:54.200 --> 00:29:58.159
alpha so as the sum of the counts gets
00:29:56.480 --> 00:30:00.240
larger you rely on the higher order
00:29:58.159 --> 00:30:01.640
distribution and if the sum of the counts is
00:30:00.240 --> 00:30:02.760
smaller you
00:30:01.640 --> 00:30:04.320
rely more on the lower order
00:30:02.760 --> 00:30:06.720
distribution so the more evidence you
00:30:04.320 --> 00:30:11.640
have the more you rely on it so that's the
00:30:06.720 --> 00:30:11.640
basic idea behind these smoothing things
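The count-dependent weighting just described can be written down directly; this is a sketch with illustrative numbers (fallback probability 0.3, alpha of 1), not the exact figures from the board:

```python
def smoothed_prob(count_w, context_count, fallback_prob, alpha=1.0):
    """Additive smoothing: (c(w) + alpha * P_fallback(w)) / (c(.) + alpha).
    This equals interpolating the higher-order MLE with the fallback
    using lambda = c(.) / (c(.) + alpha), so more evidence means more
    weight on the higher-order distribution."""
    return (count_w + alpha * fallback_prob) / (context_count + alpha)

# No evidence at all: we just get the fallback probability back.
p_none = smoothed_prob(0, 0, 0.3)
# One piece of evidence: an even mix of the MLE (1.0) and the fallback (0.3).
p_one = smoothed_prob(1, 1, 0.3)
# Lots of evidence: dominated by the observed counts.
p_many = smoothed_prob(99, 100, 0.3)
```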
00:30:11.679 --> 00:30:16.679
um there's also a number of other
00:30:14.519 --> 00:30:18.760
varieties called uh
00:30:16.679 --> 00:30:20.799
discounting so uh the discount
00:30:18.760 --> 00:30:23.679
hyperparameter basically you subtract
00:30:20.799 --> 00:30:26.080
this off um uh you subtract this from
00:30:23.679 --> 00:30:27.840
the count so you would subtract like 0.5
00:30:26.080 --> 00:30:32.679
from each of the counts it's
00:30:27.840 --> 00:30:36.279
just empirically this is a better match
00:30:32.679 --> 00:30:38.600
for the fact that um natural language
00:30:36.279 --> 00:30:40.039
has a very long-tailed distribution um
00:30:38.600 --> 00:30:41.600
you can kind of do the math and show
00:30:40.039 --> 00:30:43.720
that that works and that's actually in
00:30:41.600 --> 00:30:46.080
this um in this paper if you're
00:30:43.720 --> 00:30:49.880
interested in looking at more details of
00:30:46.080 --> 00:30:51.519
that um and then kind of the
00:30:49.880 --> 00:30:53.440
state of the art in language modeling
00:30:51.519 --> 00:30:56.600
before neural language models came out
00:30:53.440 --> 00:30:59.919
was this Kneser-Ney smoothing and what it
00:30:56.600 --> 00:31:02.440
does is it discounts but it also
00:30:59.919 --> 00:31:04.480
modifies the lower order distribution so
00:31:02.440 --> 00:31:07.200
in the lower order distribution you
00:31:04.480 --> 00:31:09.039
basically um modify the counts with
00:31:07.200 --> 00:31:11.919
respect to how many times that word has
00:31:09.039 --> 00:31:13.519
appeared in new contexts with the
00:31:11.919 --> 00:31:16.360
idea being that you only use the lower
00:31:13.519 --> 00:31:18.880
order distribution when you have uh new
00:31:16.360 --> 00:31:21.200
contexts um and so you can kind of Be
00:31:18.880 --> 00:31:23.600
Clever
00:31:21.200 --> 00:31:25.399
about how you
00:31:23.600 --> 00:31:27.639
build this distribution based on the
00:31:25.399 --> 00:31:29.360
fact that you're only using it in the
00:31:27.639 --> 00:31:31.320
case when this distribution is not very
00:31:29.360 --> 00:31:33.960
reliable
00:31:31.320 --> 00:31:36.080
so I I would spend a lot more time
00:31:33.960 --> 00:31:37.960
teaching this when uh n-gram models were
00:31:36.080 --> 00:31:39.840
kind of the thing uh that people were
00:31:37.960 --> 00:31:41.960
using but now I'm going to go over them
00:31:39.840 --> 00:31:43.600
very quickly so you know don't worry if
00:31:41.960 --> 00:31:46.559
you weren't able to follow all the
00:31:43.600 --> 00:31:47.960
details but the basic thing to
00:31:46.559 --> 00:31:49.279
take away from this is number one these
00:31:47.960 --> 00:31:51.639
are the methods that people use for
00:31:49.279 --> 00:31:53.440
n-gram language models number two if
00:31:51.639 --> 00:31:55.720
you're thinking about combining language
00:31:53.440 --> 00:31:57.519
models together in some way through you
00:31:55.720 --> 00:31:59.279
know ensembling their probability or
00:31:57.519 --> 00:32:00.480
something like this this is something
00:31:59.279 --> 00:32:02.279
that you should think about a little bit
00:32:00.480 --> 00:32:03.679
more carefully because like some
00:32:02.279 --> 00:32:05.240
language models might be good in some
00:32:03.679 --> 00:32:07.440
context other language models might be
00:32:05.240 --> 00:32:09.440
good in other contexts so you would need
00:32:07.440 --> 00:32:11.799
to think about that when you're doing um
00:32:09.440 --> 00:32:18.200
when you're combining the models
00:32:11.799 --> 00:32:18.200
cool um any questions about
00:32:19.080 --> 00:32:24.840
this Okay
00:32:21.159 --> 00:32:27.840
cool so there's a lot of problems that
00:32:24.840 --> 00:32:30.760
we have to deal with um when we're
00:32:27.840 --> 00:32:32.600
creating n-gram models and that actually
00:32:30.760 --> 00:32:35.279
kind of motivated the reason why we
00:32:32.600 --> 00:32:36.639
moved to neural language models the
00:32:35.279 --> 00:32:38.720
first one is similar to what I talked
00:32:36.639 --> 00:32:40.519
about last time with text classification
00:32:38.720 --> 00:32:42.600
um that they can't share strength among
00:32:40.519 --> 00:32:45.159
similar words like bought and
00:32:42.600 --> 00:32:46.919
purchased um another thing is that they
00:32:45.159 --> 00:32:49.440
can't easily condition on context with
00:32:46.919 --> 00:32:51.240
intervening words so n-gram models if
00:32:49.440 --> 00:32:52.799
you have a rare word in your context
00:32:51.240 --> 00:32:54.320
immediately start falling back to the
00:32:52.799 --> 00:32:56.799
unigram distribution and they end up
00:32:54.320 --> 00:32:58.720
being very bad so uh that was another
00:32:56.799 --> 00:33:01.000
issue
00:32:58.720 --> 00:33:04.760
and they couldn't handle long distance
00:33:01.000 --> 00:33:09.080
um dependencies so if this was beyond
00:33:04.760 --> 00:33:10.559
the n-gram context that they would uh be
00:33:09.080 --> 00:33:14.320
handling then you wouldn't be able to
00:33:10.559 --> 00:33:15.840
manage this so actually before neural
00:33:14.320 --> 00:33:18.000
language models became a really big
00:33:15.840 --> 00:33:19.960
thing uh people came up with a bunch of
00:33:18.000 --> 00:33:22.760
individual solutions for this in order
00:33:19.960 --> 00:33:24.440
to solve the problems but actually it
00:33:22.760 --> 00:33:26.679
wasn't that these Solutions didn't work
00:33:24.440 --> 00:33:29.159
at all it was just that engineering all
00:33:26.679 --> 00:33:30.519
of them together was so hard that nobody
00:33:29.159 --> 00:33:32.120
actually ever did that and so they
00:33:30.519 --> 00:33:35.120
relied on just n-gram models out of the
00:33:32.120 --> 00:33:37.600
box and that wasn't scalable so it's
00:33:35.120 --> 00:33:39.279
kind of a funny example of how like
00:33:37.600 --> 00:33:42.000
actually neural networks despite all the
00:33:39.279 --> 00:33:43.559
pain that they cause in some areas are a
00:33:42.000 --> 00:33:47.120
much better engineering solution to
00:33:43.559 --> 00:33:51.279
solve all the issues that previous
00:33:47.120 --> 00:33:53.159
methods had cool um so compared to
00:33:51.279 --> 00:33:54.799
n-gram models neural language models
00:33:53.159 --> 00:33:56.559
achieve better performance but n-gram
00:33:54.799 --> 00:33:58.440
models are very very fast to estimate
00:33:56.559 --> 00:33:59.880
and apply you can even estimate them
00:33:58.440 --> 00:34:04.399
completely in
00:33:59.880 --> 00:34:07.720
parallel um n-gram models also I don't
00:34:04.399 --> 00:34:10.399
know if this is necessarily
00:34:07.720 --> 00:34:13.200
A a thing that
00:34:10.399 --> 00:34:15.079
you a reason to use n-gram language
00:34:13.200 --> 00:34:17.720
models but it is a reason to think a
00:34:15.079 --> 00:34:20.320
little bit critically about uh neural
00:34:17.720 --> 00:34:22.720
language models which is neural language
00:34:20.320 --> 00:34:24.320
models actually can be worse than n-gram
00:34:22.720 --> 00:34:26.679
language models at modeling very low
00:34:24.320 --> 00:34:28.480
frequency phenomena so n-gram language
00:34:26.679 --> 00:34:29.960
model can learn from a single example
00:34:28.480 --> 00:34:32.119
they only need a single example of
00:34:29.960 --> 00:34:36.879
anything before the probability of that
00:34:32.119 --> 00:34:38.639
continuation goes up very high um and uh
00:34:36.879 --> 00:34:41.359
but neural language models actually can
00:34:38.639 --> 00:34:43.599
forget or not memorize uh appropriately
00:34:41.359 --> 00:34:46.280
from single examples so they can be
00:34:43.599 --> 00:34:48.040
better at that um there's a toolkit the
00:34:46.280 --> 00:34:49.919
standard toolkit for estimating n-gram
00:34:48.040 --> 00:34:54.359
language models is called KenLM it's kind
00:34:49.919 --> 00:34:57.599
of frighteningly fast um and so people
00:34:54.359 --> 00:35:00.400
have been uh saying like I've seen some
00:34:57.599 --> 00:35:01.599
jokes which are like job postings that
00:35:00.400 --> 00:35:04.040
say we want
00:35:01.599 --> 00:35:04.040
people who have 10 years of
00:35:04.040 --> 00:35:05.880
experience working on
00:35:05.880 --> 00:35:07.359
large language
00:35:07.359 --> 00:35:11.960
models or something like that and a lot
00:35:09.240 --> 00:35:13.440
of people are saying wait nobody has 10
00:35:11.960 --> 00:35:16.400
years of experience working on large
00:35:13.440 --> 00:35:18.160
language models well Kenneth Heafield who
00:35:16.400 --> 00:35:19.440
created KenLM does have 10 years of
00:35:18.160 --> 00:35:22.800
experience working on large language
00:35:19.440 --> 00:35:24.599
models because he was estimating uh
00:35:22.800 --> 00:35:27.720
seven-gram
00:35:24.599 --> 00:35:30.320
models with a
00:35:27.720 --> 00:35:35.040
vocabulary of let's say
00:35:30.320 --> 00:35:37.720
100,000 on um you know web text so how
00:35:35.040 --> 00:35:41.119
many parameters is that that's more than
00:35:37.720 --> 00:35:44.320
any you know large neural language model
00:35:41.119 --> 00:35:45.640
that we have nowadays so um
00:35:44.320 --> 00:35:47.520
a lot of these parameters are
00:35:45.640 --> 00:35:49.400
sparse they're zero counts so obviously
00:35:47.520 --> 00:35:52.160
you don't memorize all of
00:35:49.400 --> 00:35:55.040
them but uh
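The back-of-the-envelope count implied there: a 7-gram model over a 100,000-word vocabulary has, in principle, one parameter per possible 7-gram, though almost all of them are zero counts in practice:

```python
vocab_size = 100_000
n = 7

# One parameter (a probability) per possible n-gram.
num_params = vocab_size ** n  # 10**35 possible 7-grams
```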
00:35:52.160 --> 00:35:57.800
yeah cool um another thing that maybe I
00:35:55.040 --> 00:35:59.359
should mention like so this doesn't
00:35:57.800 --> 00:36:01.960
sound completely outdated there was a
00:35:59.359 --> 00:36:05.400
really good paper
00:36:01.960 --> 00:36:08.400
recently that used the fact that n-gram models
00:36:05.400 --> 00:36:08.400
are
00:36:11.079 --> 00:36:17.319
so
00:36:14.280 --> 00:36:18.960
scalable it's this paper um it's called
00:36:17.319 --> 00:36:21.079
Data selection for language models via
00:36:18.960 --> 00:36:22.359
importance resampling and one
00:36:21.079 --> 00:36:24.359
interesting thing that they do in this
00:36:22.359 --> 00:36:28.920
paper is that they don't
00:36:24.359 --> 00:36:31.560
actually
00:36:28.920 --> 00:36:32.800
use neural models in any way
00:36:31.560 --> 00:36:34.920
despite the fact that they use the
00:36:32.800 --> 00:36:36.880
downstream data that they sample in
00:36:34.920 --> 00:36:41.319
order to train neural models but
00:36:36.880 --> 00:36:42.880
they run n-gram models over lots
00:36:41.319 --> 00:36:47.359
and lots of data and then they fit a
00:36:42.880 --> 00:36:50.000
Gaussian distribution to the n-gram model
00:36:47.359 --> 00:36:51.520
counts basically uh in order to select
00:36:50.000 --> 00:36:53.040
the data and the reason why they do this
00:36:51.520 --> 00:36:55.280
is they want to do this over the entire
00:36:53.040 --> 00:36:56.760
web and running a neural model over the
00:36:55.280 --> 00:36:58.920
entire web would be too expensive so
00:36:56.760 --> 00:37:00.319
they use n-gram models instead so that's
00:36:58.920 --> 00:37:02.359
just an example of something in the
00:37:00.319 --> 00:37:04.920
modern context where keeping this in
00:37:02.359 --> 00:37:04.920
mind is a good
00:37:08.200 --> 00:37:14.000
idea okay I'd like to move to the next
00:37:10.960 --> 00:37:15.319
part so language model evaluation uh
00:37:14.000 --> 00:37:17.200
this is important to know I'm not going
00:37:15.319 --> 00:37:19.079
to talk about language model evaluation
00:37:17.200 --> 00:37:20.599
on other tasks I'm only going to talk
00:37:19.079 --> 00:37:23.800
right now about language model
00:37:20.599 --> 00:37:26.280
evaluation on the task of language
00:37:23.800 --> 00:37:29.079
modeling and there's a number of metrics
00:37:26.280 --> 00:37:30.680
that we use for evaluating language
00:37:29.079 --> 00:37:30.680
models on
00:37:30.680 --> 00:37:32.720
the task of language modeling the first
00:37:32.720 --> 00:37:38.480
one is log likelihood and basically uh
00:37:35.560 --> 00:37:40.160
the way we calculate log likelihood is
00:37:38.480 --> 00:37:41.640
uh sorry there's an extra parenthesis
00:37:40.160 --> 00:37:45.480
here but the way we calculate log
00:37:41.640 --> 00:37:47.160
likelihood is we get a test set that
00:37:45.480 --> 00:37:50.400
ideally has not been included in our
00:37:47.160 --> 00:37:52.520
training data and we take all of the
00:37:50.400 --> 00:37:54.200
documents or sentences in the test set
00:37:52.520 --> 00:37:57.040
we calculate the log probability of all
00:37:54.200 --> 00:37:59.520
of them uh we don't actually use this
00:37:57.040 --> 00:38:02.640
super broadly to evaluate models and the
00:37:59.520 --> 00:38:04.200
reason why is because this number is
00:38:02.640 --> 00:38:05.720
very dependent on the size of the data
00:38:04.200 --> 00:38:07.119
set so if you have a larger data set
00:38:05.720 --> 00:38:08.720
this number will be larger if you have a
00:38:07.119 --> 00:38:10.960
smaller data set this number will be
00:38:08.720 --> 00:38:14.040
smaller so the more common thing to do
00:38:10.960 --> 00:38:15.839
is per-word log likelihood and
00:38:14.040 --> 00:38:19.800
per-word log likelihood is basically
00:38:15.839 --> 00:38:22.760
dividing the log
00:38:19.800 --> 00:38:25.520
probability of the entire corpus by
00:38:22.760 --> 00:38:28.359
the number of words that you have in the
00:38:25.520 --> 00:38:31.000
corpus
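As a sketch (the per-token probabilities here are made up), per-word log likelihood divides the corpus log probability by the token count; doing the same with base-2 logs and a sign flip gives cross-entropy in bits per word, which comes up in a moment:

```python
import math

# Hypothetical probabilities a model assigned to each token of a test set.
token_probs = [0.1, 0.25, 0.05, 0.5]

log_likelihood = sum(math.log(p) for p in token_probs)
per_word_ll = log_likelihood / len(token_probs)

# Same quantity in base 2 and negated: cross-entropy in bits per word.
bits_per_word = -sum(math.log2(p) for p in token_probs) / len(token_probs)
```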
00:38:28.359 --> 00:38:34.599
um it's also common for papers to report
00:38:31.000 --> 00:38:36.359
negative log likelihood uh because
00:38:34.599 --> 00:38:37.800
that's used as a loss and there lower is
00:38:36.359 --> 00:38:40.440
better so you just need to be careful
00:38:37.800 --> 00:38:42.560
about which one is being
00:38:40.440 --> 00:38:43.880
reported so this is pretty common I
00:38:42.560 --> 00:38:45.400
think most people are are somewhat
00:38:43.880 --> 00:38:49.040
familiar with
00:38:45.400 --> 00:38:49.800
this another thing that you might see is
00:38:49.040 --> 00:38:53.079
uh
00:38:49.800 --> 00:38:55.000
entropy and uh specifically this is
00:38:53.079 --> 00:38:57.319
often called cross entropy because
00:38:55.000 --> 00:38:59.880
you're calculating
00:38:57.319 --> 00:39:01.599
the you're estimating the model on a
00:38:59.880 --> 00:39:05.079
training data set and then evaluating it
00:39:01.599 --> 00:39:08.400
on a separate data set uh so uh on the
00:39:05.079 --> 00:39:12.200
test data set and this is often
00:39:08.400 --> 00:39:14.640
or usually calculated as log base 2 um of the
00:39:12.200 --> 00:39:17.119
probability divided by the number of
00:39:14.640 --> 00:39:18.760
words or units in the Corpus does anyone
00:39:17.119 --> 00:39:23.839
know why this is log
00:39:18.760 --> 00:39:23.839
two as opposed to a normal uh
00:39:25.440 --> 00:39:31.319
log
00:39:28.440 --> 00:39:31.319
anyone yeah
00:39:33.119 --> 00:39:38.720
so yeah so it's calculated as bits um
00:39:36.760 --> 00:39:43.160
and this is kind of
00:39:38.720 --> 00:39:45.240
a um this is kind of a historical thing
00:39:43.160 --> 00:39:47.119
and it's not super super important for
00:39:45.240 --> 00:39:51.800
language models but it's actually pretty
00:39:47.119 --> 00:39:54.599
interesting uh to to think about and so
00:39:51.800 --> 00:39:57.480
actually any probabilistic distribution
00:39:54.599 --> 00:40:00.040
can also be used for data compression
00:39:57.480 --> 00:40:03.319
um and so you know when you're running a
00:40:00.040 --> 00:40:05.000
zip file or you're running gzip or bz2
00:40:03.319 --> 00:40:07.359
or something like that uh you're
00:40:05.000 --> 00:40:09.240
compressing a file into a smaller file
00:40:07.359 --> 00:40:12.000
and any language model can also be used
00:40:09.240 --> 00:40:15.280
to compress a file into a smaller
00:40:12.000 --> 00:40:17.119
file um and so the way it does this is
00:40:15.280 --> 00:40:19.200
if you have more likely
00:40:17.119 --> 00:40:20.960
sequences uh for example more likely
00:40:19.200 --> 00:40:25.079
sentences or more likely documents you
00:40:20.960 --> 00:40:26.920
can compress them into a shorter
00:40:25.079 --> 00:40:29.440
output and
00:40:26.920 --> 00:40:29.440
kind of
00:40:29.640 --> 00:40:33.800
the
00:40:31.480 --> 00:40:35.720
ideal I I think it's pretty safe to say
00:40:33.800 --> 00:40:37.920
ideal because I think you can't get a
00:40:35.720 --> 00:40:42.920
better method for compression than this
00:40:37.920 --> 00:40:45.000
uh if I unless I'm uh you know not well
00:40:42.920 --> 00:40:46.800
versed enough in information Theory but
00:40:45.000 --> 00:40:49.240
I I think this is basically the ideal
00:40:46.800 --> 00:40:51.960
method for data compression and the way
00:40:49.240 --> 00:40:54.640
it works is um I have a figure up here
00:40:51.960 --> 00:40:58.800
but I'd like to recreate it here which
00:40:54.640 --> 00:41:02.640
is let's say we have a vocabulary of
00:40:58.800 --> 00:41:07.200
a um which has
00:41:02.640 --> 00:41:08.800
50% and then we have a vocabulary uh B
00:41:07.200 --> 00:41:11.560
which is
00:41:08.800 --> 00:41:14.040
33% and a vocabulary
00:41:11.560 --> 00:41:18.520
C
00:41:14.040 --> 00:41:18.520
uh yeah C which is about
00:41:18.640 --> 00:41:25.640
17% and so if you have a single token
00:41:22.960 --> 00:41:26.839
sequence um if you have a single token
00:41:25.640 --> 00:41:30.880
sequence
00:41:26.839 --> 00:41:30.880
what you do is you can
00:41:31.319 --> 00:41:38.800
see divide this into zero and one so if
00:41:36.400 --> 00:41:40.680
your single token sequence is a you can
00:41:38.800 --> 00:41:42.760
just put zero and you'll be done
00:41:40.680 --> 00:41:46.800
encoding it if your single token
00:41:42.760 --> 00:41:51.920
sequence is B
00:41:46.800 --> 00:41:56.520
then um one overlaps with 'b' and 'c' so now
00:41:51.920 --> 00:42:00.920
you need to further split this up into
00:41:56.520 --> 00:42:00.920
uh zero and one and you can see
00:42:04.880 --> 00:42:11.440
that let me make sure I did that right yeah
00:42:08.359 --> 00:42:11.440
you can see
00:42:15.599 --> 00:42:25.720
that one zero is entirely encompassed by
00:42:19.680 --> 00:42:29.200
uh by 'b' so now 'b' is one zero and 'c' is
00:42:25.720 --> 00:42:32.359
not entirely encompassed by that so you would
00:42:29.200 --> 00:42:39.240
need to further break this up and say
00:42:32.359 --> 00:42:41.880
it's zero and one here and now one one
00:42:39.240 --> 00:42:45.520
one is encompassed by this so you would
00:42:41.880 --> 00:42:48.680
you would get 'c' if it was 111 and
00:42:45.520 --> 00:42:51.119
so every sequence that started
00:42:48.680 --> 00:42:53.000
with zero would start out with a every
00:42:51.119 --> 00:42:54.960
sequence that started out with one zero
00:42:53.000 --> 00:42:57.200
would start with b and every sequence
00:42:54.960 --> 00:43:02.079
that started with one one one
00:42:57.200 --> 00:43:04.920
would start with 'c' um and so then you can look at the
00:43:02.079 --> 00:43:06.960
next word and let's say we're using a
00:43:04.920 --> 00:43:09.839
unigram model if we're using a unigram
00:43:06.960 --> 00:43:12.960
model for the next token
00:43:09.839 --> 00:43:18.200
let's say the next token is C
00:43:12.960 --> 00:43:23.640
so now the next token being C we already
00:43:18.200 --> 00:43:27.920
have 'b' and now we subdivide
00:43:23.640 --> 00:43:33.040
B into
00:43:27.920 --> 00:43:35.720
'ba', 'bb', and 'bc' and then we find the
00:43:33.040 --> 00:43:40.720
next binary sequence that is entirely
00:43:35.720 --> 00:43:44.000
encompassed by 'bc' by this
00:43:40.720 --> 00:43:45.359
interval and so the moment we find a
00:43:44.000 --> 00:43:48.520
binary sequence that's entirely
00:43:45.359 --> 00:43:50.599
encompassed by the interval uh then that
00:43:48.520 --> 00:43:53.400
is the sequence that we can use to
00:43:50.599 --> 00:43:54.640
represent that sequence and so um if you're
00:43:53.400 --> 00:43:56.520
interested in this you can look up the
00:43:54.640 --> 00:44:00.400
arithmetic coding on Wikipedia it's
00:43:56.520 --> 00:44:02.079
pretty fascinating but basically um here
00:44:00.400 --> 00:44:04.040
this is showing the example of the
00:44:02.079 --> 00:44:07.160
unigram model where the probabilities
00:44:04.040 --> 00:44:10.240
don't change based on the context but
00:44:07.160 --> 00:44:13.000
what if we knew that
00:44:10.240 --> 00:44:15.599
c had a really high probability of
00:44:13.000 --> 00:44:22.160
following B so if that's the case now we
00:44:15.599 --> 00:44:24.559
have like a a b c here um like based on
00:44:22.160 --> 00:44:25.880
our bigram model or neural language
00:44:24.559 --> 00:44:29.319
model or something like that so now this
00:44:25.880 --> 00:44:31.240
interval is much much larger so it's
00:44:29.319 --> 00:44:35.079
much more likely to entirely Encompass a
00:44:31.240 --> 00:44:39.720
shorter string and because of that the
00:44:35.079 --> 00:44:42.440
um the output can be much shorter and so
00:44:39.720 --> 00:44:45.760
if you use this arithmetic encoding um
00:44:42.440 --> 00:44:49.440
over a very long sequence of outputs
00:44:45.760 --> 00:44:52.440
the length of the sequence that is
00:44:49.440 --> 00:44:56.000
needed to encode this uh this particular
00:44:52.440 --> 00:45:00.359
output is going to be essentially um the
00:44:56.000 --> 00:45:03.319
number of bits per word according to the entropy
00:45:00.359 --> 00:45:06.480
times the length of the sequence so this is very
00:45:03.319 --> 00:45:10.000
directly connected to like compression
00:45:06.480 --> 00:45:13.160
and information Theory and stuff like
00:45:10.000 --> 00:45:15.359
that so that that's where entropy comes
00:45:13.160 --> 00:45:17.680
from uh are are there any questions
00:45:15.359 --> 00:45:17.680
about
00:45:19.319 --> 00:45:22.319
this
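To make the interval procedure above concrete, here is a toy sketch of arithmetic coding with a fixed unigram model; the symbols, probabilities, and function names are all invented for illustration, not taken from the lecture's slides:

```python
def encode_interval(seq, probs):
    """Narrow [low, high) once per symbol, splitting by cumulative probability."""
    low, high = 0.0, 1.0
    for s in seq:
        width = high - low
        cum = 0.0
        for t in sorted(probs):  # fixed symbol order, e.g. alphabetical
            if t == s:
                low, high = low + cum * width, low + (cum + probs[t]) * width
                break
            cum += probs[t]
    return low, high

def shortest_code(low, high):
    """Bisect [0, 1) until the binary interval fits entirely inside [low, high)."""
    bits, blo, bhi = "", 0.0, 1.0
    target = (low + high) / 2
    while not (low <= blo and bhi <= high):
        mid = (blo + bhi) / 2
        if target < mid:
            bits, bhi = bits + "0", mid  # take the lower half
        else:
            bits, blo = bits + "1", mid  # take the upper half
    return bits
```

With a unigram model `{"a": 0.5, "b": 0.25, "c": 0.25}`, the sequence `"ba"` narrows to the interval [0.5, 0.625), which the bit string `100` covers exactly: about -log2(0.125) = 3 bits, matching the sequence's probability.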
00:45:24.880 --> 00:45:28.119
yeah
00:45:26.800 --> 00:45:31.880
uh for
00:45:28.119 --> 00:45:34.319
c um so
00:45:31.880 --> 00:45:36.599
111 is
00:45:34.319 --> 00:45:37.920
because let me let me see if I can do
00:45:36.599 --> 00:45:40.559
this
00:45:37.920 --> 00:45:44.240
again
00:45:40.559 --> 00:45:44.240
so I had one
00:45:46.079 --> 00:45:54.520
one so here this interval is
00:45:50.920 --> 00:45:56.839
one this interval is one one this
00:45:54.520 --> 00:46:00.079
interval is 111
00:45:56.839 --> 00:46:03.520
and 111 is the first interval that is
00:46:00.079 --> 00:46:05.520
entirely overlapping with with c um and
00:46:03.520 --> 00:46:08.760
it's not one one zero because one one zero is
00:46:05.520 --> 00:46:08.760
overlapping with b and
00:46:09.960 --> 00:46:13.599
c so which
00:46:14.280 --> 00:46:21.720
case so in which case one one
00:46:20.160 --> 00:46:24.800
zero
00:46:21.720 --> 00:46:26.319
one one one
00:46:24.800 --> 00:46:30.800
zero
00:46:26.319 --> 00:46:30.800
when would you use 110 to represent
00:46:32.119 --> 00:46:38.839
something it's a good question I guess
00:46:36.119 --> 00:46:40.599
maybe you wouldn't which seems a little
00:46:38.839 --> 00:46:43.280
bit wasteful
00:46:40.599 --> 00:46:46.160
so let me let me think about that I
00:46:43.280 --> 00:46:49.920
think um it might be the case that you
00:46:46.160 --> 00:46:52.319
just don't use it um
00:46:49.920 --> 00:46:53.559
but yeah I'll try to think about that a
00:46:52.319 --> 00:46:55.920
little bit more because it seems like
00:46:53.559 --> 00:46:59.200
you should use every bit string right so
00:46:55.920 --> 00:47:01.559
um yeah if anybody uh has has the answer
00:46:59.200 --> 00:47:05.160
I'd be happy to hear it otherwise I'll
00:47:01.559 --> 00:47:05.160
get back to you cool um so next thing is perplexity
00:47:05.160 --> 00:47:10.640
so this is another one that you see
00:47:07.079 --> 00:47:13.240
commonly and um so perplexity is
00:47:10.640 --> 00:47:16.880
basically two to the entropy uh two to the
00:47:13.240 --> 00:47:20.760
per word entropy or e to the uh negative
00:47:16.880 --> 00:47:24.880
word level log likelihood in log space
00:47:20.760 --> 00:47:28.240
um and so this uh smaller tends to be
00:47:24.880 --> 00:47:32.559
better I'd like to do a little exercise
00:47:28.240 --> 00:47:34.599
to see uh if this works so like let's
00:47:32.559 --> 00:47:39.079
say we have when a dog sees a squirrel it
00:47:34.599 --> 00:47:40.960
will usually um and can anyone guess the
00:47:39.079 --> 00:47:43.480
next word just yell it
00:47:40.960 --> 00:47:46.400
out bark
00:47:43.480 --> 00:47:47.400
okay uh what about that what about
00:47:46.400 --> 00:47:50.400
something
00:47:47.400 --> 00:47:50.400
else
00:47:52.640 --> 00:47:57.520
Chase Run
00:47:54.720 --> 00:48:00.800
Run
00:47:57.520 --> 00:48:00.800
okay John
00:48:01.960 --> 00:48:05.280
John anything
00:48:07.000 --> 00:48:10.400
else any other
00:48:11.280 --> 00:48:16.960
ones so basically what this shows is
00:48:13.640 --> 00:48:16.960
humans are really bad language
00:48:17.160 --> 00:48:24.079
models so uh interestingly every single
00:48:21.520 --> 00:48:26.559
one of the words you predicted here is a
00:48:24.079 --> 00:48:32.240
uh a regular verb
00:48:26.559 --> 00:48:35.200
um but the natural language model gpt2 uh
00:48:32.240 --> 00:48:38.079
the first thing it predicts is be uh
00:48:35.200 --> 00:48:40.440
which is kind of like the copula there's
00:48:38.079 --> 00:48:43.400
also start and that will be like start
00:48:40.440 --> 00:48:44.880
running start something um and humans
00:48:43.400 --> 00:48:46.400
actually are really bad at doing this
00:48:44.880 --> 00:48:49.079
are really bad at predicting next words
00:48:46.400 --> 00:48:51.760
we're not trained that way um and so uh
00:48:49.079 --> 00:48:54.319
we end up having these biases but anyway
00:48:51.760 --> 00:48:55.799
um the reason why I did this quiz was
00:48:54.319 --> 00:48:57.280
because that's essentially what
00:48:55.799 --> 00:49:01.160
perplexity
00:48:57.280 --> 00:49:02.680
means um and what what perplexity is is
00:49:01.160 --> 00:49:04.559
it's the number of times you'd have to
00:49:02.680 --> 00:49:07.000
sample from the probability distribution
00:49:04.559 --> 00:49:09.200
before you get the answer right so you
00:49:07.000 --> 00:49:11.160
were a little bit biased here because we
00:49:09.200 --> 00:49:13.359
were doing sampling without replacement
00:49:11.160 --> 00:49:15.480
so like nobody was actually picking a
00:49:13.359 --> 00:49:17.000
word that had already been said but it's
00:49:15.480 --> 00:49:18.319
essentially like if you guessed over and
00:49:17.000 --> 00:49:20.839
over and over again how many times would
00:49:18.319 --> 00:49:22.720
you need until you get it right and so
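That guessing game is exactly what the formula measures; a minimal sketch with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per token
    (equivalently, 2 to the per-token entropy in bits)."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)
```

A model that always spreads its mass over four equally likely words gets `perplexity([0.25, 0.25, 0.25]) == 4.0`: four guesses on average before it gets the word right.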
00:49:20.839 --> 00:49:25.119
here like if the actual answer was start
00:49:22.720 --> 00:49:27.480
the perplexity would be 4.66 so we'd
00:49:25.119 --> 00:49:30.240
expect language model to get it in uh
00:49:27.480 --> 00:49:34.400
four guesses uh between four and five
00:49:30.240 --> 00:49:38.559
guesses and you guys all did six so you
00:49:34.400 --> 00:49:41.599
lose um so uh another important thing to
00:49:38.559 --> 00:49:42.799
mention is evaluation and vocabulary uh
00:49:41.599 --> 00:49:44.880
so for fair
00:49:42.799 --> 00:49:47.319
comparison um make sure that the
00:49:44.880 --> 00:49:49.559
denominator is the same so uh if you're
00:49:47.319 --> 00:49:51.559
calculating the perplexity make sure
00:49:49.559 --> 00:49:53.359
that you're dividing by the same number
00:49:51.559 --> 00:49:55.799
uh every time you're dividing by words
00:49:53.359 --> 00:49:58.520
if it's uh the other paper or whatever
00:49:55.799 --> 00:50:00.680
is dividing by words or like let's say
00:49:58.520 --> 00:50:02.160
you're comparing llama to gpt2 they have
00:50:00.680 --> 00:50:04.880
different tokenizers so they'll have
00:50:02.160 --> 00:50:07.040
different numbers of tokens so comparing
00:50:04.880 --> 00:50:10.880
uh with different denominators is not uh
00:50:07.040 --> 00:50:12.440
not fair um if you're allowing unknown
00:50:10.880 --> 00:50:14.559
words or characters so if you allow the
00:50:12.440 --> 00:50:17.640
model to not predict
00:50:14.559 --> 00:50:19.119
any token then you need to be fair about
00:50:17.640 --> 00:50:22.040
that
00:50:19.119 --> 00:50:25.160
too um so I'd like to go into a few
00:50:22.040 --> 00:50:27.960
Alternatives these are very similar to
00:50:25.160 --> 00:50:29.400
the Network classifiers and bag of words
00:50:27.960 --> 00:50:30.680
classifiers that I talked about before
00:50:29.400 --> 00:50:32.480
so I'm going to go through them rather
00:50:30.680 --> 00:50:35.480
quickly because I think you should get
00:50:32.480 --> 00:50:38.119
the basic idea but basically the
00:50:35.480 --> 00:50:40.000
alternative is uh feature-based models so we
00:50:38.119 --> 00:50:42.559
can also think of count based
00:50:40.000 --> 00:50:44.599
models as feature-based models we calculate
00:50:42.559 --> 00:50:46.880
features of the context and based on the
00:50:44.599 --> 00:50:48.280
features calculate probabilities
00:50:46.880 --> 00:50:50.480
optimize the feature weights using
00:50:48.280 --> 00:50:53.839
gradient descent uh
00:50:50.480 --> 00:50:56.119
Etc and so for example if we have uh
00:50:53.839 --> 00:50:58.880
input giving a
00:50:56.119 --> 00:51:02.960
uh we calculate features so um we might
00:50:58.880 --> 00:51:05.400
look up uh the word identity of the two
00:51:02.960 --> 00:51:08.240
previous words look up the word identity
00:51:05.400 --> 00:51:11.000
of the word uh directly previous add a
00:51:08.240 --> 00:51:13.480
bias add them all together get scores
00:51:11.000 --> 00:51:14.960
and calculate probabilities where each
00:51:13.480 --> 00:51:16.920
Vector is the size of the output
00:51:14.960 --> 00:51:19.680
vocabulary and feature weights are
00:51:16.920 --> 00:51:21.799
optimized using SGD so this is basically
00:51:19.680 --> 00:51:24.240
a bag of words classifier but it's a
00:51:21.799 --> 00:51:27.200
multiclass bag of words classifier over
00:51:24.240 --> 00:51:28.960
the next token so it's very similar to
00:51:27.200 --> 00:51:30.839
our classification task before except
00:51:28.960 --> 00:51:33.160
now instead of having two classes we
00:51:30.839 --> 00:51:36.280
have you know 10,000 classes or 100,000
00:51:33.160 --> 00:51:38.480
classes oh yeah sorry very quick aside
00:51:36.280 --> 00:51:40.280
um these were actually invented by Rony
00:51:38.480 --> 00:51:41.440
Rosenfeld who's the head of the machine
00:51:40.280 --> 00:51:45.119
learning department at CMU the
00:51:41.440 --> 00:51:47.799
machine learning Department uh so um 27
00:51:45.119 --> 00:51:50.760
years ago I guess so he has even more
00:51:47.799 --> 00:51:52.680
experience in large language modeling than
00:51:50.760 --> 00:51:55.880
um
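The feature-based (log-linear) setup just described can be sketched like this; the tiny vocabulary is invented, and the weights are random stand-ins for values that SGD would actually learn:

```python
import math
import random

# Minimal sketch of a feature-based LM: one score per output word for each
# feature -- here a bias feature plus the identity of the previous word --
# summed together, then softmaxed over the vocabulary.
random.seed(0)
vocab = ["a", "dog", "barks", "</s>"]

bias = [random.gauss(0, 1) for _ in vocab]                      # bias feature weights
prev_weights = {w: [random.gauss(0, 1) for _ in vocab] for w in vocab}  # previous-word feature

def next_word_probs(prev_word):
    # add up the score vectors contributed by each active feature
    scores = [b + w for b, w in zip(bias, prev_weights[prev_word])]
    m = max(scores)                       # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]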
00:51:52.680 --> 00:51:58.599
cool so um the one difference with a bag
00:51:55.880 --> 00:52:02.119
of words classifier is
00:51:58.599 --> 00:52:05.480
um we we have
00:52:02.119 --> 00:52:07.640
biases um and we have the probability
00:52:05.480 --> 00:52:09.400
Vector given the previous word but
00:52:07.640 --> 00:52:11.720
instead of using a bag of words this
00:52:09.400 --> 00:52:15.440
actually is using uh how likely
00:52:11.720 --> 00:52:15.440
giving is given the two previous words so uh
00:52:15.440 --> 00:52:18.040
the feature design would be a little bit
00:52:16.960 --> 00:52:19.119
different and that would give you a
00:52:18.040 --> 00:52:22.920
total
00:52:19.119 --> 00:52:24.359
score um as a reminder uh last time we
00:52:22.920 --> 00:52:26.440
did a training algorithm where we
00:52:24.359 --> 00:52:27.480
calculated gradients of the loss function with
00:52:26.440 --> 00:52:29.960
respect to the
00:52:27.480 --> 00:52:32.319
parameters and uh we can use the chain
00:52:29.960 --> 00:52:33.839
Rule and back propagation and updates to
00:52:32.319 --> 00:52:36.400
move in the direction that increases
00:52:33.839 --> 00:52:39.040
the likelihood so nothing extremely different
00:52:36.400 --> 00:52:42.640
from what we had for our
00:52:39.040 --> 00:52:44.240
bag of words model um similarly this solves some problems
00:52:42.640 --> 00:52:47.240
so this didn't solve the problem of
00:52:44.240 --> 00:52:49.119
sharing strength among similar words it
00:52:47.240 --> 00:52:50.839
did solve the problem of conditioning on
00:52:49.119 --> 00:52:52.839
context with intervening words because
00:52:50.839 --> 00:52:56.920
now we can condition directly on Doctor
00:52:52.839 --> 00:52:59.680
without having to um combine it with the
00:52:56.920 --> 00:52:59.680
intervening words um and it doesn't necessarily
00:52:59.680 --> 00:53:03.480
handle longdistance dependencies because
00:53:01.200 --> 00:53:05.240
we're still limited in our context with
00:53:03.480 --> 00:53:09.079
the model I just
00:53:05.240 --> 00:53:11.920
described so um if we so sorry back to
00:53:09.079 --> 00:53:13.480
neural networks is what I should say um
00:53:11.920 --> 00:53:15.160
so if we have a feedforward neural
00:53:13.480 --> 00:53:18.480
network language model the way this
00:53:15.160 --> 00:53:20.400
could work is instead of looking up
00:53:18.480 --> 00:53:23.079
discrete features uh like we had in a
00:53:20.400 --> 00:53:25.960
bag of words model uh we would look up
00:53:23.079 --> 00:53:27.400
dense embeddings and so we concatenate
00:53:25.960 --> 00:53:29.359
together these dense
00:53:27.400 --> 00:53:32.319
embeddings and based on the dense
00:53:29.359 --> 00:53:34.599
embeddings uh we do some sort of uh
00:53:32.319 --> 00:53:36.079
intermediate layer transforms to extract
00:53:34.599 --> 00:53:37.200
features like we did for our neural
00:53:36.079 --> 00:53:39.359
network based
00:53:37.200 --> 00:53:41.520
classifier um we multiply this by
00:53:39.359 --> 00:53:43.559
weights uh we have a bias and we
00:53:41.520 --> 00:53:46.559
calculate
00:53:43.559 --> 00:53:49.200
scores and uh then we take a soft Max to
00:53:46.559 --> 00:53:49.200
do
00:53:50.400 --> 00:53:55.799
classification so um this can calculate
00:53:53.359 --> 00:53:58.000
combination features uh like we we also
00:53:55.799 --> 00:54:02.280
used in our uh neural network based
00:53:58.000 --> 00:54:04.119
classifiers so um this could uh give us
00:54:02.280 --> 00:54:05.760
a positive number for example if the
00:54:04.119 --> 00:54:07.760
previous word is a determiner and the
00:54:05.760 --> 00:54:10.440
second previous word is a verb so that
00:54:07.760 --> 00:54:14.520
would be like uh in giving and then that
00:54:10.440 --> 00:54:14.520
would allow us to upweight that particular
00:54:15.000 --> 00:54:19.559
examples um so this allows us to share
00:54:17.640 --> 00:54:21.640
strength in various places in our model
00:54:19.559 --> 00:54:23.520
which was also You Know instrumental in
00:54:21.640 --> 00:54:25.599
making our our neural network
00:54:23.520 --> 00:54:28.000
classifiers work for similar words and
00:54:25.599 --> 00:54:30.119
stuff and so these would be word
00:54:28.000 --> 00:54:32.160
embeddings so similar words get similar
00:54:30.119 --> 00:54:35.079
embeddings another really important
00:54:32.160 --> 00:54:38.480
thing is uh similar output words also
00:54:35.079 --> 00:54:41.839
get similar rows in The softmax Matrix
00:54:38.480 --> 00:54:44.440
and so here remember if you remember
00:54:41.839 --> 00:54:48.240
from last class this was a big Matrix
00:54:44.440 --> 00:54:50.400
where the size of the Matrix was the
00:54:48.240 --> 00:54:53.319
number of vocabulary items times the
00:54:50.400 --> 00:54:55.920
size of a word embedding this is also a
00:54:53.319 --> 00:54:58.319
matrix where this is
00:54:55.920 --> 00:55:02.200
the number of vocabulary items times the
00:54:58.319 --> 00:55:04.160
size of a context embedding and so
00:55:02.200 --> 00:55:06.160
these will also be similar because words
00:55:04.160 --> 00:55:08.280
that appear in similar contexts will
00:55:06.160 --> 00:55:11.920
also you know want similar embeddings so
00:55:08.280 --> 00:55:15.119
they get updated at the same
00:55:11.920 --> 00:55:17.119
time and similar hidden States will have
00:55:15.119 --> 00:55:19.799
similar context so ideally like if you
00:55:17.119 --> 00:55:20.920
have giving a or delivering a or
00:55:19.799 --> 00:55:22.680
something like that those would be
00:55:20.920 --> 00:55:27.000
similar contexts so they would get
00:55:22.680 --> 00:55:27.000
similar purple embeddings out of the model
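Putting the pieces above together, a hypothetical feed-forward LM forward pass might look like the following; the weights are random and untrained and the vocabulary is invented, purely to show the data flow:

```python
import math
import random

# Minimal sketch of a feed-forward LM: look up dense embeddings for the two
# previous words, concatenate them, apply a tanh hidden layer, then take a
# softmax over the output vocabulary.
random.seed(0)
vocab = ["giving", "a", "talk", "</s>"]
EMB, HID = 4, 5

emb = {w: [random.gauss(0, 1) for _ in range(EMB)] for w in vocab}
W_h = [[random.gauss(0, 1) for _ in range(2 * EMB)] for _ in range(HID)]
W_o = [[random.gauss(0, 1) for _ in range(HID)] for _ in vocab]  # one row per output word

def next_word_probs(w2, w1):
    x = emb[w2] + emb[w1]                                  # concatenate embeddings
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in W_h]
    scores = [sum(wi * hi for wi, hi in zip(row, h)) for row in W_o]
    m = max(scores)                                        # stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return dict(zip(vocab, (e / z for e in exps)))
```

The hidden layer is what lets the model learn combination features like "previous word is a determiner and second-previous word is a verb".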
00:55:28.440 --> 00:55:31.599
so one trick that's widely used in
00:55:30.200 --> 00:55:34.960
language model that further takes
00:55:31.599 --> 00:55:38.799
advantage of this is uh tying
00:55:34.960 --> 00:55:44.160
embeddings so here what this does is
00:55:38.799 --> 00:55:48.280
sharing parameters between this um
00:55:44.160 --> 00:55:49.920
lookup Matrix here and this uh Matrix
00:55:48.280 --> 00:55:51.119
over here that we use for calculating
00:55:49.920 --> 00:55:56.200
the
00:55:51.119 --> 00:55:58.839
softmax and um the reason why this is
00:55:56.200 --> 00:56:00.559
useful is twofold number one it gives
00:55:58.839 --> 00:56:02.079
you essentially more training data to
00:56:00.559 --> 00:56:04.440
learn these embeddings because instead
00:56:02.079 --> 00:56:05.799
of learning the embeddings whenever a
00:56:04.440 --> 00:56:08.520
word is in
00:56:05.799 --> 00:56:10.599
context separately from learning the
00:56:08.520 --> 00:56:13.520
embeddings whenever a word is predicted
00:56:10.599 --> 00:56:15.480
you learn the the same embedding Matrix
00:56:13.520 --> 00:56:19.319
whenever the word is in the context or
00:56:15.480 --> 00:56:21.520
whenever it's predicted and so um that
00:56:19.319 --> 00:56:24.119
makes it more accurate to learn these uh
00:56:21.520 --> 00:56:26.960
embeddings well another thing is the
00:56:24.119 --> 00:56:31.119
embedding matrix can actually be very large
00:56:26.960 --> 00:56:34.920
so like let's say we have a vocab of
00:56:31.119 --> 00:56:37.520
100,000 and we have an embedding a
00:56:34.920 --> 00:56:40.799
word embedding size of like 512 or
00:56:37.520 --> 00:56:45.319
something like that
00:56:40.799 --> 00:56:45.319
that's um 51 million
00:56:46.839 --> 00:56:52.440
parameters um and this doesn't sound
00:56:49.559 --> 00:56:55.520
like a lot of parameters at first but it
00:56:52.440 --> 00:56:57.880
actually is a lot to learn when um
00:56:55.520 --> 00:57:01.000
these get updated relatively
00:56:57.880 --> 00:57:03.400
infrequently uh because
00:57:01.000 --> 00:57:06.079
um they
00:57:03.400 --> 00:57:07.960
only are
00:57:06.079 --> 00:57:09.559
updated whenever that word or token
00:57:07.960 --> 00:57:12.319
actually appears in your training data
00:57:09.559 --> 00:57:14.119
so um this can be a good thing for
00:57:12.319 --> 00:57:16.319
parameter savings parameter efficiency
00:57:14.119 --> 00:57:16.319
as
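A minimal sketch of the tying idea: one embedding table serves both as the input lookup and as the rows of the output softmax matrix (random untrained values, invented vocabulary, illustration only):

```python
import math
import random

# Minimal sketch of tied embeddings: the same per-word vectors are used to
# embed the context word and as the output (softmax) rows, so one parameter
# table is trained in both roles.
random.seed(0)
vocab = ["giving", "a", "talk", "</s>"]
DIM = 4
emb = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in vocab}

def next_word_probs(prev_word):
    h = emb[prev_word]  # (trivially) use the context embedding as the hidden state
    # score each output word with the SAME embedding table, used as softmax rows
    scores = [sum(e * x for e, x in zip(emb[w], h)) for w in vocab]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return dict(zip(vocab, (e / z for e in exps)))
```

With a 100,000-word vocabulary and 512-dimensional embeddings, sharing the table this way saves roughly 51 million parameters.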
00:57:16.440 --> 00:57:22.520
well um so this uh solves most of the
00:57:19.599 --> 00:57:24.319
problems here um but it doesn't solve
00:57:22.520 --> 00:57:26.839
the problem of longdistance dependencies
00:57:24.319 --> 00:57:29.839
because we're still limited by the overall
00:57:26.839 --> 00:57:31.359
length of uh the context that we're
00:57:29.839 --> 00:57:32.520
concatenating together here sure we
00:57:31.359 --> 00:57:35.760
could make that longer but that would
00:57:32.520 --> 00:57:37.200
make our model larger and um and bring
00:57:35.760 --> 00:57:39.720
various
00:57:37.200 --> 00:57:42.520
issues and so what I'm going to talk
00:57:39.720 --> 00:57:44.599
about on Thursday is how we solve
00:57:42.520 --> 00:57:47.559
this problem of modeling long contexts
00:57:44.599 --> 00:57:49.720
so how do we um build recurrent neural
00:57:47.559 --> 00:57:52.559
networks uh how do we build
00:57:49.720 --> 00:57:54.960
convolutional uh convolutional networks
00:57:52.559 --> 00:57:57.520
or how do we build attention based
00:57:54.960 --> 00:58:00.720
Transformer models and these are all
00:57:57.520 --> 00:58:02.119
options that are used um Transformers
00:58:00.720 --> 00:58:04.359
are kind of
00:58:02.119 --> 00:58:06.039
the the main thing that people use
00:58:04.359 --> 00:58:08.400
nowadays but there's a lot of versions
00:58:06.039 --> 00:58:11.880
of Transformers that borrow ideas from
00:58:08.400 --> 00:58:14.960
recurrent uh and convolutional models
00:58:11.880 --> 00:58:17.359
um recently a lot of long context models
00:58:14.960 --> 00:58:19.440
us use ideas from recurrent networks and
00:58:17.359 --> 00:58:22.160
a lot of for example speech models or
00:58:19.440 --> 00:58:24.160
things like or image models use ideas
00:58:22.160 --> 00:58:25.920
from convolutional networks so I think
00:58:24.160 --> 00:58:28.760
learning all of them at the same time is a
00:58:25.920 --> 00:58:32.160
good idea and comparing
00:58:28.760 --> 00:58:34.319
them cool uh any any questions about
00:58:32.160 --> 00:58:35.799
this part I went through this kind of
00:58:34.319 --> 00:58:37.319
quickly because it's pretty similar to
00:58:35.799 --> 00:58:40.079
the the classification stuff that we
00:58:37.319 --> 00:58:42.680
covered last time but uh any any things
00:58:40.079 --> 00:58:42.680
that people want to
00:58:43.880 --> 00:58:49.039
ask okay so next I'm going to talk about
00:58:46.839 --> 00:58:51.559
a few other desiderata of language
00:58:49.039 --> 00:58:53.039
models so the next one is really really
00:58:51.559 --> 00:58:55.640
important it's a concept I want
00:58:53.039 --> 00:58:57.640
everybody to know I actually
00:58:55.640 --> 00:58:59.520
taught this informally up until this
00:58:57.640 --> 00:59:02.039
class but now I I actually made slides
00:58:59.520 --> 00:59:05.079
for it starting this time which is
00:59:02.039 --> 00:59:07.240
calibration so the idea of calibration
00:59:05.079 --> 00:59:10.200
is that the model quote unquote knows
00:59:07.240 --> 00:59:14.559
when it knows or the the fact that it is
00:59:10.200 --> 00:59:17.480
able to provide a a good answer um uh
00:59:14.559 --> 00:59:21.640
provide a good confidence in its answer
00:59:17.480 --> 00:59:23.640
and more formally this can be specified
00:59:21.640 --> 00:59:25.240
as
00:59:23.640 --> 00:59:27.799
the
00:59:25.240 --> 00:59:29.200
property that the model probability of
00:59:27.799 --> 00:59:33.119
the answer matches the actual
00:59:29.200 --> 00:59:37.319
probability of getting it right um and
00:59:33.119 --> 00:59:37.319
so what this means
00:59:41.960 --> 00:59:47.480
is the
00:59:44.240 --> 00:59:51.839
probability of the
00:59:47.480 --> 00:59:51.839
answer um is
00:59:52.720 --> 00:59:59.880
correct given the fact that
00:59:56.319 --> 00:59:59.880
the model
01:00:00.160 --> 01:00:07.440
probability is equal to
01:00:03.640 --> 01:00:07.440
P is equal to
01:00:08.559 --> 01:00:12.760
p
01:00:10.480 --> 01:00:15.319
so I know this is a little bit hard to
01:00:12.760 --> 01:00:18.240
parse I it always took me like a few
01:00:15.319 --> 01:00:21.720
seconds to parse before I uh like when I
01:00:18.240 --> 01:00:25.160
looked at it but basically if the model
01:00:21.720 --> 01:00:26.920
if the model says the probability of it
01:00:25.160 --> 01:00:29.440
being correct is
01:00:26.920 --> 01:00:33.559
0.7 then the probability that the answer
01:00:29.440 --> 01:00:35.960
is correct is actually 0.7 so um you
01:00:33.559 --> 01:00:41.520
know if it says uh the probability is
01:00:35.960 --> 01:00:41.520
0.7 100 times then it will be right 70
01:00:43.640 --> 01:00:52.160
times and so the way we formalize this
01:00:48.039 --> 01:00:55.200
um is is by this uh it was proposed by
01:00:52.160 --> 01:00:57.760
this seminal paper by Guo et al in
01:00:55.200 --> 01:01:00.319
2017
01:00:57.760 --> 01:01:03.319
and
01:01:00.319 --> 01:01:05.520
unfortunately this data itself is hard
01:01:03.319 --> 01:01:08.119
to collect
01:01:05.520 --> 01:01:11.200
because the model probability is always
01:01:08.119 --> 01:01:13.359
different right and so if the model
01:01:11.200 --> 01:01:15.359
probability is like if the model
01:01:13.359 --> 01:01:20.480
probability was actually 0.7 that'd be
01:01:15.359 --> 01:01:22.000
nice but actually it's 0.7932685
01:01:20.480 --> 01:01:24.599
and you never get another example where
01:01:22.000 --> 01:01:26.319
the probability is exactly the same so
01:01:24.599 --> 01:01:28.280
what we do instead is we divide the
01:01:26.319 --> 01:01:30.240
model probabilities into buckets so we
01:01:28.280 --> 01:01:32.880
say the model probability is between 0
01:01:30.240 --> 01:01:36.599
and 0.1 we say the model probability is
01:01:32.880 --> 01:01:40.319
between 0.1 and 0.2 0.2 and 0.3 so we
01:01:36.599 --> 01:01:44.599
create buckets like this like these and
01:01:40.319 --> 01:01:46.520
then we looked at the model confidence
01:01:44.599 --> 01:01:52.839
the average model confidence within that
01:01:46.520 --> 01:01:55.000
bucket so maybe uh between 0.1 and 0 uh
01:01:52.839 --> 01:01:58.000
between 0 and 0.1 the model confidence
01:01:55.000 --> 01:02:00.920
on average is 0.055 or something like
01:01:58.000 --> 01:02:02.640
that so that would be this T here and
01:02:00.920 --> 01:02:05.079
then the accuracy is how often did it
01:02:02.640 --> 01:02:06.680
actually get it correct and this can be
01:02:05.079 --> 01:02:09.720
plotted in this thing called a
01:02:06.680 --> 01:02:15.039
reliability diagram and the reliability
01:02:09.720 --> 01:02:17.599
diagram basically um the the
01:02:15.039 --> 01:02:20.359
outputs uh
01:02:17.599 --> 01:02:26.359
here so this is
01:02:20.359 --> 01:02:26.359
um the this is the model
01:02:27.520 --> 01:02:34.119
yeah I think the red is the model
01:02:30.760 --> 01:02:36.400
um expected probability and then the
01:02:34.119 --> 01:02:40.559
blue uh the blue is the actual
01:02:36.400 --> 01:02:43.240
probability and then um
01:02:40.559 --> 01:02:45.160
the difference between the expected and
01:02:43.240 --> 01:02:47.160
the actual probability is kind of like
01:02:45.160 --> 01:02:48.359
the penalty there is how how poorly
01:02:47.160 --> 01:02:52.000
calibrated
01:02:48.359 --> 01:02:55.880
the model is and one really important thing to
01:02:52.000 --> 01:02:58.440
know is that calibration and accuracy are
01:02:55.880 --> 01:03:00.599
not necessarily they don't go hand in hand
01:02:58.440 --> 01:03:02.359
uh they do to some extent but they don't
01:03:00.599 --> 01:03:06.440
uh they don't necessarily go hand in
01:03:02.359 --> 01:03:06.440
hand and
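The bucketing scheme just described is usually summarized as the expected calibration error; here is a minimal sketch with made-up predictions (the function name follows the usual shorthand, ECE):

```python
# Minimal sketch of expected calibration error: group predictions into
# confidence buckets, then average |mean confidence - accuracy| per bucket,
# weighted by bucket size.
def expected_calibration_error(confidences, corrects, n_buckets=10):
    buckets = [[] for _ in range(n_buckets)]
    for conf, ok in zip(confidences, corrects):
        idx = min(int(conf * n_buckets), n_buckets - 1)  # e.g. 0.75 -> bucket 7
        buckets[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in buckets:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece
```

A model that says 0.75 and is right three times out of four contributes zero to this measure; a model that says 0.95 and is right one time in four contributes a lot.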
01:03:07.200 --> 01:03:14.319
the example on the left is a a bad model
01:03:11.200 --> 01:03:16.279
but a well calibrated one so its accuracy is
01:03:14.319 --> 01:03:18.720
uh its error is
01:03:16.279 --> 01:03:20.000
44.9% um but it's well calibrated as you
01:03:18.720 --> 01:03:21.440
can see like when it says it knows the
01:03:20.000 --> 01:03:23.880
answer it knows the answer when it
01:03:21.440 --> 01:03:27.799
doesn't it doesn't this model on the
01:03:23.880 --> 01:03:30.000
other hand has better error um but
01:03:27.799 --> 01:03:31.880
worse calibration so the reason why is
01:03:30.000 --> 01:03:36.680
the model is very very confident all the
01:03:31.880 --> 01:03:39.640
time and usually what happens is um
01:03:36.680 --> 01:03:41.200
models that overfit to the data
01:03:39.640 --> 01:03:43.359
especially when you do early stopping on
01:03:41.200 --> 01:03:44.760
something like accuracy uh when you stop
01:03:43.359 --> 01:03:47.279
the training on something like accuracy
01:03:44.760 --> 01:03:49.960
will become very overconfident and uh
01:03:47.279 --> 01:03:52.599
give confidence estimates um that are
01:03:49.960 --> 01:03:54.000
incorrect like this so this is important to
01:03:52.599 --> 01:03:56.079
know and the reason why it's important
01:03:54.000 --> 01:03:58.000
to know is actually because you know
01:03:56.079 --> 01:04:00.960
models are very good at making up things
01:03:58.000 --> 01:04:02.359
that aren't actually correct nowadays um
01:04:00.960 --> 01:04:04.920
and but if you have a really well
01:04:02.359 --> 01:04:07.760
calibrated model you could at least say
01:04:04.920 --> 01:04:09.920
with what confidence you have this
01:04:07.760 --> 01:04:12.760
working so how do you calculate the
01:04:09.920 --> 01:04:14.160
probability of an answer so H yeah sorry
01:04:12.760 --> 01:04:17.599
uh yes
01:04:14.160 --> 01:04:17.599
yes yeah please
01:04:17.799 --> 01:04:26.559
go the probability of percent or
01:04:23.200 --> 01:04:28.039
percent um usually this would be for a
01:04:26.559 --> 01:04:29.599
generated output because you want to
01:04:28.039 --> 01:04:32.559
know the the probability that the
01:04:29.599 --> 01:04:32.559
generated output is
01:04:53.160 --> 01:04:56.160
correct
01:05:01.079 --> 01:05:06.319
great that's what I'm about to talk
01:05:03.000 --> 01:05:07.839
about so perfect perfect question um so
01:05:06.319 --> 01:05:10.160
how do we calculate the answer
01:05:07.839 --> 01:05:13.279
probability or um how do we calculate
01:05:10.160 --> 01:05:15.039
the confidence in an answer um we're
01:05:13.279 --> 01:05:18.319
actually going to go into more detail
01:05:15.039 --> 01:05:20.760
about this um in a a later class but the
01:05:18.319 --> 01:05:23.200
first thing is probability of the answer
01:05:20.760 --> 01:05:25.799
and this is easy when there's a single
01:05:23.200 --> 01:05:29.079
answer um like if there's only one
01:05:25.799 --> 01:05:31.839
correct answer and you want your model
01:05:29.079 --> 01:05:34.160
to be solving math problems and you want
01:05:31.839 --> 01:05:38.319
it to return only the answer and nothing
01:05:34.160 --> 01:05:40.760
else if it returns anything else like it
01:05:38.319 --> 01:05:44.920
won't work then you can just use the
01:05:40.760 --> 01:05:47.119
probability of the answer but what
01:05:44.920 --> 01:05:49.559
if
01:05:47.119 --> 01:05:52.000
um what if there are multiple acceptable
01:05:49.559 --> 01:05:54.680
answers um and maybe a perfect example
01:05:52.000 --> 01:06:02.240
of that is like where is CMU located
01:05:54.680 --> 01:06:04.400
or um uh where where are we right now um
01:06:02.240 --> 01:06:06.960
if the answer is where are we right
01:06:04.400 --> 01:06:08.880
now um could be
01:06:06.960 --> 01:06:12.880
Pittsburgh could be
01:06:08.880 --> 01:06:12.880
CMU could be Carnegie
01:06:16.200 --> 01:06:24.440
Mellon could be other other things like
01:06:18.760 --> 01:06:26.760
this right um and so another way that
01:06:24.440 --> 01:06:28.319
you can calculate the confidence is
01:06:26.760 --> 01:06:31.240
calculating the probability of the
01:06:28.319 --> 01:06:33.680
answer plus uh you know paraphrases of
01:06:31.240 --> 01:06:35.799
the answer or other uh other things like
01:06:33.680 --> 01:06:37.680
this and so then you would just sum the
01:06:35.799 --> 01:06:38.839
probability over all the like
01:06:37.680 --> 01:06:41.680
acceptable
01:06:38.839 --> 01:06:45.359
answers
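Summing over acceptable answers can be sketched as follows; the answer strings and log-probabilities below are invented examples:

```python
import math

def answer_confidence(answer_logprobs):
    """Total model probability assigned to any acceptable answer string.

    answer_logprobs: the log-probability the model gives each paraphrase,
    e.g. {"Pittsburgh": ..., "CMU": ..., "Carnegie Mellon": ...}.
    """
    return sum(math.exp(lp) for lp in answer_logprobs.values())
```

The individual paraphrase probabilities can each be modest while the summed confidence in the underlying answer is high.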
01:06:41.680 --> 01:06:47.680
um another thing that you can do is um
01:06:45.359 --> 01:06:49.279
sample multiple outputs and count the
01:06:47.680 --> 01:06:51.000
number of times you get a particular
01:06:49.279 --> 01:06:54.440
answer this doesn't solve the problem of
01:06:51.000 --> 01:06:58.119
paraphrases existing but
01:06:54.440 --> 01:06:59.880
it does solve the problem of uh it does
01:06:58.119 --> 01:07:01.480
solve two problems sometimes there are
01:06:59.880 --> 01:07:05.240
language models where you can't get
01:07:01.480 --> 01:07:06.640
probabilities out of them um this is not
01:07:05.240 --> 01:07:08.680
so much of a problem anymore with the
01:07:06.640 --> 01:07:11.240
GPT models because they're reintroducing
01:07:08.680 --> 01:07:12.440
the ability to get probabilities but um
01:07:11.240 --> 01:07:13.720
there are some models where you can just
01:07:12.440 --> 01:07:16.279
sample from them and you can't get
01:07:13.720 --> 01:07:18.680
probabilities out but also more
01:07:16.279 --> 01:07:21.039
importantly um sometimes when you're
01:07:18.680 --> 01:07:23.000
using things like uh Chain of Thought
01:07:21.039 --> 01:07:26.520
reasoning which I'll talk about in more
01:07:23.000 --> 01:07:29.839
detail but basically it's like um please
01:07:26.520 --> 01:07:31.480
solve this math problem and explain
01:07:29.839 --> 01:07:33.480
explain your solution and then if it
01:07:31.480 --> 01:07:35.119
will do that it will generate you know a
01:07:33.480 --> 01:07:36.279
really long explanation of how it got to
01:07:35.119 --> 01:07:40.119
the solution and then it will give you
01:07:36.279 --> 01:07:41.640
the answer at the very end and so then
01:07:40.119 --> 01:07:44.960
you can't calculate the probability of
01:07:41.640 --> 01:07:47.720
the actual like answer itself because
01:07:44.960 --> 01:07:49.359
there's this long reasoning chain in
01:07:47.720 --> 01:07:51.960
between and you have like all these
01:07:49.359 --> 01:07:53.559
other text there but what
01:07:51.960 --> 01:07:55.480
you can do is you can sample those
01:07:53.559 --> 01:07:56.920
reasoning chains 100 times and then see
01:07:55.480 --> 01:07:59.599
how many times you got a particular
01:07:56.920 --> 01:08:02.960
answer and that's actually a
01:07:59.599 --> 01:08:06.079
pretty reasonable way of uh
01:08:02.960 --> 01:08:09.000
getting a confidence estimate
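The sample-and-count idea can be sketched in Python (the sampled answers below are hypothetical stand-ins for what you would extract from the end of real chain-of-thought outputs):

```python
from collections import Counter

# Hypothetical final answers extracted from 10 sampled
# chain-of-thought generations (made-up data for illustration).
sampled_answers = ["42", "42", "41", "42", "42", "17", "42", "42", "41", "42"]

# The most frequent answer wins; its vote share doubles as a rough
# confidence score, even when no token probabilities are available.
counts = Counter(sampled_answers)
answer, votes = counts.most_common(1)[0]
confidence = votes / len(sampled_answers)
print(answer, confidence)  # prints: 42 0.7
```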
01:08:06.079 --> 01:08:11.200
this is my favorite one I love how
01:08:09.000 --> 01:08:12.880
we can do this now it's just absolutely
01:08:11.200 --> 01:08:16.480
ridiculous but you could ask the model
01:08:12.880 --> 01:08:20.279
how confident it is and um it sometimes
01:08:16.480 --> 01:08:22.359
gives you a reasonable
01:08:20.279 --> 01:08:24.600
answer um there's a really nice
01:08:22.359 --> 01:08:26.400
comparison of different methods uh in
01:08:24.600 --> 01:08:29.679
this paper which is also on the
01:08:26.400 --> 01:08:31.960
website and basically long story short
01:08:29.679 --> 01:08:34.000
the conclusion from this paper is the
01:08:31.960 --> 01:08:35.640
sampling multiple outputs one is the
01:08:34.000 --> 01:08:36.839
best way to do it if you can't directly
01:08:35.640 --> 01:08:39.520
calculate
01:08:36.839 --> 01:08:41.359
probabilities um another thing that I'd
01:08:39.520 --> 01:08:42.600
like people to pay very close attention
01:08:41.359 --> 01:08:45.040
to is in the
01:08:42.600 --> 01:08:46.480
generation class
01:08:45.040 --> 01:08:49.600
we're going to be talking about minimum
01:08:46.480 --> 01:08:52.600
Bayes risk which is a criterion for
01:08:49.600 --> 01:08:54.719
deciding how risky an output is and it's
01:08:52.600 --> 01:08:56.199
actually a really good uh confidence
01:08:54.719 --> 01:08:58.000
metric as well but I'm going to leave
01:08:56.199 --> 01:08:59.440
that until we discuss it in more
01:08:58.000 --> 01:09:02.759
detail
01:08:59.440 --> 01:09:05.359
um any questions
01:09:02.759 --> 01:09:08.440
here okay
01:09:05.359 --> 01:09:10.480
cool um so the other Criterion uh this
01:09:08.440 --> 01:09:12.520
is just yet another Criterion that we
01:09:10.480 --> 01:09:15.239
would like language models to be good at
01:09:12.520 --> 01:09:17.600
um is efficiency and so basically the
01:09:15.239 --> 01:09:21.920
model is easy to run on limited Hardware
01:09:17.600 --> 01:09:25.400
by some you know uh metric of easy and
01:09:21.920 --> 01:09:29.319
some metrics that we like to talk about
01:09:25.400 --> 01:09:32.400
are parameter count so often you will
01:09:29.319 --> 01:09:34.239
see oh this is the best model under
01:09:32.400 --> 01:09:35.520
three billion parameters or this is the
01:09:34.239 --> 01:09:37.960
best model under seven billion
01:09:35.520 --> 01:09:39.600
parameters or um we trained a model with
01:09:37.960 --> 01:09:42.159
one trillion parameters or something
01:09:39.600 --> 01:09:44.719
like that you know
01:09:42.159 --> 01:09:46.839
uh the thing is parameter count doesn't
01:09:44.719 --> 01:09:49.640
really mean that much um from the point
01:09:46.839 --> 01:09:52.839
of view of like ease of using the model
01:09:49.640 --> 01:09:54.400
um unless you also think about other uh
01:09:52.839 --> 01:09:56.480
you know desiderata
01:09:54.400 --> 01:09:58.840
like just to give one example with
01:09:56.480 --> 01:10:00.880
parameter count um let's say you have a
01:09:58.840 --> 01:10:02.960
parameter count of 7 billion is that 7
01:10:00.880 --> 01:10:05.719
billion parameters at 32-bit Precision
01:10:02.960 --> 01:10:07.800
or is that 7 billion parameters at 4-bit
01:10:05.719 --> 01:10:09.400
precision that will make a huge difference
01:10:07.800 --> 01:10:12.960
in your memory footprint your speed
01:10:09.400 --> 01:10:14.920
other things like that um so some of the
01:10:12.960 --> 01:10:18.040
things that are more direct with respect
01:10:14.920 --> 01:10:19.800
to efficiency are memory usage um and
01:10:18.040 --> 01:10:22.440
there's two varieties of memory usage
01:10:19.800 --> 01:10:24.280
one is uh model-only memory usage
01:10:22.440 --> 01:10:27.120
so when you have loaded the model into
01:10:24.280 --> 01:10:29.120
memory uh how much space does it take
01:10:27.120 --> 01:10:31.159
and also Peak memory consumption when
01:10:29.120 --> 01:10:33.159
you have run the model over a
01:10:31.159 --> 01:10:35.920
sequence of a certain length how much is
01:10:33.159 --> 01:10:40.040
it going to peak at so that's another
01:10:35.920 --> 01:10:43.000
thing another thing is latency um and
01:10:40.040 --> 01:10:46.440
with respect to latency this can be
01:10:43.000 --> 01:10:49.440
either how long does it take to start
01:10:46.440 --> 01:10:52.080
outputting the first token um and how
01:10:49.440 --> 01:10:54.840
long does it take to uh finish
01:10:52.080 --> 01:10:59.480
outputting uh a generation of a certain
01:10:54.840 --> 01:11:01.199
length and the first will have more to
01:10:59.480 --> 01:11:04.960
do with how long does it take to encode
01:11:01.199 --> 01:11:06.480
a sequence um which is usually faster
01:11:04.960 --> 01:11:09.080
than how long does it take to generate a
01:11:06.480 --> 01:11:11.360
sequence so the first will have to do
01:11:09.080 --> 01:11:13.000
with encoding time while the second will
01:11:11.360 --> 01:11:15.880
require encoding time of course but will
01:11:13.000 --> 01:11:15.880
also require generation
01:11:16.280 --> 01:11:21.840
time also throughput so you know how
01:11:19.239 --> 01:11:23.679
many sentences can you
01:11:21.840 --> 01:11:25.400
process in a certain amount of time so
01:11:23.679 --> 01:11:26.480
all of these are kind of desiderata that
01:11:25.400 --> 01:11:29.000
you would
01:11:26.480 --> 01:11:30.280
say um we're going to be talking about
01:11:29.000 --> 01:11:31.920
this more in the distillation and
01:11:30.280 --> 01:11:33.199
compression and generation algorithms
01:11:31.920 --> 01:11:35.640
classes so I won't go into a whole lot
01:11:33.199 --> 01:11:36.840
of detail about this but um it's just
01:11:35.640 --> 01:11:39.960
another thing that we want to be
01:11:36.840 --> 01:11:43.560
thinking about in addition to
01:11:39.960 --> 01:11:45.360
complexity um but since I'm I'm on the
01:11:43.560 --> 01:11:47.800
topic of efficiency I would like to talk
01:11:45.360 --> 01:11:49.480
just a little bit about it um in terms
01:11:47.800 --> 01:11:51.000
of especially things that will be useful
01:11:49.480 --> 01:11:53.600
for implementing your first
01:11:51.000 --> 01:11:55.840
assignment and uh one thing that every
01:11:53.600 --> 01:11:58.639
body should know about um if you've done
01:11:55.840 --> 01:11:59.920
any like deep learning with pytorch or
01:11:58.639 --> 01:12:02.639
something like this you already know
01:11:59.920 --> 01:12:05.880
about this probably but uh I think it's
01:12:02.639 --> 01:12:08.760
worth mentioning but basically mini
01:12:05.880 --> 01:12:12.120
batching or batching uh is uh very
01:12:08.760 --> 01:12:15.320
useful and the basic idea behind it is
01:12:12.120 --> 01:12:17.560
that on Modern Hardware if you do many
01:12:15.320 --> 01:12:20.520
of the same operations at once it's much
01:12:17.560 --> 01:12:24.320
faster than doing um
01:12:20.520 --> 01:12:25.480
operations sequentially and
01:12:24.320 --> 01:12:27.280
that's especially the case if you're
01:12:25.480 --> 01:12:30.520
programming in an extremely slow
01:12:27.280 --> 01:12:33.239
programming language like python um I
01:12:30.520 --> 01:12:37.239
love python but it's slow I mean like
01:12:33.239 --> 01:12:38.719
there's no argument about that um and so
01:12:37.239 --> 01:12:40.520
what mini batching does is it combines
01:12:38.719 --> 01:12:43.600
together smaller operations into one big
01:12:40.520 --> 01:12:47.480
one and the basic idea uh for example if
01:12:43.600 --> 01:12:51.679
we want to calculate our um our linear
01:12:47.480 --> 01:12:56.560
layer with a tanh nonlinearity after it
01:12:51.679 --> 01:12:59.760
we will take several inputs X1 X2 X3
01:12:56.560 --> 01:13:02.040
concatenate them together and do a
01:12:59.760 --> 01:13:04.600
Matrix Matrix multiply instead of doing
01:13:02.040 --> 01:13:07.960
three Vector Matrix
01:13:04.600 --> 01:13:09.239
multiplies and so what we do is we take
01:13:07.960 --> 01:13:11.280
a whole bunch of examples we take like
01:13:09.239 --> 01:13:13.840
64 examples or something like that and
01:13:11.280 --> 01:13:18.000
we combine them together and calculate
01:13:13.840 --> 01:13:21.280
all of them at once one thing to know is that
01:13:18.000 --> 01:13:22.560
if you're working with sentences there's
01:13:21.280 --> 01:13:24.719
different ways you can calculate the
01:13:22.560 --> 01:13:27.360
size of your mini batch
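The stacking idea just described can be sketched with NumPy standing in for a real framework (the sizes are arbitrary; a linear layer followed by a tanh nonlinearity):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))      # linear layer weights
b = rng.standard_normal(4)           # bias
x1, x2, x3 = rng.standard_normal((3, 4))

# Unbatched: three separate vector-matrix multiplies.
unbatched = [np.tanh(x @ W + b) for x in (x1, x2, x3)]

# Batched: stack the inputs into one matrix and multiply once.
X = np.stack([x1, x2, x3])
batched = np.tanh(X @ W + b)

assert np.allclose(batched, np.stack(unbatched))  # identical results
```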
01:13:24.719 --> 01:13:28.880
normally nowadays the thing that people
01:13:27.360 --> 01:13:30.400
do and the thing that I recommend is to
01:13:28.880 --> 01:13:31.679
calculate the size of your mini batches
01:13:30.400 --> 01:13:33.639
based on the number of tokens in the
01:13:31.679 --> 01:13:35.840
mini batch it used to be that you would
01:13:33.639 --> 01:13:39.719
do it based on the number of sequences
01:13:35.840 --> 01:13:43.800
but the problem is um like 50
01:13:39.719 --> 01:13:47.120
sequences of length like 100 is much
01:13:43.800 --> 01:13:49.480
more memory intensive than uh 50
01:13:47.120 --> 01:13:51.960
sequences of length five and so you get
01:13:49.480 --> 01:13:53.920
mini batches
01:13:51.960 --> 01:13:57.000
of vastly varying size and that's both
01:13:53.920 --> 01:13:59.800
bad for you know memory overflows and
01:13:57.000 --> 01:14:01.639
bad for learning
01:13:59.800 --> 01:14:04.280
stability so I definitely recommend
01:14:01.639 --> 01:14:06.880
doing it based on the number of
01:14:04.280 --> 01:14:09.080
tokens uh another thing is gpus versus
01:14:06.880 --> 01:14:12.400
CPUs so
01:14:09.080 --> 01:14:14.600
um CPUs one way you can think of it
01:14:12.400 --> 01:14:17.320
is a CPU is kind of like a motorcycle it's
01:14:14.600 --> 01:14:19.600
very fast at picking up and doing a
01:14:17.320 --> 01:14:23.960
bunch of uh things very quickly
01:14:19.600 --> 01:14:26.600
accelerating into starting new
01:14:23.960 --> 01:14:28.760
tasks a GPU is more like an airplane
01:14:26.600 --> 01:14:30.719
which uh you wait forever in line in
01:14:28.760 --> 01:14:33.360
security and
01:14:30.719 --> 01:14:34.800
then it takes a long time to
01:14:33.360 --> 01:14:40.400
get off the ground and start working but
01:14:34.800 --> 01:14:43.679
once it does it's extremely fast um and
01:14:40.400 --> 01:14:45.360
so if we do a simple example of how long
01:14:43.679 --> 01:14:47.600
does it take to do a Matrix Matrix
01:14:45.360 --> 01:14:49.040
multiply I calculated this a really long
01:14:47.600 --> 01:14:51.280
time ago it's probably horribly out of
01:14:49.040 --> 01:14:55.120
date now but the same general principle
01:14:51.280 --> 01:14:56.560
stands which is if we have um the
01:14:55.120 --> 01:14:58.480
number of seconds that it takes to do a
01:14:56.560 --> 01:15:02.080
Matrix Matrix multiply doing one of size
01:14:58.480 --> 01:15:03.920
16 is actually faster on CPU because uh
01:15:02.080 --> 01:15:07.760
the overhead it takes to get started is
01:15:03.920 --> 01:15:10.880
very low but um once you start
01:15:07.760 --> 01:15:13.360
getting up to size like 128 by 128
01:15:10.880 --> 01:15:15.800
Matrix multiplies then doing it on GPU
01:15:13.360 --> 01:15:17.320
is faster and then um it's you know a
01:15:15.800 --> 01:15:19.679
100 times faster once you start getting
01:15:17.320 --> 01:15:21.600
up to very large matrices so um if
01:15:19.679 --> 01:15:24.000
you're dealing with very large networks
01:15:21.600 --> 01:15:26.800
having a GPU is good
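Since the slide's absolute numbers are dated, a rough CPU-only version of the same overhead-versus-throughput experiment can be run with NumPy (sizes are arbitrary; absolute timings will vary by machine):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
X = rng.standard_normal((256, 512))    # 256 inputs stacked together

t0 = time.perf_counter()
batched = X @ W                        # one matrix-matrix multiply
t_batched = time.perf_counter() - t0

t0 = time.perf_counter()
looped = np.stack([x @ W for x in X])  # 256 vector-matrix multiplies
t_looped = time.perf_counter() - t0

assert np.allclose(batched, looped)    # same numbers, different speed
print(f"batched: {t_batched:.4f}s  looped: {t_looped:.4f}s")
```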
01:15:24.000 --> 01:15:30.159
um and this is the speed up
01:15:26.800 --> 01:15:31.440
percentage um one thing I should mention
01:15:30.159 --> 01:15:34.239
is
01:15:31.440 --> 01:15:36.440
um compute with respect to like doing
01:15:34.239 --> 01:15:39.800
the assignments for this class if you
01:15:36.440 --> 01:15:43.199
have a relatively recent Mac you're kind
01:15:39.800 --> 01:15:44.760
of in luck because actually the gpus on
01:15:43.199 --> 01:15:47.239
the Mac are pretty fast and they're well
01:15:44.760 --> 01:15:48.960
integrated with um they're well
01:15:47.239 --> 01:15:52.080
integrated with PyTorch and other things
01:15:48.960 --> 01:15:53.440
like that so decently sized models maybe
01:15:52.080 --> 01:15:54.840
up to the size that you would need to
01:15:53.440 --> 01:15:57.840
run for assignment one or even
01:15:54.840 --> 01:16:00.880
assignment two might uh just run on your
01:15:57.840 --> 01:16:03.639
uh laptop computer um if you don't have
01:16:00.880 --> 01:16:05.280
a GPU uh that you have immediately
01:16:03.639 --> 01:16:06.760
accessible to you uh we're going to
01:16:05.280 --> 01:16:08.400
recommend that you use Colab where you
01:16:06.760 --> 01:16:10.120
can get a GPU uh for the first
01:16:08.400 --> 01:16:12.440
assignments and then we'll have cloud
01:16:10.120 --> 01:16:15.159
credits that you can use otherwise but
01:16:12.440 --> 01:16:16.800
um GPU is usually like something that
01:16:15.159 --> 01:16:18.440
you can get on the cloud or one that you
01:16:16.800 --> 01:16:21.080
have on your Mac or one that you have on
01:16:18.440 --> 01:16:24.600
your gaming computer or something like
01:16:21.080 --> 01:16:26.040
that um there's a few speed tricks that
01:16:24.600 --> 01:16:30.000
you should know for efficient GPU
01:16:26.040 --> 01:16:32.480
operations so um one mistake that people
01:16:30.000 --> 01:16:35.880
make when creating models is they repeat
01:16:32.480 --> 01:16:38.080
operations over and over again and um
01:16:35.880 --> 01:16:40.600
you don't want to be doing this so like
01:16:38.080 --> 01:16:43.239
for example um this is multiplying a
01:16:40.600 --> 01:16:45.320
matrix by a constant multiple times and
01:16:43.239 --> 01:16:46.880
if you're just using out of the box
01:16:46.880 --> 01:16:50.400
PyTorch this would be really bad because
01:16:46.880 --> 01:16:50.400
you'd be repeating the operation uh when
01:16:49.280 --> 01:16:52.679
it's not
01:16:50.400 --> 01:16:54.480
necessary um you can also reduce the
01:16:52.679 --> 01:16:57.360
number of operations that you need to
01:16:54.480 --> 01:17:00.320
use so uh use Matrix Matrix multiplies
01:16:57.360 --> 01:17:03.080
instead of Matrix Vector
01:17:00.320 --> 01:17:07.920
multiplies and another thing is uh
01:17:03.080 --> 01:17:10.719
reducing CPU GPU data movement and um so
01:17:07.920 --> 01:17:12.360
when you do try to move memory um
01:17:10.719 --> 01:17:17.080
try to do it
01:17:12.360 --> 01:17:20.040
as early as possible and as
01:17:17.080 --> 01:17:22.199
few times as possible and the reason why
01:17:20.040 --> 01:17:24.199
you want to move things early or start
01:17:22.199 --> 01:17:25.920
operations early is many GPU operations
01:17:24.199 --> 01:17:27.159
are asynchronous so you can start the
01:17:25.920 --> 01:17:28.800
operation and it will run in the
01:17:27.159 --> 01:17:33.120
background while other things are
01:17:28.800 --> 01:17:36.080
processing so um it's a good idea to try
01:17:33.120 --> 01:17:39.840
to um to optimize and you can also use
01:17:36.080 --> 01:17:42.360
your Python profiler or um NVIDIA GPU
01:17:39.840 --> 01:17:43.679
profilers to try to optimize these
01:17:42.360 --> 01:17:46.520
things as
01:17:43.679 --> 01:17:49.840
well cool that's all I have uh we're
01:17:46.520 --> 01:17:49.840
right at time
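The "don't repeat operations" trick mentioned above can be sketched with NumPy (the scaling constant and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
xs = rng.standard_normal((100, 256))

# Wasteful: rescales W on every loop iteration.
ys_slow = [x @ (0.5 * W) for x in xs]

# Better: hoist the constant multiply out of the loop, and
# replace the loop with one matrix-matrix multiply.
W_scaled = 0.5 * W
ys_fast = xs @ W_scaled

assert np.allclose(np.stack(ys_slow), ys_fast)  # identical results
```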