|
WEBVTT |
|
|
|
00:00:00.280 --> 00:00:08.320 |
|
can everyone hear all set okay great so
|
|
|
00:00:05.400 --> 00:00:09.840 |
|
um today I'll be talking about a tour of |
|
|
|
00:00:08.320 --> 00:00:13.960 |
|
modern uh |
|
|
|
00:00:09.840 --> 00:00:16.600 |
|
llms and basically the idea here is that |
|
|
|
00:00:13.960 --> 00:00:18.600 |
|
there are many many large language models
|
|
|
00:00:16.600 --> 00:00:20.480 |
|
available nowadays but I wanted to go |
|
|
|
00:00:18.600 --> 00:00:22.760 |
|
through some of the ones that are |
|
|
|
00:00:20.480 --> 00:00:25.880 |
|
particularly interesting for various |
|
|
|
00:00:22.760 --> 00:00:26.880 |
|
reasons either because they disclose a |
|
|
|
00:00:25.880 --> 00:00:29.519 |
|
lot of |
|
|
|
00:00:26.880 --> 00:00:31.119 |
|
information uh you know about exactly |
|
|
|
00:00:29.519 --> 00:00:34.120 |
|
how they were trained so we can get an
|
|
|
00:00:31.119 --> 00:00:35.559 |
|
idea about what is involved in training |
|
|
|
00:00:34.120 --> 00:00:39.120 |
|
uh a kind of state-of-the-art large
|
|
|
00:00:35.559 --> 00:00:40.640 |
|
language model or because they're kind |
|
|
|
00:00:39.120 --> 00:00:43.200 |
|
of the strongest models that you can |
|
|
|
00:00:40.640 --> 00:00:45.160 |
|
download and use on your own um like the |
|
|
|
00:00:43.200 --> 00:00:47.360 |
|
best open weights language models that |
|
|
|
00:00:45.160 --> 00:00:49.559 |
|
are available or because they're |
|
|
|
00:00:47.360 --> 00:00:51.879 |
|
specialized to some particular topic or |
|
|
|
00:00:49.559 --> 00:00:53.480 |
|
because they're the best closed uh |
|
|
|
00:00:51.879 --> 00:00:56.399 |
|
language models but I'm going to |
|
|
|
00:00:53.480 --> 00:00:58.640 |
|
particularly focus on the first two um |
|
|
|
00:00:56.399 --> 00:01:00.640 |
|
just so like everybody has an idea about |
|
|
|
00:00:58.640 --> 00:01:03.239 |
|
you know what is going into all the
|
|
|
00:01:00.640 --> 00:01:07.519 |
|
models that you're using for whatever uh |
|
|
|
00:01:03.239 --> 00:01:07.519 |
|
you know tasks that you're trying to |
|
|
|
00:01:09.119 --> 00:01:14.159 |
|
solve so one important thing is uh what |
|
|
|
00:01:12.240 --> 00:01:18.080 |
|
makes a model so we talk about you know |
|
|
|
00:01:14.159 --> 00:01:21.680 |
|
like llama 2 or mistral or mixtral or
|
|
|
00:01:18.080 --> 00:01:23.320 |
|
whatever else and I think you know this |
|
|
|
00:01:21.680 --> 00:01:24.479 |
|
already but it's worth reiterating again |
|
|
|
00:01:23.320 --> 00:01:27.320 |
|
here because I'm going to talk about it |
|
|
|
00:01:24.479 --> 00:01:29.320 |
|
a lot today but it's basically the model |
|
|
|
00:01:27.320 --> 00:01:31.280 |
|
architecture so what architecture do you |
|
|
|
00:01:29.320 --> 00:01:33.799 |
|
decide to use |
|
|
|
00:01:31.280 --> 00:01:35.840 |
|
um what data do you decide to use and |
|
|
|
00:01:33.799 --> 00:01:39.759 |
|
what training algorithm or training
|
|
|
00:01:35.840 --> 00:01:42.520 |
|
method do you decide to use and all of
|
|
|
00:01:39.759 --> 00:01:46.040 |
|
these are important um and there was |
|
|
|
00:01:42.520 --> 00:01:49.320 |
|
actually uh a Twitter thread with Tom |
|
|
|
00:01:46.040 --> 00:01:52.399 |
|
Wolf who's I guess CSO or CTO or |
|
|
|
00:01:49.320 --> 00:01:54.840 |
|
something like that at hugging face um |
|
|
|
00:01:52.399 --> 00:01:56.840 |
|
and basically what he was saying is uh a |
|
|
|
00:01:54.840 --> 00:01:59.240 |
|
lot of people don't realize that the |
|
|
|
00:01:56.840 --> 00:02:01.039 |
|
data is actually one of the most |
|
|
|
00:01:59.240 --> 00:02:04.320 |
|
important parts |
|
|
|
00:02:01.039 --> 00:02:07.680 |
|
um and the architectures are a lot less |
|
|
|
00:02:04.320 --> 00:02:10.920 |
|
important nowadays and I think that |
|
|
|
00:02:07.680 --> 00:02:14.280 |
|
there's some truth to that there's also |
|
|
|
00:02:10.920 --> 00:02:15.879 |
|
some you know a counterargument to that |
|
|
|
00:02:14.280 --> 00:02:17.920 |
|
uh the truth to that which you'll see |
|
|
|
00:02:15.879 --> 00:02:19.760 |
|
today is that almost all of the models |
|
|
|
00:02:17.920 --> 00:02:21.360 |
|
that we're using use very similar |
|
|
|
00:02:19.760 --> 00:02:23.120 |
|
architectures like almost all of the |
|
|
|
00:02:21.360 --> 00:02:26.879 |
|
models use an architecture that's very |
|
|
|
00:02:23.120 --> 00:02:28.760 |
|
similar to llama um but despite the fact
|
|
|
00:02:26.879 --> 00:02:31.280 |
|
that they use very similar architectures |
|
|
|
00:02:28.760 --> 00:02:33.599 |
|
their um accuracy is vastly different
|
|
|
00:02:31.280 --> 00:02:36.080 |
|
or their abilities are vastly
|
|
|
00:02:33.599 --> 00:02:38.519 |
|
different so that must come from the |
|
|
|
00:02:36.080 --> 00:02:40.040 |
|
data or the training decisions right so |
|
|
|
00:02:38.519 --> 00:02:41.640 |
|
that's an argument for the fact that |
|
|
|
00:02:40.040 --> 00:02:44.040 |
|
architecture decisions are a lot less |
|
|
|
00:02:41.640 --> 00:02:48.000 |
|
important my counterargument to that is |
|
|
|
00:02:44.040 --> 00:02:49.840 |
|
we spent 9 to 10 years fine-tuning and
|
|
|
00:02:48.000 --> 00:02:51.560 |
|
refining the Llama architecture so now we
|
|
|
00:02:49.840 --> 00:02:53.120 |
|
have the Llama architecture which is a |
|
|
|
00:02:51.560 --> 00:02:55.480 |
|
really good architecture it works really |
|
|
|
00:02:53.120 --> 00:02:57.640 |
|
well when training very large models on |
|
|
|
00:02:55.480 --> 00:02:59.239 |
|
lots of data and so now we don't need to |
|
|
|
00:02:57.640 --> 00:03:01.360 |
|
use another architecture because the |
|
|
|
00:02:59.239 --> 00:03:02.920 |
|
architecture we're using is good but if we
|
|
|
00:03:01.360 --> 00:03:06.200 |
|
were trying to do the same thing with |
|
|
|
00:03:02.920 --> 00:03:07.640 |
|
the like LSTM from 2014 uh then none of
|
|
|
00:03:06.200 --> 00:03:09.440 |
|
the stuff we're doing today would work |
|
|
|
00:03:07.640 --> 00:03:11.760 |
|
so that's an argument in favor of you |
|
|
|
00:03:09.440 --> 00:03:13.560 |
|
know architectures also being important
|
|
|
00:03:11.760 --> 00:03:16.920 |
|
architectures can make things faster and |
|
|
|
00:03:13.560 --> 00:03:16.920 |
|
that's included in those decisions
|
|
|
00:03:17.280 --> 00:03:21.280 |
|
|
|
|
00:03:19.040 --> 00:03:22.640 |
|
so um the first thing I'd like to talk |
|
|
|
00:03:21.280 --> 00:03:25.280 |
|
about before I get into any of the |
|
|
|
00:03:22.640 --> 00:03:28.000 |
|
actual details is um open versus closed |
|
|
|
00:03:25.280 --> 00:03:30.480 |
|
access uh this is not like modeling |
|
|
|
00:03:28.000 --> 00:03:31.760 |
|
stuff but I think it's important and |
|
|
|
00:03:30.480 --> 00:03:35.599 |
|
also helps you understand the |
|
|
|
00:03:31.760 --> 00:03:39.519 |
|
environment a little bit so um there's a |
|
|
|
00:03:35.599 --> 00:03:42.200 |
|
nice blog post uh
|
|
|
00:03:39.519 --> 00:03:45.560 |
|
which is also in the references where they
|
|
|
00:03:42.200 --> 00:03:47.720 |
|
discuss several different varieties of |
|
|
|
00:03:45.560 --> 00:03:50.599 |
|
like openness of release of language |
|
|
|
00:03:47.720 --> 00:03:52.560 |
|
models and advanced AI systems and there
|
|
|
00:03:50.599 --> 00:03:55.200 |
|
are some things that we can talk about |
|
|
|
00:03:52.560 --> 00:03:59.000 |
|
we can talk about the weights being open |
|
|
|
00:03:55.200 --> 00:04:01.439 |
|
um described or closed inference uh code |
|
|
|
00:03:59.000 --> 00:04:03.319 |
|
being open or inference methods being |
|
|
|
00:04:01.439 --> 00:04:04.959 |
|
described or it being fully closed |
|
|
|
00:04:03.319 --> 00:04:08.120 |
|
training being open described or closed |
|
|
|
00:04:04.959 --> 00:04:13.040 |
|
and data being open described or closed |
|
|
|
00:04:08.120 --> 00:04:14.760 |
|
and um in general uh we have like the |
|
|
|
00:04:13.040 --> 00:04:16.519 |
|
open weights models that are on hugging |
|
|
|
00:04:14.760 --> 00:04:19.040 |
|
face that might just mean the weights |
|
|
|
00:04:16.519 --> 00:04:20.600 |
|
are open the inference code also needs |
|
|
|
00:04:19.040 --> 00:04:21.919 |
|
to be open because otherwise you can't |
|
|
|
00:04:20.600 --> 00:04:24.160 |
|
do inference on them if they're on |
|
|
|
00:04:21.919 --> 00:04:25.800 |
|
hugging face but that doesn't mean that |
|
|
|
00:04:24.160 --> 00:04:28.120 |
|
the training code is open it also |
|
|
|
00:04:25.800 --> 00:04:32.479 |
|
doesn't mean that the data is open um |
|
|
|
00:04:28.120 --> 00:04:34.280 |
|
and so there's various degrees of |
|
|
|
00:04:32.479 --> 00:04:37.320 |
|
openness |
|
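NOTE
To make the "open weights plus open inference code" combination concrete, here is a
minimal sketch (an illustration, not from the talk) of downloading and running such a
model with the HuggingFace transformers library; "mistralai/Mistral-7B-v0.1" is one
example repo id, and nothing here requires the training code or data to be open.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
    model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
    inputs = tok("The capital of France is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=8)  # open inference code at work
    print(tok.decode(out[0]))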
|
|
00:04:34.280 --> 00:04:40.919 |
|
um and then of course there are things |
|
|
|
00:04:37.320 --> 00:04:42.520 |
|
like uh GPT-4 or GPT models where
|
|
|
00:04:40.919 --> 00:04:45.560 |
|
basically all of this is closed and we |
|
|
|
00:04:42.520 --> 00:04:48.880 |
|
don't know anything about it or know |
|
|
|
00:04:45.560 --> 00:04:50.560 |
|
very little about it another thing is |
|
|
|
00:04:48.880 --> 00:04:52.600 |
|
about licenses and |
|
|
|
00:04:50.560 --> 00:04:54.199 |
|
permissiveness and this is kind of |
|
|
|
00:04:52.600 --> 00:04:56.880 |
|
important if you want to do a research |
|
|
|
00:04:54.199 --> 00:05:01.240 |
|
project to know because |
|
|
|
00:04:56.880 --> 00:05:04.080 |
|
it has an impact on the things
|
|
|
00:05:01.240 --> 00:05:05.520 |
|
that you legally can do or can't do in |
|
|
|
00:05:04.080 --> 00:05:08.039 |
|
universities I mean we should be |
|
|
|
00:05:05.520 --> 00:05:09.479 |
|
following the law but maybe people
|
|
|
00:05:08.039 --> 00:05:10.720 |
|
think about this a little bit less if |
|
|
|
00:05:09.479 --> 00:05:12.240 |
|
you're in a big company this is |
|
|
|
00:05:10.720 --> 00:05:14.919 |
|
something that becomes really important |
|
|
|
00:05:12.240 --> 00:05:17.199 |
|
so it's uh it's important to think |
|
|
|
00:05:14.919 --> 00:05:20.039 |
|
about so I'm going to go through several |
|
|
|
00:05:17.199 --> 00:05:21.440 |
|
degrees of licenses uh that if you've |
|
|
|
00:05:20.039 --> 00:05:25.759 |
|
done anything in open source you |
|
|
|
00:05:21.440 --> 00:05:27.600 |
|
probably know um or you probably
|
|
|
00:05:25.759 --> 00:05:29.919 |
|
know a lot of these the first one is |
|
|
|
00:05:27.600 --> 00:05:31.479 |
|
public domain or CC0
|
|
|
00:05:29.919 --> 00:05:33.440 |
|
and this basically means you can do |
|
|
|
00:05:31.479 --> 00:05:37.240 |
|
anything with it like I could
|
|
|
00:05:33.440 --> 00:05:39.280 |
|
download it and redistribute it
|
|
|
00:05:37.240 --> 00:05:41.680 |
|
not give
|
|
|
00:05:39.280 --> 00:05:44.560 |
|
you any credit uh modify it in any way I |
|
|
|
00:05:41.680 --> 00:05:47.720 |
|
want and this includes things like old |
|
|
|
00:05:44.560 --> 00:05:49.600 |
|
copyrighted works and products of the US |
|
|
|
00:05:47.720 --> 00:05:51.400 |
|
government workers so if you work for |
|
|
|
00:05:49.600 --> 00:05:53.240 |
|
the US government in some capacities |
|
|
|
00:05:51.400 --> 00:05:58.560 |
|
anything you generate becomes public |
|
|
|
00:05:53.240 --> 00:06:01.000 |
|
domain um so old copyrighted Works um |
|
|
|
00:05:58.560 --> 00:06:04.560 |
|
how old do you think they need to be
|
|
|
00:06:01.000 --> 00:06:04.560 |
|
before they become uh |
|
|
|
00:06:04.720 --> 00:06:12.280 |
|
uncopyrighted |
|
|
|
00:06:07.000 --> 00:06:12.280 |
|
yeah uh I think that's pretty close |
|
|
|
00:06:14.319 --> 00:06:21.280 |
|
so it's uh 70 years I |
|
|
|
00:06:18.520 --> 00:06:23.680 |
|
guess oh sorry the life of the author |
|
|
|
00:06:21.280 --> 00:06:25.120 |
|
plus an additional 70 years so like |
|
|
|
00:06:23.680 --> 00:06:28.479 |
|
after the after the person has passed |
|
|
|
00:06:25.120 --> 00:06:30.720 |
|
away 70 years I guess it says um does |
|
|
|
00:06:28.479 --> 00:06:34.520 |
|
anyone know a work that just
|
|
|
00:06:30.720 --> 00:06:37.520 |
|
became non-copyrighted yeah uh Mickey |
|
|
|
00:06:34.520 --> 00:06:43.199 |
|
Mouse is still copyrighted |
|
|
|
00:06:37.520 --> 00:06:45.199 |
|
yeah Steamboat Willie uh did it okay so that
|
|
|
00:06:43.199 --> 00:06:48.400 |
|
that's some new news some other new news |
|
|
|
00:06:45.199 --> 00:06:50.759 |
|
is Winnie the Pooh um so Winnie the Pooh just
|
|
|
00:06:48.400 --> 00:06:54.199 |
|
became non-copyrighted and actually I |
|
|
|
00:06:50.759 --> 00:06:55.840 |
|
just heard uh last week that somebody |
|
|
|
00:06:54.199 --> 00:06:59.680 |
|
made a horror movie where Winnie the |
|
|
|
00:06:55.840 --> 00:07:01.479 |
|
Pooh was a killer and that won uh a
|
|
|
00:06:59.680 --> 00:07:04.960 |
|
whole bunch of like bad movie awards in |
|
|
|
00:07:01.479 --> 00:07:06.639 |
|
2023 so um that's the kind of things |
|
|
|
00:07:04.960 --> 00:07:09.080 |
|
that can happen to your copyrighted |
|
|
|
00:07:06.639 --> 00:07:11.479 |
|
works if they are released CC0 somebody
|
|
|
00:07:09.080 --> 00:07:12.960 |
|
can do anything they want with them uh |
|
|
|
00:07:11.479 --> 00:07:14.400 |
|
you know so you need to be a little bit |
|
|
|
00:07:12.960 --> 00:07:18.080 |
|
careful about that
|
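NOTE
A quick arithmetic check on the terms just discussed (my numbers, not stated in the
talk): for US works published between 1923 and 1977 the term is 95 years from
publication, which is what governs Steamboat Willie and Winnie-the-Pooh; "life of the
author plus 70 years" applies to newer works.
    steamboat_willie = 1928 + 95 + 1  # = 2024, the year it entered the public domain
    winnie_the_pooh = 1926 + 95 + 1   # = 2022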
|
|
00:07:14.400 --> 00:07:20.000 |
|
um next are MIT and BSD these are
|
|
|
00:07:18.080 --> 00:07:22.400 |
|
very common software licenses you'll see |
|
|
|
00:07:20.000 --> 00:07:25.720 |
|
them on a lot of research projects these |
|
|
|
00:07:22.400 --> 00:07:27.400 |
|
have very few restrictions um other than |
|
|
|
00:07:25.720 --> 00:07:29.319 |
|
maybe maintaining the copyright notice |
|
|
|
00:07:27.400 --> 00:07:31.840 |
|
for BSD but that's about it you can do
|
|
|
00:07:29.319 --> 00:07:33.840 |
|
just about anything you want with it um |
|
|
|
00:07:31.840 --> 00:07:35.599 |
|
actually I'm not sure if people know |
|
|
|
00:07:33.840 --> 00:07:39.599 |
|
this but the Mac operating system is |
|
|
|
00:07:35.599 --> 00:07:42.199 |
|
based on an old BSD uh operating
|
|
|
00:07:39.599 --> 00:07:44.280 |
|
system where they took the code
|
|
|
00:07:42.199 --> 00:07:46.080 |
|
forked it and made it private
|
|
|
00:07:44.280 --> 00:07:49.560 |
|
and now it's
|
|
|
00:07:46.080 --> 00:07:51.919 |
|
the proprietary Mac operating system so |
|
|
|
00:07:49.560 --> 00:07:53.720 |
|
uh that's something you can do with an
|
|
|
00:07:51.919 --> 00:07:57.840 |
|
MIT or BSD
|
|
|
00:07:53.720 --> 00:08:00.000 |
|
license um there's also Apache and CC
|
|
|
00:07:57.840 --> 00:08:02.560 |
|
BY um
|
|
|
00:08:00.000 --> 00:08:05.039 |
|
here you must acknowledge the owner of |
|
|
|
00:08:02.560 --> 00:08:07.840 |
|
the uh original creators so you need
|
|
|
00:08:05.039 --> 00:08:08.960 |
|
to say this person actually created uh |
|
|
|
00:08:07.840 --> 00:08:11.520 |
|
this |
|
|
|
00:08:08.960 --> 00:08:14.680 |
|
originally |
|
|
|
00:08:11.520 --> 00:08:17.319 |
|
um Apache is also kind of interesting
|
|
|
00:08:14.680 --> 00:08:21.759 |
|
because they will give you a license to |
|
|
|
00:08:17.319 --> 00:08:25.960 |
|
use that code and any patents that are |
|
|
|
00:08:21.759 --> 00:08:29.599 |
|
associated with that code unless you sue |
|
|
|
00:08:25.960 --> 00:08:32.159 |
|
the company who released it so um just |
|
|
|
00:08:29.599 --> 00:08:34.039 |
|
Give an example let's say uh Google |
|
|
|
00:08:32.159 --> 00:08:36.279 |
|
released their code under the Apache |
|
|
|
00:08:34.039 --> 00:08:38.919 |
|
license and that code implements |
|
|
|
00:08:36.279 --> 00:08:42.680 |
|
Transformers and Google has a patent on |
|
|
|
00:08:38.919 --> 00:08:45.760 |
|
Transformers so if you use uh kind of |
|
|
|
00:08:42.680 --> 00:08:48.200 |
|
a JAX or TensorFlow
|
|
|
00:08:45.760 --> 00:08:50.120 |
|
implementation of Transformers uh that |
|
|
|
00:08:48.200 --> 00:08:51.720 |
|
was created by Google you're okay you're |
|
|
|
00:08:50.120 --> 00:08:54.640 |
|
safe to use that because they've |
|
|
|
00:08:51.720 --> 00:08:57.360 |
|
released it under uh under that license |
|
|
|
00:08:54.640 --> 00:08:59.560 |
|
but if you sue Google uh for anything |
|
|
|
00:08:57.360 --> 00:09:01.760 |
|
related to intellectual property Google |
|
|
|
00:08:59.560 --> 00:09:04.480 |
|
could say uh don't you can't use |
|
|
|
00:09:01.760 --> 00:09:06.040 |
|
Transformers anymore um and so like if |
|
|
|
00:09:04.480 --> 00:09:08.279 |
|
open AI ever sues Google for |
|
|
|
00:09:06.040 --> 00:09:09.680 |
|
intellectual property infringement |
|
|
|
00:09:08.279 --> 00:09:12.120 |
|
Google will say okay you can't use |
|
|
|
00:09:09.680 --> 00:09:15.959 |
|
Transformers or word embeddings good |
|
|
|
00:09:12.120 --> 00:09:17.640 |
|
luck uh OpenAI so um there's this
|
|
|
00:09:15.959 --> 00:09:20.760 |
|
interesting thing where all of these uh |
|
|
|
00:09:17.640 --> 00:09:22.760 |
|
tech companies now are using patented um |
|
|
|
00:09:20.760 --> 00:09:24.440 |
|
patented things a lot of it Apache
|
|
|
00:09:22.760 --> 00:09:26.040 |
|
license software and so none of them can |
|
|
|
00:09:24.440 --> 00:09:28.959 |
|
sue each other for patents so patents |
|
|
|
00:09:26.040 --> 00:09:30.560 |
|
have become basically mostly worthless |
|
|
|
00:09:28.959 --> 00:09:35.320 |
|
uh in big |
|
|
|
00:09:30.560 --> 00:09:36.360 |
|
tech um moving on um there's also GPL
|
|
|
00:09:35.320 --> 00:09:39.360 |
|
and
|
|
|
00:09:36.360 --> 00:09:42.800 |
|
CC BY-SA these are licenses where if you
|
|
|
00:09:39.360 --> 00:09:45.680 |
|
use them you need to reshare under that |
|
|
|
00:09:42.800 --> 00:09:47.839 |
|
license um and so like if you create |
|
|
|
00:09:45.680 --> 00:09:49.440 |
|
some software it's GPL licensed and you |
|
|
|
00:09:47.839 --> 00:09:52.160 |
|
build on it and build something new you |
|
|
|
00:09:49.440 --> 00:09:54.839 |
|
need to release it under the GPL license |
|
|
|
00:09:52.160 --> 00:09:58.160 |
|
so a lot of companies will not |
|
|
|
00:09:54.839 --> 00:09:59.640 |
|
use um will not use GPL software because |
|
|
|
00:09:58.160 --> 00:10:01.920 |
|
that would mean that if they incorporate |
|
|
|
00:09:59.640 --> 00:10:04.959 |
|
into their system their whole system |
|
|
|
00:10:01.920 --> 00:10:06.720 |
|
like for example Google uh like all of |
|
|
|
00:10:04.959 --> 00:10:10.240 |
|
Google would have to be GPL licensed and
|
|
|
00:10:06.720 --> 00:10:11.720 |
|
released uh so um and I'm kind of
|
|
|
00:10:10.240 --> 00:10:14.800 |
|
simplifying these licenses I'm just |
|
|
|
00:10:11.720 --> 00:10:17.519 |
|
giving you the gist CC BY-SA sorry CC
|
|
|
00:10:14.800 --> 00:10:20.640 |
|
licenses are more for data so MIT BSD
|
|
|
00:10:17.519 --> 00:10:22.640 |
|
Apache and GPL are more for software CC
|
|
|
00:10:20.640 --> 00:10:27.640 |
|
Creative Commons licenses are for data |
|
|
|
00:10:22.640 --> 00:10:29.640 |
|
so um for example Wikipedia is CC BY-SA
|
|
|
00:10:27.640 --> 00:10:33.560 |
|
I believe |
|
|
|
00:10:29.640 --> 00:10:33.560 |
|
let me make sure that I'm not lying |
|
|
|
00:10:41.839 --> 00:10:48.240 |
|
there yeah CC BY-SA and so that means that
|
|
|
00:10:46.040 --> 00:10:52.200 |
|
if you make any derivative work of |
|
|
|
00:10:48.240 --> 00:10:54.160 |
|
Wikipedia you need to share it um the |
|
|
|
00:10:52.200 --> 00:10:57.040 |
|
same way that Wikipedia is uh so you |
|
|
|
00:10:54.160 --> 00:10:59.760 |
|
need to give it the same |
|
|
|
00:10:57.040 --> 00:11:01.560 |
|
license there's also um Creative Commons
|
|
|
00:10:59.760 --> 00:11:03.240 |
|
non-commercial licenses or software |
|
|
|
00:11:01.560 --> 00:11:05.519 |
|
non-commercial licenses that say you
|
|
|
00:11:03.240 --> 00:11:07.079 |
|
can't use them for commercial purposes |
|
|
|
00:11:05.519 --> 00:11:09.279 |
|
all the ones above you can use for |
|
|
|
00:11:07.079 --> 00:11:11.519 |
|
commercial purposes once you start |
|
|
|
00:11:09.279 --> 00:11:13.440 |
|
getting down here this is often no
|
|
|
00:11:11.519 --> 00:11:15.279 |
|
longer called open source so the open |
|
|
|
00:11:13.440 --> 00:11:16.959 |
|
source initiative says anything with a |
|
|
|
00:11:15.279 --> 00:11:19.839 |
|
restriction on the way that you can use |
|
|
|
00:11:16.959 --> 00:11:22.639 |
|
it is no longer open source and so that |
|
|
|
00:11:19.839 --> 00:11:25.360 |
|
means if you say you can't use this for |
|
|
|
00:11:22.639 --> 00:11:27.720 |
|
commercial purposes or you can't use |
|
|
|
00:11:25.360 --> 00:11:29.639 |
|
this in military systems for example |
|
|
|
00:11:27.720 --> 00:11:32.320 |
|
which some language models say that |
|
|
|
00:11:29.639 --> 00:11:33.680 |
|
nowadays those are no longer called open |
|
|
|
00:11:32.320 --> 00:11:37.040 |
|
source according to the open source |
|
|
|
00:11:33.680 --> 00:11:40.320 |
|
initiative so that's a thing to know |
|
|
|
00:11:37.040 --> 00:11:42.920 |
|
about then separately uh there are these |
|
|
|
00:11:40.320 --> 00:11:45.279 |
|
licenses that a lot of people like meta |
|
|
|
00:11:42.920 --> 00:11:48.160 |
|
or hugging face come up with for their |
|
|
|
00:11:45.279 --> 00:11:50.360 |
|
um for their models recently so the |
|
|
|
00:11:48.160 --> 00:11:51.320 |
|
Llama license um how many people are |
|
|
|
00:11:50.360 --> 00:11:54.200 |
|
using |
|
|
|
00:11:51.320 --> 00:11:56.519 |
|
llama in your projects how many people |
|
|
|
00:11:54.200 --> 00:11:56.519 |
|
read the |
|
|
|
00:11:57.000 --> 00:12:00.880 |
|
license so um are you sure you can use |
|
|
|
00:11:59.639 --> 00:12:04.959 |
|
it in your |
|
|
|
00:12:00.880 --> 00:12:06.839 |
|
project uh so you're probably in
|
|
|
00:12:04.959 --> 00:12:09.000 |
|
luck in your project if you're using it |
|
|
|
00:12:06.839 --> 00:12:11.560 |
|
the llama license you can read into it to
|
|
|
00:12:09.000 --> 00:12:13.519 |
|
see what it actually allows but it has |
|
|
|
00:12:11.560 --> 00:12:16.399 |
|
um the original llama license has some |
|
|
|
00:12:13.519 --> 00:12:18.440 |
|
interesting uh things number one you |
|
|
|
00:12:16.399 --> 00:12:21.079 |
|
cannot use llama to train any language |
|
|
|
00:12:18.440 --> 00:12:23.000 |
|
model that is not derived from llama so |
|
|
|
00:12:21.079 --> 00:12:26.120 |
|
you can't generate data from llama and
|
|
|
00:12:23.000 --> 00:12:30.040 |
|
train another model that's not allowed according to
|
|
|
00:12:26.120 --> 00:12:32.440 |
|
the llama license um another thing is uh you
|
|
|
00:12:30.040 --> 00:12:34.680 |
|
can't use it for military purposes so |
|
|
|
00:12:32.440 --> 00:12:36.160 |
|
you can't use it um in building a |
|
|
|
00:12:34.680 --> 00:12:37.639 |
|
missile system or something like that |
|
|
|
00:12:36.160 --> 00:12:41.440 |
|
hopefully none of you are doing that for |
|
|
|
00:12:37.639 --> 00:12:42.920 |
|
your project um and you also need to get |
|
|
|
00:12:41.440 --> 00:12:45.399 |
|
a license from meta if you have |
|
|
|
00:12:42.920 --> 00:12:48.000 |
|
something more than 700 million monthly active
|
|
|
00:12:45.399 --> 00:12:53.800 |
|
users uh on your social network service
|
|
|
00:12:48.000 --> 00:12:56.079 |
|
so if you're Google or um you know X or |
|
|
|
00:12:53.800 --> 00:12:57.680 |
|
Twitter or you know whatever else you |
|
|
|
00:12:56.079 --> 00:13:00.519 |
|
need to get a license from meta before
|
|
|
00:12:57.680 --> 00:13:02.079 |
|
you can start using it so
|
|
|
00:13:00.519 --> 00:13:03.240 |
|
basically they created that license so |
|
|
|
00:13:02.079 --> 00:13:06.720 |
|
their competitors don't take their |
|
|
|
00:13:03.240 --> 00:13:08.959 |
|
language model and just use it for free |
|
|
|
00:13:06.720 --> 00:13:11.000 |
|
um and then the final thing is no |
|
|
|
00:13:08.959 --> 00:13:13.240 |
|
license so like let's say you have some |
|
|
|
00:13:11.000 --> 00:13:15.560 |
|
code that you upload to GitHub and you |
|
|
|
00:13:13.240 --> 00:13:17.839 |
|
don't put a license on your code this |
|
|
|
00:13:15.560 --> 00:13:20.880 |
|
means that you have only agreed to the |
|
|
|
00:13:17.839 --> 00:13:23.360 |
|
GitHub licensing terms which means that |
|
|
|
00:13:20.880 --> 00:13:26.199 |
|
actually nobody can use your code they
|
|
|
00:13:23.360 --> 00:13:30.079 |
|
can view it possibly but they can't
|
|
|
00:13:26.199 --> 00:13:31.720 |
|
download it or use it they can't like um
|
|
|
00:13:30.079 --> 00:13:34.160 |
|
they can't incorporate it into their own |
|
|
|
00:13:31.720 --> 00:13:36.000 |
|
system so actually if you release |
|
|
|
00:13:34.160 --> 00:13:39.120 |
|
research code I would highly encourage |
|
|
|
00:13:36.000 --> 00:13:41.120 |
|
you to use MIT or BSD um or one of these |
|
|
|
00:13:39.120 --> 00:13:43.040 |
|
permissive licenses so other people can |
|
|
|
00:13:41.120 --> 00:13:45.720 |
|
use it and follow up and your code can |
|
|
|
00:13:43.040 --> 00:13:46.920 |
|
be impactful so um this is an important
|
|
|
00:13:45.720 --> 00:13:49.040 |
|
thing to know about there's obviously |
|
|
|
00:13:46.920 --> 00:13:52.959 |
|
lots more to know |
|
|
|
00:13:49.040 --> 00:13:56.440 |
|
about um so then my next
|
|
|
00:13:52.959 --> 00:13:57.360 |
|
question is uh what is most of the text |
|
|
|
00:13:56.440 --> 00:13:59.560 |
|
on the |
|
|
|
00:13:57.360 --> 00:14:01.160 |
|
internet the majority of the text on the |
|
|
|
00:13:59.560 --> 00:14:04.839 |
|
internet falls into one of these |
|
|
|
00:14:01.160 --> 00:14:04.839 |
|
categories any idea which |
|
|
|
00:14:05.120 --> 00:14:12.759 |
|
one so Wikipedia is CC BY-SA what
|
|
|
00:14:09.040 --> 00:14:12.759 |
|
about uh most of the text
|
|
|
00:14:14.199 --> 00:14:18.959 |
|
on yeah it's maybe not no license
|
|
|
00:14:16.880 --> 00:14:21.680 |
|
but all rights reserved so basically you |
|
|
|
00:14:18.959 --> 00:14:23.079 |
|
can't use it without having permission |
|
|
|
00:14:21.680 --> 00:14:27.639 |
|
from the copyright |
|
|
|
00:14:23.079 --> 00:14:30.639 |
|
holders and so because of that |
|
|
|
00:14:27.639 --> 00:14:33.800 |
|
um the idea of fair use becomes very |
|
|
|
00:14:30.639 --> 00:14:35.320 |
|
important this is a US-specific thing
|
|
|
00:14:33.800 --> 00:14:36.880 |
|
and the rules in other countries are |
|
|
|
00:14:35.320 --> 00:14:39.199 |
|
different they're not the same as the us |
|
|
|
00:14:36.880 --> 00:14:41.680 |
|
but in the US uh we have rules about |
|
|
|
00:14:39.199 --> 00:14:44.600 |
|
where you can use particular types of |
|
|
|
00:14:41.680 --> 00:14:46.279 |
|
data so the US fair use Doctrine is |
|
|
|
00:14:44.600 --> 00:14:50.240 |
|
basically that you can use copyrighted |
|
|
|
00:14:46.279 --> 00:14:52.920 |
|
material in some cases so |
|
|
|
00:14:50.240 --> 00:14:56.279 |
|
um as a gross |
|
|
|
00:14:52.920 --> 00:15:01.800 |
|
simplification um quoting a small amount |
|
|
|
00:14:56.279 --> 00:15:04.320 |
|
of material in like a textbook or slides |
|
|
|
00:15:01.800 --> 00:15:07.079 |
|
or something like this this is likely |
|
|
|
00:15:04.320 --> 00:15:10.040 |
|
okay um there are going to be very few |
|
|
|
00:15:07.079 --> 00:15:11.399 |
|
cases where this is not going to um you |
|
|
|
00:15:10.040 --> 00:15:12.720 |
|
know where you're going to get in |
|
|
|
00:15:11.399 --> 00:15:15.600 |
|
trouble for |
|
|
|
00:15:12.720 --> 00:15:18.000 |
|
this another important uh judgment |
|
|
|
00:15:15.600 --> 00:15:19.600 |
|
criterion for whether this is fair use is
|
|
|
00:15:18.000 --> 00:15:22.440 |
|
that it doesn't diminish the value of |
|
|
|
00:15:19.600 --> 00:15:25.120 |
|
the original work so if I quote |
|
|
|
00:15:22.440 --> 00:15:27.759 |
|
something in my like let's say I quoted |
|
|
|
00:15:25.120 --> 00:15:30.839 |
|
all of Harry Potter in a textbook and |
|
|
|
00:15:27.759 --> 00:15:32.600 |
|
then I sold my textbook for $3 anybody |
|
|
|
00:15:30.839 --> 00:15:34.279 |
|
could take my textbook and read all of |
|
|
|
00:15:32.600 --> 00:15:35.800 |
|
Harry Potter for $3 and the money |
|
|
|
00:15:34.279 --> 00:15:37.480 |
|
wouldn't go to JK Rowling and that would
|
|
|
00:15:35.800 --> 00:15:41.040 |
|
not be fair use because it's diminishing |
|
|
|
00:15:37.480 --> 00:15:42.920 |
|
the value of the original similarly if I create a big
|
|
|
00:15:41.040 --> 00:15:44.319 |
|
Corpus of books and I upload them to a |
|
|
|
00:15:42.920 --> 00:15:46.079 |
|
site where anyone can browse them that |
|
|
|
00:15:44.319 --> 00:15:48.319 |
|
would also probably not be fair use
|
|
|
00:15:46.079 --> 00:15:49.160 |
|
because the authors would not get paid |
|
|
|
00:15:48.319 --> 00:15:52.319 |
|
for |
|
|
|
00:15:49.160 --> 00:15:54.480 |
|
it another judgment Criterion is whether |
|
|
|
00:15:52.319 --> 00:15:57.399 |
|
it's for non commercial purposes or not |
|
|
|
00:15:54.480 --> 00:15:59.639 |
|
so like in universities we're actually |
|
|
|
00:15:57.399 --> 00:16:01.120 |
|
probably held to a more
|
|
|
00:15:59.639 --> 00:16:03.000 |
|
lenient standard of fair use if we're
|
|
|
00:16:01.120 --> 00:16:06.120 |
|
doing non-commercial research compared |
|
|
|
00:16:03.000 --> 00:16:08.519 |
|
to a company that's doing it |
|
|
|
00:16:06.120 --> 00:16:11.480 |
|
so um most data on the Internet is |
|
|
|
00:16:08.519 --> 00:16:13.279 |
|
copyrighted so right now most model |
|
|
|
00:16:11.480 --> 00:16:16.240 |
|
training not all model training but most |
|
|
|
00:16:13.279 --> 00:16:18.680 |
|
model training is done um assuming fair |
|
|
|
00:16:16.240 --> 00:16:21.800 |
|
use which means that training an AI |
|
|
|
00:16:18.680 --> 00:16:25.800 |
|
model on copyrighted |
|
|
|
00:16:21.800 --> 00:16:29.480 |
|
data is okay because number one it cannot reproduce
|
|
|
00:16:25.800 --> 00:16:32.240 |
|
the material easily so instead of
|
|
|
00:16:29.480 --> 00:16:33.600 |
|
quoting material directly it's kind of |
|
|
|
00:16:32.240 --> 00:16:35.880 |
|
combining the material together to |
|
|
|
00:16:33.600 --> 00:16:37.519 |
|
create a new thing they're saying it |
|
|
|
00:16:35.880 --> 00:16:40.639 |
|
doesn't diminish the commercial value of |
|
|
|
00:16:37.519 --> 00:16:42.360 |
|
the original uh data um and then the |
|
|
|
00:16:40.639 --> 00:16:44.839 |
|
non-commercial purposes is maybe a |
|
|
|
00:16:42.360 --> 00:16:47.240 |
|
secondary concern since the first two |
|
|
|
00:16:44.839 --> 00:16:50.600 |
|
hold um but there are lawsuits about |
|
|
|
00:16:47.240 --> 00:16:52.360 |
|
this and so um this is a clip from The |
|
|
|
00:16:50.600 --> 00:16:55.560 |
|
New York Times where the New York Times |
|
|
|
00:16:52.360 --> 00:16:58.279 |
|
is suing open AI and Microsoft over uh
|
|
|
00:16:55.560 --> 00:16:59.759 |
|
them training on New York Times articles |
|
|
|
00:16:58.279 --> 00:17:02.040 |
|
and they did do a lot of things like |
|
|
|
00:16:59.759 --> 00:17:05.799 |
|
they demonstrate that you can get uh GPT-4
|
|
|
00:17:02.040 --> 00:17:08.319 |
|
to reproduce uh like um New York Times |
|
|
|
00:17:05.799 --> 00:17:11.480 |
|
articles and they also argue that people |
|
|
|
00:17:08.319 --> 00:17:12.880 |
|
are using this GPT-4 as a source of news
|
|
|
00:17:11.480 --> 00:17:14.079 |
|
instead of going to the New York Times |
|
|
|
00:17:12.880 --> 00:17:15.959 |
|
site so they're losing money from |
|
|
|
00:17:14.079 --> 00:17:19.199 |
|
advertising and like other other things |
|
|
|
00:17:15.959 --> 00:17:21.679 |
|
like that um another example is GitHub |
|
|
|
00:17:19.199 --> 00:17:24.000 |
|
co-pilot was sued by people who uh |
|
|
|
00:17:21.679 --> 00:17:26.439 |
|
uploaded software to GitHub and said |
|
|
|
00:17:24.000 --> 00:17:29.039 |
|
that uh basically GitHub didn't have the |
|
|
|
00:17:26.439 --> 00:17:32.400 |
|
right to use it to profit from it and |
|
|
|
00:17:29.039 --> 00:17:34.799 |
|
diminish their uh you know their money |
|
|
|
00:17:32.400 --> 00:17:37.520 |
|
so notably uh on this slide I'm using |
|
|
|
00:17:34.799 --> 00:17:42.039 |
|
fair use I don't know if you've noticed |
|
|
|
00:17:37.520 --> 00:17:44.679 |
|
like I copy pasted an image from
|
|
|
00:17:42.039 --> 00:17:46.360 |
|
somebody's uh you know website and used |
|
|
|
00:17:44.679 --> 00:17:48.520 |
|
it here that's copyrighted material but |
|
|
|
00:17:46.360 --> 00:17:49.640 |
|
I'm using it because I'm quoting a small |
|
|
|
00:17:48.520 --> 00:17:52.440 |
|
amount of material and I'm not |
|
|
|
00:17:49.640 --> 00:17:54.360 |
|
diminishing the original value so um like
|
|
|
00:17:52.440 --> 00:17:56.320 |
|
fair use is very ubiquitous it's very |
|
|
|
00:17:54.360 --> 00:17:58.480 |
|
important so we can do things like this |
|
|
|
00:17:56.320 --> 00:18:00.840 |
|
but also um it's currently under dispute
|
|
|
00:17:58.480 --> 00:18:00.840 |
|
with these
|
|
|
00:18:01.280 --> 00:18:07.799 |
|
models so then another question is why |
|
|
|
00:18:04.360 --> 00:18:12.520 |
|
restrict model access why do we number |
|
|
|
00:18:07.799 --> 00:18:14.320 |
|
one make models closed number two um you |
|
|
|
00:18:12.520 --> 00:18:16.159 |
|
know maybe not even describe what we did |
|
|
|
00:18:14.320 --> 00:18:18.880 |
|
in our models and I think there's three |
|
|
|
00:18:16.159 --> 00:18:21.360 |
|
main reasons the first reason is |
|
|
|
00:18:18.880 --> 00:18:23.480 |
|
commercial concerns and so they want to |
|
|
|
00:18:21.360 --> 00:18:25.760 |
|
make money from the models so open AI |
|
|
|
00:18:23.480 --> 00:18:27.520 |
|
makes money from the open AI API Gemini |
|
|
|
00:18:25.760 --> 00:18:29.480 |
|
makes uh sorry Google makes money from |
|
|
|
00:18:27.520 --> 00:18:31.799 |
|
the Gemini API |
|
|
|
00:18:29.480 --> 00:18:33.720 |
|
um and anthropic makes money from the |
|
|
|
00:18:31.799 --> 00:18:34.760 |
|
Claude API these are all models that I'm
|
|
|
00:18:33.720 --> 00:18:37.640 |
|
going to talk |
|
|
|
00:18:34.760 --> 00:18:39.440 |
|
about number two safety I think there
|
|
|
00:18:37.640 --> 00:18:41.640 |
|
are very legitimate concerns where if |
|
|
|
00:18:39.440 --> 00:18:43.840 |
|
you release strong models people might |
|
|
|
00:18:41.640 --> 00:18:47.200 |
|
use them for bad things so you know |
|
|
|
00:18:43.840 --> 00:18:49.120 |
|
creating fake content online or uh doing |
|
|
|
00:18:47.200 --> 00:18:50.720 |
|
spear phishing attacks against people and
|
|
|
00:18:49.120 --> 00:18:52.600 |
|
trying to you know scam them out of |
|
|
|
00:18:50.720 --> 00:18:55.600 |
|
money or things like that so I think |
|
|
|
00:18:52.600 --> 00:18:57.240 |
|
there are legitimate concerns about this |
|
|
|
00:18:55.600 --> 00:18:58.880 |
|
and then the final one is legal |
|
|
|
00:18:57.240 --> 00:19:01.520 |
|
liability so training models on |
|
|
|
00:18:58.880 --> 00:19:03.640 |
|
copyrighted data is a legal gray area as |
|
|
|
00:19:01.520 --> 00:19:05.159 |
|
I just mentioned so they don't want to |
|
|
|
00:19:03.640 --> 00:19:07.159 |
|
say what data they trained on because if |
|
|
|
00:19:05.159 --> 00:19:10.240 |
|
they say what data they trained on then |
|
|
|
00:19:07.159 --> 00:19:11.960 |
|
they might get sued so these are the |
|
|
|
00:19:10.240 --> 00:19:14.960 |
|
three main |
|
|
|
00:19:11.960 --> 00:19:17.960 |
|
concerns so |
|
|
|
00:19:14.960 --> 00:19:19.480 |
|
um anyway this this is a preface and |
|
|
|
00:19:17.960 --> 00:19:23.360 |
|
then I want to go into like the actual |
|
|
|
00:19:19.480 --> 00:19:23.360 |
|
models but are there any questions about |
|
|
|
00:19:24.679 --> 00:19:30.280 |
|
this so if any of you |
|
|
|
00:19:27.280 --> 00:19:31.720 |
|
are working at a company or starting a |
|
|
|
00:19:30.280 --> 00:19:33.120 |
|
company thinking about working at a |
|
|
|
00:19:31.720 --> 00:19:35.440 |
|
company or starting a company this is |
|
|
|
00:19:33.120 --> 00:19:37.320 |
|
something you should be aware of um you |
|
|
|
00:19:35.440 --> 00:19:39.720 |
|
should also be aware of the fact that |
|
|
|
00:19:37.320 --> 00:19:42.360 |
|
you know open AI has been doing sketchy |
|
|
|
00:19:39.720 --> 00:19:46.640 |
|
things for a long time and look where |
|
|
|
00:19:42.360 --> 00:19:48.440 |
|
they are so you know it's uh like
|
|
|
00:19:46.640 --> 00:19:51.400 |
|
this is very much a legal gray area and |
|
|
|
00:19:48.440 --> 00:19:53.880 |
|
people are are uh moving through that |
|
|
|
00:19:51.400 --> 00:19:55.640 |
|
gray area but anyway it's worth knowing |
|
|
|
00:19:53.880 --> 00:19:59.480 |
|
that so next I'm going to talk about |
|
|
|
00:19:55.640 --> 00:20:00.679 |
|
open models um so first bird's eye view |
|
|
|
00:19:59.480 --> 00:20:02.600 |
|
I'm going to talk about five different |
|
|
|
00:20:00.679 --> 00:20:04.080 |
|
models and I picked them for a reason |
|
|
|
00:20:02.600 --> 00:20:06.440 |
|
the first two are because they're open |
|
|
|
00:20:04.080 --> 00:20:08.159 |
|
source and fully reproducible namely |
|
|
|
00:20:06.440 --> 00:20:10.360 |
|
pythia
|
|
|
00:20:08.159 --> 00:20:11.919 |
|
and OLMo and the reason why I want to talk
|
|
|
00:20:10.360 --> 00:20:13.120 |
|
about these is we know everything about |
|
|
|
00:20:11.919 --> 00:20:14.679 |
|
them including what data they were |
|
|
|
00:20:13.120 --> 00:20:16.799 |
|
trained on um what their training |
|
|
|
00:20:14.679 --> 00:20:19.080 |
|
procedures are you can download all the |
|
|
|
00:20:16.799 --> 00:20:21.000 |
|
the stuff so you can kind of know uh |
|
|
|
00:20:19.080 --> 00:20:24.840 |
|
exactly what goes into making a strong |
|
|
|
00:20:21.000 --> 00:20:26.520 |
|
model um pythia uh actually has many
|
|
|
00:20:24.840 --> 00:20:28.159 |
|
sizes and checkpoints which is pretty
|
|
|
00:20:26.520 --> 00:20:30.919 |
|
interesting OLMo is maybe the strongest
|
|
|
00:20:28.159 --> 00:20:32.559 |
|
reproduced model at the moment um then |
|
|
|
00:20:30.919 --> 00:20:34.120 |
|
we have open weights models and these |
|
|
|
00:20:32.559 --> 00:20:35.520 |
|
are models that aren't fully open they |
|
|
|
00:20:34.120 --> 00:20:38.679 |
|
don't disclose everything they don't |
|
|
|
00:20:35.520 --> 00:20:40.760 |
|
release their training data uh or |
|
|
|
00:20:38.679 --> 00:20:43.799 |
|
code um but I'm going to talk about |
|
|
|
00:20:40.760 --> 00:20:46.520 |
|
llama 2 which is the most popular um |
|
|
|
00:20:43.799 --> 00:20:48.280 |
|
it's also heavily safety tuned mistral
|
|
|
00:20:46.520 --> 00:20:50.840 |
|
and mixtral which is a strong and fast
|
|
|
00:20:48.280 --> 00:20:53.200 |
|
model um it's somewhat multilingual and |
|
|
|
00:20:50.840 --> 00:20:55.200 |
|
also qwen which is a very uh strong
|
|
|
00:20:53.200 --> 00:20:57.520 |
|
model it's more multilingual and |
|
|
|
00:20:55.200 --> 00:21:00.600 |
|
specifically it's good in English and |
|
|
|
00:20:57.520 --> 00:21:03.440 |
|
Chinese because it was trained on data like
|
|
|
00:21:00.600 --> 00:21:04.720 |
|
that so first going into pythia for each of
|
|
|
00:21:03.440 --> 00:21:06.159 |
|
them I'm going to give an overview and |
|
|
|
00:21:04.720 --> 00:21:08.880 |
|
then talk about some interesting points |
|
|
|
00:21:06.159 --> 00:21:12.320 |
|
about them so pythia was created by |
|
|
|
00:21:08.880 --> 00:21:14.799 |
|
EleutherAI EleutherAI is one of the first
|
|
|
00:21:12.320 --> 00:21:16.279 |
|
um kind of open-source AI organizations
|
|
|
00:21:14.799 --> 00:21:18.720 |
|
they've created a huge number of really |
|
|
|
00:21:16.279 --> 00:21:21.480 |
|
useful things including training code |
|
|
|
00:21:18.720 --> 00:21:25.279 |
|
models training data sets and also |
|
|
|
00:21:21.480 --> 00:21:28.080 |
|
evaluation that's used pretty widely um |
|
|
|
00:21:25.279 --> 00:21:29.760 |
|
the goal of pythia was basically
|
|
|
00:21:28.080 --> 00:21:32.159 |
|
understanding model training dynamics
|
|
|
00:21:29.760 --> 00:21:36.320 |
|
and scaling and so from that point of |
|
|
|
00:21:32.159 --> 00:21:39.120 |
|
view um they released eight model sizes |
|
|
|
00:21:36.320 --> 00:21:41.880 |
|
from 70 million parameters to 12 billion |
|
|
|
00:21:39.120 --> 00:21:44.960 |
|
parameters for each model size they have |
|
|
|
00:21:41.880 --> 00:21:47.440 |
|
154 checkpoints throughout the training |
|
|
|
00:21:44.960 --> 00:21:52.880 |
|
process um so they basically trained on |
|
|
|
00:21:47.440 --> 00:21:55.960 |
|
uh 300 billion uh tokens
|
|
|
00:21:52.880 --> 00:21:57.400 |
|
and uh did checkpoints you know |
|
|
|
00:21:55.960 --> 00:21:59.000 |
|
periodically during that training |
|
|
|
00:21:57.400 --> 00:22:02.400 |
|
process so you can do interesting things
|
|
|
00:21:59.000 --> 00:22:04.400 |
|
like say uh how quickly do small models |
|
|
|
00:22:02.400 --> 00:22:06.919 |
|
learn things how quickly do large models |
|
|
|
00:22:04.400 --> 00:22:09.480 |
|
learn things and other stuff like that
|
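NOTE
A minimal sketch of that kind of analysis, assuming the HuggingFace transformers
library: the Pythia model cards document each training checkpoint as a git revision,
so the same model can be loaded at different points in training and compared.
    from transformers import GPTNeoXForCausalLM, AutoTokenizer
    model_id = "EleutherAI/pythia-2.8b"  # one of the 8 sizes, 70M to 12B
    tok = AutoTokenizer.from_pretrained(model_id)
    early = GPTNeoXForCausalLM.from_pretrained(model_id, revision="step13000")
    late = GPTNeoXForCausalLM.from_pretrained(model_id, revision="step143000")
    # probe both checkpoints with the same factual prompt and compare the outputs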
|
|
00:22:06.919 --> 00:22:10.760 |
|
in terms of the architecture as I
|
|
|
00:22:09.480 --> 00:22:12.760 |
|
mentioned at the very beginning the |
|
|
|
00:22:10.760 --> 00:22:14.799 |
|
architectures are actually very similar |
|
|
|
00:22:12.760 --> 00:22:17.840 |
|
between them so it's almost easier to |
|
|
|
00:22:14.799 --> 00:22:21.080 |
|
point out their differences than uh |
|
|
|
00:22:17.840 --> 00:22:22.559 |
|
like their similarities um
|
|
|
00:22:21.080 --> 00:22:25.400 |
|
actually one thing that's not on the |
|
|
|
00:22:22.559 --> 00:22:27.159 |
|
slide is um I mainly focused on the |
|
|
|
00:22:25.400 --> 00:22:29.080 |
|
seven billion models because almost |
|
|
|
00:22:27.159 --> 00:22:30.320 |
|
everybody trains a seven billion model
|
|
|
00:22:29.080 --> 00:22:32.720 |
|
it's just kind of like one of the |
|
|
|
00:22:30.320 --> 00:22:34.640 |
|
standard sizes it's the smallest size of |
|
|
|
00:22:32.720 --> 00:22:36.559 |
|
llama it's the
|
|
|
00:22:34.640 --> 00:22:40.240 |
|
largest size of OLMo and one of the largest
|
|
|
00:22:36.559 --> 00:22:46.880 |
|
sizes of pythia um 7 billion models are
|
|
|
00:22:40.240 --> 00:22:52.880 |
|
generally um 4096 wide 32 layers
|
|
|
00:22:46.880 --> 00:22:52.880 |
|
deep uh 32 attention heads
|
|
|
00:22:54.200 --> 00:23:01.159 |
|
um and their um hidden layer size is |
|
|
|
00:22:57.400 --> 00:23:04.400 |
|
about 8/3 of the size of this
|
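NOTE
As a numbers check (my arithmetic, not shown in the talk), the standard "7B" shape
described here works out roughly like this:
    d_model = 4096                 # width
    n_layers = 32                  # depth
    n_heads = 32                   # so each head is 4096 // 32 = 128 wide
    d_ffn = int(8 / 3 * d_model)   # ~10922; llama rounds this up to 11008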
|
|
00:23:01.159 --> 00:23:07.360 |
|
and this is kind of a standard llama 7B |
|
|
|
00:23:04.400 --> 00:23:09.240 |
|
architecture um as you scale up to |
|
|
|
00:23:07.360 --> 00:23:11.520 |
|
larger sizes you just increase the |
|
|
|
00:23:09.240 --> 00:23:13.880 |
|
number of layers you increase the the |
|
|
|
00:23:11.520 --> 00:23:16.080 |
|
width and other things like that so |
|
|
|
00:23:13.880 --> 00:23:19.039 |
|
that's very standard um the other |
|
|
|
00:23:16.080 --> 00:23:21.320 |
|
standard is everybody uses a Transformer |
|
|
|
00:23:19.039 --> 00:23:24.440 |
|
um everybody uses pre-layer Norm like I |
|
|
|
00:23:21.320 --> 00:23:27.120 |
|
talked about before everybody uses RoPE
|
|
|
00:23:24.440 --> 00:23:29.520 |
|
embeddings um almost everybody uses a
|
|
|
00:23:27.120 --> 00:23:30.919 |
|
SwiGLU activation so this is just kind of
|
|
|
00:23:29.520 --> 00:23:31.880 |
|
the standard recipe that almost |
|
|
|
00:23:30.919 --> 00:23:35.120 |
|
everybody uses
|
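NOTE
A minimal sketch of that shared recipe, assuming PyTorch (simplified and illustrative,
not any particular model's actual code): a pre-layer-norm Transformer block with a
SwiGLU feed-forward; in real models RoPE is applied to the query and key vectors
inside attention, and a causal mask is used, both omitted here for brevity.
    import torch.nn as nn
    import torch.nn.functional as F
    class SwiGLU(nn.Module):
        def __init__(self, d_model, d_ffn):
            super().__init__()
            self.gate = nn.Linear(d_model, d_ffn, bias=False)
            self.up = nn.Linear(d_model, d_ffn, bias=False)
            self.down = nn.Linear(d_ffn, d_model, bias=False)
        def forward(self, x):
            return self.down(F.silu(self.gate(x)) * self.up(x))
    class PreLNBlock(nn.Module):
        def __init__(self, d_model=4096, n_heads=32, d_ffn=11008):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.ffn = SwiGLU(d_model, d_ffn)
        def forward(self, x):
            h = self.ln1(x)  # "pre" layer norm: normalize before each sublayer
            x = x + self.attn(h, h, h, need_weights=False)[0]  # residual connection
            return x + self.ffn(self.ln2(x))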
|
|
00:23:31.880 --> 00:23:37.000 |
|
um where things start to change a
|
|
|
00:23:35.120 --> 00:23:38.559 |
|
little bit between the architectures |
|
|
|
00:23:37.000 --> 00:23:40.559 |
|
which arguably might not be very |
|
|
|
00:23:38.559 --> 00:23:44.679 |
|
important is how long is the context |
|
|
|
00:23:40.559 --> 00:23:48.320 |
|
length so um pythia is 2K context |
|
|
|
00:23:44.679 --> 00:23:51.360 |
|
compared to llama 2's 4K context
|
|
|
00:23:48.320 --> 00:23:55.000 |
|
um actually llama 1 is
|
|
|
00:23:51.360 --> 00:24:00.000 |
|
also 2K
|
|
|
00:23:55.000 --> 00:24:02.120 |
|
context and llama 2 is 4K context um |
|
|
|
00:24:00.000 --> 00:24:03.880 |
|
another thing is where do they put |
|
|
|
00:24:02.120 --> 00:24:06.240 |
|
biases in the model most people don't |
|
|
|
00:24:03.880 --> 00:24:08.200 |
|
use biases uh anywhere but sometimes |
|
|
|
00:24:06.240 --> 00:24:09.840 |
|
they put them in various places the |
|
|
|
00:24:08.200 --> 00:24:11.919 |
|
other thing is a variety of layer Norm |
|
|
|
00:24:09.840 --> 00:24:13.559 |
|
that people use and pythia was using
|
|
|
00:24:11.919 --> 00:24:16.240 |
|
standard parametric layer Norm but |
|
|
|
00:24:13.559 --> 00:24:18.000 |
|
gradually people are stepping back from |
|
|
|
00:24:16.240 --> 00:24:21.360 |
|
that and they're using like RMS Norm or |
|
|
|
00:24:18.000 --> 00:24:22.880 |
|
even non-parametric layer norms so um small
|
|
|
00:24:21.360 --> 00:24:25.559 |
|
architecture differences but almost |
|
|
|
00:24:22.880 --> 00:24:29.240 |
|
everybody uses something pretty |
|
|
|
00:24:25.559 --> 00:24:31.960 |
|
similar um the data this was trained on |
|
|
|
00:24:29.240 --> 00:24:34.600 |
|
300 billion tokens of the pile uh which |
|
|
|
00:24:31.960 --> 00:24:37.440 |
|
is on the next slide but one interesting |
|
|
|
00:24:34.600 --> 00:24:39.000 |
|
thing is that they also did a duplicated |
|
|
|
00:24:37.440 --> 00:24:43.320 |
|
training run on |
|
|
|
00:24:39.000 --> 00:24:47.679 |
|
207
|
|
|
00:24:43.320 --> 00:24:50.559 |
|
billion tokens and um the idea is that |
|
|
|
00:24:47.679 --> 00:24:53.039 |
|
they um they wanted to test how |
|
|
|
00:24:50.559 --> 00:24:54.919 |
|
important it is to duplicate how much do |
|
|
|
00:24:53.039 --> 00:24:56.279 |
|
you gain by D duplicating in terms of |
|
|
|
00:24:54.919 --> 00:24:59.559 |
|
training |
|
|
|
00:24:56.279 --> 00:25:01.520 |
|
efficiency and um |
|
|
|
00:24:59.559 --> 00:25:04.760 |
|
they have different learning rates for |
|
|
|
00:25:01.520 --> 00:25:08.640 |
|
different model sizes the 7B model is uh |
|
|
|
00:25:04.760 --> 00:25:11.760 |
|
1.2e-4 in contrast llama is
|
|
|
00:25:08.640 --> 00:25:13.120 |
|
3e-4 so this is a potentially big
|
|
|
00:25:11.760 --> 00:25:16.840 |
|
change because the learning rate is |
|
|
|
00:25:13.120 --> 00:25:18.880 |
|
actually less than half um next is the
|
|
|
00:25:16.840 --> 00:25:20.559 |
|
batch size they use 2 million tokens and |
|
|
|
00:25:18.880 --> 00:25:23.600 |
|
actually llama 2 uses four million |
|
|
|
00:25:20.559 --> 00:25:26.520 |
|
tokens for the batch size so um there |
|
|
|
00:25:23.600 --> 00:25:29.000 |
|
are some small differences there
|
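NOTE
Collecting those hyperparameters in one place (my own summary of what was just said;
treat the values as approximate):
    pythia_7b = {"lr": 1.2e-4, "batch_tokens": 2_000_000}
    llama_7b = {"lr": 3.0e-4, "batch_tokens": 4_000_000}
    # 300B training tokens at 2M tokens per batch is about 150,000 optimizer steps
    steps = 300_000_000_000 // pythia_7b["batch_tokens"]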
|
|
00:25:26.520 --> 00:25:31.480 |
|
so next I'd like to talk
|
|
|
00:25:29.000 --> 00:25:33.760 |
|
about the pile um this is kind of the |
|
|
|
00:25:31.480 --> 00:25:36.279 |
|
original open data set for training |
|
|
|
00:25:33.760 --> 00:25:37.960 |
|
large language models um that being said |
|
|
|
00:25:36.279 --> 00:25:42.159 |
|
it's a really nice data set made out of |
|
|
|
00:25:37.960 --> 00:25:47.039 |
|
lots of uh different types of data and |
|
|
|
00:25:42.159 --> 00:25:49.960 |
|
namely it contains academic data so
|
|
|
00:25:47.039 --> 00:25:52.559 |
|
that includes things like PubMed arXiv
|
|
|
00:25:49.960 --> 00:25:55.240 |
|
FreeLaw the US patent office other
|
|
|
00:25:52.559 --> 00:25:57.000 |
|
stuff like that it also contains
|
|
|
00:25:55.240 --> 00:26:00.080 |
|
internet data so this is data that's |
|
|
|
00:25:57.000 --> 00:26:02.840 |
|
just scraped from parts of the internet |
|
|
|
00:26:00.080 --> 00:26:05.799 |
|
but also stack exchange and
|
|
|
00:26:02.840 --> 00:26:09.480 |
|
Wikipedia um it also has some prose so
|
|
|
00:26:05.799 --> 00:26:12.200 |
|
these are um like book data sets it has |
|
|
|
00:26:09.480 --> 00:26:15.640 |
|
some code data sets and it has some like |
|
|
|
00:26:12.200 --> 00:26:18.799 |
|
subtitle dialog data sets in it so this |
|
|
|
00:26:15.640 --> 00:26:22.399 |
|
overall is 800 gigabytes or about 300 |
|
|
|
00:26:18.799 --> 00:26:22.399 |
|
billion tokens according to |
|
|
|
00:26:23.360 --> 00:26:28.080 |
|
the tokenizer
|
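NOTE
A quick sanity check on those two figures (my arithmetic): 800 gigabytes of text over
300 billion tokens is roughly 2.7 bytes per token, which is plausible for
byte-pair-encoded English text.
    bytes_per_token = 800e9 / 300e9  # ~2.7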
|
|
00:26:25.760 --> 00:26:30.919 |
|
so some of the findings from the pythia paper in addition to just being
|
|
|
00:26:28.080 --> 00:26:33.399 |
|
like one of the original strong uh open |
|
|
|
00:26:30.919 --> 00:26:36.279 |
|
language models is they have some |
|
|
|
00:26:33.399 --> 00:26:38.600 |
|
interesting analysis into um model |
|
|
|
00:26:36.279 --> 00:26:40.960 |
|
memorization and how quickly models |
|
|
|
00:26:38.600 --> 00:26:44.080 |
|
learn uh based on the number of tokens |
|
|
|
00:26:40.960 --> 00:26:45.520 |
|
that you show them and this graph is |
|
|
|
00:26:44.080 --> 00:26:47.520 |
|
maybe a little bit hard to see from the |
|
|
|
00:26:45.520 --> 00:26:49.440 |
|
back so I'll interpret it the left side |
|
|
|
00:26:47.520 --> 00:26:50.840 |
|
is one of their smaller models 160 |
|
|
|
00:26:49.440 --> 00:26:54.880 |
|
million the right side is their biggest |
|
|
|
00:26:50.840 --> 00:26:57.799 |
|
Model 12 billion um the different lines |
|
|
|
00:26:54.880 --> 00:26:58.840 |
|
here are different steps of the training |
|
|
|
00:26:57.799 --> 00:27:03.120 |
|
process |
|
|
|
00:26:58.840 --> 00:27:09.640 |
|
so like uh 13,000 steps uh |
|
|
|
00:27:03.120 --> 00:27:13.840 |
|
30 sorry 39,000 steps and uh etc etc and |
|
|
|
00:27:09.640 --> 00:27:18.240 |
|
the x-axis here is the frequency of a
|
|
|
00:27:13.840 --> 00:27:21.679 |
|
fact in the
|
|
|
00:27:18.240 --> 00:27:24.640 |
|
training data and the y-axis is question
|
|
|
00:27:21.679 --> 00:27:29.159 |
|
answering accuracy about that fact and |
|
|
|
00:27:24.640 --> 00:27:30.919 |
|
so what this is basically showing is |
|
|
|
00:27:29.159 --> 00:27:35.679 |
|
as you scale up the |
|
|
|
00:27:30.919 --> 00:27:38.520 |
|
model um the larger models learn faster |
|
|
|
00:27:35.679 --> 00:27:41.120 |
|
um up to a point so like right here you |
|
|
|
00:27:38.520 --> 00:27:44.519 |
|
see the 2.8 billion model is about the |
|
|
|
00:27:41.120 --> 00:27:46.080 |
|
same as the 12 billion model at earlier |
|
|
|
00:27:44.519 --> 00:27:48.080 |
|
parts of the training |
|
|
|
00:27:46.080 --> 00:27:51.000 |
|
process but as you get later in the |
|
|
|
00:27:48.080 --> 00:27:54.200 |
|
training process the 12 billion model is |
|
|
|
00:27:51.000 --> 00:27:57.279 |
|
like memorizing and being able to recall |
|
|
|
00:27:54.200 --> 00:27:58.840 |
|
more facts uh so like right at the very |
|
|
|
00:27:57.279 --> 00:28:02.519 |
|
beginning you need to scale up to about |
|
|
|
00:27:58.840 --> 00:28:05.840 |
|
2.8 billion to learn efficiently uh but |
|
|
|
00:28:02.519 --> 00:28:07.799 |
|
at the end this model is like better uh |
|
|
|
00:28:05.840 --> 00:28:10.399 |
|
further on |
|
|
|
00:28:07.799 --> 00:28:12.000 |
|
so this is really nice all of this all |
|
|
|
00:28:10.399 --> 00:28:14.240 |
|
of these checkpoints all this data is |
|
|
|
00:28:12.000 --> 00:28:15.840 |
|
open they even made the data loaders so |
|
|
|
00:28:14.240 --> 00:28:17.360 |
|
it's reproducible so you can look at the |
|
|
|
00:28:15.840 --> 00:28:19.559 |
|
actual data that the model was trained |
|
|
|
00:28:17.360 --> 00:28:21.000 |
|
on um at each of the checkpoints so if |
|
|
|
00:28:19.559 --> 00:28:24.320 |
|
you want to do this sort of analysis |
|
|
|
00:28:21.000 --> 00:28:27.120 |
|
this is a good set of um models to look |
|
|
|
00:28:24.320 --> 00:28:28.720 |
|
at um another thing that they did is |
|
|
|
00:28:27.120 --> 00:28:31.120 |
|
they actually did interventions on the
|
|
|
00:28:28.720 --> 00:28:35.640 |
|
data so they um tried to intervene on |
|
|
|
00:28:31.120 --> 00:28:37.279 |
|
the data to modify it because uh male or |
|
|
|
00:28:35.640 --> 00:28:38.840 |
|
masculine pronouns were much more |
|
|
|
00:28:37.279 --> 00:28:42.000 |
|
frequent than feminine pronouns in the |
|
|
|
00:28:38.840 --> 00:28:43.919 |
|
data so they intervened on the data um |
|
|
|
00:28:42.000 --> 00:28:45.559 |
|
to try to balance out the distribution |
|
|
|
00:28:43.919 --> 00:28:48.000 |
|
of masculine and feminine pronouns and |
|
|
|
00:28:45.559 --> 00:28:49.559 |
|
demonstrated that the model became less |
|
|
|
00:28:48.000 --> 00:28:52.080 |
|
biased towards generating masculine |
|
|
|
00:28:49.559 --> 00:28:55.480 |
|
pronouns later so they also were able to |
|
|
|
00:28:52.080 --> 00:28:55.480 |
|
do those sorts of intervention studies
|
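NOTE
An illustrative sketch of that kind of data intervention (hypothetical code; the
paper's actual procedure differs in its details): rewrite a fraction of masculine
pronouns so the overall distribution becomes more balanced.
    import random
    SWAPS = {"he": "she", "him": "her", "his": "her"}
    def intervene(tokens, p=0.5):
        # swap each masculine pronoun with probability p
        return [SWAPS[t] if t in SWAPS and random.random() < p else t
                for t in tokens]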
|
|
00:28:55.919 --> 00:29:00.039 |
|
um any questions about
|
|
|
00:29:00.519 --> 00:29:07.919 |
|
pythia okay um next is OLMo OLMo is
|
|
|
00:29:04.720 --> 00:29:10.279 |
|
a more recent model um pythia I think
|
|
|
00:29:07.919 --> 00:29:13.200 |
|
came out around a year ago OLMo is very
|
|
|
00:29:10.279 --> 00:29:15.440 |
|
recent about a month ago and um this was |
|
|
|
00:29:13.200 --> 00:29:18.360 |
|
created by AI2 the Allen Institute for
|
|
|
00:29:15.440 --> 00:29:20.440 |
|
AI one thing you'll notice is the two um |
|
|
|
00:29:18.360 --> 00:29:22.279 |
|
completely open models that I'm talking |
|
|
|
00:29:20.440 --> 00:29:24.799 |
|
about both came from nonprofit |
|
|
|
00:29:22.279 --> 00:29:28.640 |
|
organizations um so EleutherAI is
|
|
|
00:29:24.799 --> 00:29:30.039 |
|
nonprofit uh ai2 is nonprofit so uh |
|
|
|
00:29:28.640 --> 00:29:31.519 |
|
they're maybe a little bit less worried |
|
|
|
00:29:30.039 --> 00:29:34.919 |
|
about people trying to sue them for lots |
|
|
|
00:29:31.519 --> 00:29:36.720 |
|
of money for fair use violations uh so |
|
|
|
00:29:34.919 --> 00:29:38.120 |
|
uh that's the cynical point of view the |
|
|
|
00:29:36.720 --> 00:29:39.679 |
|
the non cynical point of view is they |
|
|
|
00:29:38.120 --> 00:29:42.279 |
|
have nothing to lose by
|
|
|
00:29:39.679 --> 00:29:44.240 |
|
having other people
|
|
|
00:29:42.279 --> 00:29:47.039 |
|
create a better model so um they're |
|
|
|
00:29:44.240 --> 00:29:50.840 |
|
willing to do this for open and good
|
|
|
00:29:47.039 --> 00:29:54.080 |
|
science um their goal is better science |
|
|
|
00:29:50.840 --> 00:29:55.880 |
|
of state-of-the-art LLMs and uh some of the
|
|
|
00:29:54.080 --> 00:29:57.600 |
|
unique features are top performance of a |
|
|
|
00:29:55.880 --> 00:29:59.840 |
|
fully documented model and they also |
|
|
|
00:29:57.600 --> 00:30:02.960 |
|
have instruction tuned models
|
|
|
00:29:59.840 --> 00:30:04.960 |
|
Etc looking at the parameters um |
|
|
|
00:30:02.960 --> 00:30:06.240 |
|
basically similar to llama the one big |
|
|
|
00:30:04.960 --> 00:30:08.440 |
|
difference is they're using |
|
|
|
00:30:06.240 --> 00:30:10.440 |
|
non-parametric layer Norm instead of RMS |
|
|
|
00:30:08.440 --> 00:30:13.640 |
|
Norm so this is basically layer Norm |
|
|
|
00:30:10.440 --> 00:30:15.960 |
|
with no parameters whatsoever um
|
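NOTE
A minimal sketch of the three normalization variants in play here, assuming PyTorch
(illustrative, not OLMo's actual code):
    import torch
    def layer_norm(x, gamma, beta, eps=1e-5):
        # standard parametric LayerNorm: normalize, then learned scale and shift
        mu = x.mean(-1, keepdim=True)
        var = x.var(-1, keepdim=True, unbiased=False)
        return (x - mu) / torch.sqrt(var + eps) * gamma + beta
    def rms_norm(x, gamma, eps=1e-5):
        # RMSNorm (llama): no mean subtraction, learned scale only
        return x / torch.sqrt(x.pow(2).mean(-1, keepdim=True) + eps) * gamma
    def nonparametric_layer_norm(x, eps=1e-5):
        # OLMo-style: normalize with no learned parameters at all
        mu = x.mean(-1, keepdim=True)
        var = x.var(-1, keepdim=True, unbiased=False)
        return (x - mu) / torch.sqrt(var + eps)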
|
|
00:30:13.640 --> 00:30:18.880 |
|
they didn't super clearly justify why |
|
|
|
00:30:15.960 --> 00:30:21.760 |
|
they decided to do this one difference |
|
|
|
00:30:18.880 --> 00:30:25.519 |
|
from pythia uh this was actually trained on
|
|
|
00:30:21.760 --> 00:30:29.559 |
|
2.46 trillion tokens uh so compare this |
|
|
|
00:30:25.519 --> 00:30:32.600 |
|
to uh to pythia which was trained on 300
|
|
|
00:30:29.559 --> 00:30:34.480 |
|
billion tokens and so they basically |
|
|
|
00:30:32.600 --> 00:30:36.120 |
|
trained it for a lot longer they trained |
|
|
|
00:30:34.480 --> 00:30:37.960 |
|
it on something called the dolma Corpus |
|
|
|
00:30:36.120 --> 00:30:41.480 |
|
which they also created at |
|
|
|
00:30:37.960 --> 00:30:44.279 |
|
ai2 um actually I think this might be |
|
|
|
00:30:41.480 --> 00:30:47.279 |
|
wrong uh so just ignore that that was |
|
|
|
00:30:44.279 --> 00:30:49.760 |
|
copy paste mistake a typo so um they
|
|
|
00:30:47.279 --> 00:30:52.039 |
|
always use 3e-4 as the
|
|
|
00:30:49.760 --> 00:30:53.679 |
|
learning rate which is the same as uh as |
|
|
|
00:30:52.039 --> 00:30:56.039 |
|
llama and the batch size is 4 million |
|
|
|
00:30:53.679 --> 00:30:59.960 |
|
tokens which is also the same as |
|
|
|
00:30:56.039 --> 00:31:02.000 |
|
Llama. So the Dolma corpus that they created is
|
|
|
00:30:59.960 --> 00:31:04.320 |
|
um, actually pretty similar to the Pile,
|
|
|
00:31:02.000 --> 00:31:07.320 |
|
but it's a larger corpus: it's three
|
|
|
00:31:04.320 --> 00:31:09.240 |
|
trillion tokens. This is also fully open,
|
|
|
00:31:07.320 --> 00:31:11.480 |
|
so you can download it from Hugging Face,
|
|
|
00:31:09.240 --> 00:31:15.399 |
|
uh, if you can find some disk to put
|
|
|
00:31:11.480 --> 00:31:19.200 |
|
three trillion tokens on um |
|
|
|
00:31:15.399 --> 00:31:21.080 |
|
so uh another thing is that they have a |
|
|
|
00:31:19.200 --> 00:31:23.360 |
|
data processing pipeline of language |
|
|
|
00:31:21.080 --> 00:31:26.240 |
|
filtering, quality filtering, content
|
|
|
00:31:23.360 --> 00:31:28.399 |
|
filtering, deduplication, uh, multi-source
|
|
|
00:31:26.240 --> 00:31:31.440 |
|
mixing and tokenization |
|
|
|
00:31:28.399 --> 00:31:33.279 |
|
and so the nice thing about this is a |
|
|
|
00:31:31.440 --> 00:31:35.639 |
|
lot of this stuff is usually proprietary |
|
|
|
00:31:33.279 --> 00:31:38.240 |
|
for most language model creators, so
|
|
|
00:31:35.639 --> 00:31:39.600 |
|
if you want to see all of the like data |
|
|
|
00:31:38.240 --> 00:31:41.039 |
|
processing pipeline that goes into |
|
|
|
00:31:39.600 --> 00:31:42.799 |
|
training a model this is a pretty good |
|
|
|
00:31:41.039 --> 00:31:45.320 |
|
example of that.
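As a toy illustration of those stages, here is a sketch; the filters below are crude stand-ins, where the real Dolma tooling uses trained language-ID and quality classifiers and both fuzzy and exact deduplication.

```python
import hashlib

def deduplicate(docs):
    # Exact-match deduplication by content hash; real pipelines
    # typically add fuzzy (e.g. MinHash) deduplication on top.
    seen, out = set(), []
    for d in docs:
        h = hashlib.sha1(d.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(d)
    return out

def process(docs):
    docs = [d for d in docs if d.isascii()]          # crude "language filtering"
    docs = [d for d in docs if len(d.split()) > 50]  # crude "quality filtering"
    docs = [d for d in docs if "viagra" not in d]    # crude "content filtering"
    return deduplicate(docs)                         # deduplication
```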
|
|
|
00:31:42.799 --> 00:31:48.120 |
|
Um, the document types that are
|
|
|
00:31:45.320 --> 00:31:51.080 |
|
included are Common Crawl, and so
|
|
|
00:31:48.120 --> 00:31:53.919 |
|
Common Crawl is just, um, data crawled
|
|
|
00:31:51.080 --> 00:31:56.760 |
|
from the internet; it's, uh, about 2.2
|
|
|
00:31:53.919 --> 00:32:00.039 |
|
trillion tokens uh they also have the |
|
|
|
00:31:56.760 --> 00:32:03.399 |
|
Stack, which is, um, lots of code, about 400
|
|
|
00:32:00.039 --> 00:32:09.120 |
|
billion tokens of code um C4 which is |
|
|
|
00:32:03.399 --> 00:32:13.039 |
|
also, uh, web data; uh, Reddit; um, STEM
|
|
|
00:32:09.120 --> 00:32:16.960 |
|
papers; books; and, uh, Wikipedia
|
|
|
00:32:13.039 --> 00:32:19.039 |
|
and encyclopedic data. So, um, you can see that it
|
|
|
00:32:16.960 --> 00:32:21.440 |
|
has a fairly large amount of coverage |
|
|
|
00:32:19.039 --> 00:32:24.480 |
|
although mostly in |
|
|
|
00:32:21.440 --> 00:32:26.799 |
|
English. Um, so, some findings from OLMo
|
|
|
00:32:24.480 --> 00:32:29.440 |
|
that I found interesting um number one |
|
|
|
00:32:26.799 --> 00:32:31.279 |
|
it has competitive average performance |
|
|
|
00:32:29.440 --> 00:32:34.320 |
|
so as I mentioned I think this is the |
|
|
|
00:32:31.279 --> 00:32:38.519 |
|
first fully open and documented language |
|
|
|
00:32:34.320 --> 00:32:40.639 |
|
model in the 7-billion range that is
|
|
|
00:32:38.519 --> 00:32:43.360 |
|
competitive with all the other uh kind |
|
|
|
00:32:40.639 --> 00:32:47.080 |
|
of, like, less-open models in this range.
|
|
|
00:32:43.360 --> 00:32:49.200 |
|
So, uh, for example, uh, Llama 2 is 70.5
|
|
|
00:32:47.080 --> 00:32:51.840 |
|
average on all of the datasets that
|
|
|
00:32:49.200 --> 00:32:53.960 |
|
they're evaluating on, Falcon is
|
|
|
00:32:51.840 --> 00:32:58.000 |
|
70.3, MPT is
|
|
|
00:32:53.960 --> 00:33:00.000 |
|
69.8, and OLMo is 69.3, so it's not a
|
|
|
00:32:58.000 --> 00:33:04.639 |
|
slouch with respect to accuracy compared |
|
|
|
00:33:00.000 --> 00:33:06.399 |
|
to Pythia, which had 63. Um, much of the
|
|
|
00:33:04.639 --> 00:33:09.120 |
|
issue with Pythia could just be that they
|
|
|
00:33:06.399 --> 00:33:12.080 |
|
didn't train for long enough and some |
|
|
|
00:33:09.120 --> 00:33:15.039 |
|
evidence of this is this:
|
|
|
00:33:12.080 --> 00:33:17.000 |
|
um where they measured performance |
|
|
|
00:33:15.039 --> 00:33:18.880 |
|
continuously as they train for longer. So,
|
|
|
00:33:17.000 --> 00:33:21.440 |
|
the left side is training on 500 billion |
|
|
|
00:33:18.880 --> 00:33:24.080 |
|
tokens which is already more than what |
|
|
|
00:33:21.440 --> 00:33:25.840 |
|
Pythia trained on; the right side is, uh,
|
|
|
00:33:24.080 --> 00:33:30.360 |
|
two uh |
|
|
|
00:33:25.840 --> 00:33:32.679 |
|
2.4 or 2.5 trillion tokens, and you can see
|
|
|
00:33:30.360 --> 00:33:34.440 |
|
interestingly that the numbers are just |
|
|
|
00:33:32.679 --> 00:33:36.760 |
|
continuing to increase as they train for |
|
|
|
00:33:34.440 --> 00:33:39.480 |
|
longer so it seems that training for |
|
|
|
00:33:36.760 --> 00:33:43.679 |
|
longer and longer just kind of |
|
|
|
00:33:39.480 --> 00:33:47.000 |
|
helps um one question is whether they're |
|
|
|
00:33:43.679 --> 00:33:48.679 |
|
like overfitting to uh the data set like |
|
|
|
00:33:47.000 --> 00:33:52.000 |
|
is any of the test data included in |
|
|
|
00:33:48.679 --> 00:33:53.799 |
|
their training data here? Um, they did do
|
|
|
00:33:52.000 --> 00:33:57.440 |
|
deduplication to some extent to try to
|
|
|
00:33:53.799 --> 00:33:59.320 |
|
remove the test data. So, um, I think
|
|
|
00:33:57.440 --> 00:34:00.919 |
|
it's quite probable that these are
|
|
|
00:33:59.320 --> 00:34:02.720 |
|
real gains and if they train for longer |
|
|
|
00:34:00.919 --> 00:34:07.559 |
|
they might get an even better model but |
|
|
|
00:34:02.720 --> 00:34:07.559 |
|
um I'm not you know 100% sure about |
|
|
|
00:34:07.679 --> 00:34:12.639 |
|
that cool |
|
|
|
00:34:10.480 --> 00:34:14.359 |
|
um, yeah, one other thing that I
|
|
|
00:34:12.639 --> 00:34:16.119 |
|
noticed, which might be, uh, a
|
|
|
00:34:14.359 --> 00:34:18.119 |
|
little bit interesting is um all of |
|
|
|
00:34:16.119 --> 00:34:20.240 |
|
these, which I didn't mention here:
|
|
|
00:34:18.119 --> 00:34:21.760 |
|
all of these have a learning rate schedule,
|
|
|
00:34:20.240 --> 00:34:23.679 |
|
and typically they have a learning rate |
|
|
|
00:34:21.760 --> 00:34:25.760 |
|
schedule where they do this standard |
|
|
|
00:34:23.679 --> 00:34:29.159 |
|
warmup where they increase and then they |
|
|
|
00:34:25.760 --> 00:34:30.960 |
|
decrease, but they stop decreasing at a
|
|
|
00:34:29.159 --> 00:34:34.040 |
|
floor and usually that floor is about |
|
|
|
00:34:30.960 --> 00:34:36.720 |
|
one-tenth the size of the, um, the original
|
|
|
00:34:34.040 --> 00:34:38.520 |
|
learning rate. So if they start out at
|
|
|
00:34:36.720 --> 00:34:41.919 |
|
3e-4, they'll decrease it, but
|
|
|
00:34:38.520 --> 00:34:43.960 |
|
only to 3e-5, and then hold it constant. So
|
|
|
00:34:41.919 --> 00:34:46.079 |
|
that might be another good thing to point
|
|
|
00:34:43.960 --> 00:34:46.079 |
|
out.
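A sketch of that kind of schedule; the warmup length, total steps, and decay shape here are illustrative, not any particular model's values.

```python
import math

def lr_at(step, peak=3e-4, warmup=2000, total=500_000, floor_frac=0.1):
    # Linear warmup to the peak, then cosine decay down to a floor of
    # one-tenth the peak (3e-4 -> 3e-5) instead of decaying to zero.
    if step < warmup:
        return peak * step / warmup
    progress = min((step - warmup) / (total - warmup), 1.0)
    floor = floor_frac * peak
    return floor + (peak - floor) * 0.5 * (1 + math.cos(math.pi * progress))
```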
|
|
|
00:34:46.480 --> 00:34:51.240 |
|
Cool, any questions about
|
|
|
00:34:51.320 --> 00:34:58.599 |
|
this? Okay, um, so now I'll get into Llama 2. Um,
|
|
|
00:34:56.560 --> 00:35:00.200 |
|
Llama 2, you know, is a model that
|
|
|
00:34:58.599 --> 00:35:04.400 |
|
probably most people have heard about; it
|
|
|
00:35:00.200 --> 00:35:07.599 |
|
was created by Meta. Um, it's one of the
|
|
|
00:35:04.400 --> 00:35:09.480 |
|
uh strongest open language models now |
|
|
|
00:35:07.599 --> 00:35:10.839 |
|
although arguably there might be |
|
|
|
00:35:09.480 --> 00:35:15.000 |
|
stronger open language |
|
|
|
00:35:10.839 --> 00:35:18.400 |
|
models and the goal is a strong and safe |
|
|
|
00:35:15.000 --> 00:35:21.320 |
|
open LM and they have base and chat |
|
|
|
00:35:18.400 --> 00:35:23.400 |
|
versions of it and some unique features |
|
|
|
00:35:21.320 --> 00:35:24.680 |
|
are I think this is the open model with |
|
|
|
00:35:23.400 --> 00:35:30.119 |
|
the strongest |
|
|
|
00:35:24.680 --> 00:35:30.119 |
|
safety uh safeguards so it |
|
|
|
00:35:30.200 --> 00:35:35.079 |
|
is if I were to pick one model that I |
|
|
|
00:35:33.079 --> 00:35:37.200 |
|
wanted to use in an actual system that |
|
|
|
00:35:35.079 --> 00:35:39.599 |
|
was directly conversing with users I |
|
|
|
00:35:37.200 --> 00:35:41.920 |
|
would probably pick this one over |
|
|
|
00:35:39.599 --> 00:35:43.760 |
|
something like, uh, Mistral, even though
|
|
|
00:35:41.920 --> 00:35:46.599 |
|
Mistral shows superior performance some
|
|
|
00:35:43.760 --> 00:35:48.680 |
|
of the time um it might say things that |
|
|
|
00:35:46.599 --> 00:35:52.000 |
|
you don't want it to be saying to like |
|
|
|
00:35:48.680 --> 00:35:55.520 |
|
users so I think that's one of the uh |
|
|
|
00:35:52.000 --> 00:35:56.880 |
|
the nice things about Llama. So, I've been
|
|
|
00:35:55.520 --> 00:35:58.280 |
|
comparing everything else to it so |
|
|
|
00:35:56.880 --> 00:36:00.560 |
|
that's pretty normal |
|
|
|
00:35:58.280 --> 00:36:03.160 |
|
um one thing about the data is the data |
|
|
|
00:36:00.560 --> 00:36:04.520 |
|
is not open; they didn't say what data
|
|
|
00:36:03.160 --> 00:36:06.960 |
|
they trained on for reasons that I |
|
|
|
00:36:04.520 --> 00:36:08.960 |
|
talked about before um what they did say |
|
|
|
00:36:06.960 --> 00:36:12.400 |
|
is it was trained on public sources |
|
|
|
00:36:08.960 --> 00:36:14.240 |
|
upsampling the most factual sources so |
|
|
|
00:36:12.400 --> 00:36:17.640 |
|
um that's what they |
|
|
|
00:36:14.240 --> 00:36:19.240 |
|
said. The Llama 1 paper has more
|
|
|
00:36:17.640 --> 00:36:20.760 |
|
information and so I'll talk about what |
|
|
|
00:36:19.240 --> 00:36:22.400 |
|
they did in the Llama one paper and we |
|
|
|
00:36:20.760 --> 00:36:24.920 |
|
can maybe extrapolate that they did |
|
|
|
00:36:22.400 --> 00:36:26.560 |
|
something similar in the Llama 2 paper.
|
|
|
00:36:24.920 --> 00:36:28.200 |
|
um and then the total training amount is |
|
|
|
00:36:26.560 --> 00:36:30.079 |
|
2 trillion tokens so that's actually |
|
|
|
00:36:28.200 --> 00:36:32.680 |
|
less |
|
|
|
00:36:30.079 --> 00:36:34.520 |
|
than OLMo. Um, so if we look at the Llama 1
|
|
|
00:36:32.680 --> 00:36:36.319 |
|
training data it looks a little bit like |
|
|
|
00:36:34.520 --> 00:36:38.839 |
|
it looks very much like the OLMo training
|
|
|
00:36:36.319 --> 00:36:41.200 |
|
data: it's Common Crawl, C4, GitHub,
|
|
|
00:36:38.839 --> 00:36:45.160 |
|
Wikipedia, books, arXiv, Stack
|
|
|
00:36:41.200 --> 00:36:46.400 |
|
Exchange. Um, and one thing you'll notice
|
|
|
00:36:45.160 --> 00:36:49.200 |
|
is that they |
|
|
|
00:36:46.400 --> 00:36:51.599 |
|
upsampled uh Wikipedia and books and |
|
|
|
00:36:49.200 --> 00:36:53.319 |
|
downsampled GitHub, compared
|
|
|
00:36:51.599 --> 00:36:57.000 |
|
to the amount of data that they actually |
|
|
|
00:36:53.319 --> 00:37:00.760 |
|
had. And so they did 2.4 epochs over
|
|
|
00:36:57.000 --> 00:37:03.040 |
|
Wikipedia, 2.2 epochs over books, and only
|
|
|
00:37:00.760 --> 00:37:05.880 |
|
one Epoch over like the standard web |
|
|
|
00:37:03.040 --> 00:37:08.240 |
|
data and arXiv and Stack Exchange, and
|
|
|
00:37:05.880 --> 00:37:09.760 |
|
0.6 epochs over the GitHub data that they
|
|
|
00:37:08.240 --> 00:37:11.520 |
|
had so |
|
|
|
00:37:09.760 --> 00:37:13.800 |
|
obviously |
|
|
|
00:37:11.520 --> 00:37:15.520 |
|
they thought that this Wikipedia and |
|
|
|
00:37:13.800 --> 00:37:17.040 |
|
books data was more valuable for some |
|
|
|
00:37:15.520 --> 00:37:20.560 |
|
reason and they really wanted the model |
|
|
|
00:37:17.040 --> 00:37:22.319 |
|
to learn well from it. So I think, um,
|
|
|
00:37:20.560 --> 00:37:24.240 |
|
when they say that they upsampled |
|
|
|
00:37:22.319 --> 00:37:27.960 |
|
factual data I'm assuming that that's |
|
|
|
00:37:24.240 --> 00:37:27.960 |
|
also what they did in Llama 2.
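One way to think about that upsampling: the number of epochs per source, together with each source's size, determines the sampling probability during training. A sketch with made-up token counts; only the epoch targets below are the ones quoted above.

```python
# Token counts are illustrative, not the real Llama 1 numbers.
sources = {
    #  name:      (corpus_tokens, target_epochs)
    "wikipedia": (100e9, 2.4),   # upsampled
    "books":     (100e9, 2.2),   # upsampled
    "web":       (1.5e12, 1.0),
    "github":    (300e9, 0.6),   # downsampled
}

effective = {k: size * epochs for k, (size, epochs) in sources.items()}
total = sum(effective.values())
for name, tokens in effective.items():
    print(f"{name}: sampled with probability {tokens / total:.3f}")
```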
|
|
|
00:37:29.440 --> 00:37:33.640 |
|
so the next thing um that's |
|
|
|
00:37:35.960 --> 00:37:43.160 |
|
yeah uh what does it need to have |
|
|
|
00:37:40.280 --> 00:37:45.400 |
|
like... Oh, um, yeah, actually that's a really
|
|
|
00:37:43.160 --> 00:37:47.960 |
|
good question: why are epochs not integer
|
|
|
00:37:45.400 --> 00:37:50.240 |
|
values? There's actually no reason at all
|
|
|
00:37:47.960 --> 00:37:52.040 |
|
that you should do you know an integer |
|
|
|
00:37:50.240 --> 00:37:54.760 |
|
number of epochs; you can always save out a
|
|
|
00:37:52.040 --> 00:37:57.560 |
|
checkpoint every you know 10,000 steps |
|
|
|
00:37:54.760 --> 00:37:59.200 |
|
or something so I'd actually encourage |
|
|
|
00:37:57.560 --> 00:38:02.040 |
|
people to get away from saving out |
|
|
|
00:37:59.200 --> 00:38:03.640 |
|
checkpoints every Epoch because that |
|
|
|
00:38:02.040 --> 00:38:05.319 |
|
kind of discourages you from making your |
|
|
|
00:38:03.640 --> 00:38:07.160 |
|
training data larger because if you make |
|
|
|
00:38:05.319 --> 00:38:09.359 |
|
your training data larger,
|
|
|
00:38:07.160 --> 00:38:11.760 |
|
you'll think, oh, training takes forever,
|
|
|
00:38:09.359 --> 00:38:13.480 |
|
um, because it takes forever to finish an
|
|
|
00:38:11.760 --> 00:38:16.599 |
|
epoch. But in reality you can just save
|
|
|
00:38:13.480 --> 00:38:18.760 |
|
out you know periodically and um and |
|
|
|
00:38:16.599 --> 00:38:21.319 |
|
keep the checkpoints from earlier.
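For instance, a step-based checkpointing loop might look like this; a sketch assuming a Hugging Face-style `model`, an `optimizer`, and a `train_loader` are already set up.

```python
import torch

SAVE_EVERY = 10_000  # save by optimizer step, not by epoch

for step, batch in enumerate(train_loader):
    loss = model(**batch).loss  # assumes an HF-style model returning .loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step > 0 and step % SAVE_EVERY == 0:
        torch.save({"step": step,
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict()},
                   f"checkpoint_{step:07d}.pt")
```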
|
|
|
00:38:18.760 --> 00:38:22.680 |
|
so many language models don't train on |
|
|
|
00:38:21.319 --> 00:38:24.480 |
|
all the data on the web because it would |
|
|
|
00:38:22.680 --> 00:38:25.800 |
|
just be too expensive to do so despite |
|
|
|
00:38:24.480 --> 00:38:27.640 |
|
the fact that they have all the data on |
|
|
|
00:38:25.800 --> 00:38:29.079 |
|
the web |
|
|
|
00:38:27.640 --> 00:38:31.000 |
|
But very good question, though;
|
|
|
00:38:29.079 --> 00:38:34.560 |
|
that's an important |
|
|
|
00:38:31.000 --> 00:38:36.280 |
|
point. Um, okay, so now I'd like to talk a
|
|
|
00:38:34.560 --> 00:38:39.440 |
|
little bit about the safety tuning that |
|
|
|
00:38:36.280 --> 00:38:42.359 |
|
goes into uh the Llama models I might |
|
|
|
00:38:39.440 --> 00:38:45.640 |
|
talk a little bit more about this um |
|
|
|
00:38:42.359 --> 00:38:48.960 |
|
later but I I think uh I'll I'll talk |
|
|
|
00:38:45.640 --> 00:38:51.480 |
|
about it now um basically the Llama 2 |
|
|
|
00:38:48.960 --> 00:38:54.200 |
|
developers put a lot of effort into |
|
|
|
00:38:51.480 --> 00:38:56.400 |
|
training the model to be safe because um |
|
|
|
00:38:54.200 --> 00:38:59.599 |
|
you know they're a big company and they |
|
|
|
00:38:56.400 --> 00:39:01.200 |
|
don't want any PR disasters. Um, uh,
|
|
|
00:38:59.599 --> 00:39:02.680 |
|
and also you know they want an actual |
|
|
|
00:39:01.200 --> 00:39:04.960 |
|
safe model that they can use in
|
|
|
00:39:02.680 --> 00:39:08.240 |
|
their products so I think they have the |
|
|
|
00:39:04.960 --> 00:39:10.880 |
|
Dual uh you know dual motivation |
|
|
|
00:39:08.240 --> 00:39:13.200 |
|
there. The first thing that they did was
|
|
|
00:39:10.880 --> 00:39:15.960 |
|
they collected lots of data for reward |
|
|
|
00:39:13.200 --> 00:39:17.520 |
|
modeling. And reward modeling, what
|
|
|
00:39:15.960 --> 00:39:19.720 |
|
they're calling reward modeling,
|
|
|
00:39:17.520 --> 00:39:23.720 |
|
is basically preference modeling. So they
|
|
|
00:39:19.720 --> 00:39:26.359 |
|
have you know multiple outputs where the |
|
|
|
00:39:23.720 --> 00:39:28.359 |
|
two outputs are somehow ranked for |
|
|
|
00:39:26.359 --> 00:39:29.960 |
|
preferences and I talked about this when |
|
|
|
00:39:28.359 --> 00:39:31.839 |
|
I was talking about DPO in the |
|
|
|
00:39:29.960 --> 00:39:35.720 |
|
reinforcement learning class, for example.
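The core of that preference, or reward, modeling is a pairwise ranking loss; a minimal version looks like the sketch below. The Llama 2 paper, if I read it right, additionally adds a margin term reflecting how strongly annotators preferred one output.

```python
import torch.nn.functional as F

def preference_loss(reward_chosen, reward_rejected):
    # Pairwise (Bradley-Terry-style) loss: push the scalar reward of
    # the preferred output above that of the rejected output.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```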
|
|
|
00:39:31.839 --> 00:39:38.480 |
|
Um, a lot of these datasets actually exist:
|
|
|
00:39:35.720 --> 00:39:41.920 |
|
so there's, um, like, the Anthropic helpful
|
|
|
00:39:38.480 --> 00:39:45.599 |
|
and harmless datasets; uh, these OpenAI
|
|
|
00:39:41.920 --> 00:39:48.200 |
|
datasets, uh, from WebGPT; Stack Exchange,
|
|
|
00:39:45.599 --> 00:39:50.160 |
|
on Stack Exchange they have, um, helpful
|
|
|
00:39:48.200 --> 00:39:52.240 |
|
answers and not-helpful answers, so ones
|
|
|
00:39:50.160 --> 00:39:57.720 |
|
that you give thumbs up and thumbs down |
|
|
|
00:39:52.240 --> 00:39:59.839 |
|
to; and, um, the Stanford, uh, Human
|
|
|
00:39:57.720 --> 00:40:03.040 |
|
Preferences dataset, I forget exactly what SHP
|
|
|
00:39:59.839 --> 00:40:05.800 |
|
stands for, Stanford Human Preferences dataset.
|
|
|
00:40:03.040 --> 00:40:09.400 |
|
basically this is um where they tried to |
|
|
|
00:40:05.800 --> 00:40:11.599 |
|
find Reddit posts I think Reddit posts |
|
|
|
00:40:09.400 --> 00:40:13.720 |
|
that got more upvotes despite the fact |
|
|
|
00:40:11.599 --> 00:40:16.400 |
|
that they were posted later than a a |
|
|
|
00:40:13.720 --> 00:40:18.720 |
|
previous one so the idea is like usually |
|
|
|
00:40:16.400 --> 00:40:21.359 |
|
the first posts get more upvotes,
|
|
|
00:40:18.720 --> 00:40:22.880 |
|
so if you get more upvotes for a later
|
|
|
00:40:21.359 --> 00:40:25.240 |
|
post that indicates that you're probably |
|
|
|
00:40:22.880 --> 00:40:27.640 |
|
more valuable than the earlier post so |
|
|
|
00:40:25.240 --> 00:40:30.880 |
|
kind of clever uh clever way of creating |
|
|
|
00:40:27.640 --> 00:40:33.680 |
|
data um I'm actually not sure what the |
|
|
|
00:40:30.880 --> 00:40:36.240 |
|
synthetic GPT-J data was; I didn't look at that.
|
|
|
00:40:33.680 --> 00:40:37.640 |
|
And then, separately from that, um, Meta
|
|
|
00:40:36.240 --> 00:40:39.599 |
|
collected a very large amount of |
|
|
|
00:40:37.640 --> 00:40:42.400 |
|
internal data that they didn't release |
|
|
|
00:40:39.599 --> 00:40:44.319 |
|
uh, for tuning Llama, and they did this
|
|
|
00:40:42.400 --> 00:40:46.760 |
|
through various iterations so basically |
|
|
|
00:40:44.319 --> 00:40:49.839 |
|
what they did is they created a first |
|
|
|
00:40:46.760 --> 00:40:53.240 |
|
version of the model; um, they let it
|
|
|
00:40:49.839 --> 00:40:55.599 |
|
loose on users. They also did some, uh,
|
|
|
00:40:53.240 --> 00:40:56.960 |
|
some data collection with uh people who |
|
|
|
00:40:55.599 --> 00:40:59.720 |
|
were actually trying to break the model |
|
|
|
00:40:56.960 --> 00:41:01.200 |
|
and getting it to say bad things;
|
|
|
00:40:59.720 --> 00:41:02.760 |
|
they collected preference data from |
|
|
|
00:41:01.200 --> 00:41:04.599 |
|
these people and then they iterated over |
|
|
|
00:41:02.760 --> 00:41:06.960 |
|
and over again to collect more and more |
|
|
|
00:41:04.599 --> 00:41:09.720 |
|
of this data on various uh versions of |
|
|
|
00:41:06.960 --> 00:41:11.280 |
|
the model. So as the model gets
|
|
|
00:41:09.720 --> 00:41:14.079 |
|
better you know it's going to be harder |
|
|
|
00:41:11.280 --> 00:41:16.240 |
|
to collect this data but um they want to |
|
|
|
00:41:14.079 --> 00:41:17.920 |
|
try to improve the current model that |
|
|
|
00:41:16.240 --> 00:41:20.599 |
|
they |
|
|
|
00:41:17.920 --> 00:41:22.680 |
|
have. So the next step that they did was
|
|
|
00:41:20.599 --> 00:41:26.079 |
|
they trained a model to follow these |
|
|
|
00:41:22.680 --> 00:41:27.920 |
|
preferences and so they trained a model |
|
|
|
00:41:26.079 --> 00:41:32.560 |
|
that basically can predict human |
|
|
|
00:41:27.920 --> 00:41:35.119 |
|
preference given, um, given two, uh, language
|
|
|
00:41:32.560 --> 00:41:37.680 |
|
model outputs and this is a hard problem |
|
|
|
00:41:35.119 --> 00:41:40.440 |
|
right because these are language model |
|
|
|
00:41:37.680 --> 00:41:42.760 |
|
outputs and the language model thought |
|
|
|
00:41:40.440 --> 00:41:45.480 |
|
it was a good output regardless because |
|
|
|
00:41:42.760 --> 00:41:47.319 |
|
otherwise it wouldn't have sampled it. And so
|
|
|
00:41:45.480 --> 00:41:49.720 |
|
you need to distinguish between two very |
|
|
|
00:41:47.319 --> 00:41:52.240 |
|
fluent looking outputs where one is |
|
|
|
00:41:49.720 --> 00:41:56.880 |
|
preferred and one is not preferred so |
|
|
|
00:41:52.240 --> 00:41:58.359 |
|
even kind of strong models like um oh by |
|
|
|
00:41:56.880 --> 00:42:00.319 |
|
the way there are some open reward |
|
|
|
00:41:58.359 --> 00:42:02.119 |
|
models; like, this Open Assistant reward
|
|
|
00:42:00.319 --> 00:42:03.839 |
|
model is publicly available and you can |
|
|
|
00:42:02.119 --> 00:42:08.520 |
|
just go and download it if you want,
|
|
|
00:42:03.839 --> 00:42:10.920 |
|
um. But if you evaluate
|
|
|
00:42:08.520 --> 00:42:14.720 |
|
it on this Anthropic, uh, helpful and
|
|
|
00:42:10.920 --> 00:42:16.160 |
|
harmless dataset, um, it gets about 67
|
|
|
00:42:14.720 --> 00:42:18.760 |
|
or 68 |
|
|
|
00:42:16.160 --> 00:42:24.680 |
|
percent accuracy,
|
|
|
00:42:18.760 --> 00:42:27.200 |
|
um but if you evaluate it on um this |
|
|
|
00:42:24.680 --> 00:42:29.480 |
|
like, Open Assistant dataset; or, sorry, if
|
|
|
00:42:27.200 --> 00:42:33.359 |
|
you evaluate the public models including |
|
|
|
00:42:29.480 --> 00:42:36.079 |
|
GPT-4, on the Meta dataset, actually it's
|
|
|
00:42:33.359 --> 00:42:38.720 |
|
pretty hard, um, to distinguish
|
|
|
00:42:36.079 --> 00:42:41.319 |
|
between the two. And here they're
|
|
|
00:42:38.720 --> 00:42:44.720 |
|
evaluating both helpful and harmless or |
|
|
|
00:42:41.319 --> 00:42:47.400 |
|
helpful and safe. And the reason why is
|
|
|
00:42:44.720 --> 00:42:49.119 |
|
because like it's very easy to create a |
|
|
|
00:42:47.400 --> 00:42:51.119 |
|
very safe but not helpful at all model |
|
|
|
00:42:49.119 --> 00:42:53.640 |
|
by saying 'I don't know' all the time; it's
|
|
|
00:42:51.119 --> 00:42:55.480 |
|
relatively easy to create a
|
|
|
00:42:53.640 --> 00:42:57.880 |
|
helpful model that's very unsafe like it |
|
|
|
00:42:55.480 --> 00:42:59.480 |
|
will do anything you want and so they |
|
|
|
00:42:57.880 --> 00:43:01.599 |
|
want a balance between the two and they |
|
|
|
00:42:59.480 --> 00:43:03.480 |
|
evaluate them separately. They also
|
|
|
00:43:01.599 --> 00:43:05.280 |
|
created two different separate reward |
|
|
|
00:43:03.480 --> 00:43:07.880 |
|
models so they created one reward model |
|
|
|
00:43:05.280 --> 00:43:10.079 |
|
to distinguish safety and another reward |
|
|
|
00:43:07.880 --> 00:43:13.440 |
|
model to distinguish helpfulness and |
|
|
|
00:43:10.079 --> 00:43:14.760 |
|
they used these separately to, uh, to train
|
|
|
00:43:13.440 --> 00:43:17.359 |
|
the model and you can see that the |
|
|
|
00:43:14.760 --> 00:43:18.920 |
|
helpfulness model does a lot better on |
|
|
|
00:43:17.359 --> 00:43:20.640 |
|
discriminating between helpful things |
|
|
|
00:43:18.920 --> 00:43:22.319 |
|
and the safety model does a little
|
|
|
00:43:20.640 --> 00:43:23.760 |
|
better
|
|
|
00:43:22.319 --> 00:43:25.960 |
|
on discriminating between safe and |
|
|
|
00:43:23.760 --> 00:43:28.480 |
|
unsafe |
|
|
|
00:43:25.960 --> 00:43:29.920 |
|
things um |
|
|
|
00:43:28.480 --> 00:43:33.640 |
|
actually I didn't include this in the |
|
|
|
00:43:29.920 --> 00:43:35.400 |
|
slides but they also have an interesting |
|
|
|
00:43:33.640 --> 00:43:38.920 |
|
graph that |
|
|
|
00:43:35.400 --> 00:43:41.119 |
|
demonstrates um how good the reward |
|
|
|
00:43:38.920 --> 00:43:42.640 |
|
models are based on their size and it |
|
|
|
00:43:41.119 --> 00:43:44.359 |
|
turns out that this is a place where |
|
|
|
00:43:42.640 --> 00:43:47.559 |
|
it's really really important to use a |
|
|
|
00:43:44.359 --> 00:43:49.760 |
|
large and Powerful language model to |
|
|
|
00:43:47.559 --> 00:43:51.319 |
|
determine your reward because they |
|
|
|
00:43:49.760 --> 00:43:52.680 |
|
demonstrate that the 70 billion |
|
|
|
00:43:51.319 --> 00:43:55.280 |
|
parameter model that they used is |
|
|
|
00:43:52.680 --> 00:43:57.359 |
|
actually far better than the um than the |
|
|
|
00:43:55.280 --> 00:44:00.079 |
|
smaller models that they used at
|
|
|
00:43:57.359 --> 00:44:00.079 |
|
predicting this |
|
|
|
00:44:01.359 --> 00:44:07.760 |
|
reward so this is um a graph of their |
|
|
|
00:44:05.200 --> 00:44:10.480 |
|
incremental training process for safety |
|
|
|
00:44:07.760 --> 00:44:12.640 |
|
tuning and um you can see they have |
|
|
|
00:44:10.480 --> 00:44:15.920 |
|
their first supervised fine tuned model |
|
|
|
00:44:12.640 --> 00:44:19.440 |
|
this is with no um like RL or anything |
|
|
|
00:44:15.920 --> 00:44:22.240 |
|
like that; this is the second model,
|
|
|
00:44:19.440 --> 00:44:24.760 |
|
um and uh it improves a lot with respect |
|
|
|
00:44:22.240 --> 00:44:28.119 |
|
to helpfulness and then they do more and |
|
|
|
00:44:24.760 --> 00:44:30.400 |
|
more RLHF, uh, where they start with the
|
|
|
00:44:28.119 --> 00:44:33.200 |
|
like, supervised fine-tuned model, and
|
|
|
00:44:30.400 --> 00:44:36.079 |
|
gradually, um, add more reward data,
|
|
|
00:44:33.200 --> 00:44:38.200 |
|
train with a better reward model and get |
|
|
|
00:44:36.079 --> 00:44:39.800 |
|
to the end where they finally have the |
|
|
|
00:44:38.200 --> 00:44:41.359 |
|
best model, and I believe this is
|
|
|
00:44:39.800 --> 00:44:43.200 |
|
the one that they actually released so |
|
|
|
00:44:41.359 --> 00:44:45.000 |
|
you can see that they really put a lot |
|
|
|
00:44:43.200 --> 00:44:46.520 |
|
of effort into making this model you |
|
|
|
00:44:45.000 --> 00:44:49.800 |
|
know safe and that's one of the main |
|
|
|
00:44:46.520 --> 00:44:49.800 |
|
points of the paper that they had |
|
|
|
00:44:51.319 --> 00:44:57.920 |
|
here um another interesting part of the |
|
|
|
00:44:55.119 --> 00:45:02.319 |
|
Llama 2 paper is how they got it to
|
|
|
00:44:57.920 --> 00:45:05.280 |
|
follow chat instructions. And so, um, I
|
|
|
00:45:02.319 --> 00:45:06.640 |
|
think you're all familiar from the class |
|
|
|
00:45:05.280 --> 00:45:10.040 |
|
where I talked about |
|
|
|
00:45:06.640 --> 00:45:13.000 |
|
prompting, where basically they, um,
|
|
|
00:45:10.040 --> 00:45:16.119 |
|
prompt the language model using a system |
|
|
|
00:45:13.000 --> 00:45:20.359 |
|
message and um a user message and an |
|
|
|
00:45:16.119 --> 00:45:23.160 |
|
assistant message and so um the |
|
|
|
00:45:20.359 --> 00:45:25.000 |
|
characteristic of the system message is |
|
|
|
00:45:23.160 --> 00:45:28.240 |
|
this is something that you want to be |
|
|
|
00:45:25.000 --> 00:45:32.319 |
|
obeyed throughout the um entire |
|
|
|
00:45:28.240 --> 00:45:34.599 |
|
conversation right and |
|
|
|
00:45:32.319 --> 00:45:36.760 |
|
so in order to get this obeyed |
|
|
|
00:45:34.599 --> 00:45:38.079 |
|
throughout the entire conversation you |
|
|
|
00:45:36.760 --> 00:45:39.760 |
|
need a model that's good at paying |
|
|
|
00:45:38.079 --> 00:45:40.760 |
|
paying particular attention to
|
|
|
00:45:39.760 --> 00:45:43.160 |
|
the system |
|
|
|
00:45:40.760 --> 00:45:45.319 |
|
message um in this example I'm saying |
|
|
|
00:45:43.160 --> 00:45:46.880 |
|
'write in only emojis'; so, no matter
|
|
|
00:45:45.319 --> 00:45:48.720 |
|
how long this conversation gets you want |
|
|
|
00:45:46.880 --> 00:45:50.599 |
|
your model to continue writing in emojis |
|
|
|
00:45:48.720 --> 00:45:53.440 |
|
and models don't do this |
|
|
|
00:45:50.599 --> 00:45:56.559 |
|
spontaneously so what they did here and |
|
|
|
00:45:53.440 --> 00:45:58.359 |
|
I'm 90, 95% certain that my
|
|
|
00:45:56.559 --> 00:45:59.800 |
|
interpretation of the paper is correct; the
|
|
|
00:45:58.359 --> 00:46:03.319 |
|
paper is a little bit hard to understand |
|
|
|
00:45:59.800 --> 00:46:06.720 |
|
with respect to this. But, um, what
|
|
|
00:46:03.319 --> 00:46:10.480 |
|
I think they do is they take the
|
|
|
00:46:06.720 --> 00:46:13.200 |
|
system message and then they have a data |
|
|
|
00:46:10.480 --> 00:46:16.160 |
|
generation step where they |
|
|
|
00:46:13.200 --> 00:46:19.079 |
|
basically ask an existing model to write |
|
|
|
00:46:16.160 --> 00:46:21.400 |
|
in only emojis, and then say hello, and
|
|
|
00:46:19.079 --> 00:46:23.640 |
|
then the model generates something and |
|
|
|
00:46:21.400 --> 00:46:26.599 |
|
then they say again write in only emojis |
|
|
|
00:46:23.640 --> 00:46:28.440 |
|
how are you doing and then they uh they |
|
|
|
00:46:26.599 --> 00:46:29.599 |
|
generate it again and because this is so |
|
|
|
00:46:28.440 --> 00:46:32.680 |
|
close in the |
|
|
|
00:46:29.599 --> 00:46:35.440 |
|
context um the assistant basically will |
|
|
|
00:46:32.680 --> 00:46:36.760 |
|
you know, continue paying
|
|
|
00:46:35.440 --> 00:46:39.119 |
|
attention to these |
|
|
|
00:46:36.760 --> 00:46:40.599 |
|
directions um and then after that now |
|
|
|
00:46:39.119 --> 00:46:42.640 |
|
you have a data set that you can train |
|
|
|
00:46:40.599 --> 00:46:44.280 |
|
your model on you can train your model |
|
|
|
00:46:42.640 --> 00:46:46.880 |
|
on this generated data set that looks |
|
|
|
00:46:44.280 --> 00:46:49.079 |
|
like: write in only emojis, say hello, uh,
|
|
|
00:46:46.880 --> 00:46:50.480 |
|
how are you doing and stuff like this |
|
|
|
00:46:49.079 --> 00:46:54.040 |
|
and they try this with a whole bunch of |
|
|
|
00:46:50.480 --> 00:46:57.880 |
|
rules; it's like, um, write as if
|
|
|
00:46:54.040 --> 00:47:00.559 |
|
you're explaining to a 5-year-old or um |
|
|
|
00:46:57.880 --> 00:47:02.720 |
|
write in a very polite manner write in a |
|
|
|
00:47:00.559 --> 00:47:03.960 |
|
very informal manner, and stuff like that,
|
|
|
00:47:02.720 --> 00:47:06.480 |
|
so they generate a whole bunch of the |
|
|
|
00:47:03.960 --> 00:47:08.480 |
|
synthetic data and in doing this they |
|
|
|
00:47:06.480 --> 00:47:09.960 |
|
basically are able to train the model to |
|
|
|
00:47:08.480 --> 00:47:11.559 |
|
pay very close attention to the system |
|
|
|
00:47:09.960 --> 00:47:13.480 |
|
message because it needs to do so in |
|
|
|
00:47:11.559 --> 00:47:17.319 |
|
order to do better.
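Here is a sketch of that data-generation trick as I understand it; the Llama 2 paper calls the related method Ghost Attention (GAtt). The `generate` call is a hypothetical helper standing in for an existing chat model.

```python
instruction = "Write in only emojis."
user_turns = ["Say hello.", "How are you doing?"]

# 1) Generation time: prepend the instruction to *every* user turn so
#    the sampler keeps obeying it, since it stays close in context.
dialogue = []
for turn in user_turns:
    prompt = dialogue + [("user", f"{instruction} {turn}")]
    reply = generate(prompt)  # hypothetical helper wrapping a chat model
    dialogue += [("user", f"{instruction} {turn}"), ("assistant", reply)]

# 2) Training time: keep the instruction only as the system message, so
#    the model learns to obey it across the whole conversation.
train_example = [("system", instruction)] + [
    (role, text.replace(f"{instruction} ", "")) for role, text in dialogue
]
```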
|
|
|
00:47:13.480 --> 00:47:19.160 |
|
So, um, yeah, these are kind of the
|
|
|
00:47:17.319 --> 00:47:20.599 |
|
unique characteristics of Llama 2. I'd
|
|
|
00:47:19.160 --> 00:47:21.960 |
|
love to tell you more about its training |
|
|
|
00:47:20.599 --> 00:47:24.520 |
|
data and all that other stuff but they |
|
|
|
00:47:21.960 --> 00:47:26.240 |
|
didn't tell us uh like what they did |
|
|
|
00:47:24.520 --> 00:47:28.839 |
|
with respect to that so we'll just have |
|
|
|
00:47:26.240 --> 00:47:28.839 |
|
to infer |
|
|
|
00:47:28.960 --> 00:47:33.559 |
|
Cool, uh, any questions about
|
|
|
00:47:33.800 --> 00:47:39.160 |
|
this okay |
|
|
|
00:47:36.640 --> 00:47:40.839 |
|
Okay, so next I want to go into Mistral and
|
|
|
00:47:39.160 --> 00:47:42.599 |
|
Mixtral. This is going to be a little bit
|
|
|
00:47:40.839 --> 00:47:44.200 |
|
short because I've kind of covered some |
|
|
|
00:47:42.599 --> 00:47:45.720 |
|
of the stuff already and also they |
|
|
|
00:47:44.200 --> 00:47:48.240 |
|
didn't tell you very much about the |
|
|
|
00:47:45.720 --> 00:47:52.240 |
|
training process um basically it was |
|
|
|
00:47:48.240 --> 00:47:54.079 |
|
created by Mistral AI, the company, and
|
|
|
00:47:52.240 --> 00:47:56.839 |
|
it's a strong and somewhat multilingual |
|
|
|
00:47:54.079 --> 00:47:59.400 |
|
open language model um it has some |
|
|
|
00:47:56.839 --> 00:48:01.760 |
|
unique features like speed optimizations |
|
|
|
00:47:59.400 --> 00:48:03.200 |
|
um, including grouped-query attention
|
|
|
00:48:01.760 --> 00:48:06.200 |
|
and mixture of |
|
|
|
00:48:03.200 --> 00:48:06.200 |
|
experts |
|
|
|
00:48:06.599 --> 00:48:12.359 |
|
Um, unlike the other ones, it
|
|
|
00:48:10.599 --> 00:48:14.599 |
|
makes some actual architectural |
|
|
|
00:48:12.359 --> 00:48:17.599 |
|
modifications including sliding window |
|
|
|
00:48:14.599 --> 00:48:19.160 |
|
attention and um mixture of experts and |
|
|
|
00:48:17.599 --> 00:48:21.079 |
|
I I have actually talked about both of |
|
|
|
00:48:19.160 --> 00:48:23.640 |
|
them so I'll just very briefly go |
|
|
|
00:48:21.079 --> 00:48:26.040 |
|
through them here um the data as far as |
|
|
|
00:48:23.640 --> 00:48:28.559 |
|
I could tell was not disclosed uh very |
|
|
|
00:48:26.040 --> 00:48:30.480 |
|
completely but one important thing is it |
|
|
|
00:48:28.559 --> 00:48:32.160 |
|
includes English and European languages |
|
|
|
00:48:30.480 --> 00:48:35.520 |
|
so at least theoretically it should be |
|
|
|
00:48:32.160 --> 00:48:38.040 |
|
better than Llama at this. Um, one
|
|
|
00:48:35.520 --> 00:48:39.559 |
|
interesting thing about Llama is,
|
|
|
00:48:38.040 --> 00:48:40.680 |
|
if I remember correctly the actual |
|
|
|
00:48:39.559 --> 00:48:42.880 |
|
numbers are in the paper but it's |
|
|
|
00:48:40.680 --> 00:48:47.920 |
|
something like 85% |
|
|
|
00:48:42.880 --> 00:48:52.400 |
|
English um 8% code and then like |
|
|
|
00:48:47.920 --> 00:48:54.559 |
|
0.3% other languages; like, um, looking at
|
|
|
00:48:52.400 --> 00:48:57.280 |
|
all the other languages it's like 0.3% |
|
|
|
00:48:54.559 --> 00:48:59.680 |
|
so it's not very multilingual at all |
|
|
|
00:48:57.280 --> 00:49:01.319 |
|
um and they were really only aiming to |
|
|
|
00:48:59.680 --> 00:49:04.799 |
|
create a good uh English |
|
|
|
00:49:01.319 --> 00:49:06.200 |
|
model um also the training uh details |
|
|
|
00:49:04.799 --> 00:49:08.280 |
|
were not disclosed here like I wasn't |
|
|
|
00:49:06.200 --> 00:49:12.400 |
|
able to find the batch sizes, as far as I
|
|
|
00:49:08.280 --> 00:49:15.119 |
|
know. Um, so Mistral uses sliding window
|
|
|
00:49:12.400 --> 00:49:18.200 |
|
attention. Uh, in vanilla attention, basically
|
|
|
00:49:15.119 --> 00:49:21.440 |
|
you always attend to all of the previous |
|
|
|
00:49:18.200 --> 00:49:24.880 |
|
things in the sequence; what Mistral does
|
|
|
00:49:21.440 --> 00:49:28.119 |
|
is it attends to the previous N, um,
|
|
|
00:49:24.880 --> 00:49:30.559 |
|
tokens, where N is equal to 4096, and
|
|
|
00:49:28.119 --> 00:49:34.839 |
|
because of this uh what this means is |
|
|
|
00:49:30.559 --> 00:49:37.200 |
|
you can attend uh 4096 back and then in |
|
|
|
00:49:34.839 --> 00:49:39.280 |
|
the next layer you can attend 4096 back |
|
|
|
00:49:37.200 --> 00:49:41.599 |
|
then you can attend 4096 back so |
|
|
|
00:49:39.280 --> 00:49:44.400 |
|
basically as many layers as you have |
|
|
|
00:49:41.599 --> 00:49:47.240 |
|
times 4096 you can attend that many |
|
|
|
00:49:44.400 --> 00:49:49.000 |
|
tokens back for a minimal training |
|
|
|
00:49:47.240 --> 00:49:50.760 |
|
penalty because still the length of |
|
|
|
00:49:49.000 --> 00:49:55.079 |
|
attention for any particular token is |
|
|
|
00:49:50.760 --> 00:49:57.440 |
|
the same. So that's one feature.
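A quick back-of-the-envelope version of that argument; the layer count here is Mistral 7B's, if I recall correctly.

```python
W = 4096        # sliding-window size per layer
n_layers = 32   # Mistral 7B's depth, if I recall correctly
# Each layer lets information hop W tokens further back, so the
# theoretical receptive field grows linearly with depth:
print(W * n_layers)  # 131072 tokens of reach, at roughly constant cost
```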
|
|
|
00:49:55.079 --> 00:50:00.400 |
|
Oh, and then, yeah, sorry, the other
|
|
|
00:49:57.440 --> 00:50:01.920 |
|
feature is Mixtral is using, um, a
|
|
|
00:50:00.400 --> 00:50:05.920 |
|
mixture of experts like we talked about |
|
|
|
00:50:01.920 --> 00:50:07.720 |
|
in the previous class. So, um,
|
|
|
00:50:05.920 --> 00:50:09.520 |
|
these are very strong models; they're
|
|
|
00:50:07.720 --> 00:50:12.960 |
|
generally stronger than Llama at a lot
|
|
|
00:50:09.520 --> 00:50:15.480 |
|
of things. Um, and Mixtral is actually a lot
|
|
|
00:50:12.960 --> 00:50:18.200 |
|
faster and easier to deploy than Llama
|
|
|
00:50:15.480 --> 00:50:20.680 |
|
70B; uh, it's smaller, it only has 45
|
|
|
00:50:18.200 --> 00:50:23.680 |
|
billion parameters so it's definitely a |
|
|
|
00:50:20.680 --> 00:50:26.680 |
|
good choice if you want to use it. Yeah?
|
|
|
00:50:23.680 --> 00:50:26.680 |
|
[inaudible audience question]
|
|
|
00:50:28.720 --> 00:50:33.000 |
|
Yeah, so it's attending to 4096.
|
|
|
00:50:33.520 --> 00:50:39.559 |
|
So, the context size,
|
|
|
00:50:37.720 --> 00:50:43.240 |
|
typically like let's say you have a |
|
|
|
00:50:39.559 --> 00:50:45.240 |
|
block of 4096 tokens here typically that |
|
|
|
00:50:43.240 --> 00:50:48.079 |
|
means that the first token attends to |
|
|
|
00:50:45.240 --> 00:50:51.200 |
|
zero tokens the second token attends to |
|
|
|
00:50:48.079 --> 00:50:54.640 |
|
one token and the third token attends to |
|
|
|
00:50:51.200 --> 00:50:58.920 |
|
two tokens. Here, this is maybe a little
|
|
|
00:50:54.640 --> 00:51:01.680 |
|
bit, uh, misleading, I guess, but if your
|
|
|
00:50:58.920 --> 00:51:04.079 |
|
context length is 4096 you actually get |
|
|
|
00:51:01.680 --> 00:51:07.760 |
|
a block of twice that size; you get a
|
|
|
00:51:04.079 --> 00:51:10.960 |
|
block of 8192 tokens and so the first |
|
|
|
00:51:07.760 --> 00:51:15.839 |
|
one attends to all of the previous |
|
|
|
00:51:10.960 --> 00:51:17.760 |
|
ones. So the, uh, sorry, so
|
|
|
00:51:15.839 --> 00:51:19.960 |
|
the |
|
|
|
00:51:17.760 --> 00:51:22.280 |
|
um so the |
|
|
|
00:51:19.960 --> 00:51:26.760 |
|
4097th
|
|
|
00:51:22.280 --> 00:51:29.280 |
|
token attends
|
|
|
00:51:26.760 --> 00:51:32.280 |
|
back to um all from |
|
|
|
00:51:29.280 --> 00:51:36.319 |
|
[Music] |
|
|
|
00:51:32.280 --> 00:51:36.319 |
|
to sorry either |
|
|
|
00:51:41.160 --> 00:51:46.880 |
|
1 to 4096, and
|
|
|
00:51:43.839 --> 00:51:50.520 |
|
so because of that, if you move on to the very
|
|
|
00:51:46.880 --> 00:51:50.520 |
|
end, then you have the 8192nd token
|
|
|
00:51:50.880 --> 00:51:55.359 |
|
attending from, like, 4096.
|
|
|
00:51:58.480 --> 00:52:01.920 |
|
and so like every token is always |
|
|
|
00:52:00.319 --> 00:52:05.280 |
|
attending to the previous one and that |
|
|
|
00:52:01.920 --> 00:52:08.200 |
|
allows you to um to kind of attend to |
|
|
|
00:52:05.280 --> 00:52:08.200 |
|
things in the previous |
|
|
|
00:52:11.760 --> 00:52:18.520 |
|
block. Uh, no, it's big, so that allows them to
|
|
|
00:52:15.000 --> 00:52:22.000 |
|
attend a very large |
|
|
|
00:52:18.520 --> 00:52:24.599 |
|
amount. Cool, um, so the next one I'd like to
|
|
|
00:52:22.000 --> 00:52:26.559 |
|
talk about is Qwen. This is one that, in
|
|
|
00:52:24.599 --> 00:52:29.040 |
|
the US at least, people maybe pay a
|
|
|
00:52:26.559 --> 00:52:33.000 |
|
little bit less attention to um but it |
|
|
|
00:52:29.040 --> 00:52:35.680 |
|
was created by Alibaba and it's a strong |
|
|
|
00:52:33.000 --> 00:52:37.559 |
|
um multilingual model especially English |
|
|
|
00:52:35.680 --> 00:52:39.119 |
|
and Chinese but even uh in other |
|
|
|
00:52:37.559 --> 00:52:41.000 |
|
languages as |
|
|
|
00:52:39.119 --> 00:52:43.480 |
|
well |
|
|
|
00:52:41.000 --> 00:52:45.160 |
|
and uh one of its defining |
|
|
|
00:52:43.480 --> 00:52:48.240 |
|
characteristics other than just being a |
|
|
|
00:52:45.160 --> 00:52:50.160 |
|
strong model overall is that it has a
|
|
|
00:52:48.240 --> 00:52:51.799 |
|
large vocabulary for multilingual |
|
|
|
00:52:50.160 --> 00:52:56.000 |
|
support and strong |
|
|
|
00:52:51.799 --> 00:52:58.760 |
|
performance um it comes in several sizes |
|
|
|
00:52:56.000 --> 00:53:01.880 |
|
um I |
|
|
|
00:52:58.760 --> 00:53:04.799 |
|
believe uh there's a 7B version and then |
|
|
|
00:53:01.880 --> 00:53:10.119 |
|
there's also like a large like 70b |
|
|
|
00:53:04.799 --> 00:53:13.480 |
|
version, 72B I think. And it's using very
|
|
|
00:53:10.119 --> 00:53:15.319 |
|
standard, uh, architecture choices; the only
|
|
|
00:53:13.480 --> 00:53:18.119 |
|
small difference it has is it has a bias |
|
|
|
00:53:15.319 --> 00:53:19.920 |
|
in the attention layer, which doesn't
|
|
|
00:53:18.119 --> 00:53:23.559 |
|
uh exist in |
|
|
|
00:53:19.920 --> 00:53:25.880 |
|
Llama.
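That difference is essentially just a flag on the projection layers; a sketch in PyTorch, with an illustrative hidden size.

```python
import torch.nn as nn

d_model = 4096  # illustrative hidden size

# Llama-style attention projections: no bias term.
q_proj_llama = nn.Linear(d_model, d_model, bias=False)

# Qwen-style: the attention (QKV) projections carry a bias.
q_proj_qwen = nn.Linear(d_model, d_model, bias=True)
```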
|
|
|
00:53:23.559 --> 00:53:28.920 |
|
Um, an important thing is it's actually trained on multilingual data,
|
|
|
00:53:25.880 --> 00:53:32.720 |
|
and they use a large vocabulary um they |
|
|
|
00:53:28.920 --> 00:53:33.839 |
|
use a vocabulary of 150k in contrast to |
|
|
|
00:53:32.720 --> 00:53:36.599 |
|
Llama's
|
|
|
00:53:33.839 --> 00:53:39.839 |
|
32k and that allows it to handle |
|
|
|
00:53:36.599 --> 00:53:41.720 |
|
multilingual uh data relatively |
|
|
|
00:53:39.839 --> 00:53:47.079 |
|
well |
|
|
|
00:53:41.720 --> 00:53:49.359 |
|
and um we have the three uh similar you |
|
|
|
00:53:47.079 --> 00:53:52.760 |
|
know training regimes so overall it's |
|
|
|
00:53:49.359 --> 00:53:55.559 |
|
not very different from, uh,
|
|
|
00:53:52.760 --> 00:53:57.040 |
|
Llama. What might be different is data
|
|
|
00:53:55.559 --> 00:53:59.319 |
|
engineering |
|
|
|
00:53:57.040 --> 00:54:00.680 |
|
uh and actually I I expect the data |
|
|
|
00:53:59.319 --> 00:54:02.760 |
|
engineering part is a bit different |
|
|
|
00:54:00.680 --> 00:54:06.400 |
|
because overall it's a bit stronger than |
|
|
|
00:54:02.760 --> 00:54:09.920 |
|
Llama 2. Um, and I think, uh, that has to
|
|
|
00:54:06.400 --> 00:54:12.119 |
|
do with data in various areas. One
|
|
|
00:54:09.920 --> 00:54:16.920 |
|
interesting piece from the paper that |
|
|
|
00:54:12.119 --> 00:54:18.280 |
|
they have is uh if we think all the way |
|
|
|
00:54:16.920 --> 00:54:21.720 |
|
back to when we talked about word |
|
|
|
00:54:18.280 --> 00:54:23.839 |
|
subword models and word tokenization we |
|
|
|
00:54:21.720 --> 00:54:27.760 |
|
remember that subword models split up |
|
|
|
00:54:23.839 --> 00:54:29.920 |
|
the input and they split up the input uh |
|
|
|
00:54:27.760 --> 00:54:31.799 |
|
so that frequent words get longer
|
|
|
00:54:29.920 --> 00:54:34.520 |
|
tokens and infrequent words get
|
|
|
00:54:31.799 --> 00:54:36.359 |
|
shorter pieces. So one of the problems,
|
|
|
00:54:34.520 --> 00:54:40.559 |
|
as I mentioned a long time ago when we |
|
|
|
00:54:36.359 --> 00:54:42.040 |
|
covered this topic is this causes issues |
|
|
|
00:54:40.559 --> 00:54:43.000 |
|
if you're doing multilingual things |
|
|
|
00:54:42.040 --> 00:54:44.880 |
|
because if you have very little |
|
|
|
00:54:43.000 --> 00:54:47.520 |
|
multilingual data in your training data |
|
|
|
00:54:44.880 --> 00:54:49.040 |
|
for the subword tokenization model um it |
|
|
|
00:54:47.520 --> 00:54:51.559 |
|
will end up splitting all of the words |
|
|
|
00:54:49.040 --> 00:54:55.680 |
|
into basically characters or even bytes |
|
|
|
00:54:51.559 --> 00:54:59.040 |
|
so what this shows here is this is |
|
|
|
00:54:55.680 --> 00:55:00.960 |
|
comparing the amount of subword
|
|
|
00:54:59.040 --> 00:55:03.040 |
|
tokenization that happens according to |
|
|
|
00:55:00.960 --> 00:55:05.520 |
|
each of the LLMs'
|
|
|
00:55:03.040 --> 00:55:08.599 |
|
tokenizers with another explicitly |
|
|
|
00:55:05.520 --> 00:55:10.799 |
|
multilingual model, XLM-R. So XLM-R is kind
|
|
|
00:55:08.599 --> 00:55:12.760 |
|
of their Baseline here with respect to |
|
|
|
00:55:10.799 --> 00:55:16.319 |
|
how much it tokenizes each |
|
|
|
00:55:12.760 --> 00:55:19.079 |
|
language and on the very left we have |
|
|
|
00:55:16.319 --> 00:55:22.839 |
|
Llama, and so what we can see is that
|
|
|
00:55:19.079 --> 00:55:26.599 |
|
Llama tokenizes Thai
|
|
|
00:55:22.839 --> 00:55:28.640 |
|
3.7 times as much as XLM-R does, so
|
|
|
00:55:26.599 --> 00:55:30.359 |
|
it's basically splitting Thai up
|
|
|
00:55:28.640 --> 00:55:32.480 |
|
into little tiny bits which makes it |
|
|
|
00:55:30.359 --> 00:55:35.440 |
|
very expensive and inefficient to
|
|
|
00:55:32.480 --> 00:55:38.039 |
|
process. Uh, let's find some other
|
|
|
00:55:35.440 --> 00:55:41.599 |
|
languages that we care about: we have
|
|
|
00:55:38.039 --> 00:55:43.760 |
|
Hebrew, Arabic,
|
|
|
00:55:41.599 --> 00:55:47.079 |
|
Korean uh |
|
|
|
00:55:43.760 --> 00:55:49.559 |
|
Japanese, uh, Chinese. So all of these, you
|
|
|
00:55:47.079 --> 00:55:52.319 |
|
can see, are split up into many,
|
|
|
00:55:49.559 --> 00:55:55.440 |
|
many different chunks by |
|
|
|
00:55:52.319 --> 00:55:56.799 |
|
Llama. And then we have a few other
|
|
|
00:55:55.440 --> 00:55:58.359 |
|
language models in the middle and then |
|
|
|
00:55:56.799 --> 00:56:01.440 |
|
we have Qwen on the right side, and what
|
|
|
00:55:58.359 --> 00:56:04.039 |
|
we can see is basically it's pretty |
|
|
|
00:56:01.440 --> 00:56:06.400 |
|
comparable to XLM-R, maybe a little bit
|
|
|
00:56:04.039 --> 00:56:09.520 |
|
more than XLM-R, but pretty comparable to
|
|
|
00:56:06.400 --> 00:56:12.839 |
|
XLM-R on many languages. And then on code,
|
|
|
00:56:09.520 --> 00:56:15.000 |
|
it actually um splits up code much less |
|
|
|
00:56:12.839 --> 00:56:17.039 |
|
so we can see that you know its |
|
|
|
00:56:15.000 --> 00:56:18.960 |
|
tokenizer is heavily |
|
|
|
00:56:17.039 --> 00:56:22.640 |
|
multilingual.
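You can reproduce this kind of comparison yourself with Hugging Face tokenizers; a sketch, where the checkpoint names are illustrative and some, like Llama's, require access approval.

```python
from transformers import AutoTokenizer

text = "สวัสดีครับ"  # "hello" in Thai
for name in ["meta-llama/Llama-2-7b-hf", "Qwen/Qwen-7B", "xlm-roberta-base"]:
    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    print(name, len(tok.tokenize(text)))  # fewer tokens = less fragmentation
```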
|
|
|
00:56:18.960 --> 00:56:24.640 |
|
Um, another thing I'd like to point out is, I'm focusing
|
|
|
00:56:22.640 --> 00:56:27.000 |
|
on this particular language model for a |
|
|
|
00:56:24.640 --> 00:56:29.799 |
|
number of reasons |
|
|
|
00:56:27.000 --> 00:56:32.440 |
|
um the first one is multilinguality and |
|
|
|
00:56:29.799 --> 00:56:36.599 |
|
I like multilinguality; I hope other
|
|
|
00:56:32.440 --> 00:56:39.039 |
|
people like multilinguality too um but |
|
|
|
00:56:36.599 --> 00:56:43.799 |
|
another motivation is just it has quite |
|
|
|
00:56:39.039 --> 00:56:45.680 |
|
strong performance and it's uh topping |
|
|
|
00:56:43.799 --> 00:56:47.960 |
|
the leaderboards in several
|
|
|
00:56:45.680 --> 00:56:52.160 |
|
different uh |
|
|
|
00:56:47.960 --> 00:56:57.640 |
|
places. So if we look at the Open LLM
|
|
|
00:56:52.160 --> 00:56:57.640 |
|
Leaderboard, um, at least recently,
|
|
|
00:56:59.480 --> 00:57:07.440 |
|
the top model was a fine-tuned model by Abacus
|
|
|
00:57:04.240 --> 00:57:09.440 |
|
AI, which was, uh, originally based on Qwen,
|
|
|
00:57:07.440 --> 00:57:11.079 |
|
so you can see that this is like a |
|
|
|
00:57:09.440 --> 00:57:13.920 |
|
strong foundation model that lots
|
|
|
00:57:11.079 --> 00:57:16.440 |
|
of people are using for fine-tuning things. So,
|
|
|
00:57:13.920 --> 00:57:18.960 |
|
um I would definitely uh encourage you |
|
|
|
00:57:16.440 --> 00:57:20.240 |
|
to take a look at that too. Of course,
|
|
|
00:57:18.960 --> 00:57:22.520 |
|
there's many many different models that |
|
|
|
00:57:20.240 --> 00:57:24.880 |
|
I didn't cover because if I covered all |
|
|
|
00:57:22.520 --> 00:57:26.839 |
|
of the general purpose models then we'd |
|
|
|
00:57:24.880 --> 00:57:29.599 |
|
be here all day but um |
|
|
|
00:57:26.839 --> 00:57:31.200 |
|
that's, uh, a first start. So next I want to
|
|
|
00:57:29.599 --> 00:57:33.200 |
|
go into other kind of special purpose |
|
|
|
00:57:31.200 --> 00:57:36.839 |
|
models but are there any questions about |
|
|
|
00:57:33.200 --> 00:57:36.839 |
|
um about the things I covered so |
|
|
|
00:57:38.000 --> 00:57:44.079 |
|
far? Cool, okay.
|
|
|
00:57:41.440 --> 00:57:47.960 |
|
um so next I'd like to go into other |
|
|
|
00:57:44.079 --> 00:57:49.760 |
|
models um first is code models so code |
|
|
|
00:57:47.960 --> 00:57:52.680 |
|
models are models that were specifically |
|
|
|
00:57:49.760 --> 00:57:55.280 |
|
trained on code. Actually, right now, every
|
|
|
00:57:52.680 --> 00:57:56.960 |
|
model is a code model um like nobody |
|
|
|
00:57:55.280 --> 00:57:58.799 |
|
pre-trains a large language model, is
|
|
|
00:57:56.960 --> 00:58:01.720 |
|
serious about it and doesn't train on |
|
|
|
00:57:58.799 --> 00:58:04.680 |
|
code because um generating code is a |
|
|
|
00:58:01.720 --> 00:58:06.680 |
|
huge use case and also um some work has |
|
|
|
00:58:04.680 --> 00:58:08.880 |
|
demonstrated that training on code
|
|
|
00:58:06.680 --> 00:58:13.720 |
|
seems to improve reasoning abilities of |
|
|
|
00:58:08.880 --> 00:58:16.160 |
|
language models as well um but uh these |
|
|
|
00:58:13.720 --> 00:58:19.319 |
|
models were very heavily trained on code |
|
|
|
00:58:16.160 --> 00:58:22.400 |
|
So, um, we have StarCoder 2; this is a
|
|
|
00:58:19.319 --> 00:58:24.079 |
|
very recent, uh, entry. This is a fully
|
|
|
00:58:22.400 --> 00:58:26.720 |
|
open model so you can see the data it |
|
|
|
00:58:24.079 --> 00:58:29.039 |
|
was trained on; um, all the training
|
|
|
00:58:26.720 --> 00:58:31.640 |
|
details are released and other stuff |
|
|
|
00:58:29.039 --> 00:58:36.760 |
|
like that so this is kind of in the |
|
|
|
00:58:31.640 --> 00:58:38.599 |
|
Pythia, you know, OLMo category, but it's
|
|
|
00:58:36.760 --> 00:58:41.240 |
|
uh, it's actually a very strong
|
|
|
00:58:38.599 --> 00:58:42.839 |
|
model, a very good model, so it's, uh, a good
|
|
|
00:58:41.240 --> 00:58:46.480 |
|
one to know |
|
|
|
00:58:42.839 --> 00:58:48.680 |
|
about. Um, separately, there's Code Llama
|
|
|
00:58:46.480 --> 00:58:52.520 |
|
by Meta, which is a code adaptation of
|
|
|
00:58:48.680 --> 00:58:54.799 |
|
Llama, and, uh, it also gets quite
|
|
|
00:58:52.520 --> 00:58:57.720 |
|
good performance. There's also another
|
|
|
00:58:54.799 --> 00:58:59.760 |
|
model, uh, called DeepSeek Coder. I would say
|
|
|
00:58:57.720 --> 00:59:01.720 |
|
all three of these are topping some |
|
|
|
00:58:59.760 --> 00:59:03.119 |
|
variety of leaderboard, where DeepSeek
|
|
|
00:59:01.720 --> 00:59:04.640 |
|
maybe is topping a few more leader |
|
|
|
00:59:03.119 --> 00:59:06.319 |
|
boards than the other ones are but all |
|
|
|
00:59:04.640 --> 00:59:09.960 |
|
of them are very competitive and might |
|
|
|
00:59:06.319 --> 00:59:11.680 |
|
be the best in class for code things um |
|
|
|
00:59:09.960 --> 00:59:13.119 |
|
I'm not talking very much about these |
|
|
|
00:59:11.680 --> 00:59:15.119 |
|
because we're going to have a a class on |
|
|
|
00:59:13.119 --> 00:59:18.280 |
|
code generation and code related things |
|
|
|
00:59:15.119 --> 00:59:21.000 |
|
later so um I'm not going to go into a |
|
|
|
00:59:18.280 --> 00:59:21.000 |
|
lot of detail |
|
|
|
00:59:21.319 --> 00:59:27.839 |
|
here. Another thing is math models,
|
|
|
00:59:24.680 --> 00:59:31.960 |
|
and so like one thing is large language |
|
|
|
00:59:27.839 --> 00:59:35.480 |
|
models are not particularly good at math |
|
|
|
00:59:31.960 --> 00:59:38.839 |
|
um so there are quite a few models that |
|
|
|
00:59:35.480 --> 00:59:40.200 |
|
were trained specifically for math um |
|
|
|
00:59:38.839 --> 00:59:45.160 |
|
the first one is |
|
|
|
00:59:40.200 --> 00:59:47.280 |
|
Llemma. Um, yes, that is a pun, um, for like
|
|
|
00:59:45.160 --> 00:59:49.920 |
|
llama from
|
|
|
00:59:47.280 --> 00:59:51.160 |
|
math. I'm not responsible for it,
|
|
|
00:59:49.920 --> 00:59:55.240 |
|
but I I thought it was kind of funny |
|
|
|
00:59:51.160 --> 00:59:56.920 |
|
anyway. Um, so, uh, this was by EleutherAI, so
|
|
|
00:59:55.240 --> 01:00:00.359 |
|
because this was by Eleuther, again, this is
|
|
|
00:59:56.920 --> 01:00:03.640 |
|
a fully open model all the data is open |
|
|
|
01:00:00.359 --> 01:00:05.960 |
|
um everything is known about it um also |
|
|
|
01:00:03.640 --> 01:00:08.480 |
|
uh, our very own Sean Welleck was one of
|
|
|
01:00:05.960 --> 01:00:10.559 |
|
the contributors to it uh so if you want |
|
|
|
01:00:08.480 --> 01:00:13.839 |
|
to know more about Llemma you can go bother
|
|
|
01:00:10.559 --> 01:00:17.440 |
|
Sean so uh that's another thing that I |
|
|
|
01:00:13.839 --> 01:00:19.240 |
|
should mention. Um, another thing is Deep-
|
|
|
01:00:17.440 --> 01:00:20.839 |
|
Seek, who made the DeepSeek Coder model,
|
|
|
01:00:19.240 --> 01:00:23.480 |
|
has also created a very strong math |
|
|
|
01:00:20.839 --> 01:00:26.200 |
|
model, uh, that's competitive with GPT-4 on
|
|
|
01:00:23.480 --> 01:00:28.160 |
|
a lot of math things uh basically the |
|
|
|
01:00:26.200 --> 01:00:30.480 |
|
way they did this was by,
|
|
|
01:00:28.160 --> 01:00:32.559 |
|
um training a classifier to try to |
|
|
|
01:00:30.480 --> 01:00:34.640 |
|
identify data on the web that is related |
|
|
|
01:00:32.559 --> 01:00:37.599 |
|
to math and scraping all of that data |
|
|
|
01:00:34.640 --> 01:00:39.960 |
|
and fine-tuning on it. So, um, you can get
|
|
|
01:00:37.599 --> 01:00:42.280 |
|
gold-standard data from, like, Proof Pile
|
|
|
01:00:39.960 --> 01:00:44.359 |
|
and a whole bunch of other sources and |
|
|
|
01:00:42.280 --> 01:00:46.200 |
|
so they trained a, like, math-or-not
|
|
|
01:00:44.359 --> 01:00:48.400 |
|
classifier, and harvested a lot of
|
|
|
01:00:46.200 --> 01:00:52.400 |
|
math-related
|
|
|
01:00:48.400 --> 01:00:52.400 |
|
data. Yeah?
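The harvesting idea is simple to sketch; reportedly they used a fastText classifier seeded with known math text, but the same idea in scikit-learn looks like this, with made-up seed examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up seed sets; the real pipeline seeds from gold-standard
# math corpora and iterates over web-scale data.
seed_math = ["Let x be a real number such that x^2 = 2.",
             "We prove the theorem by induction on n."]
seed_other = ["Top ten travel destinations for the summer.",
              "The recipe calls for two cups of flour."]

vec = TfidfVectorizer()
X = vec.fit_transform(seed_math + seed_other)
clf = LogisticRegression().fit(X, [1, 1, 0, 0])

def looks_like_math(page_text: str) -> bool:
    # Keep a web page if the classifier thinks it's math-related.
    return clf.predict_proba(vec.transform([page_text]))[0, 1] > 0.5
```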
|
|
|
01:00:59.880 --> 01:01:04.920 |
|
It's mostly datasets. Um, I
|
|
|
01:01:03.599 --> 01:01:07.119 |
|
actually might be talking a little bit |
|
|
|
01:01:04.920 --> 01:01:10.039 |
|
more about these in the reasoning class |
|
|
|
01:01:07.119 --> 01:01:11.799 |
|
and I did a lot of uh I did a lot of |
|
|
|
01:01:10.039 --> 01:01:13.599 |
|
prep to create these slides and actually |
|
|
|
01:01:11.799 --> 01:01:15.680 |
|
ran out of time to do the math stuff so |
|
|
|
01:01:13.599 --> 01:01:17.200 |
|
I might talk about it later um but I |
|
|
|
01:01:15.680 --> 01:01:18.480 |
|
don't think they're really doing a lot |
|
|
|
01:01:17.200 --> 01:01:21.799 |
|
of things like you could think of |
|
|
|
01:01:18.480 --> 01:01:23.440 |
|
obvious things like doing RL or RLHF based
|
|
|
01:01:21.799 --> 01:01:26.799 |
|
on like whether it gets the answer right |
|
|
|
01:01:23.440 --> 01:01:28.559 |
|
or not in the end um as far as I know |
|
|
|
01:01:26.799 --> 01:01:30.359 |
|
that's not a big ingredient here but |
|
|
|
01:01:28.559 --> 01:01:31.920 |
|
I'll be more sure of that when we talk |
|
|
|
01:01:30.359 --> 01:01:37.599 |
|
about it |
|
|
|
01:01:31.920 --> 01:01:39.559 |
|
later um cool and a final one uh it's |
|
|
|
01:01:37.599 --> 01:01:43.200 |
|
not a 'sci' model, it's a science model,
|
|
|
01:01:39.559 --> 01:01:45.920 |
|
sorry for the typo um but uh this model |
|
|
|
01:01:43.200 --> 01:01:49.160 |
|
Galactica um was a model for science |
|
|
|
01:01:45.920 --> 01:01:51.799 |
|
that was trained by Meta.
|
|
|
01:01:49.160 --> 01:01:54.359 |
|
um does anyone remember this model or |
|
|
|
01:01:51.799 --> 01:01:58.079 |
|
was anybody around when this model came |
|
|
|
01:01:54.359 --> 01:01:59.640 |
|
out? No? There was a big, uh, a big PR
|
|
|
01:01:58.079 --> 01:02:01.160 |
|
disaster for Meta when they released
|
|
|
01:01:59.640 --> 01:02:03.480 |
|
this model because they said this is a |
|
|
|
01:02:01.160 --> 01:02:05.520 |
|
great model for math, use it in, in
|
|
|
01:02:03.480 --> 01:02:08.599 |
|
writing your science paper; sorry, this is
|
|
|
01:02:05.520 --> 01:02:10.480 |
|
a great model for science, try using
|
|
|
01:02:08.599 --> 01:02:12.640 |
|
it in your science papers and this came |
|
|
|
01:02:10.480 --> 01:02:14.839 |
|
out about two years ago and two years |
|
|
|
01:02:12.640 --> 01:02:16.640 |
|
ago language models hallucinated all the |
|
|
|
01:02:14.839 --> 01:02:19.279 |
|
time and came up with false scientific |
|
|
|
01:02:16.640 --> 01:02:22.039 |
|
facts and stuff and so basically um a |
|
|
|
01:02:19.279 --> 01:02:25.680 |
|
lot of people kind of bashed this model |
|
|
|
01:02:22.039 --> 01:02:27.440 |
|
uh in my mind kind of unfairly because |
|
|
|
01:02:25.680 --> 01:02:31.200 |
|
they actually have a lot of really |
|
|
|
01:02:27.440 --> 01:02:32.960 |
|
interesting things in this paper um one |
|
|
|
01:02:31.200 --> 01:02:34.720 |
|
interesting thing in this paper is they |
|
|
|
01:02:32.960 --> 01:02:37.000 |
|
tried to create a general purpose model |
|
|
|
01:02:34.720 --> 01:02:38.960 |
|
for science that's able to understand |
|
|
|
01:02:37.000 --> 01:02:41.960 |
|
not only text but also various |
|
|
|
01:02:38.960 --> 01:02:47.720 |
|
modalities of scientific data and so |
|
|
|
01:02:41.960 --> 01:02:51.000 |
|
that includes text, it includes LaTeX, um,
|
|
|
01:02:47.720 --> 01:02:53.799 |
|
you know, equations; it includes code; but
|
|
|
01:02:51.000 --> 01:02:58.559 |
|
it also included things like molecular |
|
|
|
01:02:53.799 --> 01:03:01.799 |
|
structures and, uh, like proteins and DNA
|
|
|
01:02:58.559 --> 01:03:04.160 |
|
and stuff like this so they tried to |
|
|
|
01:03:01.799 --> 01:03:06.160 |
|
like model biology and other things like |
|
|
|
01:03:04.160 --> 01:03:08.079 |
|
this as well. |
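(For a feel of how those extra modalities fit into one token stream: the Galactica paper describes wrapping non-text sequences in special marker tokens, e.g. [START_SMILES]...[END_SMILES]. The helper below is an illustrative sketch of that idea, not code from the paper:)

```python
def wrap_modality(sequence: str, modality: str) -> str:
    """Wrap a scientific sequence in special marker tokens, in the
    spirit of Galactica's [START_SMILES]...[END_SMILES] markers.
    The exact marker names here are illustrative."""
    tag = modality.upper()
    return f"[START_{tag}]{sequence}[END_{tag}]"

# A training document can then interleave prose with other modalities:
doc = ("Aspirin has the structure "
       + wrap_modality("CC(=O)OC1=CC=CC=C1C(=O)O", "smiles")
       + " and was first synthesized in 1897.")
print(doc)
```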
|
|
|
01:03:06.160 --> 01:03:10.640 |
|
So I think it's kind of too bad that this model got a |
|
|
|
01:03:08.079 --> 01:03:12.400 |
|
bad rap, because I really like the |
|
|
|
01:03:10.640 --> 01:03:14.839 |
|
work that went into it, and I |
|
|
|
01:03:12.400 --> 01:03:16.359 |
|
hope we'll see more of this um because |
|
|
|
01:03:14.839 --> 01:03:17.640 |
|
language models for science is a really |
|
|
|
01:03:16.359 --> 01:03:19.880 |
|
big topic that a lot of people are |
|
|
|
01:03:17.640 --> 01:03:19.880 |
|
thinking |
|
|
|
01:03:20.760 --> 01:03:24.240 |
|
about |
|
|
|
01:03:22.400 --> 01:03:26.440 |
|
cool |
|
|
|
01:03:24.240 --> 01:03:28.000 |
|
um one thing I didn't talk about is |
|
|
|
01:03:26.440 --> 01:03:29.880 |
|
multimodal models but I hope to talk |
|
|
|
01:03:28.000 --> 01:03:32.440 |
|
about multimodal models in a future |
|
|
|
01:03:29.880 --> 01:03:33.359 |
|
class, so I'll talk more about |
|
|
|
01:03:32.440 --> 01:03:38.680 |
|
that |
|
|
|
01:03:33.359 --> 01:03:41.640 |
|
soon. The next thing is closed models. |
|
|
|
01:03:38.680 --> 01:03:44.480 |
|
So with closed models, we don't know a whole lot |
|
|
|
01:03:41.640 --> 01:03:46.880 |
|
about them; most of what we know about |
|
|
|
01:03:44.480 --> 01:03:49.480 |
|
their training data and other |
|
|
|
01:03:46.880 --> 01:03:52.359 |
|
things like that is |
|
|
|
01:03:49.480 --> 01:03:54.720 |
|
conjecture. So the |
|
|
|
01:03:52.359 --> 01:03:57.839 |
|
standard format for |
|
|
|
01:03:54.720 --> 01:03:59.599 |
|
releasing a closed model, or not |
|
|
|
01:03:57.839 --> 01:04:02.160 |
|
releasing but you know publicizing a |
|
|
|
01:03:59.599 --> 01:04:04.279 |
|
closed model is people will write a blog |
|
|
|
01:04:02.160 --> 01:04:05.960 |
|
post and they'll write a paper and |
|
|
|
01:04:04.279 --> 01:04:07.720 |
|
generally what the paper does is it only |
|
|
|
01:04:05.960 --> 01:04:09.559 |
|
talks about evaluation: it only talks |
|
|
|
01:04:07.720 --> 01:04:12.039 |
|
about like how good the model is on |
|
|
|
01:04:09.559 --> 01:04:13.799 |
|
various things how safe it is how they |
|
|
|
01:04:12.039 --> 01:04:16.279 |
|
put a lot of effort into red teaming the |
|
|
|
01:04:13.799 --> 01:04:17.680 |
|
model uh so that it doesn't do bad |
|
|
|
01:04:16.279 --> 01:04:18.839 |
|
things and stuff like that and it tells |
|
|
|
01:04:17.680 --> 01:04:21.119 |
|
you nothing about how they actually |
|
|
|
01:04:18.839 --> 01:04:23.279 |
|
built the model. So mostly what I |
|
|
|
01:04:21.119 --> 01:04:26.279 |
|
can talk about is capabilities, as |
|
|
|
01:04:23.279 --> 01:04:28.520 |
|
opposed to |
|
|
|
01:04:26.279 --> 01:04:32.440 |
|
what actually went |
|
|
|
01:04:28.520 --> 01:04:35.319 |
|
into the |
|
|
|
01:04:32.440 --> 01:04:38.920 |
|
model. So there's |
|
|
|
01:04:35.319 --> 01:04:40.880 |
|
GPT-4. I think everybody knows it's |
|
|
|
01:04:38.920 --> 01:04:43.640 |
|
kind of the de facto standard strong |
|
|
|
01:04:40.880 --> 01:04:45.680 |
|
language model it used to be the only |
|
|
|
01:04:43.640 --> 01:04:47.680 |
|
strong language model like it used to be |
|
|
|
01:04:45.680 --> 01:04:50.079 |
|
on its own the strongest language model |
|
|
|
01:04:47.680 --> 01:04:53.160 |
|
and there were no real competitors to |
|
|
|
01:04:50.079 --> 01:04:55.000 |
|
GPT-4 from that point of view. I think |
|
|
|
01:04:53.160 --> 01:04:56.680 |
|
still if I wanted a strong language |
|
|
|
01:04:55.000 --> 01:04:58.960 |
|
model for just something that I'm |
|
|
|
01:04:56.680 --> 01:05:00.880 |
|
going to do randomly, I |
|
|
|
01:04:58.960 --> 01:05:03.680 |
|
still trust GPT-4 more than anything else |
|
|
|
01:05:00.880 --> 01:05:05.240 |
|
to give me a really good answer um but |
|
|
|
01:05:03.680 --> 01:05:08.480 |
|
there are now other competitors I'd like |
|
|
|
01:05:05.240 --> 01:05:11.960 |
|
to talk about. So, GPT-4: |
|
|
|
01:05:08.480 --> 01:05:14.240 |
|
it powers the pro version of ChatGPT, it |
|
|
|
01:05:11.960 --> 01:05:18.039 |
|
was tuned to be good as a chat-based |
|
|
|
01:05:14.240 --> 01:05:20.440 |
|
assistant, it accepts image inputs, |
|
|
|
01:05:18.039 --> 01:05:22.279 |
|
and it supports calling external tools |
|
|
|
01:05:20.440 --> 01:05:23.599 |
|
through a |
|
|
|
01:05:22.279 --> 01:05:27.119 |
|
function calling |
|
|
|
01:05:23.599 --> 01:05:28.720 |
|
interface. |
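(For concreteness, a minimal sketch of what that function-calling interface looks like through the OpenAI Python client; the get_weather tool and its schema are made-up examples, and the exact API surface may have changed since this talk:)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a (hypothetical) tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Pittsburgh?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments come back as JSON.
print(response.choices[0].message.tool_calls)
```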
|
|
|
01:05:27.119 --> 01:05:30.599 |
|
I think people are generally |
|
|
|
01:05:28.720 --> 01:05:34.000 |
|
familiar with this but just in case |
|
|
|
01:05:30.599 --> 01:05:36.240 |
|
you're not um I'd like to show a few |
|
|
|
01:05:34.000 --> 01:05:38.039 |
|
things that I like to |
|
|
|
01:05:36.240 --> 01:05:39.640 |
|
do |
|
|
|
01:05:38.039 --> 01:05:42.760 |
|
so let |
|
|
|
01:05:39.640 --> 01:05:42.760 |
|
[Music] |
|
|
|
01:05:46.920 --> 01:05:52.480 |
|
me so I'll just randomly grab one of my |
|
|
|
01:05:50.440 --> 01:05:57.640 |
|
papers from |
|
|
|
01:05:52.480 --> 01:05:57.640 |
|
arXiv, my most recent paper, |
|
|
|
01:06:03.400 --> 01:06:07.559 |
|
and I can copy paste |
|
|
|
01:06:13.200 --> 01:06:22.240 |
|
this and write, 'turn this into JSON |
|
|
|
01:06:19.240 --> 01:06:22.240 |
|
format.' |
|
|
|
01:06:27.960 --> 01:06:31.640 |
|
and I drop it in |
|
|
|
01:06:29.880 --> 01:06:35.480 |
|
here |
|
|
|
01:06:31.640 --> 01:06:38.279 |
|
and so this is an exhibit of its |
|
|
|
01:06:35.480 --> 01:06:42.240 |
|
multimodal abilities because I can throw |
|
|
|
01:06:38.279 --> 01:06:44.359 |
|
in a |
|
|
|
01:06:42.240 --> 01:06:48.400 |
|
table and it basically turns it into |
|
|
|
01:06:44.359 --> 01:06:50.599 |
|
JSON format. So I actually turned |
|
|
|
01:06:48.400 --> 01:06:52.119 |
|
a fair amount of data that I |
|
|
|
01:06:50.599 --> 01:06:53.960 |
|
created while creating these slides into |
|
|
|
01:06:52.119 --> 01:06:56.039 |
|
JSON format, so I can save it later for |
|
|
|
01:06:53.960 --> 01:06:59.079 |
|
whatever I want it for and I did it |
|
|
|
01:06:56.039 --> 01:07:01.720 |
|
through this. So this is an example of |
|
|
|
01:06:59.079 --> 01:07:06.599 |
|
the multimodal abilities; it can also tell |
|
|
|
01:07:01.720 --> 01:07:06.599 |
|
you about images and stuff like that. |
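(The same table-to-JSON trick through the API, rather than the ChatGPT UI, would look roughly like this: send a screenshot as an image input and ask for JSON back. The file name is a placeholder, and the model name is just an example of a vision-capable model:)

```python
import base64
from openai import OpenAI

client = OpenAI()

# Placeholder path to a screenshot of a table from a paper.
with open("table_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Turn this table into JSON format."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```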
|
|
|
01:07:07.000 --> 01:07:14.319 |
|
So, also, there was a famous article |
|
|
|
01:07:11.760 --> 01:07:16.760 |
|
written by Gary Marcus that said deep |
|
|
|
01:07:14.319 --> 01:07:19.760 |
|
learning is hitting a wall; it |
|
|
|
01:07:16.760 --> 01:07:22.880 |
|
was basically written two years ago, and |
|
|
|
01:07:19.760 --> 01:07:25.160 |
|
uh Gary Marcus was saying deep learning |
|
|
|
01:07:22.880 --> 01:07:26.200 |
|
is not the way for |
|
|
|
01:07:25.160 --> 01:07:27.760 |
|
the future, and we're going to need |
|
|
|
01:07:26.200 --> 01:07:31.319 |
|
things other than deep learning in order |
|
|
|
01:07:27.760 --> 01:07:34.559 |
|
to be able to make |
|
|
|
01:07:31.319 --> 01:07:36.400 |
|
progress. And whether you believe |
|
|
|
01:07:34.559 --> 01:07:40.520 |
|
that is true or not, I'll leave you to |
|
|
|
01:07:36.400 --> 01:07:46.520 |
|
your own opinion. But I could also |
|
|
|
01:07:40.520 --> 01:07:51.359 |
|
say, 'create a picture of deep learning |
|
|
|
01:07:46.520 --> 01:07:55.400 |
|
breaking through a brick wall,' and it can |
|
|
|
01:07:51.359 --> 01:07:55.400 |
|
generate images for you |
|
|
|
01:08:02.599 --> 01:08:07.440 |
|
Of course, if you ever do a live demo, even |
|
|
|
01:08:05.319 --> 01:08:10.319 |
|
if it's a live demo of an OpenAI product |
|
|
|
01:08:07.440 --> 01:08:13.559 |
|
that a million people use it will break |
|
|
|
01:08:10.319 --> 01:08:16.719 |
|
when you try to do it. So this is |
|
|
|
01:08:13.559 --> 01:08:17.799 |
|
another thing that it can do. So there |
|
|
|
01:08:16.719 --> 01:08:19.560 |
|
we have a picture of deep learning |
|
|
|
01:08:17.799 --> 01:08:22.640 |
|
breaking through a brick wall and it can |
|
|
|
01:08:19.560 --> 01:08:26.159 |
|
you know generate images and stuff so |
|
|
|
01:08:22.640 --> 01:08:28.560 |
|
these are like the kinds of things that |
|
|
|
01:08:26.159 --> 01:08:30.960 |
|
I now |
|
|
|
01:08:28.560 --> 01:08:32.880 |
|
expect. So it's not just reasoning |
|
|
|
01:08:30.960 --> 01:08:35.839 |
|
ability and other stuff like that it's |
|
|
|
01:08:32.880 --> 01:08:39.199 |
|
also multimodality, being able to |
|
|
|
01:08:35.839 --> 01:08:43.679 |
|
generate code. Another thing that's |
|
|
|
01:08:39.199 --> 01:08:46.719 |
|
kind of nice is: 'make a |
|
|
|
01:08:43.679 --> 01:08:49.440 |
|
histogram of these |
|
|
|
01:08:46.719 --> 01:08:54.640 |
|
numbers one, |
|
|
|
01:08:49.440 --> 01:08:54.640 |
|
two, one, two, four.' |
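(Behind the scenes, the code-execution tool writes and runs something like the snippet below; this is a guess at the generated code, not a transcript of it:)

```python
import matplotlib.pyplot as plt

numbers = [1, 2, 1, 2, 4]

# Plot a simple histogram of the dictated numbers.
plt.hist(numbers, bins=range(min(numbers), max(numbers) + 2))
plt.xlabel("value")
plt.ylabel("count")
plt.title("Histogram of the given numbers")
plt.show()
```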
|
|
|
01:08:57.600 --> 01:09:04.040 |
|
So it can do code generation and |
|
|
|
01:08:59.719 --> 01:09:05.560 |
|
display the results for you. There are |
|
|
|
01:09:04.040 --> 01:09:08.319 |
|
efforts to |
|
|
|
01:09:05.560 --> 01:09:12.239 |
|
make open source language models be able |
|
|
|
01:09:08.319 --> 01:09:14.000 |
|
to do these things and um in order to do |
|
|
|
01:09:12.239 --> 01:09:16.759 |
|
this you need multimodality and |
|
|
|
01:09:14.000 --> 01:09:19.359 |
|
also the ability to use tools. So |
|
|
|
01:09:16.759 --> 01:09:21.400 |
|
actually, the way that this one worked |
|
|
|
01:09:19.359 --> 01:09:24.520 |
|
here is very different from the way that |
|
|
|
01:09:21.400 --> 01:09:27.920 |
|
that one worked. So this is actually using an |
|
|
|
01:09:24.520 --> 01:09:29.759 |
|
image input into GPT-4, so what it's doing |
|
|
|
01:09:27.920 --> 01:09:33.040 |
|
is it's encoding the image and then |
|
|
|
01:09:29.759 --> 01:09:34.719 |
|
feeding it in as tokens into GPT-4. What |
|
|
|
01:09:33.040 --> 01:09:37.920 |
|
this one is doing here is rather |
|
|
|
01:09:34.719 --> 01:09:40.120 |
|
calling a tool: this is calling DALL-E 3 |
|
|
|
01:09:37.920 --> 01:09:42.120 |
|
as a tool and it's providing the caption |
|
|
|
01:09:40.120 --> 01:09:46.880 |
|
to DALL-E 3. You can even see maybe the |
|
|
|
01:09:42.120 --> 01:09:46.880 |
|
caption that was provided to |
|
|
|
01:09:48.640 --> 01:09:55.560 |
|
DALL-E 3. You previously were able to |
|
|
|
01:09:51.239 --> 01:09:57.960 |
|
do that by maybe downloading it. Yeah, so |
|
|
|
01:09:55.560 --> 01:10:01.600 |
|
you can see the |
|
|
|
01:09:57.960 --> 01:10:01.600 |
|
caption, which |
|
|
|
01:10:03.560 --> 01:10:08.120 |
|
was 'a visual metaphor of deep learning |
|
|
|
01:10:06.320 --> 01:10:10.679 |
|
as a powerful force breaking through a |
|
|
|
01:10:08.120 --> 01:10:13.400 |
|
brick wall,' or something like that. And |
|
|
|
01:10:10.679 --> 01:10:15.480 |
|
so basically what GPT-4 did is it |
|
|
|
01:10:13.400 --> 01:10:18.000 |
|
said it wanted to call a tool and then |
|
|
|
01:10:15.480 --> 01:10:19.360 |
|
it provided the caption, |
|
|
|
01:10:18.000 --> 01:10:21.280 |
|
and then it called a completely |
|
|
|
01:10:19.360 --> 01:10:22.320 |
|
separate tool as an API in order to |
|
|
|
01:10:21.280 --> 01:10:27.320 |
|
generate the |
|
|
|
01:10:22.320 --> 01:10:27.320 |
|
image. |
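(The tool side of that is, roughly, the separate images API. A minimal sketch with the OpenAI client, where the prompt stands in for the caption GPT-4 generated; the API details may have changed:)

```python
from openai import OpenAI

client = OpenAI()

# GPT-4 decides to call the image tool and supplies a caption; that
# caption then goes to DALL-E 3 as a completely separate API call.
result = client.images.generate(
    model="dall-e-3",
    prompt=("A visual metaphor of deep learning as a powerful force "
            "breaking through a brick wall"),
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```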
|
|
|
01:10:28.199 --> 01:10:34.080 |
|
Well, I managed to break ChatGPT, that's |
|
|
|
01:10:30.120 --> 01:10:36.520 |
|
no small accomplishment. But anyway, |
|
|
|
01:10:34.080 --> 01:10:40.199 |
|
these are some of the things that |
|
|
|
01:10:36.520 --> 01:10:42.360 |
|
the systems can do, and because OpenAI |
|
|
|
01:10:40.199 --> 01:10:47.000 |
|
has kind of become a standard that a |
|
|
|
01:10:42.360 --> 01:10:50.040 |
|
lot of people want to compete with. |
|
|
|
01:10:47.000 --> 01:10:53.480 |
|
Also, I would say Gemini and Claude |
|
|
|
01:10:50.040 --> 01:10:56.400 |
|
are maybe the two models that |
|
|
|
01:10:53.480 --> 01:10:59.440 |
|
can compete with GPT-4 in terms of |
|
|
|
01:10:56.400 --> 01:11:02.600 |
|
accuracy. Gemini is a much newer |
|
|
|
01:10:59.440 --> 01:11:06.159 |
|
model by Google that comes in two |
|
|
|
01:11:02.600 --> 01:11:08.280 |
|
varieties, Gemini Pro and Gemini Ultra. |
|
|
|
01:11:06.159 --> 01:11:11.040 |
|
One interesting thing about Gemini Pro |
|
|
|
01:11:08.280 --> 01:11:13.560 |
|
is that it supports very long inputs, |
|
|
|
01:11:11.040 --> 01:11:15.679 |
|
1 to 10 million tokens. It also |
|
|
|
01:11:13.560 --> 01:11:16.600 |
|
supports image and video inputs and |
|
|
|
01:11:15.679 --> 01:11:20.239 |
|
image |
|
|
|
01:11:16.600 --> 01:11:22.320 |
|
outputs. I actually put a video into |
|
|
|
01:11:20.239 --> 01:11:24.600 |
|
it recently and the video recognition |
|
|
|
01:11:22.320 --> 01:11:27.159 |
|
capabilities are pretty nice, so |
|
|
|
01:11:24.600 --> 01:11:29.280 |
|
you can try that out if you |
|
|
|
01:11:27.159 --> 01:11:34.320 |
|
want |
|
|
|
01:11:29.280 --> 01:11:36.640 |
|
And finally there's Claude, Claude 3. It |
|
|
|
01:11:34.320 --> 01:11:39.280 |
|
supports a context window of up to 200K tokens, |
|
|
|
01:11:36.640 --> 01:11:41.040 |
|
also allows for processing images and |
|
|
|
01:11:39.280 --> 01:11:46.480 |
|
overall has strong results competitive |
|
|
|
01:11:41.040 --> 01:11:49.880 |
|
with GPT-4. So if |
|
|
|
01:11:46.480 --> 01:11:51.480 |
|
you're looking for models to use, to |
|
|
|
01:11:49.880 --> 01:11:53.600 |
|
try out the better closed models, you can |
|
|
|
01:11:51.480 --> 01:11:55.719 |
|
definitely use these. Another thing I'm |
|
|
|
01:11:53.600 --> 01:11:58.239 |
|
really excited about is how we can get |
|
|
|
01:11:55.719 --> 01:11:59.560 |
|
open models to demonstrate |
|
|
|
01:11:58.239 --> 01:12:01.320 |
|
some of the interesting capabilities |
|
|
|
01:11:59.560 --> 01:12:02.840 |
|
that we see in closed models so you know |
|
|
|
01:12:01.320 --> 01:12:07.120 |
|
everybody can benefit and everybody |
|
|
|
01:12:02.840 --> 01:12:10.040 |
|
knows the recipes to make |
|
|
|
01:12:07.120 --> 01:12:12.560 |
|
models like this so I think that's |
|
|
|
01:12:10.040 --> 01:12:16.639 |
|
mostly all I have for today. One more |
|
|
|
01:12:12.560 --> 01:12:23.440 |
|
thing that is kind of neat |
|
|
|
01:12:16.639 --> 01:12:23.440 |
|
is I just found this a little while ago |
|
|
|
01:12:28.800 --> 01:12:32.239 |
|
but there is this uh |
|
|
|
01:12:33.320 --> 01:12:39.239 |
|
interface called 'god mode' that |
|
|
|
01:12:36.880 --> 01:12:41.960 |
|
allows you to put all of the chat apps |
|
|
|
01:12:39.239 --> 01:12:45.840 |
|
next to each other and write the same |
|
|
|
01:12:41.960 --> 01:12:47.080 |
|
chat query into them and get the |
|
|
|
01:12:45.840 --> 01:12:48.719 |
|
result from all of them so you can |
|
|
|
01:12:47.080 --> 01:12:51.080 |
|
actually compare all of them in kind of |
|
|
|
01:12:48.719 --> 01:12:52.840 |
|
an interactive setting. So if you want |
|
|
|
01:12:51.080 --> 01:12:54.800 |
|
to look at all of them, especially the |
|
|
|
01:12:52.840 --> 01:12:56.679 |
|
closed models: for open models it's |
|
|
|
01:12:54.800 --> 01:12:58.239 |
|
not too hard to do it yourself, but if you |
|
|
|
01:12:56.679 --> 01:12:59.840 |
|
want to try all of the closed models |
|
|
|
01:12:58.239 --> 01:13:01.800 |
|
together, you can do that: log |
|
|
|
01:12:59.840 --> 01:13:03.960 |
|
into all of your accounts and then press |
|
|
|
01:13:01.800 --> 01:13:05.320 |
|
go on a query and see how they all |
|
|
|
01:13:03.960 --> 01:13:07.960 |
|
do. |
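(If you'd rather script this kind of side-by-side comparison than use a browser app, here is a rough sketch with the official Python clients; the model names are examples from around this time, and the client APIs may have changed:)

```python
import os
import anthropic
import google.generativeai as genai
from openai import OpenAI

prompt = "Explain mixture of experts in two sentences."

# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY / GOOGLE_API_KEY are set.
openai_out = OpenAI().chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

claude_out = anthropic.Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=256,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_out = genai.GenerativeModel("gemini-pro").generate_content(prompt).text

for name, out in [("GPT-4", openai_out), ("Claude 3", claude_out),
                  ("Gemini Pro", gemini_out)]:
    print(f"=== {name} ===\n{out}\n")
```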
|
|
|
01:13:05.320 --> 01:13:09.800 |
|
um that might be a good way to compare |
|
|
|
01:13:07.960 --> 01:13:12.000 |
|
all of the models kind of qualitatively |
|
|
|
01:13:09.800 --> 01:13:14.679 |
|
as opposed to |
|
|
|
01:13:12.000 --> 01:13:17.280 |
|
quantitatively. Cool, that's all I have |
|
|
|
01:13:14.679 --> 01:13:19.440 |
|
for today. I don't know, are there any |
|
|
|
01:13:17.280 --> 01:13:23.440 |
|
questions or discussion or things like |
|
|
|
01:13:19.440 --> 01:13:23.440 |
|
this yeah |
|
|
|
01:13:28.840 --> 01:13:35.679 |
|
So, a systematic way: the first thing |
|
|
|
01:13:32.760 --> 01:13:37.960 |
|
you can do is look at the benchmark |
|
|
|
01:13:35.679 --> 01:13:40.800 |
|
results that have been published but |
|
|
|
01:13:37.960 --> 01:13:43.320 |
|
actually I would like to give a caveat |
|
|
|
01:13:40.800 --> 01:13:43.320 |
|
about |
|
|
|
01:13:45.199 --> 01:13:48.440 |
|
this which |
|
|
|
01:13:50.000 --> 01:13:54.000 |
|
is: |
|
|
|
01:14:22.960 --> 01:14:28.239 |
|
So these are the benchmarking |
|
|
|
01:14:25.600 --> 01:14:30.840 |
|
results from the Gemini |
|
|
|
01:14:28.239 --> 01:14:33.440 |
|
paper um |
|
|
|
01:14:30.840 --> 01:14:36.719 |
|
and they have a table here um and |
|
|
|
01:14:33.440 --> 01:14:38.679 |
|
basically what they, kind of obviously to |
|
|
|
01:14:36.719 --> 01:14:41.679 |
|
me, wanted to demonstrate is that Gemini |
|
|
|
01:14:38.679 --> 01:14:44.760 |
|
was the best model out of all the models |
|
|
|
01:14:41.679 --> 01:14:47.800 |
|
um and so they have Gemini Pro and |
|
|
|
01:14:44.760 --> 01:14:50.040 |
|
Gemini Ultra, and they put Gemini |
|
|
|
01:14:47.800 --> 01:14:52.639 |
|
Ultra against GPT-4 and Gemini Pro against |
|
|
|
01:14:50.040 --> 01:14:56.360 |
|
GPT-3.5, because they're |
|
|
|
01:14:52.639 --> 01:14:58.440 |
|
comparable models. |
|
|
|
01:14:56.360 --> 01:15:01.880 |
|
Yeah, because they're |
|
|
|
01:14:58.440 --> 01:15:03.040 |
|
comparable models, basically. And on |
|
|
|
01:15:01.880 --> 01:15:05.880 |
|
things |
|
|
|
01:15:03.040 --> 01:15:07.400 |
|
like these, they demonstrate that |
|
|
|
01:15:05.880 --> 01:15:08.199 |
|
basically they're better in all of |
|
|
|
01:15:07.400 --> 01:15:10.520 |
|
these |
|
|
|
01:15:08.199 --> 01:15:14.760 |
|
situations. However, there are a few details. |
|
|
|
01:15:10.520 --> 01:15:17.120 |
|
The first detail is that the method |
|
|
|
01:15:14.760 --> 01:15:20.199 |
|
that they're using to prompt the model |
|
|
|
01:15:17.120 --> 01:15:22.120 |
|
is different. Here we have 94.4 |
|
|
|
01:15:20.199 --> 01:15:23.560 |
|
versus 92 but the method they're using |
|
|
|
01:15:22.120 --> 01:15:25.520 |
|
to prompt the model is different: they're |
|
|
|
01:15:23.560 --> 01:15:29.159 |
|
sampling |
|
|
|
01:15:25.520 --> 01:15:33.320 |
|
32 times and then basically taking the |
|
|
|
01:15:29.159 --> 01:15:36.320 |
|
best of the 32. |
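(That is the chain-of-thought-at-32 style of prompting the Gemini report describes, as opposed to a single greedy answer. A rough sketch of the sample-many-and-vote idea, using majority voting over samples and a hypothetical extract_answer helper:)

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def extract_answer(text: str) -> str:
    """Hypothetical helper: treat the last line as the final answer."""
    return text.strip().splitlines()[-1]

def sample_and_vote(question: str, k: int = 32) -> str:
    """Sample k chain-of-thought completions and majority-vote the
    final answers, instead of taking a single greedy completion."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": question + "\nThink step by step."}],
        temperature=0.7,
        n=k,  # k independent samples
    )
    answers = [extract_answer(c.message.content) for c in response.choices]
    return Counter(answers).most_common(1)[0][0]
```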
|
|
|
01:15:33.320 --> 01:15:41.360 |
|
And another thing is, if we look at this HumanEval |
|
|
|
01:15:36.320 --> 01:15:44.120 |
|
performance here: they reported their |
|
|
|
01:15:41.360 --> 01:15:47.000 |
|
HumanEval performance, then they pulled |
|
|
|
01:15:44.120 --> 01:15:49.400 |
|
the number from the original GPT-4 paper |
|
|
|
01:15:47.000 --> 01:15:53.159 |
|
and compared to the number from the GPT-4 |
|
|
|
01:15:49.400 --> 01:15:54.639 |
|
paper. But all of these APIs |
|
|
|
01:15:53.159 --> 01:15:57.719 |
|
are constantly changing they're getting |
|
|
|
01:15:54.639 --> 01:15:59.480 |
|
better and better. I was |
|
|
|
01:15:57.719 --> 01:16:01.400 |
|
very excited when Gemini first came out |
|
|
|
01:15:59.480 --> 01:16:03.120 |
|
and we actually wrote a paper where we |
|
|
|
01:16:01.400 --> 01:16:05.320 |
|
tried to look deeper into the |
|
|
|
01:16:03.120 --> 01:16:08.000 |
|
performance and what we actually found |
|
|
|
01:16:05.320 --> 01:16:10.199 |
|
is, comparing Gemini Pro and GPT-3.5 |
|
|
|
01:16:08.000 --> 01:16:12.719 |
|
Turbo, which should be comparable, we |
|
|
|
01:16:10.199 --> 01:16:16.120 |
|
found that actually GPT-3.5 Turbo did a |
|
|
|
01:16:12.719 --> 01:16:19.280 |
|
little bit better in most cases, |
|
|
|
01:16:16.120 --> 01:16:20.920 |
|
although not all cases and one of the |
|
|
|
01:16:19.280 --> 01:16:24.000 |
|
things we noticed in particular is, on |
|
|
|
01:16:20.920 --> 01:16:27.960 |
|
HumanEval, GPT-3.5 had gotten much, |
|
|
|
01:16:24.000 --> 01:16:29.760 |
|
much better over the course of |
|
|
|
01:16:27.960 --> 01:16:31.639 |
|
the time since the original number was |
|
|
|
01:16:29.760 --> 01:16:34.120 |
|
reported; it had gone up by almost 30 |
|
|
|
01:16:31.639 --> 01:16:35.760 |
|
points. And also, in a few cases we had |
|
|
|
01:16:34.120 --> 01:16:37.480 |
|
like a little bit of trouble reproducing |
|
|
|
01:16:35.760 --> 01:16:39.280 |
|
the Gemini Pro results just because they |
|
|
|
01:16:37.480 --> 01:16:40.360 |
|
had like safety filters and other stuff |
|
|
|
01:16:39.280 --> 01:16:42.520 |
|
like that that we had to get around |
|
|
|
01:16:40.360 --> 01:16:45.280 |
|
before we got the results so it's not |
|
|
|
01:16:42.520 --> 01:16:49.560 |
|
necessarily the case that you can |
|
|
|
01:16:45.280 --> 01:16:52.639 |
|
completely take the |
|
|
|
01:16:49.560 --> 01:16:55.560 |
|
results at face |
|
|
|
01:16:52.639 --> 01:16:57.040 |
|
value. Actually, as a first step, I would |
|
|
|
01:16:55.560 --> 01:17:00.080 |
|
suggest just trying to chat with the |
|
|
|
01:16:57.040 --> 01:17:03.719 |
|
model um which is also why I introduced |
|
|
|
01:17:00.080 --> 01:17:06.679 |
|
the quote-unquote 'god mode' |
|
|
|
01:17:03.719 --> 01:17:09.159 |
|
browser, because you can kind of |
|
|
|
01:17:06.679 --> 01:17:10.639 |
|
tell when something's way |
|
|
|
01:17:09.159 --> 01:17:14.320 |
|
better than another one just by the |
|
|
|
01:17:10.639 --> 01:17:17.159 |
|
responses it gives. Separately, if you want |
|
|
|
01:17:14.320 --> 01:17:17.159 |
|
to do it much more |
|
|
|
01:17:20.199 --> 01:17:23.840 |
|
systematically there are really nice |
|
|
|
01:17:22.360 --> 01:17:25.400 |
|
tools for evaluation. I think I might |
|
|
|
01:17:23.840 --> 01:17:26.960 |
|
have talked about this before but if I |
|
|
|
01:17:25.400 --> 01:17:29.280 |
|
haven't then you should definitely take |
|
|
|
01:17:26.960 --> 01:17:31.880 |
|
a look at this: there's the EleutherAI |
|
|
|
01:17:29.280 --> 01:17:34.040 |
|
evaluation harness, and the EleutherAI |
|
|
|
01:17:31.880 --> 01:17:35.679 |
|
evaluation harness makes it really easy |
|
|
|
01:17:34.040 --> 01:17:37.600 |
|
to evaluate, for example, Hugging Face |
|
|
|
01:17:35.679 --> 01:17:39.040 |
|
models against many many different tasks |
|
|
|
01:17:37.600 --> 01:17:41.360 |
|
so you can just pick which task you want |
|
|
|
01:17:39.040 --> 01:17:43.719 |
|
to evaluate against pick the model name |
|
|
|
01:17:41.360 --> 01:17:47.400 |
|
and go, and you can get evaluation |
|
|
|
01:17:43.719 --> 01:17:51.960 |
|
results. That won't necessarily work |
|
|
|
01:17:47.400 --> 01:17:53.960 |
|
for closed models, but if you look for |
|
|
|
01:17:51.960 --> 01:17:55.480 |
|
'EleutherAI language model evaluation harness,' |
|
|
|
01:17:53.960 --> 01:17:58.800 |
|
that's maybe the easiest way to run |
|
|
|
01:17:55.480 --> 01:17:58.800 |
|
evaluations. |
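(A minimal sketch of driving the harness from Python; this assumes the lm-eval package's simple_evaluate entry point, and the model and task names are just examples:)

```python
import lm_eval  # pip install lm-eval

# Evaluate a Hugging Face model on one benchmark task.
results = lm_eval.simple_evaluate(
    model="hf",                                    # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["hellaswag"],
    batch_size=8,
)
print(results["results"]["hellaswag"])
```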
|
|
|
01:17:59.239 --> 01:18:05.239 |
|
Cool. Okay, so we're at time |
|
|
|
01:18:02.960 --> 01:18:07.480 |
|
now uh but I'd be happy to answer a few |
|
|
|
01:18:05.239 --> 01:18:10.639 |
|
questions if anybody else has any so |
|
|
|
01:18:07.480 --> 01:18:10.639 |
|
thank you |
|
|