diff --git "a/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt" "b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt"
new file mode 100644
--- /dev/null
+++ "b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt"
@@ -0,0 +1,7079 @@
+1
+00:00:00,280 --> 00:00:08,320
+can everyone hear? all set? okay great so

2
00:00:05,400 --> 00:00:09,840
um today I'll be talking about a tour of

3
00:00:08,320 --> 00:00:13,960
modern uh

4
00:00:09,840 --> 00:00:16,600
llms and basically the idea here is that

5
00:00:13,960 --> 00:00:18,600
there are many many large language models

6
00:00:16,600 --> 00:00:20,480
available nowadays but I wanted to go

7
00:00:18,600 --> 00:00:22,760
through some of the ones that are

8
00:00:20,480 --> 00:00:25,880
particularly interesting for various

9
00:00:22,760 --> 00:00:26,880
reasons either because they disclose a

10
00:00:25,880 --> 00:00:29,519
lot of

11
00:00:26,880 --> 00:00:31,119
information uh you know about exactly

12
00:00:29,519 --> 00:00:34,120
how they were trained so we can get an

13
00:00:31,119 --> 00:00:35,559
idea about what is involved in training

14
00:00:34,120 --> 00:00:39,120
uh a kind of state-of-the-art large

15
00:00:35,559 --> 00:00:40,640
language model or because they're kind

16
00:00:39,120 --> 00:00:43,200
of the strongest models that you can

17
00:00:40,640 --> 00:00:45,160
download and use on your own um like the

18
00:00:43,200 --> 00:00:47,360
best open weights language models that

19
00:00:45,160 --> 00:00:49,559
are available or because they're

20
00:00:47,360 --> 00:00:51,879
specialized to some particular topic or

21
00:00:49,559 --> 00:00:53,480
because they're the best closed uh

22
00:00:51,879 --> 00:00:56,399
language models but I'm going to

23
00:00:53,480 --> 00:00:58,640
particularly focus on the first two um
+

24
00:00:56,399 --> 00:01:00,640
just so like everybody has an idea about

25
00:00:58,640 --> 00:01:03,239
you know what what is going into all the

26
00:01:00,640 --> 00:01:07,519
models that you're using for whatever uh

27
00:01:03,239 --> 00:01:07,519
you know tasks that you're trying to

28
00:01:09,119 --> 00:01:14,159
solve so one important thing is uh what

29
00:01:12,240 --> 00:01:18,080
makes a model so we talk about you know

30
00:01:14,159 --> 00:01:21,680
like llama 2 or Mistral or Mixtral or

31
00:01:18,080 --> 00:01:23,320
whatever else and I think you know this

32
00:01:21,680 --> 00:01:24,479
already but it's worth reiterating again

33
00:01:23,320 --> 00:01:27,320
here because I'm going to talk about it

34
00:01:24,479 --> 00:01:29,320
a lot today but it's basically the model

35
00:01:27,320 --> 00:01:31,280
architecture so what architecture do you

36
00:01:29,320 --> 00:01:33,799
decide to use

37
00:01:31,280 --> 00:01:35,840
um what data do you decide to use and

38
00:01:33,799 --> 00:01:39,759
what training algorithm or training

39
00:01:35,840 --> 00:01:42,520
method do you decide to use and all of

40
00:01:39,759 --> 00:01:46,040
these are important um and there was

41
00:01:42,520 --> 00:01:49,320
actually uh a Twitter thread with Tom

42
00:01:46,040 --> 00:01:52,399
Wolf who's I guess CSO or CTO or

43
00:01:49,320 --> 00:01:54,840
something like that at hugging face um

44
00:01:52,399 --> 00:01:56,840
and basically what he was saying is uh a

45
00:01:54,840 --> 00:01:59,240
lot of people don't realize that the

46
00:01:56,840 --> 00:02:01,039
data is actually one of the most

47
00:01:59,240 --> 00:02:04,320
important parts

48
00:02:01,039 --> 00:02:07,680
um and the architectures are a lot less

49
00:02:04,320 --> 00:02:10,920
important nowadays and I think that

50
00:02:07,680 --> 00:02:14,280
there's some 
truth to that there's also

51
00:02:10,920 --> 00:02:15,879
some you know a counterargument to that

52
00:02:14,280 --> 00:02:17,920
uh the truth to that which you'll see

53
00:02:15,879 --> 00:02:19,760
today is that almost all of the models

54
00:02:17,920 --> 00:02:21,360
that we're using use very similar

55
00:02:19,760 --> 00:02:23,120
architectures like almost all of the

56
00:02:21,360 --> 00:02:26,879
models use an architecture that's very

57
00:02:23,120 --> 00:02:28,760
similar to llama um but despite the fact

58
00:02:26,879 --> 00:02:31,280
that they use very similar architectures

59
00:02:28,760 --> 00:02:33,599
their um accuracy is vastly different

60
00:02:31,280 --> 00:02:36,080
or their their abilities are vastly

61
00:02:33,599 --> 00:02:38,519
different so that must come from the

62
00:02:36,080 --> 00:02:40,040
data or the training decisions right so

63
00:02:38,519 --> 00:02:41,640
that's an argument for the fact that

64
00:02:40,040 --> 00:02:44,040
architecture decisions are a lot less

65
00:02:41,640 --> 00:02:48,000
important my counterargument to that is

66
00:02:44,040 --> 00:02:49,840
we spent 9 to 10 years fine-tuning and

67
00:02:48,000 --> 00:02:51,560
finding the llama architecture so now we

68
00:02:49,840 --> 00:02:53,120
have the llama architecture which is a

69
00:02:51,560 --> 00:02:55,480
really good architecture it works really

70
00:02:53,120 --> 00:02:57,640
well when training very large models on

71
00:02:55,480 --> 00:02:59,239
lots of data and so now we don't need to

72
00:02:57,640 --> 00:03:01,360
use another architecture because the

73
00:02:59,239 --> 00:03:02,920
architecture we're using is good but if we

74
00:03:01,360 --> 00:03:06,200
were trying to do the same thing with

75
00:03:02,920 --> 00:03:07,640
like the LSTM from 2014 uh then none of

76
00:03:06,200 --> 00:03:09,440
the stuff we're 
doing today would work

77
00:03:07,640 --> 00:03:11,760
so that's an argument in favor of you

78
00:03:09,440 --> 00:03:13,560
know architectures being important also

79
00:03:11,760 --> 00:03:16,920
architectures can make things faster and

80
00:03:13,560 --> 00:03:16,920
that's included in those decisions

81
00:03:17,280 --> 00:03:21,280
that

82
00:03:19,040 --> 00:03:22,640
so um the first thing I'd like to talk

83
00:03:21,280 --> 00:03:25,280
about before I get into any of the

84
00:03:22,640 --> 00:03:28,000
actual details is um open versus closed

85
00:03:25,280 --> 00:03:30,480
access uh this is not like modeling

86
00:03:28,000 --> 00:03:31,760
stuff but I think it's important and

87
00:03:30,480 --> 00:03:35,599
also helps you understand the

88
00:03:31,760 --> 00:03:39,519
environment a little bit so um there's a

89
00:03:35,599 --> 00:03:42,200
nice blog by Percy Liang and others uh

90
00:03:39,519 --> 00:03:45,560
which is also in the references and they

91
00:03:42,200 --> 00:03:47,720
discuss several different varieties of

92
00:03:45,560 --> 00:03:50,599
like openness of release of language

93
00:03:47,720 --> 00:03:52,560
models and advanced AI systems and there

94
00:03:50,599 --> 00:03:55,200
are some things that we can talk about

95
00:03:52,560 --> 00:03:59,000
we can talk about the weights being open

96
00:03:55,200 --> 00:04:01,439
um described or closed inference uh code

97
00:03:59,000 --> 00:04:03,319
being open or inference methods being

98
00:04:01,439 --> 00:04:04,959
described or it being fully closed

99
00:04:03,319 --> 00:04:08,120
training being open described or closed

100
00:04:04,959 --> 00:04:13,040
and data being open described or closed

101
00:04:08,120 --> 00:04:14,760
and um in general uh we have like the

102
00:04:13,040 --> 00:04:16,519
open weights models that are on hugging

103
00:04:14,760 --> 00:04:19,040
+face that might just mean the weights

104
00:04:16,519 --> 00:04:20,600
are open the inference code also needs

105
00:04:19,040 --> 00:04:21,919
to be open because otherwise you can't

106
00:04:20,600 --> 00:04:24,160
do inference on them if they're on

107
00:04:21,919 --> 00:04:25,800
hugging face but that doesn't mean that

108
00:04:24,160 --> 00:04:28,120
the training code is open it also

109
00:04:25,800 --> 00:04:32,479
doesn't mean that the data is open um

110
00:04:28,120 --> 00:04:34,280
and so there are various degrees of

111
00:04:32,479 --> 00:04:37,320
openness

112
00:04:34,280 --> 00:04:40,919
um and then of course there are things

113
00:04:37,320 --> 00:04:42,520
like uh GPT-4 or GPT models where

114
00:04:40,919 --> 00:04:45,560
basically all of this is closed and we

115
00:04:42,520 --> 00:04:48,880
don't know anything about it or know

116
00:04:45,560 --> 00:04:50,560
very little about it another thing is

117
00:04:48,880 --> 00:04:52,600
about licenses and

118
00:04:50,560 --> 00:04:54,199
permissiveness and this is kind of

119
00:04:52,600 --> 00:04:56,880
important if you want to do a research

120
00:04:54,199 --> 00:05:01,240
project to know because

121
00:04:56,880 --> 00:05:04,080
it means it has an impact on the things

122
00:05:01,240 --> 00:05:05,520
that you legally can do or can't do in

123
00:05:04,080 --> 00:05:08,039
universities I mean we should be

124
00:05:05,520 --> 00:05:09,479
following the law but uh maybe people

125
00:05:08,039 --> 00:05:10,720
think about this a little bit less if

126
00:05:09,479 --> 00:05:12,240
you're in a big company this is

127
00:05:10,720 --> 00:05:14,919
something that becomes really important

128
00:05:12,240 --> 00:05:17,199
so it's uh it's important to think

129
00:05:14,919 --> 00:05:20,039
about so I'm going to go through several

130
00:05:17,199 --> 
00:05:21,440
+degrees of licenses uh that if you've

131
00:05:20,039 --> 00:05:25,759
done anything in open source you

132
00:05:21,440 --> 00:05:27,600
probably know but um the or you probably

133
00:05:25,759 --> 00:05:29,919
know a lot of these the first one is

134
00:05:27,600 --> 00:05:31,479
public domain or cc0

135
00:05:29,919 --> 00:05:33,440
and this basically means you can do

136
00:05:31,479 --> 00:05:37,240
anything with it like I could I could

137
00:05:33,440 --> 00:05:39,280
download it and um

138
00:05:37,240 --> 00:05:41,680
download it and redistribute it not give

139
00:05:39,280 --> 00:05:44,560
you any credit uh modify it in any way I

140
00:05:41,680 --> 00:05:47,720
want and this includes things like old

141
00:05:44,560 --> 00:05:49,600
copyrighted works and products of the US

142
00:05:47,720 --> 00:05:51,400
government workers so if you work for

143
00:05:49,600 --> 00:05:53,240
the US government in some capacities

144
00:05:51,400 --> 00:05:58,560
anything you generate becomes public

145
00:05:53,240 --> 00:06:01,000
domain um so old copyrighted works um

146
00:05:58,560 --> 00:06:04,560
how how old do you think they need to be

147
00:06:01,000 --> 00:06:04,560
before they become uh

148
00:06:04,720 --> 00:06:12,280
uncopyrighted

149
00:06:07,000 --> 00:06:12,280
yeah uh I think that's pretty close

150
00:06:14,319 --> 00:06:21,280
so it's uh 70 years I

151
00:06:18,520 --> 00:06:23,680
guess oh sorry the life of the author

152
00:06:21,280 --> 00:06:25,120
plus an additional 70 years so like

153
00:06:23,680 --> 00:06:28,479
after the after the person has passed

154
00:06:25,120 --> 00:06:30,720
away 70 years I guess it says um does

155
00:06:28,479 --> 00:06:34,520
anyone know a work that just

156
00:06:30,720 --> 00:06:37,520
became non-copyrighted yeah uh Mickey

157
+00:06:34,520 --> 00:06:43,199
+Mouse is still copyrighted

158
00:06:37,520 --> 00:06:45,199
yeah Steamboat Willie uh did it okay so that

159
00:06:43,199 --> 00:06:48,400
that's some new news some other new news

160
00:06:45,199 --> 00:06:50,759
is Winnie the Pooh um so Winnie the Pooh just

161
00:06:48,400 --> 00:06:54,199
became non-copyrighted and actually I

162
00:06:50,759 --> 00:06:55,840
just heard uh last week that somebody

163
00:06:54,199 --> 00:06:59,680
made a horror movie where Winnie the

164
00:06:55,840 --> 00:07:01,479
Pooh was a killer and that won uh a

165
00:06:59,680 --> 00:07:04,960
whole bunch of like bad movie awards in

166
00:07:01,479 --> 00:07:06,639
2023 so um that's the kind of things

167
00:07:04,960 --> 00:07:09,080
that can happen to your copyrighted

168
00:07:06,639 --> 00:07:11,479
works if they are released cc0 somebody

169
00:07:09,080 --> 00:07:12,960
can do anything they want with them uh

170
00:07:11,479 --> 00:07:14,400
you know so you need to be a little bit

171
00:07:12,960 --> 00:07:18,080
careful about

172
00:07:14,400 --> 00:07:20,000
that um next are MIT and BSD these are

173
00:07:18,080 --> 00:07:22,400
very common software licenses you'll see

174
00:07:20,000 --> 00:07:25,720
them on a lot of research projects these

175
00:07:22,400 --> 00:07:27,400
have very few restrictions um other than

176
00:07:25,720 --> 00:07:29,319
maybe maintaining the copyright notice

177
00:07:27,400 --> 00:07:31,840
for BSD but that's about it you can do

178
00:07:29,319 --> 00:07:33,840
just about anything you want with it um

179
00:07:31,840 --> 00:07:35,599
actually I'm not sure if people know

180
00:07:33,840 --> 00:07:39,599
this but the Mac operating system is

181
00:07:35,599 --> 00:07:42,199
based on an old BSD uh operating

182
00:07:39,599 --> 00:07:44,280
system where they uh took

183
+00:07:42,199 --> 00:07:46,080
+the code uh they

184
00:07:44,280 --> 00:07:49,560
forked it and made it private and now it's

185
00:07:46,080 --> 00:07:51,919
the proprietary Mac operating system so

186
00:07:49,560 --> 00:07:53,720
uh that's something you can do with an

187
00:07:51,919 --> 00:07:57,840
MIT or BSD

188
00:07:53,720 --> 00:08:00,000
license um there's also Apache and CC

189
00:07:57,840 --> 00:08:02,560
BY um

190
00:08:00,000 --> 00:08:05,039
here you must acknowledge the owner of

191
00:08:02,560 --> 00:08:07,840
the uh the original creators so you need

192
00:08:05,039 --> 00:08:08,960
to say this person actually created uh

193
00:08:07,840 --> 00:08:11,520
this

194
00:08:08,960 --> 00:08:14,680
originally

195
00:08:11,520 --> 00:08:17,319
um Apache is also kind of interesting

196
00:08:14,680 --> 00:08:21,759
because they will give you a license to

197
00:08:17,319 --> 00:08:25,960
use that code and any patents that are

198
00:08:21,759 --> 00:08:29,599
associated with that code unless you sue

199
00:08:25,960 --> 00:08:32,159
the company who released it so um just

200
00:08:29,599 --> 00:08:34,039
give an example let's say uh Google

201
00:08:32,159 --> 00:08:36,279
released their code under the Apache

202
00:08:34,039 --> 00:08:38,919
license and that code implements

203
00:08:36,279 --> 00:08:42,680
Transformers and Google has a patent on

204
00:08:38,919 --> 00:08:45,760
Transformers so if you use uh kind of

205
00:08:42,680 --> 00:08:48,200
a JAX or TensorFlow

206
00:08:45,760 --> 00:08:50,120
implementation of Transformers uh that

207
00:08:48,200 --> 00:08:51,720
was created by Google you're okay you're

208
00:08:50,120 --> 00:08:54,640
safe to use that because they've

209
00:08:51,720 --> 00:08:57,360
released it under uh under that license

210
00:08:54,640 --> 
00:08:59,560
+but if you sue Google uh for anything

211
00:08:57,360 --> 00:09:01,760
related to intellectual property Google

212
00:08:59,560 --> 00:09:04,480
could say uh you can't use

213
00:09:01,760 --> 00:09:06,040
Transformers anymore um and so like if

214
00:09:04,480 --> 00:09:08,279
OpenAI ever sues Google for

215
00:09:06,040 --> 00:09:09,680
intellectual property infringement

216
00:09:08,279 --> 00:09:12,120
Google will say okay you can't use

217
00:09:09,680 --> 00:09:15,959
Transformers or word embeddings good

218
00:09:12,120 --> 00:09:17,640
luck uh OpenAI so um there's this

219
00:09:15,959 --> 00:09:20,760
interesting thing where all of these uh

220
00:09:17,640 --> 00:09:22,760
tech companies now are using patented um

221
00:09:20,760 --> 00:09:24,440
patented things a lot of it Apache

222
00:09:22,760 --> 00:09:26,040
licensed software and so none of them can

223
00:09:24,440 --> 00:09:28,959
sue each other for patents so patents

224
00:09:26,040 --> 00:09:30,560
have become basically mostly worthless

225
00:09:28,959 --> 00:09:35,320
uh in big

226
00:09:30,560 --> 00:09:36,360
tech um moving on um there's also uh GPL

227
00:09:35,320 --> 00:09:39,360
and

228
00:09:36,360 --> 00:09:42,800
CC BY-SA these are licenses where if you

229
00:09:39,360 --> 00:09:45,680
use them you need to reshare under that

230
00:09:42,800 --> 00:09:47,839
license um and so like if you create

231
00:09:45,680 --> 00:09:49,440
some software it's GPL licensed and you

232
00:09:47,839 --> 00:09:52,160
build on it and build something new you

233
00:09:49,440 --> 00:09:54,839
need to release it under the GPL license

234
00:09:52,160 --> 00:09:58,160
so a lot of companies will not

235
00:09:54,839 --> 00:09:59,640
use um will not use GPL software because

236
00:09:58,160 --> 00:10:01,920
that would mean that if they incorporate

237
+00:09:59,640 --> 00:10:04,959
+into their system their whole system

238
00:10:01,920 --> 00:10:06,720
like for example Google uh like all of

239
00:10:04,959 --> 00:10:10,240
Google would have to be GPL licensed and

240
00:10:06,720 --> 00:10:11,720
released uh so um and I'm kind of

241
00:10:10,240 --> 00:10:14,800
simplifying these licenses I'm just

242
00:10:11,720 --> 00:10:17,519
giving you the gist CC BY-SA and sorry CC

243
00:10:14,800 --> 00:10:20,640
licenses are more for data so MIT BSD

244
00:10:17,519 --> 00:10:22,640
Apache and GPL are more for software CC

245
00:10:20,640 --> 00:10:27,640
Creative Commons licenses are for data

246
00:10:22,640 --> 00:10:29,640
so um for example Wikipedia is CC BY-SA

247
00:10:27,640 --> 00:10:33,560
I believe

248
00:10:29,640 --> 00:10:33,560
let me make sure that I'm not lying

249
00:10:41,839 --> 00:10:48,240
there yeah CC BY-SA and so that means that

250
00:10:46,040 --> 00:10:52,200
if you make any derivative work of

251
00:10:48,240 --> 00:10:54,160
Wikipedia you need to share it um the

252
00:10:52,200 --> 00:10:57,040
same way that Wikipedia is uh so you

253
00:10:54,160 --> 00:10:59,760
need to give it the same

254
00:10:57,040 --> 00:11:01,560
license there's also um Creative Commons

255
00:10:59,760 --> 00:11:03,240
non-commercial licenses or software

256
00:11:01,560 --> 00:11:05,519
non-commercial licenses that say you

257
00:11:03,240 --> 00:11:07,079
can't use them for commercial purposes

258
00:11:05,519 --> 00:11:09,279
all the ones above you can use for

259
00:11:07,079 --> 00:11:11,519
commercial purposes once you start

260
00:11:09,279 --> 00:11:13,440
getting down here this is often no

261
00:11:11,519 --> 00:11:15,279
longer called open source so the open

262
00:11:13,440 --> 00:11:16,959
source initiative says anything with a

263
00:11:15,279 --> 00:11:19,839
restriction 
on the way that you can use

264
00:11:16,959 --> 00:11:22,639
it is no longer open source and so that

265
00:11:19,839 --> 00:11:25,360
means if you say you can't use this for

266
00:11:22,639 --> 00:11:27,720
commercial purposes or you can't use

267
00:11:25,360 --> 00:11:29,639
this in military systems for example

268
00:11:27,720 --> 00:11:32,320
which some language models say that

269
00:11:29,639 --> 00:11:33,680
nowadays those are no longer called open

270
00:11:32,320 --> 00:11:37,040
source according to the open source

271
00:11:33,680 --> 00:11:40,320
initiative so that's a thing to know

272
00:11:37,040 --> 00:11:42,920
about then separately uh there are these

273
00:11:40,320 --> 00:11:45,279
licenses that a lot of people like meta

274
00:11:42,920 --> 00:11:48,160
or hugging face come up with for their

275
00:11:45,279 --> 00:11:50,360
um for their models recently so the

276
00:11:48,160 --> 00:11:51,320
llama license um how many people are

277
00:11:50,360 --> 00:11:54,200
using

278
00:11:51,320 --> 00:11:56,519
llama in your projects how many people

279
00:11:54,200 --> 00:11:56,519
read the

280
00:11:57,000 --> 00:12:00,880
license so um are you sure you can use

281
00:11:59,639 --> 00:12:04,959
it in your

282
00:12:00,880 --> 00:12:06,839
project uh so you're you're probably in

283
00:12:04,959 --> 00:12:09,000
luck in your project if you're using it

284
00:12:06,839 --> 00:12:11,560
the llama license you can read into it to

285
00:12:09,000 --> 00:12:13,519
see what it actually allows but it has

286
00:12:11,560 --> 00:12:16,399
um the original llama license has some

287
00:12:13,519 --> 00:12:18,440
interesting uh things number one you

288
00:12:16,399 --> 00:12:21,079
cannot use llama to train any language

289
00:12:18,440 --> 00:12:23,000
model that is not derived from llama so

290
00:12:21,079 --> 00:12:26,120
+you can't generate data from llama and

291
00:12:23,000 --> 00:12:30,040
train a model that's not allowed according to

292
00:12:26,120 --> 00:12:32,440
the llama license um another thing is uh you

293
00:12:30,040 --> 00:12:34,680
can't use it for military purposes so

294
00:12:32,440 --> 00:12:36,160
you can't use it um in building a

295
00:12:34,680 --> 00:12:37,639
missile system or something like that

296
00:12:36,160 --> 00:12:41,440
hopefully none of you are doing that for

297
00:12:37,639 --> 00:12:42,920
your project um and you also need to get

298
00:12:41,440 --> 00:12:45,399
a license from meta if you have

299
00:12:42,920 --> 00:12:48,000
something more than 300 million active

300
00:12:45,399 --> 00:12:53,800
users uh in your social network service

301
00:12:48,000 --> 00:12:56,079
so if you're Google or um you know X or

302
00:12:53,800 --> 00:12:57,680
Twitter or you know whatever else you

303
00:12:56,079 --> 00:13:00,519
need to get a license from meta before

304
00:12:57,680 --> 00:13:02,079
you can start using it so

305
00:13:00,519 --> 00:13:03,240
basically they created that license so

306
00:13:02,079 --> 00:13:06,720
their competitors don't take their

307
00:13:03,240 --> 00:13:08,959
language model and just use it for free

308
00:13:06,720 --> 00:13:11,000
um and then the final thing is no

309
00:13:08,959 --> 00:13:13,240
license so like let's say you have some

310
00:13:11,000 --> 00:13:15,560
code that you upload to GitHub and you

311
00:13:13,240 --> 00:13:17,839
don't put a license on your code this

312
00:13:15,560 --> 00:13:20,880
means that you have only agreed to the

313
00:13:17,839 --> 00:13:23,360
GitHub licensing terms which means that

314
00:13:20,880 --> 00:13:26,199
actually nobody can use your code they

315
00:13:23,360 --> 00:13:30,079
can view it possibly but they can't uh

316
00:13:26,199 --> 00:13:31,720
+download it or use it they can't like um

317
00:13:30,079 --> 00:13:34,160
they can't incorporate it into their own

318
00:13:31,720 --> 00:13:36,000
system so actually if you release

319
00:13:34,160 --> 00:13:39,120
research code I would highly encourage

320
00:13:36,000 --> 00:13:41,120
you to use MIT or BSD um or one of these

321
00:13:39,120 --> 00:13:43,040
permissive licenses so other people can

322
00:13:41,120 --> 00:13:45,720
use it and follow up and your code can

323
00:13:43,040 --> 00:13:46,920
be impactful so um this is an important

324
00:13:45,720 --> 00:13:49,040
thing to know about there's obviously

325
00:13:46,920 --> 00:13:52,959
lots more to know

326
00:13:49,040 --> 00:13:56,440
about um so then my question my next

327
00:13:52,959 --> 00:13:57,360
question is uh what is most of the text

328
00:13:56,440 --> 00:13:59,560
on the

329
00:13:57,360 --> 00:14:01,160
internet the majority of the text on the

330
00:13:59,560 --> 00:14:04,839
internet falls into one of these

331
00:14:01,160 --> 00:14:04,839
categories any idea which

332
00:14:05,120 --> 00:14:12,759
one so Wikipedia is CC BY-SA what what

333
00:14:09,040 --> 00:14:12,759
about uh most of the text

334
00:14:14,199 --> 00:14:18,959
on yeah it's not maybe not no license

335
00:14:16,880 --> 00:14:21,680
but all rights reserved so basically you

336
00:14:18,959 --> 00:14:23,079
can't use it without having permission

337
00:14:21,680 --> 00:14:27,639
from the copyright

338
00:14:23,079 --> 00:14:30,639
holders and so because of that

339
00:14:27,639 --> 00:14:33,800
um the idea of fair use becomes very

340
00:14:30,639 --> 00:14:35,320
important this is a US-specific thing

341
00:14:33,800 --> 00:14:36,880
and the rules in other countries are

342
00:14:35,320 --> 00:14:39,199
different they're not the same as the US

343
00:14:36,880 --> 00:14:41,680
but in 
the US uh we have rules about

344
00:14:39,199 --> 00:14:44,600
where you can use particular types of

345
00:14:41,680 --> 00:14:46,279
data so the US fair use doctrine is

346
00:14:44,600 --> 00:14:50,240
basically that you can use copyrighted

347
00:14:46,279 --> 00:14:52,920
material in some cases so

348
00:14:50,240 --> 00:14:56,279
um as a gross

349
00:14:52,920 --> 00:15:01,800
simplification um quoting a small amount

350
00:14:56,279 --> 00:15:04,320
of material in like a textbook or slides

351
00:15:01,800 --> 00:15:07,079
or something like this this is likely

352
00:15:04,320 --> 00:15:10,040
okay um there are going to be very few

353
00:15:07,079 --> 00:15:11,399
cases where this is not going to um you

354
00:15:10,040 --> 00:15:12,720
know where you're going to get in

355
00:15:11,399 --> 00:15:15,600
trouble for

356
00:15:12,720 --> 00:15:18,000
this another important uh judgment

357
00:15:15,600 --> 00:15:19,600
criterion for whether this is fair use is

358
00:15:18,000 --> 00:15:22,440
that it doesn't diminish the value of

359
00:15:19,600 --> 00:15:25,120
the original work so if I quote

360
00:15:22,440 --> 00:15:27,759
something in my like let's say I quoted

361
00:15:25,120 --> 00:15:30,839
all of Harry Potter in a textbook and

362
00:15:27,759 --> 00:15:32,600
then I sold my textbook for $3 anybody

363
00:15:30,839 --> 00:15:34,279
could take my textbook and read all of

364
00:15:32,600 --> 00:15:35,800
Harry Potter for $3 and the money

365
00:15:34,279 --> 00:15:37,480
wouldn't go to J.K. Rowling and that would

366
00:15:35,800 --> 00:15:41,040
not be fair use because it's diminishing

367
00:15:37,480 --> 00:15:42,920
the value of the original similarly if I create a big

368
00:15:41,040 --> 00:15:44,319
corpus of books and I upload them to a

369
00:15:42,920 --> 00:15:46,079
site where anyone can browse them that

370
00:15:44,319 
--> 00:15:48,319
+would also probably not be fair use

371
00:15:46,079 --> 00:15:49,160
because the authors would not get paid

372
00:15:48,319 --> 00:15:52,319
for

373
00:15:49,160 --> 00:15:54,480
it another judgment criterion is whether

374
00:15:52,319 --> 00:15:57,399
it's for non-commercial purposes or not

375
00:15:54,480 --> 00:15:59,639
so like in universities we're actually

376
00:15:57,399 --> 00:16:01,120
probably held to a more

377
00:15:59,639 --> 00:16:03,000
lenient standard of fair use if we're

378
00:16:01,120 --> 00:16:06,120
doing non-commercial research compared

379
00:16:03,000 --> 00:16:08,519
to a company that's doing it

380
00:16:06,120 --> 00:16:11,480
so um most data on the Internet is

381
00:16:08,519 --> 00:16:13,279
copyrighted so right now most model

382
00:16:11,480 --> 00:16:16,240
training not all model training but most

383
00:16:13,279 --> 00:16:18,680
model training is done um assuming fair

384
00:16:16,240 --> 00:16:21,800
use which means that training an AI

385
00:16:18,680 --> 00:16:25,800
model on copyrighted

386
00:16:21,800 --> 00:16:29,480
data is number one it cannot reproduce

387
00:16:25,800 --> 00:16:32,240
the material easily so it's instead of

388
00:16:29,480 --> 00:16:33,599
quoting material directly it's kind of

389
00:16:32,240 --> 00:16:35,880
combining the material together to

390
00:16:33,599 --> 00:16:37,519
create a new thing they're saying it

391
00:16:35,880 --> 00:16:40,639
doesn't diminish the commercial value of

392
00:16:37,519 --> 00:16:42,360
the original uh data um and then the

393
00:16:40,639 --> 00:16:44,839
non-commercial purposes is maybe a

394
00:16:42,360 --> 00:16:47,240
secondary concern since the first two

395
00:16:44,839 --> 00:16:50,600
hold um but there are lawsuits about

396
00:16:47,240 --> 00:16:52,360
this and so um this is a clip from The

397
+00:16:50,600 --> 00:16:55,560
+New York Times where the New York Times

398
00:16:52,360 --> 00:16:58,279
is suing OpenAI and Microsoft over uh

399
00:16:55,560 --> 00:16:59,759
them training on New York Times articles

400
00:16:58,279 --> 00:17:02,040
and they did do a lot of things like

401
00:16:59,759 --> 00:17:05,799
they demonstrate that you can get uh GPT-4

402
00:17:02,040 --> 00:17:08,319
to reproduce uh like um New York Times

403
00:17:05,799 --> 00:17:11,480
articles and they also argue that people

404
00:17:08,319 --> 00:17:12,880
are using this GPT-4 as a source of news

405
00:17:11,480 --> 00:17:14,079
instead of going to the New York Times

406
00:17:12,880 --> 00:17:15,959
site so they're losing money from

407
00:17:14,079 --> 00:17:19,199
advertising and like other things

408
00:17:15,959 --> 00:17:21,679
like that um another example is GitHub

409
00:17:19,199 --> 00:17:24,000
Copilot was sued by people who uh

410
00:17:21,679 --> 00:17:26,439
uploaded software to GitHub and said

411
00:17:24,000 --> 00:17:29,039
that uh basically GitHub didn't have the

412
00:17:26,439 --> 00:17:32,400
right to use it to profit from it and

413
00:17:29,039 --> 00:17:34,799
diminish their uh you know their money

414
00:17:32,400 --> 00:17:37,520
so notably uh on this slide I'm using

415
00:17:34,799 --> 00:17:42,039
fair use I don't know if you've noticed

416
00:17:37,520 --> 00:17:44,679
like I copy-pasted an image from

417
00:17:42,039 --> 00:17:46,360
somebody's uh you know website and used

418
00:17:44,679 --> 00:17:48,520
it here that's copyrighted material but

419
00:17:46,360 --> 00:17:49,640
I'm using it because I'm quoting a small

420
00:17:48,520 --> 00:17:52,440
amount of material and I'm not

421
00:17:49,640 --> 00:17:54,360
diminishing the original value so um like

422
00:17:52,440 --> 00:17:56,320
fair use is very 
ubiquitous it's very

423
00:17:54,360 --> 00:17:58,480
important so we can do things like this

424
00:17:56,320 --> 00:18:00,840
but also um it's currently under dispute

425
00:17:58,480 --> 00:18:00,840
with these

426
00:18:01,280 --> 00:18:07,799
models so then another question is why

427
00:18:04,360 --> 00:18:12,520
restrict model access why do we number

428
00:18:07,799 --> 00:18:14,320
one make models closed number two um you

429
00:18:12,520 --> 00:18:16,159
know maybe not even describe what we did

430
00:18:14,320 --> 00:18:18,880
in our models and I think there's three

431
00:18:16,159 --> 00:18:21,360
main reasons the first reason is

432
00:18:18,880 --> 00:18:23,480
commercial concerns and so they want to

433
00:18:21,360 --> 00:18:25,760
make money from the models so OpenAI

434
00:18:23,480 --> 00:18:27,520
makes money from the OpenAI API Gemini

435
00:18:25,760 --> 00:18:29,480
makes uh sorry Google makes money from

436
00:18:27,520 --> 00:18:31,799
the Gemini API

437
00:18:29,480 --> 00:18:33,720
um and anthropic makes money from the

438
00:18:31,799 --> 00:18:34,760
Claude API these are all models that I'm

439
00:18:33,720 --> 00:18:37,640
going to talk

440
00:18:34,760 --> 00:18:39,440
about number two safety I I think there

441
00:18:37,640 --> 00:18:41,640
are very legitimate concerns where if

442
00:18:39,440 --> 00:18:43,840
you release strong models people might

443
00:18:41,640 --> 00:18:47,200
use them for bad things so you know

444
00:18:43,840 --> 00:18:49,120
creating fake content online or uh doing

445
00:18:47,200 --> 00:18:50,720
spear phishing attacks against people and

446
00:18:49,120 --> 00:18:52,600
trying to you know scam them out of

447
00:18:50,720 --> 00:18:55,600
money or things like that so I think

448
00:18:52,600 --> 00:18:57,240
there are legitimate concerns about this

449
00:18:55,600 --> 
00:18:58,880
+and then the final one is legal
+
+450
+00:18:57,240 --> 00:19:01,520
+liability so training models on
+
+451
+00:18:58,880 --> 00:19:03,640
+copyrighted data is a legal gray area as
+
+452
+00:19:01,520 --> 00:19:05,159
+I just mentioned so they don't want to
+
+453
+00:19:03,640 --> 00:19:07,159
+say what data they trained on because if
+
+454
+00:19:05,159 --> 00:19:10,240
+they say what data they trained on then
+
+455
+00:19:07,159 --> 00:19:11,960
+they might get sued so these are the
+
+456
+00:19:10,240 --> 00:19:14,960
+three main
+
+457
+00:19:11,960 --> 00:19:17,960
+concerns so
+
+458
+00:19:14,960 --> 00:19:19,480
+um anyway this is a preface and
+
+459
+00:19:17,960 --> 00:19:23,360
+then I want to go into like the actual
+
+460
+00:19:19,480 --> 00:19:23,360
+models but are there any questions about
+
+461
+00:19:24,679 --> 00:19:30,280
+this so if any of you
+
+462
+00:19:27,280 --> 00:19:31,720
+are working at a company or starting a
+
+463
+00:19:30,280 --> 00:19:33,120
+company thinking about working at a
+
+464
+00:19:31,720 --> 00:19:35,440
+company or starting a company this is
+
+465
+00:19:33,120 --> 00:19:37,320
+something you should be aware of um you
+
+466
+00:19:35,440 --> 00:19:39,720
+should also be aware of the fact that
+
+467
+00:19:37,320 --> 00:19:42,360
+you know OpenAI has been doing sketchy
+
+468
+00:19:39,720 --> 00:19:46,640
+things for a long time and look where
+
+469
+00:19:42,360 --> 00:19:48,440
+they are so you know it's uh like
+
+470
+00:19:46,640 --> 00:19:51,400
+this is very much a legal gray area and
+
+471
+00:19:48,440 --> 00:19:53,880
+people are uh moving through that
+
+472
+00:19:51,400 --> 00:19:55,640
+gray area but anyway it's worth knowing
+
+473
+00:19:53,880 --> 00:19:59,480
+that so next I'm going to talk about
+
+474
+00:19:55,640 --> 00:20:00,679
+open models um so first a bird's-eye view
+
+475
+00:19:59,480 --> 00:20:02,600
+I'm going to talk about five different
+
+476
+00:20:00,679 --> 00:20:04,080
+models and I picked them for a reason
+
+477
+00:20:02,600 --> 00:20:06,440
+the first two are because they're open
+
+478
+00:20:04,080 --> 00:20:08,159
+source and fully reproducible namely
+
+479
+00:20:06,440 --> 00:20:10,360
+Pythia
+
+480
+00:20:08,159 --> 00:20:11,919
+and OLMo and the reason why I want to talk
+
+481
+00:20:10,360 --> 00:20:13,120
+about these is we know everything about
+
+482
+00:20:11,919 --> 00:20:14,679
+them including what data they were
+
+483
+00:20:13,120 --> 00:20:16,799
+trained on um what their training
+
+484
+00:20:14,679 --> 00:20:19,080
+procedures are you can download all the
+
+485
+00:20:16,799 --> 00:20:21,000
+stuff so you can kind of know uh
+
+486
+00:20:19,080 --> 00:20:24,840
+exactly what goes into making a strong
+
+487
+00:20:21,000 --> 00:20:26,520
+model um Pythia uh actually has many
+
+488
+00:20:24,840 --> 00:20:28,159
+sizes and checkpoints which is pretty
+
+489
+00:20:26,520 --> 00:20:30,919
+interesting and OLMo is maybe the strongest
+
+490
+00:20:28,159 --> 00:20:32,559
+reproducible model at the moment um then
+
+491
+00:20:30,919 --> 00:20:34,120
+we have open weights models and these
+
+492
+00:20:32,559 --> 00:20:35,520
+are models that aren't fully open they
+
+493
+00:20:34,120 --> 00:20:38,679
+don't disclose everything they don't
+
+494
+00:20:35,520 --> 00:20:40,760
+release their training data uh or
+
+495
+00:20:38,679 --> 00:20:43,799
+code um but I'm going to talk about
+
+496
+00:20:40,760 --> 00:20:46,520
+Llama 2 which is the most popular um
+
+497
+00:20:43,799 --> 00:20:48,280
+it's also heavily safety-tuned Mistral
+
+498
+00:20:46,520 --> 00:20:50,840
+and Mixtral which is a strong and fast
+
+499
+00:20:48,280 --> 00:20:53,200
+model um it's somewhat multilingual and
+
+500
+00:20:50,840 --> 00:20:55,200
+also Qwen which is a very uh strong
+
+501
+00:20:53,200 --> 00:20:57,520
+model it's more multilingual and
+
+502
+00:20:55,200 --> 00:21:00,600
+specifically
it's good in English and
+
+503
+00:20:57,520 --> 00:21:03,440
+Chinese because it was trained on data like
+
+504
+00:21:00,600 --> 00:21:04,720
+that so first going into Pythia for each of
+
+505
+00:21:03,440 --> 00:21:06,159
+them I'm going to give an overview and
+
+506
+00:21:04,720 --> 00:21:08,880
+then talk about some interesting points
+
+507
+00:21:06,159 --> 00:21:12,320
+about them so Pythia was created by
+
+508
+00:21:08,880 --> 00:21:14,799
+EleutherAI EleutherAI is one of the first
+
+509
+00:21:12,320 --> 00:21:16,279
+um kind of open-source AI organizations
+
+510
+00:21:14,799 --> 00:21:18,720
+they've created a huge number of really
+
+511
+00:21:16,279 --> 00:21:21,480
+useful things including training code
+
+512
+00:21:18,720 --> 00:21:25,279
+models training data sets and also
+
+513
+00:21:21,480 --> 00:21:28,080
+evaluation that's used pretty widely um
+
+514
+00:21:25,279 --> 00:21:29,760
+the goal of Pythia was basically jointly
+
+515
+00:21:28,080 --> 00:21:32,159
+understanding model training dynamics
+
+516
+00:21:29,760 --> 00:21:36,320
+and scaling and so from that point of
+
+517
+00:21:32,159 --> 00:21:39,120
+view um they released eight model sizes
+
+518
+00:21:36,320 --> 00:21:41,880
+from 70 million parameters to 12 billion
+
+519
+00:21:39,120 --> 00:21:44,960
+parameters for each model size they have
+
+520
+00:21:41,880 --> 00:21:47,440
+154 checkpoints throughout the training
+
+521
+00:21:44,960 --> 00:21:52,880
+process um so they basically trained on
+
+522
+00:21:47,440 --> 00:21:55,960
+uh 300 billion uh tokens
+
+523
+00:21:52,880 --> 00:21:57,400
+and uh did checkpoints you know
+
+524
+00:21:55,960 --> 00:21:59,000
+periodically during that training
+
+525
+00:21:57,400 --> 00:22:02,400
+process so you can do interesting things
+
+526
+00:21:59,000 --> 00:22:04,400
+like say uh how quickly do small models
+
+527
+00:22:02,400 --> 00:22:06,919
+learn things how quickly do large models
+
+528
+00:22:04,400 --> 
00:22:09,480
+learn things and other stuff like
+
+529
+00:22:06,919 --> 00:22:10,760
+that in terms of the architecture as I
+
+530
+00:22:09,480 --> 00:22:12,760
+mentioned at the very beginning the
+
+531
+00:22:10,760 --> 00:22:14,799
+architectures are actually very similar
+
+532
+00:22:12,760 --> 00:22:17,840
+between them so it's almost easier to
+
+533
+00:22:14,799 --> 00:22:21,080
+point out their differences than uh
+
+534
+00:22:17,840 --> 00:22:22,559
+their similarities um
+
+535
+00:22:21,080 --> 00:22:25,400
+actually one thing that's not on the
+
+536
+00:22:22,559 --> 00:22:27,159
+slide is um I mainly focused on the
+
+537
+00:22:25,400 --> 00:22:29,080
+seven billion models because almost
+
+538
+00:22:27,159 --> 00:22:30,320
+everybody trains a seven billion model
+
+539
+00:22:29,080 --> 00:22:32,720
+it's just kind of like one of the
+
+540
+00:22:30,320 --> 00:22:34,640
+standard sizes it's the smallest size of
+
+541
+00:22:32,720 --> 00:22:36,559
+Llama it's one of the largest it's the
+
+542
+00:22:34,640 --> 00:22:40,240
+largest size of OLMo and one of the largest
+
+543
+00:22:36,559 --> 00:22:46,880
+sizes of Pythia 7 billion models are
+
+544
+00:22:40,240 --> 00:22:52,880
+generally um 4096 wide 32 uh
+
+545
+00:22:46,880 --> 00:22:52,880
+deep uh 32 attention heads and
+
+546
+00:22:54,200 --> 00:23:01,159
+their um hidden layer size is
+
+547
+00:22:57,400 --> 00:23:04,400
+about like 8/3 of the size of this
+
+548
+00:23:01,159 --> 00:23:07,360
+and this is kind of a standard Llama 7B
+
+549
+00:23:04,400 --> 00:23:09,240
+architecture um as you scale up to
+
+550
+00:23:07,360 --> 00:23:11,520
+larger sizes you just increase the
+
+551
+00:23:09,240 --> 00:23:13,880
+number of layers you increase the
+
+552
+00:23:11,520 --> 00:23:16,080
+width and other things like that so
+
+553
+00:23:13,880 --> 00:23:19,039
+that's very standard um the other
+
+554
+00:23:16,080 --> 00:23:21,320
+standard is
everybody uses a Transformer
+
+555
+00:23:19,039 --> 00:23:24,440
+um everybody uses pre-layer norm like I
+
+556
+00:23:21,320 --> 00:23:27,120
+talked about before everybody uses RoPE
+
+557
+00:23:24,440 --> 00:23:29,520
+embeddings um almost everybody uses a SwiGLU
+
+558
+00:23:27,120 --> 00:23:30,919
+activation so this is just kind of
+
+559
+00:23:29,520 --> 00:23:31,880
+the standard recipe that almost
+
+560
+00:23:30,919 --> 00:23:35,120
+everybody
+
+561
+00:23:31,880 --> 00:23:37,000
+uses um where things start to change a
+
+562
+00:23:35,120 --> 00:23:38,559
+little bit between the architectures
+
+563
+00:23:37,000 --> 00:23:40,559
+which arguably might not be very
+
+564
+00:23:38,559 --> 00:23:44,679
+important is how long is the context
+
+565
+00:23:40,559 --> 00:23:48,320
+length so um Pythia is 2K context
+
+566
+00:23:44,679 --> 00:23:51,360
+compared to Llama 2's 4K context
+
+567
+00:23:48,320 --> 00:23:55,000
+um actually Llama 1 is 1K context or
+
+568
+00:23:51,360 --> 00:24:00,000
+sorry Llama 1 is 2K
+
+569
+00:23:55,000 --> 00:24:02,120
+context and Llama 2 is 4K context um
+
+570
+00:24:00,000 --> 00:24:03,880
+another thing is where do they put
+
+571
+00:24:02,120 --> 00:24:06,240
+biases in the model most people don't
+
+572
+00:24:03,880 --> 00:24:08,200
+use biases uh anywhere but sometimes
+
+573
+00:24:06,240 --> 00:24:09,840
+they put them in various places the
+
+574
+00:24:08,200 --> 00:24:11,919
+other thing is a variety of layer norm
+
+575
+00:24:09,840 --> 00:24:13,559
+that people use and Pythia was using
+
+576
+00:24:11,919 --> 00:24:16,240
+standard parametric layer norm but
+
+577
+00:24:13,559 --> 00:24:18,000
+gradually people are stepping back from
+
+578
+00:24:16,240 --> 00:24:21,360
+that and they're using like RMS norm or
+
+579
+00:24:18,000 --> 00:24:22,880
+even non-parametric layer norms so um small
+
+580
+00:24:21,360 --> 00:24:25,559
+architecture differences but almost
+
+581
+00:24:22,880 --> 00:24:29,240
+everybody uses something pretty
+
+582
+00:24:25,559 --> 00:24:31,960
+similar um the data this was trained on
+
+583
+00:24:29,240 --> 00:24:34,600
+300 billion tokens of the Pile uh which
+
+584
+00:24:31,960 --> 00:24:37,440
+is on the next slide but one interesting
+
+585
+00:24:34,600 --> 00:24:39,000
+thing is that they also did a deduplicated
+
+586
+00:24:37,440 --> 00:24:43,320
+training run on
+
+587
+00:24:39,000 --> 00:24:47,679
+270 ah sorry 207
+
+588
+00:24:43,320 --> 00:24:50,559
+billion tokens and um the idea is that
+
+589
+00:24:47,679 --> 00:24:53,039
+they um they wanted to test how
+
+590
+00:24:50,559 --> 00:24:54,919
+important it is to deduplicate how much do
+
+591
+00:24:53,039 --> 00:24:56,279
+you gain by deduplicating in terms of
+
+592
+00:24:54,919 --> 00:24:59,559
+training
+
+593
+00:24:56,279 --> 00:25:01,520
+efficiency and um
+
+594
+00:24:59,559 --> 00:25:04,760
+they have different learning rates for
+
+595
+00:25:01,520 --> 00:25:08,640
+different model sizes the 7B model is uh
+
+596
+00:25:04,760 --> 00:25:11,760
+1.2e-4 in contrast Llama is
+
+597
+00:25:08,640 --> 00:25:13,120
+3e-4 so this is a potentially big
+
+598
+00:25:11,760 --> 00:25:16,840
+change because the learning rate is
+
+599
+00:25:13,120 --> 00:25:18,880
+actually less than half the size here um as for the
+
+600
+00:25:16,840 --> 00:25:20,559
+batch size they use 2 million tokens and
+
+601
+00:25:18,880 --> 00:25:23,600
+actually Llama 2 uses 4 million
+
+602
+00:25:20,559 --> 00:25:26,520
+tokens for the batch size so um there
+
+603
+00:25:23,600 --> 00:25:29,000
+are some small differences
+
+604
+00:25:26,520 --> 00:25:31,480
+there so next I'd like to talk
+
+605
+00:25:29,000 --> 00:25:33,760
+about the Pile um this is kind of the
+
+606
+00:25:31,480 --> 00:25:36,279
+original open data set for training
+
+607
+00:25:33,760 --> 00:25:37,960
+large language models um that being said
+
+608
+00:25:36,279 --> 00:25:42,159
+it's a really nice data set made out of
+
+609
+00:25:37,960 --> 00:25:47,039
+lots of uh different types of data and
+
+610
+00:25:42,159 --> 00:25:49,960
+namely it contains academic data so
+
+611
+00:25:47,039 --> 00:25:52,559
+that includes things like PubMed arXiv
+
+612
+00:25:49,960 --> 00:25:55,240
+FreeLaw the US Patent Office other
+
+613
+00:25:52,559 --> 00:25:57,000
+stuff like that it also contains
+
+614
+00:25:55,240 --> 00:26:00,080
+internet data so this is data that's
+
+615
+00:25:57,000 --> 00:26:02,840
+just scraped from parts of the internet
+
+616
+00:26:00,080 --> 00:26:05,799
+but also Stack Exchange and
+
+617
+00:26:02,840 --> 00:26:09,480
+Wikipedia um it also has some prose so
+
+618
+00:26:05,799 --> 00:26:12,200
+these are um like book data sets it has
+
+619
+00:26:09,480 --> 00:26:15,640
+some code data sets and it has some like
+
+620
+00:26:12,200 --> 00:26:18,799
+subtitle dialog data sets in it so this
+
+621
+00:26:15,640 --> 00:26:22,399
+overall is 800 gigabytes or about 300
+
+622
+00:26:18,799 --> 00:26:22,399
+billion tokens according to
+
+623
+00:26:23,360 --> 00:26:28,080
+the tokenizer so some of the findings from the
+
+624
+00:26:25,760 --> 00:26:30,919
+Pythia paper in addition to just being
+
+625
+00:26:28,080 --> 00:26:33,399
+like one of the original strong uh open
+
+626
+00:26:30,919 --> 00:26:36,279
+language models is they have some
+
+627
+00:26:33,399 --> 00:26:38,600
+interesting analysis into um model
+
+628
+00:26:36,279 --> 00:26:40,960
+memorization and how quickly models
+
+629
+00:26:38,600 --> 00:26:44,080
+learn uh based on the number of tokens
+
+630
+00:26:40,960 --> 00:26:45,520
+that you show them and this graph is
+
+631
+00:26:44,080 --> 00:26:47,520
+maybe a little bit hard to see from the
+
+632
+00:26:45,520 --> 00:26:49,440
+back so I'll interpret it the left side
+
+633
+00:26:47,520 --> 00:26:50,840
+is one of their smaller models 160
+
+634
+00:26:49,440 --> 00:26:54,880
+million the right side is their biggest
+
+635
+00:26:50,840 --> 00:26:57,799
+model 12 billion um the different lines
+
+636
+00:26:54,880 --> 00:26:58,840
+here are different steps of the training
+
+637
+00:26:57,799 --> 00:27:03,120
+process
+
+638
+00:26:58,840 --> 00:27:09,640
+so like uh 13,000 steps uh
+
+639
+00:27:03,120 --> 00:27:13,840
+30 sorry 39,000 steps and uh etc etc and
+
+640
+00:27:09,640 --> 00:27:18,240
+the x-axis here is the frequency of a
+
+641
+00:27:13,840 --> 00:27:21,679
+fact in the
+
+642
+00:27:18,240 --> 00:27:24,640
+training data and the y-axis is question
+
+643
+00:27:21,679 --> 00:27:29,159
+answering accuracy about that fact and
+
+644
+00:27:24,640 --> 00:27:30,919
+so what this is basically showing is
+
+645
+00:27:29,159 --> 00:27:35,679
+as you scale up the
+
+646
+00:27:30,919 --> 00:27:38,520
+model um the larger models learn faster
+
+647
+00:27:35,679 --> 00:27:41,120
+um up to a point so like right here you
+
+648
+00:27:38,520 --> 00:27:44,519
+see the 2.8 billion model is about the
+
+649
+00:27:41,120 --> 00:27:46,080
+same as the 12 billion model at earlier
+
+650
+00:27:44,519 --> 00:27:48,080
+parts of the training
+
+651
+00:27:46,080 --> 00:27:51,000
+process but as you get later in the
+
+652
+00:27:48,080 --> 00:27:54,200
+training process the 12 billion model is
+
+653
+00:27:51,000 --> 00:27:57,279
+like memorizing and being able to recall
+
+654
+00:27:54,200 --> 00:27:58,840
+more facts uh so like right at the very
+
+655
+00:27:57,279 --> 00:28:02,519
+beginning you need to scale up to about
+
+656
+00:27:58,840 --> 00:28:05,840
+2.8 billion to learn efficiently uh but
+
+657
+00:28:02,519 --> 00:28:07,799
+at the end this model is like better uh
+
+658
+00:28:05,840 --> 00:28:10,399
+further on
+
+659
+00:28:07,799 --> 00:28:12,000
+so this is really nice all of this all
+
+660
+00:28:10,399 --> 00:28:14,240
+of these checkpoints all this
data is
+
+661
+00:28:12,000 --> 00:28:15,840
+open they even made the data loaders so
+
+662
+00:28:14,240 --> 00:28:17,360
+it's reproducible so you can look at the
+
+663
+00:28:15,840 --> 00:28:19,559
+actual data that the model was trained
+
+664
+00:28:17,360 --> 00:28:21,000
+on um at each of the checkpoints so if
+
+665
+00:28:19,559 --> 00:28:24,320
+you want to do this sort of analysis
+
+666
+00:28:21,000 --> 00:28:27,120
+this is a good set of um models to look
+
+667
+00:28:24,320 --> 00:28:28,720
+at um another thing that they did is
+
+668
+00:28:27,120 --> 00:28:31,120
+they actually did interventions on the
+
+669
+00:28:28,720 --> 00:28:35,640
+data so they um tried to intervene on
+
+670
+00:28:31,120 --> 00:28:37,279
+the data to modify it because uh male or
+
+671
+00:28:35,640 --> 00:28:38,840
+masculine pronouns were much more
+
+672
+00:28:37,279 --> 00:28:42,000
+frequent than feminine pronouns in the
+
+673
+00:28:38,840 --> 00:28:43,919
+data so they intervened on the data um
+
+674
+00:28:42,000 --> 00:28:45,559
+to try to balance out the distribution
+
+675
+00:28:43,919 --> 00:28:48,000
+of masculine and feminine pronouns and
+
+676
+00:28:45,559 --> 00:28:49,559
+demonstrated that the model became less
+
+677
+00:28:48,000 --> 00:28:52,080
+biased towards generating masculine
+
+678
+00:28:49,559 --> 00:28:55,480
+pronouns later so they also were able to
+
+679
+00:28:52,080 --> 00:28:55,480
+do those sorts of intervention
+
+680
+00:28:55,919 --> 00:29:00,039
+studies um any questions about
+
+681
+00:29:00,519 --> 00:29:07,919
+Pythia okay um next I want to go to OLMo OLMo is
+
+682
+00:29:04,720 --> 00:29:10,279
+a more recent model um Pythia I think
+
+683
+00:29:07,919 --> 00:29:13,200
+came out around a year ago OLMo is very
+
+684
+00:29:10,279 --> 00:29:15,440
+recent about a month ago and um this was
+
+685
+00:29:13,200 --> 00:29:18,360
+created by AI2 the Allen Institute for
+
+686
+00:29:15,440 --> 00:29:20,440
+AI one thing
you'll notice is the two um
+
+687
+00:29:18,360 --> 00:29:22,279
+completely open models that I'm talking
+
+688
+00:29:20,440 --> 00:29:24,799
+about both came from nonprofit
+
+689
+00:29:22,279 --> 00:29:28,640
+organizations um so EleutherAI is
+
+690
+00:29:24,799 --> 00:29:30,039
+nonprofit uh AI2 is nonprofit so uh
+
+691
+00:29:28,640 --> 00:29:31,519
+they're maybe a little bit less worried
+
+692
+00:29:30,039 --> 00:29:34,919
+about people trying to sue them for lots
+
+693
+00:29:31,519 --> 00:29:36,720
+of money for fair use violations uh so
+
+694
+00:29:34,919 --> 00:29:38,120
+uh that's the cynical point of view the
+
+695
+00:29:36,720 --> 00:29:39,679
+non-cynical point of view is they
+
+696
+00:29:38,120 --> 00:29:42,279
+have nothing to profit by creating a
+
+697
+00:29:39,679 --> 00:29:44,240
+better model uh by having other people
+
+698
+00:29:42,279 --> 00:29:47,039
+create a better model so um they're
+
+699
+00:29:44,240 --> 00:29:50,840
+willing to do this for open uh and good
+
+700
+00:29:47,039 --> 00:29:54,080
+science um their goal is better science
+
+701
+00:29:50,840 --> 00:29:55,880
+of state-of-the-art LMs and uh some of the
+
+702
+00:29:54,080 --> 00:29:57,600
+unique features are top performance of a
+
+703
+00:29:55,880 --> 00:29:59,840
+fully documented model and they also
+
+704
+00:29:57,600 --> 00:30:02,960
+have instruction-tuned models
+
+705
+00:29:59,840 --> 00:30:04,960
+etc looking at the parameters um
+
+706
+00:30:02,960 --> 00:30:06,240
+basically similar to Llama the one big
+
+707
+00:30:04,960 --> 00:30:08,440
+difference is they're using
+
+708
+00:30:06,240 --> 00:30:10,440
+non-parametric layer norm instead of RMS
+
+709
+00:30:08,440 --> 00:30:13,640
+norm so this is basically layer norm
+
+710
+00:30:10,440 --> 00:30:15,960
+with no parameters whatsoever um they
+
+711
+00:30:13,640 --> 00:30:18,880
+didn't super clearly justify why
+
+712
+00:30:15,960 --> 00:30:21,760
+they decided to do this
one difference
+
+713
+00:30:18,880 --> 00:30:25,519
+from Pythia uh this was actually trained on
+
+714
+00:30:21,760 --> 00:30:29,559
+2.46 trillion tokens uh so compare this
+
+715
+00:30:25,519 --> 00:30:32,600
+to uh to Pythia which was trained on 300
+
+716
+00:30:29,559 --> 00:30:34,480
+billion tokens and so they basically
+
+717
+00:30:32,600 --> 00:30:36,120
+trained it for a lot longer they trained
+
+718
+00:30:34,480 --> 00:30:37,960
+it on something called the Dolma corpus
+
+719
+00:30:36,120 --> 00:30:41,480
+which they also created at
+
+720
+00:30:37,960 --> 00:30:44,279
+AI2 um actually I think this might be
+
+721
+00:30:41,480 --> 00:30:47,279
+wrong uh so just ignore that that was a
+
+722
+00:30:44,279 --> 00:30:49,760
+copy-paste typo so um they
+
+723
+00:30:47,279 --> 00:30:52,039
+always use 3e-4 as the
+
+724
+00:30:49,760 --> 00:30:53,679
+learning rate which is the same as uh as
+
+725
+00:30:52,039 --> 00:30:56,039
+Llama and the batch size is 4 million
+
+726
+00:30:53,679 --> 00:30:59,960
+tokens which is also the same as
+
+727
+00:30:56,039 --> 00:31:02,000
+Llama so the Dolma data that they created is
+
+728
+00:30:59,960 --> 00:31:04,320
+um actually pretty similar to the Pile
+
+729
+00:31:02,000 --> 00:31:07,320
+but it's a larger corpus it's three
+
+730
+00:31:04,320 --> 00:31:09,240
+trillion tokens this is also fully open
+
+731
+00:31:07,320 --> 00:31:11,480
+so you can download it from Hugging Face
+
+732
+00:31:09,240 --> 00:31:15,399
+uh if you can find some disk to put
+
+733
+00:31:11,480 --> 00:31:19,200
+three trillion tokens on um
+
+734
+00:31:15,399 --> 00:31:21,080
+so uh another thing is that they have a
+
+735
+00:31:19,200 --> 00:31:23,360
+data processing pipeline of language
+
+736
+00:31:21,080 --> 00:31:26,240
+filtering quality filtering content
+
+737
+00:31:23,360 --> 00:31:28,399
+filtering deduplication uh multisource
+
+738
+00:31:26,240 --> 00:31:31,440
+mixing and tokenization
+
+739
+00:31:28,399 --> 00:31:33,279
+and so the nice thing about this is a
+
+740
+00:31:31,440 --> 00:31:35,639
+lot of this stuff is usually proprietary
+
+741
+00:31:33,279 --> 00:31:38,240
+for most language model creators so
+
+742
+00:31:35,639 --> 00:31:39,600
+if you want to see all of the like data
+
+743
+00:31:38,240 --> 00:31:41,039
+processing pipeline that goes into
+
+744
+00:31:39,600 --> 00:31:42,799
+training a model this is a pretty good
+
+745
+00:31:41,039 --> 00:31:45,320
+example of
+
+746
+00:31:42,799 --> 00:31:48,120
+that um the document types that are
+
+747
+00:31:45,320 --> 00:31:51,080
+included are the Common Crawl and so the
+
+748
+00:31:48,120 --> 00:31:53,919
+Common Crawl is just um data crawled
+
+749
+00:31:51,080 --> 00:31:56,760
+from the internet it's uh about 2.2
+
+750
+00:31:53,919 --> 00:32:00,039
+trillion tokens uh they also have the
+
+751
+00:31:56,760 --> 00:32:03,399
+Stack which is um lots of code about 400
+
+752
+00:32:00,039 --> 00:32:09,120
+billion tokens of code um C4 which is
+
+753
+00:32:03,399 --> 00:32:13,039
+also uh web data uh Reddit um STEM
+
+754
+00:32:09,120 --> 00:32:16,960
+papers books and uh Wikipedia
+
+755
+00:32:13,039 --> 00:32:19,039
+encyclopedic text so um you can see that it
+
+756
+00:32:16,960 --> 00:32:21,440
+has a fairly large amount of coverage
+
+757
+00:32:19,039 --> 00:32:24,480
+although mostly in
+
+758
+00:32:21,440 --> 00:32:26,799
+English um so some findings from OLMo
+
+759
+00:32:24,480 --> 00:32:29,440
+that I found interesting um number one
+
+760
+00:32:26,799 --> 00:32:31,279
+it has competitive average performance
+
+761
+00:32:29,440 --> 00:32:34,320
+so as I mentioned I think this is the
+
+762
+00:32:31,279 --> 00:32:38,519
+first fully open and documented language
+
+763
+00:32:34,320 --> 00:32:40,639
+model in the 7 billion range that is
+
+764
+00:32:38,519 --> 00:32:43,360
+competitive with all the other uh kind
+
+765
+00:32:40,639 --> 00:32:47,080
+of like
less open models in this range
+
+766
+00:32:43,360 --> 00:32:49,200
+so uh for example uh Llama 2 is 70.5
+
+767
+00:32:47,080 --> 00:32:51,840
+average on all of the data sets that
+
+768
+00:32:49,200 --> 00:32:53,960
+they're evaluating on Falcon is
+
+769
+00:32:51,840 --> 00:32:58,000
+70.3 MPT is
+
+770
+00:32:53,960 --> 00:33:00,000
+69.8 and OLMo is 69.3 so it's not a
+
+771
+00:32:58,000 --> 00:33:04,639
+slouch with respect to accuracy compared
+
+772
+00:33:00,000 --> 00:33:06,399
+to Pythia which had 63 um much of the
+
+773
+00:33:04,639 --> 00:33:09,120
+issue with Pythia could just be that they
+
+774
+00:33:06,399 --> 00:33:12,080
+didn't train for long enough and some
+
+775
+00:33:09,120 --> 00:33:15,039
+evidence of this is this is
+
+776
+00:33:12,080 --> 00:33:17,000
+um where they measured performance
+
+777
+00:33:15,039 --> 00:33:18,880
+constantly as they train for longer so
+
+778
+00:33:17,000 --> 00:33:21,440
+the left side is training on 500 billion
+
+779
+00:33:18,880 --> 00:33:24,080
+tokens which is already more than what
+
+780
+00:33:21,440 --> 00:33:25,840
+Pythia trained on the right side is uh
+
+781
+00:33:24,080 --> 00:33:30,360
+two uh
+
+782
+00:33:25,840 --> 00:33:32,679
+2.4 or 2.5 trillion tokens and you can see
+
+783
+00:33:30,360 --> 00:33:34,440
+interestingly that the numbers are just
+
+784
+00:33:32,679 --> 00:33:36,760
+continuing to increase as they train for
+
+785
+00:33:34,440 --> 00:33:39,480
+longer so it seems that training for
+
+786
+00:33:36,760 --> 00:33:43,679
+longer and longer just kind of
+
+787
+00:33:39,480 --> 00:33:47,000
+helps um one question is whether they're
+
+788
+00:33:43,679 --> 00:33:48,679
+like overfitting to uh the data set like
+
+789
+00:33:47,000 --> 00:33:52,000
+is any of the test data included in
+
+790
+00:33:48,679 --> 00:33:53,799
+their training data here um they did do
+
+791
+00:33:52,000 --> 00:33:57,440
+deduplication to some extent to try to
+
+792
+00:33:53,799 --> 
00:33:59,320
+remove the test data so um I think
+
+793
+00:33:57,440 --> 00:34:00,919
+it's quite probable that these are
+
+794
+00:33:59,320 --> 00:34:02,720
+real gains and if they train for longer
+
+795
+00:34:00,919 --> 00:34:07,559
+they might get an even better model but
+
+796
+00:34:02,720 --> 00:34:07,559
+um I'm not you know 100% sure about
+
+797
+00:34:07,679 --> 00:34:12,639
+that cool
+
+798
+00:34:10,480 --> 00:34:14,359
+um yeah one other thing that I
+
+799
+00:34:12,639 --> 00:34:16,119
+noticed which might be uh might be a
+
+800
+00:34:14,359 --> 00:34:18,119
+little bit interesting um which I didn't
+
+801
+00:34:16,119 --> 00:34:20,240
+mention here is that all
+
+802
+00:34:18,119 --> 00:34:21,760
+of these have a learning rate schedule
+
+803
+00:34:20,240 --> 00:34:23,679
+and typically they have a learning rate
+
+804
+00:34:21,760 --> 00:34:25,760
+schedule where they do this standard
+
+805
+00:34:23,679 --> 00:34:29,159
+warmup where they increase and then they
+
+806
+00:34:25,760 --> 00:34:30,960
+decrease but they stop decreasing at a
+
+807
+00:34:29,159 --> 00:34:34,040
+floor and usually that floor is about
+
+808
+00:34:30,960 --> 00:34:36,720
+one-tenth the size of the um original
+
+809
+00:34:34,040 --> 00:34:38,520
+learning rate so if they start out at
+
+810
+00:34:36,720 --> 00:34:41,919
+3e-4 they'll decrease it but
+
+811
+00:34:38,520 --> 00:34:43,960
+only to 3e-5 and then they're constant so
+
+812
+00:34:41,919 --> 00:34:46,079
+that might be another good thing to
+
+813
+00:34:43,960 --> 00:34:46,079
+point it
+
+814
+00:34:46,480 --> 00:34:51,240
+out cool any questions about
+
+815
+00:34:51,320 --> 00:34:58,599
+this okay um so now I'll get into Llama 2 um
+
+816
+00:34:56,560 --> 00:35:00,200
+Llama 2 you know is a model that
+
+817
+00:34:58,599 --> 00:35:04,400
+probably most people have heard about it
+
+818
+00:35:00,200 --> 00:35:07,599
+was created by Meta um it's one of the
+
+819
+00:35:04,400 --> 00:35:09,480
+uh strongest open language models now
+
+820
+00:35:07,599 --> 00:35:10,839
+although arguably there might be
+
+821
+00:35:09,480 --> 00:35:15,000
+stronger open language
+
+822
+00:35:10,839 --> 00:35:18,400
+models and the goal is a strong and safe
+
+823
+00:35:15,000 --> 00:35:21,320
+open LM and they have base and chat
+
+824
+00:35:18,400 --> 00:35:23,400
+versions of it and some unique features
+
+825
+00:35:21,320 --> 00:35:24,680
+are I think this is the open model with
+
+826
+00:35:23,400 --> 00:35:30,119
+the strongest
+
+827
+00:35:24,680 --> 00:35:30,119
+safety uh safeguards so it
+
+828
+00:35:30,200 --> 00:35:35,079
+is if I were to pick one model that I
+
+829
+00:35:33,079 --> 00:35:37,200
+wanted to use in an actual system that
+
+830
+00:35:35,079 --> 00:35:39,599
+was directly conversing with users I
+
+831
+00:35:37,200 --> 00:35:41,920
+would probably pick this one over
+
+832
+00:35:39,599 --> 00:35:43,760
+something like uh Mistral even though
+
+833
+00:35:41,920 --> 00:35:46,599
+Mistral shows superior performance some
+
+834
+00:35:43,760 --> 00:35:48,680
+of the time um it might say things that
+
+835
+00:35:46,599 --> 00:35:52,000
+you don't want it to be saying to like
+
+836
+00:35:48,680 --> 00:35:55,520
+users so I think that's one of the uh
+
+837
+00:35:52,000 --> 00:35:56,880
+the nice things about Llama 2 so I've been
+
+838
+00:35:55,520 --> 00:35:58,280
+comparing everything else to it so
+
+839
+00:35:56,880 --> 00:36:00,560
+that's pretty normal
+
+840
+00:35:58,280 --> 00:36:03,160
+um one thing about the data is the data
+
+841
+00:36:00,560 --> 00:36:04,520
+is not open they didn't say what data
+
+842
+00:36:03,160 --> 00:36:06,960
+they trained on for reasons that I
+
+843
+00:36:04,520 --> 00:36:08,960
+talked about before um what they did say
+
+844
+00:36:06,960 --> 00:36:12,400
+is it was trained on public sources
+
+845
+00:36:08,960 --> 00:36:14,240
+upsampling the most factual
sources so
+
+846
+00:36:12,400 --> 00:36:17,640
+um that's what they
+
+847
+00:36:14,240 --> 00:36:19,240
+said the Llama 1 paper has more
+
+848
+00:36:17,640 --> 00:36:20,760
+information and so I'll talk about what
+
+849
+00:36:19,240 --> 00:36:22,400
+they did in the Llama 1 paper and we
+
+850
+00:36:20,760 --> 00:36:24,920
+can maybe extrapolate that they did
+
+851
+00:36:22,400 --> 00:36:26,560
+something similar in the Llama 2 paper
+
+852
+00:36:24,920 --> 00:36:28,200
+um and then the total training amount is
+
+853
+00:36:26,560 --> 00:36:30,079
+2 trillion tokens so that's actually
+
+854
+00:36:28,200 --> 00:36:32,680
+less
+
+855
+00:36:30,079 --> 00:36:34,520
+than um so if we look at the Llama 1
+
+856
+00:36:32,680 --> 00:36:36,319
+training data it looks a little bit like
+
+857
+00:36:34,520 --> 00:36:38,839
+it looks very much like the OLMo training
+
+858
+00:36:36,319 --> 00:36:41,200
+data it's Common Crawl C4 GitHub
+
+859
+00:36:38,839 --> 00:36:45,160
+Wikipedia books arXiv Stack
+
+860
+00:36:41,200 --> 00:36:46,400
+Exchange um and one thing you'll notice
+
+861
+00:36:45,160 --> 00:36:49,200
+is that they
+
+862
+00:36:46,400 --> 00:36:51,599
+upsampled uh Wikipedia and books and
+
+863
+00:36:49,200 --> 00:36:53,319
+downsampled GitHub compared
+
+864
+00:36:51,599 --> 00:36:57,000
+to the amount of data that they actually
+
+865
+00:36:53,319 --> 00:37:00,760
+had and so they did 2.4 epochs over
+
+866
+00:36:57,000 --> 00:37:03,040
+Wikipedia 2.2 epochs over books and only
+
+867
+00:37:00,760 --> 00:37:05,880
+one epoch over like the standard web
+
+868
+00:37:03,040 --> 00:37:08,240
+data and arXiv and Stack Exchange and
+
+869
+00:37:05,880 --> 00:37:09,760
+0.6 epochs over the GitHub data that they
+
+870
+00:37:08,240 --> 00:37:11,520
+had so
+
+871
+00:37:09,760 --> 00:37:13,800
+obviously
+
+872
+00:37:11,520 --> 00:37:15,520
+they thought that this Wikipedia and
+
+873
+00:37:13,800 --> 00:37:17,040
+books data
was more valuable for some
+
+874
+00:37:15,520 --> 00:37:20,560
+reason and they really wanted the model
+
+875
+00:37:17,040 --> 00:37:22,319
+to learn it well so I think um
+
+876
+00:37:20,560 --> 00:37:24,240
+when they say that they upsampled
+
+877
+00:37:22,319 --> 00:37:27,960
+factual data I'm assuming that that's
+
+878
+00:37:24,240 --> 00:37:27,960
+also what they did in Llama 2
+
+879
+00:37:29,440 --> 00:37:33,640
+so the next thing um that's
+
+880
+00:37:35,960 --> 00:37:43,160
+yeah uh what does it need to have
+
+881
+00:37:40,280 --> 00:37:45,400
+like oh um yeah actually that's a really
+
+882
+00:37:43,160 --> 00:37:47,960
+good question so why are epochs not integer
+
+883
+00:37:45,400 --> 00:37:50,240
+values there's actually no reason at all
+
+884
+00:37:47,960 --> 00:37:52,040
+that you should do you know an integer
+
+885
+00:37:50,240 --> 00:37:54,760
+value of epochs you can always save out a
+
+886
+00:37:52,040 --> 00:37:57,560
+checkpoint every you know 10,000 steps
+
+887
+00:37:54,760 --> 00:37:59,200
+or something so I'd actually encourage
+
+888
+00:37:57,560 --> 00:38:02,040
+people to get away from saving out
+
+889
+00:37:59,200 --> 00:38:03,640
+checkpoints every epoch because that
+
+890
+00:38:02,040 --> 00:38:05,319
+kind of discourages you from making your
+
+891
+00:38:03,640 --> 00:38:07,160
+training data larger because if you make
+
+892
+00:38:05,319 --> 00:38:09,359
+your training data larger
+
+893
+00:38:07,160 --> 00:38:11,760
+you'll think oh training takes forever
+
+894
+00:38:09,359 --> 00:38:13,480
+um because it takes forever to do an
+
+895
+00:38:11,760 --> 00:38:16,599
+epoch but in reality you can just save
+
+896
+00:38:13,480 --> 00:38:18,760
+out you know periodically and um
+
+897
+00:38:16,599 --> 00:38:21,319
+keep the checkpoints from earlier
+
+898
+00:38:18,760 --> 00:38:22,680
+so many language models don't train on
+
+899
+00:38:21,319 --> 00:38:24,480
+all the data on the
web because it would
+
+900
+00:38:22,680 --> 00:38:25,800
+just be too expensive to do so despite
+
+901
+00:38:24,480 --> 00:38:27,640
+the fact that they have all the data on
+
+902
+00:38:25,800 --> 00:38:29,079
+the web
+
+903
+00:38:27,640 --> 00:38:31,000
+but very good question though it's
+
+904
+00:38:29,079 --> 00:38:34,560
+that's an important
+
+905
+00:38:31,000 --> 00:38:36,280
+point um okay so now I'd like to talk a
+
+906
+00:38:34,560 --> 00:38:39,440
+little bit about the safety tuning that
+
+907
+00:38:36,280 --> 00:38:42,359
+goes into uh the Llama models I might
+
+908
+00:38:39,440 --> 00:38:45,640
+talk a little bit more about this um
+
+909
+00:38:42,359 --> 00:38:48,960
+later but I I think uh I'll I'll talk
+
+910
+00:38:45,640 --> 00:38:51,480
+about it now um basically the Llama 2
+
+911
+00:38:48,960 --> 00:38:54,200
+developers put a lot of effort into
+
+912
+00:38:51,480 --> 00:38:56,400
+training the model to be safe because um
+
+913
+00:38:54,200 --> 00:38:59,599
+you know they're a big company and they
+
+914
+00:38:56,400 --> 00:39:01,200
+don't want any PR disasters um uh
+
+915
+00:38:59,599 --> 00:39:02,680
+and also you know they want an actual
+
+916
+00:39:01,200 --> 00:39:04,960
+safe model that they can use to build
+
+917
+00:39:02,680 --> 00:39:08,240
+their products so I think they have the
+
+918
+00:39:04,960 --> 00:39:10,880
+dual uh you know dual motivation
+
+919
+00:39:08,240 --> 00:39:13,200
+there the first thing that they did was
+
+920
+00:39:10,880 --> 00:39:15,960
+they collected lots of data for reward
+
+921
+00:39:13,200 --> 00:39:17,520
+modeling and reward modeling what they
+
+922
+00:39:15,960 --> 00:39:19,720
+say what they're calling reward modeling
+
+923
+00:39:17,520 --> 00:39:23,720
+is basically preference modeling so they
+
+924
+00:39:19,720 --> 00:39:26,359
+have you know multiple outputs where the
+
+925
+00:39:23,720 --> 00:39:28,359
+two outputs are somehow ranked for
+
+926
+00:39:26,359 --> 00:39:29,960
+preferences and I talked about this when
+
+927
+00:39:28,359 --> 00:39:31,839
+I was talking about DPO in the
+
+928
+00:39:29,960 --> 00:39:35,720
+reinforcement learning class for
+
+929
+00:39:31,839 --> 00:39:38,480
+example um a lot of these actually exist
+
+930
+00:39:35,720 --> 00:39:41,920
+so there's um like the Anthropic helpful
+
+931
+00:39:38,480 --> 00:39:45,599
+and harmless data sets uh these OpenAI
+
+932
+00:39:41,920 --> 00:39:48,200
+data sets uh from WebGPT stack exchange
+
+933
+00:39:45,599 --> 00:39:50,160
+on stack exchange they have um helpful
+
+934
+00:39:48,200 --> 00:39:52,240
+answers and not helpful answers so ones
+
+935
+00:39:50,160 --> 00:39:57,720
+that you give thumbs up and thumbs down
+
+936
+00:39:52,240 --> 00:39:59,839
+to and um the Stanford uh human
+
+937
+00:39:57,720 --> 00:40:03,040
+preferences data set I I forget what the S
+
+938
+00:39:59,839 --> 00:40:05,800
+stands for human preferences data set
+
+939
+00:40:03,040 --> 00:40:09,400
+basically this is um where they tried to
+
+940
+00:40:05,800 --> 00:40:11,599
+find Reddit posts I think Reddit posts
+
+941
+00:40:09,400 --> 00:40:13,720
+that got more upvotes despite the fact
+
+942
+00:40:11,599 --> 00:40:16,400
+that they were posted later than a a
+
+943
+00:40:13,720 --> 00:40:18,720
+previous one so the idea is like usually
+
+944
+00:40:16,400 --> 00:40:21,359
+the first posts get more upvotes
+
+945
+00:40:18,720 --> 00:40:22,880
+so if you get more upvotes for a later
+
+946
+00:40:21,359 --> 00:40:25,240
+post that indicates that it's probably
+
+947
+00:40:22,880 --> 00:40:27,640
+more valuable than the earlier post so
+
+948
+00:40:25,240 --> 00:40:30,880
+kind of clever uh clever way of creating
+
+949
+00:40:27,640 --> 00:40:33,680
+data um I'm actually not sure what the
+
+950
+00:40:30,880 --> 00:40:36,240
+synthetic GPT-J was I didn't look at that
+
+951
+00:40:33,680 --> 00:40:37,640
+and then separately
from that um meta
+
+952
+00:40:36,240 --> 00:40:39,599
+collected a very large amount of
+
+953
+00:40:37,640 --> 00:40:42,400
+internal data that they didn't release
+
+954
+00:40:39,599 --> 00:40:44,319
+uh for tuning llama and they did this
+
+955
+00:40:42,400 --> 00:40:46,760
+through various iterations so basically
+
+956
+00:40:44,319 --> 00:40:49,839
+what they did is they created a first
+
+957
+00:40:46,760 --> 00:40:53,240
+version of the model um they let it
+
+958
+00:40:49,839 --> 00:40:55,599
+loose on users they also did some uh
+
+959
+00:40:53,240 --> 00:40:56,960
+some data collection with uh people who
+
+960
+00:40:55,599 --> 00:40:59,720
+were actually trying to break the model
+
+961
+00:40:56,960 --> 00:41:01,200
+and getting it to say bad things
+
+962
+00:40:59,720 --> 00:41:02,760
+they collected preference data from
+
+963
+00:41:01,200 --> 00:41:04,599
+these people and then they iterated over
+
+964
+00:41:02,760 --> 00:41:06,960
+and over again to collect more and more
+
+965
+00:41:04,599 --> 00:41:09,720
+of this data on various uh versions of
+
+966
+00:41:06,960 --> 00:41:11,280
+the model so as the model gets
+
+967
+00:41:09,720 --> 00:41:14,079
+better you know it's going to be harder
+
+968
+00:41:11,280 --> 00:41:16,240
+to collect this data but um they want to
+
+969
+00:41:14,079 --> 00:41:17,920
+try to improve the current model that
+
+970
+00:41:16,240 --> 00:41:20,599
+they
+
+971
+00:41:17,920 --> 00:41:22,680
+have so the next step that they did was
+
+972
+00:41:20,599 --> 00:41:26,079
+they trained a model to follow these
+
+973
+00:41:22,680 --> 00:41:27,920
+preferences and so they trained a model
+
+974
+00:41:26,079 --> 00:41:32,560
+that basically can predict human
+
+975
+00:41:27,920 --> 00:41:35,119
+preference given um given two uh language
+
+976
+00:41:32,560 --> 00:41:37,680
+model outputs and this is a hard problem
+
+977
+00:41:35,119 --> 00:41:40,440
+right because these are language model
+
+978
+00:41:37,680 --> 00:41:42,760
+outputs and the language model thought
+
+979
+00:41:40,440 --> 00:41:45,480
+it was a good output regardless because
+
+980
+00:41:42,760 --> 00:41:47,319
+otherwise it wouldn't have sampled it and so
+
+981
+00:41:45,480 --> 00:41:49,720
+you need to distinguish between two very
+
+982
+00:41:47,319 --> 00:41:52,240
+fluent looking outputs where one is
+
+983
+00:41:49,720 --> 00:41:56,880
+preferred and one is not preferred so
+
+984
+00:41:52,240 --> 00:41:58,359
+even kind of strong models like um oh by
+
+985
+00:41:56,880 --> 00:42:00,319
+the way there are some open reward
+
+986
+00:41:58,359 --> 00:42:02,119
+models like this Open Assistant reward
+
+987
+00:42:00,319 --> 00:42:03,839
+model is publicly available and you can
+
+988
+00:42:02,119 --> 00:42:08,520
+just go and download it if you want if
+
+989
+00:42:03,839 --> 00:42:10,920
+you want it um but this if you evaluate
+
+990
+00:42:08,520 --> 00:42:14,720
+it on this Anthropic uh helpful and
+
+991
+00:42:10,920 --> 00:42:16,160
+harmless data set um this gets about 67
+
+992
+00:42:14,720 --> 00:42:18,760
+or 68%
+
+993
+00:42:16,160 --> 00:42:24,680
+accuracy
+
+994
+00:42:18,760 --> 00:42:27,200
+um but if you evaluate it on um this
+
+995
+00:42:24,680 --> 00:42:29,480
+like Open Assistant data set or sorry if
+
+996
+00:42:27,200 --> 00:42:33,359
+you evaluate the public models including
+
+997
+00:42:29,480 --> 00:42:36,079
+GPT-4 on the Meta data set actually it's
+
+998
+00:42:33,359 --> 00:42:38,720
+pretty hard um to distinguish
+
+999
+00:42:36,079 --> 00:42:41,319
+between the things and here they're
+
+1000
+00:42:38,720 --> 00:42:44,720
+evaluating both helpful and harmless or
+
+1001
+00:42:41,319 --> 00:42:47,400
+helpful and safety and the reason why is
+
+1002
+00:42:44,720 --> 00:42:49,119
+because like it's very easy to create a
+
+1003
+00:42:47,400 --> 00:42:51,119
+very safe but not helpful at all model
+
+1004
+00:42:49,119 -->
00:42:53,640
+by saying I don't know all the time it's
+
+1005
+00:42:51,119 --> 00:42:55,480
+very it's relatively easy to create a
+
+1006
+00:42:53,640 --> 00:42:57,880
+helpful model that's very unsafe like it
+
+1007
+00:42:55,480 --> 00:42:59,480
+will do anything you want and so they
+
+1008
+00:42:57,880 --> 00:43:01,599
+want a balance between the two and they
+
+1009
+00:42:59,480 --> 00:43:03,480
+evaluate them separately they also
+
+1010
+00:43:01,599 --> 00:43:05,280
+created two different separate reward
+
+1011
+00:43:03,480 --> 00:43:07,880
+models so they created one reward model
+
+1012
+00:43:05,280 --> 00:43:10,079
+to distinguish safety and another reward
+
+1013
+00:43:07,880 --> 00:43:13,440
+model to distinguish helpfulness and
+
+1014
+00:43:10,079 --> 00:43:14,760
+they used these separately to uh to train
+
+1015
+00:43:13,440 --> 00:43:17,359
+the model and you can see that the
+
+1016
+00:43:14,760 --> 00:43:18,920
+helpfulness model does a lot better on
+
+1017
+00:43:17,359 --> 00:43:20,640
+discriminating between helpful things
+
+1018
+00:43:18,920 --> 00:43:22,319
+and the safety model does a lot better
+
+1019
+00:43:20,640 --> 00:43:23,760
+on discriminating or does a little better
+
+1020
+00:43:22,319 --> 00:43:25,960
+on discriminating between safe and
+
+1021
+00:43:23,760 --> 00:43:28,480
+unsafe
+
+1022
+00:43:25,960 --> 00:43:29,920
+things um
+
+1023
+00:43:28,480 --> 00:43:33,640
+actually I didn't include this in the
+
+1024
+00:43:29,920 --> 00:43:35,400
+slides but they also have an interesting
+
+1025
+00:43:33,640 --> 00:43:38,920
+graph that
+
+1026
+00:43:35,400 --> 00:43:41,119
+demonstrates um how good the reward
+
+1027
+00:43:38,920 --> 00:43:42,640
+models are based on their size and it
+
+1028
+00:43:41,119 --> 00:43:44,359
+turns out that this is a place where
+
+1029
+00:43:42,640 --> 00:43:47,559
+it's really really important to use a
+
+1030
+00:43:44,359 --> 00:43:49,760
+large and powerful language model
to
+
+1031
+00:43:47,559 --> 00:43:51,319
+determine your reward because they
+
+1032
+00:43:49,760 --> 00:43:52,680
+demonstrate that the 70 billion
+
+1033
+00:43:51,319 --> 00:43:55,280
+parameter model that they used is
+
+1034
+00:43:52,680 --> 00:43:57,359
+actually far better than the um than the
+
+1035
+00:43:55,280 --> 00:44:00,079
+smaller models that they used at
+
+1036
+00:43:57,359 --> 00:44:00,079
+predicting this
+
+1037
+00:44:01,359 --> 00:44:07,760
+reward so this is um a graph of their
+
+1038
+00:44:05,200 --> 00:44:10,480
+incremental training process for safety
+
+1039
+00:44:07,760 --> 00:44:12,640
+tuning and um you can see they have
+
+1040
+00:44:10,480 --> 00:44:15,920
+their first supervised fine tuned model
+
+1041
+00:44:12,640 --> 00:44:19,440
+this is with no um like RL or anything
+
+1042
+00:44:15,920 --> 00:44:22,240
+like this this is a second model
+
+1043
+00:44:19,440 --> 00:44:24,760
+um and uh it improves a lot with respect
+
+1044
+00:44:22,240 --> 00:44:28,119
+to helpfulness and then they do more and
+
+1045
+00:44:24,760 --> 00:44:30,400
+more RLHF uh where they start with the
+
+1046
+00:44:28,119 --> 00:44:33,200
+like supervised fine tuned model and and
+
+1047
+00:44:30,400 --> 00:44:36,079
+gradually um add more reward data
+
+1048
+00:44:33,200 --> 00:44:38,200
+train with a better reward model and get
+
+1049
+00:44:36,079 --> 00:44:39,800
+to the end where they finally have the
+
+1050
+00:44:38,200 --> 00:44:41,359
+best model and I believe this is
+
+1051
+00:44:39,800 --> 00:44:43,200
+the one that they actually released so
+
+1052
+00:44:41,359 --> 00:44:45,000
+you can see that they really put a lot
+
+1053
+00:44:43,200 --> 00:44:46,520
+of effort into making this model you
+
+1054
+00:44:45,000 --> 00:44:49,800
+know safe and that's one of the main
+
+1055
+00:44:46,520 --> 00:44:49,800
+points of the paper that they had
+
+1056
+00:44:51,319 --> 00:44:57,920
+here um another interesting part of
the
+
+1057
+00:44:55,119 --> 00:45:02,319
+Llama 2 paper is how how they got it to
+
+1058
+00:44:57,920 --> 00:45:05,280
+follow chat instructions and so um I I
+
+1059
+00:45:02,319 --> 00:45:06,640
+think you're all familiar from the class
+
+1060
+00:45:05,280 --> 00:45:10,040
+where I talked about
+
+1061
+00:45:06,640 --> 00:45:13,000
+prompting where basically they um
+
+1062
+00:45:10,040 --> 00:45:16,119
+prompt the language model using a system
+
+1063
+00:45:13,000 --> 00:45:20,359
+message and um a user message and an
+
+1064
+00:45:16,119 --> 00:45:23,160
+assistant message and so um the
+
+1065
+00:45:20,359 --> 00:45:25,000
+characteristic of the system message is
+
+1066
+00:45:23,160 --> 00:45:28,240
+this is something that you want to be
+
+1067
+00:45:25,000 --> 00:45:32,319
+obeyed throughout the um entire
+
+1068
+00:45:28,240 --> 00:45:34,599
+conversation right and
+
+1069
+00:45:32,319 --> 00:45:36,760
+so in order to get this obeyed
+
+1070
+00:45:34,599 --> 00:45:38,079
+throughout the entire conversation you
+
+1071
+00:45:36,760 --> 00:45:39,760
+need a model that's good at paying
+
+1072
+00:45:38,079 --> 00:45:40,760
+attent paying particular attention to
+
+1073
+00:45:39,760 --> 00:45:43,160
+the system
+
+1074
+00:45:40,760 --> 00:45:45,319
+message um in this example I'm saying
+
+1075
+00:45:43,160 --> 00:45:46,880
+write in only emojis so you know no matter
+
+1076
+00:45:45,319 --> 00:45:48,720
+how long this conversation gets you want
+
+1077
+00:45:46,880 --> 00:45:50,599
+your model to continue writing in emojis
+
+1078
+00:45:48,720 --> 00:45:53,440
+and models don't do this
+
+1079
+00:45:50,599 --> 00:45:56,559
+spontaneously so what they did here and
+
+1080
+00:45:53,440 --> 00:45:58,359
+I'm I'm 90% 95% certain that my
+
+1081
+00:45:56,559 --> 00:45:59,800
+interpretation of the paper is correct the
+
+1082
+00:45:58,359 --> 00:46:03,319
+paper is a little bit hard to understand
+
+1083
+00:45:59,800 --> 00:46:06,720
+with
respect to this but um the uh what
+
+1084
+00:46:03,319 --> 00:46:10,480
+they I think they do is they take the
+
+1085
+00:46:06,720 --> 00:46:13,200
+system message and then they have a data
+
+1086
+00:46:10,480 --> 00:46:16,160
+generation step where they
+
+1087
+00:46:13,200 --> 00:46:19,079
+basically ask an existing model to write
+
+1088
+00:46:16,160 --> 00:46:21,400
+in only emojis and then say hello and
+
+1089
+00:46:19,079 --> 00:46:23,640
+then the model generates something and
+
+1090
+00:46:21,400 --> 00:46:26,599
+then they say again write in only emojis
+
+1091
+00:46:23,640 --> 00:46:28,440
+how are you doing and then they uh they
+
+1092
+00:46:26,599 --> 00:46:29,599
+generate it again and because this is so
+
+1093
+00:46:28,440 --> 00:46:32,680
+close in the
+
+1094
+00:46:29,599 --> 00:46:35,440
+context um the assistant basically will
+
+1095
+00:46:32,680 --> 00:46:36,760
+be will you know continue paying
+
+1096
+00:46:35,440 --> 00:46:39,119
+attention to these
+
+1097
+00:46:36,760 --> 00:46:40,599
+directions um and then after that now
+
+1098
+00:46:39,119 --> 00:46:42,640
+you have a data set that you can train
+
+1099
+00:46:40,599 --> 00:46:44,280
+your model on you can train your model
+
+1100
+00:46:42,640 --> 00:46:46,880
+on this generated data set that looks
+
+1101
+00:46:44,280 --> 00:46:49,079
+like write in only emojis say hello uh
+
+1102
+00:46:46,880 --> 00:46:50,480
+how are you doing and stuff like this
+
+1103
+00:46:49,079 --> 00:46:54,040
+and they try this with a whole bunch of
+
+1104
+00:46:50,480 --> 00:46:57,880
+rules it's like write um write as if
+
+1105
+00:46:54,040 --> 00:47:00,559
+you're explaining to a 5-year-old or um
+
+1106
+00:46:57,880 --> 00:47:02,720
+write in a very polite manner write in a
+
+1107
+00:47:00,559 --> 00:47:03,960
+very informal manner and stuff like that
+
+1108
+00:47:02,720 --> 00:47:06,480
+so they generate a whole bunch of the
+
+1109
+00:47:03,960 --> 00:47:08,480
+synthetic data and in doing this they
+
+1110
+00:47:06,480 --> 00:47:09,960
+basically are able to train the model to
+
+1111
+00:47:08,480 --> 00:47:11,559
+pay very close attention to the system
+
+1112
+00:47:09,960 --> 00:47:13,480
+message because it needs to do so in
+
+1113
+00:47:11,559 --> 00:47:17,319
+order to do
+
+1114
+00:47:13,480 --> 00:47:19,160
+better so um yeah these are kind of the
+
+1115
+00:47:17,319 --> 00:47:20,599
+unique characteristics from Llama 2 I'd
+
+1116
+00:47:19,160 --> 00:47:21,960
+love to tell you more about its training
+
+1117
+00:47:20,599 --> 00:47:24,520
+data and all that other stuff but they
+
+1118
+00:47:21,960 --> 00:47:26,240
+didn't tell us uh like what they did
+
+1119
+00:47:24,520 --> 00:47:28,839
+with respect to that so we'll just have
+
+1120
+00:47:26,240 --> 00:47:28,839
+to infer
+
+1121
+00:47:28,960 --> 00:47:33,559
+um cool uh any questions about
+
+1122
+00:47:33,800 --> 00:47:39,160
+this okay
+
+1123
+00:47:36,640 --> 00:47:40,839
+go so next I want to go into Mistral and
+
+1124
+00:47:39,160 --> 00:47:42,599
+Mixtral this is going to be a little bit
+
+1125
+00:47:40,839 --> 00:47:44,200
+short because I've kind of covered some
+
+1126
+00:47:42,599 --> 00:47:45,720
+of the stuff already and also they
+
+1127
+00:47:44,200 --> 00:47:48,240
+didn't tell you very much about the
+
+1128
+00:47:45,720 --> 00:47:52,240
+training process um basically it was
+
+1129
+00:47:48,240 --> 00:47:54,079
+created by Mistral um AI the company and
+
+1130
+00:47:52,240 --> 00:47:56,839
+it's a strong and somewhat multilingual
+
+1131
+00:47:54,079 --> 00:47:59,400
+open language model um it has some
+
+1132
+00:47:56,839 --> 00:48:01,760
+unique features like speed optimizations
+
+1133
+00:47:59,400 --> 00:48:03,200
+um including grouped query attention
+
+1134
+00:48:01,760 --> 00:48:06,200
+and mixture of
+
+1135
+00:48:03,200 --> 00:48:06,200
+experts
+
+1136
+00:48:06,599 --> 00:48:12,359
+um it makes unlike
the other ones it
+
+1137
+00:48:10,599 --> 00:48:14,599
+makes some actual architectural
+
+1138
+00:48:12,359 --> 00:48:17,599
+modifications including sliding window
+
+1139
+00:48:14,599 --> 00:48:19,160
+attention and um mixture of experts and
+
+1140
+00:48:17,599 --> 00:48:21,079
+I I have actually talked about both of
+
+1141
+00:48:19,160 --> 00:48:23,640
+them so I'll just very briefly go
+
+1142
+00:48:21,079 --> 00:48:26,040
+through them here um the data as far as
+
+1143
+00:48:23,640 --> 00:48:28,559
+I could tell was not disclosed uh very
+
+1144
+00:48:26,040 --> 00:48:30,480
+completely but one important thing is it
+
+1145
+00:48:28,559 --> 00:48:32,160
+includes English and European languages
+
+1146
+00:48:30,480 --> 00:48:35,520
+so at least theoretically it should be
+
+1147
+00:48:32,160 --> 00:48:38,040
+better than Llama at this um one
+
+1148
+00:48:35,520 --> 00:48:39,559
+interesting thing about Llama is Llama
+
+1149
+00:48:38,040 --> 00:48:40,680
+if I remember correctly the actual
+
+1150
+00:48:39,559 --> 00:48:42,880
+numbers are in the paper but it's
+
+1151
+00:48:40,680 --> 00:48:47,920
+something like 85%
+
+1152
+00:48:42,880 --> 00:48:52,400
+English um 8% code and then like
+
+1153
+00:48:47,920 --> 00:48:54,559
+0.3% other languages like um adding up
+
+1154
+00:48:52,400 --> 00:48:57,280
+all the other languages it's like 0.3%
+
+1155
+00:48:54,559 --> 00:48:59,680
+so it's not very multilingual at all
+
+1156
+00:48:57,280 --> 00:49:01,319
+um and they were really only aiming to
+
+1157
+00:48:59,680 --> 00:49:04,799
+create a good uh English
+
+1158
+00:49:01,319 --> 00:49:06,200
+model um also the training uh details
+
+1159
+00:49:04,799 --> 00:49:08,280
+were not disclosed here like I wasn't
+
+1160
+00:49:06,200 --> 00:49:12,400
+able to find the batch sizes as far as I
+
+1161
+00:49:08,280 --> 00:49:15,119
+know um so Mistral uses sliding window
+
+1162
+00:49:12,400 --> 00:49:18,200
+attention uh with vanilla attention
basically
+you always attend to all of the previous
+
+1163
+00:49:15,119 --> 00:49:21,440
+things in the sequence what Mistral does
+
+1164
+00:49:18,200 --> 00:49:24,880
+is it attends to the previous n um
+
+1165
+00:49:21,440 --> 00:49:28,119
+tokens where n is equal to 4096 and
+
+1166
+00:49:24,880 --> 00:49:30,559
+because of this uh what this means is
+
+1167
+00:49:28,119 --> 00:49:34,839
+you can attend uh 4096 back and then in
+
+1168
+00:49:30,559 --> 00:49:37,200
+the next layer you can attend 4096 back
+
+1169
+00:49:34,839 --> 00:49:39,280
+then you can attend 4096 back so
+
+1170
+00:49:37,200 --> 00:49:41,599
+basically as many layers as you have
+
+1171
+00:49:39,280 --> 00:49:44,400
+times 4096 you can attend that many
+
+1172
+00:49:41,599 --> 00:49:47,240
+tokens back for a minimal training
+
+1173
+00:49:44,400 --> 00:49:49,000
+penalty because still the length of
+
+1174
+00:49:47,240 --> 00:49:50,760
+attention for any particular token is
+
+1175
+00:49:49,000 --> 00:49:55,079
+the same uh so that's one
+
+1176
+00:49:50,760 --> 00:49:57,440
+feature oh and then yeah sorry the other
+
+1177
+00:49:55,079 --> 00:50:00,400
+feature is Mixtral is using um is using a
+
+1178
+00:49:57,440 --> 00:50:01,920
+mixture of experts like we talked about
+
+1179
+00:50:00,400 --> 00:50:05,920
+in the previous time so um despite these
+
+1180
+00:50:01,920 --> 00:50:07,720
+uh these are very strong models they're
+
+1181
+00:50:05,920 --> 00:50:09,520
+generally stronger than Llama at a lot
+
+1182
+00:50:07,720 --> 00:50:12,960
+of things um and Mixtral is actually a lot
+
+1183
+00:50:09,520 --> 00:50:15,480
+faster and easier to deploy than Llama
+
+1184
+00:50:12,960 --> 00:50:18,200
+70B uh it's smaller it only has 45
+
+1185
+00:50:15,480 --> 00:50:20,680
+billion parameters so it's definitely a
+
+1186
+00:50:18,200 --> 00:50:23,680
+good choice if you want to use it yeah
+
+1188
+00:50:23,680 --> 00:50:26,680
+[inaudible]
+
+1189
+00:50:28,720 --> 00:50:33,000
+yeah so it's attending to 4096
+
+1190
+00:50:33,520 --> 00:50:39,559
+so the context size
+
+1191
+00:50:37,720 --> 00:50:43,240
+typically like let's say you have a
+
+1192
+00:50:39,559 --> 00:50:45,240
+block of 4096 tokens here typically that
+
+1193
+00:50:43,240 --> 00:50:48,079
+means that the first token attends to
+
+1194
+00:50:45,240 --> 00:50:51,200
+zero tokens the second token attends to
+
+1195
+00:50:48,079 --> 00:50:54,640
+one token and the third token attends to
+
+1196
+00:50:51,200 --> 00:50:58,920
+two tokens here this is maybe a little
+
+1197
+00:50:54,640 --> 00:51:01,680
+bit uh misleading I guess but if your
+
+1198
+00:50:58,920 --> 00:51:04,079
+context length is 4096 you actually get
+
+1199
+00:51:01,680 --> 00:51:07,760
+a block of twice that size you get a
+
+1200
+00:51:04,079 --> 00:51:10,960
+block of 8192 tokens and so the first
+
+1201
+00:51:07,760 --> 00:51:15,839
+one attends to all of the previous
+
+1202
+00:51:10,960 --> 00:51:17,760
+ones so the first uh sorry so
+
+1203
+00:51:15,839 --> 00:51:19,960
+the
+
+1204
+00:51:17,760 --> 00:51:22,280
+um so the
+
+1205
+00:51:19,960 --> 00:51:26,760
+40
+
+1206
+00:51:22,280 --> 00:51:29,280
+97th token attends
+
+1207
+00:51:26,760 --> 00:51:32,280
+back to um all from
+
+1208
+00:51:29,280 --> 00:51:36,319
+[Music]
+
+1209
+00:51:32,280 --> 00:51:36,319
+to sorry either
+
+1210
+00:51:41,160 --> 00:51:46,880
+4096 and
+
+1211
+00:51:43,839 --> 00:51:50,520
+so because of that you move on to the very
+
+1212
+00:51:46,880 --> 00:51:50,520
+end then you have the 8192nd
+
+1213
+00:51:50,880 --> 00:51:55,359
+attending from like 4096
+
+1214
+00:51:58,480 --> 00:52:01,920
+and so like every token is always
+
+1215
+00:52:00,319 --> 00:52:05,280
+attending to the previous one and that
+
+1216
+00:52:01,920 --> 00:52:08,200
+allows you to um to kind of attend to
+
+1217
+00:52:05,280 --> 00:52:08,200
+things in the previous
+
+1218
+00:52:11,760 --> 00:52:18,520
+uh no it's big so that allows them to
+
+1219
+00:52:15,000 --> 00:52:22,000
+attend a very large
+
+1220
+00:52:18,520 --> 00:52:24,599
+amount cool um so the next one I'd like to
+
+1221
+00:52:22,000 --> 00:52:26,559
+talk about is Qwen this is one that in
+
+1222
+00:52:24,599 --> 00:52:29,040
+the US at least people maybe pay a a
+
+1223
+00:52:26,559 --> 00:52:33,000
+little bit less attention to um but it
+
+1224
+00:52:29,040 --> 00:52:35,680
+was created by Alibaba and it's a strong
+
+1225
+00:52:33,000 --> 00:52:37,559
+um multilingual model especially English
+
+1226
+00:52:35,680 --> 00:52:39,119
+and Chinese but even uh in other
+
+1227
+00:52:37,559 --> 00:52:41,000
+languages as
+
+1228
+00:52:39,119 --> 00:52:43,480
+well
+
+1229
+00:52:41,000 --> 00:52:45,160
+and uh one of its defining
+
+1230
+00:52:43,480 --> 00:52:48,240
+characteristics other than just being a
+
+1231
+00:52:45,160 --> 00:52:50,160
+strong model overall is that it has a
+
+1232
+00:52:48,240 --> 00:52:51,799
+large vocabulary for multilingual
+
+1233
+00:52:50,160 --> 00:52:56,000
+support and strong
+
+1234
+00:52:51,799 --> 00:52:58,760
+performance um it comes in several sizes
+
+1235
+00:52:56,000 --> 00:53:01,880
+um I
+
+1236
+00:52:58,760 --> 00:53:04,799
+believe uh there's a 7B version and then
+
+1237
+00:53:01,880 --> 00:53:10,119
+there's also like a large like 70B
+
+1238
+00:53:04,799 --> 00:53:13,480
+version 72B I think and it's using very
+
+1239
+00:53:10,119 --> 00:53:15,319
+standard uh architecture things the only
+
+1240
+00:53:13,480 --> 00:53:18,119
+small difference it has is it has a bias
+
+1241
+00:53:15,319 --> 00:53:19,920
+in the attention layer which doesn't
+
+1242
+00:53:18,119 --> 00:53:23,559
+uh exist in
+
+1243
+00:53:19,920 --> 00:53:25,880
+llama um an important thing is it's
+
+1244
+00:53:23,559 --> 00:53:28,920
+actually trained on multilingual data
+
+1245
+00:53:25,880 --> 00:53:32,720
+and they
use a large vocabulary um they
+
+1246
+00:53:28,920 --> 00:53:33,839
+use a vocabulary of 150k in contrast to
+
+1247
+00:53:32,720 --> 00:53:36,599
+llama's
+
+1248
+00:53:33,839 --> 00:53:39,839
+32k and that allows it to handle
+
+1249
+00:53:36,599 --> 00:53:41,720
+multilingual uh data relatively
+
+1250
+00:53:39,839 --> 00:53:47,079
+well
+
+1251
+00:53:41,720 --> 00:53:49,359
+and um we have the three uh similar you
+
+1252
+00:53:47,079 --> 00:53:52,760
+know training regimes so overall it's
+
+1253
+00:53:49,359 --> 00:53:55,559
+not very diff different from uh
+
+1254
+00:53:52,760 --> 00:53:57,040
+llama what might be different is data
+
+1255
+00:53:55,559 --> 00:53:59,319
+engineering
+
+1256
+00:53:57,040 --> 00:54:00,680
+uh and actually I I expect the data
+
+1257
+00:53:59,319 --> 00:54:02,760
+engineering part is a bit different
+
+1258
+00:54:00,680 --> 00:54:06,400
+because overall it's a bit stronger than
+
+1259
+00:54:02,760 --> 00:54:09,920
+llama 2 um and I I think uh that has to
+
+1260
+00:54:06,400 --> 00:54:12,119
+do with data in in various areas one
+
+1261
+00:54:09,920 --> 00:54:16,920
+interesting piece from the paper that
+
+1262
+00:54:12,119 --> 00:54:18,280
+they have is uh if we think all the way
+
+1263
+00:54:16,920 --> 00:54:21,720
+back to when we talked about
+
+1264
+00:54:18,280 --> 00:54:23,839
+subword models and subword tokenization we
+
+1265
+00:54:21,720 --> 00:54:27,760
+remember that subword models split up
+
+1266
+00:54:23,839 --> 00:54:29,920
+the input and they split up the input uh
+
+1267
+00:54:27,760 --> 00:54:31,799
+so that frequent words get longer
+
+1268
+00:54:29,920 --> 00:54:34,520
+tokens and infrequent words get
+
+1269
+00:54:31,799 --> 00:54:36,359
+shorter tokens so one of the problems
+
+1270
+00:54:34,520 --> 00:54:40,559
+as I mentioned a long time ago when we
+
+1271
+00:54:36,359 --> 00:54:42,040
+covered this topic is this causes issues
+
+1272
+00:54:40,559 --> 00:54:43,000
+if you're doing multilingual things
+
+1273
+00:54:42,040 --> 00:54:44,880
+because if you have very little
+
+1274
+00:54:43,000 --> 00:54:47,520
+multilingual data in your training data
+
+1275
+00:54:44,880 --> 00:54:49,040
+for the subword tokenization model um it
+
+1276
+00:54:47,520 --> 00:54:51,559
+will end up splitting all of the words
+
+1277
+00:54:49,040 --> 00:54:55,680
+into basically characters or even bytes
+
+1278
+00:54:51,559 --> 00:54:59,040
+so what this shows here is this is
+
+1279
+00:54:55,680 --> 00:55:00,960
+comparing the amount of subword
+
+1280
+00:54:59,040 --> 00:55:03,040
+tokenization that happens according to
+
+1281
+00:55:00,960 --> 00:55:05,520
+each of the LLMs'
+
+1282
+00:55:03,040 --> 00:55:08,599
+tokenizers with another explicitly
+
+1283
+00:55:05,520 --> 00:55:10,799
+multilingual model XLM-R so XLM-R is kind
+
+1284
+00:55:08,599 --> 00:55:12,760
+of their baseline here with respect to
+
+1285
+00:55:10,799 --> 00:55:16,319
+how much it tokenizes each
+
+1286
+00:55:12,760 --> 00:55:19,079
+language and on the very left we have
+
+1287
+00:55:16,319 --> 00:55:22,839
+llama and so what we can see is that
+
+1288
+00:55:19,079 --> 00:55:26,599
+llama tokenizes Thai
+
+1289
+00:55:22,839 --> 00:55:28,640
+3.7 times as much as XLM-R does so
+
+1290
+00:55:26,599 --> 00:55:30,359
+it's basically splitting Thai up
+
+1291
+00:55:28,640 --> 00:55:32,480
+into little tiny bits which makes it
+
+1292
+00:55:30,359 --> 00:55:35,440
+very expensive and ineffective to
+
+1293
+00:55:32,480 --> 00:55:38,039
+process uh let's let's find some other
+
+1294
+00:55:35,440 --> 00:55:41,599
+languages that we care about we have
+
+1295
+00:55:38,039 --> 00:55:43,760
+Hebrew Arabic
+
+1296
+00:55:41,599 --> 00:55:47,079
+Korean uh
+
+1297
+00:55:43,760 --> 00:55:49,559
+Japanese uh Chinese so all of these you
+
+1298
+00:55:47,079 --> 00:55:52,319
+can see are split up into many
+
+1299
+00:55:49,559 --> 00:55:55,440
+many different chunks by
+
+1300
+00:55:52,319 --> 00:55:56,799
+Llama and then we we have a few other
+
+1301
+00:55:55,440 --> 00:55:58,359
+language models in the middle and then
+
+1302
+00:55:56,799 --> 00:56:01,440
+we have Qwen on the right side and what
+
+1303
+00:55:58,359 --> 00:56:04,039
+we can see is basically it's pretty
+
+1304
+00:56:01,440 --> 00:56:06,400
+comparable to XLM-R maybe a little bit
+
+1305
+00:56:04,039 --> 00:56:09,520
+more than XLM-R but pretty comparable to
+
+1306
+00:56:06,400 --> 00:56:12,839
+XLM-R on many languages and then on code
+
+1307
+00:56:09,520 --> 00:56:15,000
+it actually um splits up code much less
+
+1308
+00:56:12,839 --> 00:56:17,039
+so we can see that you know its
+
+1309
+00:56:15,000 --> 00:56:18,960
+tokenizer is heavily
+
+1310
+00:56:17,039 --> 00:56:22,640
+multilingual um another thing I'd like
+
+1311
+00:56:18,960 --> 00:56:24,640
+to point out is um I I'm focusing
+
+1312
+00:56:22,640 --> 00:56:27,000
+on this particular language model for a
+
+1313
+00:56:24,640 --> 00:56:29,799
+number of reasons
+
+1314
+00:56:27,000 --> 00:56:32,440
+um the first one is multilinguality and
+
+1315
+00:56:29,799 --> 00:56:36,599
+I I like multilinguality I hope other
+
+1316
+00:56:32,440 --> 00:56:39,039
+people like multilinguality too um but
+
+1317
+00:56:36,599 --> 00:56:43,799
+another motivation is just it has quite
+
+1318
+00:56:39,039 --> 00:56:45,680
+strong performance and it's uh topping
+
+1319
+00:56:43,799 --> 00:56:47,960
+topping the leaderboards in in several
+
+1320
+00:56:45,680 --> 00:56:52,160
+different uh
+
+1321
+00:56:47,960 --> 00:56:57,640
+places so if we look at the Open LLM
+
+1322
+00:56:52,160 --> 00:56:57,640
+leaderboard um at least recently
+
+1323
+00:56:59,480 --> 00:57:07,440
+this was a fine-tuned model by Abacus
+
+1324
+00:57:04,240 --> 00:57:09,440
+AI which was uh originally based on Qwen
+
+1325
+00:57:07,440 --> 00:57:11,079
+so you can see that this is like a
+
+1326
+00:57:09,440 --> 00:57:13,920
+strong foundation model that lots
+
+1327
+00:57:11,079 --> 00:57:16,440
+of people are using for fine-tuning things so
+
+1328
+00:57:13,920 --> 00:57:18,960
+um I would definitely uh encourage you
+
+1329
+00:57:16,440 --> 00:57:20,240
+to take a look at that too of course
+
+1330
+00:57:18,960 --> 00:57:22,520
+there's many many different models that
+
+1331
+00:57:20,240 --> 00:57:24,880
+I didn't cover because if I covered all
+
+1332
+00:57:22,520 --> 00:57:26,839
+of the general purpose models then we'd
+
+1333
+00:57:24,880 --> 00:57:29,599
+be here all day but um
+
+1334
+00:57:26,839 --> 00:57:31,200
+that's a first start so next I want to
+
+1335
+00:57:29,599 --> 00:57:33,200
+go into other kind of special purpose
+
+1336
+00:57:31,200 --> 00:57:36,839
+models but are there any questions about
+
+1337
+00:57:33,200 --> 00:57:36,839
+um about the things I covered so
+
+1338
+00:57:38,000 --> 00:57:44,079
+far cool okay
+
+1339
+00:57:41,440 --> 00:57:47,960
+um so next I'd like to go into other
+
+1340
+00:57:44,079 --> 00:57:49,760
+models um first is code models so code
+
+1341
+00:57:47,960 --> 00:57:52,680
+models are models that were specifically
+
+1342
+00:57:49,760 --> 00:57:55,280
+trained on code actually right now every
+
+1343
+00:57:52,680 --> 00:57:56,960
+model is a code model um like nobody
+
+1344
+00:57:55,280 --> 00:57:58,799
+pre-trains a large language model and is
+
+1345
+00:57:56,960 --> 00:58:01,720
+serious about it and doesn't train on
+
+1346
+00:57:58,799 --> 00:58:04,680
+code because um generating code is a
+
+1347
+00:58:01,720 --> 00:58:06,680
+huge use case and also um some work has
+
+1348
+00:58:04,680 --> 00:58:08,880
+demonstrated that training on code
+
+1349
+00:58:06,680 --> 00:58:13,720
+seems to improve reasoning abilities of
+
+1350
+00:58:08,880 --> 00:58:16,160
+language models as well um but uh these
+
+1351
+00:58:13,720 --> 00:58:19,319
+models were very heavily
trained on code
+
+1352
+00:58:16,160 --> 00:58:22,400
+so um we have StarCoder2 this is a
+
+1353
+00:58:19,319 --> 00:58:24,079
+very recent uh entry this is a fully
+
+1354
+00:58:22,400 --> 00:58:26,720
+open model so you can see the data it
+
+1355
+00:58:24,079 --> 00:58:29,039
+was trained on um all the training
+
+1356
+00:58:26,720 --> 00:58:31,640
+details are released and other stuff
+
+1357
+00:58:29,039 --> 00:58:36,760
+like that so this is kind of in the
+
+1358
+00:58:31,640 --> 00:58:38,599
+Pythia you know fully open category but it's
+
+1359
+00:58:36,760 --> 00:58:41,240
+very uh it's actually a very strong
+
+1360
+00:58:38,599 --> 00:58:42,839
+model very good model so it's uh a good
+
+1361
+00:58:41,240 --> 00:58:46,480
+one to know
+
+1362
+00:58:42,839 --> 00:58:48,680
+about um separately there's Code Llama
+
+1363
+00:58:46,480 --> 00:58:52,520
+by Meta which is a code adaptation of
+
+1364
+00:58:48,680 --> 00:58:54,799
+Llama and uh it also gets quite
+
+1365
+00:58:52,520 --> 00:58:57,720
+good performance there's also another
+
+1366
+00:58:54,799 --> 00:58:59,760
+model uh called DeepSeek Coder I would say
+
+1367
+00:58:57,720 --> 00:59:01,720
+all three of these are topping some
+
+1368
+00:58:59,760 --> 00:59:03,119
+variety of leaderboard where DeepSeek
+
+1369
+00:59:01,720 --> 00:59:04,640
+maybe is topping a few more
+
+1370
+00:59:03,119 --> 00:59:06,319
+leaderboards than the other ones are but all
+
+1371
+00:59:04,640 --> 00:59:09,960
+of them are very competitive and might
+
+1372
+00:59:06,319 --> 00:59:11,680
+be the best in class for code things um
+
+1373
+00:59:09,960 --> 00:59:13,119
+I'm not talking very much about these
+
+1374
+00:59:11,680 --> 00:59:15,119
+because we're going to have a class on
+
+1375
+00:59:13,119 --> 00:59:18,280
+code generation and code related things
+
+1376
+00:59:15,119 --> 00:59:21,000
+later so um I'm not going to go into a
+
+1377
+00:59:18,280 --> 00:59:21,000
+lot of detail
+
+1378
+00:59:21,319 --> 00:59:27,839
+here another thing is about math models
+
+1379
+00:59:24,680 --> 00:59:31,960
+and so like one thing is large language
+
+1380
+00:59:27,839 --> 00:59:35,480
+models are not particularly good at math
+
+1381
+00:59:31,960 --> 00:59:38,839
+um so there are quite a few models that
+
+1382
+00:59:35,480 --> 00:59:40,200
+were trained specifically for math um
+
+1383
+00:59:38,839 --> 00:59:45,160
+the first one is
+
+1384
+00:59:40,200 --> 00:59:47,280
+Llemma um yes that is a pun um for like
+
+1385
+00:59:45,160 --> 00:59:49,920
+Llama from
+
+1386
+00:59:47,280 --> 00:59:51,160
+math I'm not responsible for it
+
+1387
+00:59:49,920 --> 00:59:55,240
+but I thought it was kind of funny
+
+1388
+00:59:51,160 --> 00:59:56,920
+anyway um so uh this was by EleutherAI so
+
+1389
+00:59:55,240 --> 01:00:00,359
+because this was by Eleuther again this is
+
+1390
+00:59:56,920 --> 01:00:03,640
+a fully open model all the data is open
+
+1391
+01:00:00,359 --> 01:00:05,960
+um everything is known about it um also
+
+1392
+01:00:03,640 --> 01:00:08,480
+uh our very own Sean Welleck was one of
+
+1393
+01:00:05,960 --> 01:00:10,559
+the contributors to it uh so if you want
+
+1394
+01:00:08,480 --> 01:00:13,839
+to know more about Llemma you can go bother
+
+1395
+01:00:10,559 --> 01:00:17,440
+Sean so uh that's another thing that I
+
+1396
+01:00:13,839 --> 01:00:19,240
+should mention um another thing is Deep
+
+1397
+01:00:17,440 --> 01:00:20,839
+Seek who made the DeepSeek Coder model
+
+1398
+01:00:19,240 --> 01:00:23,480
+has also created a very strong math
+
+1399
+01:00:20,839 --> 01:00:26,200
+model uh that's competitive with GPT-4 on
+
+1400
+01:00:23,480 --> 01:00:28,160
+a lot of math things uh basically the
+
+1401
+01:00:26,200 --> 01:00:30,480
+way they did this was by
+
+1402
+01:00:28,160 --> 01:00:32,559
+um training a classifier to try to
+
+1403
+01:00:30,480 --> 01:00:34,640
+identify data on the web that is
related
+
+1404
+01:00:32,559 --> 01:00:37,599
+to math and scraping all of that data
+
+1405
+01:00:34,640 --> 01:00:39,960
+and fine tuning on it so um you can get
+
+1406
+01:00:37,599 --> 01:00:42,280
+gold standard data from like Proof Pile
+
+1407
+01:00:39,960 --> 01:00:44,359
+and a whole bunch of other sources and
+
+1408
+01:00:42,280 --> 01:00:46,200
+so they trained a like math-or-not-math
+
+1409
+01:00:44,359 --> 01:00:48,400
+classifier and harvested a lot of
+
+1410
+01:00:46,200 --> 01:00:52,400
+math related
+
+1411
+01:00:48,400 --> 01:00:52,400
+data yeah
+
+1412
+01:00:59,880 --> 01:01:04,920
+it's mostly data sets um I
+
+1413
+01:01:03,599 --> 01:01:07,119
+actually might be talking a little bit
+
+1414
+01:01:04,920 --> 01:01:10,039
+more about these in the reasoning class
+
+1415
+01:01:07,119 --> 01:01:11,799
+and I did a lot of uh I did a lot of
+
+1416
+01:01:10,039 --> 01:01:13,599
+prep to create these slides and actually
+
+1417
+01:01:11,799 --> 01:01:15,680
+ran out of time to do the math stuff so
+
+1418
+01:01:13,599 --> 01:01:17,200
+I might talk about it later um but I
+
+1419
+01:01:15,680 --> 01:01:18,480
+don't think they're really doing a lot
+
+1420
+01:01:17,200 --> 01:01:21,799
+of things like you could think of
+
+1421
+01:01:18,480 --> 01:01:23,440
+obvious things like doing RL, RLHF based
+
+1422
+01:01:21,799 --> 01:01:26,799
+on like whether it gets the answer right
+
+1423
+01:01:23,440 --> 01:01:28,559
+or not in the end um as far as I know
+
+1424
+01:01:26,799 --> 01:01:30,359
+that's not a big ingredient here but
+
+1425
+01:01:28,559 --> 01:01:31,920
+I'll be more sure of that when we talk
+
+1426
+01:01:30,359 --> 01:01:37,599
+about it
+
+1427
+01:01:31,920 --> 01:01:39,559
+later um cool and a final one uh it's
+
+1428
+01:01:37,599 --> 01:01:43,200
+not a "sci" model it's a science model
+
+1429
+01:01:39,559 --> 01:01:45,920
+sorry for the typo um but uh this model
+
+1430
+01:01:43,200 -->
01:01:49,160
+Galactica um was a model for science
+
+1431
+01:01:45,920 --> 01:01:51,799
+that was trained by Meta
+
+1432
+01:01:49,160 --> 01:01:54,359
+um does anyone remember this model or
+
+1433
+01:01:51,799 --> 01:01:58,079
+was anybody around when this model came
+
+1434
+01:01:54,359 --> 01:01:59,640
+out no there was a big uh a big PR
+
+1435
+01:01:58,079 --> 01:02:01,160
+disaster for Meta when they released
+
+1436
+01:01:59,640 --> 01:02:03,480
+this model because they said this is a
+
+1437
+01:02:01,160 --> 01:02:05,520
+great model for math use it in
+
+1438
+01:02:03,480 --> 01:02:08,599
+writing your science paper sorry this is
+
+1439
+01:02:05,520 --> 01:02:10,480
+a great model for science try using
+
+1440
+01:02:08,599 --> 01:02:12,640
+it in your science papers and this came
+
+1441
+01:02:10,480 --> 01:02:14,839
+out about two years ago and two years
+
+1442
+01:02:12,640 --> 01:02:16,640
+ago language models hallucinated all the
+
+1443
+01:02:14,839 --> 01:02:19,279
+time and came up with false scientific
+
+1444
+01:02:16,640 --> 01:02:22,039
+facts and stuff and so basically um a
+
+1445
+01:02:19,279 --> 01:02:25,680
+lot of people kind of bashed this model
+
+1446
+01:02:22,039 --> 01:02:27,440
+uh in my mind kind of unfairly because
+
+1447
+01:02:25,680 --> 01:02:31,200
+they actually have a lot of really
+
+1448
+01:02:27,440 --> 01:02:32,960
+interesting things in this paper um one
+
+1449
+01:02:31,200 --> 01:02:34,720
+interesting thing in this paper is they
+
+1450
+01:02:32,960 --> 01:02:37,000
+tried to create a general purpose model
+
+1451
+01:02:34,720 --> 01:02:38,960
+for science that's able to understand
+
+1452
+01:02:37,000 --> 01:02:41,960
+not only text but also various
+
+1453
+01:02:38,960 --> 01:02:47,720
+modalities of scientific data and so
+
+1454
+01:02:41,960 --> 01:02:51,000
+that includes text it includes LaTeX um
+
+1455
+01:02:47,720 --> 01:02:53,799
+you know equations it includes code but
+
+1456
+01:02:51,000 --> 01:02:58,559
+it also included things like molecular
+
+1457
+01:02:53,799 --> 01:03:01,799
+structures and uh like collagens and DNA
+
+1458
+01:02:58,559 --> 01:03:04,160
+and stuff like this so they tried to
+
+1459
+01:03:01,799 --> 01:03:06,160
+like model biology and other things like
+
+1460
+01:03:04,160 --> 01:03:08,079
+this as well so I think it's really
+
+1461
+01:03:06,160 --> 01:03:10,640
+kind of too bad that this model got a
+
+1462
+01:03:08,079 --> 01:03:12,400
+bad rap because I really like the you
+
+1463
+01:03:10,640 --> 01:03:14,839
+know the work that went into it and I
+
+1464
+01:03:12,400 --> 01:03:16,359
+hope we'll see more of this um because
+
+1465
+01:03:14,839 --> 01:03:17,640
+language models for science is a really
+
+1466
+01:03:16,359 --> 01:03:19,880
+big topic that a lot of people are
+
+1467
+01:03:17,640 --> 01:03:19,880
+thinking
+
+1468
+01:03:20,760 --> 01:03:24,240
+about
+
+1469
+01:03:22,400 --> 01:03:26,440
+cool
+
+1470
+01:03:24,240 --> 01:03:28,000
+um one thing I didn't talk about is
+
+1471
+01:03:26,440 --> 01:03:29,880
+multimodal models but I hope to talk
+
+1472
+01:03:28,000 --> 01:03:32,440
+about multimodal models in a future
+
+1473
+01:03:29,880 --> 01:03:33,359
+class so um I'll talk more about
+
+1474
+01:03:32,440 --> 01:03:38,680
+that
+
+1475
+01:03:33,359 --> 01:03:41,640
+soon um the next thing is closed models um
+
+1476
+01:03:38,680 --> 01:03:44,480
+so closed models we don't know a whole lot
+
+1477
+01:03:41,640 --> 01:03:46,880
+about them uh most of what we know about
+
+1478
+01:03:44,480 --> 01:03:49,480
+them like their training data and other
+
+1479
+01:03:46,880 --> 01:03:52,359
+things like that is uh is
+
+1480
+01:03:49,480 --> 01:03:54,720
+conjecture so the
+
+1481
+01:03:52,359 --> 01:03:57,839
+standard the standard format for
+
+1482
+01:03:54,720 --> 01:03:59,599
+releasing a closed model or not
+
+1483
+01:03:57,839 --> 01:04:02,160
+releasing but you know publicizing a
+
+1484
+01:03:59,599 --> 01:04:04,279
+closed model is people will write a blog
+
+1485
+01:04:02,160 --> 01:04:05,960
+post and they'll write a paper and
+
+1486
+01:04:04,279 --> 01:04:07,720
+generally what the paper does is it only
+
+1487
+01:04:05,960 --> 01:04:09,559
+talks about evaluation it only talks
+
+1488
+01:04:07,720 --> 01:04:12,039
+about like how good the model is on
+
+1489
+01:04:09,559 --> 01:04:13,799
+various things how safe it is how they
+
+1490
+01:04:12,039 --> 01:04:16,279
+put a lot of effort into red teaming the
+
+1491
+01:04:13,799 --> 01:04:17,680
+model uh so that it doesn't do bad
+
+1492
+01:04:16,279 --> 01:04:18,839
+things and stuff like that and it tells
+
+1493
+01:04:17,680 --> 01:04:21,119
+you nothing about how they actually
+
+1494
+01:04:18,839 --> 01:04:23,279
+built the model so mostly like what I
+
+1495
+01:04:21,119 --> 01:04:26,279
+can talk about are capabilities as
+
+1496
+01:04:23,279 --> 01:04:28,520
+opposed to um
+
+1497
+01:04:26,279 --> 01:04:32,440
+as opposed
+
+1498
+01:04:28,520 --> 01:04:35,319
+to like what actually went into the
+
+1499
+01:04:32,440 --> 01:04:38,920
+model so um there's
+
+1500
+01:04:35,319 --> 01:04:40,880
+GPT-4 um GPT-4 I think everybody knows it's
+
+1501
+01:04:38,920 --> 01:04:43,640
+kind of the de facto standard strong
+
+1502
+01:04:40,880 --> 01:04:45,680
+language model it used to be the only
+
+1503
+01:04:43,640 --> 01:04:47,680
+strong language model like it used to be
+
+1504
+01:04:45,680 --> 01:04:50,079
+on its own the strongest language model
+
+1505
+01:04:47,680 --> 01:04:53,160
+and there were no real competitors to
+
+1506
+01:04:50,079 --> 01:04:55,000
+GPT-4 from that point of view I think
+
+1507
+01:04:53,160 --> 01:04:56,680
+still if I wanted a strong language
+
+1508
+01:04:55,000 --> 01:04:58,960
+model for just something that I'm
+
+1509
+01:04:56,680 --> 01:05:00,880
+going to do
randomly I still rely on, I
+
+1510
+01:04:58,960 --> 01:05:03,680
+still trust GPT-4 more than anything else
+
+1511
+01:05:00,880 --> 01:05:05,240
+to give me a really good answer um but
+
+1512
+01:05:03,680 --> 01:05:08,480
+there are now other competitors I'd like
+
+1513
+01:05:05,240 --> 01:05:11,960
+to talk about so GPT-4 anyway um you know
+
+1514
+01:05:08,480 --> 01:05:14,240
+it powers the pro version of ChatGPT it
+
+1515
+01:05:11,960 --> 01:05:18,039
+was tuned to be good as a chat-based
+
+1516
+01:05:14,240 --> 01:05:20,440
+assistant um it accepts image inputs uh
+
+1517
+01:05:18,039 --> 01:05:22,279
+and it supports calling external tools
+
+1518
+01:05:20,440 --> 01:05:23,599
+through function calling uh through a
+
+1519
+01:05:22,279 --> 01:05:27,119
+function calling
+
+1520
+01:05:23,599 --> 01:05:28,720
+interface um
+
+1521
+01:05:27,119 --> 01:05:30,599
+I think people are generally
+
+1522
+01:05:28,720 --> 01:05:34,000
+familiar with this but just in case
+
+1523
+01:05:30,599 --> 01:05:36,240
+you're not um I'd like to show a few
+
+1524
+01:05:34,000 --> 01:05:38,039
+things that I like to
+
+1525
+01:05:36,240 --> 01:05:39,640
+do
+
+1526
+01:05:38,039 --> 01:05:42,760
+so let
+
+1527
+01:05:39,640 --> 01:05:42,760
+[Music]
+
+1528
+01:05:46,920 --> 01:05:52,480
+me so I'll just randomly grab one of my
+
+1529
+01:05:50,440 --> 01:05:57,640
+papers from
+
+1530
+01:05:52,480 --> 01:05:57,640
+arXiv um my most recent paper
+
+1531
+01:06:03,400 --> 01:06:07,559
+and I can copy paste
+
+1532
+01:06:13,200 --> 01:06:22,240
+this and write uh turn this into JSON
+
+1533
+01:06:19,240 --> 01:06:22,240
+format
+
+1534
+01:06:27,960 --> 01:06:31,640
+and I drop it in
+
+1535
+01:06:29,880 --> 01:06:35,480
+here
+
+1536
+01:06:31,640 --> 01:06:38,279
+and so this is an exhibit of its like
+
+1537
+01:06:35,480 --> 01:06:42,240
+multimodal abilities because I can throw
+
+1538
+01:06:38,279 --> 01:06:44,359
+in a uh in a
+
+1539
+01:06:42,240 --> 01:06:48,400
+table and it basically turns it into
+
+1540
+01:06:44,359 --> 01:06:50,599
+JSON format so um I actually turned
+
+1541
+01:06:48,400 --> 01:06:52,119
+a fair amount of data that I
+
+1542
+01:06:50,599 --> 01:06:53,960
+created in creating these slides into
+
+1543
+01:06:52,119 --> 01:06:56,039
+JSON format so I can save it later for
+
+1544
+01:06:53,960 --> 01:06:59,079
+whatever I want it for and I did it
+
+1545
+01:06:56,039 --> 01:07:01,720
+through uh this so this is an example of
+
+1546
+01:06:59,079 --> 01:07:06,599
+the multimodal abilities it can also tell
+
+1547
+01:07:01,720 --> 01:07:06,599
+you about images and stuff like that
+
+1548
+01:07:07,000 --> 01:07:14,319
+um so also um there was a famous article
+
+1549
+01:07:11,760 --> 01:07:16,760
+written by Gary Marcus that said deep
+
+1550
+01:07:14,319 --> 01:07:19,760
+learning is hitting a wall um it
+
+1551
+01:07:16,760 --> 01:07:22,880
+basically was written two years ago and
+
+1552
+01:07:19,760 --> 01:07:25,160
+uh Gary Marcus was saying deep learning
+
+1553
+01:07:22,880 --> 01:07:26,200
+doesn't uh you know is not the way for
+
+1554
+01:07:25,160 --> 01:07:27,760
+the future sure we're going to need
+
+1555
+01:07:26,200 --> 01:07:31,319
+things other than deep learning in order
+
+1556
+01:07:27,760 --> 01:07:34,559
+to uh you know be able to uh make
+
+1557
+01:07:31,319 --> 01:07:36,400
+progress and whether you believe
+
+1558
+01:07:34,559 --> 01:07:40,520
+that is true or not I will leave you to
+
+1559
+01:07:36,400 --> 01:07:46,520
+your own opinion um but uh I could also
+
+1560
+01:07:40,520 --> 01:07:51,359
+say uh create a picture of deep learning
+
+1561
+01:07:46,520 --> 01:07:55,400
+breaking through a brick wall and it can
+
+1562
+01:07:51,359 --> 01:07:55,400
+generate images for you
+
+1563
+01:08:02,599 --> 01:08:07,440
+of course if you ever do a live demo even
+
+1564
+01:08:05,319 --> 01:08:10,319
+if it's a live demo
of an OpenAI product
+
+1565
+01:08:07,440 --> 01:08:13,559
+that a million people use it will break
+
+1566
+01:08:10,319 --> 01:08:16,719
+when you try to do it so um so this is
+
+1567
+01:08:13,559 --> 01:08:17,799
+another uh thing that it can do so there
+
+1568
+01:08:16,719 --> 01:08:19,560
+we have a picture of deep learning
+
+1569
+01:08:17,799 --> 01:08:22,640
+breaking through a brick wall and it can
+
+1570
+01:08:19,560 --> 01:08:26,159
+you know generate images and stuff so
+
+1571
+01:08:22,640 --> 01:08:28,560
+these are like the kinds of things that
+
+1572
+01:08:26,159 --> 01:08:30,960
+I now
+
+1573
+01:08:28,560 --> 01:08:32,880
+expect so it's not just like reasoning
+
+1574
+01:08:30,960 --> 01:08:35,839
+ability and other stuff like that it's
+
+1575
+01:08:32,880 --> 01:08:39,199
+also multimodality being able to
+
+1576
+01:08:35,839 --> 01:08:43,679
+generate code um another thing that's
+
+1577
+01:08:39,199 --> 01:08:46,719
+kind of nice um is make a
+
+1578
+01:08:43,679 --> 01:08:49,440
+histogram of these
+
+1579
+01:08:46,719 --> 01:08:54,640
+numbers one
+
+1580
+01:08:49,440 --> 01:08:54,640
+two one two four
+
+1581
+01:08:57,600 --> 01:09:04,040
+so it can do code generation and
+
+1582
+01:08:59,719 --> 01:09:05,560
+display the results for you um there are
+
+1583
+01:09:04,040 --> 01:09:08,319
+efforts to
+
+1584
+01:09:05,560 --> 01:09:12,239
+make open source language models be able
+
+1585
+01:09:08,319 --> 01:09:14,000
+to do these things and um in order to do
+
+1586
+01:09:12,239 --> 01:09:16,759
+this you need multimodality you need
+
+1587
+01:09:14,000 --> 01:09:19,359
+also the ability to use tools so
+
+1588
+01:09:16,759 --> 01:09:21,400
+actually the way that this um worked
+
+1589
+01:09:19,359 --> 01:09:24,520
+here is very different than the way that
+
+1590
+01:09:21,400 --> 01:09:27,920
+this worked so this is actually using an
+
+1591
+01:09:24,520 --> 01:09:29,759
+image input into GPT-4 so what
it's doing
+
+1592
+01:09:27,920 --> 01:09:33,040
+is it's encoding the image and then
+
+1593
+01:09:29,759 --> 01:09:34,719
+feeding it in as tokens into GPT-4 what
+
+1594
+01:09:33,040 --> 01:09:37,920
+this is doing here is this is rather
+
+1595
+01:09:34,719 --> 01:09:40,120
+calling a tool this is calling uh DALL-E 3
+
+1596
+01:09:37,920 --> 01:09:42,120
+as a tool and it's providing the caption
+
+1597
+01:09:40,120 --> 01:09:46,880
+to DALL-E 3 you can even see maybe the
+
+1598
+01:09:42,120 --> 01:09:46,880
+caption that was provided to
+
+1599
+01:09:48,640 --> 01:09:55,560
+DALL-E 3 you previously were able to
+
+1600
+01:09:51,239 --> 01:09:57,960
+do that um by maybe downloading yeah so
+
+1601
+01:09:55,560 --> 01:10:01,600
+you can see the
+
+1602
+01:09:57,960 --> 01:10:01,600
+caption uh which
+
+1603
+01:10:03,560 --> 01:10:08,120
+was a visual metaphor of deep learning
+
+1604
+01:10:06,320 --> 01:10:10,679
+is a powerful force breaking through a
+
+1605
+01:10:08,120 --> 01:10:13,400
+brick wall um or something like that and
+
+1606
+01:10:10,679 --> 01:10:15,480
+so GPT-4 basically what it did is it
+
+1607
+01:10:13,400 --> 01:10:18,000
+said it wanted to call a tool and then
+
+1608
+01:10:15,480 --> 01:10:19,360
+it provided the caption
+
+1609
+01:10:18,000 --> 01:10:21,280
+and then it called a completely
+
+1610
+01:10:19,360 --> 01:10:22,320
+separate tool as an API in order to
+
+1611
+01:10:21,280 --> 01:10:27,320
+generate the
+
+1612
+01:10:22,320 --> 01:10:27,320
+image so um yeah the final
+
+1613
+01:10:28,199 --> 01:10:34,080
+well I managed to break ChatGPT that's
+
+1614
+01:10:30,120 --> 01:10:36,520
+no small accomplishment um so but anyway
+
+1615
+01:10:34,080 --> 01:10:40,199
+these are some of the things that uh
+
+1616
+01:10:36,520 --> 01:10:42,360
+that the systems can do and because Open
+
+1617
+01:10:40,199 --> 01:10:47,000
+AI has kind of become a standard that a
+
+1618
+01:10:42,360 -->
01:10:50,040
+lot of people want to uh compete with um
+
+1619
+01:10:47,000 --> 01:10:53,480
+also I would say Gemini and Claude
+
+1620
+01:10:50,040 --> 01:10:56,400
+are maybe the two models that
+
+1621
+01:10:53,480 --> 01:10:59,440
+can compete with GPT-4 in terms of uh you
+
+1622
+01:10:56,400 --> 01:11:02,600
+know accuracy Gemini is a much newer
+
+1623
+01:10:59,440 --> 01:11:06,159
+model by Google that uh comes in two
+
+1624
+01:11:02,600 --> 01:11:08,280
+varieties Gemini Pro and Gemini Ultra uh
+
+1625
+01:11:06,159 --> 01:11:11,040
+one interesting thing about Gemini Pro
+
+1626
+01:11:08,280 --> 01:11:13,560
+is that it supports um very long inputs
+
+1627
+01:11:11,040 --> 01:11:15,679
+one to 10 million tokens it also
+
+1628
+01:11:13,560 --> 01:11:16,600
+supports image and video inputs and
+
+1629
+01:11:15,679 --> 01:11:20,239
+image
+
+1630
+01:11:16,600 --> 01:11:22,320
+outputs um I actually put a video into
+
+1631
+01:11:20,239 --> 01:11:24,600
+it recently and the video recognition
+
+1632
+01:11:22,320 --> 01:11:27,159
+capabilities are pretty nice so
+
+1633
+01:11:24,600 --> 01:11:29,280
+you can uh you can try that out if you
+
+1634
+01:11:27,159 --> 01:11:34,320
+want
+
+1635
+01:11:29,280 --> 01:11:36,640
+um and finally there's Claude uh Claude 3 it
+
+1636
+01:11:34,320 --> 01:11:39,280
+supports a context window of up to 200K
+
+1637
+01:11:36,640 --> 01:11:41,040
+also allows for processing images and
+
+1638
+01:11:39,280 --> 01:11:46,480
+overall has strong results competitive
+
+1639
+01:11:41,040 --> 01:11:49,880
+with GPT-4 so if you're looking for um if
+
+1640
+01:11:46,480 --> 01:11:51,480
+you're looking for models to use uh to
+
+1641
+01:11:49,880 --> 01:11:53,600
+try out better closed models you can
+
+1642
+01:11:51,480 --> 01:11:55,719
+definitely use these another thing I'm
+
+1643
+01:11:53,600 --> 01:11:58,239
+really excited about is how can we get
+
+1644
+01:11:55,719 --> 01:11:59,560
+like open models to you know demonstrate
+
+1645
+01:11:58,239 --> 01:12:01,320
+some of the interesting capabilities
+
+1646
+01:11:59,560 --> 01:12:02,840
+that we see in closed models so you know
+
+1647
+01:12:01,320 --> 01:12:07,120
+everybody can benefit and everybody
+
+1648
+01:12:02,840 --> 01:12:10,040
+knows uh you know uh the recipes to make
+
+1649
+01:12:07,120 --> 01:12:12,560
+models like this so I think that's
+
+1650
+01:12:10,040 --> 01:12:16,639
+mostly all I have for today another um
+
+1651
+01:12:12,560 --> 01:12:23,440
+another thing that is kind of neat
+
+1652
+01:12:16,639 --> 01:12:23,440
+is I just found this a little while ago
+
+1653
+01:12:28,800 --> 01:12:32,239
+but there is this uh
+
+1654
+01:12:33,320 --> 01:12:39,239
+interface uh called God Mode that
+
+1655
+01:12:36,880 --> 01:12:41,960
+allows you to put all of the chat apps
+
+1656
+01:12:39,239 --> 01:12:45,840
+next to each other and write the same
+
+1657
+01:12:41,960 --> 01:12:47,080
+chat query into them and uh get the
+
+1658
+01:12:45,840 --> 01:12:48,719
+result from all of them so you can
+
+1659
+01:12:47,080 --> 01:12:51,080
+actually compare all of them in kind of
+
+1660
+01:12:48,719 --> 01:12:52,840
+an interactive setting so if you want
+
+1661
+01:12:51,080 --> 01:12:54,800
+to look at all especially all of the
+
+1662
+01:12:52,840 --> 01:12:56,679
+closed models open models it's you know
+
+1663
+01:12:54,800 --> 01:12:58,239
+not too hard to do it yourself but if you
+
+1664
+01:12:56,679 --> 01:12:59,840
+want to try all of the closed models
+
+1665
+01:12:58,239 --> 01:13:01,800
+together you can do that and like log
+
+1666
+01:12:59,840 --> 01:13:03,960
+into all of your accounts and then press
+
+1667
+01:13:01,800 --> 01:13:05,320
+go on a query and see how they all do
+
+1668
+01:13:03,960 --> 01:13:07,960
+so
+
+1669
+01:13:05,320 --> 01:13:09,800
+um that might be a good way to compare
+
+1670
+01:13:07,960 --> 01:13:12,000
+all of
the models kind of qualitatively
+
+1671
+01:13:09,800 --> 01:13:14,679
+as opposed to
+
+1672
+01:13:12,000 --> 01:13:17,280
+quantitatively cool um that's all I have
+
+1673
+01:13:14,679 --> 01:13:19,440
+for today uh I don't know are there any
+
+1674
+01:13:17,280 --> 01:13:23,440
+questions or discussion or things like
+
+1675
+01:13:19,440 --> 01:13:23,440
+this yeah
+
+1676
+01:13:28,840 --> 01:13:35,679
+so a systematic way um the first thing
+
+1677
+01:13:32,760 --> 01:13:37,960
+you can do is look at the benchmark
+
+1678
+01:13:35,679 --> 01:13:40,800
+results that have been published but
+
+1679
+01:13:37,960 --> 01:13:43,320
+actually I would like to give a caveat
+
+1680
+01:13:40,800 --> 01:13:43,320
+about
+
+1681
+01:13:45,199 --> 01:13:48,440
+this which
+
+1682
+01:13:50,000 --> 01:13:54,000
+is um
+
+1683
+01:14:22,960 --> 01:14:28,239
+so these are the best benchmarking
+
+1684
+01:14:25,600 --> 01:14:30,840
+results from the Gemini
+
+1685
+01:14:28,239 --> 01:14:33,440
+paper um
+
+1686
+01:14:30,840 --> 01:14:36,719
+and they have a table here um and
+
+1687
+01:14:33,440 --> 01:14:38,679
+basically what they kind of obviously to
+
+1688
+01:14:36,719 --> 01:14:41,679
+me wanted to demonstrate is that Gemini
+
+1689
+01:14:38,679 --> 01:14:44,760
+was the best model out of all the models
+
+1690
+01:14:41,679 --> 01:14:47,800
+um and so they have Gemini Pro and
+
+1691
+01:14:44,760 --> 01:14:50,040
+Gemini Ultra and they put Gemini
+
+1692
+01:14:47,800 --> 01:14:52,639
+Ultra against GPT-4 and Gemini Pro against
+
+1693
+01:14:50,040 --> 01:14:56,360
+GPT-3.5 because they're you know
+
+1694
+01:14:52,639 --> 01:14:58,440
+comparable models um
+
+1695
+01:14:56,360 --> 01:15:01,880
+and they're yeah because they're
+
+1696
+01:14:58,440 --> 01:15:03,040
+comparable models basically and on
+
+1697
+01:15:01,880 --> 01:15:05,880
+things
+
+1698
+01:15:03,040 --> 01:15:07,400
+like um and they demonstrate that
+
+1699
+01:15:05,880 -->
01:15:08,199
+basically they're better in all of
+
+1700
+01:15:07,400 --> 01:15:10,520
+these
+
+1701
+01:15:08,199 --> 01:15:14,760
+situations however there's a few details
+
+1702
+01:15:10,520 --> 01:15:17,120
+the first detail is um that the method
+
+1703
+01:15:14,760 --> 01:15:20,199
+that they're using to prompt the model
+
+1704
+01:15:17,120 --> 01:15:22,120
+is different here so we have like 94.4
+
+1705
+01:15:20,199 --> 01:15:23,560
+versus 92 but the method they're using
+
+1706
+01:15:22,120 --> 01:15:25,520
+to prompt the model is different they're
+
+1707
+01:15:23,560 --> 01:15:29,159
+using chain of thought at
+
+1708
+01:15:25,520 --> 01:15:33,320
+32 and then basically uh getting the
+
+1709
+01:15:29,159 --> 01:15:36,320
+best from 32 and then another thing
+
+1710
+01:15:33,320 --> 01:15:41,360
+is if we look at this HumanEval
+
+1711
+01:15:36,320 --> 01:15:44,120
+performance here um they reported their
+
+1712
+01:15:41,360 --> 01:15:47,000
+HumanEval performance then they pulled
+
+1713
+01:15:44,120 --> 01:15:49,400
+the number from the original GPT-4 paper
+
+1714
+01:15:47,000 --> 01:15:53,159
+and compared to the number from the GPT-4
+
+1715
+01:15:49,400 --> 01:15:54,639
+paper but all of these um you know APIs
+
+1716
+01:15:53,159 --> 01:15:57,719
+are constantly changing they're getting
+
+1717
+01:15:54,639 --> 01:15:59,480
+better and better so we went um I was
+
+1718
+01:15:57,719 --> 01:16:01,400
+very excited when Gemini first came out
+
+1719
+01:15:59,480 --> 01:16:03,120
+and we actually wrote a paper where we
+
+1720
+01:16:01,400 --> 01:16:05,320
+tried to look deeper into the
+
+1721
+01:16:03,120 --> 01:16:08,000
+performance and what we actually found
+
+1722
+01:16:05,320 --> 01:16:10,199
+is comparing Gemini Pro and GPT-3.5
+
+1723
+01:16:08,000 --> 01:16:12,719
+Turbo which should be comparable we
+
+1724
+01:16:10,199 --> 01:16:16,120
+found that actually GPT-3.5 Turbo did a
+
+1725
+01:16:12,719 --> 01:16:19,280
+little bit better um in most cases
+
+1726
+01:16:16,120 --> 01:16:20,920
+although not all cases and one of the
+
+1727
+01:16:19,280 --> 01:16:24,000
+things we noticed in particular is like
+
+1728
+01:16:20,920 --> 01:16:27,960
+on HumanEval GPT-3.5 had gotten like much
+
+1729
+01:16:24,000 --> 01:16:29,760
+much better over the course of uh like
+
+1730
+01:16:27,960 --> 01:16:31,639
+the time since the original paper was
+
+1731
+01:16:29,760 --> 01:16:34,120
+reported it had gone up by almost 30
+
+1732
+01:16:31,639 --> 01:16:35,760
+points and also in a few cases we had
+
+1733
+01:16:34,120 --> 01:16:37,480
+like a little bit of trouble reproducing
+
+1734
+01:16:35,760 --> 01:16:39,280
+the Gemini Pro results just because they
+
+1735
+01:16:37,480 --> 01:16:40,360
+had like safety filters and other stuff
+
+1736
+01:16:39,280 --> 01:16:42,520
+like that that we had to get around
+
+1737
+01:16:40,360 --> 01:16:45,280
+before we got the results so it's not
+
+1738
+01:16:42,520 --> 01:16:49,560
+necessarily the case that you can
+
+1739
+01:16:45,280 --> 01:16:52,639
+completely take the um the
+
+1740
+01:16:49,560 --> 01:16:55,560
+results at face
+
+1741
+01:16:52,639 --> 01:16:57,040
+value actually as a first step I would
+
+1742
+01:16:55,560 --> 01:17:00,080
+suggest just trying to chat with the
+
+1743
+01:16:57,040 --> 01:17:03,719
+model um which is also why I introduced
+
+1744
+01:17:00,080 --> 01:17:06,679
+the like quote unquote God Mode uh like
+
+1745
+01:17:03,719 --> 01:17:09,159
+browser because like you can kind of
+
+1746
+01:17:06,679 --> 01:17:10,639
+tell when it like when something's way
+
+1747
+01:17:09,159 --> 01:17:14,320
+better than another one just by the
+
+1748
+01:17:10,639 --> 01:17:17,159
+responses it gives um separately if you want
+
+1749
+01:17:14,320 --> 01:17:17,159
+to do it much more
+
+1750
+01:17:20,199 --> 01:17:23,840
+systematically there are really nice
+
+1751
+01:17:22,360
--> 01:17:25,400
+tools for evaluation I think I might
+
+1752
+01:17:23,840 --> 01:17:26,960
+have talked about this before but if I
+
+1753
+01:17:25,400 --> 01:17:29,280
+haven't then you should definitely take
+
+1754
+01:17:26,960 --> 01:17:31,880
+a look at this there's the Eleuther
+
+1755
+01:17:29,280 --> 01:17:34,040
+evaluation harness and the Eleuther
+
+1756
+01:17:31,880 --> 01:17:35,679
+evaluation harness makes it really easy
+
+1757
+01:17:34,040 --> 01:17:37,600
+to evaluate for example Hugging Face
+
+1758
+01:17:35,679 --> 01:17:39,040
+models against many many different tasks
+
+1759
+01:17:37,600 --> 01:17:41,360
+so you can just pick which task you want
+
+1760
+01:17:39,040 --> 01:17:43,719
+to evaluate against pick the model name
+
+1761
+01:17:41,360 --> 01:17:47,400
+and go and you can get evaluation
+
+1762
+01:17:43,719 --> 01:17:51,960
+results um that won't necessarily work
+
+1763
+01:17:47,400 --> 01:17:53,960
+for closed models um but if you look for
+
+1764
+01:17:51,960 --> 01:17:55,480
+EleutherAI language model evaluation harness
+
+1765
+01:17:53,960 --> 01:17:58,800
+that's maybe the easiest way to run
+
+1766
+01:17:55,480 --> 01:17:58,800
+evaluations uh for
+
+1767
+01:17:59,239 --> 01:18:05,239
+LLMs cool okay um so we're at time
+
+1768
+01:18:02,960 --> 01:18:07,480
+now uh but I'd be happy to answer a few
+
+1769
+01:18:05,239 --> 01:18:10,639
+questions if anybody else has any so
+
+1770
+01:18:07,480 --> 01:18:10,639
+thank you
\ No newline at end of file
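[Editor's note: the EleutherAI harness mentioned at the end of the lecture is the `lm-evaluation-harness` package. A minimal invocation looks roughly like the sketch below; the model and task names are only examples, and exact flag names can differ between harness versions.]

```shell
# Sketch of the "pick a task, pick a model, and go" workflow described above.
# pythia-1.4b and the two tasks are illustrative choices, not recommendations.
pip install lm-eval

# See which tasks are available.
lm_eval --tasks list

# Evaluate a Hugging Face model on a couple of tasks.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks arc_easy,hellaswag \
    --batch_size 8
```

As noted in the lecture, this path works for open-weights models; closed API models need separate backends or their own evaluation tooling.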