diff --git "a/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt" "b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt" new file mode 100644--- /dev/null +++ "b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt" @@ -0,0 +1,6599 @@ +1 +00:00:00,760 --> 00:00:07,240 +he everyone so I'd like to get + +2 +00:00:03,279 --> 00:00:09,320 +started the first thing is that um I + +3 +00:00:07,240 --> 00:00:11,160 +heard from the adws people that they + +4 +00:00:09,320 --> 00:00:14,440 +started the + +5 +00:00:11,160 --> 00:00:17,840 +process of + +6 +00:00:14,440 --> 00:00:19,400 +getting things issued on the 26th which + +7 +00:00:17,840 --> 00:00:21,480 +is three days ago so you should be + +8 +00:00:19,400 --> 00:00:23,560 +getting it soon uh for reference I + +9 +00:00:21,480 --> 00:00:25,599 +submitted the form about seven days + +10 +00:00:23,560 --> 00:00:28,359 +before that so they're moving very + +11 +00:00:25,599 --> 00:00:29,599 +slowly but I think you should have AWS + +12 +00:00:28,359 --> 00:00:31,920 +credits by the end of the week if you + +13 +00:00:29,599 --> 00:00:35,120 +need them to run uh GPU machines or + +14 +00:00:31,920 --> 00:00:37,960 +stuff like that the moment you get AWS + +15 +00:00:35,120 --> 00:00:39,960 +credits or maybe even before you get AWS + +16 +00:00:37,960 --> 00:00:43,320 +credits I might suggest that you try to + +17 +00:00:39,960 --> 00:00:46,760 +start uh a GPU machine like a P2 machine + +18 +00:00:43,320 --> 00:00:49,160 +or something like that because um + +19 +00:00:46,760 --> 00:00:51,760 +sometimes you need to file for a limit + +20 +00:00:49,160 --> 00:00:53,640 +increase uh to get a P2 machine and that + +21 +00:00:51,760 --> 00:00:55,879 +also takes a little bit of time so I I + +22 +00:00:53,640 --> 00:00:59,160 +would suggest that you uh you take a + +23 +00:00:55,879 --> 00:01:01,160 +look at doing that um so you go to like + +24 +00:00:59,160 --> 00:01:02,800 +if you're using AWS if you're not using + +25 +00:01:01,160 --> 00:01:05,119 +AWS it doesn't matter but if you're + +26 +00:01:02,800 --> 00:01:08,119 +using AWS you can go to launch instance + +27 +00:01:05,119 --> 00:01:11,520 +and try to launch a p2x large machine um + +28 +00:01:08,119 --> 00:01:13,159 +or something like that so uh but yeah + +29 +00:01:11,520 --> 00:01:14,920 +anyway hopefully that will be done soon + +30 +00:01:13,159 --> 00:01:16,600 +I'm sorry about the delay on this they + +31 +00:01:14,920 --> 00:01:21,400 +said it would take seven days and it's + +32 +00:01:16,600 --> 00:01:24,280 +taken almost twice at now so um my + +33 +00:01:21,400 --> 00:01:26,439 +apologies any other uh things before we + +34 +00:01:24,280 --> 00:01:26,439 +get + +35 +00:01:28,759 --> 00:01:34,520 +started um okay I I don't see any so + +36 +00:01:31,920 --> 00:01:37,280 +I'll go ahead with this um I have + +37 +00:01:34,520 --> 00:01:39,240 +slightly fewer slides today so I might + +38 +00:01:37,280 --> 00:01:40,960 +go a little bit off the slides and talk + +39 +00:01:39,240 --> 00:01:44,759 +about papers and stuff or we might + +40 +00:01:40,960 --> 00:01:46,920 +finish early uh either way so um but + +41 +00:01:44,759 --> 00:01:48,439 +what I would like to talk about is um + +42 +00:01:46,920 --> 00:01:53,320 +combining multiple + +43 +00:01:48,439 --> 00:01:55,479 +models and this is uh really important + +44 +00:01:53,320 --> 00:01:57,520 +and useful if you want to get like an + +45 +00:01:55,479 --> 
+45
+00:01:55,479 --> 00:02:00,719
+extra few points of
+
+46
+00:01:57,520 --> 00:02:03,159
+accuracy for anything, basically,
+
+47
+00:02:00,719 --> 00:02:04,039
+because it's a pretty reliable way to
+
+48
+00:02:03,159 --> 00:02:06,960
+get
+
+49
+00:02:04,039 --> 00:02:08,879
+improvements. And there's a bunch of
+
+50
+00:02:06,960 --> 00:02:11,239
+different, kind of related but different,
+
+51
+00:02:08,879 --> 00:02:13,680
+topics that I'm going to talk about
+
+52
+00:02:11,239 --> 00:02:15,519
+today. But anyway, the basic
+
+53
+00:02:13,680 --> 00:02:19,239
+background is that we have many models
+
+54
+00:02:15,519 --> 00:02:22,920
+that exist, and the reason why we have
+
+55
+00:02:19,239 --> 00:02:25,840
+many models that exist is multifold.
+
+56
+00:02:22,920 --> 00:02:28,160
+Number one, we could have different model
+
+57
+00:02:25,840 --> 00:02:30,080
+architectures, and we could also have
+
+58
+00:02:28,160 --> 00:02:34,440
+different initializations of those model
+
+59
+00:02:30,080 --> 00:02:37,879
+architectures. So normally, you know, if
+
+60
+00:02:34,440 --> 00:02:40,319
+we do initialization, we will
+
+61
+00:02:37,879 --> 00:02:42,360
+initialize our model architecture, like
+
+62
+00:02:40,319 --> 00:02:44,680
+let's say we initialize a Llama
+
+63
+00:02:42,360 --> 00:02:45,920
+architecture, we start out with random
+
+64
+00:02:44,680 --> 00:02:49,319
+7B
+
+65
+00:02:45,920 --> 00:02:52,879
+parameters, and then we train and we get
+
+66
+00:02:49,319 --> 00:02:53,840
+Llama 7B from our pre-training, or
+
+67
+00:02:52,879 --> 00:02:57,280
+Llama
+
+68
+00:02:53,840 --> 00:02:58,599
+2 7B. We might initialize another model,
+
+69
+00:02:57,280 --> 00:03:00,599
+this could be, you know, the same
+
+70
+00:02:58,599 --> 00:03:02,360
+architecture or a different architecture,
+
+71
+00:03:00,599 --> 00:03:04,840
+train it on the same data or different
+
+72
+00:03:02,360 --> 00:03:07,000
+data, and get something like Mistral,
+
+73
+00:03:04,840 --> 00:03:08,599
+Mistral 7B in this case. Actually, maybe
+
+74
+00:03:07,000 --> 00:03:10,080
+I should have indicated that
+
+75
+00:03:08,599 --> 00:03:11,680
+these are different architectures, but
+
+76
+00:03:10,080 --> 00:03:13,879
+you know, we get a different pre-trained
+
+77
+00:03:11,680 --> 00:03:15,599
+model. And of course we could also
+
+78
+00:03:13,879 --> 00:03:18,640
+make it bigger or smaller or whatever
+
+79
+00:03:15,599 --> 00:03:21,720
+else, and then we get Llama 2 70B over
+
+80
+00:03:18,640 --> 00:03:23,519
+here. And then after we do that, there's a
+
+81
+00:03:21,720 --> 00:03:25,319
+lot of fine-tuning that goes on
+
+82
+00:03:23,519 --> 00:03:29,360
+according to different strategies, so we
+
+83
+00:03:25,319 --> 00:03:32,640
+have, you know, Llama 2 7B Instruct,
+
+84
+00:03:29,360 --> 00:03:37,760
+Vicuna 7B version
+
+85
+00:03:32,640 --> 00:03:41,000
+1.5, Mistral 7B Instruct, Nous
+
+86
+00:03:37,760 --> 00:03:45,239
+Hermes 2 Mistral 7B, or Llama 2 70B
+
+87
+00:03:41,000 --> 00:03:47,239
+Instruct. So we have a variety of
+
+88
+00:03:45,239 --> 00:03:49,400
+architectures, a variety of random
+
+89
+00:03:47,239 --> 00:03:51,480
+initializations of those architectures, a
+
+90
+00:03:49,400 --> 00:03:54,799
+variety of pre-trained models due to
+
+91
+00:03:51,480 --> 00:03:57,439
+pre-training data, or base models, and
+
+92
+00:03:54,799 --> 00:03:58,920
+then a variety of fine-tuned models. And
+
+93
+00:03:57,439 --> 00:04:01,120
+so we have this kind of like branching
+
+94
+00:03:58,920 --> 00:04:02,959
+tree, basically.
+
+95
+00:04:01,120 --> 00:04:04,319
+The reason why this is important is
+
+96
+00:04:02,959 --> 00:04:06,680
+because when we're combining multiple
+
+97
+00:04:04,319 --> 00:04:08,400
+models together, some of the methods are
+
+98
+00:04:06,680 --> 00:04:09,959
+applicable to completely different
+
+99
+00:04:08,400 --> 00:04:12,439
+models, some of the methods are only
+
+100
+00:04:09,959 --> 00:04:15,000
+applicable to models that share the same
+
+101
+00:04:12,439 --> 00:04:16,720
+architecture, and some of them are only
+
+102
+00:04:15,000 --> 00:04:19,199
+applicable to models that share the same
+
+103
+00:04:16,720 --> 00:04:20,959
+initialization and training trajectory,
+
+104
+00:04:19,199 --> 00:04:23,680
+and so I'll try to distinguish between
+
+105
+00:04:20,959 --> 00:04:23,680
+those as we go
+
+106
+00:04:24,040 --> 00:04:27,919
+forward.
+
+107
+00:04:25,560 --> 00:04:29,960
+Cool, so the first thing I'll talk
+
+108
+00:04:27,919 --> 00:04:32,600
+about is model ensembling, and
+
+109
+00:04:29,960 --> 00:04:34,320
+ensembling is a very general
+
+110
+00:04:32,600 --> 00:04:37,600
+technique that you can use in a lot of
+
+111
+00:04:34,320 --> 00:04:39,360
+different ways, but it has its
+
+112
+00:04:37,600 --> 00:04:43,039
+disadvantages as
+
+113
+00:04:39,360 --> 00:04:47,199
+well. So basically, ensembling is combining
+
+114
+00:04:43,039 --> 00:04:50,320
+the predictions from multiple models,
+
+115
+00:04:47,199 --> 00:04:52,400
+and the easiest way to do this, ignore
+
+116
+00:04:50,320 --> 00:04:53,800
+the LSTM here, this is just any sequence
+
+117
+00:04:52,400 --> 00:04:56,320
+modeling thing, it's because the slides
+
+118
+00:04:53,800 --> 00:05:00,120
+are old, but let's say this is a
+
+119
+00:04:56,320 --> 00:05:03,360
+Transformer: it is calculating the
+
+120
+00:05:00,120 --> 00:05:05,600
+current decoder state and you make a
+
+121
+00:05:03,360 --> 00:05:07,600
+prediction, and this one is calculating a
+
+122
+00:05:05,600 --> 00:05:09,199
+current decoder state and
+
+123
+00:05:07,600 --> 00:05:11,560
+making a
+
+124
+00:05:09,199 --> 00:05:13,039
+prediction, and based on some combination
+
+125
+00:05:11,560 --> 00:05:17,120
+of the two predictions you decide what
+
+126
+00:05:13,039 --> 00:05:17,120
+you actually want to output at the next
+
+127
+00:05:17,680 --> 00:05:23,840
+step. So why would we want to do this?
+
+128
+00:05:22,080 --> 00:05:25,880
+Does anyone have any ideas why we want
+
+129
+00:05:23,840 --> 00:05:28,639
+to use two models instead of using one
+
+130
+00:05:25,880 --> 00:05:31,639
+model, or just using the best
+
+131
+00:05:28,639 --> 00:05:31,639
+model?
+
+132
+00:05:32,319 --> 00:05:36,440
+Or maybe in what situations we would
+
+133
+00:05:34,520 --> 00:05:39,440
+want to do
+
+134
+00:05:36,440 --> 00:05:39,440
+this?
+
+135
+00:05:45,400 --> 00:05:50,319
+Yeah, and what's the advantage of
+
+136
+00:05:47,960 --> 00:05:50,319
+doing
+
+137
+00:05:51,600 --> 00:05:57,000
+that? Yeah, it reduces the bias, kind of,
+
+138
+00:05:54,800 --> 00:05:57,000
+yeah.
+
+139
+00:05:58,639 --> 00:06:01,639
+Sure.
+
+140
+00:06:28,560 --> 00:06:31,560
+Mm.
+
+141
+00:06:35,400 --> 00:06:40,360
+Yeah, so I'll repeat all of these, I
+
+142
+00:06:38,599 --> 00:06:43,960
+think all of these are correct. So number
+
+143
+00:06:40,360 --> 00:06:47,479
+one, it reduces the bias caused by
+
+144
+00:06:43,960 --> 00:06:49,199
+a single model. Number two, it's
+
+145
+00:06:47,479 --> 00:06:52,199
+kind of like a Bayesian perspective, which
+
+146
+00:06:49,199 --> 00:06:54,000
+I'll talk about in a second. And then
+
+147
+00:06:52,199 --> 00:06:56,039
+number three, we have different models,
+
+148
+00:06:54,000 --> 00:06:58,520
+and models are better at some things and
+
+149
+00:06:56,039 --> 00:07:00,400
+worse at other things.
+
+150
+00:06:58,520 --> 00:07:02,720
+Um,
+
+151
+00:07:00,400 --> 00:07:05,960
+so talking about the better at some
+
+152
+00:07:02,720 --> 00:07:08,319
+things and worse at other things, the
+
+153
+00:07:05,960 --> 00:07:10,960
+basic idea behind ensembling is that the
+
+154
+00:07:08,319 --> 00:07:14,240
+errors that models make tend to
+
+155
+00:07:10,960 --> 00:07:15,840
+not be consistent, they tend to not be
+
+156
+00:07:14,240 --> 00:07:21,520
+as consistent as when the model is
+
+157
+00:07:15,840 --> 00:07:24,800
+getting it correct. So we might have,
+
+158
+00:07:21,520 --> 00:07:26,160
+we might have one model that says,
+
+159
+00:07:24,800 --> 00:07:28,199
+like, let's say we just have really
+
+160
+00:07:26,160 --> 00:07:30,680
+really bad models, this is kind of a
+
+161
+00:07:28,199 --> 00:07:31,720
+really
+
+162
+00:07:30,680 --> 00:07:35,960
+obvious
+
+163
+00:07:31,720 --> 00:07:38,440
+example, but we have like "the dog
+
+164
+00:07:35,960 --> 00:07:42,639
+barks" and then
+
+165
+00:07:38,440 --> 00:07:46,039
+"runs" and then "dives" or something like
+
+166
+00:07:42,639 --> 00:07:49,000
+that, and we have one model that
+
+167
+00:07:46,039 --> 00:07:50,560
+just had tons of stuff about diving in
+
+168
+00:07:49,000 --> 00:07:52,120
+its training data, another model that had
+
+169
+00:07:50,560 --> 00:07:54,240
+tons of stuff about running in its
+
+170
+00:07:52,120 --> 00:07:56,560
+training data, or marathons or
+
+171
+00:07:54,240 --> 00:08:00,039
+something, in its training data. So we'll get
+
+172
+00:07:56,560 --> 00:08:01,800
+model one, and model one will give
+
+173
+00:08:00,039 --> 00:08:06,240
+like a probability of like
+
+174
+00:08:01,800 --> 00:08:08,280
+0.3, maybe 0.4, and
+
+175
+00:08:06,240 --> 00:08:10,360
+0.05, and then we'll have another one
+
+176
+00:08:08,280 --> 00:08:13,039
+over here that's like
+
+177
+00:08:10,360 --> 00:08:17,319
+0.32,
+
+178
+00:08:13,039 --> 00:08:19,759
+0.41 and 0... sorry,
+
+179
+00:08:17,319 --> 00:08:23,039
+0.05 and
+
+180
+00:08:19,759 --> 00:08:25,759
+0.41, or something like this. And so when
+
+181
+00:08:23,039 --> 00:08:27,639
+you average the two together, you tend to
+
+182
+00:08:25,759 --> 00:08:29,240
+get the right answer more often, because
+
+183
+00:08:27,639 --> 00:08:31,720
+the mistakes that they make tend
+
+184
+00:08:29,240 --> 00:08:33,479
+to be less correlated than
+
+185
+00:08:31,720 --> 00:08:35,880
+the cases where they get it right. And of course it's not
+
+186
+00:08:33,479 --> 00:08:38,200
+perfect, because the ensembled models are not
+
+187
+00:08:35,880 --> 00:08:39,880
+perfect, but this is a general tendency
+
+188
+00:08:38,200 --> 00:08:42,240
+that we see a lot in
+
+189
+00:08:39,880 --> 00:08:45,959
+models.
+
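[Editor's note: a minimal sketch of the averaging intuition above. The three candidate words and the probabilities are the made-up board numbers from the lecture; "barks" stands in for the answer both models consider plausible.]

```python
# Two deliberately bad models whose errors are not correlated:
# each one overrates the word it saw most in training.
candidates = ["barks", "runs", "dives"]
model1 = [0.30, 0.40, 0.05]  # saw lots of running/marathon data
model2 = [0.32, 0.05, 0.41]  # saw lots of diving data

# Averaging the two distributions smooths over their idiosyncrasies.
ensemble = [(p1 + p2) / 2 for p1, p2 in zip(model1, model2)]
print(dict(zip(candidates, ensemble)))
# {'barks': 0.31, 'runs': 0.225, 'dives': 0.23} -> the ensemble picks "barks",
# even though each individual model preferred its own idiosyncratic favorite.
```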
+190
+00:08:42,240 --> 00:08:47,720
+And because of this, it kind
+
+191
+00:08:45,959 --> 00:08:52,320
+of smooths over the idiosyncrasies of
+
+192
+00:08:47,720 --> 00:08:54,800
+the models. You can even just ensemble
+
+193
+00:08:52,320 --> 00:08:58,959
+models from different checkpoints, and
+
+194
+00:08:54,800 --> 00:08:58,959
+that still gives you improvements. And so
+
+195
+00:08:57,519 --> 00:09:00,560
+when you ensemble models from different
+
+196
+00:08:58,959 --> 00:09:02,600
+checkpoints, it's basically just what
+
+197
+00:09:00,560 --> 00:09:05,920
+data did they see most recently, and that
+
+198
+00:09:02,600 --> 00:09:07,839
+also smooths over, you know, the fact
+
+199
+00:09:05,920 --> 00:09:10,600
+that like this model happened to see
+
+200
+00:09:07,839 --> 00:09:13,000
+some data more recently, and so it's less,
+
+201
+00:09:10,600 --> 00:09:16,120
+you know, it's less biased towards doing
+
+202
+00:09:13,000 --> 00:09:18,440
+that. So this is a pretty effective
+
+203
+00:09:16,120 --> 00:09:20,079
+method, this is one of the few methods
+
+204
+00:09:18,440 --> 00:09:21,959
+that I know is going to improve my
+
+205
+00:09:20,079 --> 00:09:25,120
+accuracy almost every time. Like, there's
+
+206
+00:09:21,959 --> 00:09:27,880
+a bunch of methods that you can apply,
+
+207
+00:09:25,120 --> 00:09:29,680
+and with ensembling it's very rare for me
+
+208
+00:09:27,880 --> 00:09:31,959
+to ensemble two models together and not get
+
+209
+00:09:29,680 --> 00:09:34,839
+a boost in accuracy in some way, so it's
+
+210
+00:09:31,959 --> 00:09:34,839
+a good thing to
+
+211
+00:09:35,600 --> 00:09:41,040
+try. There's two main ways to combine
+
+212
+00:09:38,680 --> 00:09:42,560
+models together, and both of them are
+
+213
+00:09:41,040 --> 00:09:45,800
+useful in different
+
+214
+00:09:42,560 --> 00:09:48,079
+situations. The first one is linear
+
+215
+00:09:45,800 --> 00:09:49,600
+interpolation, and when you do linear
+
+216
+00:09:48,079 --> 00:09:51,240
+interpolation, basically what you're
+
+217
+00:09:49,600 --> 00:09:53,720
+doing is you're taking the weighted
+
+218
+00:09:51,240 --> 00:09:56,839
+average of model
+
+219
+00:09:53,720 --> 00:10:00,360
+probabilities, and the way that looks
+
+220
+00:09:56,839 --> 00:10:04,040
+mathematically is like this. This is a
+
+221
+00:10:00,360 --> 00:10:05,680
+probability according to the model M, so
+
+222
+00:10:04,040 --> 00:10:08,000
+this is just, you know, the probability of
+
+223
+00:10:05,680 --> 00:10:11,720
+the next token according to model M, and this
+
+224
+00:10:08,000 --> 00:10:13,200
+is the probability of selecting model M.
+
+225
+00:10:11,720 --> 00:10:18,040
+So, you talked a little bit about the
+
+226
+00:10:13,200 --> 00:10:19,920
+Bayesian approach to this, and this is
+
+227
+00:10:18,040 --> 00:10:23,519
+basically saying, what is the probability
+
+228
+00:10:19,920 --> 00:10:26,519
+that the parameters of model M
+
+229
+00:10:23,519 --> 00:10:30,320
+are the ones that we want to be choosing
+
+230
+00:10:26,519 --> 00:10:32,680
+at this particular time step, and
+
+231
+00:10:30,320 --> 00:10:34,640
+then we will calculate this, and
+
+232
+00:10:32,680 --> 00:10:38,120
+so then you take the sum over this, and
+
+233
+00:10:34,640 --> 00:10:38,120
+this gives you the next
+
+234
+00:10:39,560 --> 00:10:44,800
+probability. For the second term, you can
+
+235
+00:10:42,639 --> 00:10:47,120
+do this in two ways. The most common way
+
+236
+00:10:44,800 --> 00:10:51,800
+to do this is just to have this be a
+
+237
+00:10:47,120 --> 00:10:55,279
+constant, so you basically
+
+238
+00:10:51,800 --> 00:10:55,279
+define mixture
+
+239
+00:10:55,920 --> 00:11:01,240
+weights,
+
+240
+00:11:08,480 --> 00:11:13,480
+where the sum of the mixture weights is
+
+241
+00:11:10,760 --> 00:11:16,160
+equal to one, and this is always between
+
+242
+00:11:13,480 --> 00:11:18,639
+zero and one. And so if you do this, then
+
+243
+00:11:16,160 --> 00:11:21,000
+this is just constant, and you can
+
+244
+00:11:18,639 --> 00:11:23,519
+interpolate them together with constant weights. But
+
+245
+00:11:21,000 --> 00:11:25,680
+you can also actually explicitly model
+
+246
+00:11:23,519 --> 00:11:27,240
+this probability and say, oh, I'm
+
+247
+00:11:25,680 --> 00:11:30,279
+currently in a situation where I really
+
+248
+00:11:27,240 --> 00:11:31,880
+think model M will do a good job of,
+
+249
+00:11:30,279 --> 00:11:33,440
+you know, predicting the probability, so I
+
+250
+00:11:31,880 --> 00:11:36,160
+want to put most of my probability on
+
+251
+00:11:33,440 --> 00:11:39,000
+model M. So you can actually learn this
+
+252
+00:11:36,160 --> 00:11:40,079
+dynamically as well. And if you
+
+253
+00:11:39,000 --> 00:11:44,360
+have...
+
+254
+00:11:40,079 --> 00:11:45,920
+this actually is rather practical
+
+255
+00:11:44,360 --> 00:11:47,120
+and easy to do, because what you can do
+
+256
+00:11:45,920 --> 00:11:48,920
+is you can just calculate the
+
+257
+00:11:47,120 --> 00:11:51,399
+probability according to each model at
+
+258
+00:11:48,920 --> 00:11:53,120
+each time step, and train this model
+
+259
+00:11:51,399 --> 00:11:55,519
+separately without loading those models
+
+260
+00:11:53,120 --> 00:11:59,399
+into memory at the time of training.
+
+261
+00:11:55,519 --> 00:12:00,959
+So yeah, this is something
+
+262
+00:11:59,399 --> 00:12:04,800
+you can do as
+
+263
+00:12:00,959 --> 00:12:04,800
+well. Any questions about
+
+264
+00:12:06,680 --> 00:12:11,920
+this?
+
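[Editor's note: a minimal PyTorch sketch of the linear interpolation just described, i.e. P(x) = sum_m lambda_m * P_m(x), with fixed mixture weights that are between zero and one and sum to one. The two toy distributions and the weights are hypothetical.]

```python
import torch

def linear_interpolate(probs_per_model, mixture_weights):
    """Weighted average of per-model next-token distributions.
    probs_per_model: list of [vocab] tensors, each a valid distribution.
    mixture_weights: non-negative floats summing to one."""
    stacked = torch.stack(probs_per_model)            # [n_models, vocab]
    lam = torch.tensor(mixture_weights).unsqueeze(1)  # [n_models, 1]
    return (lam * stacked).sum(dim=0)                 # still sums to one

p1 = torch.tensor([0.60, 0.30, 0.10])  # hypothetical model 1
p2 = torch.tensor([0.50, 0.10, 0.40])  # hypothetical model 2
print(linear_interpolate([p1, p2], [0.7, 0.3]))  # tensor([0.5700, 0.2400, 0.1900])
```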
+265
+00:12:08,519 --> 00:12:14,000
+Okay, cool. So the other option is log-
+
+266
+00:12:11,920 --> 00:12:15,800
+linear interpolation. And so in linear
+
+267
+00:12:14,000 --> 00:12:18,680
+interpolation you're taking a linear
+
+268
+00:12:15,800 --> 00:12:22,040
+combination of the probabilities of each
+
+269
+00:12:18,680 --> 00:12:24,959
+model; in log-linear interpolation you're
+
+270
+00:12:22,040 --> 00:12:26,079
+combining together the log probabilities
+
+271
+00:12:24,959 --> 00:12:29,519
+of each
+
+272
+00:12:26,079 --> 00:12:32,639
+model and then renormalizing, so that
+
+273
+00:12:29,519 --> 00:12:34,920
+you get an actual
+
+274
+00:12:32,639 --> 00:12:37,760
+probabilistic output. So basically, what
+
+275
+00:12:34,920 --> 00:12:40,720
+you do is you have this interpolation
+
+276
+00:12:37,760 --> 00:12:44,040
+coefficient like I had before, but you're
+
+277
+00:12:40,720 --> 00:12:44,040
+combining together the log
+
+278
+00:12:44,639 --> 00:12:49,639
+probabilities, and so here we need to
+
+279
+00:12:47,680 --> 00:12:51,320
+take the
+
+280
+00:12:49,639 --> 00:12:53,760
+softmax.
+
+281
+00:12:51,320 --> 00:12:55,760
+Thinking back here, I didn't take the
+
+282
+00:12:53,760 --> 00:12:58,120
+softmax. Does anyone have an idea why I
+
+283
+00:12:55,760 --> 00:13:02,000
+didn't take the
+
+284
+00:12:58,120 --> 00:13:02,000
+softmax, or why I didn't need
+
+285
+00:13:08,160 --> 00:13:12,199
+to, and why I need to
+
+286
+00:13:21,600 --> 00:13:27,680
+here? Yeah.
+
+287
+00:13:23,639 --> 00:13:30,440
+So, this probability is guaranteed to be between zero and
+
+288
+00:13:27,680 --> 00:13:32,240
+one and add up to one, and this probability
+
+289
+00:13:30,440 --> 00:13:33,760
+is also guaranteed to be between zero and one
+
+290
+00:13:32,240 --> 00:13:35,680
+and add up to one, and then when you
+
+291
+00:13:33,760 --> 00:13:37,120
+multiply those together, you can do a
+
+292
+00:13:35,680 --> 00:13:39,160
+little bit of math and demonstrate that
+
+293
+00:13:37,120 --> 00:13:41,440
+the resulting thing will be between zero
+
+294
+00:13:39,160 --> 00:13:42,839
+and one and add up to one. That's not the
+
+295
+00:13:41,440 --> 00:13:44,399
+case anymore when we start doing things
+
+296
+00:13:42,839 --> 00:13:47,639
+in log space, because it's just not a
+
+297
+00:13:44,399 --> 00:13:50,160
+linear function anymore. So you need to
+
+298
+00:13:47,639 --> 00:13:51,959
+renormalize like this. Luckily this is
+
+299
+00:13:50,160 --> 00:13:54,920
+super easy; like anything else you do in
+
+300
+00:13:51,959 --> 00:13:56,959
+PyTorch, you just add things together
+
+301
+00:13:54,920 --> 00:13:59,320
+and take a softmax and you'll
+
+302
+00:13:56,959 --> 00:14:02,519
+get an output. But you do need to do that,
+
+303
+00:13:59,320 --> 00:14:05,279
+otherwise you're going to get something
+
+304
+00:14:02,519 --> 00:14:07,279
+weird. The interpolation coefficient
+
+305
+00:14:05,279 --> 00:14:09,639
+here also can be set to a constant, so
+
+306
+00:14:07,279 --> 00:14:12,759
+you could learn it kind of
+
+307
+00:14:09,639 --> 00:14:15,320
+dynamically, or it could be
+
+308
+00:14:12,759 --> 00:14:17,720
+a constant. Cool, and these actually have
+
+309
+00:14:15,320 --> 00:14:19,639
+different meanings. Oh, sorry, go ahead.
+
+310
+00:14:17,720 --> 00:14:23,880
+[Student asks
+
+311
+00:14:19,639 --> 00:14:26,759
+a question.] Yeah, yeah, so basically the
+
+312
+00:14:23,880 --> 00:14:29,880
+way you would do this is you
+
+313
+00:14:26,759 --> 00:14:32,399
+would have either
+
+314
+00:14:29,880 --> 00:14:33,920
+the same model, you would either take
+
+315
+00:14:32,399 --> 00:14:35,279
+representations from one of these
+
+316
+00:14:33,920 --> 00:14:37,480
+language models, or you would take
+
+317
+00:14:35,279 --> 00:14:38,440
+representations from another model, and
+
+318
+00:14:37,480 --> 00:14:41,639
+you would
+
+319
+00:14:38,440 --> 00:14:43,959
+just have a model that
+
+320
+00:14:41,639 --> 00:14:46,480
+predicts what this interpolation
+
+321
+00:14:43,959 --> 00:14:48,279
+coefficient would be, and the
+
+322
+00:14:46,480 --> 00:14:49,720
+optimization objective for that
+
+323
+00:14:48,279 --> 00:14:52,759
+interpolation coefficient is just
+
+324
+00:14:49,720 --> 00:14:56,120
+maximizing the probability,
+
+325
+00:14:52,759 --> 00:14:59,600
+whatever. So this could also be good,
+
+326
+00:14:56,120 --> 00:15:01,839
+because this interpolation coefficient,
+
+327
+00:14:59,600 --> 00:15:07,160
+like, let's say you're interpolating
+
+328
+00:15:01,839 --> 00:15:09,399
+two models together, it has one degree of
+
+329
+00:15:07,160 --> 00:15:13,320
+freedom at each time step, right, because
+
+330
+00:15:09,399 --> 00:15:15,320
+you're only predicting a probability.
+
+331
+00:15:13,320 --> 00:15:17,839
+If you have five models,
+
+332
+00:15:15,320 --> 00:15:20,240
+you basically would be doing
+
+333
+00:15:17,839 --> 00:15:24,199
+a softmax over
+
+334
+00:15:20,240 --> 00:15:25,519
+five outputs, and that's a lot fewer,
+
+335
+00:15:24,199 --> 00:15:27,600
+that's a lot fewer than the whole
+
+336
+00:15:25,519 --> 00:15:29,880
+vocabulary, right? And so
+
+337
+00:15:27,600 --> 00:15:31,639
+learning a good interpolation
+
+338
+00:15:29,880 --> 00:15:34,160
+coefficient is relatively easy compared
+
+339
+00:15:31,639 --> 00:15:35,800
+to learning what word to predict next.
+
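[Editor's note: a matching sketch of log-linear interpolation with the renormalization step described above: sum the weighted log-probabilities, then take a softmax so the output is a valid distribution again. The coefficients here are constants, but they could equally come from a small learned gating model.]

```python
import torch
import torch.nn.functional as F

def log_linear_interpolate(logprobs_per_model, coeffs):
    """Weighted sum of log-probabilities, renormalized with a softmax.
    Unlike the linear case, the raw combination is not a distribution,
    so the softmax at the end is required."""
    stacked = torch.stack(logprobs_per_model)  # [n_models, vocab]
    lam = torch.tensor(coeffs).unsqueeze(1)    # [n_models, 1]
    combined = (lam * stacked).sum(dim=0)      # unnormalized log-scores
    return F.softmax(combined, dim=-1)

p1 = torch.tensor([0.60, 0.30, 0.10])
p2 = torch.tensor([0.50, 0.10, 0.40])
print(log_linear_interpolate([p1.log(), p2.log()], [0.5, 0.5]))
```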
+340
+00:15:34,160 --> 00:15:36,880
+And because of this, you could actually
+
+341
+00:15:35,800 --> 00:15:39,759
+tune
+
+342
+00:15:36,880 --> 00:15:42,880
+this... sorry, you could tune this
+
+343
+00:15:39,759 --> 00:15:44,600
+probability on a very small data set, and
+
+344
+00:15:42,880 --> 00:15:46,959
+you could even have it be context-
+
+345
+00:15:44,600 --> 00:15:48,480
+independent, so you could just be, you
+
+346
+00:15:46,959 --> 00:15:51,399
+know,
+
+347
+00:15:48,480 --> 00:15:55,880
+calculating literally five
+
+348
+00:15:51,399 --> 00:15:57,399
+parameters here. And so because of
+
+349
+00:15:55,880 --> 00:16:00,319
+that, like, let's say you have a special
+
+350
+00:15:57,399 --> 00:16:02,639
+domain or a special task where you have
+
+351
+00:16:00,319 --> 00:16:04,920
+like 50 training examples or something
+
+352
+00:16:02,639 --> 00:16:07,399
+like that, or, you know, 100 training
+
+353
+00:16:04,920 --> 00:16:08,959
+examples, you can learn this
+
+354
+00:16:07,399 --> 00:16:12,480
+interpolation coefficient very
+
+355
+00:16:08,959 --> 00:16:15,880
+effectively on just a very
+
+356
+00:16:12,480 --> 00:16:18,120
+small number of training examples. And
+
+357
+00:16:15,880 --> 00:16:20,000
+it could be very useful, because
+
+358
+00:16:18,120 --> 00:16:23,920
+let's say you have a special-domain
+
+359
+00:16:20,000 --> 00:16:25,639
+medical language model that's 1.3
+
+360
+00:16:23,920 --> 00:16:27,759
+billion parameters that you trained
+
+361
+00:16:25,639 --> 00:16:29,639
+yourself, and then you have a 70 billion
+
+362
+00:16:27,759 --> 00:16:31,079
+parameter language model
+
+363
+00:16:29,639 --> 00:16:33,680
+that's like really good at modeling
+
+364
+00:16:31,079 --> 00:16:35,399
+general English. So then you could
+
+365
+00:16:33,680 --> 00:16:39,120
+learn the interpolation coefficient
+
+366
+00:16:35,399 --> 00:16:40,600
+between those two such that the large
+
+367
+00:16:39,120 --> 00:16:41,800
+general-purpose language model will be
+
+368
+00:16:40,600 --> 00:16:43,959
+generating all of the kind of
+
+369
+00:16:41,800 --> 00:16:46,360
+grammatical stuff, but whenever you
+
+370
+00:16:43,959 --> 00:16:48,480
+switch over to modeling technical terms
+
+371
+00:16:46,360 --> 00:16:50,040
+from the medical domain, then it learns
+
+372
+00:16:48,480 --> 00:16:52,480
+to upweight the medical language model
+
+373
+00:16:50,040 --> 00:16:54,199
+or something. So this can be quite,
+
+374
+00:16:52,480 --> 00:16:57,000
+this can be quite effective if you have
+
+375
+00:16:54,199 --> 00:17:00,839
+a limited amount of data that you want
+
+376
+00:16:57,000 --> 00:17:00,839
+to tune this on.
+
+377
+00:17:01,240 --> 00:17:05,600
+Any other questions about that?
+
+378
+00:17:09,079 --> 00:17:14,880
+Yeah, I'm just gonna talk about that
+
+379
+00:17:11,760 --> 00:17:17,640
+next. So, linear versus log-linear. You can
+
+380
+00:17:14,880 --> 00:17:20,880
+actually think of this in terms of logic, and
+
+381
+00:17:17,640 --> 00:17:23,640
+what I mean by that is, linear is kind
+
+382
+00:17:20,880 --> 00:17:26,640
+of like a logical OR: it tries to come up
+
+383
+00:17:23,640 --> 00:17:29,600
+with examples where either one of the
+
+384
+00:17:26,640 --> 00:17:31,679
+two assigns a high probability. So we
+
+385
+00:17:29,600 --> 00:17:36,200
+have the example of, like, "bark,"
+
+386
+00:17:31,679 --> 00:17:36,200
+"run"... "bark," "run,"
+
+387
+00:17:55,640 --> 00:18:03,840
+"dives." So if we take the average of these
+
+388
+00:18:00,360 --> 00:18:03,840
+two in linear
+
+389
+00:18:04,120 --> 00:18:10,240
+space, this would be
+
+390
+00:18:07,159 --> 00:18:13,679
+0.2, this would be
+
+391
+00:18:10,240 --> 00:18:17,240
+0.26, and this would
+
+392
+00:18:13,679 --> 00:18:17,240
+be
+
+393
+00:18:17,400 --> 00:18:26,280
+0.21. And so a linear combination
+
+394
+00:18:21,480 --> 00:18:28,600
+between the two will find "run" to be the
+
+395
+00:18:26,280 --> 00:18:30,600
+highest-scoring one, because on the left
+
+396
+00:18:28,600 --> 00:18:32,280
+side we have one model that really likes
+
+397
+00:18:30,600 --> 00:18:33,159
+this output, and we have another model
+
+398
+00:18:32,280 --> 00:18:35,159
+that
+
+399
+00:18:33,159 --> 00:18:39,280
+doesn't.
+
+400
+00:18:35,159 --> 00:18:42,159
+This can be good for using
+
+401
+00:18:39,280 --> 00:18:44,440
+models that capture different traits,
+
+402
+00:18:42,159 --> 00:18:47,679
+or it can also be useful if, for
+
+403
+00:18:44,440 --> 00:18:49,840
+example, you have a small
+
+404
+00:18:47,679 --> 00:18:52,320
+model that really
+
+405
+00:18:49,840 --> 00:18:53,840
+captures, like, very specific vocabulary,
+
+406
+00:18:52,320 --> 00:18:55,520
+and you want to upweight that specific
+
+407
+00:18:53,840 --> 00:18:56,799
+vocabulary that gets a really low
+
+408
+00:18:55,520 --> 00:18:57,720
+probability according to a general-
+
+409
+00:18:56,799 --> 00:19:01,360
+purpose
+
+410
+00:18:57,720 --> 00:19:03,200
+model. This is also necessary when any
+
+411
+00:19:01,360 --> 00:19:04,520
+model can assign zero probabilities, so
+
+412
+00:19:03,200 --> 00:19:06,720
+if you have, like, an example of
+
+413
+00:19:04,520 --> 00:19:10,080
+vocabulary that isn't included in the
+
+414
+00:19:06,720 --> 00:19:11,159
+vocabulary of another model, or
+
+415
+00:19:10,080 --> 00:19:14,280
+you have models with different
+
+416
+00:19:11,159 --> 00:19:17,200
+vocabularies, it's necessary to do this.
+
+417
+00:19:14,280 --> 00:19:19,200
+Log-linear is more like a logical AND:
+
+418
+00:19:17,200 --> 00:19:22,240
+the interpolated model only likes
+
+419
+00:19:19,200 --> 00:19:23,799
+choices where all the models agree, and
+
+420
+00:19:22,240 --> 00:19:25,640
+this is particularly good when you want
+
+421
+00:19:23,799 --> 00:19:27,440
+to restrict possible answers, like you
+
+422
+00:19:25,640 --> 00:19:29,280
+want to have one model be able to say, no,
+
+423
+00:19:27,440 --> 00:19:32,080
+I really don't like this, so never output
+
+424
+00:19:29,280 --> 00:19:34,200
+it. So, for example, if you wanted to
+
+425
+00:19:32,080 --> 00:19:37,360
+train a model that you knew was very
+
+426
+00:19:34,200 --> 00:19:38,919
+averse to toxic language, and prevent
+
+427
+00:19:37,360 --> 00:19:42,600
+the model from outputting toxic language,
+
+428
+00:19:38,919 --> 00:19:45,200
+you could use log-linear interpolation. So, I
+
+429
+00:19:42,600 --> 00:19:47,559
+can't unfortunately calculate logs
+
+430
+00:19:45,200 --> 00:19:50,080
+and exponents in my head well enough to
+
+431
+00:19:47,559 --> 00:19:51,600
+decide this, but I'm sure that a
+
+432
+00:19:50,080 --> 00:19:53,840
+linear
+
+433
+00:19:51,600 --> 00:19:56,840
+model, the linear model, would pick the
+
+434
+00:19:53,840 --> 00:19:59,600
+first one here, and the log-linear
+
+435
+00:19:56,840 --> 00:20:01,679
+model would pick the second one, because
+
+436
+00:19:59,600 --> 00:20:05,640
+the second one has a very low score here,
+
+437
+00:20:01,679 --> 00:20:08,640
+so that would be downweighted
+
+438
+00:20:05,640 --> 00:20:08,640
+by it.
+
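[Editor's note: a small check of the OR-versus-AND contrast, with hypothetical numbers chosen so the two schemes disagree: candidate 0 is loved by one model and hated by the other, candidate 1 is acceptable to both.]

```python
import math

p1 = [0.80, 0.15, 0.05]  # model 1 strongly prefers candidate 0
p2 = [0.01, 0.40, 0.59]  # model 2 assigns candidate 0 almost no probability

linear = [(a + b) / 2 for a, b in zip(p1, p2)]
geo = [math.sqrt(a * b) for a, b in zip(p1, p2)]  # equal-weight log-linear
log_linear = [g / sum(geo) for g in geo]          # renormalize

print(linear)      # [0.405, 0.275, 0.32]  -> picks candidate 0 (logical OR)
print(log_linear)  # ~[0.18, 0.48, 0.34]   -> picks candidate 1 (logical AND)
```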
+439
+00:20:16,919 --> 00:20:20,640
+Yeah, yeah, so,
+
+440
+00:20:25,840 --> 00:20:31,000
+if... yeah, and if there's any chance of
+
+441
+00:20:28,760 --> 00:20:34,159
+assigning zero probability according to
+
+442
+00:20:31,000 --> 00:20:36,520
+a language model, then really you can't
+
+443
+00:20:34,159 --> 00:20:38,200
+even test that language model on
+
+444
+00:20:36,520 --> 00:20:42,120
+that test set.
+
+445
+00:20:38,200 --> 00:20:43,640
+So the issue becomes, like, let's say
+
+446
+00:20:42,120 --> 00:20:45,559
+you have two models with different
+
+447
+00:20:43,640 --> 00:20:47,080
+vocabularies. If you have two models with
+
+448
+00:20:45,559 --> 00:20:49,080
+different vocabularies, it becomes very
+
+449
+00:20:47,080 --> 00:20:50,559
+tricky how to reconcile those two, but
+
+450
+00:20:49,080 --> 00:20:53,440
+you could do linear interpolation
+
+451
+00:20:50,559 --> 00:20:55,200
+between them, like match the
+
+452
+00:20:53,440 --> 00:20:57,559
+output vocabularies that they do have,
+
+453
+00:20:55,200 --> 00:21:00,120
+and then just not worry about the fact
+
+454
+00:20:57,559 --> 00:21:02,760
+that the vocabularies are disjoint,
+
+455
+00:21:00,120 --> 00:21:05,039
+because one will assign a zero
+
+456
+00:21:02,760 --> 00:21:07,280
+probability to those vocabulary items
+
+457
+00:21:05,039 --> 00:21:12,240
+but the other one is fine, so you can
+
+458
+00:21:07,280 --> 00:21:14,919
+just do that. But in general it
+
+459
+00:21:12,240 --> 00:21:16,480
+will be very tricky to try to get two
+
+460
+00:21:14,919 --> 00:21:18,559
+models with different vocabularies to
+
+461
+00:21:16,480 --> 00:21:21,480
+play together nicely, so I would
+
+462
+00:21:18,559 --> 00:21:22,919
+suggest thinking
+
+463
+00:21:21,480 --> 00:21:25,600
+seriously about whether you need to do
+
+464
+00:21:22,919 --> 00:21:31,360
+that or not before you start out. But
+
+465
+00:21:25,600 --> 00:21:31,360
+yeah. Um, yes, are there any
+
+466
+00:21:35,559 --> 00:21:40,960
+other...
+
+467
+00:21:38,039 --> 00:21:43,360
+You could definitely... so the question
+
+468
+00:21:40,960 --> 00:21:45,000
+is, are there any other types of
+
+469
+00:21:43,360 --> 00:21:47,760
+interpolation that have other types of
+
+470
+00:21:45,000 --> 00:21:50,159
+logical components, like XOR or NOR. You
+
+471
+00:21:47,760 --> 00:21:52,840
+could definitely come up with one;
+
+472
+00:21:50,159 --> 00:21:55,440
+I am struggling a little bit to think
+
+473
+00:21:52,840 --> 00:21:57,520
+about when you would want to do that, but
+
+474
+00:21:55,440 --> 00:22:02,840
+I'm sure
+
+475
+00:21:57,520 --> 00:22:05,840
+you could. [Student asks, partially inaudible:
+
+476
+00:22:02,840 --> 00:22:05,840
+what if the errors are correlated?]
+
+477
+00:22:09,120 --> 00:22:14,480
+So, what if the errors are not...
+
+478
+00:22:12,640 --> 00:22:15,919
+what if the errors are correlated. So,
+
+479
+00:22:14,480 --> 00:22:18,200
+think about what happens if the errors
+
+480
+00:22:15,919 --> 00:22:20,000
+are perfectly correlated, which is
+
+481
+00:22:18,200 --> 00:22:25,840
+when you're using the same model in two
+
+482
+00:22:20,000 --> 00:22:25,840
+parts of the ensemble, like on top, so
+
+483
+00:22:27,000 --> 00:22:30,520
+literally these,
+
+484
+00:22:29,159 --> 00:22:32,679
+model one and model two, are the same
+
+485
+00:22:30,520 --> 00:22:36,720
+model. If that's the case, nothing happens;
+
+486
+00:22:32,679 --> 00:22:39,200
+it doesn't get worse. And
+
+487
+00:22:36,720 --> 00:22:43,039
+so, of course, because this is machine
+
+488
+00:22:39,200 --> 00:22:45,080
+learning, there's no guarantee. Like, you
+
+489
+00:22:43,039 --> 00:22:47,559
+know, unless we make some assumptions
+
+490
+00:22:45,080 --> 00:22:49,200
+about the relationship between, like, the
+
+491
+00:22:47,559 --> 00:22:52,279
+training set and the test set, or the
+
+492
+00:22:49,200 --> 00:22:53,760
+models' errors on the test set, you can
+
+493
+00:22:52,279 --> 00:22:57,039
+always do something that will make your
+
+494
+00:22:53,760 --> 00:22:59,240
+accuracy worse. Like, let's say we flip
+
+495
+00:22:57,039 --> 00:23:00,360
+the labels of a binary classifier:
+
+496
+00:22:59,240 --> 00:23:03,120
+no matter what you do, you're going to
+
+497
+00:23:00,360 --> 00:23:06,320
+make your accuracy worse;
+
+498
+00:23:03,120 --> 00:23:09,000
+no matter what, the normal thing you
+
+499
+00:23:06,320 --> 00:23:10,640
+would do, if it
+
+500
+00:23:09,000 --> 00:23:12,480
+would improve accuracy normally, it would
+
+501
+00:23:10,640 --> 00:23:14,760
+decrease your accuracy. But, like, under
+
+502
+00:23:12,480 --> 00:23:16,080
+pretty reasonable assumptions it's
+
+503
+00:23:14,760 --> 00:23:20,400
+mostly going to be the case that errors
+
+504
+00:23:16,080 --> 00:23:22,320
+are decorrelated to some extent,
+
+505
+00:23:20,400 --> 00:23:25,559
+so,
+
+506
+00:23:22,320 --> 00:23:30,440
+yeah, and because of that, ensembling
+
+507
+00:23:25,559 --> 00:23:30,440
+usually helps. Yeah?
+
+508
+00:23:36,120 --> 00:23:42,019
+Um, about which one...
+
+509
+00:23:38,760 --> 00:23:42,019
+[Music]
+
+510
+00:23:53,559 --> 00:24:01,240
+Which one? Let me make sure I didn't mess it
+
+511
+00:23:55,640 --> 00:24:01,240
+up on the slides. Okay, so in my
+
+512
+00:24:06,960 --> 00:24:13,120
+example... yeah, yeah,
+
+513
+00:24:09,640 --> 00:24:13,120
+yeah, sorry about
+
+514
+00:24:14,360 --> 00:24:19,320
+that. Because this is where the
+
+515
+00:24:17,039 --> 00:24:21,840
+average is higher, and then this is the
+
+516
+00:24:19,320 --> 00:24:27,200
+one it would take,
+
+517
+00:24:21,840 --> 00:24:29,039
+you know. Cool, any other
+
+518
+00:24:27,200 --> 00:24:31,840
+questions? Okay,
+
+519
+00:24:29,039 --> 00:24:34,440
+okay, so,
+
+520
+00:24:31,840 --> 00:24:36,320
+another thing I should point out is
+
+521
+00:24:34,440 --> 00:24:39,600
+that we don't
+
+522
+00:24:36,320 --> 00:24:41,840
+necessarily need to use models only as
+
+523
+00:24:39,600 --> 00:24:44,080
+positive evidence. So if you're using log-
+
+524
+00:24:41,840 --> 00:24:46,039
+linear interpolation, actually your
+
+525
+00:24:44,080 --> 00:24:49,919
+interpolation coefficients do not need
+
+526
+00:24:46,039 --> 00:24:52,520
+to be positive, they can also be negative,
+
+527
+00:24:49,919 --> 00:24:55,360
+and you can have things where you
+
+528
+00:24:52,520 --> 00:24:57,840
+penalize the probabilities given by a
+
+529
+00:24:55,360 --> 00:24:59,679
+particular model. And this has actually
+
+530
+00:24:57,840 --> 00:25:01,520
+been used for a long time; it was
+
+531
+00:24:59,679 --> 00:25:04,440
+actually used in machine translation
+
+532
+00:25:01,520 --> 00:25:08,840
+since, like, 2005 or something like
+
+533
+00:25:04,440 --> 00:25:11,480
+this. But the basic idea is that you
+
+534
+00:25:08,840 --> 00:25:13,600
+have some models that serve as negative
+
+535
+00:25:11,480 --> 00:25:15,559
+evidence. So you have kind of a core
+
+536
+00:25:13,600 --> 00:25:17,880
+model, this might be your really strong
+
+537
+00:25:15,559 --> 00:25:21,520
+general-purpose language model; you have
+
+538
+00:25:17,880 --> 00:25:23,080
+a positive model, which is the model
+
+539
+00:25:21,520 --> 00:25:25,240
+that you want to kind of boost up and
+
+540
+00:25:23,080 --> 00:25:27,320
+improve; and a negative model, which you
+
+541
+00:25:25,240 --> 00:25:31,159
+want to
+
+542
+00:25:27,320 --> 00:25:33,679
+decrease. And one example of this is
+
+543
+00:25:31,159 --> 00:25:36,760
+in a paper that we did in
+
+544
+00:25:33,679 --> 00:25:40,159
+2019: the core was a machine
+
+545
+00:25:36,760 --> 00:25:42,960
+translation model, and the negative model
+
+546
+00:25:40,159 --> 00:25:44,880
+is an out-of-domain language model, and
+
+547
+00:25:42,960 --> 00:25:46,960
+the positive model is an in-domain
+
+548
+00:25:44,880 --> 00:25:51,039
+language model. And so the idea behind
+
+549
+00:25:46,960 --> 00:25:53,880
+this is, for a machine translation model,
+
+550
+00:25:51,039 --> 00:25:55,600
+you have to train it on machine
+
+551
+00:25:53,880 --> 00:25:58,320
+translation data, and machine translation
+
+552
+00:25:55,600 --> 00:26:00,640
+data is not very easy to get for
+
+553
+00:25:58,320 --> 00:26:02,360
+particular domains. For example, you
+
+554
+00:26:00,640 --> 00:26:03,880
+might only have machine translation data
+
+555
+00:26:02,360 --> 00:26:06,919
+in the news domain, and you actually want
+
+556
+00:26:03,880 --> 00:26:09,240
+to be doing translation in the
+
+557
+00:26:06,919 --> 00:26:12,720
+medical domain or something. So what you
+
+558
+00:26:09,240 --> 00:26:14,640
+do is you have your positive model here,
+
+559
+00:26:12,720 --> 00:26:17,600
+the core is a machine
+
+560
+00:26:14,640 --> 00:26:19,919
+translation model, this could be a
+
+561
+00:26:17,600 --> 00:26:21,320
+medical
+
+562
+00:26:19,919 --> 00:26:22,919
+domain language model, and this could be
+
+563
+00:26:21,320 --> 00:26:24,360
+a news domain language model, so you're
+
+564
+00:26:22,919 --> 00:26:25,840
+subtracting out the news domain
+
+565
+00:26:24,360 --> 00:26:27,600
+probabilities and adding in medical
+
+566
+00:26:25,840 --> 00:26:30,240
+domain probabilities to move it in that
+
+567
+00:26:27,600 --> 00:26:30,240
+direction.
+
+568
+00:26:30,440 --> 00:26:36,799
+Another example of this is
+
+569
+00:26:32,919 --> 00:26:40,000
+something called DExperts,
+
+570
+00:26:36,799 --> 00:26:43,440
+and the idea here is, you
+
+571
+00:26:40,000 --> 00:26:46,120
+have a strong language model as your
+
+572
+00:26:43,440 --> 00:26:48,320
+core, and then as negative you have a
+
+573
+00:26:46,120 --> 00:26:50,240
+weak toxic language model that was
+
+574
+00:26:48,320 --> 00:26:52,760
+trained on lots of, like, bad text
+
+575
+00:26:50,240 --> 00:26:55,799
+that you don't want to be generating, and
+
+576
+00:26:52,760 --> 00:26:57,159
+the positive is a weak non-toxic
+
+577
+00:26:55,799 --> 00:26:59,279
+language model that was trained on lots
+
+578
+00:26:57,159 --> 00:27:03,200
+of, like, innocuous
+
+579
+00:26:59,279 --> 00:27:04,399
+posts, so that would help you detoxify
+
+580
+00:27:03,200 --> 00:27:06,679
+the outputs of the
+
+581
+00:27:04,399 --> 00:27:09,799
+language model. So there's lots of examples of
+
+582
+00:27:06,679 --> 00:27:09,799
+things like this that you can do.
+
+583
+00:27:10,720 --> 00:27:15,880
+[Student asks a question.]
+
+584
+00:27:12,880 --> 00:27:15,880
+Yeah,
+
+585
+00:27:19,320 --> 00:27:25,880
+yeah. So the positive in the machine
+
+586
+00:27:22,840 --> 00:27:27,679
+translation example, this is
+
+587
+00:27:25,880 --> 00:27:31,760
+a machine translation model where the
+
+588
+00:27:27,679 --> 00:27:34,080
+input is in English and the output
+
+589
+00:27:31,760 --> 00:27:37,880
+is in Japanese, something like
+
+590
+00:27:34,080 --> 00:27:39,679
+that. This is only trained on Japanese,
+
+591
+00:27:37,880 --> 00:27:42,919
+but it's trained on, like, medical
+
+592
+00:27:39,679 --> 00:27:44,440
+Japanese, for example, for the in-domain one.
+
+593
+00:27:42,919 --> 00:27:48,480
+This is a language model that was
+
+594
+00:27:44,440 --> 00:27:50,600
+trained on, like, news-domain Japanese,
+
+595
+00:27:48,480 --> 00:27:54,039
+or it could even literally just be
+
+596
+00:27:50,600 --> 00:27:56,360
+trained on the target side of the machine
+
+597
+00:27:54,039 --> 00:28:00,120
+translation data. So it's trying to subtract out
+
+598
+00:27:56,360 --> 00:28:00,120
+the language modeling component from the translation model.
+
+599
+00:28:03,720 --> 00:28:06,720
+Cool.
+
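[Editor's note: a sketch of the core/positive/negative combination in the style of the DExperts setup described above, applied to log-probabilities with a positive coefficient on the expert and a negative one on the anti-expert. The three distributions and the coefficient alpha are hypothetical.]

```python
import torch
import torch.nn.functional as F

def contrastive_combine(core_lp, pos_lp, neg_lp, alpha=0.5):
    """log P_core + alpha * (log P_pos - log P_neg), then renormalize.
    The negative model effectively gets coefficient -alpha: negative evidence."""
    combined = core_lp + alpha * (pos_lp - neg_lp)
    return F.softmax(combined, dim=-1)

core = torch.tensor([0.50, 0.30, 0.20]).log()  # strong general-purpose model
pos  = torch.tensor([0.30, 0.50, 0.20]).log()  # in-domain / non-toxic expert
neg  = torch.tensor([0.60, 0.10, 0.30]).log()  # out-of-domain / toxic anti-expert
print(contrastive_combine(core, pos, neg))     # mass shifts toward token 1
```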
+600
+00:28:06,880 --> 00:28:11,480
+Okay, so another thing that I should
+
+601
+00:28:09,880 --> 00:28:14,720
+point out, I didn't actually put it on
+
+602
+00:28:11,480 --> 00:28:18,399
+the slides, is there's a lot of other
+
+603
+00:28:14,720 --> 00:28:19,640
+ways to get multiple models. And I
+
+604
+00:28:18,399 --> 00:28:22,600
+think a lot of people are probably
+
+605
+00:28:19,640 --> 00:28:23,559
+familiar with dropout. It's a method
+
+606
+00:28:22,600 --> 00:28:27,120
+for
+
+607
+00:28:23,559 --> 00:28:29,080
+regularizing,
+
+608
+00:28:27,120 --> 00:28:31,120
+it's a method for regularizing
+
+609
+00:28:29,080 --> 00:28:33,760
+neural networks, or deep learning models
+
+610
+00:28:31,120 --> 00:28:37,279
+in general, and basically the idea is,
+
+611
+00:28:33,760 --> 00:28:41,840
+every once in a while during training
+
+612
+00:28:37,279 --> 00:28:45,720
+you drop out some portion of the, like,
+
+613
+00:28:41,840 --> 00:28:48,919
+nodes in the neural network model.
+
+614
+00:28:45,720 --> 00:28:51,320
+And
+
+615
+00:28:48,919 --> 00:28:52,640
+normally what you do is, at test
+
+616
+00:28:51,320 --> 00:28:53,919
+time, you just don't drop out
+
+617
+00:28:52,640 --> 00:28:56,039
+anything and you use the whole neural
+
+618
+00:28:53,919 --> 00:28:59,960
+network model. But another thing you can
+
+619
+00:28:56,039 --> 00:29:02,559
+do is you can, at test time, drop
+
+620
+00:28:59,960 --> 00:29:04,679
+out five times and combine those
+
+621
+00:29:02,559 --> 00:29:06,600
+different models together through ensembling,
+
+622
+00:29:04,679 --> 00:29:10,600
+and that's actually something that
+
+623
+00:29:06,600 --> 00:29:14,480
+people tried in the dropout
+
+624
+00:29:10,600 --> 00:29:17,600
+paper, and this is one way to get
+
+625
+00:29:14,480 --> 00:29:19,640
+multiple models. And actually you can
+
+626
+00:29:17,600 --> 00:29:21,919
+demonstrate that this helps; the original
+
+627
+00:29:19,640 --> 00:29:24,519
+motivation behind dropout was precisely
+
+628
+00:29:21,919 --> 00:29:26,279
+coming from this idea of
+
+629
+00:29:24,519 --> 00:29:29,080
+ensembling.
+
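[Editor's note: a minimal sketch of test-time dropout ensembling: keep dropout active at inference (here by leaving the model in train mode), run several stochastic forward passes, and average the resulting distributions. The tiny classifier is a stand-in.]

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 3))

def dropout_ensemble(model, x, n_samples=5):
    model.train()  # dropout stays ON (normally you'd call model.eval() at test time)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0)  # average over the sampled sub-networks

print(dropout_ensemble(model, torch.randn(1, 16)))
```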
+630
+00:29:26,279 --> 00:29:31,399
+Another method
+
+631
+00:29:29,080 --> 00:29:34,799
+that has been around for a very long
+
+632
+00:29:31,399 --> 00:29:37,760
+time, another ensembling method, is
+
+633
+00:29:34,799 --> 00:29:41,919
+bagging, and basically the way bagging
+
+634
+00:29:37,760 --> 00:29:41,919
+works is, you have a data
+
+635
+00:29:44,000 --> 00:29:50,159
+set like this, and you just resample the
+
+636
+00:29:47,519 --> 00:29:52,919
+data set. So you sample all of the data
+
+637
+00:29:50,159 --> 00:29:55,200
+with replacement, and you get another
+
+638
+00:29:52,919 --> 00:29:57,799
+data set of equal size, and then you
+
+639
+00:29:55,200 --> 00:29:58,559
+train on this. But you do that like 10
+
+640
+00:29:57,799 --> 00:30:00,120
+times,
+
+641
+00:29:58,559 --> 00:30:02,679
+and you train 10 different models, and
+
+642
+00:30:00,120 --> 00:30:04,360
+then you ensemble those models together. And
+
+643
+00:30:02,679 --> 00:30:06,000
+so this is another way to get multiple
+
+644
+00:30:04,360 --> 00:30:07,519
+models, and both of these still improve
+
+645
+00:30:06,000 --> 00:30:09,640
+your robustness, because they basically
+
+646
+00:30:07,519 --> 00:30:11,440
+get a different view on the data, so they
+
+647
+00:30:09,640 --> 00:30:13,440
+smooth over some of the
+
+648
+00:30:11,440 --> 00:30:15,360
+idiosyncrasies. And as I mentioned
+
+649
+00:30:13,440 --> 00:30:17,960
+before, you can also get multiple models
+
+650
+00:30:15,360 --> 00:30:20,120
+from different checkpoints and then
+
+651
+00:30:17,960 --> 00:30:22,159
+put them together. And all of these
+
+652
+00:30:20,120 --> 00:30:24,159
+methods are pretty related;
+
+653
+00:30:22,159 --> 00:30:25,960
+basically what they're doing is they're
+
+654
+00:30:24,159 --> 00:30:28,279
+taking advantage of the fact that you
+
+655
+00:30:25,960 --> 00:30:29,919
+have particular models that saw
+
+656
+00:30:28,279 --> 00:30:32,760
+different data, or saw data in a
+
+657
+00:30:29,919 --> 00:30:34,120
+different order, or different nodes saw
+
+658
+00:30:32,760 --> 00:30:35,679
+different parts of the data, because you
+
+659
+00:30:34,120 --> 00:30:37,799
+dropped out some of the nodes when they
+
+660
+00:30:35,679 --> 00:30:41,840
+were backpropagating on particular
+
+661
+00:30:37,799 --> 00:30:44,840
+varieties of the data. So even things
+
+662
+00:30:41,840 --> 00:30:46,720
+like this can give you models that are
+
+663
+00:30:44,840 --> 00:30:49,760
+different enough to help when
+
+664
+00:30:46,720 --> 00:30:49,760
+you're ensembling or
+
+665
+00:30:52,559 --> 00:30:59,360
+combining. And then, of course, you can
+
+666
+00:30:56,919 --> 00:31:00,799
+also,
+
+667
+00:30:59,360 --> 00:31:02,480
+then of course you can also combine
+
+668
+00:31:00,799 --> 00:31:06,960
+together, like, very different models like
+
+669
+00:31:02,480 --> 00:31:06,960
+this, and that also works in different
+
+670
+00:31:07,240 --> 00:31:11,159
+ways.
+
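[Editor's note: a sketch of the bagging procedure just described: resample the data set with replacement into same-size bootstrap samples, train one model per sample, and ensemble them. The stand-in learner just memorizes the majority label of its sample.]

```python
import numpy as np

def bagging(X, y, train_fn, n_models=10, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample WITH replacement
        models.append(train_fn(X[idx], y[idx]))     # same-size resampled data set
    return models

def train_fn(X, y):  # stand-in learner: predict the majority label it saw
    majority = np.bincount(y).argmax()
    return lambda X_new: np.full(len(X_new), majority, dtype=float)

X, y = np.arange(20).reshape(-1, 1), np.array([0] * 12 + [1] * 8)
models = bagging(X, y, train_fn)
print(np.mean([m(X) for m in models], axis=0))  # averaged ensemble prediction
```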
+671
+00:31:09,000 --> 00:31:13,039
+Cool. Part of the reason why I wanted to
+
+672
+00:31:11,159 --> 00:31:15,320
+mention dropout, though, in
+
+673
+00:31:13,039 --> 00:31:17,120
+particular, is there's also other
+
+674
+00:31:15,320 --> 00:31:19,240
+efficient methods for using multiple
+
+675
+00:31:17,120 --> 00:31:22,000
+models. So the big problem with
+
+676
+00:31:19,240 --> 00:31:25,399
+ensembling is the cost,
+
+677
+00:31:22,000 --> 00:31:27,159
+and simple ensembling is very expensive
+
+678
+00:31:25,399 --> 00:31:29,240
+because it requires you to run multiple
+
+679
+00:31:27,159 --> 00:31:30,519
+models at test time, at inference
+
+680
+00:31:29,240 --> 00:31:33,720
+time, and this is something you don't
+
+681
+00:31:30,519 --> 00:31:35,279
+want to be doing if you're, you know,
+
+682
+00:31:33,720 --> 00:31:38,679
+deploying a service or something, because
+
+683
+00:31:35,279 --> 00:31:41,080
+it linearly increases your cost with
+
+684
+00:31:38,679 --> 00:31:45,200
+the number of models that you're
+
+685
+00:31:41,080 --> 00:31:47,799
+running. And it requires both N times
+
+686
+00:31:45,200 --> 00:31:50,120
+the computation and N times the memory,
+
+687
+00:31:47,799 --> 00:31:51,720
+and memory is actually probably the
+
+688
+00:31:50,120 --> 00:31:54,279
+worst thing, because you need to deploy
+
+689
+00:31:51,720 --> 00:31:58,159
+extra GPU machines and other stuff like
+
+690
+00:31:54,279 --> 00:31:59,880
+that. So the question is, is there any
+
+691
+00:31:58,159 --> 00:32:03,279
+way we can get some of the benefits of
+
+692
+00:31:59,880 --> 00:32:06,519
+ensembling without having to create
+
+693
+00:32:03,279 --> 00:32:07,320
+multiple models? And luckily the answer
+
+694
+00:32:06,519 --> 00:32:09,240
+is
+
+695
+00:32:07,320 --> 00:32:11,919
+yes.
+
+696
+00:32:09,240 --> 00:32:13,960
+The easiest method for doing
+
+697
+00:32:11,919 --> 00:32:16,600
+so is something called parameter
+
+698
+00:32:13,960 --> 00:32:18,399
+averaging, and basically what you do is
+
+699
+00:32:16,600 --> 00:32:21,960
+you just average the parameters of
+
+700
+00:32:18,399 --> 00:32:26,039
+multiple models together. This only
+
+701
+00:32:21,960 --> 00:32:29,200
+works under certain conditions. So does
+
+702
+00:32:26,039 --> 00:32:31,120
+anyone know what these
+
+703
+00:32:29,200 --> 00:32:33,320
+conditions might be? There's a few
+
+704
+00:32:31,120 --> 00:32:35,919
+obvious ones and maybe a few slightly
+
+705
+00:32:33,320 --> 00:32:35,919
+less obvious
+
+706
+00:32:36,039 --> 00:32:40,799
+ones. So, like, first question: do you think
+
+707
+00:32:38,799 --> 00:32:41,919
+you could combine together, do you think
+
+708
+00:32:40,799 --> 00:32:45,880
+you could average together the
+
+709
+00:32:41,919 --> 00:32:45,880
+parameters of Llama 7B and Llama
+
+710
+00:32:46,440 --> 00:32:52,639
+70B?
+
+711
+00:32:48,480 --> 00:32:52,639
+No, the answer is no, but why
+
+712
+00:32:54,480 --> 00:32:58,440
+not? I mean, what does that even mean in
+
+713
+00:32:56,760 --> 00:33:00,480
+the first place, right? Like, they have
+
+714
+00:32:58,440 --> 00:33:02,799
+totally different numbers of parameters;
+
+715
+00:33:00,480 --> 00:33:05,840
+you wouldn't be able to find a one-
+
+716
+00:33:02,799 --> 00:33:07,840
+to-one association between, like, 7
+
+717
+00:33:05,840 --> 00:33:12,320
+billion parameters and 70 billion
+
+718
+00:33:07,840 --> 00:33:16,880
+parameters. What about averaging
+
+719
+00:33:12,320 --> 00:33:19,399
+together, let's say, Llama 7B and
+
+720
+00:33:16,880 --> 00:33:19,399
+Mistral
+
+721
+00:33:23,080 --> 00:33:29,760
+7B? Yes? No?
+
+722
+00:33:27,440 --> 00:33:29,760
+[Student answers, partially inaudible.]
+
+723
+00:33:33,760 --> 00:33:38,120
+Yeah, for different architectures, the,
+
+724
+00:33:36,760 --> 00:33:41,799
+the parameters could mean different
+
+725
+00:33:38,120 --> 00:33:44,159
+things. And even if the architecture is
+
+726
+00:33:41,799 --> 00:33:45,880
+exactly the same, if your random
+
+727
+00:33:44,159 --> 00:33:49,880
+initialization is different, then that
+
+728
+00:33:45,880 --> 00:33:52,360
+would be disastrous, because basically,
+
+729
+00:33:49,880 --> 00:33:54,760
+in neural networks there's no inherent
+
+730
+00:33:52,360 --> 00:33:58,559
+meaning to, like, parameter number one,
+
+731
+00:33:54,760 --> 00:34:01,919
+right? And there's the idea of permutation
+
+732
+00:33:58,559 --> 00:34:06,679
+invariance, which is,
+
+733
+00:34:01,919 --> 00:34:07,639
+you could, like, randomly swap all of
+
+734
+00:34:06,679 --> 00:34:10,280
+the
+
+735
+00:34:07,639 --> 00:34:12,079
+dimensions within a neural
+
+736
+00:34:10,280 --> 00:34:14,760
+network and get exactly the same
+
+737
+00:34:12,079 --> 00:34:17,919
+function,
+
+738
+00:34:14,760 --> 00:34:22,560
+as long as, kind
+
+739
+00:34:17,919 --> 00:34:24,839
+of, in layer number one you swap, and then
+
+740
+00:34:22,560 --> 00:34:30,359
+also swap the inputs in the next layer
+
+741
+00:34:24,839 --> 00:34:30,359
+too. So, you know, as long
+
+742
+00:34:30,960 --> 00:34:36,399
+as... if you have a weight matrix that
+
+743
+00:34:33,679 --> 00:34:40,800
+results in the outputs being
+
+744
+00:34:36,399 --> 00:34:49,639
+ordered like one, two, three, four,
+
+745
+00:34:40,800 --> 00:34:54,159
+five, or two, one, three, five, four, as long as
+
+746
+00:34:49,639 --> 00:34:55,720
+you also swap the input
+
+747
+00:34:54,159 --> 00:34:58,400
+dimensions of the next weight matrix, you get
+
+748
+00:34:55,720 --> 00:35:01,520
+exactly the same function, because they're
+
+749
+00:34:58,400 --> 00:35:04,200
+just linear combinations of the parameters
+
+750
+00:35:01,520 --> 00:35:06,480
+together. So neural networks have this
+
+751
+00:35:04,200 --> 00:35:08,599
+feature of permutation invariance, so
+
+752
+00:35:06,480 --> 00:35:11,800
+models that were trained from, like,
+
+753
+00:35:08,599 --> 00:35:13,280
+different initializations
+
+754
+00:35:11,800 --> 00:35:15,040
+won't be able to be combined together in
+
+755
+00:35:13,280 --> 00:35:18,320
+this
+
+756
+00:35:15,040 --> 00:35:20,079
+way. But the good thing
+
+757
+00:35:18,320 --> 00:35:21,359
+is, actually, we have a whole bunch of
+
+758
+00:35:20,079 --> 00:35:25,320
+models that come from the same
+
+759
+00:35:21,359 --> 00:35:26,720
+pre-trained model, right? So we have
+
+760
+00:35:25,320 --> 00:35:28,640
+this initialization here, this
+
+761
+00:35:26,720 --> 00:35:31,280
+initialization was used to train Llama
+
+762
+00:35:28,640 --> 00:35:32,920
+2 7B, but now we have, like, hundreds,
+
+763
+00:35:31,280 --> 00:35:34,440
+hundreds of models that are derived from
+
+764
+00:35:32,920 --> 00:35:37,400
+Llama 2, we have hundreds of models that
+
+765
+00:35:34,440 --> 00:35:39,599
+are derived from Mistral, and there all of the
+
+766
+00:35:37,400 --> 00:35:40,920
+dimensions actually mean the same thing,
+
+767
+00:35:39,599 --> 00:35:43,280
+because they're derived from the same
+
+768
+00:35:40,920 --> 00:35:46,680
+parameters in the first place. So those
+
+769
+00:35:43,280 --> 00:35:48,119
+ones we can average together.
+
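[Editor's note: a small numerical check of the permutation-invariance point above: permuting the hidden units of one layer, and applying the same permutation to the input dimensions of the next layer's weight matrix, leaves the network's function unchanged. A two-layer MLP with arbitrary sizes.]

```python
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(5, 4), torch.randn(5)  # layer 1: 4 -> 5 hidden units
W2 = torch.randn(3, 5)                      # layer 2: 5 hidden units -> 3
x = torch.randn(4)

def mlp(W1, b1, W2, x):
    return W2 @ torch.relu(W1 @ x + b1)

perm = torch.randperm(5)  # e.g. reorder hidden units as 2, 1, 3, 5, 4
same = torch.allclose(mlp(W1, b1, W2, x),
                      mlp(W1[perm], b1[perm], W2[:, perm], x))
print(same)  # True: exactly the same function, different parameter vector
```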
+770
+00:35:46,680 --> 00:35:50,359
+And there's basically two ways that we can
+
+771
+00:35:48,119 --> 00:35:53,520
+do this. One is by averaging together
+
+772
+00:35:50,359 --> 00:35:55,240
+multiple checkpoints during training. So
+
+773
+00:35:53,520 --> 00:35:57,960
+originally this was the big thing that
+
+774
+00:35:55,240 --> 00:36:00,359
+people did, like, you would train a model
+
+775
+00:35:57,960 --> 00:36:02,119
+from scratch for a really long time, but
+
+776
+00:36:00,359 --> 00:36:03,920
+then you would take the final five
+
+777
+00:36:02,119 --> 00:36:07,520
+checkpoints and you would just average
+
+778
+00:36:03,920 --> 00:36:09,280
+them together, and this helps reduce some
+
+779
+00:36:07,520 --> 00:36:11,040
+of the noise that you get from
+
+780
+00:36:09,280 --> 00:36:13,839
+stochastic gradient descent and can
+
+781
+00:36:11,040 --> 00:36:15,520
+improve your overall accuracy. If you're
+
+782
+00:36:13,839 --> 00:36:17,280
+fine-tuning any models, this is something
+
+783
+00:36:15,520 --> 00:36:18,680
+you can do also, because you're
+
+784
+00:36:17,280 --> 00:36:19,800
+probably going to be saving checkpoints;
+
+785
+00:36:18,680 --> 00:36:21,160
+you can just take the best five
+
+786
+00:36:19,800 --> 00:36:23,079
+checkpoints and average them together,
+
+787
+00:36:21,160 --> 00:36:27,280
+and that actually can improve your
+
+788
+00:36:23,079 --> 00:36:28,160
+accuracy quite a bit. Another thing is
+
+789
+00:36:27,280 --> 00:36:31,520
+fine-
+
+790
+00:36:28,160 --> 00:36:32,880
+tuned model merging: so, fine-tune in
+
+791
+00:36:31,520 --> 00:36:35,000
+several ways and then merge them
+
+792
+00:36:32,880 --> 00:36:39,079
+together. And so, for example, we might
+
+793
+00:36:35,000 --> 00:36:41,240
+take Llama 2 7B Instruct and Vicuna 7B
+
+794
+00:36:39,079 --> 00:36:44,760
+1.5 and merge them together with some
+
+795
+00:36:41,240 --> 00:36:47,599
+weights, and we could, you
+
+796
+00:36:44,760 --> 00:36:50,319
+know, smooth over their idiosyncrasies
+
+797
+00:36:47,599 --> 00:36:52,520
+and get better results
+
+798
+00:36:50,319 --> 00:36:56,280
+too.
+
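[Editor's note: a minimal sketch of parameter averaging itself, as a weighted average of state dicts. The same helper covers both cases above: averaging the last few checkpoints of one run, or merging fine-tuned models that descend from the same pre-trained initialization. The paths and weights are hypothetical.]

```python
import torch

def average_state_dicts(state_dicts, weights=None):
    """Element-wise weighted average of parameters. All models must share one
    architecture and, for fine-tuned merging, the same original initialization."""
    n = len(state_dicts)
    weights = weights if weights is not None else [1.0 / n] * n
    return {k: sum(w * sd[k].float() for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}

# Hypothetical usage: average the final five checkpoints of a fine-tuning run.
# sds = [torch.load(f"checkpoint_{i}.pt") for i in range(5)]
# model.load_state_dict(average_state_dicts(sds))
```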
The second is fine-tuned model merging: you fine-tune in several ways and then merge the results together. So, for example, we might take Llama 2 7B Instruct and Vicuna 7B v1.5 and merge them together with some weights, and we could smooth over their idiosyncrasies and get better results. Cool, any questions here?

[Audience question, partly inaudible.] Yeah, so, would this parameter averaging be a good method for making a model less toxic, for example? The answer is a little bit trickier there, because I feel like this is good for mixing two models together. So if you're mixing your non-toxicity-tuned model, your safety-tuned model, with the original base model that was not safety tuned, then you might get something in the middle: something that's less safe than the model that was tuned to not be toxic. But let's say you have a model that somebody else did a really good job instruction tuning for you, and any time you try safety tuning on it you hurt the instruction tuning, so the model gets worse. I could see a world where you take the same base model, Llama 2 7B, train a less toxic version of Llama 2 7B, and then do parameter averaging with the well-instruction-tuned model. That might work; that might give you something that's more safe and not much worse at following instructions. So there are definitely creative things you can do there.

Maybe I'll go directly into the methods. There are a few recent papers on this; the method has been around for a long time, since at least 1996, but recently people have examined it a lot in the context of modern networks. This paper, Model Soups, examines two strategies. The first is uniform averaging, where you just average all the parameters together, as you would expect. But they also have a greedy averaging method, where they add one model at a time and check whether the whole averaged model improves; only if the averaged model improves do they keep that model, and otherwise they throw it out and don't use it.
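A minimal sketch of the two soup strategies, assuming we have a list of state dicts from fine-tuning runs and some held-out evaluation function (the helper names here are hypothetical, not from the paper's code):

```python
import torch

def uniform_soup(state_dicts):
    # Average every parameter across all models.
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in keys}

def greedy_soup(state_dicts, evaluate):
    # Rank models by held-out accuracy, best first.
    ranked = sorted(state_dicts, key=evaluate, reverse=True)
    soup, best = [ranked[0]], evaluate(ranked[0])
    for sd in ranked[1:]:
        candidate = uniform_soup(soup + [sd])
        score = evaluate(candidate)
        if score >= best:        # keep a model only if the soup improves
            soup.append(sd)
            best = score
    return uniform_soup(soup)
```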
What they demonstrate (this figure is a little bit small) is: the purple star is when they use greedy averaging, the blue circle is when they use uniform averaging, and green is all of the models that they put into the average. One axis is average accuracy on ImageNet, which is the data they used when deciding which models to merge in greedily, and the other is accuracy under distribution shift, on data sets other than the ones they used specifically for that decision. What you can see is that the greedy averaging method does better than the best single model on the data set they used to build the greedy average. The uniform average actually does worse than the best model, so for ImageNet accuracy you would be better off just using the best model, but it's more robust: on the distribution-shift data sets it actually does better than any of the individual models. So you can see there are trade-offs between choosing those, essentially.

(Whoops, that's a typo on the slide; it should say "ensembling.") They also demonstrate that averaging is correlated with ensembling. One axis is the ImageNet accuracy of the parameter-averaged model, the other is the ImageNet accuracy of the ensemble, and I think this is a really interesting figure. What it shows is that there's a pretty strong correlation between the two. Averaging is almost never better than ensembling the models together, but it's faster, of course, so in that sense it's better. And there are situations where the ensemble is much better than the averaged model: averaging hurts in those cases, while ensembling does not. So what this shows you is that parameter averaging is safe, and it closely approximates model ensembling most of the time, but there are cases where it doesn't, so you do need to be a little bit careful; it might hurt your accuracy in some cases.

[Audience comment, mostly inaudible.] Oh yeah, sorry, very good point, yes.

[Inaudible audience question about how the models being merged were initialized.]
Yeah, so notably, it's been a little while since I read this, but I know all of these models were initialized from a model that was already pretty good on ImageNet, and then they were tuned in different ways; I think that one may have been initialized with a model trained on a different data set or something like that. So they are all starting from the same initialization, which means permutation invariance is not an issue there. But despite the fact that it's not a problem, there are still cases where averaging is detrimental compared to ensembling.

[Several inaudible audience exchanges.]

Yeah, so that's a great question; I'll just repeat it. These experiments were done on CNN-based ImageNet classifiers: is there something different about Transformers, particularly because Transformer representations tend to be very concentrated in particular parts of the space? That's an excellent question. What I do know is that a lot of people do merge together Transformer models; in fact, if you look at the Hugging Face leaderboard, there are "something-and-something" merges all over the leaderboard, and it does tend to improve accuracy, so I know it is definitely effective for Transformers. However, are there specific parameter-averaging or model-merging methods that could improve accuracy by taking advantage of the fact that Transformers behave in a certain way? I think that's totally possible, and it would be an interesting research direction. I'm not familiar enough with that
particular area myself to say "I have this great idea that you should work on," but if you're interested in it, you should definitely look into it. Cool, anything else?

Okay, so there's also the idea of task vectors. Up to now we've been merging two models by taking their parameters and averaging them together. Task vectors and other related works specifically take advantage of the fact that we're looking at different fine-tuned models: models where we have a base model, and we know they were fine-tuned from that base model. The basic idea is that the task vector is the difference between the fine-tuned model's parameters and the base model's parameters; that's what they define as a task vector. What does this allow us to do? A number of interesting things. The first is that we can actually subtract out, quote-unquote, tasks that we don't want. Let's say we had a model that was trained on lots of toxic text, or a model that was trained on lots of private text; we could subtract out that task vector and basically attempt to remove the model's ability to do that sort of thing. You can also take two task vectors and combine them, and get a model that does the combination of the two. This isn't exactly the same as averaging the parameters: if you average the parameters you would probably get something in the middle, whereas if you add the two vectors together you would get something further out (actually, sorry, if you average the vectors maybe it's the same). The point is that adding the two vectors is something different from taking the average, so it gives you a little bit more flexibility about things to do.
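A minimal sketch of task arithmetic on state dicts, under the assumption that all models share the base model's architecture and initialization (the helper names are hypothetical, not from the paper):

```python
def task_vector(base, finetuned):
    # tau = theta_finetuned - theta_base, per parameter tensor
    return {k: finetuned[k] - base[k] for k in base}

def apply_vectors(base, vectors, scale=1.0):
    out = {k: v.clone() for k, v in base.items()}
    for tau in vectors:
        for k in out:
            out[k] += scale * tau[k]
    return out

# Forgetting: subtract a "toxicity" task vector from the base model.
#   detoxed = apply_vectors(base, [task_vector(base, toxic_model)], scale=-1.0)
# Multi-task: add two task vectors to the base model.
#   multi = apply_vectors(base, [tau_a, tau_b], scale=1.0)
```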
Another thing this allows you to do is try to resolve conflicts between the vectors of different tasks. There's an illustration of this method with three tasks: model one, model two, model three, each with its own task vector, and you'll see that in some cases these vectors conflict. We have pink going up where yellow and purple go down in one dimension, yellow going up where pink and purple go down in another, et cetera. What the method does is identify the components that point most strongly in particular directions, resolve the conflicts between them, and come up with a vector that tries to move in a direction that improves all of the tasks at the same time. They demonstrate that this is a better method for improving the ability to do all of the tasks, compared to just averaging things together.
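Here's a rough sketch of that sign-conflict resolution, in the spirit of TIES-merging's trim / elect-sign / disjoint-mean steps (this is my simplification, not the paper's exact algorithm, and it assumes non-scalar parameter tensors):

```python
import torch

def resolve_and_merge(task_vectors, keep=0.2):
    merged = {}
    for k in task_vectors[0]:
        stacked = torch.stack([tv[k].float() for tv in task_vectors])
        # Trim: per task vector, keep only the largest-magnitude `keep` fraction.
        thresh = stacked.abs().flatten(1).quantile(1 - keep, dim=1)
        shape = (-1,) + (1,) * (stacked.dim() - 1)
        trimmed = torch.where(stacked.abs() >= thresh.view(shape),
                              stacked, torch.zeros_like(stacked))
        # Elect a sign per parameter by total mass, then average only the
        # entries that agree with the elected sign (conflict resolution).
        sign = torch.sign(trimmed.sum(0))
        agree = (torch.sign(trimmed) == sign) & (trimmed != 0)
        merged[k] = (trimmed * agree).sum(0) / agree.sum(0).clamp(min=1)
    return merged
```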
[Audience question, partly inaudible: for the first example, couldn't you just add the task vector to push the model further in that direction?] Yeah, so you could move it more in that direction. There's obviously no guarantee that it would make the model better, but it might make it more extreme, at least. Any other questions?

[An audience member asked about the connection between these methods and meta-learning.] Yeah, so this is a great question. I can explain a little bit. I'm not going to talk about meta-learning extensively in this class, but just to give a very quick primer for people who don't know about it: this is a paper on meta-learning for low-resource machine translation; you can take a look at it or not, but the reason I bring it up is that it has a good illustration of what meta-learning is. Basically, if we're doing transfer learning from a single task, we have, say, a Spanish-English machine translation system, and we fine-tune it to try to be a good Romanian-English or Latvian-English system. If we're doing multi-task learning, which could also be equivalent to instruction tuning, for example, we have French, Spanish, and Portuguese, we train on all of them, and then we fine-tune to be a good Romanian or Latvian translator. What meta-learning is trying to do, instead, is learn a good initialization that makes it easy to fine-tune: to come up with a model that is good specifically for fine-tuning into new tasks. The way you do this is basically with two steps of gradient descent. You have a first step where you update the model on data from, say, French, and then you have another update where you train on another language. This is a very, very informal description; there's a lot of stuff we could talk about here, I could have a whole class on this, but we're not going to, and I don't have one planned at the moment. So you update once, then you update again, and you differentiate through this update process, so that the starting point becomes, essentially, a good initialization for training on other languages, or other tasks, or things like that.
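For those who haven't seen it, here is a heavily simplified MAML-style sketch of "differentiating through the update" on a toy linear model (my illustration, not from the paper; a real implementation would use a library such as higher or torch.func):

```python
import torch

# Toy model: linear regression with a single weight vector `w`.
w = torch.randn(3, requires_grad=True)
inner_lr, outer_lr = 0.1, 0.01

def loss(w, X, y):
    return ((X @ w - y) ** 2).mean()

for step in range(100):
    X_a, y_a = torch.randn(8, 3), torch.randn(8)   # task A batch ("French")
    X_b, y_b = torch.randn(8, 3), torch.randn(8)   # task B batch ("Romanian")

    # Inner update on task A, keeping the graph so we can
    # differentiate through the update itself (create_graph=True).
    g = torch.autograd.grad(loss(w, X_a, y_a), w, create_graph=True)[0]
    w_adapted = w - inner_lr * g

    # Outer update: how well does the *adapted* model do on task B?
    meta_grad = torch.autograd.grad(loss(w_adapted, X_b, y_b), w)[0]
    with torch.no_grad():
        w -= outer_lr * meta_grad
```

The double differentiation is exactly why this gets expensive for large models, as mentioned below.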
Now, going back to the original question: is there a connection between meta-learning and these task vectors? I'm not 100% sure about that, because I think these task vectors are generally created post hoc, so there's no explicit learning step to try to make them generalize well. One thing that might be interesting to people: this is a paper that we literally just put on arXiv about last week. We didn't actually use meta-learning in this paper, because meta-learning is hard to implement; you need to do this double differentiation, and it can become very, very expensive for large models. But we did something a little bit motivated by meta-learning. We took a pre-trained LM, and normally what you do is something like continued pre-training on new documents, to learn the knowledge in those documents, or maybe instruction tuning that includes documents about the kind of data you want to answer questions about. So if you're trying to train a medical language model, you might train on lots of medical documents. What we did here is add a step in advance, where we train on question-answer pairs and documents from another domain, and then we have a step after that where we train on documents from the domain we want to answer questions about.
So we might train on Wikipedia question-answer pairs and Wikipedia documents, and then in the second step train on medical documents. We demonstrate that this basically allows the model to do a better job of question answering over the documents we fine-tuned on. Going back to the meta-learning paper I talked about before: that paper tries to get the parameters into a good space so that after you fine-tune on another data set, you do a good job on it. In our paper, the motivation is that the model learns that when you train on documents, you should be able to answer questions about those documents, so when you get a new set of documents it's already in a good part of the parameter space to make that easy to do. If meta-learning is interesting to you, there are tutorials on it that I could share, and if you're interested in learning knowledge from continued pre-training or something like that, you could take a look at this paper as well. Cool, any questions about that? Okay, cool, I'll jump ahead then.

So, I've talked about several methods for merging models together. There's a popular toolkit called mergekit that makes it relatively easy to do this. It implements a lot of the methods I talked about here, including the linear methods, the task arithmetic method, and TIES, plus some expansions on these. So if you want to merge models together, it's relatively easy to do from a software standpoint, and you can take a look at that.
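As an illustration, a mergekit configuration for a TIES merge looks roughly like this. I'm writing the schema from memory, so treat the exact field names as an approximation and check the mergekit README; the model IDs are just the Hugging Face checkpoints matching the earlier example:

```yaml
merge_method: ties
base_model: meta-llama/Llama-2-7b-hf
models:
  - model: lmsys/vicuna-7b-v1.5
    parameters:
      weight: 0.5
      density: 0.5
  - model: meta-llama/Llama-2-7b-chat-hf
    parameters:
      weight: 0.5
      density: 0.5
dtype: float16
```

The toolkit then builds the merged checkpoint from this config (via its mergekit-yaml command, if I remember correctly).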
Another really simple thing is distilling ensembles, and we already talked about distillation, so the idea is simple. Parameter averaging only really works for models from the same run: same model architecture, same initialization. Knowledge distillation instead trains a model to copy the ensemble, trying to match the ensemble's distribution over predicted words at each position. This allows the student model to make the same good predictions as the ensemble, and the same bad predictions as the ensemble; it just lets you learn more efficiently, like distillation does in general. And actually, the original motivation for model distillation, when Geoffrey Hinton proposed it in this 2015 paper, was to copy an ensemble. Now we use it for a lot of other things, like the distillation we covered earlier in the class, but that was the original motivation.
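A minimal sketch of the training objective, distilling an ensemble's averaged next-token distribution into a single student, assuming Hugging Face-style models whose forward pass returns .logits (not code from the lecture):

```python
import torch
import torch.nn.functional as F

def distill_step(student, teachers, input_ids, optimizer):
    with torch.no_grad():
        # Ensemble distribution: average the teachers' probabilities.
        probs = torch.stack(
            [F.softmax(t(input_ids).logits, dim=-1) for t in teachers]
        ).mean(0)
    log_q = F.log_softmax(student(input_ids).logits, dim=-1)
    # Cross-entropy of the student against the ensemble distribution
    # (equivalent to minimizing KL divergence, up to a constant).
    loss = -(probs * log_q).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```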
Next I'll move on to sparse mixture-of-experts models. This is really important; it's used in a lot of modern models, it's allegedly used in GPT-4, and it is definitely used in Mixtral, which is one of the state-of-the-art open models, so I think it's a good thing to know.

What these do is take advantage of sparse computation. If you think about what happens when you do a scalar-tensor multiply where the scalar is zero, the entire resulting tensor is guaranteed to be zero, so you don't even need to do the computation. This manifests itself in a bunch of different places in modern models. The first is single rows in a matrix multiply: if you have a big matrix multiply, or a matrix-vector multiply, and some of the rows are zero, that's one place where it happens. It can also happen not just with rows but with larger tensors, and even with whole models in an ensemble. The first case can be optimized automatically by the GPU; the second often occurs in sparse mixture-of-experts models; and for the final one, you just don't need to run that model at all: if you somehow optimize an ensemble and the weight on one of the models turns out to be zero, you can just throw it out.

On GPU-level sparsity support: NVIDIA GPUs support a bunch of different types of sparsity, and the wonderful people at NVIDIA have worked hard to make that support work, to some extent anyway. There's a library called cuSPARSE, and it's used in PyTorch and other frameworks as well. To give an example, consider a vector-matrix multiply with a sparse vector, such as one that comes out of a ReLU activation. If only three entries of the vector are active, then only the parts of the matrix corresponding to those entries contribute to the output, and you can skip computing the rest, which makes your life relatively easy.
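A tiny sketch of that idea, assuming a post-ReLU activation vector multiplying a weight matrix (illustrative only; real speedups come from sparse kernels like cuSPARSE, not Python-level indexing):

```python
import torch

h = torch.relu(torch.randn(1024))        # sparse-ish post-ReLU activation
W = torch.randn(1024, 4096)

idx = h.nonzero(as_tuple=True)[0]        # indices of the active units
# Only the rows of W for active units contribute to the output.
out_sparse = h[idx] @ W[idx]
out_dense = h @ W

print(torch.allclose(out_sparse, out_dense, atol=1e-4))  # True
```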
But the specific thing I wanted to talk about is the sparsely gated mixture-of-experts layer, because this is what's used in Mixtral, and probably the GPT models as well. Normally, the feed-forward network in a Transformer is this really wide thing, a huge, wide feed-forward network that you use to extract a whole bunch of features at each layer, and that's where a lot of the computation in a Transformer happens. What sparsely gated mixture-of-experts layers do is first run a gating network that calculates a mixture probability over the experts, where the mixture probability is zero for many or most of the parts of this feed-forward network. For the ones where it's zero, you just don't calculate them, and then when you mix the outputs together, you use the mixture weights. This is actually really simple, maybe seven or eight lines of PyTorch code. The basic idea is: you have a gating function calculated from the input, then you have a keep-top-k operation, and then you take the softmax over that. The keep-top-k operation just keeps a value if it's within the top k, and drops it if it's not. That's all, basically. But what's great about this is that you then don't have to calculate most of the experts. For example, if you keep the top two out of eight, you reduce your computation for this part by a factor of four.
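Here's a short sketch of that top-k gating in PyTorch, simplified to one token for clarity (in the spirit of the sparsely gated MoE formulation; Mixtral's actual implementation differs in details):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model, d_ff, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))
        self.k = k

    def forward(self, x):                    # x: (d_model,), one token
        scores = self.gate(x)                # gating function: (n_experts,)
        topv, topi = scores.topk(self.k)     # keep-top-k
        weights = F.softmax(topv, dim=-1)    # softmax over the kept scores
        # Only the k selected experts are actually computed.
        return sum(w * self.experts[i](x)
                   for w, i in zip(weights, topi.tolist()))
```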
[Inaudible audience question.] Sorry, what exactly do you mean by easy to parallelize? Are you talking about how a GPU can calculate lots of things at the same time? Yeah, so I think if you have a very small model, you're actually not going to gain as much from this, because you're essentially not bound by computation; you're bound more by memory movement on the GPU and other things like that. But once you start getting up to the bigger models, you actually are bound by computation, so reducing your computation by a factor of four is a big win. It's a really, really good question. Any other questions?

[Audience question about the gating network.] Oh sorry, I don't have it on this slide, but the gate will often be a linear layer followed by a softmax. Or actually, no, it doesn't even need to be followed by a softmax; it could just be a linear layer, since the softmax is applied after the top-k. I didn't put it on this slide, but in the references on the website I have the actual implementation in Mixtral; you can go look at it, it's really simple. One thing I didn't put on here, which relates to the earlier question: hardware-wise, this implementation is tricky if you do batching, because different experts will be active for different parts of the batch, so you need to do some tricky stuff. Like so much of AI research nowadays, the best resource for this is social media: there's an interesting discussion if you search for gpt-fast and Mixtral on Twitter. Basically there are a bunch of little things you need to pay attention to, and tricks you can use to make this run fast on a GPU, which also addresses that concern, so you can look for Horace He's discussion of this.

Cool. The final thing I'd like to talk about in the last ten minutes is pipeline systems.
Pipeline systems are systems where the output of one model becomes the input of another model. To give an example, a cascaded system is a system where you take the output of one system and feed it into the input of another. A very stereotypical example of this is speech translation, where you take speech, do speech recognition into text, and then do machine translation of that text into another language. One of the frustrating things about speech translation is that these cascaded systems have been stubbornly better, for a long time, than systems that try to go end to end, speech straight to text in another language. There are a couple of reasons for this. Does anyone have an idea what one of those reasons might be?

[Audience: the data?] Exactly. Data availability is way better for speech-to-text in the same language, plus text-to-text in another language, than it is for speech to text in another language, because there just aren't large data sets that pair speech with text across many languages. There are tricks you can do to work around this, but it's still tricky. And there are a couple of other reasons. One is that speech-to-text in the same language is just a much more straightforward task, so it's a bit easier to learn. Another is interpretability, and the reason interpretability matters is basically this: if I'm talking to you in a different language through a speech translation system, I
actually want to know whether the speech recognition worked, because if the speech recognition didn't work, then I'm pretty sure the translation didn't work either; and I can verify the speech recognition, but I can't verify the translation. So there are reasons you might want a cascaded system other than just accuracy. But this is a thing we definitely do.

There's another idea called stacking. Stacking is very similar to cascading, but it lets the second model make its predictions from both the original input and the first model's output. Maybe ignore the example I have on the slide and just take the speech translation example again: we would first do speech recognition into, say, English text, and then the input to the translation model would be both the speech and the English text, and we would generate the output in Japanese. Taking both the speech and the text when doing translation allows the model to, number one, get a second opinion about whether the transcription was correct, but also to recover information that only appears in the speech. To give an example, "I read the book" in the present tense and "I read the book" in the past tense are transcribed exactly the same way, but they're obviously different translations, because one is present tense and the other is past tense. So there are examples where a cascaded system would lose information and a stacked system would not.
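As a sketch of the difference: a cascade passes only the intermediate text forward, while a stacked system feeds both signals to the second model. The model interfaces here are hypothetical, just to show the data flow:

```python
def cascade(asr, mt, speech):
    text = asr(speech)          # speech -> English text
    return mt(text)             # English text -> Japanese text

def stacked(asr, mt_stacked, speech):
    text = asr(speech)
    # The translator sees BOTH the audio and the transcript, so it can
    # recover what the transcript loses (e.g., "read" present vs. past).
    return mt_stacked(speech, text)
```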
Another thing is refinement. I think this is actually really interesting, because large language models have opened up a whole bunch of possibilities for us in this space. This is like cascading and stacking, but it can be done multiple times, and it can be done multiple times with the same model: we have an input, we feed it into the model, we get an output, and then we feed the output back in and gradually refine it, making it better and better. The first time this was done in neural networks was with something called deliberation networks. What deliberation networks do is take in an output and gradually refine it to make it better and better; they used a reinforcement learning algorithm to do this, where you generated the output and then improved it. Another thing that's really popular nowadays is diffusion models. I haven't quite decided whether I'll have time to cover diffusion models in depth, but basically the way a diffusion model works is very similar: you start out with nothing and then you gradually make it better and better. The key difference between deliberation networks and diffusion models is that diffusion models can be trained from scratch very efficiently, by applying noise to the input during training. They're very widely used in image generation; they're not super widely used in text, just because regular autoregressive models are so good for text, but there are a few efforts to do that.

And then a final one is Self-Refine.
+ +1551 +01:12:58,239 --> 01:13:05,600 +figure um yeah so maybe this is a figure + +1552 +01:13:02,679 --> 01:13:08,639 +um so basically uh what you do is you + +1553 +01:13:05,600 --> 01:13:10,639 +feed in the input you generate an output + +1554 +01:13:08,639 --> 01:13:12,679 +and then you ask the model to give you + +1555 +01:13:10,639 --> 01:13:15,520 +feedback on the output and say yes this + +1556 +01:13:12,679 --> 01:13:16,760 +output is good or um like let's say + +1557 +01:13:15,520 --> 01:13:19,679 +you're doing code generation it could + +1558 +01:13:16,760 --> 01:13:21,920 +say no this output has an error in it um + +1559 +01:13:19,679 --> 01:13:24,719 +this is a problem with your output and + +1560 +01:13:21,920 --> 01:13:27,840 +then you feed in both the output and the + +1561 +01:13:24,719 --> 01:13:29,480 +feedback back uh and ask the model to + +1562 +01:13:27,840 --> 01:13:32,239 +refine its output and you do this over + +1563 +01:13:29,480 --> 01:13:35,280 +and over again and this allows you to uh + +1564 +01:13:32,239 --> 01:13:36,840 +improve the output and uh this is has + +1565 +01:13:35,280 --> 01:13:39,600 +ended up being pretty effective in a + +1566 +01:13:36,840 --> 01:13:41,159 +pretty wide number of tasks one caveat + +1567 +01:13:39,600 --> 01:13:44,040 +about this is your model has to be + +1568 +01:13:41,159 --> 01:13:47,000 +really good for this to work so um only + +1569 +01:13:44,040 --> 01:13:49,239 +models kind of on the level of GPT 4 not + +1570 +01:13:47,000 --> 01:13:52,000 +on the level of GPT 3.5 have the ability + +1571 +01:13:49,239 --> 01:13:54,040 +to do this pretty consistently so it is + +1572 +01:13:52,000 --> 01:13:57,040 +something you need to be aware + +1573 +01:13:54,040 --> 01:13:57,040 +of + +1574 +01:13:59,760 --> 01:14:03,600 +cool yep that's all I I had for today + +1575 +01:14:02,400 --> 01:14:06,600 +I'm happy + +1576 +01:14:03,600 --> 01:14:06,600 +to + +1577 +01:14:07,159 --> 01:14:10,159 +take + +1578 +01:14:20,600 --> 01:14:27,320 +yep yep that this is a great question so + +1579 +01:14:23,920 --> 01:14:28,840 +if sta has the potential to address + +1580 +01:14:27,320 --> 01:14:32,120 +information loss why would we ever + +1581 +01:14:28,840 --> 01:14:33,840 +choose a Cascade model I think basically + +1582 +01:14:32,120 --> 01:14:37,440 +there's potentially two reasons one + +1583 +01:14:33,840 --> 01:14:39,199 +reason is um data availability so in + +1584 +01:14:37,440 --> 01:14:42,639 +order to train a stacked model you + +1585 +01:14:39,199 --> 01:14:43,430 +obviously need the outputs I guess you + +1586 +01:14:42,639 --> 01:14:44,639 +could + +1587 +01:14:43,430 --> 01:14:48,440 +[Music] + +1588 +01:14:44,639 --> 01:14:50,880 +um yeah I guess you could run + +1589 +01:14:48,440 --> 01:14:53,199 +the and generate outputs for every + +1590 +01:14:50,880 --> 01:14:54,840 +training example you have um but you + +1591 +01:14:53,199 --> 01:14:55,840 +would need to do that so you would need + +1592 +01:14:54,840 --> 01:14:58,639 +to to + +1593 +01:14:55,840 --> 01:14:59,920 +run speech recognition for every example + +1594 +01:14:58,639 --> 01:15:02,760 +and you also + +1595 +01:14:59,920 --> 01:15:05,199 +couldn't you couldn't use any examples + +1596 +01:15:02,760 --> 01:15:07,600 +where you don't have the original input + +1597 +01:15:05,199 --> 01:15:10,320 +so you couldn't use text to text + +1598 +01:15:07,600 --> 01:15:12,239 +examples unless you like synthesize + +1599 +01:15:10,320 --> 01:15:14,159 +speech from text 
Cool. Yep, that's all I had for today; I'm happy to take questions.

[Audience question.] Yep, this is a great question: if stacking has the potential to address information loss, why would we ever choose a cascaded model? I think there are basically two reasons. One reason is data availability: in order to train a stacked model, you obviously need the intermediate outputs. I guess you could run the first model and generate outputs for every training example you have, but you would need to do that; you would need to run speech recognition on every example, and you also couldn't use any examples where you don't have the original input. So you couldn't use text-to-text examples for machine translation, for example, unless you synthesized speech from the text. That makes it a little bit more tricky due to the data requirements, but it's not insurmountable. The second reason is complexity and efficiency: you do have to come up with a model that takes in speech and text and runs on both, and it might be easier to just hook together a speech recognizer with a translation model. But overall I like these methods; if you're thinking about using a cascaded system, you should definitely consider using a stacked system instead.

[Audience question.] Yeah, can you measure the contribution of each component to an ensemble? The very easy way to do that is to look at the interpolation coefficients, if you trained the interpolation coefficients. Otherwise I guess it depends on what you mean by contribution, but looking at the interpolation coefficients is a pretty good way to do it, and also just how much each model changes the accuracy.

[Audience question.] Is iterative refinement the same idea as boosting in traditional machine learning systems? I think it's a little bit different, because in iterative refinement, as I'm describing it here, you're usually taking in the rather complex output of a system and modifying it. You're not just modifying the probabilities of a single classifier; you're modifying the actual outputs that were generated. From the point of view of a boosting model
over a single categorical output, it might actually be similar, or even the same, but this is more like: you generated a textual output, you feed that textual output into the other model, and it refines it and generates a new textual output. So I feel like it's a lot more complex.

Cool. Okay, thanks a lot, everyone.