In my slides here I'm going to talk about debugging and understanding NLP models: how to tell when, for example, your implementation is wrong, your underlying assumptions are wrong, or your model is failing on particular segments of data. I'm going to go through a variety of things that can go wrong with your experiments.

A typical situation is that you've implemented some NLP system, based on neural networks of course, because that's what we use nowadays. You've looked at the code and it basically looks okay, but it has low accuracy or it makes incomprehensible errors, and you would like to fix these or improve the accuracy. So what do you do?

I think there are three dimensions along which you can understand your model and its behavior. The first is debugging the implementation: identifying problems introduced when you implemented something. The second is actionable evaluation: identifying typical error cases and what you can do to fix them. And finally, interpreting predictions, or interpreting what's happening inside the model, which can give you a deeper idea about what's happening in individual cases. There are a lot of reasons why you might want to do that, both to make your models better and, for example, to be sure that your system isn't doing something illegal like discriminating against people due to protected attributes. I'm going to talk about the first two, and Nishant is mainly going to talk about the last one.

Going right into it: in neural network models, debugging is really important because they're opaque and unpredictable, and if you make little mistakes they can cause big problems with your output.
Another thing is that everything is a hyperparameter, including your network size, your model variations, your batch size and batching strategy, your optimizer, and your learning rate. And finally, unlike more traditional machine learning methods like logistic regression or support vector machines that you might have studied in your machine learning class, stochastic optimization has no guarantee of convergence: your loss might go down, then it might go up, and there might be absolutely nothing wrong with your training, or it might be a serious problem. So that's another issue you need to deal with.

First I'd like to go into possible causes of problems with your implementation, and I'm going to break them down into a typology; based on what part of the typology you're running into problems with, you'll need to fix them in different ways. So your first goal when you're experiencing a problem is identifying why you're experiencing it, because that will lead you to a solution.

For training-time problems, there are a bunch of things that could be wrong. The first is a lack of model capacity: your model is not able to model the phenomena you want to model in the first place. You could have a poor training algorithm, or you could just have a bug in your code at training time. Then there are test-time problems, and these can include a disconnect between what you're doing at training time and what you're doing at test time, or a failure of search algorithms. Another thing you want to deal with is overfitting, where you're actually doing well on the training set but doing poorly on the test set. And finally, you could have a mismatch between the function you're optimizing at training time and what you're actually evaluating at test time.
My best piece of advice for figuring out why things are going wrong is: don't try to check all of these at once. Rather, start from the top and work your way down, because the ones at the top are often easier to diagnose than the ones at the bottom.

So, looking at how you can debug systems at training time: there are a number of ways to do this, but the most important thing for training-time issues is looking at the loss function calculated on the training set. What I mean by this is: we talked about how we can't optimize error or accuracy easily, so instead we optimize likelihood. You might want to look at accuracy to see whether your model is working well, but I would urge you first to look at your likelihood, or your loss function, on the training set, instead of your accuracy on the test set, to diagnose this variety of problems.

The sort of thing you want to look at is: is the loss function going down? Is it converging to a good place? In general, when you look at your loss curve, the first thing you should know is what a good loss is. In most cases a good loss is zero; with log likelihood, the best loss you can achieve is zero. So a curve that keeps heading down toward zero is essentially a good loss curve. A curve that flattens out, especially at a relatively high value, is usually a bad loss curve. And a curve that goes back up on your training set is a very bad loss curve; something is going seriously wrong. If you see that on your dev set or your test set, it could be overfitting, but if you're seeing it on your training set, that's usually symptomatic of a problem.
So these are things you should check: is the loss going down, basically to zero, if you run training long enough, for many epochs over your training data? If it's not going down to zero and it's getting stuck up high, that's an issue. And if it's not going down close to zero on whatever training set you're training on, try making your training set extremely small; at least in that case it should go down to zero, and otherwise you might have a serious problem in your implementation. These are good things to check first when you're training a model.
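As a concrete version of the "shrink the training set" check, here is a minimal PyTorch sketch; `train_dataset`, `model`, `optimizer`, and `loss_fn` are hypothetical stand-ins for your own objects, and the step count is arbitrary:

```python
from torch.utils.data import Subset, DataLoader

# Sanity check: a model with enough capacity, and a correct training loop,
# should be able to drive the loss to (nearly) zero on a tiny data slice.
tiny = Subset(train_dataset, range(8))      # just 8 examples (hypothetical dataset)
loader = DataLoader(tiny, batch_size=8)

for step in range(500):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)         # hypothetical model / loss function
        loss.backward()
        optimizer.step()
    if step % 100 == 0:
        print(f"step {step}: tiny-set loss {loss.item():.6f}")  # should approach 0
```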
There are a number of reasons why the loss might not go down. Your model might be too weak, and in general larger models tend to perform better, especially if you're using a pre-trained model. This is just an example from the T5 paper, where they scale up the T5 model from a relatively small model to what at the time was a very large model of 11 billion parameters; now that's a moderately sized model, or maybe even a small model by some standards, but you can see that performance continues to increase.

One really interesting phenomenon is that larger models can actually learn faster, or at least with fewer steps, than smaller models. There's an interesting example of this in a very influential paper on neural scaling laws. The darker purple curves are smaller models and the yellow ones are bigger models; on the left they plot loss against the number of tokens processed, and on the right against the amount of compute. What you can see is that, if you just look at the number of tokens processed, the larger the model, the faster it converges, which is maybe a little bit surprising. Some people have the intuition that this should be the case, but when I first saw it I found it a little surprising, because I thought a large model would be so large and noisy that it would have trouble fitting the data as quickly. But there's actually a good reason for this; does anyone have a guess about why? We've talked a little bit about the underlying phenomena in previous classes, so you might be able to think back to some of those.

[Student answer.] Yeah, so just to repeat: there are a lot of different parameters, so the model can try to converge along a lot of different dimensions. If we think back to the model pruning class, part of the reason we can prune large models so efficiently is that only a small number of the parameters are actually useful, and so if you start out with a much larger model, it's more likely to contain useful subsets of parameters. This is called the lottery ticket hypothesis; there's a famous paper called "The Lottery Ticket Hypothesis" that examines this phenomenon.

One other interesting thing is that even if you scale up the compute, even if you measure based on compute, the larger models eventually surpass the smaller models in terms of how efficiently they model the data. That's just because models tend to learn well for a while and then basically reach their capacity and stop learning well, or start learning very slowly, and once you get to that point the larger models work better. So there's a kind of counterintuitive thing: if you want to train faster, you can actually train a larger model, and that will get you to a good solution faster than a smaller model would; of course you need the memory and so on.
[Student question: why is the plot showing test loss?] So, this is test loss, but training loss also looks like this. I think in this particular paper they never repeated data, and if you never repeat data your training loss looks very similar to your test loss, because if you can assume your training data and your test data are identically distributed, your training loss on new training data should be essentially the same as your test loss. So I think that's basically why they were justified in doing that, but they probably reported test loss to quash the concern that this was overfitting. Good question.

Cool, so these are good things to know. Basically, if you see your model plateauing out like this, maybe your model is too small and you need to train a bigger one.

Another piece of trouble you can have is trouble with optimization. Basically, you should check your optimizer; usually people are using Adam variants nowadays, like Adam or AdamW, so just use that. For the learning rate, make sure the rate you're using is standard for the model size you're working with, and the best way to do this is to look at previous papers and see what they used. For initialization, most people nowadays will not be training from scratch, but if you are, how you initialize your model is really important; normally you do this with some sort of scaled uniform random noise, and you can pick the scale in intelligent ways based on the layer sizes, which I'll talk about in a second.
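As a minimal sketch of those defaults in plain PyTorch (the specific learning rate here is only a placeholder; the real value should come from previous work on a comparable model and size):

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 512)   # stand-in for your actual model

# A standard Adam-variant optimizer; lr=1e-4 is just a placeholder value,
# so check what previous papers used for your model size.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

# Only relevant if you train from scratch: initialize weights with scaled
# uniform noise, e.g. Xavier/Glorot initialization, which picks the scale
# from the layer's input/output sizes.
def init_weights(module):
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

model.apply(init_weights)
```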
Also, mini-batching: are you using sufficiently large batches of data? If you're using small batches, you might have too much noise in your training and it might diverge. So these are things you need to think about as well.

Cool, so those are training-time issues. The next thing is debugging at test time. A lot of this has been commoditized and is implemented in Hugging Face and similar libraries, and as long as you're using the standard implementations you're less likely to run into these bugs; but if you are implementing anything on your own, this is actually really tricky and you can easily make mistakes, so it's important to know about.

One of the reasons you can have training/test disconnects, especially if you're doing something like text generation, is that usually your loss calculation and your prediction functions are implemented in different functions, and like anything in software engineering, duplicated code can be a source of bugs: you might implement one thing one way in one place and another way in another place, and this is no exception. It's especially true for structured prediction models, anything where you're not just making a single prediction but making multiple predictions in a row, so you need to be a little bit careful about that.

Another thing you need to pay attention to, especially if you're doing your own implementation, is that loss calculation is usually mini-batched and generation is not; or, in highly optimized inference code, you might be doing inference with dynamic batching and so on, and it can get complicated enough that you make mistakes.
So how do we make sure we're not making any mistakes here? There's a really simple way to debug any sort of mini-batched loss calculation. Normally, when we mini-batch loss calculations, we're simultaneously calculating the loss for four or eight or however many sequences at a time. So you can calculate the loss with a large batch size, like 32, and then calculate the loss for each sentence individually and sum them together, and these values should be the same. This can help make sure you don't have any issues with your padding or your masking or other things like that. It's particularly important if you're not just using out-of-the-box components: if you have a slightly unusually structured model, with hierarchical encoding or anything like that, you need to be really careful about this. You can even create unit tests for it. In machine learning code, especially neural-network-based machine learning code, we don't write unit tests that often, because it's kind of hard to do; there's lots of randomness and other things like that. But this is one thing you can easily test to make sure you don't make these mistakes.
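Here is a minimal sketch of that check; `loss_fn` is a hypothetical stand-in for your own loss code, and the only requirement is that it returns the loss summed (not averaged) over the examples it is given:

```python
import torch

def check_batching_consistency(loss_fn, examples, tol=1e-4):
    """Verify that a mini-batched loss equals the sum of per-example losses.

    loss_fn(list_of_examples) -> scalar tensor with the *summed* loss.
    If padding or masking is wrong, the two numbers will disagree.
    """
    batched = loss_fn(examples)
    individual = sum(loss_fn([ex]) for ex in examples)
    assert torch.allclose(batched, individual, atol=tol), (
        f"batched loss {batched.item():.6f} != summed per-example loss "
        f"{individual.item():.6f}: check padding/masking"
    )
```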
The same goes for any sort of generation algorithm. When you're generating or decoding, you can make sure your decoding code is getting the same score as when you calculate the loss. An easy way to do this: you call the decoding function to generate an output, and normally, when you're doing any sort of search or sampling, you're calculating the logits or the log probabilities of each step that you sample, so you keep track of those during your sampling algorithm. Then you call the loss function on the generated output and calculate the loss according to it, and the scores of these two things should be the same. In other words, you run generate, and that gives you an output and a score; then you run the loss function on that output, and that gives you a second score; and you just compare the two. In my experience, these two checks have let me find the majority of the bugs whenever I was doing anything complex with respect to generation or models. It's a very common place for bugs even if you're pretty familiar with models, so I would highly recommend it.

This is particularly bad when you're doing something like a search algorithm, like beam search. Beam search, as you know from the generation class, instead of picking one high-probability word at the next step, maintains several paths. One check you can do is: as you make search better, the model score should get better, so the log likelihood of the output should improve almost all of the time. You can search with varying beam sizes and make sure you get a better overall model score at the end, and you can even create a unit test for this as well. I don't think that many people will be reimplementing beam search, so you might not need to worry about it too much, but in case you are doing anything with search algorithms, it's a good thing to know.
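Below is a sketch of both checks with hypothetical stand-in functions: `generate_fn(src)` is assumed to return the generated tokens together with the score the search itself tracked, `score_fn(src, output)` is assumed to return the log probability your loss code assigns to that output, and `beam_search_fn(src, beam_size=k)` is your own decoder:

```python
import math

def check_generation_score(generate_fn, score_fn, src, tol=1e-3):
    # The score tracked during decoding should match the score the loss
    # function assigns to the same output; a mismatch usually means a bug.
    output, search_score = generate_fn(src)
    loss_score = score_fn(src, output)
    assert math.isclose(search_score, loss_score, abs_tol=tol), (
        f"search score {search_score:.4f} != loss-function score {loss_score:.4f}"
    )

def check_beam_monotonicity(beam_search_fn, score_fn, src, beam_sizes=(1, 2, 4, 8)):
    # As the beam grows, the best model score found should not get worse.
    best_so_far = -float("inf")
    for k in beam_sizes:
        output = beam_search_fn(src, beam_size=k)
        score = score_fn(src, output)
        assert score >= best_so_far - 1e-6, f"beam size {k} found a worse-scoring output"
        best_so_far = max(best_so_far, score)
```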
Cool. Any questions about these two so far? No? Okay.

The next thing I want to talk about is something that people think about a little bit less, but it's actually really important to know, because it will affect everybody; it will affect you to a greater or lesser extent depending on what type of system you're building, but it will definitely affect everybody. That's the mismatch between the function you're optimizing at training time and the evaluation metric you're evaluating with.

As I said in the reinforcement learning class, it's very common to optimize maximum likelihood for training, but there are all kinds of problems with this: it's not sensitive to the severity of mistakes, and it's not sensitive to your generation algorithm. Even as your likelihood is getting better, accuracy can get worse. Here's a super simple example with image classification on MNIST; I ran this experiment with maybe 40 lines of PyTorch code. On the left we have the loss on the training set and the dev set, and on the right we have accuracy on the training set and the dev set.

Oops, I showed you the answer; I was going to do a quiz, but I accidentally showed you the answer. The problem here is basically that for the loss you're calculating the likelihood of the correct answer, which is the probability of getting the correct answer, while accuracy is the number of times you actually get the correct answer. So as you train a model, it gets better and better at getting more answers correct, but it also gets more and more confident in its answers. If there's any example it's really bad at, it might get very confident in that bad answer, and the log likelihood of that answer will go down, so the negative log likelihood, which is the loss, will go up. So basically the loss you're calculating and the thing you care about in the end, accuracy, can become decorrelated.
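Here is a minimal sketch of that kind of experiment (not the exact script from the slides), using torchvision's MNIST and a small classifier, and logging both dev loss and dev accuracy so you can watch them come apart:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
train = datasets.MNIST("data", train=True, download=True, transform=tfm)
dev = datasets.MNIST("data", train=False, download=True, transform=tfm)  # used as a dev set here
train_dl = DataLoader(train, batch_size=128, shuffle=True)
dev_dl = DataLoader(dev, batch_size=512)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    model.train()
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Track both numbers: dev accuracy can keep improving or plateau while
    # dev loss rises, because the model grows overconfident on the examples
    # it still gets wrong.
    model.eval()
    total_loss, correct, n = 0.0, 0, 0
    with torch.no_grad():
        for x, y in dev_dl:
            logits = model(x)
            total_loss += loss_fn(logits, y).item() * len(y)
            correct += (logits.argmax(dim=-1) == y).sum().item()
            n += len(y)
    print(f"epoch {epoch}: dev loss {total_loss / n:.4f}, dev acc {correct / n:.4f}")
```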
There's also an interesting example in text generation, and this is part of the reason why we have all these other decoding algorithms like nucleus sampling or top-k sampling: in a maximum-likelihood-trained model, better search, in other words finding an output with a better model score, doesn't necessarily give you a better generation result. Here's an example from machine translation from a really long time ago, but it still persists today. They ran beam search with larger and larger beams, and the beam size that found the best outputs was basically four; after that, the accuracy goes down and down as the search finds better-scoring outputs. Does anyone remember from the generation class where this comes from?

I don't know how explicitly we mentioned it in the generation class, but basically the problem is that maximum-likelihood-trained models like shorter outputs, because as we make the output longer, the probability of the output goes down. So as you improve the beam, it starts generating shorter and shorter outputs, and because of that the score goes down, because BLEU doesn't like outputs that are too short. There are hacks around this for beam search, where essentially you take the average log likelihood per token instead of the total log likelihood of the output, and that improves things a little bit, but you can still see that as you search more, the accuracy goes down. So that's the general idea here.
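For reference, the length-normalization hack is just scoring each candidate by its per-token average; a tiny sketch:

```python
def length_normalized_score(token_logprobs):
    # Average (rather than sum) the per-token log probabilities so that beam
    # search does not systematically prefer shorter outputs.
    return sum(token_logprobs) / len(token_logprobs)
```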
There are a bunch of ways you can fix this. The most principled way is to use a method like reinforcement learning, some sort of structured training algorithm that allows you to train your model so that you don't get these bad outputs. Another way that's much easier is to do early stopping based on the evaluation metric, as opposed to early stopping based on the loss: you would stop at the point where you get the highest value of the evaluation metric you care about, instead of stopping at the lowest dev loss. So that's one way you can fix this problem.

Does anyone have an idea about why this might be a bad idea, stopping at the best metric value instead of at the best loss? [Student: it's kind of overfitting?] It's overfitting in a particular way, but remember, this is still the accuracy on the dev set, so we're not overfitting so much that the dev accuracy is going down; that would be a different variety of overfitting. Any other ideas? [Student: we don't want it to be too confident.] Yeah, exactly: we don't want it to be too confident in its wrong answers. We talked about calibration, where calibration is basically how accurate the probability estimates are. The model at that later stopping point is going to be really poorly calibrated: it will be very confident regardless of whether it's correct or not, and that could be a problem in downstream tasks.
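If you do want the metric-based variant, here is a minimal sketch, assuming hypothetical `train_one_epoch(model)` and `evaluate_metric(model)` helpers, where the metric is the dev-set quantity you actually care about (higher is better):

```python
import copy

def train_with_metric_early_stopping(model, train_one_epoch, evaluate_metric,
                                     max_epochs=50, patience=5):
    best_metric = float("-inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_since_best = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)                    # hypothetical training helper
        metric = evaluate_metric(model)           # e.g. dev accuracy or BLEU
        if metric > best_metric:
            best_metric, epochs_since_best = metric, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:     # stop once the dev metric stalls
                break
    model.load_state_dict(best_state)             # roll back to the best checkpoint
    return model, best_metric
```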
There's also another thing that I forgot to put on the slides, but it's an interesting phenomenon that a lot of people in interpretability are actually interested in: grokking, or generalization beyond overfitting on small algorithmic datasets. Basically, what they show is that you can be training for a very, very long time, reducing the loss and reducing the loss, and it's only after a very long time that your model starts generalizing well and getting good accuracy. The types of datasets that paper talks about are ones where you need to get many things in a row correct before the final answer is correct; basically, you need to get 20 or 50 steps in a row right. The reason this happens is that the accuracy of each individual decision keeps going up, but you only get marked correct after you get all 50 in a row correct. So this difference can be even more stark when you're talking about things that require many steps of reasoning, or many token generations that all have to be right before the output counts as correct. That's another thing to be aware of.

Cool. Now I want to switch gears a little bit to actionable evaluation, and how you can evaluate your models in a way that makes it easy to find the next steps for improvement. Are there any questions about the debugging part before we get into this part? Okay, I'll go.

My first suggestion with respect to how you can actually improve systems is: make sure you're looking at the data you're using. Both bugs and new research directions can be found by looking at your model outputs. To give one example of a very common mistake you can make when you're creating a generation system, there are these sorts of off-by-one errors. Let's say you implemented a translation system and it's generating outputs like "went to the store yesterday bought a dog". You can immediately look at this and say, hey, this doesn't look like natural English, what's going on? And the problem is that somewhere you're slicing the output starting from index one instead of index zero, or something like that.
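For instance, a bug of roughly this shape (a hypothetical postprocessing snippet) silently drops the first word of every output:

```python
tokens = ["I", "went", "to", "the", "store", "yesterday", "and", "bought", "a", "dog"]

buggy = " ".join(tokens[1:])   # off by one: starts at index 1 and drops "I"
fixed = " ".join(tokens[0:])   # what was intended

print(buggy)  # "went to the store yesterday and bought a dog"
print(fixed)  # "I went to the store yesterday and bought a dog"
```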
This is a really silly error that you might just make in your Python preprocessing or postprocessing code. But the problem is, if you only look at your BLEU-score-based evaluation or something like that, you'll just be one or two points worse, and you'll be wondering, why am I two points worse than the state of the art? And it turns out it was a really silly thing like this. You'll see it immediately if you look at your data, but if you're doing all your experiments by just looking at the numbers, it's really hard to tell why it's happening.

Another thing is, if you have a good eye and can just look through the data points: we as humans are pretty good pattern recognizers, and especially, you know, CMU students, you're very good and quick at picking up on things. So if you look at the data and pore through it, you can probably pick up patterns about why things are failing. You might look and see that, compared to some other model, your model is really bad at answering questions about people, and then you figure out you'll need a better model of people; or the RAG system you're building for assignment two is maybe failing on all the research-related questions, so you need to scrape more research data, or something like that.
There are methods to do this more systematically, and this is something I picked up when I was doing an internship at Google; it has really stuck with me for, I guess, 13 years now. A very simple way to do this more systematically than just browsing through things is to randomly sample about 100 outputs, look at 100 errors, and try to group them into some sort of typology, so you can say, okay, this kind of error is particularly frequent. This is just one example of a typology, defined by Vilar et al., where they tried to take machine translation errors and group them into various varieties: missing content words and filler words, word-level word-ordering errors at local and long range, phrase-level ordering errors at local and long range, and so on. You can definitely look at previous work and see the typologies of errors they used, but the problem is that systems get better, and actually I don't think this is a super relevant typology for machine translation anymore, because machine translation systems don't make a whole lot of local-range word-level errors these days; rather, we might want to know something more fine-grained, like whether they're making mistakes on named entities or other things like that.
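Coming back to the sample-100-and-group workflow, here is a minimal sketch of it, assuming a hypothetical `errors` list of (source, reference, output) tuples; the annotation itself is still done by hand, and the labels are just placeholders for whatever typology you settle on:

```python
import random
from collections import Counter

def sample_and_tally(errors, n=100, seed=0):
    random.seed(seed)
    sample = random.sample(errors, k=min(n, len(errors)))
    labels = []
    for src, ref, hyp in sample:
        print(f"\nSRC: {src}\nREF: {ref}\nHYP: {hyp}")
        # Type an error label by hand, e.g. "named-entity", "word-order", "omission".
        labels.append(input("error type: ").strip())
    # The most common categories tell you where to spend your effort.
    return Counter(labels).most_common()
```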
instead of 767 00:32:10,279 --> 00:32:16,799 2006 so this is really helpful the 768 00:32:13,399 --> 00:32:19,039 reason why it's really helpful is if you 769 00:32:16,799 --> 00:32:20,440 can do this even for a small sample of 770 00:32:19,039 --> 00:32:23,440 the outputs that you're looking at and 771 00:32:20,440 --> 00:32:25,279 identify the most like prominent types 772 00:32:23,440 --> 00:32:27,440 of eras that you're facing it often 773 00:32:25,279 --> 00:32:29,360 leads you to the most successful ways of 774 00:32:27,440 --> 00:32:31,519 improving the accuracy of your systems 775 00:32:29,360 --> 00:32:33,120 because you might if you don't do this 776 00:32:31,519 --> 00:32:35,000 you might be focusing on an air type 777 00:32:33,120 --> 00:32:38,000 that's not actually an error it's kind 778 00:32:35,000 --> 00:32:39,200 of like if you learned in uh programming 779 00:32:38,000 --> 00:32:40,799 you know software engineering or 780 00:32:39,200 --> 00:32:42,639 something like that you should never 781 00:32:40,799 --> 00:32:46,360 optimize your code until you run a 782 00:32:42,639 --> 00:32:47,799 profiler um because actually your code 783 00:32:46,360 --> 00:32:50,320 might be slow in a place that you never 784 00:32:47,799 --> 00:32:52,720 expected and so it's kind of the same 785 00:32:50,320 --> 00:32:56,600 principle here right so don't optimize 786 00:32:52,720 --> 00:32:58,720 your systems errors in a place uh where 787 00:32:56,600 --> 00:33:03,240 like actually it's not having in years 788 00:32:58,720 --> 00:33:06,440 so um that's a general principle 789 00:33:03,240 --> 00:33:09,440 here uh cool another thing you can do is 790 00:33:06,440 --> 00:33:11,760 quantitative analysis so um if you can 791 00:33:09,440 --> 00:33:13,880 think of the phenomenon that you choose 792 00:33:11,760 --> 00:33:17,480 to focus on um is that phenomenon 793 00:33:13,880 --> 00:33:19,159 getting better so if you focused on uh 794 00:33:17,480 --> 00:33:22,240 something that should improve the 795 00:33:19,159 --> 00:33:23,760 quality of low frequency words uh you 796 00:33:22,240 --> 00:33:26,200 can check if the accuracy on low 797 00:33:23,760 --> 00:33:27,399 frequency words is increasing if you 798 00:33:26,200 --> 00:33:29,600 focused on something that should be 799 00:33:27,399 --> 00:33:32,120 improving the syntax in a low resource 800 00:33:29,600 --> 00:33:36,080 language you can measure um whether it's 801 00:33:32,120 --> 00:33:37,360 doing better on word ordering or uh long 802 00:33:36,080 --> 00:33:41,840 distance 803 00:33:37,360 --> 00:33:44,360 dependencies um if you focused on 804 00:33:41,840 --> 00:33:46,039 improving a search algorithm for you 805 00:33:44,360 --> 00:33:47,519 know generation or something like that 806 00:33:46,039 --> 00:33:49,880 are the number of search errors that 807 00:33:47,519 --> 00:33:53,120 you're encountering being reduced so 808 00:33:49,880 --> 00:33:56,320 depending on what you planned on uh you 809 00:33:53,120 --> 00:33:57,919 know improving it's often a good idea to 810 00:33:56,320 --> 00:33:59,480 measure more directly whether it's 811 00:33:57,919 --> 00:34:00,559 improving the the thing that you think 812 00:33:59,480 --> 00:34:04,880 it should 813 00:34:00,559 --> 00:34:06,000 improve um one example of um so I I 814 00:34:04,880 --> 00:34:09,240 basically 815 00:34:06,000 --> 00:34:11,240 created since my experience doing this 816 00:34:09,240 --> 00:34:15,159 manually uh when I I was on an 817 00:34:11,240 --> 00:34:18,280 
As one example: ever since my experience doing this manually on that internship at Google, I've gradually improved my methodology and worked on automating things. The first thing I had was a super hacky script that basically writes out HTML files. Then I had something called ExplainaBoard, where we had a leaderboard. And recently, one of the things I've worked on, together with Alex Cabrera, who's a student here, is a toolkit called Zeno. This is just an example from machine translation; it's being a little bit slow, but basically what it does is let you look at the data on the right side. These are just examples, but you can go in and do things like say, okay, I want to look at all machine translation examples from Hausa, and it shows you the ones from Hausa; or, let me clear that off, I want to look at all examples where the accuracy is low, and now I can go in and examine all of those. You can also build charts, like: what is the overall performance, what is the performance on different scripts, so you can see which model is doing better on which scripts and so on. Or you can put things side by side and say, okay, I want to find all the examples where GPT-3.5 is doing much worse than GPT-4, and here we can see that in this case it's generating something in the wrong script, or something like that. So there's also tooling you can use to make this easier. The way you use it is you basically create a pandas DataFrame with all of your data in it, and you upload that DataFrame along with any metadata you want to use.
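For example, the kind of table you might assemble looks something like the following (toy, hypothetical rows purely to show the shape; the actual upload call is covered in the Zeno documentation and the recitation):

```python
import pandas as pd

# One row per test example: inputs, reference, each system's output, plus any
# metadata columns you might want to slice, filter, or chart by.
df = pd.DataFrame([
    {"source": "example source sentence 1", "reference": "reference 1",
     "gpt35_output": "hypothesis A", "gpt4_output": "hypothesis B",
     "language": "hau", "output_length": 5},
    {"source": "example source sentence 2", "reference": "reference 2",
     "gpt35_output": "hypothesis C", "gpt4_output": "hypothesis D",
     "language": "swa", "output_length": 9},
])
print(df.head())
```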
I think VJ will be having a recitation on this if you're interested in taking a look. Cool, so that's my part, and Nishant will go next. While Nishant comes up to set up, are there any questions about what I talked about here? Yeah?

[Student, partly inaudible] ...so when we apply regularization, does that make a difference in terms of what we're expecting when we're evaluating the model?

Yeah, so just to repeat the question, and it's a great question: if you apply regularization, will that change the overall expectation for the model loss? I was saying the loss should converge to zero; once you start applying regularization or weight decay or something like that, it definitely might not converge to zero. The reason is that once you start applying regularization there is no zero-loss solution: in order to reduce the data loss you need to move weights away from zero, but when you move weights away from zero the regularization term becomes non-zero. One thing you can do, however, is measure the losses separately: measure the regularization component of the loss and the log-likelihood component of the loss, as in the sketch below. With any reasonable regularization and a reasonably parameterized model, I do think the loss should still be getting closer to zero, in the sense that the actual likelihood component should be getting closer to zero. You were using an extremely small model in the assignment, though, so that might make it more difficult. Any other questions? OK, if not, I'll hand it over to Nishant.
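Here is a minimal sketch of what measuring the two components separately could look like; it assumes an explicit L2 penalty for illustration (with optimizer-level weight decay, for example AdamW, you would log the data loss and compute the penalty term yourself just for monitoring), and the model is assumed to return logits:

import torch
import torch.nn.functional as F

def loss_components(model, inputs, labels, weight_decay=0.01):
    # the log-likelihood ("data") part of the loss
    data_loss = F.cross_entropy(model(inputs), labels)
    # the regularization part: an explicit L2 penalty over all parameters
    reg_loss = weight_decay * sum((p ** 2).sum() for p in model.parameters())
    return data_loss, reg_loss

# Log both numbers separately during training: on a tiny training set the data
# loss should still head toward zero, even though data_loss + reg_loss will not.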
All right, can everyone hear me? Sweet. OK, let me move this; it looks like I'm talking to someone instead of standing between both of you.

All right, so hi everyone. I'm going to talk about model interpretability. For those who don't know me, I'm one of your TAs; I'm a first-year PhD student working with Mona Diab on model interpretability. Where do I click? Your mouse should be there. Yeah, just... cool, OK.

So what I want you to take away, if you fall asleep because this is too boring, here are the two main takeaways: one, I want to convince you that model interpretability is important to study, and two, I want you to find this interesting and something you want to explore more. There are a bunch of details here, and this is going to be kind of a whirlwind tour; you're not going to get super deep into anything, so hopefully this acts as a starting point more than anything else.

So, interpretability in AI. The definition is that it's the study of understanding the decisions that AI systems make and putting them into easily human-understandable terms. This can mean a lot of different things, and it's often really hard. And the "why" is to use that understanding to iteratively design systems that are better: more performant, but also more human-understandable.

So interpretability is this big blob, but there are a bunch of other spheres that intersect with it; this is a super incomplete list, so bear with me. Causality and data intersect with it: there are aspects there that are interpretable, aspects that matter here. Explainable AI is another term you've probably heard; it sits firmly in the interpretability blob and connects with ideas in causality and in data too. Model interpretability sits on the other side of things: it intersects a little bit with causality and explainable AI, but is a little bit separate
from it. And mechanistic interpretability, which you've probably heard of and which has gotten a lot of buzz recently, kind of sits inside of model interpretability; it's a special case of model interpretability. I hope the mechanistic interpretability people agree with me.

So yeah, historically we've been dealing with really, really small models. You had Bayes nets; this is a very small model, and if all of these are binary variables, this is eight total parameters, only four of which are independent. We also used to work with linear regression a lot, and in the single-variable case that's a nice line, which can be two parameters; in the multivariate case it's again a small number of parameters. We've moved on to more things; we've moved to MLPs that have larger weight matrices, but all of these are kind of digestible and interpretable. So the interpretability world was not super concerned with large, ginormous things. But we're not there anymore: this is a language model, and this is still just part of a language model; it's getting more and more hairy, and this is just not interpretable. I mentioned on the first day of class that I hate when we update parameters of models; I also hate when models are this big. And this is a six-layer Transformer, which is way smaller than basically anything we have now, and it makes things very, very uninterpretable.

So we'll talk about one way that people started addressing this problem about five years ago, and this is the idea of probing. How do we make sense of a giant model? This is one way. We take our giant model, we cut the top off, basically, and now we have this thing; we stick a probe on it, which in a lot of cases looks very similar to a language modeling head: usually it's a small two-layer or one-layer
MLP. And we basically treat the model as something that just exists, and we only really care about the outputs of the model. So more specifically, what is a probe? It's a classifier, this green thing here, that is specifically trained to predict some specific property from the pre-trained model's representations alone.

So in 2019, Ian Tenney and folks introduced edge probing. This is a general method; it works to probe different types of information out of a model. This bottom part here: you pass in a sequence, you pass it into a model, which is BERT in their experiments, often, and that outputs a set of contextual vectors. These contextual vectors can be at any layer; often it's near the top, but we'll talk about the fact that this can work across layers, and different layers encode different information. On top of this you have this MLP that you train to output a prediction; your model is always fixed in these cases. So you can do things like part-of-speech tagging, where for each specific word you try to determine its part of speech, and in that case, of these S1 and S2 spans here, only one of them is active, because for every single contextualized vector you're predicting whether that thing is a noun or a verb or something like this. You can have other sorts of tasks too, like entailment, where you have two sequences and two spans; you use the embeddings for those spans, for sentence one and sentence two, pool them together in some way, pass them to this MLP, and see whether the MLP can solve that task. A minimal sketch of a simple probe like this is below.
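This is only a sketch under some simplifying assumptions: it uses a frozen Hugging Face BERT, a one-layer probe for part-of-speech tagging, and it assumes the token-level labels are already aligned to the tokenizer's subword tokens (which in practice takes a bit of extra bookkeeping):

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False        # the pre-trained model stays fixed

NUM_TAGS = 17                      # e.g. Universal POS tags (assumption)
probe = nn.Linear(encoder.config.hidden_size, NUM_TAGS)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

def probe_step(sentence, tag_ids, layer=-1):
    # one training step: predict each token's tag from the frozen representations
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc, output_hidden_states=True).hidden_states[layer]
    logits = probe(hidden.squeeze(0))              # only the probe gets gradients
    loss = nn.functional.cross_entropy(logits, torch.tensor(tag_ids))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()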
So they did this in another paper, "BERT Rediscovers the Classical NLP Pipeline", and there's a lot going on in this figure, but the only major thing to take away is these numbers in this pink-purple color. These are a bunch of different properties: part of speech and a bunch of other things. What they basically find is that at earlier layers in the model, the things that are closer to the token-level representation are more extractable using a probe, and the things that require more contextualized information are extractable from later layers in the model. Here's a brief description of what these tasks are: the ones on the bottom are more semantic, more contextualized, like semantic proto-roles and relation classification, and the first few are more, you know, chunking, part-of-speech tagging, and dependency labeling, those sorts of tasks.

So there are a bunch of issues with probing, and there aren't as many probing papers now as there were a few years ago. Let's say your probe works: it's possible that the representation actually encodes that information, but it's also possible that it doesn't and the probe solved the task by itself; keep in mind that you're learning this probe, training it on labeled data. Let's say your probe doesn't work: does that tell you anything? Maybe not. Maybe the representation lacks the information, or maybe your probe isn't actually able to disentangle that information from the representation; maybe the probe is not the right function class; maybe you trained your probe poorly; there are hyperparameters for your probe. So oftentimes your probe doesn't give you that much information.

There are more problems too. Often we want to probe tasks themselves, and that requires a lot of supervised data, but we can't collect a lot of supervised data, so we
collect some of it, and that instead produces a convenience sample: a dataset that is a convenience sample of your task. So really what you're probing is the dataset. And so with all these limitations it's fallen out of favor a little bit; it's still very useful, but it's fallen out of favor as a core model interpretability idea. Also, probes designed in this way are correlative, not really causative: your underlying model was trained in a specific way, all of that is kind of thrown away, and you're only looking at the output representation and asking, is my output representation correlated with the thing I'm training this probe for? There's no notion of intervening on the latent space, no real notion of causation; you're just seeing whether your representation is correlated with the property you're probing for. And with these limitations the community has moved a little bit away from this area.

There are a bunch of other probing works; a bunch of people aim to solve these problems, and for the sake of time I'm not going to go into all of them, but I'd encourage you to look into them. For some of these problems they're able to control for things like the complexity of the probe, but even despite that, probing is slowly falling out of favor. So before I move into model interpretability, are there any questions on probing?

All right, so what is model interpretability? This is my definition here: it's the study of understanding the internals of models, for example their weights and activations, putting those insights in human-intelligible terms, and using that insight to both
patch current models and develop better ones. If we're not able to do both of these things, patching current models and developing better ones, we're kind of doing interpretability for interpretability's sake; that's nice and fun, but it's not as applicable for the community.

So you've probably heard of the term mechanistic interpretability. It's, in my opinion, a subfield of model interpretability, and this is sort of my definition, which I think aligns reasonably well with the core mechanistic interpretability people: it's the study of reverse-engineering parametric models, often neural networks because that's what we use, from their learned weights into more human-interpretable algorithmic units, and often they call these things circuits. These are basically functions, sitting inside models, that you can describe in a human-interpretable way. There's a bunch of notable work; again, for the sake of time I'm just going to briefly talk about it.

So the first one: they look into analyzing small MLPs and Transformers to build out the intuition of what circuits exist, and a lot of this work came out of earlier work on LSTMs, doing similar sorts of things with LSTMs. And they find a bunch of things. One thing they find is this idea of induction heads, and these induction heads, they say, sort of help explain why models can do in-context learning. An induction head is a specific attention head that, when given a prefix, kind of allows the model to copy the token that followed that prefix in what it has seen before. In in-context learning, what you generally provide is some sort of prefix, and then you provide some example, and hopefully, you know, the model can classify the thing or something like this.
It's saying, loosely, that there are these attention heads that exist and that are able to copy, or unearth, that information for a specific context.

Other things they've done are on neurons, so this polysemanticity. What this kind of means is that you have a set of neurons in your activation space: let's say at layer 10 in your model you have an output, and your activations are, say, a thousand-dimensional; each of those thousand individual neurons may represent more than one specific feature. They talk about this in that context, and this is kind of a theory, but you can think about trying to process input: when you're processing a vocabulary of size 50,000 or 250,000, at some point in the model we're actually compressing it down to the hidden dimension, and in some cases that means you're going to compress a much richer feature representation down into a smaller set of neurons. So it is reasonable to believe that a specific neuron will represent multiple of those features, and given the structure of our weight matrices, if they are representing more features than the number of neurons in the activation space, then many of these features are linearly dependent, and we're not really able to utilize them that well. They don't talk about this in the best way, but it seems kind of clear to me that since you have embedding matrices that are not square, these neurons have to exist, and they have to incorporate multiple features at once, multiple redundant features at once.

So before I move on to the rest of model interpretability, any questions about mechanistic
interpretability?

[Student question, inaudible]

Yeah, so most of their studies are for a very small set of models, and most of these are older GPT models. There have been a few works, like in the last couple of months, on doing this for the Llama-based models, and it seems like this is a more general phenomenon for language models. It is also the case that certain attention heads specialize, and I'll talk about that a little bit in the activations part; but yeah, not all attention heads are created equal, they just start out that way, and it seems to be a general principle.

And one other thing, and you might know about this better than I do, but I think there are some preliminary works saying that Transformers seem to be particularly good at doing things like induction heads compared to recurrent models, and there was a paper really recently comparing Mamba and Transformer-based models, Mamba being closer to a recurrent network, which we're also going to talk about. So I think there's some indication that Transformers actually are at least better at this kind of in-context learning than other architectures are, and there are some interesting implications of that, which is: well, if Transformers are good at naturally learning this sort of thing, what's better than a Transformer?

Yeah, they're really good at copying and maintaining information, more so than other architectures, and I think it'd be cool, and I don't know how to do this, to be able to extract what part of the Transformer is actually helping it do this copying mechanism, or helping it be a better in-context learner; then we could develop a slightly better structure than a Transformer. Hopefully someone comes up with that soon. But cool, any
other questions?

All right, so let's move into model interpretability. So there are weights and there are activations; I mentioned these are the two things that we're going to look at. What can you do with the weights of an already-trained model? Really, you can just edit them and then kind of see what happens. Activations, similarly: you can look at the activations for different inputs, you can poke them with a stick and see what happens. A lot of my research is poking models with a stick and looking at the activations; it's predominantly what I've done, so we'll talk about that. The technical term for this is intervening on them, by adding some vector or doing some other manipulation to the latent space, but really what you're doing is poking.

So when you look at weights, one class of methods, or one area, is model editing. Fine-tuning is like the most extreme version of model editing; usually these things are much more targeted. In the model editing landscape, your goal, or your target, is that you have a concept or a specific fact that needs to be changed in the model, and your approach is to update or edit the weights of the model to change the model's belief about that fact or concept, and ideally you do this without changing any other behavior of the model. So for example, let's say you're trying to say that Graham is no longer a professor at CMU but is a professor at Stanford: you don't want every single person at CMU to now be a professor at, or be affiliated with, Stanford, right? (Graham, please don't go to Stanford.)

So here's one approach, a paper that came out a couple of years ago; there's a lot of work here in the model editing world, and I'll give you a really brief overview of it. Basically they have facts that they want
to manipulate. So for example, the example that they give in the figure is that they want to associate the Space Needle with Paris. The Space Needle is a cool needle in Seattle; it has nothing to do with Paris, but Paris also has a tower, so it's close. They use causal tracing to isolate the causal effect of the individual hidden states for this fact: they basically continuously perturb the input, do a bunch of forward passes, and sequentially find the specific hidden states that are associated with this fact. Then they make an edit, and their edit looks like this thing on the right. They treat this pair, Space Needle and Paris, as a key-value pair, where Space Needle is the key: you pass it into this weight matrix, this original part of the model, and you want it, instead of outputting Seattle, to now output Paris, and they have some nice math and a closed-form solution to identify the update. This is super expensive, because the causal tracing part requires a bunch of forward passes, and they make that a little bit better in later work, where they also do a more comprehensive kind of edit. So these are some of the things you can do; a stripped-down sketch of the key-value editing idea is below.
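This is only a stripped-down sketch of the idea, not the paper's actual estimator (which also uses covariance statistics collected over many keys): make one linear layer map a chosen key vector to a new value vector while changing the weights as little as possible.

import torch

def rank_one_edit(W, k_star, v_star):
    # return an edited weight matrix W' with W' @ k_star == v_star;
    # keys orthogonal to k_star are mapped exactly as before, so the edit is as
    # local as a rank-one update can be
    residual = v_star - W @ k_star                       # what the layer currently gets wrong
    update = torch.outer(residual, k_star) / (k_star @ k_star)
    return W + update

# Usage sketch: k_star would be the MLP input activation associated with
# "The Space Needle", and v_star a value vector chosen so that the model's
# downstream prediction becomes "Paris" instead of "Seattle".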
I'm less excited about model editing. It's hard to control what other things break, and there's some work showing that when you edit a specific fact, things start being weird and biased in other ways.

[Student] Does all factual information of a given kind, like "X is Y", localize to the same layer, or is it specific to each example?

For this specific example it looks at this specific point; for every example they'll probably find different regions and a different degree of manipulation. And yeah, it gets a little unprincipled kind of quickly: it's not like they're able to find, you know, a specific attention head or a specific layer or a specific weight matrix that corresponds to all relations of a specific type. Any other questions?

[Student] This is just a question, if you know: it seems like more frequent facts might appear in multiple places in the model; do you know if that's actually the case?

I have no idea, but I would imagine it probably could occur in more places. But also, a lot of the information is redundant in the model anyway, especially for larger models, so you might have to make targeted interventions in multiple places; then again, it's possible that one intervention in one place sufficiently destroys the contextualized information in other places if it's close. It depends on how big the intervention is, whether it's like hitting it with a hammer rather than some nice fine-grained thing. But that would be a good experiment to run. Any other questions?

All right, so we'll move into the stuff that I'm most familiar with, and some of my own work, looking at activations. This is work I've been doing for a while: this idea of steering vectors. I mentioned that I poke models with a stick; the steering vector is that stick. It's basically a fixed-length vector that steers a language model to generate a specific sequence exactly, when added to the hidden states of the model at a specific location. And I'll read this again, because there's a very specific form that I wrote it in: it's a fixed-length vector that steers a language model to generate a specific sequence exactly when added to the hidden states of a model at a specific point. So this is different from a soft prompt, and different from a model editing sort of approach. In this case there is a
vector that corresponds to a sequence, and that vector doesn't correspond to any other sequence. There could be multiple vectors, and it turns out there are multiple vectors, that correspond to that sequence; it'll be a little bit clearer based on how we extract these things. So this is the stick that we're poking the language model with.

So how do we extract them? This is GPT-2. Basically this z_steer thing on the left, this is the steering vector, gets initialized randomly, in a reasonable way, uniformly and small. And for any specific sequence that we want the model to generate, we optimize this steering vector z_steer to generate that sequence, keeping the rest of the model entirely fixed. So think of it as nudging a frozen model to be able to generate a specific sequence at a specific time. We have a lot of different options on where to inject the steering vector: we can put it basically anywhere in the model, we can put it at any time step, any number of these things. In practice, providing it just at the first time step and somewhere in the middle of the model, basically not the first layer and not the last layer, works pretty well. And so, more formally, forget the notation, but right here we initialize this z_steer, and for a few iterations we do forward passes: first it starts as random, and then it gets closer and closer to being able to generate the sequence, and eventually, and this n is pretty small, eight or ten or something like that for most sequences, we get to a point where we have found this stick that is able to poke the model into generating that sequence exactly. A rough sketch of this optimization loop is below.
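Here is a rough sketch of that optimization, just to make the setup concrete; it assumes Hugging Face's GPT-2, and the layer choice, initialization scale, and step count are illustrative rather than the actual hyperparameters:

import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():
    p.requires_grad = False                       # the language model stays frozen

target_ids = tok("The taste is excellent.", return_tensors="pt").input_ids
bos = torch.tensor([[tok.bos_token_id]])
input_ids = torch.cat([bos, target_ids], dim=1)   # teacher forcing from <bos>

z_steer = torch.empty(model.config.n_embd).uniform_(-0.1, 0.1).requires_grad_()
opt = torch.optim.Adam([z_steer], lr=0.1)
LAYER = 6                                         # somewhere in the middle of GPT-2 small

def add_steering(module, inputs, output):
    # add z_steer to the hidden state at the first time step only
    hidden = output[0].clone()
    hidden[:, 0, :] = hidden[:, 0, :] + z_steer
    return (hidden,) + output[1:]

hook = model.transformer.h[LAYER].register_forward_hook(add_steering)

for step in range(300):                           # stop early once the target decodes exactly
    logits = model(input_ids=input_ids).logits[:, :-1, :]
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           input_ids[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

hook.remove()   # re-register it (with care around the KV cache) when decoding with z_steer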
Now when we greedily decode from the model, we pass in just a beginning-of-sequence token and this z_steer, the steering vector, and it's able to uncover that whole sequence that we had at the beginning, entirely. This is weird and interesting, because in a lot of cases, in the prompting world, in the soft prompt world, you usually need a pretty large width of prompt to be able to do things. Generally in that setup you're doing a specific task and you're providing a large soft prompt to do it with; this often has a width of 50 and a length of the hidden size, or the embedding size, of the model. In our case all of our steering vectors have width one, and they're just the hidden size of the model.

So what ends up happening... actually, before I go to results, any questions? This is a weird setup, and weird relative to what other people do, so I'm happy to take questions.

[Student question, inaudible]

Yeah, similarly, if your prompt was of a specific type... so the prompt here is a continuous vector passed in; it's a single, width-one, hidden-size continuous vector, so it's kind of like collapsing your prompt, compressing it, into this tiny vector. You can think of it that way. Any other questions?

[Student question, inaudible]

Potentially. This is something that I wanted to work on, like a year ago, and I didn't get sufficient buy-in, and then I had to apply to grad school and all these things, so it went by the wayside, but it's definitely something to pursue; there's a lot of scope there. Any other questions?

All right, so moving over to results. We can find steering vectors, and that's an interesting thing in itself, and we can find them pretty easily,
and for most sequences, even sequences that the model hasn't seen before, that the underlying language model hasn't seen before. It also works, and this is kind of a negative, for random sequences of very small length, but those are harder to find. You can imagine that if your steering vector is basically a giant bulldozer, it doesn't matter what your model has learned; similar to the probe situation, if you can compress all the information of that sequence into the vector, you don't really need the language model. So there are cases, when you're looking at sequences of length five, seven, eight, something like this, where you can uniformly sample from the vocabulary at random with replacement, generate utter garbage, and still find steering vectors for it. It takes a little while, but your model is complex enough that you can basically bulldoze it into doing this, even if that sequence is incredibly low-likelihood under the model; it just works better for things that are higher likelihood under the model.

I think the thing that surprised me the most was that these steering vectors themselves have interpretable properties. Distances in steering vector space reflect semantic similarity: if you have two sentences that are semantically close, they're also close in steering vector space, which is kind of nice. It does better than, for example, the representations one would use for probing: mean-pooled BERT hidden states like we looked at before actually do worse than steering vectors, which is a bit surprising. Style transfer is possible with simple vector arithmetic: it would be nice to say I have a sequence, and I want to subtract, you know, negativity and add positivity, for sentiment or other sorts of styles. We can do this, and we can do it reasonably well in steering
vector space. We can also decode from interpolations in the latent space: you take two steering vectors for two sequences, you look in the middle of them, you linearly interpolate between them, and you decode. If the space were weirdly peaky you would have issues and what you generate would be garbage, and there's no guarantee that the space should be reasonable in between, but it turns out it is.

Here's an example of one of these style transfer cases, a very simple, easy sentence. We found a steering vector for "The taste is excellent", and we took a sample of 100 positive sentences and 100 negative sentences, found their steering vectors, took the means, and figured that those look like a positive-concept steering vector and a negative-concept steering vector. Then we just did vector arithmetic, the current steering vector plus negative minus positive, and we got "The taste is unpleasant", and similarly in the reverse direction. A tiny sketch of that arithmetic is below.
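Just to spell that out, here is a tiny sketch of the arithmetic; find_steering_vector stands in for the optimization loop sketched earlier, and the sentence lists are whatever positive and negative samples you collected:

import torch

def mean_steering_vector(sentences):
    # average the steering vectors of a set of sentences to get a "concept" vector
    return torch.stack([find_steering_vector(s) for s in sentences]).mean(dim=0)

def style_transfer_vector(src_sentence, positive_sentences, negative_sentences):
    z_pos = mean_steering_vector(positive_sentences)    # e.g. 100 positive sentences
    z_neg = mean_steering_vector(negative_sentences)    # e.g. 100 negative sentences
    return find_steering_vector(src_sentence) + z_neg - z_pos

# Decoding greedily from <bos> with the resulting vector injected at the same layer
# and time step as during optimization gives the style-transferred output, in this
# example "The taste is unpleasant."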
It turns out that the magnitude matters, because for every single sequence there's kind of an n-dimensional ball around the steering vector we found that also decodes that specific sequence, and that shows the space is reasonably well formed. There are of course a lot of weird sorts of areas, so if you go poke around in steering vector space and try to sample from it, eventually you'll find some weird edge cases and some garbage and repeated text and little things like this. Any questions here before I rapid-fire through the last few things?

[Student question, inaudible]

Yeah, so we went beyond this. In these specific experiments we looked at the middle of GPT-2, so this was like layer six or layer seven, at the first time step, and we didn't do any magnitude scaling. You can imagine that if you put a giant vector in there, the rest of the model has never seen something of that magnitude, so it's now in a weird state and it's just going to break. If you scale this up to, I don't know, 500 or something like that, it breaks; it just has no idea. It's basically like telling the rest of your model, hey, you're a completely untrained model; it looks similar to random performance, you get repeats and things like this. Smaller, and you end up staying in this ball for the sequence. Two seemed pretty reasonable, but we didn't spend a lot of time on it; just, like, the day before the paper was due we decided two seems reasonable, we went to three, we went to five; ten broke, five somewhat broke, two seemed reasonable. Decent findings, hopefully.

Cool, so I'll talk about a similar type of work that came out more recently, on inference-time intervention. Basically they use some of the ideas that we talked about earlier: they use linear probes to find attention heads that correspond to a desired attribute. They did this for TruthfulQA, so their hope was to find truthful directions in latent space, and then they shifted the attention head activations during inference along the directions determined by the probes. What this kind of looks like is: you take your attention heads, you probe them, so you stick a classifier on top; this classifier learns to disentangle truthful from untruthful, and now you have a hyperplane, and you can move orthogonally to this hyperplane in a direction depending on which way you want to shift. So if you want to move towards truthful, you can move in that direction, or away from it. And they do this, and it works pretty well; I think they did it for a GPT model and maybe a Llama model, but I can't remember the exact details. A rough sketch of this kind of head-level shift is below.
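A rough sketch of the core operation, with hypothetical names; the real method picks a subset of heads and tunes the scale carefully:

import torch

def shift_along_probe(head_output, probe_weights, alpha=1.0):
    # shift an attention head's output along the direction learned by a linear probe;
    # probe_weights is the weight vector of a probe trained to separate "truthful"
    # from "untruthful" activations, so moving orthogonally to its decision
    # hyperplane means moving along this (normalized) direction
    direction = probe_weights / probe_weights.norm()
    return head_output + alpha * direction

# During generation, you would register a hook on the chosen attention heads and
# apply shift_along_probe to their outputs at every decoding step; alpha is the
# scale hyperparameter that has to be searched (too large and generation breaks).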
They do this and it works pretty well. I think they did it for a GPT model and maybe a LLaMA model, but I can't remember the exact details. It's a similar intervention: they basically add the vector they found, and they have a little note on scaling. If the magnitude is too large, things break, so they run a hyperparameter search over the magnitude of the activation shift. It's a very similar approach to what we did, but it focuses on specific attention heads rather than intervening on all of them. So, back to the question from earlier about whether attention heads specialize: it seems like they do, and many of them have no probing accuracy, or limited probing accuracy, and actually act like distractors for the truthful direction.

Any questions here?

Cool, so more activation manipulation. There's some recent work on contrastive steering vectors. The way we did the sentiment steering, we had some positive sentences and some negative sentences that weren't tied together in any meaningful way, and we found their steering vectors separately. You could instead imagine, and this is maybe the more useful case, designing two prompts that go two different ways, finding their representations, and doing the manipulation on the difference between them, individually, rather than for a whole concept or a whole attribute. The value is that your context is preserved. If you're doing something like retrieval-based generation, where you have a document and then a question, and you want to ask that question in two different ways for two different purposes, this would be a much better approach for using steering vectors than the stuff I was doing.
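Here is a hedged sketch of that contrastive idea under the assumptions just stated: two designed prompts that differ only in the attribute of interest, a difference of their hidden representations at some layer, and that difference added while generating from the untouched context. `hidden_state` and `generate_with_offset` are hypothetical stand-ins for model-specific hooks.

```python
# Sketch of contrastive steering: steer with the difference between two
# designed prompts, leaving the document/question context itself untouched.
import numpy as np

def contrastive_steering_vector(prompt_plus, prompt_minus, hidden_state):
    """hidden_state(text) -> layer activation vector for that prompt."""
    return hidden_state(prompt_plus) - hidden_state(prompt_minus)

def steer(context, question, delta, scale, generate_with_offset):
    """Answer `question` over `context`, nudged along `delta`.
    The context is preserved, which is the advantage over concept-level
    arithmetic computed from unrelated positive/negative sentence pools."""
    return generate_with_offset(context + "\n" + question, scale * delta)
```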
It also seems to work a little bit better. They didn't compare against our approach because it's not an apples-to-apples comparison, but it seems to work better, be more general, and be more useful.

Any questions?

Cool, so what can model interpretability give us? These are my concluding remarks. Hopefully we get a better understanding of how language models work, their internals and their structure, and of why they do so well, which is still very unclear. Hopefully we also find lightweight methods to control and steer models: as models become more useful and impact more users, we need better ways to control and steer them, and it's unclear how much industry will devote to this, so it might be the role of academia to do the science needed to figure out how to control and steer these models better. And hopefully we can also find potential alternatives or complementary methods for alignment. RLHF is kind of expensive, and if we could align models with limited data by exploiting structure and information that's already in the model, more so than these methods do, maybe we can align them better. These approaches don't have to be alternatives; they can be complementary to RLHF.

Here are some resources. This is an extremely incomplete list, but here are some folks who work on model interpretability, and there are many more. I cited some work from some of these teams, but there are a lot of people working on this, and in the last year there's been an explosion, especially in the mechanistic interpretability world. Sasha Rush had a recent tweet asking prospective grad students which topic they're most excited about, and mechanistic interpretability seemed
to have won out. So I encourage you to dive into this literature and read some of the papers if you're excited about it. And yeah, thanks for your attention; that's all I have.