diff --git "a/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt" "b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt" new file mode 100644--- /dev/null +++ "b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt" @@ -0,0 +1,5482 @@ +WEBVTT + +00:00:00.919 --> 00:00:05.879 +so in my slides here I'm going to talk + +00:00:03.760 --> 00:00:10.040 +about debugging and understanding NLP + +00:00:05.879 --> 00:00:12.400 +models and this is how to tell uh when + +00:00:10.040 --> 00:00:14.759 +for example both your implementations + +00:00:12.400 --> 00:00:17.320 +are wrong and uh for example your + +00:00:14.759 --> 00:00:19.000 +underlying assumptions are wrong or your + +00:00:17.320 --> 00:00:21.240 +model is failing on particular segments + +00:00:19.000 --> 00:00:23.439 +of data or stuff like that so going to + +00:00:21.240 --> 00:00:26.160 +go through uh a variety of things that + +00:00:23.439 --> 00:00:29.000 +can go wrong with your experiments + +00:00:26.160 --> 00:00:31.679 +basically so a typical situation is + +00:00:29.000 --> 00:00:33.399 +you've implemented some NLP system you + +00:00:31.679 --> 00:00:35.840 +know based on neural networks of course + +00:00:33.399 --> 00:00:36.920 +because that's what we use nowadays um + +00:00:35.840 --> 00:00:40.000 +and you've looked at the code it + +00:00:36.920 --> 00:00:42.000 +basically looks okay um but it has low + +00:00:40.000 --> 00:00:44.559 +accuracy or it makes incomprehensible + +00:00:42.000 --> 00:00:45.680 +errors and you would like to uh fix + +00:00:44.559 --> 00:00:47.440 +these or you'd like to improve the + +00:00:45.680 --> 00:00:49.120 +accuracy or something like that and so + +00:00:47.440 --> 00:00:52.000 +what do I + +00:00:49.120 --> 00:00:53.680 +do and I think there's three dimensions + +00:00:52.000 --> 00:00:56.239 +of how you can understand your model and + +00:00:53.680 --> 00:00:57.960 +your Model Behavior um the first 
one is + +00:00:56.239 --> 00:01:00.199 +debugging the implementation so it's + +00:00:57.960 --> 00:01:03.760 +identifying problems that you have when + +00:01:00.199 --> 00:01:05.880 +you uh implemented something uh second + +00:01:03.760 --> 00:01:07.759 +thing is actionable evaluation so + +00:01:05.880 --> 00:01:09.799 +identifying typical error cases and + +00:01:07.759 --> 00:01:11.840 +what you can do to fix them and + +00:01:09.799 --> 00:01:13.720 +finally uh interpreting predictions or + +00:01:11.840 --> 00:01:18.080 +interpreting what's happening inside the + +00:01:13.720 --> 00:01:19.920 +model and uh this can maybe give you a + +00:01:18.080 --> 00:01:21.520 +deeper idea about what's happening in + +00:01:19.920 --> 00:01:22.720 +individual cases and + +00:01:21.520 --> 00:01:25.240 +there's a lot of reasons why you might + +00:01:22.720 --> 00:01:27.920 +want to do that uh both like to make + +00:01:25.240 --> 00:01:30.280 +your models better and also for example + +00:01:27.920 --> 00:01:31.840 +if you want to be sure that your system + +00:01:30.280 --> 00:01:34.840 +isn't doing something illegal like + +00:01:31.840 --> 00:01:36.439 +discriminating against people uh due to + +00:01:34.840 --> 00:01:38.680 +protected attributes or other things + +00:01:36.439 --> 00:01:41.399 +like that so um there's a number of + +00:01:38.680 --> 00:01:42.920 +reasons why you'd want to do that so I'm + +00:01:41.399 --> 00:01:44.399 +going to talk about the first two and + +00:01:42.920 --> 00:01:48.840 +Nishant is mainly going to talk about + +00:01:44.399 --> 00:01:52.000 +the third one so uh going right into + +00:01:48.840 --> 00:01:55.159 +it so in neural network models uh + +00:01:52.000 --> 00:01:58.880 +debugging is really important because + +00:01:55.159 --> 00:02:00.920 +they're opaque they're unpredictable and + +00:01:58.880 --> 00:02:03.119 +uh if you make little mistakes they can + +00:02:00.920 --> 00:02:05.439 +cause big
problems with your + +00:02:03.119 --> 00:02:07.399 +output and another thing is that + +00:02:05.439 --> 00:02:09.640 +everything is a hyperparameter including + +00:02:07.399 --> 00:02:11.239 +your network size your model variations + +00:02:09.640 --> 00:02:14.440 +your batch size your strategy your + +00:02:11.239 --> 00:02:18.120 +Optimizer and your learning rate + +00:02:14.440 --> 00:02:19.560 +and finally unlike kind of more + +00:02:18.120 --> 00:02:21.200 +traditional machine learning methods + +00:02:19.560 --> 00:02:23.000 +like logistic regression or support + +00:02:21.200 --> 00:02:25.160 +Vector machines or something like that + +00:02:23.000 --> 00:02:27.879 +that you might have studied in + +00:02:25.160 --> 00:02:30.160 +your machine learning class um + +00:02:27.879 --> 00:02:32.599 +stochastic optimization has no guarantee + +00:02:30.160 --> 00:02:34.239 +about convergence um your loss might go + +00:02:32.599 --> 00:02:35.720 +down then it might go up and there might + +00:02:34.239 --> 00:02:38.120 +be absolutely nothing wrong with your + +00:02:35.720 --> 00:02:40.200 +training or it might be you know a + +00:02:38.120 --> 00:02:42.319 +serious problem so that's another issue + +00:02:40.200 --> 00:02:45.440 +you need to deal + +00:02:42.319 --> 00:02:48.800 +with so first I'd like to go into + +00:02:45.440 --> 00:02:51.400 +possible causes of problems with your + +00:02:48.800 --> 00:02:53.440 +implementation and I'm going to break + +00:02:51.400 --> 00:02:55.040 +them down into a typology and based on + +00:02:53.440 --> 00:02:57.040 +what part of the typology you're running + +00:02:55.040 --> 00:02:59.200 +into problems with you will need to fix + +00:02:57.040 --> 00:03:00.800 +them in different ways so your first + +00:02:59.200 --> 00:03:02.599 +goal when you're experiencing the + +00:03:00.800 --> 00:03:04.720 +problem is identifying why you're + +00:03:02.599 --> 00:03:06.400 +experiencing the problem uh because that +
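[Editor's sketch: the point about stochastic optimization having no convergence guarantee can be made concrete. Per-step losses can go up even when training is perfectly healthy, so it helps to watch a running average rather than raw values. The noisy quadratic below is a made-up stand-in for a real training loss, not anything from the lecture.]

```python
import random

def sgd_on_noisy_loss(steps=500, lr=0.1, seed=0):
    """SGD on f(w) = (w - 3)^2 with noisy gradients, standing in for
    mini-batch training; returns the per-step loss values."""
    rng = random.Random(seed)
    w = 10.0
    losses = []
    for _ in range(steps):
        grad = 2.0 * (w - 3.0) + rng.gauss(0.0, 2.0)  # gradient noise from mini-batching
        w -= lr * grad
        losses.append((w - 3.0) ** 2)
    return losses

def running_mean(xs, window=50):
    """Average of the last `window` values at each step."""
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

losses = sgd_on_noisy_loss()
smoothed = running_mean(losses)
```

Individual steps go up and down, but the smoothed curve trends toward zero; only a smoothed loss that plateaus high, or diverges, signals a real problem.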
+00:03:04.720 --> 00:03:08.760 +will lead you to a + +00:03:06.400 --> 00:03:10.440 +solution so for training time problems + +00:03:08.760 --> 00:03:12.560 +there's a bunch of uh things that could + +00:03:10.440 --> 00:03:14.360 +be wrong uh the first is a lack of model + +00:03:12.560 --> 00:03:16.280 +capacity so your model is not able to + +00:03:14.360 --> 00:03:18.599 +model the phenomena that you want to + +00:03:16.280 --> 00:03:20.000 +model in the first place um you could + +00:03:18.599 --> 00:03:22.080 +have a poor training + +00:03:20.000 --> 00:03:24.920 +algorithm uh you could just have a bug + +00:03:22.080 --> 00:03:27.080 +in your code at training time another + +00:03:24.920 --> 00:03:29.319 +thing is uh test time problems and these + +00:03:27.080 --> 00:03:30.599 +can include a disconnect between what + +00:03:29.319 --> 00:03:33.040 +you're doing at training time and what + +00:03:30.599 --> 00:03:35.640 +you're testing at testing time uh + +00:03:33.040 --> 00:03:37.959 +failure of search + +00:03:35.640 --> 00:03:39.920 +algorithms and another thing you want to + +00:03:37.959 --> 00:03:41.360 +deal with is overfitting so you're + +00:03:39.920 --> 00:03:44.319 +actually doing well on the training set + +00:03:41.360 --> 00:03:48.360 +but you're doing poorly on the test + +00:03:44.319 --> 00:03:50.400 +Set uh finally you could have um optimiz + +00:03:48.360 --> 00:03:52.640 +a mismatch between the function you're + +00:03:50.400 --> 00:03:54.920 +optimizing at evaluation time and uh + +00:03:52.640 --> 00:03:56.519 +what you're actually evaluating sorry + +00:03:54.920 --> 00:03:58.079 +the fun the function that you're + +00:03:56.519 --> 00:04:01.079 +optimizing at training time and what + +00:03:58.079 --> 00:04:03.720 +you're actually evaluating at test time + +00:04:01.079 --> 00:04:05.280 +and my my best piece of advice for + +00:04:03.720 --> 00:04:07.959 +figuring out why things are going wrong + +00:04:05.280 --> 00:04:11.040 +is 
don't uh try to do all of them at + +00:04:07.959 --> 00:04:12.560 +once and rather uh start from the top + +00:04:11.040 --> 00:04:15.239 +and work it down because the ones at the + +00:04:12.560 --> 00:04:17.600 +top are often easier to uh diagnose than + +00:04:15.239 --> 00:04:20.680 +the ones at the + +00:04:17.600 --> 00:04:23.000 +bottom so looking at how you can debug + +00:04:20.680 --> 00:04:25.919 +systems at training time uh there's a + +00:04:23.000 --> 00:04:27.360 +number of ways you can do this uh but + +00:04:25.919 --> 00:04:30.039 +the most important thing for training + +00:04:27.360 --> 00:04:33.479 +time uh issues is looking at the loss + +00:04:30.039 --> 00:04:36.759 +function calculated on the training set + +00:04:33.479 --> 00:04:38.960 +and what I mean by this is don't look uh + +00:04:36.759 --> 00:04:41.240 +we talked about how we can't optimize + +00:04:38.960 --> 00:04:45.039 +error or accuracy easily so instead we + +00:04:41.240 --> 00:04:47.120 +optimize likelihood um and so you might + +00:04:45.039 --> 00:04:49.080 +want to look at accuracy to see whether + +00:04:47.120 --> 00:04:50.759 +your model is working well but I would + +00:04:49.080 --> 00:04:53.039 +urge you first to look at your + +00:04:50.759 --> 00:04:55.080 +likelihood or your loss function on the + +00:04:53.039 --> 00:04:57.000 +training set instead of your accuracy on + +00:04:55.080 --> 00:04:58.479 +the test set for example to diagnose + +00:04:57.000 --> 00:05:00.600 +this variety of + +00:04:58.479 --> 00:05:02.919 +problems and the sorts of things you + +00:05:00.600 --> 00:05:05.840 +want to look at are um is the loss + +00:05:02.919 --> 00:05:10.639 +function going down so is it you know + +00:05:05.840 --> 00:05:14.199 +converging into a good place + +00:05:10.639 --> 00:05:16.280 +um in general if this is your + +00:05:14.199 --> 00:05:18.600 +loss um the first thing you should know + +00:05:16.280 --> 00:05:20.440 +is like what is a good loss uh
in most + +00:05:18.600 --> 00:05:22.280 +cases a good loss is zero like log + +00:05:20.440 --> 00:05:26.280 +likelihood the best loss you can achieve + +00:05:22.280 --> 00:05:28.639 +is zero so you have zero down here um + +00:05:26.280 --> 00:05:31.639 +something + +00:05:28.639 --> 00:05:31.639 +like + +00:05:31.919 --> 00:05:36.680 +this is uh essentially a good loss + +00:05:38.080 --> 00:05:43.120 +function something like that uh + +00:05:41.360 --> 00:05:45.120 +especially if this is a relatively High + +00:05:43.120 --> 00:05:47.759 +number is usually a bad loss + +00:05:45.120 --> 00:05:50.319 +function + +00:05:47.759 --> 00:05:52.680 +um something like that on your training + +00:05:50.319 --> 00:05:54.240 +set is a very bad loss function uh + +00:05:52.680 --> 00:05:55.840 +something something is going seriously + +00:05:54.240 --> 00:05:57.960 +wrong if you see this on your Dev set + +00:05:55.840 --> 00:05:59.800 +that could be or your test set that + +00:05:57.960 --> 00:06:01.199 +could be uh overfitting but but if + +00:05:59.800 --> 00:06:03.440 +you're seeing that on your training set + +00:06:01.199 --> 00:06:05.759 +that's usually symptomatic of a problem + +00:06:03.440 --> 00:06:09.160 +so uh these are uh things that you + +00:06:05.759 --> 00:06:10.960 +should be uh knowing um is it going down + +00:06:09.160 --> 00:06:13.520 +basically to zero if you run training + +00:06:10.960 --> 00:06:16.000 +long enough um for many epochs over your + +00:06:13.520 --> 00:06:17.479 +training data so if it's not going down + +00:06:16.000 --> 00:06:20.599 +to zero and it's sticking up here then + +00:06:17.479 --> 00:06:20.599 +that's also an + +00:06:21.120 --> 00:06:25.759 +issue and um if it's not going down to + +00:06:23.840 --> 00:06:27.919 +close to zero on whatever training set + +00:06:25.759 --> 00:06:30.199 +you're training on um let's say you make + +00:06:27.919 --> 00:06:31.840 +your training set extremely small + +00:06:30.199 --> 
00:06:33.319 +uh at least in that case it should go + +00:06:31.840 --> 00:06:34.960 +down to zero otherwise you might have a + +00:06:33.319 --> 00:06:37.199 +serious problem in your + +00:06:34.960 --> 00:06:39.240 +implementation so these are good things + +00:06:37.199 --> 00:06:41.960 +to check first when you're training a + +00:06:39.240 --> 00:06:45.199 +model um and there's a number of reasons + +00:06:41.960 --> 00:06:47.759 +why this might not be helping or why + +00:06:45.199 --> 00:06:50.880 +this might not be happening so um your + +00:06:47.759 --> 00:06:53.120 +Mo model might be too weak and so in + +00:06:50.880 --> 00:06:55.440 +general larger models tend to perform + +00:06:53.120 --> 00:06:58.000 +better uh especially if you're using a + +00:06:55.440 --> 00:06:59.800 +pre-trained model and um this is just an + +00:06:58.000 --> 00:07:03.800 +example from the T5 paper where they + +00:06:59.800 --> 00:07:06.680 +scale up the T5 model um from a + +00:07:03.800 --> 00:07:09.319 +relatively small model to what at the + +00:07:06.680 --> 00:07:12.199 +time was a very large model of 11 + +00:07:09.319 --> 00:07:14.360 +billion parameters now this is you know + +00:07:12.199 --> 00:07:17.479 +a moderately sized model or maybe even + +00:07:14.360 --> 00:07:20.879 +small model by some standards but anyway + +00:07:17.479 --> 00:07:23.800 +you can see that it uh in continues to + +00:07:20.879 --> 00:07:26.479 +increase one really interesting + +00:07:23.800 --> 00:07:30.080 +phenomenon is uh that actually larger + +00:07:26.479 --> 00:07:33.879 +models can learn faster or at least with + +00:07:30.080 --> 00:07:36.680 +fewer steps than uh smaller + +00:07:33.879 --> 00:07:40.199 +models and so this + +00:07:36.680 --> 00:07:42.240 +is an interesting example this paper uh + +00:07:40.199 --> 00:07:43.919 +on neural scaling was it's a very + +00:07:42.240 --> 00:07:48.000 +influential paper but basically what + +00:07:43.919 --> 00:07:51.000 +they show is 
the darker purple ones are + +00:07:48.000 --> 00:07:54.599 +smaller models the yellow ones are + +00:07:51.000 --> 00:07:57.159 +bigger models and what you can see here + +00:07:54.599 --> 00:07:59.639 +is the purple model and on the left side + +00:07:57.159 --> 00:08:02.120 +they have the number of tokens processed + +00:07:59.639 --> 00:08:05.759 +the right side they have the number of + +00:08:02.120 --> 00:08:08.159 +uh compute or the amount of compute um + +00:08:05.759 --> 00:08:10.080 +and so what you can see is if you just + +00:08:08.159 --> 00:08:12.240 +look at the number of tokens processed + +00:08:10.080 --> 00:08:14.280 +the larger the model the faster it + +00:08:12.240 --> 00:08:17.720 +converges which + +00:08:14.280 --> 00:08:21.400 +is maybe a little bit surprising maybe a + +00:08:17.720 --> 00:08:22.680 +little bit you or maybe uh like some + +00:08:21.400 --> 00:08:24.879 +people have the intuition that this + +00:08:22.680 --> 00:08:26.440 +should be the case but when I first saw + +00:08:24.879 --> 00:08:27.759 +this I found it a little bit surprising + +00:08:26.440 --> 00:08:29.000 +because I thought it would be so large + +00:08:27.759 --> 00:08:29.960 +and noisy that the model would have + +00:08:29.000 --> 00:08:32.320 +trouble fit + +00:08:29.960 --> 00:08:34.200 +you know fitting the data as quickly but + +00:08:32.320 --> 00:08:36.200 +there's actually a good reason for this + +00:08:34.200 --> 00:08:37.240 +does anyone have a guess about why this + +00:08:36.200 --> 00:08:39.719 +is + +00:08:37.240 --> 00:08:41.240 +thee we've talked a little bit about the + +00:08:39.719 --> 00:08:44.120 +underlying phenomena for this in + +00:08:41.240 --> 00:08:48.360 +previous classes so you might be able to + +00:08:44.120 --> 00:08:48.360 +think back to some of the things you + +00:08:50.480 --> 00:08:56.040 +yeah yeah so um just to repeat there's a + +00:08:54.160 --> 00:08:57.720 +lot of different parameters so it can + +00:08:56.040 --> 
00:08:59.880 +try to converge along a lot of different + +00:08:57.720 --> 00:09:01.920 +dimensions so if we think back to the + +00:08:59.880 --> 00:09:04.079 +like model pruning class and other stuff + +00:09:01.920 --> 00:09:06.640 +like that um part of the reason why we + +00:09:04.079 --> 00:09:08.000 +can prune large models so efficiently is + +00:09:06.640 --> 00:09:10.200 +because only like a small number of the + +00:09:08.000 --> 00:09:12.440 +parameters are actually useful and so if + +00:09:10.200 --> 00:09:15.120 +you start out with a much larger model + +00:09:12.440 --> 00:09:17.720 +it's more likely to have useful subsets + +00:09:15.120 --> 00:09:20.320 +of the parameters basically um which is + +00:09:17.720 --> 00:09:21.560 +called the lottery ticket hypothesis uh + +00:09:20.320 --> 00:09:23.839 +there there's a famous paper called the + +00:09:21.560 --> 00:09:27.560 +lottery ticket hypothesis examines this + +00:09:23.839 --> 00:09:29.680 +phenomenon so um one one interesting + +00:09:27.560 --> 00:09:32.160 +thing is you can see that even if you + +00:09:29.680 --> 00:09:35.640 +scale up the compute even if you measure + +00:09:32.160 --> 00:09:37.640 +based on compute the uh larger models + +00:09:35.640 --> 00:09:38.959 +eventually surpass the smaller models in + +00:09:37.640 --> 00:09:41.920 +terms of how efficient they are at + +00:09:38.959 --> 00:09:44.680 +modeling the data and that's just + +00:09:41.920 --> 00:09:46.760 +because models tend to learn well for a + +00:09:44.680 --> 00:09:49.560 +while and then they basically reach + +00:09:46.760 --> 00:09:51.760 +their capacity and stop learning well or + +00:09:49.560 --> 00:09:53.680 +they start learning very slowly and once + +00:09:51.760 --> 00:09:57.120 +you get to that point the larger models + +00:09:53.680 --> 00:09:58.800 +work better so there's a kind of + +00:09:57.120 --> 00:10:00.640 +counterintuitive thing that if you want + +00:09:58.800 --> 00:10:04.160 +to train 
faster you actually can train a + +00:10:00.640 --> 00:10:06.839 +larger model and uh that will + +00:10:04.160 --> 00:10:08.000 +uh get you to a good solution at some + +00:10:06.839 --> 00:10:09.640 +point that will get you to a good + +00:10:08.000 --> 00:10:11.120 +solution faster than a smaller model + +00:10:09.640 --> 00:10:15.200 +would you know of course you need memory + +00:10:11.120 --> 00:10:15.200 +and stuff but why are we looking + +00:10:20.040 --> 00:10:26.920 +at so this is test loss training loss + +00:10:22.760 --> 00:10:30.680 +also looks like this um I think on + +00:10:26.920 --> 00:10:34.360 +this particular + +00:10:30.680 --> 00:10:37.519 +on this particular paper they never + +00:10:34.360 --> 00:10:39.399 +repeated data and if you never repeat + +00:10:37.519 --> 00:10:42.560 +data actually your training loss looks + +00:10:39.399 --> 00:10:44.680 +very similar to your test loss because + +00:10:42.560 --> 00:10:46.079 +actually if you can + +00:10:44.680 --> 00:10:48.760 +assume your training data set and your + +00:10:46.079 --> 00:10:50.880 +test data set are um uh identically + +00:10:48.760 --> 00:10:52.279 +distributed your training loss on new + +00:10:50.880 --> 00:10:54.600 +training data should be exactly the same + +00:10:52.279 --> 00:10:55.959 +as your test loss so I think that's + +00:10:54.600 --> 00:10:57.760 +basically why they were justified in + +00:10:55.959 --> 00:11:01.000 +doing that good but they probably did + +00:10:57.760 --> 00:11:03.639 +test loss to like squash the concern + +00:11:01.000 --> 00:11:05.839 +that this was overfitting or + +00:11:03.639 --> 00:11:09.200 +something but good + +00:11:05.839 --> 00:11:11.279 +question um cool so these are + +00:11:09.200 --> 00:11:13.000 +good things to know um so basically if + +00:11:11.279 --> 00:11:14.839 +you see your model doing something like + +00:11:13.000 --> 00:11:16.279 +this um plateauing out maybe your +
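[Editor's sketch: the claim that with never-repeated training data the training loss tracks the test loss is just a consequence of both being i.i.d. samples from the same distribution. A minimal illustration with a fixed model scored on two independent draws; the Bernoulli "model" and all numbers here are made up, not from the paper.]

```python
import math, random

def mean_nll(labels, p=0.7):
    """Mean negative log likelihood of a fixed model that always
    predicts P(y=1) = p."""
    return -sum(math.log(p) if y == 1 else math.log(1.0 - p)
                for y in labels) / len(labels)

rng = random.Random(0)

def draw(n):
    """Fresh i.i.d. sample from the true distribution, y ~ Bernoulli(0.7)."""
    return [1 if rng.random() < 0.7 else 0 for _ in range(n)]

never_repeated_train = draw(20000)  # stream of never-repeated training data
held_out_test = draw(20000)         # an independent test set

train_loss = mean_nll(never_repeated_train)
test_loss = mean_nll(held_out_test)
```

The two losses agree up to sampling noise; only when the same training examples are revisited does the training loss start to drift below the test loss.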
+00:11:14.839 --> 00:11:18.680 +model's too small and you need to train a + +00:11:16.279 --> 00:11:20.920 +bigger one + +00:11:18.680 --> 00:11:22.200 +basically another uh piece of trouble + +00:11:20.920 --> 00:11:26.800 +that you can have is trouble with + +00:11:22.200 --> 00:11:29.519 +optimization and basically um you should + +00:11:26.800 --> 00:11:31.600 +check your Optimizer um usually people + +00:11:29.519 --> 00:11:35.639 +are using Adam variants nowadays like + +00:11:31.600 --> 00:11:37.839 +Adam or AdamW so just use that um + +00:11:35.639 --> 00:11:39.639 +learning rate uh so make sure that the + +00:11:37.839 --> 00:11:41.160 +learning rate you're using is standard + +00:11:39.639 --> 00:11:43.399 +for kind of the model size that you're + +00:11:41.160 --> 00:11:44.920 +using and the best way to do this is uh + +00:11:43.399 --> 00:11:46.000 +look at previous papers and see what + +00:11:44.920 --> 00:11:50.160 +they're + +00:11:46.000 --> 00:11:51.680 +using um initialization most people + +00:11:50.160 --> 00:11:53.440 +nowadays will not be training from + +00:11:51.680 --> 00:11:55.440 +scratch but if you are training from + +00:11:53.440 --> 00:11:58.040 +scratch how you initialize your model is + +00:11:55.440 --> 00:11:59.399 +really important and normally the way + +00:11:58.040 --> 00:12:03.320 +you do this is you do this with some + +00:11:59.399 --> 00:12:05.079 +sort of uniform random noise and uh + +00:12:03.320 --> 00:12:06.959 +specifically you can pick the uniform + +00:12:05.079 --> 00:12:08.800 +random noise in intelligent ways based + +00:12:06.959 --> 00:12:12.240 +on the data size which I'll talk + +00:12:08.800 --> 00:12:13.920 +about in a second um also mini batching + +00:12:12.240 --> 00:12:15.639 +um are you using sufficiently large + +00:12:13.920 --> 00:12:17.480 +batches of data if you're using small + +00:12:15.639 --> 00:12:18.720 +batches of data you might have too much + +00:12:17.480 --> 00:12:21.279 +noise in your
training and it might + +00:12:18.720 --> 00:12:23.839 +diverge so uh these are things you need + +00:12:21.279 --> 00:12:23.839 +to think about as + +00:12:25.279 --> 00:12:30.560 +well + +00:12:27.519 --> 00:12:35.000 +cool um so these are training time + +00:12:30.560 --> 00:12:37.320 +things um the next thing is debugging at + +00:12:35.000 --> 00:12:37.320 +test + +00:12:38.160 --> 00:12:43.839 +time and this is particularly important + +00:12:41.240 --> 00:12:47.320 +if you're doing any sort + +00:12:43.839 --> 00:12:48.880 +of like I guess a lot of this has kind + +00:12:47.320 --> 00:12:51.360 +of been commoditized and it's + +00:12:48.880 --> 00:12:52.560 +implemented in hugging face and stuff + +00:12:51.360 --> 00:12:55.120 +like that and as long as you're using + +00:12:52.560 --> 00:12:57.279 +the standard implementations you're less + +00:12:55.120 --> 00:12:59.000 +likely to run into these bugs but if you + +00:12:57.279 --> 00:13:00.519 +are implementing anything on your own + +00:12:59.000 --> 00:13:03.040 +this is actually really tricky and you + +00:13:00.519 --> 00:13:07.880 +can easily make mistakes so uh it's + +00:13:03.040 --> 00:13:08.959 +important to know about it so um + +00:13:07.880 --> 00:13:10.680 +one of the reasons why you can have + +00:13:08.959 --> 00:13:12.240 +training and test disconnects especially + +00:13:10.680 --> 00:13:14.399 +if you're doing something like text + +00:13:12.240 --> 00:13:15.959 +generation is that usually your loss + +00:13:14.399 --> 00:13:17.720 +calculation and prediction functions + +00:13:15.959 --> 00:13:20.480 +will be implemented in different + +00:13:17.720 --> 00:13:23.360 +functions and like anything in software + +00:13:20.480 --> 00:13:25.440 +engineering um this can be a source of + +00:13:23.360 --> 00:13:26.760 +bugs duplicated source code can be a + +00:13:25.440 --> 00:13:28.440 +source of bugs because you might + +00:13:26.760 --> 00:13:30.199 +Implement one thing in one place in
one + +00:13:28.440 --> 00:13:33.000 +way another thing in another place in + +00:13:30.199 --> 00:13:35.560 +another way so this is no exception to + +00:13:33.000 --> 00:13:37.399 +that um it's especially true for + +00:13:35.560 --> 00:13:39.000 +structured prediction models so anything + +00:13:37.399 --> 00:13:40.399 +where you're not just making a single + +00:13:39.000 --> 00:13:42.079 +prediction but you're making multiple + +00:13:40.399 --> 00:13:43.839 +predictions in a row so you need to be a + +00:13:42.079 --> 00:13:46.959 +little bit careful about + +00:13:43.839 --> 00:13:49.880 +that um another thing that you need to + +00:13:46.959 --> 00:13:51.079 +be pay attention about is often uh + +00:13:49.880 --> 00:13:52.680 +especially if you're doing your own + +00:13:51.079 --> 00:13:55.880 +implementation loss calculation it's + +00:13:52.680 --> 00:13:59.800 +mini batched and generation is not or in + +00:13:55.880 --> 00:14:02.199 +highly optimized versions of um of + +00:13:59.800 --> 00:14:03.880 +inference you might be doing inference + +00:14:02.199 --> 00:14:05.360 +with Dynamic batching and stuff like + +00:14:03.880 --> 00:14:06.720 +that and it might become complicated you + +00:14:05.360 --> 00:14:09.800 +might make + +00:14:06.720 --> 00:14:12.160 +mistakes um so how do + +00:14:09.800 --> 00:14:15.839 +we make sure that we're not making any + +00:14:12.160 --> 00:14:18.560 +mistakes here um there's a really simple + +00:14:15.839 --> 00:14:21.199 +way to debug any sort of mini batched + +00:14:18.560 --> 00:14:24.199 +loss calculation because normally when + +00:14:21.199 --> 00:14:27.000 +we mini batch loss calculations we're + +00:14:24.199 --> 00:14:31.079 +simultaneously calculating uh the loss + +00:14:27.000 --> 00:14:35.600 +for like uh four four or eight or + +00:14:31.079 --> 00:14:37.560 +whatever sequences at a time and so you + +00:14:35.600 --> 00:14:40.279 +can calculate the loss with a large + +00:14:37.560 --> 00:14:42.000 
+batch size like 32 and then calculate + +00:14:40.279 --> 00:14:44.920 +the loss for each uh sentence + +00:14:42.000 --> 00:14:47.720 +individually and sum them together and + +00:14:44.920 --> 00:14:49.480 +these uh values should be the same and + +00:14:47.720 --> 00:14:52.160 +this can help make sure that you don't + +00:14:49.480 --> 00:14:55.120 +have any you know issues with your + +00:14:52.160 --> 00:14:57.959 +padding or your masking or other things + +00:14:55.120 --> 00:14:59.800 +like this um so this is particularly + +00:14:57.959 --> 00:15:01.959 +important if you're not just using out + +00:14:59.800 --> 00:15:04.240 +of the box things so you have a slightly + +00:15:01.959 --> 00:15:06.240 +unusually structured model with like + +00:15:04.240 --> 00:15:08.880 +hierarchical encoding or anything like + +00:15:06.240 --> 00:15:11.680 +that you need to be really careful about + +00:15:08.880 --> 00:15:15.440 +that um you can even create unit tests + +00:15:11.680 --> 00:15:17.399 +that test this so like um in machine + +00:15:15.440 --> 00:15:18.959 +learning code we don't write unit tests + +00:15:17.399 --> 00:15:20.160 +or especially neural network based + +00:15:18.959 --> 00:15:22.440 +machine learning code we don't write + +00:15:20.160 --> 00:15:24.160 +unit tests that often because it's kind + +00:15:22.440 --> 00:15:26.279 +of hard to do there's lots of Randomness + +00:15:24.160 --> 00:15:27.959 +and other stuff like that um but this is + +00:15:26.279 --> 00:15:30.959 +one thing that you can easily test + +00:15:27.959 --> 00:15:30.959 +and make sure that you don't have these + +00:15:32.440 --> 00:15:39.319 +mistakes um any sort of uh generation + +00:15:36.480 --> 00:15:43.199 +algorithm uh so when you're generating + +00:15:39.319 --> 00:15:44.639 +or decoding um you can make sure that + +00:15:43.199 --> 00:15:47.639 +your decoding code is getting the same + +00:15:44.639 --> 00:15:50.040 +score as when you calculate the loss and
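[Editor's sketch: the batch-versus-individual loss check described above can be written as a small unit test. This is plain-Python cross-entropy over right-padded sequences; the toy logits and the PAD convention are made up, not any particular library's API.]

```python
import math

PAD = -1  # made-up padding marker for target positions

def log_softmax(logits):
    m = max(logits)
    z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - z for x in logits]

def sequence_nll(logit_rows, targets):
    """Summed NLL over the non-pad target tokens of one sequence."""
    total = 0.0
    for logits, t in zip(logit_rows, targets):
        if t == PAD:
            continue  # masked position: must not contribute to the loss
        total -= log_softmax(logits)[t]
    return total

def batched_nll(batch_logits, batch_targets):
    """The 'batched' code path: iterate position-major with a pad mask,
    the way a vectorized implementation is usually organized."""
    total = 0.0
    for pos in range(len(batch_targets[0])):
        for rows, tgts in zip(batch_logits, batch_targets):
            if tgts[pos] != PAD:
                total -= log_softmax(rows[pos])[tgts[pos]]
    return total

# Two sequences of different lengths, right-padded to length 3.
batch_logits = [
    [[0.1, 2.0, -1.0], [1.5, 0.3, 0.0], [0.0, 0.0, 0.0]],  # last row is padding
    [[2.0, -0.5, 0.2], [0.4, 0.4, 1.2], [1.0, 0.2, -0.3]],
]
batch_targets = [[1, 2, PAD], [2, 1, 1]]

batch_loss = batched_nll(batch_logits, batch_targets)
individual_sum = sum(sequence_nll(r, t)
                     for r, t in zip(batch_logits, batch_targets))
```

If the two totals disagree, a padded position is almost certainly leaking into the loss somewhere.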
+00:15:47.639 --> 00:15:52.959 +an easy way to do this is you call the + +00:15:50.040 --> 00:15:54.759 +decoding function to generate an output + +00:15:52.959 --> 00:15:57.399 +and normally when you're doing any sort + +00:15:54.759 --> 00:15:59.480 +of search or sampling or something like + +00:15:57.399 --> 00:16:02.120 +that during the search or sampling + +00:15:59.480 --> 00:16:05.000 +you're calculating the logits or the log + +00:16:02.120 --> 00:16:07.399 +probabilities of each step that you + +00:16:05.000 --> 00:16:09.120 +sample so you keep track of that during + +00:16:07.399 --> 00:16:12.279 +your sampling + +00:16:09.120 --> 00:16:14.319 +algorithm and then after that you call + +00:16:12.279 --> 00:16:16.800 +the loss function on the generated + +00:16:14.319 --> 00:16:18.639 +output and you calculate the loss + +00:16:16.800 --> 00:16:20.360 +according to the loss function and the + +00:16:18.639 --> 00:16:22.240 +score of these two things should be the + +00:16:20.360 --> 00:16:26.440 +same uh + +00:16:22.240 --> 00:16:26.440 +so um you know you do your + +00:16:27.920 --> 00:16:35.279 +generate and that gives you an + +00:16:32.000 --> 00:16:35.279 +output and a + +00:16:35.600 --> 00:16:42.360 +score and then you do um + +00:16:39.319 --> 00:16:45.839 +loss on the + +00:16:42.360 --> 00:16:49.040 +output and that gives you score + +00:16:45.839 --> 00:16:53.079 +two and then you just compare these two + +00:16:49.040 --> 00:16:56.360 +things together and uh in my + +00:16:53.079 --> 00:17:01.120 +experience these two things have + +00:16:56.360 --> 00:17:03.240 +allowed me to find + +00:17:01.120 --> 00:17:04.679 +the majority of + +00:17:03.240 --> 00:17:06.600 +the bugs whenever I was + +00:17:04.679 --> 00:17:09.199 +doing any sort of like complex thing + +00:17:06.600 --> 00:17:11.880 +with respect to generation or models and + +00:17:09.199 --> 00:17:13.360 +stuff
like that so um it's a very common + +00:17:11.880 --> 00:17:15.439 +place for bugs even if you're pretty + +00:17:13.360 --> 00:17:17.280 +familiar with models so I would highly + +00:17:15.439 --> 00:17:19.760 +recommend + +00:17:17.280 --> 00:17:21.319 +that um this is particularly bad when + +00:17:19.760 --> 00:17:25.559 +you're doing something like a search + +00:17:21.319 --> 00:17:28.400 +algorithm like beam search um and + +00:17:25.559 --> 00:17:30.400 +so beam search uh as you know from the + +00:17:28.400 --> 00:17:34.200 +generation class instead of picking one + +00:17:30.400 --> 00:17:37.080 +high probability uh you know word in + +00:17:34.200 --> 00:17:40.160 +your next step you maintain several + +00:17:37.080 --> 00:17:41.960 +paths and one way that you can check this + +00:17:40.160 --> 00:17:44.320 +is as you make search better the model + +00:17:41.960 --> 00:17:45.760 +score should get better so the log + +00:17:44.320 --> 00:17:48.240 +likelihood of the output should get + +00:17:45.760 --> 00:17:50.280 +better almost all of the time so you can + +00:17:48.240 --> 00:17:51.840 +search with varying beam sizes and make + +00:17:50.280 --> 00:17:55.280 +sure that you get a better overall model + +00:17:51.840 --> 00:17:57.559 +score at the end so um and you can even + +00:17:55.280 --> 00:17:59.320 +create a unit test testing this as well + +00:17:57.559 --> 00:18:01.000 +I don't think that that many people will + +00:17:59.320 --> 00:18:02.480 +be reimplementing beam search so you + +00:18:01.000 --> 00:18:04.120 +might not need to worry about that too + +00:18:02.480 --> 00:18:05.679 +much but in case you are doing anything + +00:18:04.120 --> 00:18:08.159 +with respect to search algorithms it's a + +00:18:05.679 --> 00:18:08.159 +good thing to + +00:18:08.880 --> 00:18:15.159 +know + +00:18:10.480 --> 00:18:15.159 +cool um any questions about these two so + +00:18:16.919 --> 00:18:24.159 +far no okay um so the next + +00:18:22.600
--> 00:18:25.400 +thing I want to talk about this is + +00:18:24.159 --> 00:18:27.840 +something that people think about a + +00:18:25.400 --> 00:18:29.400 +little bit less uh but it's actually + +00:18:27.840 --> 00:18:31.280 +something really important to know + +00:18:29.400 --> 00:18:34.280 +because it will affect you it will + +00:18:31.280 --> 00:18:35.799 +affect everybody uh to some extent it + +00:18:34.280 --> 00:18:40.760 +will affect you to a greater or lesser + +00:18:35.799 --> 00:18:41.520 +extent depending on um what uh type of + +00:18:40.760 --> 00:18:44.480 +you + +00:18:41.520 --> 00:18:46.799 +know system you're building but it will + +00:18:44.480 --> 00:18:48.760 +definitely affect everybody and that's + +00:18:46.799 --> 00:18:50.960 +the mismatch between the function + +00:18:48.760 --> 00:18:53.440 +that you're optimizing at training time + +00:18:50.960 --> 00:18:55.240 +and the evaluation metric that you're + +00:18:53.440 --> 00:18:58.000 +evaluating and + +00:18:55.240 --> 00:18:59.679 +so uh like as I said in the + +00:18:58.000 --> 00:19:01.679 +reinforcement learning class it's very + +00:18:59.679 --> 00:19:03.640 +common to optimize for maximum + +00:19:01.679 --> 00:19:06.039 +likelihood for training uh but there's + +00:19:03.640 --> 00:19:07.840 +all kinds of problems with this you know + +00:19:06.039 --> 00:19:09.640 +um with respect to it not + +00:19:07.840 --> 00:19:11.640 +being sensitive to mistakes it not being + +00:19:09.640 --> 00:19:14.799 +sensitive to your generation + +00:19:11.640 --> 00:19:16.520 +algorithm um but even though your + +00:19:14.799 --> 00:19:19.880 +likelihood is getting better accuracy + +00:19:16.520 --> 00:19:22.799 +can get worse and this is a super simple + +00:19:19.880 --> 00:19:25.080 +example with uh image classification on + +00:19:22.799 --> 00:19:27.919 +MNIST and I ran this experiment with + +00:19:25.080 --> 00:19:30.880 +like 10 lines of pytorch code or
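[Editor's sketch: the generate-then-rescore check from a few minutes earlier — track the log probability of each token during decoding, then recompute the sequence's score with the loss-side code and compare — looks roughly like this. The "model" is a made-up table where next-token logits depend only on the previous token; none of these names come from the lecture.]

```python
import math

EOS = 0
# Toy "model": next-token logits depend only on the previous token.
NEXT_LOGITS = {
    None: [0.1, 0.4, 2.0],  # start of sequence
    1:    [0.2, 1.5, 0.1],
    2:    [1.8, 0.3, 0.4],
}

def log_softmax(logits):
    m = max(logits)
    z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - z for x in logits]

def generate(max_len=10):
    """Greedy decoding, accumulating the log probability of each chosen
    token as we go (the score tracked inside the search)."""
    prev, out, score = None, [], 0.0
    for _ in range(max_len):
        lp = log_softmax(NEXT_LOGITS[prev])
        tok = max(range(len(lp)), key=lambda i: lp[i])
        score += lp[tok]
        out.append(tok)
        if tok == EOS:
            break
        prev = tok
    return out, score

def sequence_log_prob(tokens):
    """The 'loss side': rescore a fixed token sequence with the same model."""
    prev, total = None, 0.0
    for tok in tokens:
        total += log_softmax(NEXT_LOGITS[prev])[tok]
        prev = tok
    return total

output, search_score = generate()
rescored = sequence_log_prob(output)
```

Any gap between the two scores points at a disagreement between the decoding and loss code paths — off-by-one context, different normalization, and so on.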
+00:19:27.919 --> 00:19:36.840 +something like this uh maybe more like + +00:19:30.880 --> 00:19:40.080 +40 lines of PyTorch um and so here um on the + +00:19:36.840 --> 00:19:43.120 +left side we have the loss on the + +00:19:40.080 --> 00:19:46.600 +training set and the test set or the dev + +00:19:43.120 --> 00:19:48.559 +set and here we have accuracy on the + +00:19:46.600 --> 00:19:50.799 +training set and the test + +00:19:48.559 --> 00:19:55.000 +set + +00:19:50.799 --> 00:19:56.159 +and so oops I showed you the answer so I + +00:19:55.000 --> 00:19:58.799 +was going to do a quiz but I + +00:19:56.159 --> 00:20:00.559 +accidentally showed you the answer um + +00:19:58.799 --> 00:20:04.440 +but the problem here is basically + +00:20:00.559 --> 00:20:06.320 +because um the the loss you're + +00:20:04.440 --> 00:20:09.400 +calculating is the likelihood of the + +00:20:06.320 --> 00:20:11.120 +correct answer and the likelihood of the + +00:20:09.400 --> 00:20:12.440 +correct answer is the probability of + +00:20:11.120 --> 00:20:15.000 +getting the correct + +00:20:12.440 --> 00:20:17.240 +answer the accuracy is the number of + +00:20:15.000 --> 00:20:20.280 +times you're getting the correct answer + +00:20:17.240 --> 00:20:23.799 +so as you train a model to get more and + +00:20:20.280 --> 00:20:25.440 +more confident it gets better it gets + +00:20:23.799 --> 00:20:27.840 +better and better at getting more + +00:20:25.440 --> 00:20:30.039 +answers correct but it also gets more + +00:20:27.840 --> 00:20:33.360 +and more confident in its answers and so + +00:20:30.039 --> 00:20:36.200 +if the you know there's any example that + +00:20:33.360 --> 00:20:37.840 +it's really bad at um it might get very + +00:20:36.200 --> 00:20:42.320 +confident in + +00:20:37.840 --> 00:20:44.760 +that answer that bad answer and the log + +00:20:42.320 --> 00:20:47.320 +likelihood of that answer will go up or + +00:20:44.760 --> 00:20:49.679 +sorry the log likelihood will go down so
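The loss/accuracy decoupling described above can be shown with a tiny numeric sketch (not from the lecture, just an illustration): a later checkpoint gets the same examples right, but its extreme confidence on the one example it gets wrong blows up the negative log likelihood.

```python
import math

def accuracy_and_nll(probs_correct):
    """probs_correct[i] is the model probability assigned to the true
    label of example i (binary case: p > 0.5 counts as correct).
    Returns (accuracy, mean negative log likelihood)."""
    acc = sum(p > 0.5 for p in probs_correct) / len(probs_correct)
    nll = -sum(math.log(p) for p in probs_correct) / len(probs_correct)
    return acc, nll

# Early checkpoint: modestly confident, one example wrong.
acc_e, nll_e = accuracy_and_nll([0.7, 0.7, 0.7, 0.4])
# Later checkpoint: very confident, still the same example wrong --
# identical accuracy, but the loss on the bad example explodes.
acc_l, nll_l = accuracy_and_nll([0.99, 0.99, 0.99, 0.01])
```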
+00:20:47.320 --> 00:20:54.360 +the negative log likelihood will go up + +00:20:49.679 --> 00:20:56.720 +which is the loss so basically + +00:20:54.360 --> 00:20:59.559 +um the + +00:20:56.720 --> 00:21:01.039 +uh the loss that you're calculating and + +00:20:59.559 --> 00:21:03.840 +the thing that you care about in the end + +00:21:01.039 --> 00:21:07.120 +accuracy can be decorrelated + +00:21:03.840 --> 00:21:09.520 +um so there's also an interesting + +00:21:07.120 --> 00:21:12.080 +example um in text generation and this + +00:21:09.520 --> 00:21:14.000 +is part of the reason why uh we have all + +00:21:12.080 --> 00:21:15.880 +these other text generation algorithms + +00:21:14.000 --> 00:21:20.080 +like nucleus sampling or top-k + +00:21:15.880 --> 00:21:23.039 +sampling or other things like this is um + +00:21:20.080 --> 00:21:25.080 +actually in a maximum likelihood trained + +00:21:23.039 --> 00:21:27.799 +model better + +00:21:25.080 --> 00:21:29.559 +search uh in in other words finding a + +00:21:27.799 --> 00:21:32.159 +better model score + +00:21:29.559 --> 00:21:36.120 +doesn't necessarily give you a better + +00:21:32.159 --> 00:21:37.840 +generation result and this is an example + +00:21:36.120 --> 00:21:39.080 +uh from machine translation from a + +00:21:37.840 --> 00:21:41.880 +really long time + +00:21:39.080 --> 00:21:44.000 +ago uh but you know it still persists + +00:21:41.880 --> 00:21:47.520 +today which is they did beam search with + +00:21:44.000 --> 00:21:53.600 +a larger and larger beam + +00:21:47.520 --> 00:21:56.640 +and the be the best beam for finding um + +00:21:53.600 --> 00:21:59.640 +the best scoring output basically was + +00:21:56.640 --> 00:22:01.600 +four and then the accuracy goes down and + +00:21:59.640 --> 00:22:05.559 +down and down as they find a better + +00:22:01.600 --> 00:22:07.200 +output and does anyone remember when we + +00:22:05.559 --> 00:22:09.679 +talked about the generation class where + +00:22:07.200 -->
00:22:09.679 +this comes + +00:22:10.120 --> 00:22:15.000 +from I don't know how explicitly we said + +00:22:12.960 --> 00:22:18.600 +we mentioned it in the generation class + +00:22:15.000 --> 00:22:20.360 +but basically the problem is um maximum + +00:22:18.600 --> 00:22:22.559 +likelihood trained models like shorter + +00:22:20.360 --> 00:22:25.240 +outputs generally because if as we make + +00:22:22.559 --> 00:22:27.760 +the output longer uh the probability of + +00:22:25.240 --> 00:22:29.679 +the longer outputs goes down so as you + +00:22:27.760 --> 00:22:32.039 +improve the beam it will start + +00:22:29.679 --> 00:22:34.799 +generating shorter and shorter outputs + +00:22:32.039 --> 00:22:36.480 +and because of that the score goes down + +00:22:34.799 --> 00:22:39.039 +because BLEU score doesn't like outputs + +00:22:36.480 --> 00:22:41.520 +that are too short essentially so there + +00:22:39.039 --> 00:22:44.039 +are um there are hacks around this for + +00:22:41.520 --> 00:22:46.200 +beam search where essentially what you + +00:22:44.039 --> 00:22:48.559 +do is you uh take the average log + +00:22:46.200 --> 00:22:51.159 +likelihood of each token instead of the + +00:22:48.559 --> 00:22:52.760 +overall log likelihood of the sequence um + +00:22:51.159 --> 00:22:54.679 +and that improves a little bit but still + +00:22:52.760 --> 00:22:59.720 +you can see as you search more the the + +00:22:54.679 --> 00:23:01.440 +accuracy goes down so um so that's the + +00:22:59.720 --> 00:23:04.039 +the general idea + +00:23:01.440 --> 00:23:08.760 +here there's a bunch of ways you can fix + +00:23:04.039 --> 00:23:10.600 +this um the most principled way is to + +00:23:08.760 --> 00:23:12.760 +use a method like reinforcement learning + +00:23:10.600 --> 00:23:14.120 +or something uh some sort of you know + +00:23:12.760 --> 00:23:15.520 +structured training algorithm that + +00:23:14.120 --> 00:23:17.159 +allows you to train your models so that + +00:23:15.520 -->
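The length-normalization hack mentioned above can be illustrated with a toy scorer (an illustrative sketch, not the lecture's code): summing per-token log likelihoods favors short outputs, while dividing by length removes that bias.

```python
def beam_score(token_logprobs, length_normalize=False):
    """Score a candidate output from its per-token log likelihoods.
    Summing favors short outputs; dividing by length is the common
    heuristic fix described in the lecture."""
    total = sum(token_logprobs)
    return total / len(token_logprobs) if length_normalize else total

short_out = [-0.9, -0.9]      # 2 tokens, fairly unsure at each step
long_out = [-0.3] * 8         # 8 tokens, confident at each step

# Raw sum prefers the short output (-1.8 > -2.4);
# length normalization prefers the long one (-0.3 > -0.9).
```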
00:23:20.159 +you don't get these bad + +00:23:17.159 --> 00:23:22.159 +outputs um another way that's much + +00:23:20.159 --> 00:23:25.640 +easier is to do early stopping with the + +00:23:22.159 --> 00:23:30.480 +evaluation metric as opposed to um early + +00:23:25.640 --> 00:23:32.840 +stopping with the loss and by doing this + +00:23:30.480 --> 00:23:34.520 +you would stop here so you would stop + +00:23:32.840 --> 00:23:37.159 +where you get the highest evaluation + +00:23:34.520 --> 00:23:42.600 +metric uh that you care about instead of + +00:23:37.159 --> 00:23:44.400 +stopping here uh so that's um that's one + +00:23:42.600 --> 00:23:46.600 +way you can fix this + +00:23:44.400 --> 00:23:49.760 +problem does anyone have an idea about + +00:23:46.600 --> 00:23:49.760 +why this might be a bad + +00:23:49.840 --> 00:23:57.159 +idea why might it be a bad idea to stop + +00:23:52.480 --> 00:23:57.159 +here instead of stopping here for + +00:23:57.440 --> 00:24:00.440 +example + +00:24:05.320 --> 00:24:10.200 +yeah it's kind of overfitting it's + +00:24:07.760 --> 00:24:13.640 +overfitting in a particular way um but + +00:24:10.200 --> 00:24:16.000 +remember here this is still the accuracy + +00:24:13.640 --> 00:24:18.400 +on the dev set so we're not overfitting + +00:24:16.000 --> 00:24:20.080 +so much that the dev accuracy is going + +00:24:18.400 --> 00:24:24.279 +down that would be a different variety + +00:24:20.080 --> 00:24:27.360 +of overfitting but any any + +00:24:24.279 --> 00:24:29.799 +ideas go for it we don't want to be too + +00:24:27.360 --> 00:24:31.600 +confident yeah exactly we don't want it + +00:24:29.799 --> 00:24:32.880 +to be too confident in its wrong answers + +00:24:31.600 --> 00:24:35.279 +and we talked about + +00:24:32.880 --> 00:24:38.000 +calibration um where calibration is + +00:24:35.279 --> 00:24:40.039 +basically like how accurate are the + +00:24:38.000 --> 00:24:41.480 +probability estimates so this model over + +00:24:40.039 
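Early stopping on the evaluation metric, as suggested above, amounts to picking the checkpoint with the best dev metric rather than the best dev loss. A minimal sketch with a hypothetical training history (field names are illustrative):

```python
def best_checkpoint(history, key="dev_metric"):
    """history: one dict per epoch with 'dev_loss' and 'dev_metric'.
    Early stopping on the evaluation metric picks the epoch with the
    best metric, which need not be the epoch with the best loss."""
    return max(range(len(history)), key=lambda i: history[i][key])

history = [
    {"dev_loss": 0.60, "dev_metric": 0.80},
    {"dev_loss": 0.45, "dev_metric": 0.88},  # best (lowest) dev loss
    {"dev_loss": 0.55, "dev_metric": 0.91},  # best dev metric: stop here
    {"dev_loss": 0.70, "dev_metric": 0.90},
]
```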
--> 00:24:43.600 +here is going to be really poorly + +00:24:41.480 --> 00:24:45.159 +calibrated it's going to be very + +00:24:43.600 --> 00:24:46.240 +confident regardless of whether it's + +00:24:45.159 --> 00:24:49.440 +correct or not and that could be a + +00:24:46.240 --> 00:24:50.840 +problem in downstream uh downstream tasks + +00:24:49.440 --> 00:24:52.130 +there's also another thing that I I + +00:24:50.840 --> 00:24:55.189 +forgot to put on + +00:24:52.130 --> 00:24:55.189 +[Music] + +00:24:57.320 --> 00:25:00.320 +um + +00:25:02.919 --> 00:25:08.120 +that I forgot to put on the slides but + +00:25:04.520 --> 00:25:10.720 +it's a um an interesting phenomenon that + +00:25:08.120 --> 00:25:12.720 +actually um kind of a lot of people in + +00:25:10.720 --> 00:25:16.360 +interpretability are interested in it's + +00:25:12.720 --> 00:25:18.120 +this uh grokking uh + +00:25:16.360 --> 00:25:19.640 +generalization beyond overfitting on + +00:25:18.120 --> 00:25:21.120 +small algorithmic data sets and + +00:25:19.640 --> 00:25:27.360 +basically what they + +00:25:21.120 --> 00:25:29.720 +show is um you can be training for a + +00:25:27.360 --> 00:25:31.320 +very very long time + +00:25:29.720 --> 00:25:34.279 +um + +00:25:31.320 --> 00:25:35.919 +and uh like reducing the loss reducing + +00:25:34.279 --> 00:25:40.399 +the loss reducing the loss and reducing + +00:25:35.919 --> 00:25:42.480 +the loss and it's only after a very long + +00:25:40.399 --> 00:25:43.840 +time does your model start generalizing + +00:25:42.480 --> 00:25:48.240 +well and getting good + +00:25:43.840 --> 00:25:49.799 +accuracy um the this paper the types of + +00:25:48.240 --> 00:25:52.120 +data sets it's talking about are data + +00:25:49.799 --> 00:25:55.520 +sets where you need to get many things + +00:25:52.120 --> 00:25:58.640 +in a row correct before you get the + +00:25:55.520 --> 00:26:00.880 +final answer correct so basically you + +00:25:58.640 --> 00:26:02.320 +need to get
like 20 steps in a row or 50 + +00:26:00.880 --> 00:26:06.200 +steps in a row correct before you get + +00:26:02.320 --> 00:26:10.679 +the final answer correct and um + +00:26:06.200 --> 00:26:13.000 +basically the reason why this happens is + +00:26:10.679 --> 00:26:15.720 +because this accuracy will keep going up + +00:26:13.000 --> 00:26:17.760 +but you only get the accuracy of each + +00:26:15.720 --> 00:26:20.520 +individual decision will keep going up + +00:26:17.760 --> 00:26:22.880 +but you only get marked like + +00:26:20.520 --> 00:26:25.440 +correct uh + +00:26:22.880 --> 00:26:29.799 +after you get like all 50 in a row + +00:26:25.440 --> 00:26:31.200 +correct so um it this difference can be + +00:26:29.799 --> 00:26:33.039 +even more Stark when you're talking + +00:26:31.200 --> 00:26:35.399 +about things that require like 50 steps + +00:26:33.039 --> 00:26:37.399 +of reasoning or like multiple steps of + +00:26:35.399 --> 00:26:39.559 +reasoning but like 50 token Generations + +00:26:37.399 --> 00:26:42.679 +correct before you get them right so um + +00:26:39.559 --> 00:26:42.679 +that's another thing to be aware + +00:26:43.000 --> 00:26:49.240 +of cool um so now I want to switch gears + +00:26:46.960 --> 00:26:51.919 +a little bit to actionable evaluation + +00:26:49.240 --> 00:26:54.240 +and how you can um evaluate your models + +00:26:51.919 --> 00:26:56.640 +in a way that makes it easy to find uh + +00:26:54.240 --> 00:26:58.600 +next steps to be + +00:26:56.640 --> 00:27:00.159 +improving uh are there any questions + +00:26:58.600 --> 00:27:02.600 +about the debugging part before we get + +00:27:00.159 --> 00:27:02.600 +into this + +00:27:03.360 --> 00:27:10.120 +part okay I'll + +00:27:05.880 --> 00:27:12.840 +go so um my first suggestion with + +00:27:10.120 --> 00:27:15.559 +respect to how you can actually you know + +00:27:12.840 --> 00:27:17.440 +improve systems is make sure that you're + +00:27:15.559 --> 00:27:21.039 +looking at the data 
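The "many steps in a row" point above has a simple quantitative form: if the final answer is only right when every one of n intermediate steps is right, sequence-level accuracy is roughly the per-step accuracy to the n-th power (assuming, for illustration, independent errors).

```python
# Sequence-level accuracy when all n steps must be correct,
# under an (assumed) independence of per-step errors.
def sequence_accuracy(per_step_acc, n_steps):
    return per_step_acc ** n_steps

# Per-step accuracy creeping from 95% toward 99.9% barely moves the
# 50-step sequence accuracy until the very end:
#   sequence_accuracy(0.95, 50)  ~ 0.08
#   sequence_accuracy(0.99, 50)  ~ 0.61
#   sequence_accuracy(0.999, 50) ~ 0.95
```

This is one way to see why loss can improve steadily for a long time while end-task accuracy stays near zero, then jumps.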
that you're + +00:27:17.440 --> 00:27:22.679 +using and um both bugs and new research + +00:27:21.039 --> 00:27:24.080 +directions can be found by looking at + +00:27:22.679 --> 00:27:27.159 +your model + +00:27:24.080 --> 00:27:31.640 +outputs um + +00:27:27.159 --> 00:27:33.279 +so to give one example um of a very + +00:27:31.640 --> 00:27:36.200 +common mistake that you can make when + +00:27:33.279 --> 00:27:40.159 +you're creating a a generation algorithm + +00:27:36.200 --> 00:27:41.600 +it's these sort of off-by-one errors um + +00:27:40.159 --> 00:27:43.919 +so like let's say you implemented a + +00:27:41.600 --> 00:27:46.039 +translation system and it's generating + +00:27:43.919 --> 00:27:49.440 +outputs like went to the store yesterday + +00:27:46.039 --> 00:27:51.080 +bought a dog um you can immediately look + +00:27:49.440 --> 00:27:53.440 +at this and say hey this doesn't look + +00:27:51.080 --> 00:27:58.360 +like natural English what's going uh + +00:27:53.440 --> 00:28:00.000 +what's going on and the the problem here + +00:27:58.360 --> 00:28:04.600 +is + +00:28:00.000 --> 00:28:04.600 +you're um you're doing something + +00:28:05.159 --> 00:28:12.720 +like output uh + +00:28:09.240 --> 00:28:14.600 +one uh and you have a slice of like one + +00:28:12.720 --> 00:28:17.399 +instead of zero here or something like + +00:28:14.600 --> 00:28:18.640 +this and so this is a really silly error + +00:28:17.399 --> 00:28:21.000 +that you might just make a mistake on + +00:28:18.640 --> 00:28:23.679 +Python on your you know pre-processing + +00:28:21.000 --> 00:28:26.200 +or postprocessing or something like this + +00:28:23.679 --> 00:28:28.399 +um but the problem is like if you look + +00:28:26.200 --> 00:28:30.600 +at your BLEU score based evaluation or + +00:28:28.399 --> 00:28:32.840 +something like that you'll have like + +00:28:30.600 --> 00:28:34.760 +you'll be one point worse or two points + +00:28:32.840 --> 00:28:36.720 +worse or something like that
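A toy version of the off-by-one slicing bug described above (the token list is illustrative): post-processing that starts from index 1 instead of 0 silently drops the first token of every output.

```python
# Off-by-one slice in post-processing: tokens[1:] drops the first word.
tokens = ["I", "went", "to", "the", "store", "yesterday", "and",
          "bought", "a", "dog"]

buggy = " ".join(tokens[1:])   # off by one: drops "I"
fixed = " ".join(tokens)       # what was intended
```

Looking at even a single output catches this instantly, while an aggregate metric only shows a point or two of degradation.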
and you'll + +00:28:34.760 --> 00:28:38.600 +be like Oh I'm I'm two points worse why + +00:28:36.720 --> 00:28:40.600 +am I two points worse than + +00:28:38.600 --> 00:28:43.760 +the state of the art and it turns out it + +00:28:40.600 --> 00:28:45.279 +was a really like silly thing like this + +00:28:43.760 --> 00:28:46.519 +and immediately you'll see this if you + +00:28:45.279 --> 00:28:47.960 +look at your data but if you're doing + +00:28:46.519 --> 00:28:49.600 +all your experiments and just looking at + +00:28:47.960 --> 00:28:51.519 +the numbers it's really hard to tell you + +00:28:49.600 --> 00:28:53.720 +know why this is + +00:28:51.519 --> 00:28:58.720 +happening + +00:28:53.720 --> 00:29:02.360 +um another thing is uh if you + +00:28:58.720 --> 00:29:04.799 +have a good eye and can like just look + +00:29:02.360 --> 00:29:07.799 +through the data points + +00:29:04.799 --> 00:29:09.640 +um we as humans are pretty good uh + +00:29:07.799 --> 00:29:14.200 +pattern recognizers and especially you + +00:29:09.640 --> 00:29:16.360 +know CMU students uh you're uh very good + +00:29:14.200 --> 00:29:18.519 +and quick at picking up on things so if + +00:29:16.360 --> 00:29:20.600 +you look at the data and pore through + +00:29:18.519 --> 00:29:22.880 +things you can uh probably pick up + +00:29:20.600 --> 00:29:24.880 +patterns about why things are failing + +00:29:22.880 --> 00:29:27.720 +and so um you know you might look and + +00:29:24.880 --> 00:29:29.919 +see that uh compared to some other model + +00:29:27.720 --> 00:29:31.679 +your model is really bad at answering + +00:29:29.919 --> 00:29:33.679 +questions about people or something like + +00:29:31.679 --> 00:29:36.480 +that and then you figure out you'll need + +00:29:33.679 --> 00:29:38.320 +a better model of uh people or your RAG + +00:29:36.480 --> 00:29:40.519 +systems uh that you're building for + +00:29:38.320 --> 00:29:42.880 +assignment two is maybe failing on all + +00:29:40.519
--> 00:29:45.559 +the research related questions so you + +00:29:42.880 --> 00:29:47.080 +need to come up with the research uh + +00:29:45.559 --> 00:29:48.320 +like scrape more research data or + +00:29:47.080 --> 00:29:50.080 +something like + +00:29:48.320 --> 00:29:53.840 +that + +00:29:50.080 --> 00:29:55.760 +um so there are methods to do this more + +00:29:53.840 --> 00:29:58.039 +systematically and this is something I + +00:29:55.760 --> 00:29:59.720 +picked up when I was doing an internship + +00:29:58.039 --> 00:30:04.080 +at Google and it really stuck with me + +00:29:59.720 --> 00:30:09.080 +for you know 14 uh 14 years now I guess + +00:30:04.080 --> 00:30:10.960 +13 years um so uh a very simple way to + +00:30:09.080 --> 00:30:12.600 +do this more systematically than just + +00:30:10.960 --> 00:30:16.200 +browsing through things is to randomly + +00:30:12.600 --> 00:30:19.000 +sample a 100 outputs and look at 100 + +00:30:16.200 --> 00:30:21.840 +errors and try to group them into some + +00:30:19.000 --> 00:30:23.799 +sort of typology and say oh uh this kind + +00:30:21.840 --> 00:30:27.799 +of error is particularly + +00:30:23.799 --> 00:30:31.279 +frequent and this is just one example of + +00:30:27.799 --> 00:30:33.120 +a typology that was defined by Vilar et al. + +00:30:31.279 --> 00:30:37.320 +um where they tried to take machine + +00:30:33.120 --> 00:30:39.480 +translation errors and group them into + +00:30:37.320 --> 00:30:43.440 +uh various varieties like correct words + +00:30:39.480 --> 00:30:46.640 +filler words local uh local range long + +00:30:43.440 --> 00:30:48.440 +range um uh sorry word word level word + +00:30:46.640 --> 00:30:50.440 +ordering errors local range long range + +00:30:48.440 --> 00:30:54.279 +phrase level local range long range and + +00:30:50.440 --> 00:30:55.679 +stuff like this um you can definitely + +00:30:54.279 --> 00:30:58.399 +look at previous work and see the + +00:30:55.679 --> 00:31:00.559 +typologies of errors
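The sample-and-tally workflow described above can be sketched as below. The category names are placeholders: the lecture's point is that you should define your own typology based on what your system actually gets wrong.

```python
import random
from collections import Counter

def sample_for_error_analysis(outputs, n=100, seed=0):
    """Randomly sample n outputs to annotate by hand with an error
    typology.  Fixing the seed keeps the sample reproducible."""
    rng = random.Random(seed)
    return rng.sample(outputs, min(n, len(outputs)))

def summarize(annotations):
    """annotations: category labels assigned during manual review.
    Returns each category's share, most frequent first."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {cat: round(c / total, 2) for cat, c in counts.most_common()}
```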
that they used but + +00:30:58.399 --> 00:31:02.440 +the problem is like systems get better + +00:31:00.559 --> 00:31:04.240 +and actually I don't think this is a + +00:31:02.440 --> 00:31:06.760 +super relevant typology for machine + +00:31:04.240 --> 00:31:10.120 +translation anymore uh because machine + +00:31:06.760 --> 00:31:12.159 +translation systems like they don't make + +00:31:10.120 --> 00:31:14.639 +a whole lot of local range word level + +00:31:12.159 --> 00:31:16.159 +errors anymore and rather we might want + +00:31:14.639 --> 00:31:18.279 +to know more fine grained things like are they + +00:31:16.159 --> 00:31:21.720 +making mistakes on named entities or + +00:31:18.279 --> 00:31:24.720 +other things like that so actually + +00:31:21.720 --> 00:31:24.720 +we + +00:31:26.919 --> 00:31:29.919 +um + +00:31:30.519 --> 00:31:36.279 +did a more recent thing it's I + +00:31:34.279 --> 00:31:39.159 +guess four years ago now um but it was + +00:31:36.279 --> 00:31:42.720 +when uh people first started saying that + +00:31:39.159 --> 00:31:46.200 +machine translation systems are about as + +00:31:42.720 --> 00:31:50.720 +good as humans at doing a + +00:31:46.200 --> 00:31:50.720 +translation and when we did this we + +00:31:52.480 --> 00:31:58.440 +compared we compared machine translation + +00:31:55.200 --> 00:31:59.960 +systems to humans and we tried to find + +00:31:58.440 --> 00:32:02.240 +you know different types of things and + +00:31:59.960 --> 00:32:03.919 +we were inspired by Vilar et al. but we recreated + +00:32:02.240 --> 00:32:06.159 +our typology based on the things that we + +00:32:03.919 --> 00:32:10.279 +thought were you know the most important + +00:32:06.159 --> 00:32:13.399 +types of errors in like 2020 instead of + +00:32:10.279 --> 00:32:16.799 +2006 so this is really helpful the + +00:32:13.399 --> 00:32:19.039 +reason why it's really helpful is if you + +00:32:16.799 -->
00:32:23.440 +the outputs that you're looking at and + +00:32:20.440 --> 00:32:25.279 +identify the most like prominent types + +00:32:23.440 --> 00:32:27.440 +of errors that you're facing it often + +00:32:25.279 --> 00:32:29.360 +leads you to the most successful ways of + +00:32:27.440 --> 00:32:31.519 +improving the accuracy of your systems + +00:32:29.360 --> 00:32:33.120 +because you might if you don't do this + +00:32:31.519 --> 00:32:35.000 +you might be focusing on an error type + +00:32:33.120 --> 00:32:38.000 +that's not actually an issue it's kind + +00:32:35.000 --> 00:32:39.200 +of like if you learned in uh programming + +00:32:38.000 --> 00:32:40.799 +you know software engineering or + +00:32:39.200 --> 00:32:42.639 +something like that you should never + +00:32:40.799 --> 00:32:46.360 +optimize your code until you run a + +00:32:42.639 --> 00:32:47.799 +profiler um because actually your code + +00:32:46.360 --> 00:32:50.320 +might be slow in a place that you never + +00:32:47.799 --> 00:32:52.720 +expected and so it's kind of the same + +00:32:50.320 --> 00:32:56.600 +principle here right so don't optimize + +00:32:52.720 --> 00:32:58.720 +your system's errors in a place uh where + +00:32:56.600 --> 00:33:03.240 +like actually it's not making errors + +00:32:58.720 --> 00:33:06.440 +so um that's a general principle + +00:33:03.240 --> 00:33:09.440 +here uh cool another thing you can do is + +00:33:06.440 --> 00:33:11.760 +quantitative analysis so um if you can + +00:33:09.440 --> 00:33:13.880 +think of the phenomenon that you choose + +00:33:11.760 --> 00:33:17.480 +to focus on um is that phenomenon + +00:33:13.880 --> 00:33:19.159 +getting better so if you focused on uh + +00:33:17.480 --> 00:33:22.240 +something that should improve the + +00:33:19.159 --> 00:33:23.760 +quality of low frequency words uh you + +00:33:22.240 --> 00:33:26.200 +can check if the accuracy on low + +00:33:23.760 --> 00:33:27.399 +frequency words is increasing if you
+00:33:26.200 --> 00:33:29.600 +focused on something that should be + +00:33:27.399 --> 00:33:32.120 +improving the syntax in a low resource + +00:33:29.600 --> 00:33:36.080 +language you can measure um whether it's + +00:33:32.120 --> 00:33:37.360 +doing better on word ordering or uh long + +00:33:36.080 --> 00:33:41.840 +distance + +00:33:37.360 --> 00:33:44.360 +dependencies um if you focused on + +00:33:41.840 --> 00:33:46.039 +improving a search algorithm for you + +00:33:44.360 --> 00:33:47.519 +know generation or something like that + +00:33:46.039 --> 00:33:49.880 +are the number of search errors that + +00:33:47.519 --> 00:33:53.120 +you're encountering being reduced so + +00:33:49.880 --> 00:33:56.320 +depending on what you planned on uh you + +00:33:53.120 --> 00:33:57.919 +know improving it's often a good idea to + +00:33:56.320 --> 00:33:59.480 +measure more directly whether it's + +00:33:57.919 --> 00:34:00.559 +improving the the thing that you think + +00:33:59.480 --> 00:34:04.880 +it should + +00:34:00.559 --> 00:34:06.000 +improve um one example of um so I I + +00:34:04.880 --> 00:34:09.240 +basically + +00:34:06.000 --> 00:34:11.240 +created since my experience doing this + +00:34:09.240 --> 00:34:15.159 +manually uh when I I was on an + +00:34:11.240 --> 00:34:18.280 +internship at Google um I've + +00:34:15.159 --> 00:34:20.639 +gradually improved my methodology for + +00:34:18.280 --> 00:34:20.639 +doing + +00:34:21.679 --> 00:34:26.320 +this and um and worked on automating + +00:34:24.879 --> 00:34:30.599 +things and + +00:34:26.320 --> 00:34:33.839 +so the first thing I had was a super + +00:34:30.599 --> 00:34:35.560 +hacky uh hacky script that basically + +00:34:33.839 --> 00:34:37.720 +writes out HTML + +00:34:35.560 --> 00:34:39.320 +files um and then I I had something + +00:34:37.720 --> 00:34:42.320 +called ExplainaBoard where we had a + +00:34:39.320 --> 00:34:44.879 +leaderboard and uh recently one of the + +00:34:42.320 -->
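The "measure the phenomenon directly" advice above can be made concrete with a small bucketed-accuracy helper (an illustrative sketch; the bucketing function is whatever matches your hypothesis, e.g. word-frequency bands):

```python
from collections import defaultdict

def accuracy_by_bucket(examples, bucket_fn):
    """examples: (item, correct) pairs; bucket_fn maps an item to a
    bucket name, e.g. a word-frequency band.  Reports accuracy per
    bucket so you can check whether the phenomenon you targeted
    (say, low-frequency words) actually improved."""
    hits, totals = defaultdict(int), defaultdict(int)
    for item, correct in examples:
        b = bucket_fn(item)
        totals[b] += 1
        hits[b] += bool(correct)
    return {b: hits[b] / totals[b] for b in totals}
```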
00:34:47.800 +things I've worked on is uh this uh + +00:34:44.879 --> 00:34:53.200 +together with um Alex Alex Cabrera who's + +00:34:47.800 --> 00:34:56.760 +a student here um is this toolkit called + +00:34:53.200 --> 00:34:59.640 +Zeno and um this is just an example from + +00:34:56.760 --> 00:34:59.640 +machine translation + +00:35:03.440 --> 00:35:09.200 +it's being a little bit + +00:35:06.599 --> 00:35:11.079 +slow um but basically what it does is it + +00:35:09.200 --> 00:35:14.920 +allows you to look at the data on the + +00:35:11.079 --> 00:35:18.000 +right side um and so these are just + +00:35:14.920 --> 00:35:19.680 +examples um but you can go in and do + +00:35:18.000 --> 00:35:22.760 +things like say Okay I want to look at + +00:35:19.680 --> 00:35:24.640 +all machine translation examples + +00:35:22.760 --> 00:35:28.040 +from + +00:35:24.640 --> 00:35:30.920 +uh Hausa and so it shows you the ones + +00:35:28.040 --> 00:35:32.960 +from Hausa I want to look + +00:35:30.920 --> 00:35:36.240 +at all + +00:35:32.960 --> 00:35:38.880 +examples let me clear that off I want to + +00:35:36.240 --> 00:35:40.800 +look at all examples where the accuracy + +00:35:38.880 --> 00:35:43.440 +is + +00:35:40.800 --> 00:35:45.280 +low um and so now I can look at all the + +00:35:43.440 --> 00:35:49.640 +examples where the accuracy is low and I + +00:35:45.280 --> 00:35:52.640 +I can go in and uh uh examine them so uh + +00:35:49.640 --> 00:35:54.880 +you can also go in and build charts like + +00:35:52.640 --> 00:35:58.280 +this so like what is the overall + +00:35:54.880 --> 00:36:02.200 +performance um what is is the + +00:35:58.280 --> 00:36:05.960 +performance what is the performance + +00:36:02.200 --> 00:36:07.520 +um on different scripts so you can see + +00:36:05.960 --> 00:36:10.880 +which model which model is doing better + +00:36:07.520 --> 00:36:13.960 +at scripts and stuff like that so um or + +00:36:10.880 --> 00:36:16.000 +you can put things side by
side and say + +00:36:13.960 --> 00:36:20.720 +okay I want to find all the examples + +00:36:16.000 --> 00:36:21.800 +where uh chat GPT is doing much worse + +00:36:20.720 --> 00:36:25.280 +than GPT + +00:36:21.800 --> 00:36:28.240 +4 uh or like GPT 3.5 is doing much worse + +00:36:25.280 --> 00:36:29.680 +than gp4 and here we can see that oh in + +00:36:28.240 --> 00:36:31.520 +this case it's generating something in + +00:36:29.680 --> 00:36:34.079 +the wrong script or something like that + +00:36:31.520 --> 00:36:37.839 +so um there's also tooling that you can + +00:36:34.079 --> 00:36:40.480 +use to make this easier as + +00:36:37.839 --> 00:36:43.520 +well and the way uh the way you use this + +00:36:40.480 --> 00:36:46.079 +is you basically + +00:36:43.520 --> 00:36:48.000 +um uh create a pandas data frame with + +00:36:46.079 --> 00:36:49.680 +all of your data in it and you upload + +00:36:48.000 --> 00:36:52.400 +the pandas data frame with any metadata + +00:36:49.680 --> 00:36:54.280 +you want to use and you can uh use and I + +00:36:52.400 --> 00:36:56.520 +think VJ will be having a recitation on + +00:36:54.280 --> 00:37:02.560 +this if you're interested in taking a + +00:36:56.520 --> 00:37:04.680 +look cool um so that is the my part and + +00:37:02.560 --> 00:37:07.760 +then we'll be doing Nishant next while + +00:37:04.680 --> 00:37:09.480 +Nishant comes up to set up are there any + +00:37:07.760 --> 00:37:10.520 +questions about the thing that I talked + +00:37:09.480 --> 00:37:14.079 +about + +00:37:10.520 --> 00:37:14.079 +here yeah + +00:37:14.359 --> 00:37:18.200 +so that when I + +00:37:26.200 --> 00:37:30.079 +regular um + +00:37:28.160 --> 00:37:32.560 +is that does that make a difference in + +00:37:30.079 --> 00:37:35.400 +terms of like what we're expecting when + +00:37:32.560 --> 00:37:38.800 +we're evaluating the model + +00:37:35.400 --> 00:37:41.720 +model yeah so just to repeat the + +00:37:38.800 --> 00:37:43.680 +question it's a a 
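The DataFrame-based workflow described above can be sketched as follows. This only shows assembling the frame and the kinds of slices demonstrated in the lecture; the column names and values are illustrative, and the actual Zeno upload call is omitted since its API is not described here.

```python
import pandas as pd

# Assemble model outputs plus any metadata you want to slice on into a
# single DataFrame; a frame like this is what you would hand to an
# analysis tool such as Zeno (upload step omitted).
df = pd.DataFrame({
    "id": [0, 1, 2, 3],
    "source": ["...", "...", "...", "..."],
    "output": ["...", "...", "...", "..."],
    "language": ["hausa", "hausa", "swahili", "swahili"],
    "accuracy": [0.9, 0.2, 0.8, 0.3],
})

# The slices shown in the demo are then one-liners:
low_acc = df[df["accuracy"] < 0.5]                  # low-accuracy examples
by_lang = df.groupby("language")["accuracy"].mean()  # per-language chart data
```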
great question so if + +00:37:41.720 --> 00:37:49.440 +you apply + +00:37:43.680 --> 00:37:49.440 +regularization um will that change the + +00:37:49.640 --> 00:37:54.079 +overall expectation for the model loss + +00:37:52.040 --> 00:37:55.680 +so I was saying loss should converge to + +00:37:54.079 --> 00:37:57.200 +zero once you start applying + +00:37:55.680 --> 00:37:59.079 +regularization or weight decay or + +00:37:57.200 --> 00:38:02.640 +something like that it definitely might + +00:37:59.079 --> 00:38:04.520 +not converge to zero um and the reason why + +00:38:02.640 --> 00:38:06.520 +is because once you start applying + +00:38:04.520 --> 00:38:09.319 +regularization there is no zero loss + +00:38:06.520 --> 00:38:11.480 +solution um because in order to reduce the + +00:38:09.319 --> 00:38:14.960 +loss you need to move things away + +00:38:11.480 --> 00:38:16.359 +move weights away from zero um but when + +00:38:14.960 --> 00:38:19.560 +you move weights away from zero the + +00:38:16.359 --> 00:38:22.200 +regularization loss becomes nonzero so + +00:38:19.560 --> 00:38:24.599 +one thing you can do however is measure + +00:38:22.200 --> 00:38:26.880 +the losses separately so measure the + +00:38:24.599 --> 00:38:27.960 +regularization component of the loss and + +00:38:26.880 --> 00:38:29.760 +the um + +00:38:27.960 --> 00:38:31.920 +the log likelihood component of the + +00:38:29.760 --> 00:38:33.560 +loss and with any reasonable + +00:38:31.920 --> 00:38:35.280 +regularization and a reasonably + +00:38:33.560 --> 00:38:38.000 +parameterized model I do think the loss + +00:38:35.280 --> 00:38:39.760 +should be getting closer to zero like the + +00:38:38.000 --> 00:38:41.920 +actual likelihood should be getting closer + +00:38:39.760 --> 00:38:41.920 +to + +00:38:42.200 --> 00:38:46.520 +zero uh you were using an extremely + +00:38:44.480 --> 00:38:49.240 +small model in the MNIST example though + +00:38:46.520 --> 00:38:53.680 +so that might make it more
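The "measure the losses separately" suggestion above can be sketched as follows (an illustrative sketch; the weights and hyperparameters are made up): with weight decay the total loss will not reach zero, but the data term can still be tracked on its own.

```python
# Track the data term and the regularization term of the loss
# separately: with weight decay the *total* loss will not go to zero,
# but the data (negative log likelihood) term still can.
def total_loss(nll, weights, weight_decay=0.01):
    l2 = weight_decay * sum(w * w for w in weights)
    return {"nll": nll, "l2": l2, "total": nll + l2}

loss = total_loss(nll=0.02, weights=[0.5, -1.0, 2.0], weight_decay=0.01)
# loss["nll"] can approach zero even while loss["total"] stays bounded
# away from it by the L2 term.
```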
+00:38:49.240 --> 00:38:56.440 +difficult yeah and any other + +00:38:53.680 --> 00:38:59.440 +things okay if not + +00:38:56.440 --> 00:38:59.440 +I'll + +00:39:13.720 --> 00:39:19.160 +all right can everyone hear + +00:39:15.319 --> 00:39:21.440 +me sweet okay move this it looks like + +00:39:19.160 --> 00:39:24.200 +I'm talking to someone instead of + +00:39:21.440 --> 00:39:24.200 +between both of + +00:39:26.359 --> 00:39:29.359 +you + +00:39:33.319 --> 00:39:37.680 +all right so hi everyone um I'm going to + +00:39:35.720 --> 00:39:39.400 +talk about model interpretability for + +00:39:37.680 --> 00:39:41.680 +for those who don't know me I'm one of + +00:39:39.400 --> 00:39:44.359 +your TAs I'm a first year PhD student + +00:39:41.680 --> 00:39:47.359 +working with Mona Diab on model + +00:39:44.359 --> 00:39:47.359 +interpretability + +00:39:48.800 --> 00:39:55.400 +um where what do I + +00:39:51.839 --> 00:39:59.119 +click your your mouse should be there + +00:39:55.400 --> 00:40:01.599 +yeah just + +00:39:59.119 --> 00:40:04.160 +cool okay um + +00:40:01.599 --> 00:40:06.079 +so what I want you to take away if you + +00:40:04.160 --> 00:40:08.359 +if you fall asleep because this is too boring + +00:40:06.079 --> 00:40:09.839 +here are sort of the two main takeaways + +00:40:08.359 --> 00:40:12.040 +one I want to convince you that model + +00:40:09.839 --> 00:40:14.720 +interpretability is important to study + +00:40:12.040 --> 00:40:16.720 +and two I want I want you to find this + +00:40:14.720 --> 00:40:18.880 +interesting um and something you want to + +00:40:16.720 --> 00:40:20.079 +explore more there's a bunch of details + +00:40:18.880 --> 00:40:21.800 +here this is going to be kind of a + +00:40:20.079 --> 00:40:24.599 +whirlwind tour you're not going to get + +00:40:21.800 --> 00:40:27.440 +super deep into anything um so hopefully + +00:40:24.599 --> 00:40:28.839 +this acts as a starting point um more + +00:40:27.440 --> 00:40:33.800 +than anything
+00:40:28.839 --> 00:40:37.040 +else so interpretability in AI um the + +00:40:33.800 --> 00:40:38.480 +the definition is it's the study of + +00:40:37.040 --> 00:40:40.440 +understanding the decisions that AI + +00:40:38.480 --> 00:40:42.640 +systems make and putting them into + +00:40:40.440 --> 00:40:44.280 +easily human understandable terms this + +00:40:42.640 --> 00:40:47.640 +can mean a lot of different things and + +00:40:44.280 --> 00:40:49.280 +this is often really hard um and the why + +00:40:47.640 --> 00:40:51.319 +is to use that understanding to + +00:40:49.280 --> 00:40:54.040 +iteratively design systems that + +00:40:51.319 --> 00:40:56.240 +are better they're more more performant + +00:40:54.040 --> 00:40:59.240 +but also those that are more human + +00:40:56.240 --> 00:40:59.240 +understandable + +00:41:00.119 --> 00:41:06.599 +um so interpretability is this big blob + +00:41:03.720 --> 00:41:08.440 +but there's a bunch of other uh spheres + +00:41:06.599 --> 00:41:11.920 +that intersect with it this is a super + +00:41:08.440 --> 00:41:14.920 +incomplete list uh so bear with me um + +00:41:11.920 --> 00:41:16.560 +causality and data intersect with this + +00:41:14.920 --> 00:41:19.000 +there's aspects that are interpretable + +00:41:16.560 --> 00:41:20.480 +there's aspects that matter here um + +00:41:19.000 --> 00:41:22.400 +explainable AI is another thing that + +00:41:20.480 --> 00:41:24.440 +you've probably heard this sits firmly + +00:41:22.400 --> 00:41:27.800 +in the interpretability blob and + +00:41:24.440 --> 00:41:30.520 +connects with ideas in causality and uh + +00:41:27.800 --> 00:41:32.680 +in data too um model interpretability + +00:41:30.520 --> 00:41:34.200 +sits on this kind of other side of + +00:41:32.680 --> 00:41:37.680 +things it intersects a little bit with + +00:41:34.200 --> 00:41:40.000 +causality and explainable AI but uh is a + +00:41:37.680 --> 00:41:42.280 +little bit separate from it um and from
+00:41:40.000 --> 00:41:43.880
+it and mechanistic interpretability
+
+00:41:42.280 --> 00:41:45.400
+which which you've probably heard of
+
+00:41:43.880 --> 00:41:47.680
+it's gotten a lot of buzz recently kind
+
+00:41:45.400 --> 00:41:48.880
+of sits inside of model interpretability
+
+00:41:47.680 --> 00:41:51.680
+it's a special case of model
+
+00:41:48.880 --> 00:41:53.160
+interpretability I hope the mech people
+
+00:41:51.680 --> 00:41:56.640
+agree with me
+
+00:41:53.160 --> 00:41:58.040
+but um so yeah so historically we've
+
+00:41:56.640 --> 00:42:00.880
+been dealing with really really really
+
+00:41:58.040 --> 00:42:03.680
+small models you had Bayes nets this is a
+
+00:42:00.880 --> 00:42:07.560
+this is very small model um if all these
+
+00:42:03.680 --> 00:42:10.000
+are binary variables this is uh eight
+
+00:42:07.560 --> 00:42:12.680
+total parameters and only four of which
+
+00:42:10.000 --> 00:42:14.880
+are independent uh we also used to work
+
+00:42:12.680 --> 00:42:18.160
+with linear regression a lot and in the
+
+00:42:14.880 --> 00:42:20.680
+first case that's a nice line can be two
+
+00:42:18.160 --> 00:42:23.240
+parameters the multivariate case again
+
+00:42:20.680 --> 00:42:25.880
+that's a a small number of parameters
+
+00:42:23.240 --> 00:42:27.880
+we've moved on to more things we've
+
+00:42:25.880 --> 00:42:30.400
+moved to
+
+00:42:27.880 --> 00:42:32.160
+MLPs that have larger weight matrices
+
+00:42:30.400 --> 00:42:33.920
+but all these are kind of digestible and
+
+00:42:32.160 --> 00:42:37.200
+interpretable so the interpretability
+
+00:42:33.920 --> 00:42:40.160
+world was sort of uh not super concerned
+
+00:42:37.200 --> 00:42:41.280
+with large ginormous things but we're
+
+00:42:40.160 --> 00:42:44.800
+not there
+
+00:42:41.280 --> 00:42:47.000
+anymore uh this is a language model this
+
+00:42:44.800 --> 00:42:50.839
+is part of still part of a language
+
+00:42:47.000 --> 00:42:51.960
+model now it's
getting more and more and
+
+00:42:50.839 --> 00:42:55.119
+more
+
+00:42:51.960 --> 00:42:57.920
+hairy and this is just not
+
+00:42:55.119 --> 00:43:00.520
+interpretable um I mentioned
+
+00:42:57.920 --> 00:43:03.280
+on on the first day of class that I hate
+
+00:43:00.520 --> 00:43:05.240
+when we update parameters of models also
+
+00:43:03.280 --> 00:43:07.720
+hate when models are this big and this
+
+00:43:05.240 --> 00:43:10.000
+is a six layer Transformer this is way
+
+00:43:07.720 --> 00:43:15.920
+smaller than basically anything that we
+
+00:43:10.000 --> 00:43:18.040
+have um and this makes things very very
+
+00:43:15.920 --> 00:43:20.920
+uninterpretable um so we'll talk about
+
+00:43:18.040 --> 00:43:22.880
+one one way that people sort of uh five
+
+00:43:20.920 --> 00:43:24.599
+years ago started addressing this
+
+00:43:22.880 --> 00:43:25.680
+problem and this is and this is the idea
+
+00:43:24.599 --> 00:43:28.000
+of
+
+00:43:25.680 --> 00:43:30.880
+probing so how do we make sense of a
+
+00:43:28.000 --> 00:43:35.160
+giant model this is one way so we take
+
+00:43:30.880 --> 00:43:38.200
+our giant model we cut the top off
+
+00:43:35.160 --> 00:43:40.520
+basically um and now we have this thing
+
+00:43:38.200 --> 00:43:42.119
+we stick a probe which actually in a lot
+
+00:43:40.520 --> 00:43:44.559
+of cases looks very similar to a
+
+00:43:42.119 --> 00:43:47.280
+language modeling head uh usually it's a
+
+00:43:44.559 --> 00:43:51.640
+small two layer or one layer
+
+00:43:47.280 --> 00:43:54.319
+MLP um and we basically treat the model
+
+00:43:51.640 --> 00:43:56.760
+as something that uh that exists and we
+
+00:43:54.319 --> 00:44:00.240
+only really care about the output of of
+
+00:43:56.760 --> 00:44:03.240
+the model so more specifically what is a
+
+00:44:00.240 --> 00:44:05.720
+probe it's a classifier this this green
+
+00:44:03.240 --> 00:44:07.680
+thing here uh that is specifically
+
+00:44:05.720 --> 
00:44:09.200
+trained to predict some specific
+
+00:44:07.680 --> 00:44:11.480
+property from the pre-trained model's
+
+00:44:09.200 --> 00:44:16.440
+representations
+
+00:44:11.480 --> 00:44:18.480
+alone so um in 2019 Ian Tenney and folks
+
+00:44:16.440 --> 00:44:21.319
+introduced Edge probing so this is a
+
+00:44:18.480 --> 00:44:23.240
+general method um it works to probe
+
+00:44:21.319 --> 00:44:27.559
+different types of information out of a
+
+00:44:23.240 --> 00:44:29.960
+model so this bottom part here uh yeah
+
+00:44:27.559 --> 00:44:33.160
+this bottom part here it you pass it in
+
+00:44:29.960 --> 00:44:36.520
+a sequence you pass it into a model this
+
+00:44:33.160 --> 00:44:38.839
+is BERT in their experiments often uh
+
+00:44:36.520 --> 00:44:40.960
+and that outputs a set of contextual
+
+00:44:38.839 --> 00:44:44.359
+vectors these contextual vectors can be
+
+00:44:40.960 --> 00:44:45.920
+at any layer um often it's near the
+
+00:44:44.359 --> 00:44:49.280
+often it's near the top but we'll talk
+
+00:44:45.920 --> 00:44:51.079
+about uh the the fact that this can work
+
+00:44:49.280 --> 00:44:53.359
+kind of across layers and different
+
+00:44:51.079 --> 00:44:55.599
+layers encode different information
+
+00:44:53.359 --> 00:44:58.960
+and on top of this you have this MLP
+
+00:44:55.599 --> 00:45:02.480
+that you train to output a prediction
+
+00:44:58.960 --> 00:45:05.599
+your model is always always fixed um in
+
+00:45:02.480 --> 00:45:08.079
+these cases so you can do things like
+
+00:45:05.599 --> 00:45:09.880
+part of speech tagging where each
+
+00:45:08.079 --> 00:45:12.400
+specific word you try to determine what
+
+00:45:09.880 --> 00:45:16.640
+its part of speech is and in that case
+
+00:45:12.400 --> 00:45:18.000
+this these S1 and S2 spans here uh only
+
+00:45:16.640 --> 00:45:19.440
+one of them is active because you're
+
+00:45:18.000 --> 00:45:21.440
+predicting for every single
+
+00:45:19.440 --> 00:45:23.240
+contextualized Vector you're predicting
+
+00:45:21.440 --> 00:45:25.359
+whether that thing is a noun or a verb
+
+00:45:23.240 --> 00:45:27.440
+or something like this you can have
+
+00:45:25.359 --> 00:45:29.599
+other sorts of tasks too like entailment
+
+00:45:27.440 --> 00:45:32.520
+where you have two sequences and two
+
+00:45:29.599 --> 00:45:35.079
+spans um and you use the embeddings for
+
+00:45:32.520 --> 00:45:37.359
+those spans um for like sentence one and
+
+00:45:35.079 --> 00:45:39.319
+sentence two you pool them together in
+
+00:45:37.359 --> 00:45:43.359
+some way and then you pass them to this
+
+00:45:39.319 --> 00:45:47.480
+MLP and you see whether the MLP can uh
+
+00:45:43.359 --> 00:45:49.680
+solve that task so they did this uh in
+
+00:45:47.480 --> 00:45:52.559
+another paper uh BERT Rediscovers the
+
+00:45:49.680 --> 00:45:54.280
+Classical NLP Pipeline and this there's a lot
+
+00:45:52.559 --> 00:45:57.079
+going on in this figure the the only
+
+00:45:54.280 --> 00:45:59.599
+major thing here to take away um is
+
+00:45:57.079 --> 00:46:02.720
+these numbers that are in this like pink
+
+00:45:59.599 --> 00:46:05.359
+purple color um so these are a bunch of
+
+00:46:02.720 --> 00:46:07.960
+different uh properties such as part of
+
+00:46:05.359 --> 00:46:11.319
+speech uh and and a bunch of other
+
+00:46:07.960 --> 00:46:13.520
+things um and what they basically find
+
+00:46:11.319 --> 00:46:15.640
+is that at earlier layers in the model
+
+00:46:13.520 --> 00:46:18.760
+the things that are closer to the token
+
+00:46:15.640 --> 00:46:21.480
+level representation are more um
+
+00:46:18.760 --> 00:46:23.400
+extractable using a probe and the things
+
+00:46:21.480 --> 00:46:26.440
+that require more contextualized
+
+00:46:23.400 --> 00:46:29.440
+information are extractable from
+
+00:46:26.440 --> 00:46:32.359
+later layers in the model and so here's
+
+00:46:29.440 --> 00:46:34.599
+sort of a brief uh description of what
+
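The edge-probing setup described here — a frozen model whose contextual vectors feed a small trainable classifier — can be sketched in a few lines. This is a minimal illustration: the "frozen representations" below are a synthetic stand-in (the toy vocabulary, dimension, and noun property are invented, not Tenney et al.'s actual tasks); in practice the vectors would come from a layer of BERT and the probe would be a one- or two-layer MLP.

```python
import math
import random

random.seed(0)

# Toy stand-in for frozen contextual vectors from a pre-trained model.
# (Illustrative only: real edge probing would use BERT activations.)
VOCAB = ["cat", "dog", "tree", "run", "eat", "jump"]
NOUNS = {"cat", "dog", "tree"}  # the linguistic property we probe for
DIM = 8
frozen_repr = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in VOCAB}

def probe_logit(x, weights, bias):
    """Linear probe: the only trainable part; representations stay fixed."""
    return sum(xi * wi for xi, wi in zip(x, weights)) + bias

# Train the probe with plain logistic-regression gradient steps.
weights, bias, lr = [0.0] * DIM, 0.0, 0.1
for _ in range(300):
    for word in VOCAB:
        x, y = frozen_repr[word], 1.0 if word in NOUNS else 0.0
        p = 1.0 / (1.0 + math.exp(-probe_logit(x, weights, bias)))
        g = p - y  # gradient of the logistic loss w.r.t. the logit
        weights = [wi - lr * g * xi for wi, xi in zip(weights, x)]
        bias -= lr * g

accuracy = sum(
    (probe_logit(frozen_repr[w], weights, bias) > 0) == (w in NOUNS)
    for w in VOCAB
) / len(VOCAB)
print(f"probe accuracy: {accuracy:.2f}")
```

Note that a high probe accuracy only shows the property is linearly extractable from these vectors — which is exactly the correlational caveat raised a bit later in the lecture.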
+00:46:32.359 --> 00:46:37.599
+these tasks are so the ones on the
+
+00:46:34.599 --> 00:46:40.040
+bottom are more semantic more
+
+00:46:37.599 --> 00:46:42.040
+contextualized like uh semantic proto
+
+00:46:40.040 --> 00:46:43.880
+roles and relation relation
+
+00:46:42.040 --> 00:46:45.839
+classification and then the first few
+
+00:46:43.880 --> 00:46:48.200
+are more you know chunking and part of
+
+00:46:45.839 --> 00:46:51.880
+speech tagging and um dependency
+
+00:46:48.200 --> 00:46:51.880
+labeling in these in these sorts of
+
+00:46:52.040 --> 00:46:57.200
+tasks um so there's a bunch of issues
+
+00:46:54.480 --> 00:46:59.520
+with probing um and there aren't as many
+
+00:46:57.200 --> 00:47:03.559
+probing papers now as there were many
+
+00:46:59.520 --> 00:47:05.960
+years ago um and so if your probe let's
+
+00:47:03.559 --> 00:47:07.960
+say your probe
+
+00:47:05.960 --> 00:47:09.920
+works it's possible that the
+
+00:47:07.960 --> 00:47:12.200
+representation actually encodes that
+
+00:47:09.920 --> 00:47:14.520
+information it's also possible that it
+
+00:47:12.200 --> 00:47:16.359
+doesn't and the probe solved the task by
+
+00:47:14.520 --> 00:47:18.119
+itself uh keep in mind that you're
+
+00:47:16.359 --> 00:47:20.640
+learning this probe you're training this
+
+00:47:18.119 --> 00:47:22.720
+probe on labeled data uh let's say your
+
+00:47:20.640 --> 00:47:24.599
+probe doesn't work does that tell you
+
+00:47:22.720 --> 00:47:27.119
+anything maybe not maybe the
+
+00:47:24.599 --> 00:47:30.280
+representation lacks the information or
+
+00:47:27.119 --> 00:47:31.800
+maybe your probe doesn't doesn't
+
+00:47:30.280 --> 00:47:33.800
+actually isn't actually able to
+
+00:47:31.800 --> 00:47:35.240
+disentangle that information from your
+
+00:47:33.800 --> 00:47:36.720
+representation maybe the probe is not
+
+00:47:35.240 --> 00:47:38.359
+the right function class maybe you
+
+00:47:36.720 --> 00:47:40.839
+poorly trained your probe
there's
+
+00:47:38.359 --> 00:47:42.280
+hyperparameters for your probe so often
+
+00:47:40.839 --> 00:47:43.000
+times your probe doesn't give you that
+
+00:47:42.280 --> 00:47:46.119
+much
+
+00:47:43.000 --> 00:47:49.040
+information there's more problems too so
+
+00:47:46.119 --> 00:47:50.800
+often we want to probe tasks themselves
+
+00:47:49.040 --> 00:47:53.240
+and that requires a lot of supervised
+
+00:47:50.800 --> 00:47:55.880
+data um but we can't collect a lot of
+
+00:47:53.240 --> 00:47:58.440
+supervised data so we collect some of it
+
+00:47:55.880 --> 00:48:00.040
+and then that instead produces this
+
+00:47:58.440 --> 00:48:02.480
+convenient sample that we have that's a
+
+00:48:00.040 --> 00:48:04.119
+data set that is a convenient sample of
+
+00:48:02.480 --> 00:48:07.000
+your task so really what you're probing
+
+00:48:04.119 --> 00:48:10.040
+is the data set and so with all these
+
+00:48:07.000 --> 00:48:11.800
+limitations it's it's fallen out of
+
+00:48:10.040 --> 00:48:13.599
+favor a little bit it's still very very
+
+00:48:11.800 --> 00:48:16.400
+useful but it's fallen out of favor as
+
+00:48:13.599 --> 00:48:20.000
+like a core model interpretability
+
+00:48:16.400 --> 00:48:22.160
+idea um also probes designed in this way
+
+00:48:20.000 --> 00:48:26.079
+are correlated they're correlative not
+
+00:48:22.160 --> 00:48:27.880
+really causative so your your underlying
+
+00:48:26.079 --> 00:48:29.640
+model is trained in a specific way all
+
+00:48:27.880 --> 00:48:31.359
+of that information is disentangled and
+
+00:48:29.640 --> 00:48:32.920
+kind of thrown away and you're only
+
+00:48:31.359 --> 00:48:34.599
+looking at the output representation and
+
+00:48:32.920 --> 00:48:36.559
+you're saying is my output
+
+00:48:34.599 --> 00:48:39.200
+representation correlated to the thing
+
+00:48:36.559 --> 00:48:42.400
+that I'm training this probe for there's
+
+00:48:39.200 --> 00:48:44.960
+no notion of intervening on
this latent
+
+00:48:42.400 --> 00:48:46.559
+space there's no notion of of causation
+
+00:48:44.960 --> 00:48:49.119
+really so you're just seeing whether
+
+00:48:46.559 --> 00:48:52.559
+your representation is correlated with
+
+00:48:49.119 --> 00:48:54.480
+your property that you're probing for um
+
+00:48:52.559 --> 00:48:56.200
+and with these limitations the
+
+00:48:54.480 --> 00:48:58.720
+community's moved a little bit away from
+
+00:48:56.200 --> 00:48:58.720
+this area
+
+00:48:59.040 --> 00:49:02.200
+uh there's a bunch of other probing
+
+00:49:00.240 --> 00:49:04.920
+works so a bunch of people aim to solve
+
+00:49:02.200 --> 00:49:06.000
+a bunch of these problems um and uh for
+
+00:49:04.920 --> 00:49:09.200
+the sake of time I'm not going to go
+
+00:49:06.000 --> 00:49:12.599
+into all of these but uh I'd encourage
+
+00:49:09.200 --> 00:49:14.000
+you to look into these they for for some
+
+00:49:12.599 --> 00:49:17.319
+of these problems they're able to
+
+00:49:14.000 --> 00:49:19.520
+control for um control for like the
+
+00:49:17.319 --> 00:49:22.200
+complexity of the of the probe and
+
+00:49:19.520 --> 00:49:24.359
+things like this um but even despite
+
+00:49:22.200 --> 00:49:25.720
+that probing is sort of slowly kind of
+
+00:49:24.359 --> 00:49:28.160
+falling out of
+
+00:49:25.720 --> 00:49:29.640
+favor uh so before I move into model
+
+00:49:28.160 --> 00:49:31.920
+interpretability are there any questions
+
+00:49:29.640 --> 00:49:31.920
+on
+
+00:49:35.520 --> 00:49:40.599
+probing all right so what is model
+
+00:49:38.680 --> 00:49:44.000
+interpretability so this is my
+
+00:49:40.599 --> 00:49:45.400
+definition here uh this is the study of
+
+00:49:44.000 --> 00:49:46.599
+understanding the internals of models
+
+00:49:45.400 --> 00:49:49.079
+for example their weights and
+
+00:49:46.599 --> 00:49:51.160
+activations putting those insights in
+
+00:49:49.079 --> 00:49:53.319
+human intelligible terms and using that
+
+00:49:51.160 --> 00:49:55.920
+insight to both patch current models and
+
+00:49:53.319 --> 00:49:57.359
+develop better ones if we're not sort of able
+
+00:49:55.920 --> 00:49:58.760
+to do both of these things patching
+
+00:49:57.359 --> 00:50:00.160
+current models and developing better ones
+
+00:49:58.760 --> 00:50:02.440
+we're kind of doing interpretability for
+
+00:50:00.160 --> 00:50:04.960
+interpretability's sake that's nice and
+
+00:50:02.440 --> 00:50:08.079
+fun but it's not as applicable for the
+
+00:50:04.960 --> 00:50:09.720
+for the community so you've probably
+
+00:50:08.079 --> 00:50:12.240
+heard of the term mechanistic
+
+00:50:09.720 --> 00:50:14.480
+interpretability it's in my opinion a
+
+00:50:12.240 --> 00:50:16.559
+subfield of model interpretability and
+
+00:50:14.480 --> 00:50:19.319
+this is sort of my definition I it
+
+00:50:16.559 --> 00:50:21.440
+aligns reasonably well to the core
+
+00:50:19.319 --> 00:50:22.720
+mechanistic interpretability people um
+
+00:50:21.440 --> 00:50:24.880
+but it's the study of reverse
+
+00:50:22.720 --> 00:50:26.280
+engineering parametric models often
+
+00:50:24.880 --> 00:50:28.839
+neural networks because that's what we
+
+00:50:26.280 --> 00:50:31.400
+use from their learned weights into more
+
+00:50:28.839 --> 00:50:32.839
+human interpretable algorithmic units uh
+
+00:50:31.400 --> 00:50:36.839
+and often they call these things
+
+00:50:32.839 --> 00:50:39.440
+circuits um and and these are basically
+
+00:50:36.839 --> 00:50:42.880
+functions that uh you can describe in a
+
+00:50:39.440 --> 00:50:45.000
+human interpretable way that sit inside
+
+00:50:42.880 --> 00:50:46.760
+models um there's a bunch of notable
+
+00:50:45.000 --> 00:50:50.720
+work again for the sake of time I'm
+
+00:50:46.760 --> 00:50:54.319
+going to just briefly talk about them
+
+00:50:50.720 --> 00:50:56.839
+um so the first one is they they look
+
+00:50:54.319 --> 00:50:58.440
+into analyzing small MLPs and
+
+00:50:56.839 --> 00:51:01.400
+Transformers to build out the intuition
+
+00:50:58.440 --> 00:51:04.119
+of what circuits exist um and this a lot
+
+00:51:01.400 --> 00:51:06.559
+of this work came out of earlier work on
+
+00:51:04.119 --> 00:51:08.480
+on LSTMs and doing similar sorts of
+
+00:51:06.559 --> 00:51:11.880
+things with with
+
+00:51:08.480 --> 00:51:14.319
+LSTMs um and they find a bunch of things
+
+00:51:11.880 --> 00:51:15.839
+one thing that they find is this idea of
+
+00:51:14.319 --> 00:51:19.599
+induction heads and these induction
+
+00:51:15.839 --> 00:51:21.760
+heads they say is sort of sort of helps
+
+00:51:19.599 --> 00:51:24.680
+prove why models can do in context
+
+00:51:21.760 --> 00:51:26.599
+learning so so an induction head is
+
+00:51:24.680 --> 00:51:28.839
+something that it's it's a specific
+
+00:51:26.599 --> 00:51:32.440
+attention head that kind of allows you
+
+00:51:28.839 --> 00:51:35.599
+to um when given a prefix allow you to
+
+00:51:32.440 --> 00:51:37.559
+kind of copy the necessary resulting
+
+00:51:35.599 --> 00:51:39.640
+token from the underlying training data
+
+00:51:37.559 --> 00:51:41.720
+that the model has seen before so in context
+
+00:51:39.640 --> 00:51:44.599
+learning what you generally provide is
+
+00:51:41.720 --> 00:51:46.440
+some sort of prefix and then you uh
+
+00:51:44.599 --> 00:51:48.480
+provide some example and hopefully you
+
+00:51:46.440 --> 00:51:51.040
+know you can classify the thing or
+
+00:51:48.480 --> 00:51:53.280
+something like this um it's it's saying
+
+00:51:51.040 --> 00:51:56.200
+that there's these attention heads
+
+00:51:53.280 --> 00:51:59.400
+loosely uh that exist that are able to
+
+00:51:56.200 --> 00:52:00.680
+copy unearth that information um for for
+
+00:51:59.400 --> 00:52:03.319
+a specific
+
+00:52:00.680 --> 00:52:07.200
+context um other things that they that
+
+00:52:03.319 --> 00:52:09.880
+they've done is um on neurons so uh this
+
+00:52:07.200
--> 00:52:13.160
+polysemanticity so what this this kind
+
+00:52:09.880 --> 00:52:15.240
+of means is that your your neuron is a
+
+00:52:13.160 --> 00:52:18.000
+uh you have a set of neurons in your
+
+00:52:15.240 --> 00:52:20.880
+activation space so let's say at layer
+
+00:52:18.000 --> 00:52:23.200
+10 in your model you have an output um
+
+00:52:20.880 --> 00:52:26.280
+and so your activations is let's say a
+
+00:52:23.200 --> 00:52:28.400
+thousand dimensional here those each of
+
+00:52:26.280 --> 00:52:31.319
+those thousand individual neurons may
+
+00:52:28.400 --> 00:52:35.839
+represent more than one specific
+
+00:52:31.319 --> 00:52:37.839
+feature um and so they they talk about
+
+00:52:35.839 --> 00:52:41.280
+this in that context and this is kind of
+
+00:52:37.839 --> 00:52:43.240
+a theory but you can think about um
+
+00:52:41.280 --> 00:52:46.359
+trying to process
+
+00:52:43.240 --> 00:52:49.400
+input and when you're processing a a
+
+00:52:46.359 --> 00:52:50.960
+vocab of size 50,000 or 250,000 at some
+
+00:52:49.400 --> 00:52:52.359
+point in the model we're actually
+
+00:52:50.960 --> 00:52:55.400
+compressing it down to the hidden
+
+00:52:52.359 --> 00:52:58.119
+dimension and so in some cases that
+
+00:52:55.400 --> 00:53:00.319
+looks like you're going to compress a
+
+00:52:58.119 --> 00:53:03.440
+much richer feature representation down
+
+00:53:00.319 --> 00:53:06.359
+into a smaller set of neurons so it is
+
+00:53:03.440 --> 00:53:08.319
+reasonable to believe that um a specific
+
+00:53:06.359 --> 00:53:10.799
+neuron will represent multiple of those
+
+00:53:08.319 --> 00:53:15.480
+features and given the structure of our
+
+00:53:10.799 --> 00:53:18.720
+weight matrices um it it is the case
+
+00:53:15.480 --> 00:53:21.839
+that if they are representing more
+
+00:53:18.720 --> 00:53:23.960
+features than uh number of elements in
+
+00:53:21.839 --> 00:53:26.000
+the actual or number of neurons in the
+
+00:53:23.960
--> 00:53:28.680
+activation space then many of these
+
+00:53:26.000 --> 00:53:30.880
+features are linearly dependent and so we're
+
+00:53:28.680 --> 00:53:35.400
+not really able to utilize them that
+
+00:53:30.880 --> 00:53:37.960
+well um they they they talk about this
+
+00:53:35.400 --> 00:53:42.200
+they don't talk about this in the the
+
+00:53:37.960 --> 00:53:44.799
+most uh the the best way but uh it seems
+
+00:53:42.200 --> 00:53:48.040
+kind of clear to me that um since you
+
+00:53:44.799 --> 00:53:50.880
+have embedding matrices that are um not
+
+00:53:48.040 --> 00:53:53.599
+square that you're that these neurons
+
+00:53:50.880 --> 00:53:56.400
+have to exist um and they have to
+
+00:53:53.599 --> 00:53:59.200
+incorporate multiple features at once
+
+00:53:56.400 --> 00:54:02.559
+multiple redundant features at
+
+00:53:59.200 --> 00:54:04.680
+once um so before I move on to the rest
+
+00:54:02.559 --> 00:54:07.839
+of model interpretability any questions
+
+00:54:04.680 --> 00:54:07.839
+about mechanistic
+
+00:54:09.880 --> 00:54:12.880
+interpretability
+
+00:54:21.480 --> 00:54:28.040
+yeah so most of their studies are for uh
+
+00:54:24.920 --> 00:54:29.720
+a very small set of of of models and
+
+00:54:28.040 --> 00:54:32.040
+most of these are old GPT models there
+
+00:54:29.720 --> 00:54:34.160
+have been a few works like in the last
+
+00:54:32.040 --> 00:54:36.760
+couple of months on doing this for the
+
+00:54:34.160 --> 00:54:39.720
+Llama based models um it seems like this
+
+00:54:36.760 --> 00:54:42.040
+is a general more general phenomenon for
+
+00:54:39.720 --> 00:54:43.760
+for language models it also is the case
+
+00:54:42.040 --> 00:54:46.839
+that certain attention heads specialize
+
+00:54:43.760 --> 00:54:49.480
+and talk about them a little bit um in
+
+00:54:46.839 --> 00:54:51.599
+in the activations part um but yeah
+
+00:54:49.480 --> 00:54:53.799
+there's they're not like all attention
+
+00:54:51.599 --> 
00:54:56.400
+heads aren't created equal they start
+
+00:54:53.799 --> 00:55:00.280
+this way and it seems to be a general
+
+00:54:56.400 --> 00:55:01.799
+principle and one one other thing I you
+
+00:55:00.280 --> 00:55:04.520
+might know about this better than I do
+
+00:55:01.799 --> 00:55:06.520
+but I think there are some preliminary
+
+00:55:04.520 --> 00:55:09.160
+works that say that Transformers seem to
+
+00:55:06.520 --> 00:55:11.720
+be particularly good at doing things
+
+00:55:09.160 --> 00:55:15.160
+like induction heads compared
+
+00:55:11.720 --> 00:55:17.200
+to uh recurrent models and there was
+
+00:55:15.160 --> 00:55:20.720
+a paper really recently about comparing
+
+00:55:17.200 --> 00:55:23.599
+like Mamba and um Transformer based
+
+00:55:20.720 --> 00:55:26.400
+models Mamba being a uh kind of more
+
+00:55:23.599 --> 00:55:30.280
+like closer to a recurrent network which we're
+
+00:55:26.400 --> 00:55:33.119
+also going to talk about but um so I
+
+00:55:30.280 --> 00:55:37.319
+I think there's some indication that
+
+00:55:33.119 --> 00:55:39.920
+Transformers actually kind of are unique or
+
+00:55:37.319 --> 00:55:43.680
+are at least like better at kind of in
+
+00:55:39.920 --> 00:55:46.760
+context learning than others are so
+
+00:55:43.680 --> 00:55:48.920
+there is some
+
+00:55:46.760 --> 00:55:50.839
+interesting implications of that which
+
+00:55:48.920 --> 00:55:53.240
+is like well if Transformers are good
+
+00:55:50.839 --> 00:55:57.359
+what's better than Transformer yeah like
+
+00:55:53.240 --> 00:55:58.799
+naturally learning this sort of thing so um
+
+00:55:57.359 --> 00:56:00.720
+they're good at yeah they're like really
+
+00:55:58.799 --> 00:56:04.039
+good at copying and like maintaining
+
+00:56:00.720 --> 00:56:06.799
+information like more so um and yeah I
+
+00:56:04.039 --> 00:56:08.200
+think it'd be cool to like be able to I
+
+00:56:06.799 --> 00:56:09.839
+don't know how to do this but be able to
+
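The copying behavior being discussed — an induction head completing the pattern [A][B] … [A] → [B] — can be written out as a plain algorithm. This sketch only illustrates the pattern the attention head is claimed to implement, not how a real head computes it:

```python
def induction_pattern(tokens):
    """For each position, mimic an induction head: look back for the most
    recent earlier occurrence of the current token and predict the token
    that followed it; None means the pattern gives no prediction."""
    preds = []
    for i, tok in enumerate(tokens):
        pred = None
        for j in range(i - 1, -1, -1):  # scan backwards for a prefix match
            if tokens[j] == tok:
                pred = tokens[j + 1] if j + 1 < len(tokens) else None
                break
        preds.append(pred)
    return preds

# Once the sequence starts repeating, the pattern predicts the continuation.
print(induction_pattern(["A", "B", "C", "A", "B", "C", "A"]))
# → [None, None, None, 'B', 'C', 'A', 'B']
```

This prefix-match-then-copy rule is why induction heads are linked to in-context learning: an input-output pair demonstrated earlier in the prompt can be reused to complete a later repetition of the same input.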
+00:56:08.200 --> 00:56:11.440
+extract that kind of information like
+
+00:56:09.839 --> 00:56:13.359
+what part of the Transformer is actually
+
+00:56:11.440 --> 00:56:15.119
+helping it do this copying mechanism or
+
+00:56:13.359 --> 00:56:17.799
+like being a better in context learner
+
+00:56:15.119 --> 00:56:20.039
+then we can develop a better structure a
+
+00:56:17.799 --> 00:56:23.119
+slightly better structure than than a
+
+00:56:20.039 --> 00:56:26.000
+Transformer um hopefully someone comes
+
+00:56:23.119 --> 00:56:28.240
+up with that soon but cool any other
+
+00:56:26.000 --> 00:56:28.240
+question
+
+00:56:29.799 --> 00:56:34.359
+questions all right so let's move into
+
+00:56:32.240 --> 00:56:35.880
+model interpretability so there are
+
+00:56:34.359 --> 00:56:37.480
+weights and their activations I
+
+00:56:35.880 --> 00:56:39.160
+mentioned these are these are the two
+
+00:56:37.480 --> 00:56:41.119
+things these are the two things that
+
+00:56:39.160 --> 00:56:43.440
+we're going to look at so what can you
+
+00:56:41.119 --> 00:56:45.480
+do with the weights of an already trained model
+
+00:56:43.440 --> 00:56:47.799
+really you can just edit them and then
+
+00:56:45.480 --> 00:56:49.200
+kind of see what happens activations
+
+00:56:47.799 --> 00:56:51.240
+similarly you can look at the
+
+00:56:49.200 --> 00:56:52.720
+activations for different inputs you can
+
+00:56:51.240 --> 00:56:54.520
+poke them with a stick and see what
+
+00:56:52.720 --> 00:56:56.359
+happens a lot of my research is poking
+
+00:56:54.520 --> 00:56:58.559
+models with a stick and looking at the
+
+00:56:56.359 --> 00:57:00.920
+activations it's like predominantly what
+
+00:56:58.559 --> 00:57:02.240
+I've done so we'll talk about that um
+
+00:57:00.920 --> 00:57:04.359
+and the technical term for this is
+
+00:57:02.240 --> 00:57:06.599
+intervening on them by adding some
+
+00:57:04.359 --> 00:57:07.839
+vector or other sort of manipulation to
+
+00:57:06.599 --> 
00:57:09.440
+the latent space but really what you're
+
+00:57:07.839 --> 00:57:13.960
+doing is like
+
+00:57:09.440 --> 00:57:17.599
+poking um so when you look at weights uh
+
+00:57:13.960 --> 00:57:19.920
+one one class of methods or or area is
+
+00:57:17.599 --> 00:57:21.920
+on model editing fine-tuning is like the
+
+00:57:19.920 --> 00:57:23.480
+most extreme version of model editing
+
+00:57:21.920 --> 00:57:26.599
+usually these things are much more
+
+00:57:23.480 --> 00:57:29.640
+targeted um so in the model editing sort
+
+00:57:26.599 --> 00:57:32.160
+of landscape your goal or your target is
+
+00:57:29.640 --> 00:57:35.119
+you have a concept or a specific fact
+
+00:57:32.160 --> 00:57:37.440
+that needs to be changed in the model um
+
+00:57:35.119 --> 00:57:39.640
+and your approach here is you update or
+
+00:57:37.440 --> 00:57:41.359
+edit the weights of the model to edit
+
+00:57:39.640 --> 00:57:43.640
+the model's belief of that fact or
+
+00:57:41.359 --> 00:57:45.599
+concept and ideally you do this without
+
+00:57:43.640 --> 00:57:47.319
+changing any of the other behavior of
+
+00:57:45.599 --> 00:57:49.760
+the model so for example let's say
+
+00:57:47.319 --> 00:57:51.920
+you're trying to say that Graham is no
+
+00:57:49.760 --> 00:57:54.559
+longer a professor at CMU but is a
+
+00:57:51.920 --> 00:57:57.319
+professor at Stanford you don't want
+
+00:57:54.559 --> 00:57:59.960
+every single person at CMU to now be a
+
+00:57:57.319 --> 00:58:02.920
+professor or uh now be affiliated with
+
+00:57:59.960 --> 00:58:07.839
+Stanford right um Graham please don't go to
+
+00:58:02.920 --> 00:58:09.039
+Stanford um so here's one approach paper
+
+00:58:07.839 --> 00:58:11.720
+that came out a couple years ago there's
+
+00:58:09.039 --> 00:58:13.559
+a lot of work done here uh in in the
+
+00:58:11.720 --> 00:58:15.799
+model editing world I'll give you sort
+
+00:58:13.559 --> 00:58:17.440
+of a really brief overview of this but
+
+00:58:15.799 --> 
00:58:20.520 +basically they have facts that they want + +00:58:17.440 --> 00:58:22.400 +to they want to manipulate um so for + +00:58:20.520 --> 00:58:24.680 +example the the example that they give + +00:58:22.400 --> 00:58:26.640 +in the figure is they want to associate + +00:58:24.680 --> 00:58:30.960 +the Space Needle with Paris the Space + +00:58:26.640 --> 00:58:32.520 +Needle is a a cool needle in in Seattle + +00:58:30.960 --> 00:58:36.000 +has nothing to do with Paris but Paris + +00:58:32.520 --> 00:58:38.400 +also has a tower so it's close um so + +00:58:36.000 --> 00:58:40.920 +they use causal tracing to isolate the + +00:58:38.400 --> 00:58:43.839 +causal effect uh of the individual + +00:58:40.920 --> 00:58:45.799 +hidden States for this fact so they + +00:58:43.839 --> 00:58:47.839 +basically continuously perturb the input + +00:58:45.799 --> 00:58:49.760 +do a bunch of forward passes and + +00:58:47.839 --> 00:58:51.720 +sequentially find the specific hidden + +00:58:49.760 --> 00:58:55.280 +states that are associated kind of with + +00:58:51.720 --> 00:58:56.839 +this fact um then they make an edit and + +00:58:55.280 --> 00:58:59.119 +their edit + +00:58:56.839 --> 00:59:02.039 +uh looks like this thing on the right um + +00:58:59.119 --> 00:59:05.280 +so they treat this pair Space Needle and + +00:59:02.039 --> 00:59:07.240 +Paris as this uh key value pair where + +00:59:05.280 --> 00:59:10.359 +Space Needle is the key you pass this + +00:59:07.240 --> 00:59:12.480 +into um into this weight Matrix this + +00:59:10.359 --> 00:59:14.640 +original part of the model you want this + +00:59:12.480 --> 00:59:16.599 +now instead of outputting Seattle to + +00:59:14.640 --> 00:59:19.119 +Output Paris and they have some nice + +00:59:16.599 --> 00:59:21.599 +math and a closed form solution to to + +00:59:19.119 --> 00:59:23.880 +identify this this is super expensive + +00:59:21.599 --> 00:59:25.359 +because they have to the causal tracing + +00:59:23.880 
--> 00:59:27.680
+part have to do a bunch of forward
+
+00:59:25.359 --> 00:59:30.680
+passes um and they make this a little
+
+00:59:27.680 --> 00:59:33.480
+bit better in future future work they
+
+00:59:30.680 --> 00:59:37.920
+also do sort of a more
+
+00:59:33.480 --> 00:59:40.160
+comprehensive um edit um so these are
+
+00:59:37.920 --> 00:59:44.599
+kind of like some of the things you can do
+
+00:59:40.160 --> 00:59:46.799
+um I'm less excited about model editing
+
+00:59:44.599 --> 00:59:49.039
+um there's there's some work on model
+
+00:59:46.799 --> 00:59:51.319
+editing sort of it's it's hard to
+
+00:59:49.039 --> 00:59:53.160
+control what other things break there's
+
+00:59:51.319 --> 00:59:56.240
+a and there's some work with when you
+
+00:59:53.160 --> 01:00:00.000
+edit a specific fact things start being
+
+00:59:56.240 --> 01:00:02.680
+weird and being biased in other ways um
+
+01:00:00.000 --> 01:00:05.760
+and so
+
+01:00:02.680 --> 01:00:09.119
+yeah does all kind of factual information
+
+01:00:05.760 --> 01:00:11.880
+like X is in Y would they all localize to the
+
+01:00:09.119 --> 01:00:14.319
+same layer is it just with the
+
+01:00:11.880 --> 01:00:16.920
+specific for this specific example it
+
+01:00:14.319 --> 01:00:19.039
+looks at this specific point uh for
+
+01:00:16.920 --> 01:00:21.039
+every example they'll probably find
+
+01:00:19.039 --> 01:00:22.119
+different regions in a different degree
+
+01:00:21.039 --> 01:00:25.680
+of
+
+01:00:22.119 --> 01:00:27.960
+manipulation um and yeah that it gets a
+
+01:00:25.680 --> 01:00:30.920
+little unprincipled kind of quickly it's
+
+01:00:27.960 --> 01:00:33.000
+not like they're able to find you know a
+
+01:00:30.920 --> 01:00:35.680
+specific attention head that or a
+
+01:00:33.000 --> 01:00:38.240
+specific layer or specific weight matrix
+
+01:00:35.680 --> 01:00:42.400
+that corresponds to like
+
+01:00:38.240 --> 01:00:46.720
+all yeah relations of a specific
+
+01:00:42.400 --> 
01:00:49.160
+type any more questions yeah this is
+
+01:00:46.720 --> 01:00:51.119
+actually just a question if you know um
+
+01:00:49.160 --> 01:00:53.200
+it seems like more frequent facts might
+
+01:00:51.119 --> 01:00:55.240
+appear in both places in the model is do
+
+01:00:53.200 --> 01:00:59.280
+you know if that's actually the I have
+
+01:00:55.240 --> 01:01:02.440
+no idea but uh I would imagine that um
+
+01:00:59.280 --> 01:01:06.240
+it probably could occur in more places
+
+01:01:02.440 --> 01:01:08.160
+but also um a lot of the information is
+
+01:01:06.240 --> 01:01:10.119
+redundant anyway in the model especially
+
+01:01:08.160 --> 01:01:11.720
+for larger models so you might have to
+
+01:01:10.119 --> 01:01:13.599
+make targeted interventions in multiple
+
+01:01:11.720 --> 01:01:15.480
+places but it's possible that one
+
+01:01:13.599 --> 01:01:17.680
+intervention in one place sufficiently
+
+01:01:15.480 --> 01:01:21.039
+destroys like contextualized information
+
+01:01:17.680 --> 01:01:22.680
+in other places if it's close um it
+
+01:01:21.039 --> 01:01:24.839
+depends on how big this intervention is
+
+01:01:22.680 --> 01:01:28.200
+if it's like hitting it with a hammer
+
+01:01:24.839 --> 01:01:30.520
+rather than some like nice fine grain
+
+01:01:28.200 --> 01:01:33.359
+thing but that'd be a good be a good
+
+01:01:30.520 --> 01:01:36.839
+experiment to see
+
+01:01:33.359 --> 01:01:36.839
+um any other
+
+01:01:37.240 --> 01:01:41.559
+questions all right so we'll move into
+
+01:01:39.760 --> 01:01:43.680
+the stuff that I'm most familiar with
+
+01:01:41.559 --> 01:01:46.319
+and some of my work so looking at
+
+01:01:43.680 --> 01:01:48.319
+activations um so this is this is work
+
+01:01:46.319 --> 01:01:50.480
+I've been doing for a while uh this idea
+
+01:01:48.319 --> 01:01:52.799
+of steering vectors so I mentioned I
+
+01:01:50.480 --> 01:01:54.480
+poke models with a stick the steering
+
+01:01:52.799 --> 01:01:57.000
+vector is that stick so it's basically a
+
+01:01:54.480 --> 01:01:59.000
+fixed-length vector that steers a language
+
+01:01:57.000 --> 01:02:00.920
+model to generate a specific sequence
+
+01:01:59.000 --> 01:02:02.720
+exactly when added to the hidden states
+
+01:02:00.920 --> 01:02:06.319
+of a model at a specific
+
+01:02:02.720 --> 01:02:09.000
+location um and I'll read this
+
+01:02:06.319 --> 01:02:11.400
+again there's a very like specific
+
+01:02:09.000 --> 01:02:13.319
+form that I wrote this in so uh it's
+
+01:02:11.400 --> 01:02:15.359
+a fixed-length vector that steers a
+
+01:02:13.319 --> 01:02:17.640
+language model to generate a specific
+
+01:02:15.359 --> 01:02:19.359
+sequence exactly when added to the
+
+01:02:17.640 --> 01:02:22.559
+hidden states of a model at a specific
+
+01:02:19.359 --> 01:02:24.480
+point so this is different than um a
+
+01:02:22.559 --> 01:02:26.839
+soft prompt or different than a model
+
+01:02:24.480 --> 01:02:29.520
+editing sort of approach
+
+01:02:26.839 --> 01:02:31.400
+um in this case there is a vector that
+
+01:02:29.520 --> 01:02:32.960
+corresponds to a sequence and that
+
+01:02:31.400 --> 01:02:35.359
+vector doesn't correspond to any other
+
+01:02:32.960 --> 01:02:36.640
+sequence there could be multiple vectors
+
+01:02:35.359 --> 01:02:39.079
+and it turns out there are multiple
+
+01:02:36.640 --> 01:02:41.799
+vectors that correspond to that sequence
+
+01:02:39.079 --> 01:02:44.160
+it'll be a little bit clearer um based
+
+01:02:41.799 --> 01:02:46.279
+on how we extract these
+
+01:02:44.160 --> 01:02:48.839
+things um so this is the stick that
+
+01:02:46.279 --> 01:02:52.000
+we're poking the language
+
+01:02:48.839 --> 01:02:53.599
+model with um so how do we extract them so
+
+01:02:52.000 --> 01:02:57.400
+this is
+
+01:02:53.599 --> 01:03:00.200
+GPT-2 um basically this Z steer thing on
+
+01:02:57.400 --> 01:03:03.240
+the left this is the steering vector
+
+01:03:00.200
--> 01:03:05.799
+this gets initialized randomly um with
+
+01:03:03.240 --> 01:03:09.520
+small like in a reasonable way
+
+01:03:05.799 --> 01:03:11.440
+uniformly and small um and for any
+
+01:03:09.520 --> 01:03:14.000
+sequence a specific sequence that we
+
+01:03:11.440 --> 01:03:17.680
+want the model to generate we
+
+01:03:14.000 --> 01:03:19.400
+optimize this steering vector Z steer uh
+
+01:03:17.680 --> 01:03:21.559
+to generate that sequence keeping the
+
+01:03:19.400 --> 01:03:23.960
+rest of the model entirely fixed so
+
+01:03:21.559 --> 01:03:26.200
+think about it as we're nudging a
+
+01:03:23.960 --> 01:03:29.880
+frozen model to be able to generate a
+
+01:03:26.200 --> 01:03:31.680
+specific sequence at a specific time um
+
+01:03:29.880 --> 01:03:33.880
+and we have a lot of different options
+
+01:03:31.680 --> 01:03:35.559
+on where to inject the steering vector we
+
+01:03:33.880 --> 01:03:37.520
+can put it basically anywhere in the
+
+01:03:35.559 --> 01:03:41.799
+model we can put it at any time step any
+
+01:03:37.520 --> 01:03:43.839
+number of these things in practice um
+
+01:03:41.799 --> 01:03:45.839
+providing it just at the first time step
+
+01:03:43.839 --> 01:03:48.039
+and somewhere in the middle of the model
+
+01:03:45.839 --> 01:03:52.480
+basically not the first layer and not
+
+01:03:48.039 --> 01:03:56.240
+the last layer works pretty well um and
+
+01:03:52.480 --> 01:04:00.279
+so more formally um forgive the kind of
+
+01:03:56.240 --> 01:04:03.640
+notation um but right here we initialize
+
+01:04:00.279 --> 01:04:06.559
+um this Z steer and for a few iterations
+
+01:04:03.640 --> 01:04:08.039
+um we do forward passes first this
+
+01:04:06.559 --> 01:04:09.599
+starts as random and then this gets
+
+01:04:08.039 --> 01:04:11.960
+closer and closer to being
+
+01:04:09.599 --> 01:04:14.279
+able to generate this sequence and
+
+01:04:11.960 --> 01:04:16.599
+eventually we get to a point uh and
this
+
+01:04:14.279 --> 01:04:18.400
+n is pretty small it's eight or 10 or
+
+01:04:16.599 --> 01:04:20.160
+something like that um for most
+
+01:04:18.400 --> 01:04:22.200
+sequences we get to a point where we
+
+01:04:20.160 --> 01:04:23.920
+have found this stick that allows us to
+
+01:04:22.200 --> 01:04:26.079
+poke this model to generate that
+
+01:04:23.920 --> 01:04:29.319
+sequence exactly now when we greedy
+
+01:04:26.079 --> 01:04:32.480
+decode from the model we pass in just a
+
+01:04:29.319 --> 01:04:34.920
+beginning of sequence token and this Z
+
+01:04:32.480 --> 01:04:37.119
+steer the steering vector and it's able
+
+01:04:34.920 --> 01:04:39.720
+to uncover that whole
+
+01:04:37.119 --> 01:04:41.319
+sequence that we had at the beginning
+
+01:04:39.720 --> 01:04:44.240
+entirely
+
+01:04:41.319 --> 01:04:46.640
+um this is weird and interesting because
+
+01:04:44.240 --> 01:04:48.880
+in a lot of cases um in like the
+
+01:04:46.640 --> 01:04:52.039
+prompting world in the soft prompt world
+
+01:04:48.880 --> 01:04:54.640
+usually you need a pretty large uh width
+
+01:04:52.039 --> 01:04:57.880
+of a prompt to be able to do things with
+
+01:04:54.640 --> 01:05:00.400
+um and this generally in that
+
+01:04:57.880 --> 01:05:02.000
+structure you're doing a specific task
+
+01:05:00.400 --> 01:05:04.200
+and you're providing kind of a large
+
+01:05:02.000 --> 01:05:06.720
+soft prompt to do this with
+
+01:05:04.200 --> 01:05:10.520
+this
+
+01:05:06.720 --> 01:05:13.200
+often has a width of 50 and a
+
+01:05:10.520 --> 01:05:15.520
+length of the hidden size or the
+
+01:05:13.200 --> 01:05:17.160
+embedding size of the model in our case
+
+01:05:15.520 --> 01:05:20.079
+all of our steering vectors are width
+
+01:05:17.160 --> 01:05:21.440
+one and they're of just the hidden size
+
+01:05:20.079 --> 01:05:24.039
+of the
+
+01:05:21.440 -->
01:05:26.520
+model um so what ends
+
+01:05:24.039 --> 01:05:29.559
+up happening
+
+01:05:26.520 --> 01:05:31.520
+um actually before I go to results any
+
+01:05:29.559 --> 01:05:34.720
+questions this is a weird
+
+01:05:31.520 --> 01:05:38.160
+setup and weird relative to what other
+
+01:05:34.720 --> 01:05:39.310
+people do so happy to take any
+
+01:05:38.160 --> 01:05:42.480
+questions
+
+01:05:39.310 --> 01:05:42.480
+[Music]
+
+01:05:42.880 --> 01:05:50.640
+yeah similarly if your prompt was um of
+
+01:05:47.440 --> 01:05:53.440
+a specific type so the prompt here is a
+
+01:05:50.640 --> 01:05:55.720
+continuous vector passed in it's a
+
+01:05:53.440 --> 01:05:59.760
+single width-one hidden-size
+
+01:05:55.720 --> 01:06:02.799
+continuous vector so um it's kind of
+
+01:05:59.760 --> 01:06:05.559
+like maybe collapsing your prompt into
+
+01:06:02.799 --> 01:06:08.480
+this compressing it into this tiny
+
+01:06:05.559 --> 01:06:12.119
+vector you can think of it that way
+
+01:06:08.480 --> 01:06:16.920
+yeah any other questions
+
+01:06:12.119 --> 01:06:16.920
+yeah this would be like
+
+01:06:18.160 --> 01:06:23.359
+I'm
+
+01:06:20.880 --> 01:06:28.279
+things potentially um this is something
+
+01:06:23.359 --> 01:06:30.640
+that I wanted to work on uh like a year
+
+01:06:28.279 --> 01:06:32.119
+ago and didn't get
+
+01:06:30.640 --> 01:06:34.559
+sufficient buy-in and then had to apply
+
+01:06:32.119 --> 01:06:36.880
+to grad school and all these things so
+
+01:06:34.559 --> 01:06:40.160
+it went by the wayside but
+
+01:06:36.880 --> 01:06:43.440
+definitely something to
+
+01:06:40.160 --> 01:06:45.920
+pursue um there's a lot of scope there
+
+01:06:43.440 --> 01:06:45.920
+any other
+
+01:06:47.640 --> 01:06:54.480
+questions all right so move over to
+
+01:06:51.319 --> 01:06:56.119
+results so we can find steering vectors
+
+01:06:54.480 --> 01:06:58.520
+and that's an interesting
thing
+
+01:06:56.119 --> 01:07:00.559
+um and we can find them pretty easily
+
+01:06:58.520 --> 01:07:02.559
+and for most sequences even sequences
+
+01:07:00.559 --> 01:07:04.559
+that the model hasn't seen before the
+
+01:07:02.559 --> 01:07:06.400
+underlying language model hasn't seen
+
+01:07:04.559 --> 01:07:09.640
+before
+
+01:07:06.400 --> 01:07:13.160
+um it also works for and this is kind of
+
+01:07:09.640 --> 01:07:16.799
+a negative but it also works for random
+
+01:07:13.160 --> 01:07:20.039
+sequences of very small length but it's
+
+01:07:16.799 --> 01:07:22.359
+harder to find so you can imagine if
+
+01:07:20.039 --> 01:07:24.760
+your uh steering vector is basically a
+
+01:07:22.359 --> 01:07:26.279
+giant bulldozer it doesn't matter what
+
+01:07:24.760 --> 01:07:28.640
+your model has learned similar
+
+01:07:26.279 --> 01:07:30.160
+to the probe situation if you can
+
+01:07:28.640 --> 01:07:32.559
+compress all that information of that
+
+01:07:30.160 --> 01:07:35.400
+sequence into the vector you don't
+
+01:07:32.559 --> 01:07:37.400
+really need the language model um so
+
+01:07:35.400 --> 01:07:39.559
+there are cases when you're looking at
+
+01:07:37.400 --> 01:07:40.760
+sequences of length like five seven
+
+01:07:39.559 --> 01:07:43.079
+eight something like this you can
+
+01:07:40.760 --> 01:07:45.520
+uniformly sample from the vocabulary at
+
+01:07:43.079 --> 01:07:47.359
+random with replacement generate utter
+
+01:07:45.520 --> 01:07:49.799
+garbage and find steering vectors for
+
+01:07:47.359 --> 01:07:53.200
+them it takes a little while but your model
+
+01:07:49.799 --> 01:07:55.520
+is complex enough that you can basically
+
+01:07:53.200 --> 01:07:57.960
+bulldoze your model to be able to do this
+
+01:07:55.520 --> 01:08:00.200
+even if that sequence is incredibly low
+
+01:07:57.960 --> 01:08:01.480
+likelihood under the model but it works
+
+01:08:00.200 --> 01:08:05.319
+better for things that are higher
+
+01:08:01.480 --> 01:08:07.760
+likelihood under the model um
+
+01:08:05.319 --> 01:08:09.920
+predictably I think the thing that
+
+01:08:07.760 --> 01:08:12.760
+surprised me the most was these steering
+
+01:08:09.920 --> 01:08:15.319
+vectors themselves have interpretable
+
+01:08:12.760 --> 01:08:17.960
+properties uh so distances in steering
+
+01:08:15.319 --> 01:08:20.759
+vector space reflect semantic similarity
+
+01:08:17.960 --> 01:08:23.640
+so if you have two sentences that are
+
+01:08:20.759 --> 01:08:26.719
+close um they're also close in steering
+
+01:08:23.640 --> 01:08:29.759
+vector space that's kind of nice
+
+01:08:26.719 --> 01:08:32.359
+um it does better than for example the
+
+01:08:29.759 --> 01:08:34.520
+representations one would use for
+
+01:08:32.359 --> 01:08:37.159
+probing so mean-pooled BERT hidden
+
+01:08:34.520 --> 01:08:39.600
+states like we looked at before those
+
+01:08:37.159 --> 01:08:42.080
+actually do worse than steering vectors um
+
+01:08:39.600 --> 01:08:45.799
+just a bit
+
+01:08:42.080 --> 01:08:47.880
+surprising um style transfer is possible
+
+01:08:45.799 --> 01:08:49.719
+with simple vector arithmetic so it'd be
+
+01:08:47.880 --> 01:08:52.799
+nice to say that I have a sequence I
+
+01:08:49.719 --> 01:08:56.000
+want to subtract you know negativity and
+
+01:08:52.799 --> 01:08:58.799
+add positivity for sentiment or
+
+01:08:56.000 --> 01:09:00.520
+other sorts of styles um we can do this
+
+01:08:58.799 --> 01:09:02.159
+and we can do this reasonably well in
+
+01:09:00.520 --> 01:09:05.319
+steering vector
+
+01:09:02.159 --> 01:09:07.920
+space um we can also decode from
+
+01:09:05.319 --> 01:09:10.600
+interpolations in the latent space so you
+
+01:09:07.920 --> 01:09:12.759
+take two steering vectors for two
+
+01:09:10.600 --> 01:09:14.759
+sequences you look in the middle of them
+
+01:09:12.759 --> 01:09:17.400
+you linearly interpolate between them
+
+01:09:14.759 --> 01:09:20.600
+and you decode um if the space is kind
+
+01:09:17.400 --> 01:09:22.080
+of weirdly peaky then you would have
+
+01:09:20.600 --> 01:09:23.839
+issues and what you would generate is
+
+01:09:22.080 --> 01:09:25.080
+garbage and there's no guarantee that
+
+01:09:23.839 --> 01:09:27.199
+the space should be reasonable in
+
+01:09:25.080 --> 01:09:30.480
+between but it turns out it
+
+01:09:27.199 --> 01:09:33.719
+is um here's an example of one of these
+
+01:09:30.480 --> 01:09:36.359
+style transfer cases so a very simple
+
+01:09:33.719 --> 01:09:39.239
+easy sentence we found steering
+
+01:09:36.359 --> 01:09:41.679
+vectors for the taste is excellent
+
+01:09:39.239 --> 01:09:43.640
+and we took a sample of 100 positive
+
+01:09:41.679 --> 01:09:45.359
+sentences and 100 negative sentences
+
+01:09:43.640 --> 01:09:47.159
+found their steering vectors took the
+
+01:09:45.359 --> 01:09:48.960
+mean and thought that you know that
+
+01:09:47.159 --> 01:09:51.400
+looks like the positive concept steering
+
+01:09:48.960 --> 01:09:54.040
+vector negative concept steering vector
+
+01:09:51.400 --> 01:09:56.600
+we just did vector arithmetic just did
+
+01:09:54.040 --> 01:09:59.880
+uh current steering
+
+01:09:56.600 --> 01:10:02.440
+vector uh plus negative minus positive
+
+01:09:59.880 --> 01:10:03.520
+and we got the taste is unpleasant um
+
+01:10:02.440 --> 01:10:06.960
+and
+
+01:10:03.520 --> 01:10:08.880
+similarly um in the reverse
+
+01:10:06.960 --> 01:10:12.520
+direction it turns out that the
+
+01:10:08.880 --> 01:10:15.199
+magnitude matters because um for every
+
+01:10:12.520 --> 01:10:17.800
+single sequence there's kind of an
+
+01:10:15.199 --> 01:10:20.640
+n-dimensional ball around that steering
+
+01:10:17.800 --> 01:10:23.640
+vector that we found that also decodes
+
+01:10:20.640 --> 01:10:25.920
+that specific sequence and so that shows
+
+01:10:23.640 --> 01:10:28.880
+that the space is kind of reasonably
+01:10:25.920 --> 01:10:32.320
+well formed there's of course uh
+
+01:10:28.880 --> 01:10:34.280
+a lot of weird sort of areas um and so
+
+01:10:32.320 --> 01:10:37.120
+if you go poke around in steering vector
+
+01:10:34.280 --> 01:10:38.760
+space and sort of try to sample from it
+
+01:10:37.120 --> 01:10:41.280
+eventually you'll find some weird edge
+
+01:10:38.760 --> 01:10:43.320
+cases and some garbage and repeated text
+
+01:10:41.280 --> 01:10:46.159
+and little things like
+
+01:10:43.320 --> 01:10:50.520
+this any questions here before I kind of
+
+01:10:46.159 --> 01:10:50.520
+rapid-fire through the last few
+
+01:10:50.920 --> 01:10:57.239
+things yeah like here
+
+01:10:57.400 --> 01:11:01.400
+yeah so we went uh beyond this um there
+
+01:11:00.199 --> 01:11:04.280
+was
+
+01:11:01.400 --> 01:11:07.440
+so in these specific experiments we
+
+01:11:04.280 --> 01:11:09.600
+looked at the middle of GPT-2 um so this
+
+01:11:07.440 --> 01:11:12.679
+was like layer six layer seven and at
+
+01:11:09.600 --> 01:11:15.280
+the first time step we didn't do any um
+
+01:11:12.679 --> 01:11:17.239
+like magnitude scaling and so you can
+
+01:11:15.280 --> 01:11:19.480
+imagine if you put a giant vector in
+
+01:11:17.239 --> 01:11:21.040
+there the rest of the
+
+01:11:19.480 --> 01:11:24.679
+model has never seen something of that
+
+01:11:21.040 --> 01:11:26.159
+magnitude so it's now in a weird state
+
+01:11:24.679 --> 01:11:28.280
+and it's just going to break so if you
+
+01:11:26.159 --> 01:11:30.560
+put this to like I don't know 500 or
+
+01:11:28.280 --> 01:11:32.960
+something like this it breaks it just has
+
+01:11:30.560 --> 01:11:35.239
+no idea it's like basically telling
+
+01:11:32.960 --> 01:11:37.199
+the rest of your model hey it's like a
+
+01:11:35.239 --> 01:11:38.760
+completely untrained model and it looks
+
+01:11:37.199 --> 01:11:42.000
+similar to like random performance you
+
+01:11:38.760 -->
01:11:43.840
+get repeats and things like this smaller
+
+01:11:42.000 --> 01:11:45.800
+you end up staying in this ball for the
+
+01:11:43.840 --> 01:11:47.920
+sequence two seemed pretty
+
+01:11:45.800 --> 01:11:50.199
+reasonable but we didn't spend a lot of
+
+01:11:47.920 --> 01:11:53.560
+time just like the day before the paper
+
+01:11:50.199 --> 01:11:56.600
+was due we were like two seems reasonable we
+
+01:11:53.560 --> 01:11:59.159
+went to three we went to five 10 broke
+
+01:11:56.600 --> 01:12:01.199
+five somewhat broke two seems
+
+01:11:59.159 --> 01:12:03.440
+reasonable
+
+01:12:01.199 --> 01:12:06.400
+um decent findings
+
+01:12:03.440 --> 01:12:08.639
+hopefully um cool so I'll talk about uh
+
+01:12:06.400 --> 01:12:10.920
+a similar type of work uh that came out
+
+01:12:08.639 --> 01:12:13.000
+more recently on inference time
+
+01:12:10.920 --> 01:12:14.159
+intervention so basically they use some
+
+01:12:13.000 --> 01:12:16.719
+of the ideas that we talked about
+
+01:12:14.159 --> 01:12:18.840
+earlier they use linear probes um to
+
+01:12:16.719 --> 01:12:20.560
+find attention heads that correspond to a
+
+01:12:18.840 --> 01:12:23.600
+desired attribute they did this for
+
+01:12:20.560 --> 01:12:26.440
+TruthfulQA so uh their hope was to find
+
+01:12:23.600 --> 01:12:28.639
+truthful directions in latent space
+
+01:12:26.440 --> 01:12:31.639
+um and then they shifted the attention
+
+01:12:28.639 --> 01:12:33.199
+head activations um during inference
+
+01:12:31.639 --> 01:12:35.280
+along the directions determined by the
+
+01:12:33.199 --> 01:12:38.280
+probes um so what this kind of looks
+
+01:12:35.280 --> 01:12:40.280
+like is you take your attention heads
+
+01:12:38.280 --> 01:12:42.440
+you probe them so you stick a classifier on
+
+01:12:40.280 --> 01:12:44.360
+top um this classifier learns to
+
+01:12:42.440 --> 01:12:47.679
+disentangle sort of truthful and
+
+01:12:44.360 --> 01:12:50.239
+untruthful and now you have um
now you
+
+01:12:47.679 --> 01:12:52.080
+have a hyperplane and then you can move
+
+01:12:50.239 --> 01:12:54.320
+orthogonally to this hyperplane in the
+
+01:12:52.080 --> 01:12:55.920
+direction depending on which way you
+
+01:12:54.320 --> 01:12:58.080
+want to shift so if you want to move
+
+01:12:55.920 --> 01:13:02.040
+towards truthful you can move in that
+
+01:12:58.080 --> 01:13:04.400
+direction or away um and they do this
+
+01:13:02.040 --> 01:13:07.560
+it works pretty well um I think they do
+
+01:13:04.400 --> 01:13:09.679
+this for a GPT model and maybe a LLaMA
+
+01:13:07.560 --> 01:13:12.960
+model um but I can't remember the
+
+01:13:09.679 --> 01:13:15.960
+exact details um and it's a similar
+
+01:13:12.960 --> 01:13:21.040
+intervention um they basically add this
+
+01:13:15.960 --> 01:13:23.400
+vector um that they found and they
+
+01:13:21.040 --> 01:13:25.679
+have a little note on scaling if
+
+01:13:23.400 --> 01:13:27.719
+the magnitude of
+
+01:13:25.679 --> 01:13:30.000
+the thing is too much things break so
+
+01:13:27.719 --> 01:13:33.880
+they have like a hyperparameter
+
+01:13:30.000 --> 01:13:36.800
+search for the sort of magnitude of
+
+01:13:33.880 --> 01:13:38.840
+activation um but it's sort of a very
+
+01:13:36.800 --> 01:13:41.520
+similar approach to what we did but this
+
+01:13:38.840 --> 01:13:43.040
+focuses on specific attention heads and
+
+01:13:41.520 --> 01:13:44.440
+they don't do this for all the attention
+
+01:13:43.040 --> 01:13:46.600
+heads so back to like your question
+
+01:13:44.440 --> 01:13:49.080
+earlier do attention heads specialize it
+
+01:13:46.600 --> 01:13:52.360
+seems like they do and so there are many
+
+01:13:49.080 --> 01:13:54.320
+of them that uh have like no probing
+
+01:13:52.360 --> 01:13:57.719
+accuracy or limited probing accuracy and
+
+01:13:54.320 --> 01:13:59.400
+actually um are like distractors for the
+
+01:13:57.719 -->
01:14:03.400
+truthful
+
+01:13:59.400 --> 01:14:03.400
+direction any questions
+
+01:14:06.040 --> 01:14:11.760
+here cool so more activation
+
+01:14:09.120 --> 01:14:14.760
+manipulation so there's uh some work
+
+01:14:11.760 --> 01:14:17.600
+recently on contrastive steering vectors
+
+01:14:14.760 --> 01:14:19.480
+so the way we did this like sentiment
+
+01:14:17.600 --> 01:14:21.080
+steering was we had some positive
+
+01:14:19.480 --> 01:14:23.040
+sentences some negative sentences they
+
+01:14:21.080 --> 01:14:24.520
+weren't tied together in any reasonable
+
+01:14:23.040 --> 01:14:26.360
+way we found their steering vectors
+
+01:14:24.520 --> 01:14:30.040
+separately you could imagine the case
+
+01:14:26.360 --> 01:14:33.159
+and maybe a more useful case um with two
+
+01:14:30.040 --> 01:14:36.280
+prompts that um you can design that go
+
+01:14:33.159 --> 01:14:38.000
+two different ways you can sort of find
+
+01:14:36.280 --> 01:14:42.280
+their representations and do the
+
+01:14:38.000 --> 01:14:45.679
+manipulation the difference here is um
+
+01:14:42.280 --> 01:14:48.800
+it's done individually rather than um for a
+
+01:14:45.679 --> 01:14:52.400
+whole concept or a whole attribute and
+
+01:14:48.800 --> 01:14:54.400
+the value here is your context is um
+
+01:14:52.400 --> 01:14:56.600
+preserved so if you're doing something
+
+01:14:54.400 --> 01:14:58.239
+like you know you're doing retrieval
+
+01:14:56.600 --> 01:15:00.440
+based things now you have some sort of
+
+01:14:58.239 --> 01:15:03.360
+document and then you have a question
+
+01:15:00.440 --> 01:15:05.040
+and if you want to
+
+01:15:03.360 --> 01:15:07.560
+ask it in two different ways for two
+
+01:15:05.040 --> 01:15:08.880
+different things this would be a much
+
+01:15:07.560 --> 01:15:11.239
+better approach if you want to use
+
+01:15:08.880 --> 01:15:14.600
+steering vectors than the stuff I was
+
+01:15:11.239 --> 01:15:16.159
+doing um and it seems
to work a little
+
+01:15:14.600 --> 01:15:17.880
+bit better they didn't compare against
+
+01:15:16.159 --> 01:15:19.400
+our things because it's not like an
+
+01:15:17.880 --> 01:15:21.880
+apples-to-apples comparison but it seems
+
+01:15:19.400 --> 01:15:23.960
+to work better and be more general um
+
+01:15:21.880 --> 01:15:25.560
+and be more
+
+01:15:23.960 --> 01:15:27.840
+useful
+
+01:15:25.560 --> 01:15:27.840
+any
+
+01:15:31.400 --> 01:15:37.679
+questions cool so what can model
+
+01:15:35.080 --> 01:15:40.080
+interpretability give us these
+
+01:15:37.679 --> 01:15:41.960
+are my concluding remarks so hopefully
+
+01:15:40.080 --> 01:15:43.920
+we get a better understanding of how
+
+01:15:41.960 --> 01:15:46.840
+language models work their
+
+01:15:43.920 --> 01:15:49.520
+internals their structure um we get to
+
+01:15:46.840 --> 01:15:52.800
+understand uh kind of why they do really
+
+01:15:49.520 --> 01:15:55.239
+well this is still very
+
+01:15:52.800 --> 01:15:57.320
+unclear um and hopefully we find
+
+01:15:55.239 --> 01:15:59.400
+lightweight methods to control and steer
+
+01:15:57.320 --> 01:16:03.360
+models as models become more and more
+
+01:15:59.400 --> 01:16:05.280
+useful um and impact more users
+
+01:16:03.360 --> 01:16:09.360
+we need better ways to control and steer
+
+01:16:05.280 --> 01:16:13.120
+them um and it's unclear how much
+
+01:16:09.360 --> 01:16:15.360
+industry will devote to these things um
+
+01:16:13.120 --> 01:16:18.080
+so it might be the role of academia to
+
+01:16:15.360 --> 01:16:21.239
+do more science in order to figure
+
+01:16:18.080 --> 01:16:23.920
+out how to control and steer these
+
+01:16:21.239 --> 01:16:25.520
+better um and hopefully we can also find
+
+01:16:23.920 --> 01:16:29.199
+potential alternatives or
+
+01:16:25.520 --> 01:16:34.840
+complementary methods to do alignment
+
+01:16:29.199 --> 01:16:37.480
+um RLHF is kind of expensive
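The contrastive steering idea described just above can be sketched in a few lines. Everything here is a hypothetical stand-in (a tiny vocabulary, a mean-of-embeddings "hidden state", a dot-product readout), not the frozen GPT-2 setup from the actual work:

```python
import numpy as np

# Toy sketch of contrastive activation steering (assumption: the "frozen
# model" is just a mean of fixed random token embeddings, and the readout
# is a dot product -- a real setup would use a frozen LM's hidden states).
rng = np.random.default_rng(0)
d = 8
vocab = {"the": 0, "taste": 1, "is": 2, "excellent": 3, "unpleasant": 4}
emb = rng.normal(size=(len(vocab), d))  # frozen "embeddings"

def hidden(prompt):
    """Stand-in hidden state: mean of the prompt's token embeddings."""
    return np.mean([emb[vocab[t]] for t in prompt.split()], axis=0)

# Contrastive pair: same context, opposite continuations.
steer = hidden("the taste is excellent") - hidden("the taste is unpleasant")

# At "inference", nudge the context's hidden state along that direction.
alpha = 2.0                         # magnitude matters, as noted in the talk
h = hidden("the taste is")
h_steered = h + alpha * steer

def positivity(h_):
    """Readout: preference for 'excellent' over 'unpleasant'."""
    return float((emb[vocab["excellent"]] - emb[vocab["unpleasant"]]) @ h_)

print(positivity(h_steered) > positivity(h))  # → True
```

Because the pair shares its context and differs only in the continuation, the difference vector isolates the attribute while leaving the context representation alone, which is why this variant preserves context better than the whole-concept arithmetic described earlier.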
um and if if + +01:16:34.840 --> 01:16:40.080 +we could do this with limited data and + +01:16:37.480 --> 01:16:42.760 +um exploit structure um and information + +01:16:40.080 --> 01:16:46.400 +that's already in the model more so than + +01:16:42.760 --> 01:16:48.600 +than these methods um maybe maybe we can + +01:16:46.400 --> 01:16:50.920 +align them better and these things don't + +01:16:48.600 --> 01:16:52.480 +have to be uh Alternatives they can be + +01:16:50.920 --> 01:16:53.840 +complimentary to to + +01:16:52.480 --> 01:16:57.159 +[Music] + +01:16:53.840 --> 01:17:00.040 +rhm um here's some resources this is an + +01:16:57.159 --> 01:17:01.280 +extremely incomplete group but here are + +01:17:00.040 --> 01:17:04.080 +some folks that work on model + +01:17:01.280 --> 01:17:07.040 +interoperability there's many of these + +01:17:04.080 --> 01:17:09.120 +um I cited some some work from some of + +01:17:07.040 --> 01:17:11.280 +these teams but um there's a lot of + +01:17:09.120 --> 01:17:13.280 +people working on it and in the last + +01:17:11.280 --> 01:17:15.040 +like year there's been kind of an + +01:17:13.280 --> 01:17:17.480 +explosion especially in the mechanistic + +01:17:15.040 --> 01:17:21.639 +interpretability kind of World um Sasha + +01:17:17.480 --> 01:17:23.800 +Rush had a recent tweet that uh asked + +01:17:21.639 --> 01:17:25.320 +like prospective grad students what is + +01:17:23.800 --> 01:17:27.239 +the topic that they're most excited + +01:17:25.320 --> 01:17:29.880 +about and mechanistic interpretability + +01:17:27.239 --> 01:17:33.960 +was a thing that seemed to have won out + +01:17:29.880 --> 01:17:37.040 +um so I encourage you to to kind of dive + +01:17:33.960 --> 01:17:38.719 +into this literature and read some of + +01:17:37.040 --> 01:17:41.679 +the papers if you're if you're excited + +01:17:38.719 --> 01:17:45.199 +about it and yeah thanks for your + +01:17:41.679 --> 01:17:45.199 +attention and that's all I + +01:17:45.400 --> 
01:17:48.400 +have
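The steering-vector extraction described in the lecture can be illustrated with a deliberately shrunken toy. Assumptions to note: the frozen GPT-2 is replaced here by a hypothetical frozen linear readout `W` over fixed per-step vectors `P` (which makes the objective convex), and the target sequence is filtered to be achievable with a margin so the toy is guaranteed feasible; only the steering vector `z` is ever updated, exactly as in the talk's recipe:

```python
import numpy as np

# Toy, hypothetical sketch of steering-vector extraction: a frozen linear
# readout W over fixed per-step vectors P stands in for the frozen GPT-2.
rng = np.random.default_rng(1)
V, d, T = 10, 32, 5                    # vocab size, hidden size, length
W = 0.3 * rng.normal(size=(V, d))      # frozen readout (never updated)
P = rng.normal(size=(T, d))            # frozen per-step vectors

# Choose a target sequence that some vector produces with a clear margin,
# so this toy is guaranteed feasible (the real method has no such step).
while True:
    z_true = rng.normal(size=d)
    logits = (z_true + P) @ W.T        # (T, V)
    top2 = np.sort(logits, axis=1)[:, -2:]
    if (top2[:, 1] - top2[:, 0]).min() > 1.0:
        break
target = logits.argmax(axis=1)

z = rng.uniform(-0.1, 0.1, size=d)     # small uniform init, as in the talk
for _ in range(80_000):                # gradient descent on cross-entropy
    logits = (z + P) @ W.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    p[np.arange(T), target] -= 1.0     # dCE/dlogits at each step
    z -= 0.04 * (p @ W).sum(axis=0)    # update only the steering vector

# "Greedy decoding" with the learned z recovers the target sequence.
decoded = ((z + P) @ W.T).argmax(axis=1)
print(decoded.tolist() == target.tolist())  # → True
```

In the real setup the gradient flows through a frozen transformer and, as noted in the talk, around eight to ten optimizer iterations suffice; the many plain gradient steps here are just the simplest stand-in for that.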