diff --git "a/transcript/allocentric_MuRVOQY8KoY.txt" "b/transcript/allocentric_MuRVOQY8KoY.txt" new file mode 100644--- /dev/null +++ "b/transcript/allocentric_MuRVOQY8KoY.txt" @@ -0,0 +1,1376 @@ +[0.000 --> 12.000] Here's the agenda for today. +[12.000 --> 17.240] As usual, a bunch of announcements in red assignment 4 was graded. +[17.240 --> 23.080] There will be comments showing up online on stellar soon on any of you didn't get a near +[23.080 --> 25.080] perfect score on it. +[25.080 --> 29.920] And I'll also be going over a little bit of it in a moment. +[29.920 --> 33.560] And then once we do that, we're going to talk about navigation, how we know where we are +[33.560 --> 38.040] and how to get from here to some place else, which is much more awesome than it sounds +[38.040 --> 40.320] at first, as you will see. +[40.320 --> 43.320] Okay, so quick review. +[43.320 --> 45.680] Okay, so what was the key point? +[45.680 --> 49.520] Why did I assign the HACSPE 2001 article for you guys to read? +[49.520 --> 54.560] It presents this important challenge to the functional specificity of the face area and +[54.560 --> 55.560] the place area. +[55.560 --> 57.200] What was that challenge? +[57.200 --> 59.200] What was HACSPE's key point? +[59.200 --> 61.200] Yes, as of all. +[61.200 --> 72.200] So whether the EPA, use just, it's just a has a preference for linearity of scenes, but +[72.200 --> 74.200] it's actually a scene for it. +[74.200 --> 76.200] It's not truly technical. +[76.200 --> 83.400] Yeah, he wasn't worrying about rectilinearity so much back then, but his point was that if +[83.400 --> 88.880] you, we shouldn't care just about the overall magnitude of response of a region. +[88.880 --> 94.160] Like, okay, it's nice if the face area responds like this to faces and like that to objects, +[94.160 --> 101.800] but even if it responds low and the same to cars and chairs, it might still have information +[101.800 --> 106.560] to enable you to distinguish cars from chairs if the pattern of response across voxels in +[106.560 --> 110.640] that region was stably different for cars and chairs. +[110.640 --> 112.080] Okay, that's really key. +[112.080 --> 114.680] We'll go over it a few more points, but that's essential. +[114.680 --> 118.720] Right, a lot of the details that I'm going to teach you that go by in class don't matter +[118.720 --> 122.080] but I really want you guys to understand the VPA and that's the nub of it. +[122.080 --> 129.120] Okay, so the idea is that selective his kind of claim is that selective regions like the +[129.120 --> 134.400] face area contain information about non-preferred stimuli, that is like non-faces for the face +[134.400 --> 138.280] area or non-places for the place area. +[138.280 --> 142.760] And because they contain information, those regions don't care only about their preferred +[142.760 --> 143.760] category. +[143.760 --> 148.480] So why does Camerchor get off saying that the fae is only about faces and the PPA is only +[148.480 --> 152.040] about places if we can see information about other things in those regions. +[152.040 --> 153.720] Okay, that's a really important critique. +[153.720 --> 155.720] That's why we're spending time on it. +[155.720 --> 157.400] Okay, okay. +[157.400 --> 165.400] Next, what kind of empirical data might be an answer to Haxby's charge? 
+[165.400 --> 170.040] I've presented at least three different kinds of data that can address this and say,
+[170.040 --> 174.760] hey, wait a minute, you know, you have a point. What kind of data could speak to that
+[174.760 --> 177.440] and respond to Haxby?
+[177.440 --> 181.080] We didn't actually talk about this explicitly in class, but think about it.
+[181.080 --> 183.440] Here's the claim he makes.
+[183.440 --> 184.560] What might we say, right?
+[184.560 --> 186.440] So that's empirically true.
+[186.440 --> 192.560] Like, you look in the FFA, even in my own data, I can distinguish chairs from shoes a little
+[192.560 --> 195.120] teeny bit in the FFA.
+[195.120 --> 198.000] Okay, so that empirical claim is true.
+[198.000 --> 204.600] Why might it nonetheless be the case that the face area is really only engaged in face
+[204.600 --> 205.600] perception?
+[205.600 --> 210.280] What other data have you heard in here that might make you think that?
+[210.280 --> 211.280] Yes, Ben?
+[211.280 --> 218.520] The features that are diagnostic of faces are also present a little bit in stimuli that aren't
+[218.520 --> 224.520] faces, so they carry some information about chairs or cars too.
+[224.520 --> 226.520] Absolutely.
+[226.520 --> 232.680] So yes, so put another way, even if you had a perfect coder for faces, you know, like
+[232.680 --> 237.840] take your best deep net for face recognition, VGG-Face, it can distinguish chairs and
+[237.840 --> 239.160] shoes too, right?
+[239.160 --> 244.500] The features that you use to represent faces will slightly discriminate between other
+[244.500 --> 245.500] non-face objects.
+[245.500 --> 250.880] So the fact that we can see that information in itself isn't strong evidence that that
+[250.880 --> 254.840] region isn't selective for face perception.
+[254.840 --> 255.840] Absolutely.
+[255.840 --> 257.840] What else?
+[257.840 --> 258.840] Yeah.
+[258.840 --> 259.840] Okay.
+[259.840 --> 265.280] So with transcranial magnetic stimulation, when you stimulate the OFA and look
+[265.280 --> 269.760] at a face, that affects face perception, but when you apply it while looking at other objects, it's not going
+[269.760 --> 270.760] to affect perception.
+[270.760 --> 271.760] Exactly.
+[271.760 --> 275.400] And so what does that tell you? Okay, so there's pattern information in there about
+[275.400 --> 283.240] other things beyond faces, but apparently it's not used, right?
+[283.240 --> 286.000] Now with every bit of evidence, you can always argue back.
+[286.000 --> 288.760] People can say, well, TMS, those effects are tiny.
+[288.760 --> 293.080] We didn't have the power to detect it, blah, blah, blah, blah. But at least absolutely,
+[293.080 --> 294.960] you're right, TMS argues against that.
+[294.960 --> 295.960] What else?
+[295.960 --> 299.840] Or at least there's a way to argue against it, and the Pitcher paper that I assigned and
+[299.840 --> 304.440] other papers that we've talked about in here provide some evidence that actually, at least
+[304.440 --> 309.960] the occipital face area really is only causally involved in face perception, even if there's
+[309.960 --> 312.560] information in there about other things.
+[312.560 --> 313.560] Okay.
+[313.560 --> 314.560] What else?
+[314.560 --> 316.560] What other methods can address this?
+[316.560 --> 317.560] Yeah.
+[317.560 --> 324.560] So there's direct stimulation, where even when you present a non-face, it actually
+[324.560 --> 326.560] makes you perceive a face.
+[326.560 --> 327.560] Exactly.
+[327.560 --> 328.560] Exactly.
+[328.560 --> 329.560] So these are both causal tests, right?
+[329.560 --> 333.960] Okay, there's information in there, but is it causally used in behavior, right?
+[333.960 --> 340.440] TMS suggests not. The little bit of direct intracranial stimulation data that I showed you
+[340.440 --> 345.800] also suggests the causal effects when you stimulate that region are specific to face
+[345.800 --> 346.800] perception.
+[346.800 --> 350.800] Suggesting that even if there's pattern information in there, it's not doing anything
+[350.800 --> 354.760] important, because we can mess it up and nothing happens to the perception of things that
+[354.760 --> 355.760] aren't faces.
+[355.760 --> 356.760] Absolutely.
+[356.760 --> 357.760] What else?
+[357.760 --> 362.800] We talked about it very briefly a few weeks ago.
+[362.800 --> 363.800] Yeah.
+[363.800 --> 369.280] So if you remove the region, or it gets damaged, it just completely makes a person incapable
+[369.280 --> 370.280] of perceiving faces.
+[370.280 --> 372.800] That is, like, a prosopagnosic.
+[372.800 --> 376.200] Yes, but the crucial part, yes.
+[376.200 --> 381.200] And the crucial way to address Haxby would be what further aspect of that?
+[381.200 --> 382.200] Yes.
+[382.200 --> 386.960] And by the way, we don't remove the area in humans, but occasionally we find a human who
+[386.960 --> 390.040] had a lesion there due to a stroke, and then we study them.
+[390.040 --> 392.200] So can they still discriminate other categories?
+[392.200 --> 393.200] Exactly.
+[393.200 --> 394.200] Exactly.
+[394.200 --> 401.120] So all three lines of evidence, from studies of prosopagnosia, electrical stimulation
+[401.120 --> 405.760] directly on the brain, and TMS, all can provide evidence to various degrees.
+[405.760 --> 410.040] Again, one can quibble about each of these particular studies, but all of those suggest
+[410.040 --> 414.600] that even though there's information in the pattern, Haxby's right, there's information
+[414.600 --> 419.480] in there about other things that aren't faces, the only causal effects when you mess
+[419.480 --> 423.240] with that region are on faces, not on other things.
+[423.240 --> 427.360] That suggests that pattern information is, as they sometimes say in philosophical circles,
+[427.360 --> 429.000] epiphenomenal.
+[429.000 --> 435.200] That is, it's just not related to behavior and perception.
+[435.200 --> 436.200] Does that make sense?
+[436.200 --> 437.200] Okay.
+[437.200 --> 438.800] Moving along.
+[438.800 --> 443.480] How can we then use Haxby's method to not just engage in this little fight about the
+[443.480 --> 449.680] FFA and how specific it is, but to harness this method and ask other
+[449.680 --> 452.760] interesting questions from functional MRI data?
+[452.760 --> 458.320] How can we use it to find out, for example, does the place area discriminate, say, beach
+[458.320 --> 459.640] scenes from city scenes?
+[459.640 --> 461.080] We want to know what's represented in there.
+[461.080 --> 464.760] How can we use this method to find out?
+[464.760 --> 465.760] Yes, Jimmy.
+[465.760 --> 471.760] You could train a decoder and see if it can tell, from the pattern, the
+[471.760 --> 482.440] difference between the city scenes and the beach
+[482.440 --> 483.440] scenes?
+[483.440 --> 484.440] Exactly.
+[484.440 --> 485.440] Exactly.
+[485.440 --> 490.480] So, we talked about decoding methods last time as a way to use machine learning to look
+[490.480 --> 495.520] at the pattern of response in a region of the brain and train the decoder so it knows
+[495.520 --> 500.360] what the response looks like during viewing of beach scenes, train
+[500.360 --> 503.920] it so it knows what the response in that region looks like when you're looking at city
+[503.920 --> 508.440] scenes, and then take a new pattern and say, okay, is this more like the beach pattern or
+[508.440 --> 510.040] is it more like the city pattern?
+[510.040 --> 512.240] And that's how you could decode from that region.
+[512.240 --> 513.240] Yes?
+[513.240 --> 514.240] But that doesn't tell us much, right?
+[514.240 --> 515.240] It doesn't tell...
+[515.240 --> 516.240] It's not telling the...
+[516.240 --> 522.160] I mean, we know that there's residual information everywhere, so a discriminator can do better
+[522.160 --> 525.280] than chance in any region, on any problem.
+[525.280 --> 526.280] So...
+[526.280 --> 528.040] We have a true nihilist here.
+[528.040 --> 531.160] No, it's a good question.
+[531.160 --> 536.440] It's not the case that you can discriminate anything based on any region of the brain.
+[536.440 --> 537.880] So there are some constraints.
+[537.880 --> 541.640] There's some things you can find in some places and other things you can find in other
+[541.640 --> 544.560] places, and they're not uniformly distributed over the brain.
+[544.560 --> 546.040] However,
+[546.040 --> 550.520] the point I just made about, yes, there's discriminative information in the face area
+[550.520 --> 557.440] about non-faces, but maybe it's not used, should raise a huge caveat about this whole method.
+[557.440 --> 558.720] How do we ever know?
+[558.720 --> 560.800] We see some discriminative information.
+[560.800 --> 564.840] How do we know whether it's actually used by the brain, part of the brain's own code
+[564.840 --> 570.440] for information, or just epiphenomenal garbage that's a byproduct of something else?
+[570.440 --> 573.560] It's a really important question about all of pattern analysis.
+[574.080 --> 578.760] We do it anyway because beggars can't be choosers in terms of methods in human
+[578.760 --> 579.920] cognitive neuroscience.
+[579.920 --> 583.160] And we want to know desperately what's represented in each region.
+[583.160 --> 584.160] So we do this.
+[584.160 --> 589.840] But whenever you see these lovely "I can decode x from y" things, you should always be wondering:
+[589.840 --> 594.480] who knows if the fact that you, the scientist, can decode it from that region means the
+[594.480 --> 598.480] brain itself is reading that information out of that region?
+[598.480 --> 600.040] Big important question.
+[600.040 --> 601.040] Okay.
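+
+To make Jimmy's decoder suggestion concrete, here is a minimal sketch of the decoding idea in Python; the array names and the simple correlation-based classifier are illustrative assumptions, not the course's actual code.
+
+    # Hypothetical sketch: classify a new voxel pattern by which training
+    # template it correlates with more (a simple correlation-based decoder).
+    import numpy as np
+
+    def decode(new_pattern, beach_template, city_template):
+        """Each argument is a 1-D array: the response across voxels in one region."""
+        r = lambda a, b: np.corrcoef(a, b)[0, 1]  # Pearson correlation of two patterns
+        if r(new_pattern, beach_template) > r(new_pattern, city_template):
+            return "beach"
+        return "city"
+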
+[602.040 --> 604.280] All right, put another way.
+[604.280 --> 607.880] So Jimmy mentioned just decoding in general, and that's absolutely right.
+[607.880 --> 611.640] But to directly harness the Haxby version of this, what would we do?
+[611.640 --> 615.280] First, we would functionally localize the PPA.
+[615.280 --> 619.560] By scanning subjects looking at scenes and objects, find that region in each subject.
+[619.560 --> 624.560] Then we would collect the pattern of response across voxels in the PPA while subjects were
+[624.560 --> 626.560] looking at, say, beach scenes.
+[626.560 --> 630.960] And so if this is the PPA, this is the pattern of response across voxels in that region
+[630.960 --> 636.040] when they're looking at beach scenes, fake data, obviously, just to give you the idea.
+[636.040 --> 639.600] So we would split the data in half, even runs, odd runs.
+[639.600 --> 641.160] That would be like even runs.
+[641.160 --> 644.000] Then we get another pattern for odd runs.
+[644.000 --> 648.520] And then we get another pattern for when they're looking at city scenes in even runs
+[648.520 --> 653.800] and another pattern when they're looking at city scenes in odd runs.
+[653.800 --> 659.280] So then, once we have those four patterns, what is the key prediction?
+[659.280 --> 663.840] If using Haxby's correlation method, what is the key prediction?
+[663.840 --> 668.400] If the pattern of response in the PPA can discriminate beach scenes from city
+[668.400 --> 671.240] scenes, what should we see from these patterns?
+[671.240 --> 675.560] What's the key prediction?
+[675.560 --> 677.920] Claire.
+[677.920 --> 678.920] Key prediction.
+[678.920 --> 681.320] You have these four patterns in the PPA.
+[681.320 --> 685.480] Now you want to know, is there information in there that enables you to discriminate
+[685.480 --> 686.800] beach scenes from city scenes?
+[686.800 --> 692.040] Is beach-even more similar to beach-odd than beach-even is to the city patterns?
+[692.040 --> 693.040] Exactly.
+[693.040 --> 694.040] Exactly.
+[694.040 --> 698.880] It sounds all complicated and it's easy to get confused, but the nub of the
+[698.880 --> 700.320] idea is really simple.
+[700.320 --> 703.040] It just says, look, the beach patterns are stable.
+[703.040 --> 706.800] We do beach a few times, we get the same pattern, more or less.
+[706.800 --> 711.280] We do city, we get a different pattern, and we keep doing city, we get the same pattern,
+[711.280 --> 716.080] more or less, and the beach pattern and the city pattern are different.
+[716.080 --> 720.320] So that's the nub of the idea, and so you can implement it with decoding methods, or the
+[720.320 --> 727.360] Haxby version is just to ask whether the correlation between the two beach patterns, beach-even and beach-odd,
+[727.360 --> 735.440] is higher than the correlation between one of the beaches and one of the cities.
+[735.440 --> 740.600] Just asking, are they stably similar within a category and stably different
+[740.600 --> 741.800] from another category?
+[741.800 --> 744.120] Does that make sense?
+[744.360 --> 745.360] Okay.
+[745.360 --> 750.160] Boom, this is just a variant of this thing I showed you guys before.
+[750.160 --> 755.080] We just harness this to ask whether that region can discriminate.
+[755.080 --> 757.160] Okay, and I just said all of this.
+[757.160 --> 761.440] Okay, if you still feel shaky on this, there's a few things you can do.
+[761.440 --> 767.600] A version of my little lecture on this method is here at my website.
+[767.600 --> 770.760] You can look at that, it's just like six minutes and it's basically what I did before, but
+[770.760 --> 773.120] if you want to go over it again, there it is.
+[773.120 --> 777.040] You can reread the Haxby paper, which I know is not super easy, but it's actually nicely
+[777.040 --> 780.600] written, and if you read it carefully, it explains the method pretty clearly.
+[780.600 --> 784.680] You can talk to me or a TA, and we'll get back to this question of whether we should do
+[784.680 --> 788.320] a whole MATLAB-based problem set on this.
+[788.320 --> 789.320] All right?
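+
+For anyone who wants the method spelled out in code, here is a minimal sketch of the split-half correlation analysis just described; it's written in Python rather than MATLAB, and the variable names and shapes are hypothetical, not the actual assignment code or data.
+
+    # Hypothetical sketch of Haxby's correlation method: the region discriminates
+    # beach from city if within-category correlations beat between-category ones.
+    import numpy as np
+
+    def discriminates(beach_even, beach_odd, city_even, city_odd):
+        """Each argument: 1-D array of responses across voxels in the ROI
+        (e.g., the PPA), averaged within even or odd runs for one condition."""
+        r = lambda a, b: np.corrcoef(a, b)[0, 1]
+        within = (r(beach_even, beach_odd) + r(city_even, city_odd)) / 2
+        between = (r(beach_even, city_odd) + r(city_even, beach_odd)) / 2
+        return within > between  # the key prediction
+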
+[789.320 --> 793.320] Okay, let's move on and talk about navigation.
+[793.320 --> 797.920] Okay, this is a monarch butterfly.
+[797.920 --> 801.160] It weighs about half a gram.
+[801.160 --> 809.000] And yet, each fall, the monarch migrates over 2,000 miles from the USA and Canada down
+[809.000 --> 810.760] to Mexico.
+[810.760 --> 817.120] In fact, a single monarch flies 50 miles in a single day.
+[817.120 --> 821.440] It's pretty amazing for this tiny little beautiful delicate thing.
+[821.440 --> 826.160] Even more amazing, it flies to a very specific forest in Mexico.
+[826.160 --> 830.960] It's just a few acres in size, and it arrives at that particular forest.
+[830.960 --> 836.360] Now that's already amazing, but here's the part that is just totally mind-blowing.
+[836.360 --> 842.640] And that is, and it flies back north in the spring, and that is that this whole cycle
+[842.640 --> 846.080] takes four generations to complete.
+[846.080 --> 850.480] And that means that the monarch that starts up in Canada and flies down to that forest
+[850.480 --> 858.080] in Mexico, one monarch does that, is the great-great-grandkid of its ancestor that last
+[858.080 --> 860.600] went on that route.
+[860.600 --> 863.400] Put that in your head and smoke it.
+[863.400 --> 864.920] That's pretty amazing.
+[864.920 --> 865.920] Okay?
+[865.920 --> 869.120] All right.
+[869.120 --> 872.640] Consider the female loggerhead turtle.
+[872.640 --> 879.840] She hatches at a beach and goes out in the sea and swims around in the sea for 20 years
+[879.840 --> 884.680] before she comes back, 20 years later, for the first time to the beach that she hatched
+[884.680 --> 885.680] at.
+[885.680 --> 888.760] Okay?
+[888.760 --> 893.040] Now that's pretty amazing, but some mothers miss by 20 miles.
+[893.040 --> 896.840] They go to the wrong island or the wrong beach on the same island.
+[896.840 --> 897.840] Okay?
+[897.840 --> 900.760] And so you might think, okay, it's pretty good.
+[900.760 --> 902.560] It's not amazing.
+[902.560 --> 904.200] But here's the thing.
+[904.200 --> 909.440] The wrong beach that those mothers go to is the exactly right beach had the Earth's magnetic
+[909.440 --> 912.800] field not shifted slightly over those 20 years.
+[912.800 --> 917.120] They're exactly precise, but they just don't compensate for the shift in the Earth's
+[917.120 --> 920.120] magnetic field.
+[920.120 --> 922.120] Okay?
+[922.120 --> 923.440] Here's a bat.
+[923.440 --> 930.000] This bat maintains its sense of direction even while it flies 30 to 50 miles in a single
+[930.000 --> 933.400] night in the dark, catching food.
+[933.400 --> 934.400] Okay?
+[934.400 --> 938.400] And it maintains its sense of direction even though it's flying around in all different
+[938.400 --> 941.240] orientations in three dimensions.
+[941.240 --> 949.120] And even as it flips over and lands to perch on the surface of a cave, it doesn't get confused
+[949.120 --> 951.400] by being upside down.
+[951.400 --> 953.680] Okay?
+[953.680 --> 956.920] This is Cataglyphis, the Tunisian desert ant.
+[956.920 --> 958.000] These guys are amazing.
+[958.000 --> 964.200] They crawl around on the surface of the Tunisian desert where it's 140 degrees in the daytime.
+[964.200 --> 967.120] They have to crawl around up there to forage for food.
+[967.120 --> 971.080] And then, because it's so damn hot, as soon as they find food, they zoom back to their nest
+[971.080 --> 973.760] and go down in the nest where it's cooler.
+[973.760 --> 979.920] So here is a track of Cataglyphis starting at point A and foraging.
+[979.920 --> 986.160] He's meandering around looking for food, going along this whole crazy path to point B.
+[986.160 --> 992.440] And then if he finds food at point B, boom, straight line back exactly to the nest.
+[992.440 --> 998.040] Now we might ask, how does Cataglyphis keep track, as he's doing all this stuff, of where
+[998.040 --> 1001.600] his heading is back to his nest?
+[1001.600 --> 1005.440] The first thing you might think of is things like what it looks like, maybe there are landmarks,
+[1005.440 --> 1009.280] maybe there are odors, but no.
+[1009.280 --> 1010.840] He doesn't use any of those things.
+[1010.840 --> 1017.400] And we know that because when scientists who have set up this measurement device capture
+[1017.400 --> 1022.400] Cataglyphis after he goes out on this tortuous path and finds the feeding station, they
+[1022.400 --> 1026.680] capture him and move him across the desert, on which they've drawn all these grid lines
+[1026.680 --> 1028.600] for the convenience of their experiment.
+[1028.600 --> 1030.120] And they release him here.
+[1030.120 --> 1031.920] And what does Cataglyphis do?
+[1031.920 --> 1036.800] He goes on exactly the correct vector.
+[1036.800 --> 1039.800] No landmarks, no relevant odors.
+[1039.800 --> 1045.080] And yet he's obviously encoded the exact vector of how to get home.
+[1045.080 --> 1047.160] Think about what that entails and what's involved.
+[1047.160 --> 1048.160] Okay.
+[1048.160 --> 1050.160] The same vector with respect to...
+[1050.160 --> 1051.160] north?
+[1051.160 --> 1052.680] With respect to, yes.
+[1052.680 --> 1053.680] Yes.
+[1053.680 --> 1060.360] With respect to, like, absolute external direction, absolutely.
+[1060.360 --> 1062.640] Okay.
+[1062.640 --> 1065.680] So that's what I just said.
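+
+A toy sketch of the path integration the ant seems to be doing: keep a running sum of your own displacement vectors, and the home vector is just the negation of that sum. The step lengths and headings below are invented for illustration; nothing here is from the lecture itself.
+
+    import numpy as np
+
+    position = np.zeros(2)  # displacement from the nest (x, y)
+    # A meandering outbound path: (heading in degrees, distance) per leg.
+    for heading_deg, distance in [(40, 3.0), (160, 2.0), (275, 4.5)]:
+        theta = np.radians(heading_deg)
+        position += distance * np.array([np.cos(theta), np.sin(theta)])
+
+    home_vector = -position  # straight-line vector back to the nest
+    home_distance = np.linalg.norm(home_vector)
+    home_bearing = np.degrees(np.arctan2(home_vector[1], home_vector[0])) % 360
+
+Note that this also fits the displacement experiment: because the accumulated vector is kept with respect to absolute external direction, an ant carried to a new release point runs the same vector and misses the nest by exactly the displacement.
+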
+[1065.680 --> 1070.920] So these feats of animal navigation are amazing.
+[1070.920 --> 1075.720] And animals have evolved ways to solve all these problems unique to their environment.
+[1075.720 --> 1080.920] They've evolved these abilities because they really have to be able to find food and
+[1080.920 --> 1084.360] mates and shelter.
+[1084.360 --> 1088.000] And this is not just esoterica in the natural world.
+[1088.000 --> 1095.800] MIT students, too, need to be able to find food and mates and shelter.
+[1095.800 --> 1101.120] So what is navigation anyway, and what does it entail?
+[1101.120 --> 1105.080] Well, I'll argue over the next two lectures that there are two fundamental questions that
+[1105.080 --> 1108.400] organisms need to solve to be able to navigate.
+[1108.400 --> 1110.840] The first one is, where am I?
+[1110.840 --> 1115.800] And the second one is, how do I get from here to there, A to B, wherever there is that
+[1115.800 --> 1116.920] you need to get?
+[1116.920 --> 1117.920] Okay.
+[1117.920 --> 1118.920] So we'll unpack this.
+[1118.920 --> 1121.200] There are many different facets of each.
+[1121.200 --> 1128.600] But so, for example, if you see this image, you immediately know where you are.
+[1128.600 --> 1131.240] And you also know where to go.
+[1131.240 --> 1136.720] If, for example, it starts raining, you might rush into Lobby 7.
+[1136.720 --> 1143.280] Or if you're hungry, you might turn around and go back to the Student Center.
+[1143.280 --> 1144.360] Same deal here.
+[1144.360 --> 1148.760] If you see this, then you know where you are and where you would go to get to various
+[1148.760 --> 1150.560] things.
+[1150.560 --> 1151.560] Okay.
+[1151.560 --> 1156.400] Now these judgments rely on the specific knowledge you guys have of those particular places.
+[1156.400 --> 1158.360] You recognize that exact place.
+[1158.360 --> 1162.080] And you have some kind of map in your head, that we'll talk more about in a moment,
+[1162.080 --> 1165.640] that tells you where everything else is with respect to it.
+[1165.640 --> 1170.680] But even if you're in a place you don't know at all, you can still extract some information.
+[1170.680 --> 1176.720] So suppose you miraculously found yourself, boom, here. I wouldn't mind, actually, but
+[1176.720 --> 1179.200] that's not in the cards for a while.
+[1179.200 --> 1180.760] So you're here.
+[1180.760 --> 1184.720] Even if you've just hiked around the corner, even if you've never seen this place before, you
+[1184.720 --> 1187.960] have some kind of idea of what sort of place this is.
+[1187.960 --> 1190.320] Where would you pitch your tent?
+[1190.320 --> 1193.280] Where might you try to go to get out of this valley?
+[1193.280 --> 1197.840] If it was me, I would have friends who would go straight up there and try to drag me along,
+[1197.840 --> 1198.840] complaining.
+[1198.840 --> 1201.840] If it was me, I'd rather look for some other route.
+[1201.840 --> 1205.080] But you can tell all of that just by looking at this image.
+[1205.080 --> 1206.080] Okay.
+[1206.080 --> 1207.080] Okay.
+[1207.080 --> 1210.680] Where you can go from there, not just what kind of a place it is, but what are the possible
+[1210.680 --> 1212.680] routes you might take.
+[1212.680 --> 1213.680] Okay.
+[1213.680 --> 1217.960] So these fundamental problems that we solve in navigation, knowing where am I and
+[1217.960 --> 1221.120] how do I get from here to there,
+[1221.120 --> 1223.000] include multiple components.
+[1223.000 --> 1228.920] In terms of where am I, the first piece is recognizing a specific place you know.
+[1228.920 --> 1229.920] Okay.
+[1229.920 --> 1232.960] So you might open your eyes and say, okay, this is my living room.
+[1232.960 --> 1234.920] I know this particular place.
+[1234.920 --> 1235.920] Okay.
+[1235.920 --> 1240.320] But as I just pointed out, even if the place is unfamiliar, we can get a sense of what kind
+[1240.320 --> 1241.320] of place this is.
+[1241.320 --> 1242.320] Right?
+[1242.320 --> 1246.000] Am I in an urban environment, a natural environment, a living room, a bathroom? Where
+[1246.000 --> 1248.160] am I?
+[1248.160 --> 1250.160] A third aspect of where am I?
+[1250.160 --> 1254.760] A third way that we might answer that question is something about the geometry of the environment
+[1254.760 --> 1256.000] we're in.
+[1256.000 --> 1258.760] So try this right now: close your eyes.
+[1258.760 --> 1259.760] Okay.
+[1259.760 --> 1263.760] Now think about how far the wall is in front of you.
+[1263.760 --> 1264.840] Don't open your eyes.
+[1264.840 --> 1266.880] Just think about how far away it is.
+[1266.880 --> 1270.640] How far away the left wall is, and the right wall is.
+[1270.640 --> 1271.960] And how about the wall behind you?
+[1271.960 --> 1273.040] Don't open your eyes.
+[1273.040 --> 1276.840] How far back is the wall behind you from where you are right now?
+[1276.840 --> 1277.840] Okay.
+[1277.840 --> 1278.840] You can open your eyes.
+[1278.840 --> 1279.840] It's not rocket science.
+[1279.840 --> 1284.680] I just wanted you to intuit that even though you're presumably riveted by this lecture
+[1284.680 --> 1289.680] and thinking only about navigation, you sort of have a kind of situational awareness of
+[1289.680 --> 1292.720] the spatial layout of the space you're in.
+[1292.720 --> 1297.720] So you might have a sense of, okay, I'm in a space like this and I'm over here in it.
+[1297.720 --> 1298.720] Right?
+[1298.720 --> 1304.000] And we'll talk more about that exact kind of awareness of your position relative to the
+[1304.000 --> 1306.360] spatial layout of your immediate environment.
+[1306.360 --> 1310.480] It's something that's very important in navigation.
+[1310.480 --> 1315.240] And another part of that is you might think, how would I get out of here? If you're seriously
+[1315.240 --> 1319.760] bored by the lecture, or for any other reason you urgently need to get out of here, you probably
+[1319.760 --> 1322.240] know exactly where the doors are in the space.
+[1322.240 --> 1327.040] It's just one of those things that we keep track of, okay?
+[1327.040 --> 1328.040] Okay.
+[1328.040 --> 1331.120] So those are aspects of where am I in this place.
+[1331.120 --> 1335.120] What are the things we need to know to know how we get from here to some place else?
+[1335.120 --> 1336.120] Okay.
+[1336.200 --> 1341.960] Well, the simplest way to navigate to another location, another goal, is called beaconing.
+[1341.960 --> 1346.520] And this is the case where you can directly see or hear your target location.
+[1346.520 --> 1349.960] So you're sailing in the fog, you can't see a damn thing, but you hear the foghorn
+[1349.960 --> 1354.800] over there and you know you're sailing to that point, so you just go toward the sound.
+[1354.800 --> 1355.800] Nice and simple.
+[1355.800 --> 1357.320] You don't need any broader map of anything else.
+[1357.320 --> 1361.200] You just hear it and head toward it.
+[1361.440 --> 1368.040] Or if you see this and your goal is to get to the Green Building, well, you know there's
+[1368.040 --> 1370.040] the Green Building and you just head that way.
+[1370.040 --> 1373.560] Now you're going to have to, like, go around a little bit to get around those obstacles,
+[1373.560 --> 1376.840] but you know where to head because you can see your target directly.
+[1376.840 --> 1381.880] Okay, these are cases where you don't need broader long-term knowledge of the whole
+[1381.880 --> 1382.880] environment.
+[1382.880 --> 1385.680] If you can see your target, you just go straight for it.
+[1385.680 --> 1386.680] Okay.
+[1386.680 --> 1388.000] So that's beaconing.
+[1388.000 --> 1394.440] It's a kind of A to B that requires no mental map, no kind of internal model of the whole
+[1394.440 --> 1396.680] world you're navigating in.
+[1396.680 --> 1402.040] But if you can't see the place you want to go, then you need some kind of mental map
+[1402.040 --> 1403.280] of the world.
+[1403.280 --> 1405.760] So what do we mean by a mental map of the world?
+[1405.760 --> 1412.040] Well, this idea was first articulated in a classic experiment way back in the 1940s.
+[1412.040 --> 1416.960] So this was actually one of the original experiments that launched the Cognitive Revolution,
+[1416.960 --> 1423.840] when we emerged from the scourge of behaviorism to realize it was actually okay, and indeed
+[1423.840 --> 1427.680] of the essence, to talk about what's going on in the mind.
+[1427.680 --> 1433.480] And a really influential study that launched the Cognitive Revolution, by Tolman, was done
+[1433.480 --> 1435.240] on rats, and it went like this.
+[1435.240 --> 1436.240] He trained rats.
+[1436.240 --> 1442.040] He put them down in this area and they had to learn that there would be food out there
+[1442.040 --> 1443.040] at the goal.
+[1443.040 --> 1447.120] So they just have to make this series of left and right turns to find the food.
+[1447.120 --> 1451.560] Okay, so you train them on that for a while till they're really good at it.
+[1451.560 --> 1454.360] And then he put the rats in this environment.
+[1454.360 --> 1460.400] Okay, now the environment is similar except there's multiple paths,
+[1460.400 --> 1463.320] one that seems analogous to the old route.
+[1463.320 --> 1465.720] So what do the rats do in this situation?
+[1465.720 --> 1470.880] They run down here, they run into a wall, and they realize, okay, that's not going to work.
+[1470.880 --> 1473.280] Okay, no surprises yet.
+[1473.280 --> 1481.600] But then the rats immediately come back out and they go straight out that way.
+[1481.600 --> 1484.040] What does that tell you?
+[1484.040 --> 1485.640] What did they learn?
+[1485.640 --> 1489.280] Did they learn a series of, like, go straight and then left and then right and then right
+[1489.280 --> 1491.360] and then go for a long ways?
+[1491.360 --> 1494.000] No, that wouldn't work over here.
+[1494.000 --> 1496.480] They learned something much more interesting.
+[1496.480 --> 1501.400] Even though they were only being trained on this task here, they learned some much more
+[1501.400 --> 1507.240] interesting thing about the kind of vector average of all of those turns.
+[1507.240 --> 1508.240] Everybody get this?
+[1508.240 --> 1511.000] It's really simple but really deep.
+[1511.000 --> 1516.000] Okay, so from this, Tolman and others started talking about cognitive maps:
+[1516.000 --> 1520.920] whatever it is you have to have learned in a situation like this so you can abstract
+[1520.920 --> 1522.280] the general direction.
+[1522.280 --> 1523.280] Okay?
+[1523.280 --> 1528.600] We don't just learn specific routes as a series of stimuli and responses.
+[1528.600 --> 1533.560] Okay, so there must be some kind of map in your head to be able to do this.
+[1533.560 --> 1536.840] And rats have that, and so do you.
+[1536.840 --> 1539.160] So let's consider this question right now.
+[1539.160 --> 1540.160] Where am I?
+[1540.160 --> 1542.280] Where are you?
+[1542.280 --> 1546.840] To answer that question to yourself, there's something like this in your head.
+[1546.840 --> 1551.240] And it probably doesn't look exactly like that in your head, but there's some version of
+[1551.240 --> 1556.440] this information that's in your head that you're using when you answer the question of
+[1556.440 --> 1558.040] where you are.
+[1558.040 --> 1559.280] Okay?
+[1559.280 --> 1565.640] So you have some way to say, in that map of the world, I know not just what the MIT
+[1565.640 --> 1569.600] campus looks like and how it's arranged, but I know where I am in it.
+[1569.600 --> 1572.800] Okay?
+[1572.800 --> 1576.800] Now if you want to know how to get somewhere else, like suppose you're hungry and you
+[1576.800 --> 1580.760] want to go over to the Stata cafeteria over there.
+[1580.760 --> 1585.280] What else do you need to know besides knowledge of the map of your environment and where
+[1585.280 --> 1586.880] you are in it?
+[1586.880 --> 1591.680] What else do you need to know?
+[1591.680 --> 1595.880] You know you have this map, you know where you are, and you know where your goal is.
+[1595.880 --> 1597.800] Now you have to plan how to get over there.
+[1597.800 --> 1599.560] What else do you need to know?
+[1599.560 --> 1600.560] Yeah.
+[1600.560 --> 1604.320] Yes, you have to know which parts are, like, paths and which parts are buildings?
+[1604.320 --> 1605.320] Yes, exactly.
+[1605.320 --> 1606.800] Where can you go in there?
+[1606.800 --> 1609.360] Actually, where can you physically get through?
+[1609.360 --> 1613.520] Like, actually, our vector is right over there, but you can't go that way because you can't
+[1613.520 --> 1616.800] go through that glass, even though you can see through it.
+[1616.800 --> 1622.120] So knowledge of physical barriers, and what's an actual path and what isn't, is crucial.
+[1622.120 --> 1625.240] What else do you need to know?
+[1625.240 --> 1629.840] Suppose we had a robot in this room sitting right here, facing the front of the room like you
+[1629.840 --> 1633.760] guys, and we're programming the robot on how to get over there.
+[1633.760 --> 1638.160] What are other things we'd have to tell the robot to get it to plan how to get over to
+[1638.160 --> 1640.600] the Stata cafeteria?
+[1640.600 --> 1642.600] Yeah.
+[1642.600 --> 1643.600] Absolutely.
+[1643.600 --> 1650.800] We'd have to know about obstacles, like moving obstacles, not just fixed ones.
+[1650.800 --> 1651.800] Absolutely.
+[1651.800 --> 1652.800] What else?
+[1652.800 --> 1653.800] Yeah.
+[1653.800 --> 1654.800] Yes.
+[1654.800 --> 1656.800] Yes, you have to know which way it's headed.
+[1656.800 --> 1660.600] Right, you're going to give this robot instructions on which way to go.
+[1660.600 --> 1665.280] It matters a whole lot if the robot is starting like this or starting like that.
+[1665.280 --> 1669.600] The instructions are different in the two cases, and likewise, for you guys to plan a
+[1669.600 --> 1673.640] route, you need to know which way you're heading.
+[1673.640 --> 1678.080] Have you guys ever been in Manhattan, and you come up from the subway and you see the streets
+[1678.080 --> 1681.360] going like this, and you know they run north-south, but you don't know if you're heading south
+[1681.360 --> 1682.960] or north, right?
+[1682.960 --> 1683.960] Really common thing.
+[1683.960 --> 1684.960] Okay.
+[1684.960 --> 1689.760] It's not enough to know I'm at the junction of Fifth and Twenty-Second.
+[1689.760 --> 1693.400] You need to know I'm facing south or north, otherwise you can't figure out which way to
+[1693.400 --> 1694.400] go.
+[1694.400 --> 1696.320] That's called heading direction.
+[1696.320 --> 1697.320] Okay.
+[1697.320 --> 1698.320] Okay.
+[1698.320 --> 1699.960] We just did all that.
+[1699.960 --> 1700.960] Okay.
+[1700.960 --> 1702.760] You need to know your current heading.
+[1702.760 --> 1703.760] Okay.
+[1703.760 --> 1709.240] You also need to know the direction of your goal in order to plan a route to it.
+[1709.240 --> 1710.240] Okay.
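+
+Here is a toy sketch of why heading matters for the robot's route plan: the same map position and goal require different instructions depending on which way you're facing. The coordinates and the function are invented for illustration, not from the lecture.
+
+    import math
+
+    def turn_needed(position, heading_deg, goal):
+        """Signed turn in degrees (positive = counterclockwise) to face the goal."""
+        bearing = math.degrees(math.atan2(goal[1] - position[1], goal[0] - position[0]))
+        return (bearing - heading_deg + 180) % 360 - 180
+
+    # Same spot, same goal, different headings, different instructions:
+    turn_needed((0, 0), 0, (10, 10))    # facing east: turn 45 degrees left
+    turn_needed((0, 0), 180, (10, 10))  # facing west: returns -135, turn 135 degrees right
+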
+[1710.240 --> 1715.000] So in this kind of taxonomy of all the things you need to know to navigate, we've just
+[1715.000 --> 1719.920] added that if you're going to navigate in your known environment, you need to know
+[1720.000 --> 1725.600] not just where you are in it but which way you are facing in that mental map.
+[1725.600 --> 1726.600] Okay.
+[1726.600 --> 1730.040] And we also talked about this business of what routes are possible from here.
+[1730.040 --> 1732.680] How do we move around obstacles?
+[1732.680 --> 1734.160] Where are the doors?
+[1734.160 --> 1738.080] Where are the hazards, like cars, etc.?
+[1738.080 --> 1742.600] A final thing you need to know is that even if you have a good system for all of these
+[1742.600 --> 1746.920] other bits, it's still possible to get lost in all kinds of ways.
+[1746.920 --> 1750.720] If you lose track, you get confused, you get lost.
+[1750.720 --> 1754.720] So we also need a way to reorient ourselves when we're lost.
+[1754.720 --> 1757.720] And we'll talk a lot about that in the next lecture.
+[1757.720 --> 1758.720] Okay.
+[1758.720 --> 1762.160] So this is just common sense; we're doing a kind of low-tech version of a Marr computational
+[1762.160 --> 1764.160] theory for navigation.
+[1764.160 --> 1767.960] Like, what are the things that we would need to know, or that a robot would need to know,
+[1767.960 --> 1769.320] to be able to navigate?
+[1769.320 --> 1770.320] Okay.
+[1770.320 --> 1772.480] Just thinking about the nature of the problem.
+[1772.480 --> 1774.040] All right.
+[1774.040 --> 1775.760] So that's what we need.
+[1775.760 --> 1778.360] What's the neural basis of all of this?
+[1778.360 --> 1779.360] All right.
+[1779.360 --> 1782.200] So I'm going to start right in with the parahippocampal place area.
+[1782.200 --> 1786.120] Not to imply it is the total neural basis of this whole thing.
+[1786.120 --> 1788.560] It's just one little piece of a much bigger puzzle.
+[1788.560 --> 1791.240] But we'll start in there because it's nice and concrete.
+[1791.240 --> 1792.240] Okay.
+[1792.240 --> 1793.240] All right.
+[1793.240 --> 1797.000] So this story starts, oh god, about 20 years ago.
+[1797.000 --> 1800.920] I think I mentioned some of this in the first class when I talked about the story of Bob.
+[1800.920 --> 1805.360] Russell Epstein was then my postdoc, and he was doing nice behavioral
+[1805.360 --> 1809.480] experiments and thought it was trashy and cheap to mess around with brain imaging, and he
+[1809.480 --> 1813.280] was going to have none of it until I said, Russell, just do one experiment.
+[1813.280 --> 1814.600] Scan subjects looking at scenes.
+[1814.600 --> 1819.240] I know it's kind of stupid, but just do it and you'll have a slide for your job talk.
+[1819.240 --> 1823.160] And he scanned subjects looking at scenes and looking at objects.
+[1823.160 --> 1827.320] And here is one of those early subjects, that's probably me, I don't remember, with a bunch
+[1827.320 --> 1832.280] of vertical slices through the brain, near the back of the brain down there, moving forward
+[1832.280 --> 1833.680] as we go up to here.
+[1833.680 --> 1835.080] Everybody oriented?
+[1835.080 --> 1838.960] Okay. So, sorry, it's not showing up very well in this lighting.
+[1838.960 --> 1844.280] But there's a little bilateral region right in the middle there that shows a stronger response
+[1844.280 --> 1848.760] when people look at pictures of scenes than when they look at pictures of objects.
+[1848.760 --> 1849.760] Okay.
+[1849.760 --> 1852.560] So we hadn't predicted this.
+[1852.560 --> 1853.560] Yeah.
+[1853.560 --> 1854.560] Is the pink higher than the blue?
+[1854.560 --> 1855.560] Yeah.
+[1855.560 --> 1856.560] Yeah.
+[1856.560 --> 1860.680] All the colors are significance maps, or p levels.
+[1860.680 --> 1865.480] So pink is higher than blue, but blue is borderline significant.
+[1865.480 --> 1866.800] Okay.
+[1866.800 --> 1868.400] So, this is kind of dopey.
+[1868.400 --> 1870.520] We didn't actually predict it for any deep reason.
+[1870.520 --> 1873.600] We hadn't been thinking about theories of navigation or anything like that.
+[1873.600 --> 1878.480] It was just one of those dumb experiments where we found something and we followed the data.
+[1878.480 --> 1882.080] So we found this and it's like, okay, let's try some other subjects.
+[1882.080 --> 1885.000] So here are the first nine subjects we scanned.
+[1885.000 --> 1891.320] Every single subject had that kind of signature response in exactly the same place.
+[1891.320 --> 1892.320] Okay.
+[1892.320 --> 1895.840] In a part of the brain called parahippocampal cortex.
+[1895.840 --> 1896.840] Okay.
+[1896.840 --> 1900.760] So this is very systematic, and there's lots of ways to make progress in science.
+[1900.760 --> 1906.040] One way is to have a big theory and use it to motivate brilliant, elegantly designed experiments.
+[1906.040 --> 1910.280] And another is you just see something salient and robust that you didn't predict, and you
+[1910.280 --> 1912.480] follow your nose and try to figure it out.
+[1912.480 --> 1913.480] So that's what we did in this case.
+[1913.480 --> 1916.000] It's like, okay, what the hell is that?
+[1916.000 --> 1917.040] All right.
+[1917.040 --> 1922.920] So, and we eventually called it the parahippocampal place area after
+[1922.920 --> 1924.680] a little more work.
+[1924.680 --> 1928.920] If you think about what we have so far, we've scanned people looking at pictures like this
+[1928.920 --> 1930.720] and pictures like that.
+[1930.720 --> 1934.320] And what we've shown is that little patch of brain responds a bunch more to these than
+[1934.320 --> 1936.160] those.
+[1936.160 --> 1942.760] So my first question is, is that a minimal pair?
+[1942.760 --> 1944.760] Tali, is that a minimal pair?
+[1944.760 --> 1946.880] Sorry to put you on the spot.
+[1946.880 --> 1947.880] Sorry.
+[1947.880 --> 1948.880] Simple, simple, simple.
+[1948.880 --> 1950.880] We're contrasting this with that.
+[1950.880 --> 1951.880] Okay.
+[1951.880 --> 1958.720] A minimal pair is this thing we aspire towards in experimental design, where we have two conditions
+[1958.720 --> 1961.880] that are identical except for one little thing we're manipulating.
+[1961.880 --> 1968.320] Well, I don't really think it's a minimal pair, but I'm not really sure.
+[1968.320 --> 1971.480] Well, I even told you what we were designing it to manipulate.
+[1971.480 --> 1975.640] There seem to be, like, too many differences between a blender and a living room.
+[1975.640 --> 1977.160] Right, it's ludicrous.
+[1977.160 --> 1978.160] Right?
+[1978.160 --> 1981.160] I mean, there's a million differences here, right?
+[1981.160 --> 1983.320] So we don't know that we have anything yet.
+[1983.320 --> 1987.160] There's all kinds of uninteresting accounts of the systematic activation in that part of
+[1987.160 --> 1988.760] the brain, right?
+[1988.760 --> 1993.720] So just to list a few that you've probably already noticed: these things have rich, high-
+[1993.720 --> 1996.240] level meaning and complexity, right?
+[1996.240 --> 2003.440] So you could think about living rooms, or where you might sit, or somebody's aesthetic home
+[2003.440 --> 2007.240] design; there's all kinds of stuff to think about there.
+[2007.240 --> 2009.560] Much more than just, okay, it's a blender, right?
+[2009.560 --> 2013.400] So there's just complexity in every possible way.
+[2013.400 --> 2019.560] There are also lots of objects present here and only a single object over there.
+[2019.560 --> 2023.080] So maybe that region just represents objects, and if you have more objects you get a higher
+[2023.080 --> 2026.120] signal, right?
+[2026.120 --> 2031.760] There's another possibility, and that is that these images depict spatial layout and that
+[2031.760 --> 2032.760] one does not.
+[2032.760 --> 2037.560] Okay, so you have some sense of the walls and the floor and, like, the layout of the local
+[2037.560 --> 2041.560] environment here that you don't have over there, all right?
+[2041.560 --> 2044.320] And we could probably list a million other things, okay?
+[2044.320 --> 2046.800] It's a very, very sloppy contrast.
+[2046.800 --> 2051.680] Okay, so how are we going to ask which of these things might be driving the response of
+[2051.680 --> 2053.640] that region?
+[2053.640 --> 2058.840] Well, a natural thing to do is just deconstruct the stimuli.
+[2058.840 --> 2060.640] So here's what we did.
+[2060.640 --> 2062.480] This is actually way back, 20 years ago.
+[2062.480 --> 2066.360] There were better methods at the time, but I didn't know them, so I actually drove around
+[2066.360 --> 2071.520] Cambridge, photographed my friends' apartments, left the camera on the same tripod, moved
+[2071.560 --> 2074.080] all the furniture out of the way, and photographed the space again.
+[2074.080 --> 2077.200] Ha, ha, I know.
+[2077.200 --> 2081.800] And then we cut these out, probably with some horrific version of Adobe Photoshop that
+[2081.800 --> 2083.640] existed 20 years ago.
+[2083.640 --> 2089.160] Anyway, we deconstructed the scenes into their component objects and the bare spatial layout.
+[2089.160 --> 2093.880] Okay, everybody get the logic here? Just to try to make a big cut in this hypothesis space
+[2093.880 --> 2096.400] of what might be driving that region.
+[2096.400 --> 2097.920] Okay.
+[2097.920 --> 2101.720] So what do we predict the PPA will do?
+[2101.720 --> 2105.680] How strongly will it respond?
+[2105.680 --> 2107.960] Oops.
+[2107.960 --> 2111.000] How strongly will it respond if each of these two hypotheses is true?
+[2111.000 --> 2117.440] If it's the complexity or multiplicity of objects that's driving it, what do you predict
+[2117.440 --> 2118.440] we will see over there?
+[2118.440 --> 2120.720] We already know you get a high response here.
+[2120.720 --> 2124.520] What do we get over there?
+[2124.520 --> 2125.520] Yeah.
+[2126.520 --> 2127.520] Yeah.
+[2127.520 --> 2129.200] Probably more response to the objects than the empty scene.
+[2129.200 --> 2131.640] Yeah, respond more to this than that, right?
+[2131.640 --> 2135.520] It's really simple-minded, right?
+[2135.520 --> 2141.960] If instead it responds to the spatial layout, what do we predict then?
+[2141.960 --> 2144.560] It's going to respond to the empty rooms more?
+[2144.560 --> 2146.800] Yeah.
+[2146.800 --> 2149.960] And that seems like a weird hypothesis, because these are really boring.
+[2149.960 --> 2153.280] There's kind of nothing going on here, and there's just lots of stuff going on here.
+[2153.280 --> 2157.160] I mean, it's not riveting, but it's a whole lot more interesting
+[2157.160 --> 2158.640] to look at these than those.
+[2158.640 --> 2161.880] Believe me, I got scanned for hours and hours looking at these things.
+[2161.880 --> 2165.600] And whenever the empty rooms came on, I was like, oh my god, I'm just so bored, right?
+[2165.600 --> 2166.600] There's just nothing here.
+[2166.600 --> 2170.360] Whereas here, at least there's stuff, right?
+[2170.360 --> 2172.600] But that's not what the PPA thinks.
+[2172.600 --> 2178.160] What the PPA does, oops, oops, we just did the localizer.
+[2178.160 --> 2179.160] Okay.
+[2179.160 --> 2180.640] It responds like this.
+[2180.640 --> 2187.200] This is percent signal change, a measure of magnitude of response: high to the full scenes,
+[2187.200 --> 2192.160] way down, less than half the response, to all those objects, and almost the same response
+[2192.160 --> 2199.160] as to the original scenes when all you have is the bare spatial layout.
+[2199.160 --> 2201.280] Pretty surprising, isn't it?
+[2201.280 --> 2202.280] We were blown away.
+[2202.280 --> 2203.520] We're like, what?
+[2203.520 --> 2206.320] What?
+[2206.320 --> 2211.040] But can you see how even this really simple-minded experiment enables us to just pretty much
+[2211.040 --> 2213.240] rule out that whole space of hypotheses?
+[2213.240 --> 2217.840] It's not about the richness or interest or multiplicity of objects.
+[2217.840 --> 2221.320] It's something much more like spatial layout, because that's kind of all there is in those
+[2221.320 --> 2223.120] empty rooms.
+[2223.120 --> 2226.160] I mean, it could be something like, you know, the texture of wood floors or something
+[2226.160 --> 2228.560] weird like that.
+[2228.560 --> 2231.040] But one's first guess is it's something about spatial layout.
+[2231.040 --> 2232.040] Does this make sense?
+[2232.040 --> 2238.480] It's just a way to take a big sloppy contrast and try to formulate initial hypotheses and
+[2238.480 --> 2240.960] knock out a whole big space of hypotheses.
+[2240.960 --> 2241.960] Yes.
+[2241.960 --> 2242.960] Is it Alana?
+[2242.960 --> 2243.960] Yeah.
+[2243.960 --> 2245.960] Sorry, I may have missed this point.
+[2245.960 --> 2248.960] So when we're looking at the empty room,
+[2248.960 --> 2249.960] we've already seen the furnished room,
+[2249.960 --> 2251.960] so couldn't we just be remembering it?
+[2251.960 --> 2252.960] Ah, good question.
+[2252.960 --> 2255.240] I skipped over all of that.
+[2255.240 --> 2256.240] We did.
+[2256.240 --> 2257.800] We did, yes, that's true.
+[2257.800 --> 2261.560] We did mush them all together, and one could worry about that.
+[2261.560 --> 2267.200] When you see this, you remember that that's a version of this, right?
+[2267.200 --> 2268.200] Absolutely.
+[2268.200 --> 2271.000] Absolutely.
+[2271.000 --> 2276.560] And so maybe, yes, that's absolutely true.
+[2276.560 --> 2282.800] But if what you were doing here is kind of mentally recalling this, right?
+[2282.800 --> 2286.000] Then why couldn't you also do that here?
+[2286.000 --> 2288.120] Maybe you could.
+[2288.120 --> 2292.680] You might argue that this is more evocative of that than this is, but it's also got lots
+[2292.680 --> 2294.280] of relevant information.
+[2294.280 --> 2295.280] Okay?
+[2295.280 --> 2296.280] Yeah, Jimmy.
+[2296.280 --> 2301.520] Did you guys try, for example, placing the objects in the same, like, exact positions
+[2301.520 --> 2303.880] as in the scene, and seeing if that matters?
+[2303.880 --> 2304.880] We did both versions.
+[2304.880 --> 2309.360] For exactly the reasons you guys are pointing out, and it didn't make a difference.
+[2309.360 --> 2310.360] Yeah.
+[2310.360 --> 2311.360] Yeah.
+[2311.360 --> 2312.360] Sorry, quickly.
+[2312.360 --> 2317.920] Maybe it's just that, with the scenes,
+[2317.920 --> 2321.000] there's more stuff.
+[2321.000 --> 2326.200] Like, in the interiors the image is filled up, but with the objects there's more empty background.
+[2326.200 --> 2327.200] Totally.
+[2327.200 --> 2328.200] You're absolutely right.
+[2328.200 --> 2331.720] This has taken us pretty far, but it's still pretty sloppy.
+[2331.720 --> 2334.640] This stuff goes all the way out to the edge of the frame, and here there's lots of empty
+[2334.640 --> 2335.640] space.
+[2335.640 --> 2336.640] Is that what you're getting at?
+[2336.640 --> 2337.640] Absolutely.
+[2337.640 --> 2342.080] I took out those slides because I didn't want to spend the entire lecture doing millions
+[2342.080 --> 2343.440] of control conditions on the PPA.
+[2343.440 --> 2349.360] I thought you'd get bored, but actually another version that we did was we then took all
+[2349.360 --> 2353.480] of these conditions and we chopped them into little bits and rearranged the bits, so that
+[2353.480 --> 2360.280] you have much more coverage of stuff in the chopped-up scenes than the chopped-up objects.
+[2360.280 --> 2363.360] And in the chopped-up versions, it doesn't respond differently at all.
+[2363.360 --> 2366.080] So it's not the amount of total spatial coverage.
+[2366.080 --> 2369.520] It's something more like the actual depiction of space.
+[2369.520 --> 2371.640] Was there a question over there?
+[2371.640 --> 2377.480] I was wondering if there would be any difference between seeing an image as a 2D picture of
+[2377.480 --> 2382.680] a 3D scene and actually being there, inside the 3D scene.
+[2382.680 --> 2383.680] Totally.
+[2383.680 --> 2384.680] Totally.
+[2384.680 --> 2385.680] It's a real challenge.
+[2385.680 --> 2389.080] With navigation, navigation is very much about being there and moving around in the
+[2389.080 --> 2390.600] space.
+[2390.600 --> 2394.040] And this is just a pretty rudimentary thing where you're lying in the scanner and these
+[2394.040 --> 2398.680] images are just flashing on, and you're doing some simple task like pressing a button
+[2398.680 --> 2400.680] when consecutive images are identical.
+[2400.680 --> 2402.440] It's not like moving around in the real world.
+[2402.440 --> 2404.840] You don't think you're actually there.
+[2404.840 --> 2409.080] But here's where video games and VR come in.
+[2409.080 --> 2414.200] Because actually, they produce a pretty powerful simulation of knowing your environment,
+[2414.200 --> 2416.440] feeling you're in a place in it.
+[2416.440 --> 2421.720] And so lots of studies have used those methods to give something closer to the actual experience
+[2421.720 --> 2424.400] of navigation.
+[2424.400 --> 2427.000] Okay.
+[2427.000 --> 2428.000] So.
+[2428.000 --> 2429.000] So where are we so far?
+[2429.000 --> 2433.600] We've said the PPA seems to be involved in recognizing a particular scene.
+[2433.600 --> 2438.760] Well, so far this just says it responds to scenes, and maybe to something about spatial layout.
+[2438.760 --> 2443.840] Does it care about that particular scene, or something else?
+[2443.840 --> 2448.080] Do you have to recognize that particular scene to be able to use the information?
+[2448.080 --> 2452.200] Now, our subjects mostly didn't know those particular scenes, but we wanted to do a tighter
+[2452.200 --> 2457.080] contrast asking if knowledge of the particular scene matters.
+[2457.080 --> 2462.360] So what we did was we took a bunch of pictures around the MIT campus, and we took a bunch of
+[2462.360 --> 2464.920] pictures around the Tufts campus.
+[2464.920 --> 2470.680] And we scanned MIT students looking at MIT pictures versus Tufts pictures.
+[2470.680 --> 2473.680] And then what else do we do?
+[2473.680 --> 2475.880] Get the Tufts students too.
+[2475.880 --> 2477.280] Yeah, why?
+[2477.280 --> 2482.320] Oh, just to make sure that it's not all about the weird architecture.
+[2482.320 --> 2483.320] Exactly.
+[2483.320 --> 2484.320] Exactly.
+[2484.320 --> 2487.080] So this is called counterbalancing. Whose architecture is weirder?
+[2487.080 --> 2489.600] I think ours is weirder.
+[2489.600 --> 2493.880] So it's not just about the particular scenes or the particular subjects.
+[2493.880 --> 2498.320] So everybody get how, with that counterbalanced design, you can really pull out the essence
+[2498.320 --> 2502.840] of familiarity itself, unconfounded from the particular images?
+[2502.840 --> 2503.840] Okay.
+[2503.840 --> 2511.560] So when we did that, we found a very similar response magnitude in the PPA, for the MIT and Tufts students,
+[2511.560 --> 2514.400] for the familiar and the unfamiliar scenes.
+[2514.400 --> 2515.400] Okay.
+[2515.400 --> 2517.880] Really didn't make much difference.
+[2517.880 --> 2518.880] Yeah.
+[2518.880 --> 2524.440] Taking a step back: so we started off with this one question of navigation, involving
+[2524.440 --> 2526.440] all these different components.
+[2526.440 --> 2528.800] I just want to know where all the pieces live in the brain.
+[2528.800 --> 2529.800] We're getting there.
+[2529.800 --> 2530.800] We're getting there.
+[2530.800 --> 2531.800] There won't be, like, a perfect answer.
+[2531.800 --> 2535.400] We're not going to end up with that slide with the exact brain region of each of those
+[2535.400 --> 2536.400] things.
+[2536.400 --> 2539.880] We'll get some vague sense of where some of this is.
+[2539.880 --> 2540.880] Yeah.
+[2541.280 --> 2541.880] Okay.
+[2541.880 --> 2548.080] So this tells us that whatever the PPA is responding to in a scene, it's not
+[2548.080 --> 2550.640] something that hinges on knowing that exact scene.
+[2550.640 --> 2553.960] So it can't be something like, okay, if I was here and I wanted to get coffee, what would
+[2553.960 --> 2557.880] my route from this location be, given my knowledge of the environment?
+[2557.880 --> 2560.240] Because otherwise we wouldn't get this result.
+[2560.240 --> 2564.520] So whatever it is, it's something more immediate and perceptual, to do with just seeing this
+[2564.520 --> 2565.520] place.
+[2565.520 --> 2566.520] Okay.
+[2566.520 --> 2567.520] All right.
+[2567.520 --> 2568.520] All right.
+[2569.520 --> 2570.760] So where are we?
+[2570.760 --> 2576.080] We've said that there's this region that responds more to scenes than objects.
+[2576.080 --> 2581.080] That when all the objects are removed from the scenes, the response, you know, barely drops.
+[2581.080 --> 2582.320] Okay.
+[2582.320 --> 2587.080] And its response is pretty much the same for familiar and unfamiliar scenes.
+[2587.080 --> 2591.080] So all of that suggests that it's involved in something like perceiving the shape of
+[2591.080 --> 2592.360] space around you.
+[2592.360 --> 2596.800] It doesn't nail it yet, but it kind of pushes you towards that hypothesis.
+[2596.800 --> 2597.800] Yeah.
+[2597.800 --> 2598.800] Can I ask something quickly?
+[2598.800 --> 2599.800] Go ahead.
+[2599.800 --> 2600.800] Okay.
+[2600.800 --> 2604.440] What about views from above? Is it actually responding to those?
+[2604.440 --> 2606.240] Oh, great question.
+[2606.240 --> 2608.240] Not very much.
+[2608.240 --> 2609.240] Okay.
+[2609.240 --> 2610.240] Yeah.
+[2610.240 --> 2616.560] If you take pictures of places from above versus this kind of view, you get a response
+[2616.560 --> 2619.560] to this kind of view, but not to the view from above.
+[2619.560 --> 2620.560] Yeah.
+[2620.560 --> 2623.560] Very telling.
+[2623.560 --> 2624.560] Yeah.
+[2624.560 --> 2625.560] Okay.
+[2625.560 --> 2627.120] So I'm going to skip ahead.
+[2627.120 --> 2629.880] We're not going to do, you know, the thirty other experiments.
+[2629.880 --> 2633.760] We're going to skip to the general picture. Here's the PPA in four subjects, in its
+[2633.760 --> 2635.760] very stereotyped location.
+[2635.760 --> 2638.560] And here are some of the many conditions we've tested.
+[2638.560 --> 2640.960] Abstract maps like this, for instance.
+[2640.960 --> 2642.440] They don't produce a strong response.
+[2642.440 --> 2644.880] Oh, this is an answer to Quilly's question way back.
+[2644.880 --> 2647.400] Here's the scrambled-up scene, much lower response.
+[2647.400 --> 2651.880] So it's not just coverage of visual junk, right?
+[2651.880 --> 2656.320] And it responds pretty strongly to scenes made out of Legos compared to objects made out
+[2656.320 --> 2660.160] of Legos, and various other silly things.
+[2660.160 --> 2661.160] Okay.
+[2661.160 --> 2665.560] So all of that seems to suggest that it's processing something like the shape or geometry
+[2665.560 --> 2670.160] of space around you, visible space in your immediate environment.
+[2670.160 --> 2671.160] Okay.
+[2671.160 --> 2675.240] Nonetheless, there's always pushback.
+[2675.240 --> 2676.840] And there's pushback on multiple fronts.
+[2676.840 --> 2679.040] And there should be; that's proper science.
+[2679.040 --> 2684.800] So one of the lines of pushback was this paper by Nasr et al.
+[2684.800 --> 2685.800] that I didn't assign.
+[2685.800 --> 2687.200] I assigned you the response to it.
+[2687.200 --> 2693.480] Anyway, what Nasr et al. did was scan people looking at rectilinear things like cubes
+[2693.480 --> 2698.920] and pyramids versus curvilinear, roundy things like cones and spheres.
+[2698.920 --> 2704.960] And what they showed is the PPA responds more to the rectilinear than the curvilinear
+[2704.960 --> 2706.360] shapes.
+[2706.360 --> 2708.320] Okay.
+[2708.320 --> 2711.040] And okay, that's the first thing.
+[2711.040 --> 2717.600] And so then they argue that in general, scenes have more rectilinear structure than curvilinear
+[2717.600 --> 2718.600] structure.
+[2718.600 --> 2721.160] And they did a bunch of math to make that case.
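To see why this matters, here's a toy simulation in Python (my own illustration, not Nasr et al.'s analysis; all numbers are invented): a hypothetical region that cares only about rectilinearity will look scene-selective in a scenes-versus-objects contrast whenever scenes happen to carry more rectilinear structure, and the apparent selectivity vanishes once rectilinearity is matched across categories.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical rectilinearity scores: scenes happen to carry more
# rectilinear structure than objects (the claimed confound).
rect_scenes = rng.normal(loc=1.0, scale=0.2, size=n_trials)
rect_objects = rng.normal(loc=0.4, scale=0.2, size=n_trials)

# A simulated region that responds ONLY to rectilinearity, plus noise.
resp = lambda rect: 2.0 * rect + rng.normal(scale=0.3, size=rect.shape)

# The naive contrast looks like "scene selectivity"...
print(f"scenes: {resp(rect_scenes).mean():.2f}  "
      f"objects: {resp(rect_objects).mean():.2f}")

# ...but it disappears once rectilinearity is equated across categories.
matched = rng.normal(loc=0.7, scale=0.2, size=n_trials)
print(f"matched scenes: {resp(matched).mean():.2f}  "
      f"matched objects: {resp(matched).mean():.2f}")
```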
+[2721.160 --> 2729.640] And so they argue that maybe the apparent scene selectivity of the PPA is due to a... what...
+[2729.640 --> 2732.800] of scenes with rectilinearity?
+[2732.800 --> 2733.800] Yeah.
+[2734.800 --> 2735.800] Confound?
+[2735.800 --> 2736.800] Yes.
+[2736.800 --> 2737.800] Exactly.
+[2737.800 --> 2738.800] A confound.
+[2738.800 --> 2742.240] This is exactly what a confound is.
+[2742.240 --> 2746.520] Something else that co-varies with the manipulation you care about, and that gives you an alternative
+[2746.520 --> 2747.520] account.
+[2747.520 --> 2749.400] Namely, okay, it's not scene selectivity.
+[2749.400 --> 2751.400] It's just rectilinearity.
+[2751.400 --> 2754.560] I mean, that might be interesting to other people, but it would make it not very relevant
+[2754.560 --> 2758.480] to navigation, and much less interesting to me at least, right?
+[2758.480 --> 2759.480] Okay.
+[2759.480 --> 2761.960] So that's an important criticism.
+[2761.960 --> 2766.200] And so then the Bryan et al. paper that you guys read starts from there and says, okay,
+[2766.200 --> 2767.200] let's take that seriously.
+[2767.200 --> 2768.800] Let's find out.
+[2768.800 --> 2774.360] And so you guys should have read all of this, but just to remind you, they have a nice
+[2774.360 --> 2776.160] little 2x2 design.
+[2776.160 --> 2780.400] Remember we talked about 2x2 designs. They manipulate whether the image has a lot
+[2780.400 --> 2785.080] of rectilinear structure or less rectilinear structure, and whether the image is a place
+[2785.080 --> 2786.280] or a face.
+[2786.280 --> 2789.280] Okay.
+[2789.280 --> 2795.160] And what they find in the PPA is that the response is higher to the scenes
+[2795.160 --> 2800.360] than the faces, and rectilinearity didn't matter for the scenes.
+[2800.360 --> 2801.360] Okay.
+[2801.360 --> 2806.440] So evidently, even though rectilinearity does matter with those abstract shapes, in actual scenes
+[2806.440 --> 2809.000] and faces it doesn't seem to be doing much.
+[2809.000 --> 2811.000] It's not accounting for this difference.
+[2811.000 --> 2812.000] Okay.
+[2812.000 --> 2814.000] Everybody get that?
+[2814.000 --> 2815.000] Okay.
+[2815.000 --> 2816.000] Okay.
+[2816.000 --> 2817.000] Let's talk about this graph.
+[2817.000 --> 2819.240] Are there main effects or interactions here?
+[2819.240 --> 2824.000] And what are those main effects or interactions?
+[2824.000 --> 2827.000] Yes, Quilly.
+[2827.000 --> 2829.000] There's a main effect.
+[2829.000 --> 2830.000] Yeah.
+[2830.000 --> 2833.000] Of category, scene versus face.
+[2833.000 --> 2834.000] Yeah.
+[2834.000 --> 2835.000] Anything else?
+[2835.000 --> 2842.000] What's shown there?
+[2842.000 --> 2845.000] Wait, which is which here?
+[2845.000 --> 2847.000] Which is which?
+[2847.000 --> 2848.000] Wait.
+[2848.000 --> 2850.000] These are scenes and those are faces.
+[2850.000 --> 2851.000] Okay.
+[2851.000 --> 2852.000] And this is the color code here.
+[2852.000 --> 2856.000] These are rectilinear versus curvilinear.
+[2856.000 --> 2860.000] Just one main effect, or is there an interaction or another main effect?
+[2860.000 --> 2861.000] No.
+[2861.000 --> 2862.000] Just one main effect.
+[2862.000 --> 2863.000] Okay.
+[2863.000 --> 2864.000] Right?
+[2864.000 --> 2865.000] These guys are higher than those guys.
+[2865.000 --> 2866.000] That's it.
+[2866.000 --> 2867.000] Okay.
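To make the 2x2 logic concrete, here's a minimal sketch (hypothetical response magnitudes chosen to mimic the pattern on the slide, not the paper's actual numbers): one big main effect of category, a near-zero main effect of rectilinearity, and a near-zero interaction.

```python
import numpy as np

# Rows: category (scene, face); columns: rectilinearity (high, low).
# Made-up PPA response magnitudes mimicking the slide.
means = np.array([[1.9, 2.0],    # scenes
                  [0.5, 0.5]])   # faces

main_category = means[0].mean() - means[1].mean()        # scenes vs faces
main_rect = means[:, 0].mean() - means[:, 1].mean()      # high vs low rect
interaction = (means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1])

print(f"main effect of category:       {main_category:+.2f}")  # big
print(f"main effect of rectilinearity: {main_rect:+.2f}")      # near zero
print(f"interaction:                   {interaction:+.2f}")    # near zero
```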
+[2867.000 --> 2872.000] So that just tells you there's nothing else going on in these data other than scene selectivity.
+[2872.000 --> 2873.000] Okay.
+[2873.000 --> 2879.000] Rectilinearity doesn't interact with or modify scene selectivity, and it doesn't have a separate effect.
+[2879.000 --> 2880.000] Okay.
+[2880.000 --> 2890.000] Nonetheless, as we've been arguing through the whole Haxby rigmarole, does the fact that there's no main effect of rectilinearity in here
+[2890.000 --> 2896.000] mean that the PPA doesn't have information about rectilinearity?
+[2896.000 --> 2897.000] No.
+[2897.000 --> 2898.000] Josh.
+[2898.000 --> 2899.000] Why?
+[2899.000 --> 2904.000] I mean, there could still be a tiny amount of information; this just isn't the right experiment to see it.
+[2904.000 --> 2905.000] That's right.
+[2905.000 --> 2907.000] Well, it is the right experiment.
+[2907.000 --> 2909.000] It's not the right analysis, right?
+[2909.000 --> 2912.000] The big average responses are the same.
+[2912.000 --> 2914.000] But maybe the patterns are different.
+[2914.000 --> 2915.000] Okay.
+[2915.000 --> 2916.000] The mean responses wouldn't directly engage with that.
+[2916.000 --> 2920.000] So what I want to know is, is there information in there about rectilinearity?
+[2920.000 --> 2922.000] Okay.
+[2922.000 --> 2925.000] So how would we find out?
+[2925.000 --> 2928.000] So this was your assignment, and I think most people got it right.
+[2928.000 --> 2935.000] But in case anybody missed it, we were zooming in on this figure four here.
+[2935.000 --> 2939.000] So again, this is just the same basic design as experiment two.
+[2939.000 --> 2942.000] And now let's consider what's going on here.
+[2942.000 --> 2946.000] So you guys read the paper and you understood what was going on here.
+[2946.000 --> 2950.000] What's represented in that cell right there?
+[2950.000 --> 2952.000] What is the point of this diagram?
+[2952.000 --> 2959.000] What are they doing here, and what does that cell mean in that matrix?
+[2959.000 --> 2963.000] You can't understand the paper without knowing that.
+[2963.000 --> 2964.000] Is it Ollie?
+[2964.000 --> 2965.000] No.
+[2965.000 --> 2966.000] Sorry.
+[2966.000 --> 2967.000] What's your name?
+[2967.000 --> 2968.000] Shardun.
+[2968.000 --> 2969.000] I've only asked you like six times.
+[2969.000 --> 2970.000] Yeah, go ahead.
+[2971.000 --> 2979.000] So they want to see whether the activation patterns can better discriminate between
+[2979.000 --> 2986.000] rectilinearities within the same category of things, or between categories of things with the same rectilinearity.
+[2986.000 --> 2995.000] So the first comparison is on the left and the second one is on the right.
+[2995.000 --> 2996.000] And they.
+[2996.000 --> 2997.000] Sorry.
+[2997.000 --> 2998.000] Wait, here and here?
+[2998.000 --> 2999.000] No.
+[2999.000 --> 3000.000] Right side.
+[3000.000 --> 3001.000] Yeah.
+[3001.000 --> 3008.000] So this part is discriminating between rectilinearities, and that side is discriminating between categories.
+[3008.000 --> 3011.000] And they take the differences of, well, not the differences.
+[3011.000 --> 3018.000] They take how well it can distinguish each of those and plot them down there.
+[3018.000 --> 3019.000] Right.
+[3019.000 --> 3020.000] Okay.
+[3020.000 --> 3021.000] That's exactly right.
+[3021.000 --> 3024.000] So this is how well it can discriminate, plotted down here,
+[3024.000 --> 3027.000] based on an analysis that follows this scheme.
+[3027.000 --> 3030.000] So what does that cell in there represent?
+[3030.000 --> 3031.000] That dark green cell.
+[3031.000 --> 3032.000] What are they?
+[3032.000 --> 3042.000] What is the number that's going to be calculated from the data corresponding to that cell?
+[3042.000 --> 3046.000] The similarity of patterns with the same rectilinearity?
+[3046.000 --> 3047.000] Exactly.
+[3047.000 --> 3048.000] Exactly.
+[3048.000 --> 3052.000] So just as if you wanted to distinguish chairs from cars or something else.
+[3052.000 --> 3056.000] If you want to know, is there information about rectilinearity in there?
+[3056.000 --> 3060.000] You take these two cases, which are the same in rectilinearity.
+[3060.000 --> 3065.000] Both high rectilinear, or both low rectilinear, for run one and run two.
+[3065.000 --> 3069.000] And that cell is the correlation between run one and run two for those conditions.
+[3069.000 --> 3072.000] That's the within-rectilinearity case.
+[3072.000 --> 3073.000] Right.
+[3073.000 --> 3080.000] And if there's information about rectilinearity, the prediction is those within correlations are higher than the between correlations.
+[3080.000 --> 3085.000] Just as we argued a while back with beaches and cities and everything else.
+[3085.000 --> 3086.000] Same argument.
+[3086.000 --> 3090.000] This is just presenting the data in terms of run one and run two,
+[3090.000 --> 3095.000] and which cells we grab to do this computation.
+[3095.000 --> 3097.000] Okay.
+[3097.000 --> 3101.000] So each of the cells in there, for each of the cells,
+[3101.000 --> 3106.000] we're going to calculate an r value of how similar those patterns are.
+[3106.000 --> 3108.000] Okay.
+[3108.000 --> 3114.000] You know, a pattern for rectilinear scenes in run two,
+[3114.000 --> 3116.000] a pattern for rectilinear scenes in run one,
+[3116.000 --> 3120.000] this cell is the correlation between those two patterns.
+[3120.000 --> 3123.000] How stable is that pattern across repeated measures?
+[3123.000 --> 3125.000] Okay.
+[3125.000 --> 3126.000] All right.
+[3126.000 --> 3129.000] So that's what that r value is.
+[3129.000 --> 3135.000] The two darker blue squares here are the r values
+[3135.000 --> 3139.000] for stimuli that differ in rectilinearity.
+[3139.000 --> 3144.000] And remember that the essence of the Haxby-style pattern analysis
+[3144.000 --> 3151.000] is to see if the within correlations are higher than the between correlations.
+[3151.000 --> 3158.000] In this case, the within correlations are within rectilinearity, versus between rectilinearity.
+[3158.000 --> 3161.000] Okay.
+[3161.000 --> 3162.000] All right.
+[3162.000 --> 3167.000] And so then they calculate all those correlation differences
+[3167.000 --> 3172.000] and they plot them as discrimination abilities.
+[3172.000 --> 3178.000] And so what this is showing us here is that actually the PPA doesn't have any information
+[3178.000 --> 3183.000] in its pattern of response about the rectilinearity of the scene.
+[3183.000 --> 3189.000] However, if we take the same data and now choose within category versus between category,
+[3189.000 --> 3196.000] ignoring rectilinearity, and we compute the same kind of selectivity, the correlation difference
+[3196.000 --> 3201.000] within versus between for category, there's heaps of information about category.
+[3201.000 --> 3204.000] Does that make sense?
+[3204.000 --> 3205.000] Okay.
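Here's a minimal sketch of that Haxby-style within-minus-between pattern analysis (the function and data are invented for illustration; a real analysis runs over voxel patterns extracted from independent fMRI runs):

```python
import numpy as np

def pattern_info(run1, run2):
    """Within-minus-between pattern correlation for two conditions.

    run1, run2: dicts mapping condition name -> voxel pattern (1-D array),
    e.g. {"high_rect": ..., "low_rect": ...} from two independent runs.
    Positive values mean the region carries information about the
    distinction; values near zero mean it doesn't.
    """
    conds = list(run1)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    within = np.mean([r(run1[c], run2[c]) for c in conds])
    between = np.mean([r(run1[a], run2[b])
                       for a in conds for b in conds if a != b])
    return within - between

# Toy example: 50-voxel patterns with a stable difference between conditions.
rng = np.random.default_rng(1)
base = {c: rng.normal(size=50) for c in ("high_rect", "low_rect")}
noisy = lambda: {c: v + rng.normal(scale=0.5, size=50) for c, v in base.items()}
print(pattern_info(noisy(), noisy()))   # clearly > 0: information present
```

The same function answers the category question: feed it patterns keyed by category instead of rectilinearity, and a positive value means category information is present in the pattern.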
+[3205.000 --> 3208.000] Again, if you're fuzzy about this, look back at that slide.
+[3208.000 --> 3211.000] I have lots of suggestions for how to unfuzzle yourself on it.
+[3211.000 --> 3213.000] Okay.
+[3213.000 --> 3214.000] All right.
+[3214.000 --> 3218.000] So, interim summary: the PPA responds more to scenes than objects.
+[3218.000 --> 3222.000] It seems to like spatial layout in particular.
+[3222.000 --> 3230.000] It does respond more to boxy than to roundy shapes, but that rectilinearity bias can't account for its scene selectivity.
+[3230.000 --> 3232.000] That's all very nice.
+[3232.000 --> 3239.000] But what is a whole other kind of fundamental question we haven't yet asked about the PPA?
+[3239.000 --> 3243.000] So we've been messing around with functional MRI, measuring magnitudes of response,
+[3243.000 --> 3250.000] trying to test these kind of vague, you know, general hypotheses about what it might be responding to.
+[3250.000 --> 3251.000] Yes.
+[3251.000 --> 3252.000] Causation?
+[3252.000 --> 3253.000] Yes.
+[3253.000 --> 3257.000] What causal question in particular?
+[3257.000 --> 3266.000] I guess, like, whether the PPA is needed for place perception, whether without it a person can see places at all.
+[3266.000 --> 3267.000] Exactly.
+[3267.000 --> 3268.000] Exactly.
+[3268.000 --> 3271.000] Again, we can test the causal role of the stimulus on the PPA.
+[3271.000 --> 3272.000] I talked about that.
+[3272.000 --> 3273.000] Manipulate the stimulus.
+[3273.000 --> 3275.000] Find different PPA responses.
+[3275.000 --> 3285.000] But what we haven't done yet is ask: what is the causal relationship, if any, between activity in the PPA and perception of scenes, or navigation?
+[3285.000 --> 3286.000] Okay.
+[3286.000 --> 3288.000] So, so far this is all just suggestive.
+[3288.000 --> 3291.000] We have no causal evidence for its role in navigation.
+[3291.000 --> 3292.000] Right?
+[3292.000 --> 3293.000] Or perception.
+[3293.000 --> 3294.000] All right.
+[3294.000 --> 3295.000] So, let's get some.
+[3295.000 --> 3297.000] I'll show you a few examples.
+[3298.000 --> 3304.000] So, one, as you guys have learned by now, is these rare cases where there's direct electrical stimulation of a region.
+[3304.000 --> 3309.000] And there's one patient in whom this is reported.
+[3309.000 --> 3313.000] This patient, again, is being mapped out before neurosurgery.
+[3313.000 --> 3316.000] They did functional MRI in the patient first.
+[3316.000 --> 3320.000] This is his functional MRI response to, I think, houses versus objects.
+[3321.000 --> 3325.000] Houses are not as strong an activator as scenes for the PPA, but they're pretty good.
+[3325.000 --> 3328.000] The PPA responds much more to houses than to other objects.
+[3328.000 --> 3331.000] And so, that's a nice activation map showing the PPA.
+[3331.000 --> 3335.000] And those little circles are where the electrodes are, little black circles.
+[3335.000 --> 3336.000] Okay.
+[3336.000 --> 3342.000] So, they know they're in the PPA because they did functional MRI first to localize that region.
+[3342.000 --> 3344.000] Now those electrodes are sitting there.
+[3344.000 --> 3348.000] And so, the first thing we do is record, or the first thing they did is record responses.
+[3348.000 --> 3354.000] They flash up a bunch of different kinds of images and they measure the response in those electrodes.
+[3354.000 --> 3359.000] And so, what you see is, in those electrodes right over there, one, two, three, that correspond to the PPA,
+[3359.000 --> 3364.000] you see a higher response to house images than to any of the other images.
+[3364.000 --> 3368.000] And you see the time course here over a few seconds.
+[3368.000 --> 3371.000] Okay. Everybody clear? This is not causal evidence yet.
+[3371.000 --> 3375.000] It's just amazing direct intracranial recordings from the PPA.
+[3375.000 --> 3380.000] I think the only time this was ever done, because it's pretty rare to have the electrodes right there
+[3380.000 --> 3384.000] and a patient who's willing to look at your silly pictures and all of that.
+[3384.000 --> 3385.000] Right? Okay.
+[3385.000 --> 3389.000] But now, what happens when they stimulate there?
+[3389.000 --> 3394.000] Okay. So, let's look at what happens when they stimulate at these sites,
+[3394.000 --> 3399.000] four and three, that are off to the side of the scene selectivity.
+[3399.000 --> 3401.000] And this is just a dialogue.
+[3401.000 --> 3407.000] We don't have a video, unfortunately, the videos are more fun, but this is just a dialogue between the neurologist and the patient.
+[3407.000 --> 3412.000] And the neurologist electrically stimulates that region and says,
+[3412.000 --> 3414.000] did you see anything there?
+[3414.000 --> 3417.000] Patient says, I don't know. I started feeling something.
+[3417.000 --> 3421.000] I don't know. It's probably just me. Oh, no, it's not you.
+[3421.000 --> 3426.000] And then they stimulate again. Anything there? No. Anything here? No.
+[3426.000 --> 3430.000] Okay. So, that's right next to the site of the scene-selective electrodes, right next door.
+[3430.000 --> 3435.000] A few millimeters away. Then they move their stimulator over here.
+[3435.000 --> 3438.000] They don't move anything physically. They just control where they're going to stimulate.
+[3438.000 --> 3440.000] The patient, of course, has no idea.
+[3440.000 --> 3443.000] The neurologist says, anything here? Do you see anything?
+[3443.000 --> 3450.000] Feel anything? Patient says, yeah. He looks perplexed, puts his hand to his forehead.
+[3450.000 --> 3454.000] I feel like I saw some other site.
+[3454.000 --> 3456.000] We were at the train station.
+[3457.000 --> 3461.000] The neurologist cleverly says, so it feels like you're at a train station.
+[3461.000 --> 3465.000] Patient says, yeah. Outside the train station.
+[3465.000 --> 3469.000] Neurologist: let me know if you get any sensation like that again.
+[3469.000 --> 3474.000] Stimulates. Do you feel anything here? No.
+[3474.000 --> 3479.000] And they do it again. Do you see the train station, or did it,
+[3479.000 --> 3484.000] oh, did you see the train station or did it feel like you were at the train station?
+[3484.000 --> 3487.000] Patient: I saw it.
+[3487.000 --> 3491.000] These are very sparse, precious data, but that's so telling.
+[3491.000 --> 3494.000] It's not that he knew he was at the train station abstractly.
+[3494.000 --> 3497.000] He saw it.
+[3497.000 --> 3502.000] So then they stimulate again, right on those scene-selective sites.
+[3502.000 --> 3507.000] Patient says, again, I saw almost like, I don't know, like I saw.
+[3507.000 --> 3510.000] It was very brief. The neurologist says, I'm going to show it to you one more time.
+[3510.000 --> 3513.000] Really, what he means is, I'm going to stimulate you in the same place one more time.
+[3513.000 --> 3516.000] See if you can describe it any further.
+[3516.000 --> 3522.000] I'm going to do it one last time. What do you think?
+[3522.000 --> 3528.000] I don't really know what to make of it, but I saw like another staircase.
+[3528.000 --> 3532.000] The rest I couldn't make out, but I saw a closet space.
+[3532.000 --> 3536.000] But not this one. He points to a closet door in the room.
+[3536.000 --> 3539.000] That one was stuffed and it was blue.
+[3539.000 --> 3541.000] Have you seen it before? That's the neurologist.
+[3541.000 --> 3544.000] Have you seen it before at some point in your life?
+[3544.000 --> 3547.000] Yeah, I mean, when I saw the train station.
+[3547.000 --> 3550.000] A train station you've been at.
+[3550.000 --> 3552.000] Yeah, et cetera, et cetera.
+[3552.000 --> 3555.000] So it's not a lot of data, but it's very compelling.
+[3555.000 --> 3557.000] What is the patient describing?
+[3557.000 --> 3563.000] Places that he's in, that he sees, and then he describes this closet space.
+[3563.000 --> 3566.000] And its colors. Interestingly, color regions are right next to scene regions.
+[3566.000 --> 3569.000] So that's kind of cool too.
+[3569.000 --> 3572.000] So it's causal evidence. It's sparse.
+[3572.000 --> 3576.000] Ideally, we'd like more in science, but it's pretty cool.
+[3576.000 --> 3579.000] And what was the patient looking at during all of this?
+[3579.000 --> 3582.000] You know, I actually forget from the paper. I've got to go look that up.
+[3582.000 --> 3584.000] I forget exactly what the patient was doing.
+[3584.000 --> 3587.000] I think he's just in the room looking out.
+[3587.000 --> 3590.000] Usually they don't control it that much, because it's done for clinical reasons.
+[3590.000 --> 3593.000] And the patient is in their hospital bed and they're just stimulating.
+[3593.000 --> 3595.000] So he's probably just looking out at the space he's in.
+[3595.000 --> 3600.000] He must have been, because at one point he says the closet, not like that one over there.
+[3600.000 --> 3606.000] So he wasn't staring at a blank screen; he was looking out at his room.
+[3606.000 --> 3609.000] Okay, so yeah.
+[3609.000 --> 3615.000] So the region for color perception is very close to this.
+[3615.000 --> 3621.000] Is there any relationship between, like, functional proximity and...?
+[3621.000 --> 3625.000] That's a great question. Nobody in the field has an answer to this.
+[3625.000 --> 3629.000] People often make hay about the proximity of two regions, like,
+[3629.000 --> 3633.000] oh, there's some deep link because this thing is next to that thing.
+[3633.000 --> 3636.000] You know, the body-selective region is right next to,
+[3636.000 --> 3640.000] in fact slightly overlapping with, area MT that responds to motion.
+[3640.000 --> 3642.000] It's like, ooh, bodies move.
+[3642.000 --> 3645.000] Well, you know, faces move and cars move too.
+[3645.000 --> 3647.000] Like, I don't know. It's tantalizing.
+[3647.000 --> 3651.000] It feels like it ought to mean something, and people often talk about it, you know,
+[3651.000 --> 3658.000] talk as if it does, and maybe it does, but nobody's really put their finger on what exactly it would mean.
+[3658.000 --> 3660.000] But it's useful, right?
+[3660.000 --> 3664.000] So when Rosa Lafer-Sousa, who you met in the color demo,
+[3664.000 --> 3671.000] and I showed that in humans, you get face, color, and place regions right next to each other, in that order,
+[3671.000 --> 3675.000] that was really cool, because Rosa had previously shown that in monkeys.
+[3675.000 --> 3679.000] In the monkey brain, it goes face, color, place, in exactly the same order.
+[3679.000 --> 3682.000] And so we thought, okay, that's really interesting.
+[3682.000 --> 3685.000] That suggests common inheritance, because that's so weird and arbitrary.
+[3685.000 --> 3686.000] Why would it be the same?
+[3686.000 --> 3690.000] So it can be useful in ways like that, at least.
+[3690.000 --> 3693.000] Okay. So we just went through all of this.
+[3693.000 --> 3698.000] So how does this go beyond what we knew from functional MRI?
+[3698.000 --> 3701.000] I'm insulting your intelligence. You know the answer to this.
+[3701.000 --> 3706.000] It goes beyond it because it implies that there's a causal role of that region in place perception,
+[3706.000 --> 3709.000] some aspect of seeing a place.
+[3709.000 --> 3715.000] Okay. Now, all of this has been about the PPA. I just started there because it's nice and concrete and easy to think about.
+[3715.000 --> 3719.000] But no complex mental process happens in just one brain region.
+[3719.000 --> 3721.000] Nothing is ever like that.
+[3721.000 --> 3727.000] And likewise, scene perception and navigation involve a much broader set of regions.
+[3727.000 --> 3731.000] So if you do a contrast, scan people looking at scenes versus objects,
+[3731.000 --> 3734.000] you see not just the PPA in here.
+[3734.000 --> 3738.000] Again, this is a folded-up brain and this is the mathematically unfolded version,
+[3738.000 --> 3740.000] so you can see the whole cortex.
+[3740.000 --> 3744.000] Dark bits are the bits that used to be inside a sulcus until it was mathematically unfolded.
+[3744.000 --> 3747.000] So there's the PPA kind of hiding up in that sulcus.
+[3747.000 --> 3750.000] And when you unfold it, you see this nice big huge region.
+[3750.000 --> 3753.000] Okay. But you also see all these other regions.
+[3753.000 --> 3755.000] Okay. Now there's a bunch of terminology.
+[3755.000 --> 3758.000] And don't panic. I don't think you should memorize everything about each region.
+[3758.000 --> 3760.000] You should know that there are multiple scene regions.
+[3760.000 --> 3767.000] You should know some of the kinds of ways you tease apart the functions, and some of the functions that have been tested and how they're tested.
+[3767.000 --> 3770.000] But you don't need to memorize every last detail.
+[3770.000 --> 3771.000] Okay.
+[3771.000 --> 3773.000] Because it's going to get a little hairy.
+[3773.000 --> 3780.000] Okay. So here's a second scene region right there, called retrosplenial cortex, or RSC.
+[3780.000 --> 3787.000] And actually, Russell Epstein and I saw that activation in the very, very first experiments we did in the 1990s.
+[3787.000 --> 3790.000] But we really didn't know what we were doing back then.
+[3790.000 --> 3793.000] And we knew that this is right near the calcarine sulcus.
+[3793.000 --> 3796.000] Remind me. What happens in the calcarine sulcus?
+[3796.000 --> 3800.000] What functional region lives in the calcarine sulcus?
+[3800.000 --> 3806.000] It's just a weird little fact, but it's kind of an important one.
+[3806.000 --> 3809.000] That we mentioned weeks ago.
+[3809.000 --> 3812.000] V1, primary visual cortex.
+[3812.000 --> 3815.000] That's where primary visual cortex lives.
+[3815.000 --> 3824.000] And remember, primary visual cortex has a map of retinotopic space, with next-door bits of primary visual cortex responding to next-door bits of space.
+[3824.000 --> 3830.000] And in fact, that map has the center of gaze out here and the periphery out there.
+[3830.000 --> 3836.000] So when Russell and I first saw that activation, we had the same worry that Quilly mentioned a while back.
+[3836.000 --> 3838.000] And that is that the scenes stick way out into the periphery.
+[3838.000 --> 3841.000] There's stuff everywhere; with the objects, there isn't that much sticking out.
+[3841.000 --> 3844.000] And we thought, oh, that's just peripheral retinotopic cortex.
+[3844.000 --> 3846.000] But it's not. It's right next to there.
+[3846.000 --> 3847.000] And it's a totally different thing.
+[3847.000 --> 3849.000] And it turns out to be extremely interesting.
+[3849.000 --> 3853.000] You don't need to know all that. It's just a little history.
+[3853.000 --> 3858.000] Okay. There's a third region up there that's on the outer surface, out there.
+[3858.000 --> 3860.000] It used to be called TOS,
+[3860.000 --> 3862.000] and is now called OPA. I'm sorry about that.
+[3862.000 --> 3863.000] You don't need to remember this,
+[3863.000 --> 3866.000] just that there are at least three regions.
+[3866.000 --> 3874.000] But TOS slash OPA is interesting because there's a method we can apply to it that we can't apply to the others.
+[3874.000 --> 3876.000] What would that method be?
+[3876.000 --> 3882.000] Yeah, TMS. It's right out on the surface.
+[3882.000 --> 3884.000] You just stick the coil there and go zap.
+[3884.000 --> 3887.000] So of course, we've done a lot of that.
+[3887.000 --> 3891.000] Okay. Can't get the coil to the PPA or RSC. They're too medial.
+[3891.000 --> 3896.000] Okay. And there's another region that we'll talk about more next time, called the hippocampus.
+[3896.000 --> 3903.000] You saw the hippocampus when Ann Graybiel spent all that time digging in the temporal lobe to find that bumpy little dentate gyrus.
+[3903.000 --> 3905.000] Approximately right in there.
+[3905.000 --> 3908.000] And so all of these, and probably other regions.
+[3908.000 --> 3914.000] But these are the core elements of the scene-selective regions that are implicated in different aspects of navigation.
+[3914.000 --> 3919.000] Okay. So when you have multiple regions that seem to be part of a system,
+[3919.000 --> 3921.000] that's an opportunity.
+[3921.000 --> 3926.000] Because now we have the possibility that maybe we could figure out different functions for different regions.
+[3926.000 --> 3930.000] And then maybe that would really tell us more than just, okay, scenes and navigation, end of story.
+[3930.000 --> 3932.000] It gives us a route of entry.
+[3932.000 --> 3938.000] Right? It would be nice if different aspects of the navigation story engage different parts of this system.
+[3938.000 --> 3944.000] Okay. So really what we want to know is how each of these regions helps us navigate and see scenes.
+[3944.000 --> 3949.000] And I'm not going to answer that fully. The field is still trying to understand all of this.
+[3949.000 --> 3953.000] But I'll give you a few tantalizing little snippets. Okay.
+[3953.000 --> 3957.000] So let's take retrosplenial cortex right here.
+[3957.000 --> 3966.000] So this is first the response of the PPA right there, and retrosplenial cortex, which is just behind it.
+[3966.000 --> 3970.000] This is just its mean response to a bunch of different kinds of stimuli,
+[3970.000 --> 3976.000] showing you that it likes landscapes and cityscapes, scenes, more than a bunch of other categories of objects.
+[3976.000 --> 3979.000] And that's true of both the PPA and RSC.
+[3979.000 --> 3983.000] Okay. No surprises here. They're both somewhat scene-selective.
+[3983.000 --> 3984.000] Okay.
+[3984.000 --> 3991.000] But then in a whole bunch of other studies, summarized in this graph here, Russell Epstein and his colleagues
+[3991.000 --> 3995.000] had subjects engage in different tasks while they were looking at scenes.
+[3995.000 --> 3998.000] In some tasks, they had to say where they were.
+[3998.000 --> 4002.000] He's at UPenn, and he showed his subjects pictures of the UPenn campus.
+[4002.000 --> 4009.000] And they had to answer all kinds of questions about where they were on campus,
+[4009.000 --> 4014.000] and also about which way they were facing, given the view of the campus they were looking at.
+[4014.000 --> 4022.000] Okay. Then he also showed people familiar scenes and unfamiliar scenes, much like we did with our Tufts study.
+[4022.000 --> 4024.000] And he had object controls.
+[4024.000 --> 4027.000] And you can see the PPA doesn't care about any of that.
+[4027.000 --> 4032.000] Doesn't really care if they're familiar or unfamiliar. Doesn't care what task you're doing on the scene.
+[4032.000 --> 4035.000] You're looking at a scene. It's just going.
+[4035.000 --> 4038.000] Okay. So we didn't really tease apart functions there.
+[4038.000 --> 4043.000] But RSC responds differently in these conditions.
+[4043.000 --> 4051.000] It's more engaged in both the location task and the orientation task.
+[4051.000 --> 4059.000] It responds substantially more when you look at images of a familiar place than an unfamiliar place.
+[4059.000 --> 4062.000] So this is the first time we've seen that in the scene network.
+[4062.000 --> 4068.000] And so now, think about all the things you can do when you're looking at a picture of a scene and you know that place.
+[4068.000 --> 4071.000] You have memories of having been there.
+[4071.000 --> 4076.000] You can think about what you might do if you were there, how you would get from there to someplace else.
+[4076.000 --> 4082.000] And all of those things are possible things that might be driving RSC.
+[4082.000 --> 4089.000] Another thing that might be driving RSC is that if you're looking at a picture of a familiar place,
+[4089.000 --> 4094.000] you orient yourself with respect to the broader environment that that view is part of.
+[4094.000 --> 4097.000] Right? So when I showed you that picture of the front of the Stata Center,
+[4097.000 --> 4103.000] you immediately imagine, oh, I'm out on Vassar Street facing that way, roughly northwest, I think.
+[4104.000 --> 4107.000] If you look at a picture of a scene and you don't know that scene,
+[4107.000 --> 4111.000] it doesn't tell you anything about your heading in the broader world.
+[4111.000 --> 4118.000] So all of those are candidate functions of RSC, and they all seem to depend on knowing that place.
+[4118.000 --> 4120.000] Okay.
+[4120.000 --> 4127.000] Perhaps the most telling case comes from a patient who had damage in retrosplenial cortex.
+[4127.000 --> 4134.000] And the description in the paper says that this patient could recognize buildings and landmarks,
+[4134.000 --> 4137.000] and therefore understand where he was.
+[4137.000 --> 4139.000] Okay. So lots is intact.
+[4139.000 --> 4143.000] He can recognize scenes and know where he is.
+[4143.000 --> 4151.000] But the landmarks he recognized did not provoke directional information about any other places with respect to those landmarks.
+[4151.000 --> 4153.000] Okay.
+[4153.000 --> 4157.000] So this person can look at a picture and say, yeah, I know that place.
+[4157.000 --> 4159.000] That's the front of my house.
+[4159.000 --> 4167.000] But then if you say, in which direction is the coffee shop two blocks away, he doesn't know which way it is from there.
+[4167.000 --> 4172.000] Okay. So this should sound familiar.
+[4172.000 --> 4179.000] This is my guess about the bit that my friend Bob got messed up.
+[4179.000 --> 4182.000] This is exactly his description. He could recognize places,
+[4182.000 --> 4186.000] but it wouldn't tell him how to get from there to somewhere else.
+[4186.000 --> 4187.000] Okay.
+[4187.000 --> 4194.000] And so the best current guess about retrosplenial cortex is that it's involved in anchoring where you are.
+[4194.000 --> 4199.000] You have this mental map of the world, and you have a scene, and you're trying to put them together.
+[4199.000 --> 4204.000] Given that I see this, where am I on the map, and which way am I heading in that map?
+[4204.000 --> 4205.000] Okay.
+[4205.000 --> 4209.000] Again, think about the problem you faced when you emerged from the subway in Manhattan.
+[4209.000 --> 4213.000] Right? You look around: where am I, and which way am I heading?
+[4213.000 --> 4216.000] That's what you need retrosplenial cortex for.
+[4216.000 --> 4218.000] Okay.
+[4218.000 --> 4221.000] All right. How about this TOS thing?
+[4221.000 --> 4223.000] There are lots of studies of it.
+[4223.000 --> 4226.000] I'll give you just one little offering.
+[4226.000 --> 4233.000] Okay. So this is a causal investigation, because as we discussed, TOS is out on the lateral surface.
+[4233.000 --> 4236.000] So we can zap it. And so of course we do.
+[4236.000 --> 4246.000] And so in this study, we were asking whether TOS is involved in perceiving the structure of space around you.
+[4246.000 --> 4251.000] So we took scenes like this from CAD programs, and we just varied them slightly.
+[4251.000 --> 4261.000] So for example, the position of this wall moves around, the aspect ratio and the height of the ceiling move around, and we make this subtle morph space of different versions of this image.
+[4261.000 --> 4265.000] Okay. And then for a control condition, we do the same with faces.
+[4265.000 --> 4269.000] We morph between this guy and that guy and make a whole spectrum in between.
+[4269.000 --> 4273.000] And then in the task, what we do is, here's one trial.
+[4273.000 --> 4280.000] One of the scenes or faces comes on briefly, and then shortly thereafter you get a choice of two.
+[4280.000 --> 4283.000] And you have to say which of these matches that one.
+[4283.000 --> 4289.000] Okay. And then what we do is we zap people right after we present the stimulus.
+[4290.000 --> 4295.000] Okay. And so the idea is this is as close as we can get to a pretty pure perceptual task.
+[4295.000 --> 4300.000] How well can you see the shape of that environment, or the shape of that face?
+[4300.000 --> 4303.000] Okay.
You don't have to remember it for more than a few hundred milliseconds.
+[4303.000 --> 4306.000] So it's really more of a perception task than a memory task.
+[4306.000 --> 4318.000] Okay. And what we measure is, we actually muck with how different these two images are in each trial, and measure how far apart they have to be in morph space
+[4318.000 --> 4320.000] for you to be about 75% correct.
+[4320.000 --> 4323.000] Okay. That's a standard psychophysical measure.
+[4323.000 --> 4333.000] The details don't matter, but our dependent measure is how different the stimuli have to be for you to discriminate them, as a function of whether you're getting zapped in TOS or not.
+[4333.000 --> 4337.000] Okay. And so here are the data.
+[4337.000 --> 4341.000] So let's take the case where you're doing the scene task here.
+[4341.000 --> 4346.000] What this threshold is, again, is how different the stimuli need to be for you to discriminate them.
+[4346.000 --> 4349.000] So the higher the bar, the worse the performance.
+[4349.000 --> 4352.000] Okay. They have to be really different; otherwise you can't tell them apart.
+[4352.000 --> 4362.000] And so what you see is, when you zap OPA, that lateral scene-selective region, the discrimination threshold goes up a bit.
+[4362.000 --> 4364.000] That means you get worse at the discrimination.
+[4364.000 --> 4369.000] The stimuli need to be more different, compared to zapping the top of your head.
+[4369.000 --> 4375.000] Okay. Remember, you always want a control condition, and there's no perfect control condition here, because it feels different to be zapped in different places.
+[4375.000 --> 4381.000] But getting zapped up here is a, you know, better-than-nothing control.
+[4381.000 --> 4383.000] And then here's the occipital face area.
+[4383.000 --> 4387.000] That's the lateral face region we talked about before, when I showed you another TMS study.
+[4387.000 --> 4392.000] Basically, whenever there's anything lateral, we zap it, because we can.
+[4392.000 --> 4399.000] And see, it's not affected here. Zapping the occipital face area does not mess up your ability to discriminate the scenes.
+[4399.000 --> 4404.000] However, in the face task, we see the opposite pattern.
+[4404.000 --> 4414.000] For the face task, zapping the occipital place area doesn't do anything compared to zapping the top of your head, but zapping the face area does.
+[4414.000 --> 4417.000] This is a double dissociation.
+[4417.000 --> 4424.000] If we just had the scene task, you'd be like, yeah, maybe, who knows?
+[4424.000 --> 4428.000] Maybe, who knows why? It's not very strong.
+[4428.000 --> 4437.000] But when you have these opposite effects, then we really have much stronger evidence that these two regions have different functions from each other.
+[4437.000 --> 4449.000] And everybody get that this is a double dissociation in the same sense as when you have one patient with damage in one location and another patient with damage in another location, and they have opposite patterns of deficit; then we're really kind of in business.
+[4449.000 --> 4453.000] Then we can draw strong inferences.
+[4453.000 --> 4455.000] All right, so we just said all of that.
+[4455.000 --> 4458.000] Okay, so that's just a little snippet.
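For anyone who wants the threshold measure pinned down, here's a minimal sketch (fabricated accuracy data; the published studies used their own psychophysical procedures): fit a psychometric function to accuracy as a function of morph distance, then read off the distance that gives 75% correct. A higher threshold after zapping OPA means worse scene discrimination.

```python
import numpy as np
from scipy.optimize import curve_fit

# Accuracy as a logistic function of morph distance, floored at chance
# (50%) for a two-alternative choice.
def pf(x, midpoint, slope):
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - midpoint)))

# Made-up data: morph distance between the two choices, and the
# proportion correct observed at each distance.
distance = np.array([5, 10, 15, 20, 25, 30, 40], dtype=float)
p_correct = np.array([0.52, 0.58, 0.66, 0.78, 0.86, 0.93, 0.98])

(midpoint, slope), _ = curve_fit(pf, distance, p_correct, p0=[20, 0.2])

# The threshold is the morph distance giving 75% correct.
grid = np.linspace(distance.min(), distance.max(), 1000)
threshold = grid[np.argmin(np.abs(pf(grid, midpoint, slope) - 0.75))]
print(f"75%-correct threshold: {threshold:.1f} morph units")
```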
+[4458.000 --> 4472.000] And other data suggest that that region is strongly active when you look at scenes, and it seems to be involved in something like directly, online, perceiving the structure of the space in front of you.
+[4472.000 --> 4475.000] Okay.
+[4475.000 --> 4476.000] All right.
+[4476.000 --> 4482.000] So, yeah, we already did retrosplenial cortex.
+[4482.000 --> 4489.000] And next time we'll talk about the hippocampus in there and its role in the whole navigation thing.
+[4489.000 --> 4502.000] Now, since I have ended early, a rare event, I actually put together a whole other piece of this lecture, figuring we might not get to it; I always have a part we don't get to.
+[4502.000 --> 4507.000] So it turns out, we do get to it.
+[4507.000 --> 4513.000] Okay, we're going to go over this more later, but we're going to start with this business right here.
+[4513.000 --> 4516.000] Anybody have questions about this stuff so far?
+[4516.000 --> 4517.000] Okay.
+[4517.000 --> 4527.000] So I've spent a lot of time talking about multiple voxel pattern analysis, because it's the only method I've mentioned so far that enables us to go beyond the business of saying
+[4527.000 --> 4534.000] how strongly the neurons fire in this region, to the more interesting question of what information is contained in this region.
+[4534.000 --> 4535.000] Okay.
+[4535.000 --> 4543.000] But I also ended the last lecture on this kind of depressing note, that you can't see much with MVPA applied to face patches,
+[4543.000 --> 4547.000] even when we know from electrophysiology that there's information in there.
+[4547.000 --> 4554.000] Remember, I showed you that monkey study where they tried MVPA on the face patches in monkeys and they couldn't read out a damn thing.
+[4554.000 --> 4561.000] And then they tried the same pattern analysis on individual neural responses from the same region, and they could read out all kinds of information.
+[4561.000 --> 4566.000] And that tells you that the information is there, and we just can't always see it with MVPA.
+[4566.000 --> 4573.000] Now, today you've seen cases where you can see stuff with MVPA in the scene regions, so sometimes it works, sometimes it doesn't.
+[4573.000 --> 4579.000] And when it doesn't work, we're left in this unsatisfying situation where we don't know if the information isn't there,
+[4579.000 --> 4586.000] or if the neurons are just so scrambled together that we just can't see the different patterns.
+[4586.000 --> 4592.000] Okay. So, bottom line, we need another method. MVPA is a whole lot better than nothing.
+[4592.000 --> 4601.000] But we want to be able to ask: is there information present in this region, even when we think the relevant neurons are all spatially intermingled?
+[4601.000 --> 4605.000] Okay. So let me just do a little bit of this and then we'll continue later.
+[4606.000 --> 4612.000] So, goal: this new method is called event-related functional MRI adaptation.
+[4612.000 --> 4621.000] And we use it when we want to know if neural populations in a particular region can discriminate between two stimulus classes.
+[4621.000 --> 4629.000] So for example, do neurons in the FFA distinguish between this image and that image?
+[4630.000 --> 4642.000] If we want to know that, we could measure the functional MRI response in the FFA (this would be an event-related response) and find similar responses to the two.
+[4642.000 --> 4649.000] Okay.
And as I just mentioned, that wouldn't mean that there isn't information in the FFA that discriminates them.
+[4649.000 --> 4655.000] It just says they have the same mean response. Everybody get that? Okay.
+[4655.000 --> 4672.000] Now, if we zoom in and think about what the neurons might be doing, it's still possible, even with the same mean response, that the neurons could be organized like this, with some of them responding only to this image and some of them responding only to that image.
+[4672.000 --> 4680.000] But it's also possible that all of the neurons respond equally to both. And we kind of desperately need to know.
+[4680.000 --> 4689.000] So which situation are we in? This is a toy example, obviously. But often, when we're trying to understand a region of the brain, we need to know which situation we're in.
+[4689.000 --> 4698.000] Okay. So this neural population can discriminate these two, and that one can't. Okay. How are we going to tell which is true?
+[4698.000 --> 4709.000] Well, we talked before about multiple voxel pattern analysis. But as I just said, it only works when the neurons are spatially clustered on the scale of voxels.
+[4709.000 --> 4717.000] So, imagine you have these situations here. This is getting more and more of a toy example, but just to give you the idea.
+[4717.000 --> 4730.000] Suppose where those neural populations land with respect to voxels is like this. So each of these is a voxel in the brain, a little, say, two-by-two-by-three-millimeter chunk of brain that we're getting an MRI signal from.
+[4730.000 --> 4739.000] If you have the different neural populations spatially segregated enough that they mostly land in different voxels, then MVPA might work here.
+[4739.000 --> 4746.000] Is that intuitive? You can all see that we'd get a different pattern across these voxels for those two different images.
+[4747.000 --> 4761.000] But even if we have this situation here, which is informationally the same, if the neurons are spatially scrambled so that they're in roughly equal proportion in each voxel, MVPA won't work.
+[4761.000 --> 4773.000] Does that make sense? And so that's when we need this other method called functional MRI adaptation. Make sense? Okay. I'm going to go one minute over, probably.
+[4773.000 --> 4783.000] Okay, so the point of functional MRI adaptation is that it can work even when there's no spatial clustering of the relevant neural populations on the scale of voxels.
+[4783.000 --> 4798.000] So let me go through it quickly. We'll come back to it later. So here's how it goes. The basic idea is that any measure that's sensitive to the sameness versus difference of two stimuli can reveal what that system takes to be same or different.
+[4798.000 --> 4811.000] So for example, if a brain region discriminates between two similar stimuli like these, we can measure the functional MRI response in that region to same versus different trials.
+[4811.000 --> 4818.000] Okay, so this would be a different trial: you present Trump and then the chimp, back to back. That's one trial.
+[4818.000 --> 4830.000] Compare that to a same trial: chimp and then chimp. And of course we counterbalance everything. So we also do chimp and then Trump as another different case, and Trump and then Trump as another same case.
+[4830.000 --> 4846.000] Right? If we find that the neural response is higher when the two stimuli are different than when they're the same, then we know that that region has neurons that respond differentially to the two.
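Before moving on, here's a toy simulation of that voxel argument (everything invented): the neuron-level information is identical in the two cases, but the voxel-level pattern difference, which is all MVPA can see, survives only when the two populations are spatially segregated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_voxels = 1000, 10

# Each neuron prefers image A or image B, firing 1 to its preferred
# image and 0 otherwise. The information exists at the neural level.
prefers_A = np.arange(n_neurons) < n_neurons // 2
resp_A = prefers_A.astype(float)
resp_B = 1.0 - resp_A

def voxel_patterns(assignment):
    """Sum neural responses within each voxel: one pattern per image."""
    pat = lambda resp: np.bincount(assignment, weights=resp,
                                   minlength=n_voxels)
    return pat(resp_A), pat(resp_B)

# Segregated: A-preferring neurons fill the low-numbered voxels, B the rest.
segregated = np.sort(rng.integers(n_voxels, size=n_neurons))
# Scrambled: the same neurons assigned to voxels at random.
scrambled = rng.permutation(segregated)

for name, assign in [("segregated", segregated), ("scrambled", scrambled)]:
    a, b = voxel_patterns(assign)
    print(name, "voxel pattern difference:", np.abs(a - b).sum())
```

Running this prints a large pattern difference for the segregated layout and a near-zero one for the scrambled layout: same neurons, same information, but MVPA only has a chance in the first case.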
+[4847.000 --> 4854.000] Okay, so remember, we started with a case where the mean response is the same to this image and that image if you just measure each alone.
+[4854.000 --> 4867.000] But now we want to know: are there really neurons that respond differentially? So we're using the fact that neurons are like people and muscles. If you keep doing the same thing to them, they get bored. Been there, done that.
+[4867.000 --> 4882.000] Okay, so if you present this back to back, you get a lower response than if you present this and then this. Okay, that's called functional MRI adaptation. It's like that waterfall adaptation in MT we talked about before, just crammed into a finer timescale.
+[4882.000 --> 4893.000] Okay, and so then if you do that, you can ask what a region thinks is the same. Okay, so then we could ask, okay, what about these two images?
+[4893.000 --> 4910.000] Does it think those are the same? And if we find a response like that, what have we learned? What have we learned about a region that shows this? This is all fake data, obviously, but if we saw that, what have we learned? And then I'll let you go.
+[4910.000 --> 4917.000] Somebody's going to give a nice answer to this. Yeah.
+[4917.000 --> 4932.000] So if the response is the same for two different pictures of the same person, that means the region is treating them as the same thing. Like, it can discriminate, but if it adapts to the same degree across the two pictures, it's responding to them as the same.
+[4932.000 --> 4949.000] That's probably right. And the key point, just because I don't want to torture you guys and go way over, the key point is that the "same" response is the lower response. We can tell because in this case we actually gave it a same pair. So same is lower than different. That's just how this method works.
+[4949.000 --> 4963.000] Then we're basically asking: does this count as the same to this brain region? And we're finding, yes, it does. That tells us that those neurons are invariant to all kinds of things: viewpoint, facial expression,
+[4963.000 --> 4969.000] you know, when he last dyed his hair, who the hell knows, all these other things, right?
+[4969.000 --> 4981.000] So we'll talk more about this, but the idea is that now we have another method, in addition to MVPA, that can start to tell us what neurons are actually discriminating. Okay, sorry to go over.
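And to close the loop on the adaptation logic, a minimal sketch (fabricated trial responses; "adaptation index" is just my label for the different-minus-same difference): a clearly positive value, release from adaptation, means some neurons there discriminate the two stimuli; a near-zero value for a cross-viewpoint pair is the signature of invariance.

```python
import numpy as np

def adaptation_index(resp_different, resp_same):
    """Release from adaptation: mean response to different-stimulus pairs
    minus mean response to same-stimulus pairs. Positive means the
    underlying neurons discriminate the two stimuli, even if the mean
    responses to each stimulus alone are identical."""
    return np.mean(resp_different) - np.mean(resp_same)

rng = np.random.default_rng(3)
noise = lambda n: rng.normal(scale=0.1, size=n)

# Fabricated FFA-style responses (arbitrary units) to back-to-back pairs.
same_identity = 1.0 + noise(40)    # chimp then chimp: adapted, low
diff_identity = 1.5 + noise(40)    # Trump then chimp: released, high
same_new_view = 1.05 + noise(40)   # same face, new viewpoint

# Clearly positive: the region discriminates the two identities.
print(adaptation_index(diff_identity, same_identity))
# Near zero: it treats the new viewpoint as "same", i.e. view-invariant.
print(adaptation_index(same_new_view, same_identity))
```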